
Net Promoter And Lessons For Academic Research

I value academic work that speaks to the issues of managers and others outside of academia. The Net Promoter Score/System (NPS) is widely used by managers and so it can be valuable when academics look into this metric. What then can we say about Net Promoter and the lessons for academic research?

Reviewing The Literature

I worked with Charan Bagga, now at Calgary, and Alina Nastasoiu, now at Deliveroo, to review the academic literature. We wanted to see what we could learn from academics' examination of NPS. Disappointingly, there isn't much. With notable exceptions, even the work that has been done isn't great. This is probably largely a function of incentives. It is hard to get good data to test NPS, so it is much easier for academics to focus on what they can readily test rather than what might be useful for managers.

Supportive evidence for the use of NPS is available in the literature. That said, it isn't especially compelling, and major criticisms have often been ducked by NPS' advocates. As such, my reading is that those academics who know about NPS generally aren't excited by it. Despite this, one challenge we saw is that some academics seek to connect with managers by mentioning NPS without ever digging into it. This risks implicitly endorsing its claims. We would argue it is better not to repeat claims one hasn't properly evaluated.

Pick An Outcome Variable, Any Outcome Variable

The biggest concern the literature review highlighted was the problem of unconstrained dependent variable choice. Often the academic simply follows the dictates of circumstance, testing whatever happens to be in the data.

Things are much worse if the academic has a choice. In essence, if you don't decide what you are hoping to show before starting a test, you are very likely to find something you can, post hoc, decide to highlight. When you don't mind what you connect a metric to, things are easier: try NPS against profits, then revenue, then sales volume, then awareness, and so on. With enough effort the research is bound to find a positive relationship between the metric and something.
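As a rough illustration of why this is a problem, here is a minimal simulation sketch (entirely hypothetical data, not drawn from any of the studies discussed): even when a metric predicts nothing at all, testing it against enough candidate outcomes will usually turn up a "significant" relationship somewhere.

```python
# Illustrative sketch with made-up data: a metric that truly predicts nothing
# will still "significantly" predict something if you test enough outcomes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_firms = 50
n_outcomes = 20  # profits, revenue, sales volume, awareness, ...

# A metric with no true relationship to any outcome
metric = rng.normal(size=n_firms)

significant_hits = 0
for _ in range(n_outcomes):
    # Each outcome is generated independently of the metric
    outcome = rng.normal(size=n_firms)
    r, p = stats.pearsonr(metric, outcome)
    if p < 0.05:
        significant_hits += 1

print(f"'Significant' relationships found by chance: {significant_hits} of {n_outcomes}")
# Roughly 1 in 20 tests will clear p < 0.05 even though no relationship exists.
```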

Terrible Academic Motto

Such casualness about metrics is particularly frustrating when research is said to contradict prior findings. For example, a paper (by De Haan and colleagues) purported to contradict the earlier research of Keiningham and his team, which criticized NPS. The newer paper said it found that NPS is an effective predictor of customer retention. This is infuriating as it inappropriately gives the impression of academic confusion.

Keiningham and his colleagues were testing NPS against revenue growth (Keiningham, Cooil, Andreassen, et al. 2007) to investigate Reichheld’s original claims (Reichheld 2003). De Haan and his colleagues consider NPS’ impact on retention and do not explain why they think retention and revenue growth are the same metric. The results of the two studies cannot be contrasted as the DVs are different.

Bendle, Bagga, and Nastasoiu, 2019

Tests of different relationships can't really contradict each other; they are just testing different things. We don't expect a kid's ability to score penalties in soccer to mirror their ability to write computer code. If the results differ, it doesn't mean much beyond the fact that they are different tests.

Net Promoter And Lessons For Academic Research

NPS is a popular metric, and more research on its uses and misuses would help academics communicate with managers. Such research can only be useful, though, if academics are careful about the messages they send.

For more on NPS see here.

Read: Neil Bendle, Charan Bagga, and Alina Nastasoiu (2019) Forging a Stronger Academic-Practitioner Partnership – The Case Of Net Promoter Score (NPS), Journal of Marketing Theory and Practice, 27 (2), https://www.tandfonline.com/doi/full/10.1080/10696679.2019.1577689
