Predicting Return Behavior and Sales with CX Ratings or NPS

Tom H. C. Anderson
January 12th, 2018

A Customer Experience Case Study Using OdinText’s Text and Predictive Analytics (Predicting Actual Return Behavior and Sales with CX Ratings or NPS)

We were honored today to have one of our case studies featured by Greenbook. Though we have several other cases like it, it remains one of our favorite uses of Customer Satisfaction/Customer Experience data (whether NPS or any other rating scale is used). The final analysis involved close to a million customers over a two-year period.

In the case study, which features Jiffy Lube, we found that, contrary to what Bain Consulting has claimed in Harvard Business Review for over a decade, customer satisfaction ratings (whether NPS, OSAT or anything else) have very little correlation with actual return/repurchase behavior, and absolutely no correlation with sales/revenue (business growth).
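For readers who want to sanity-check a claim like this on their own data, the basic calculation is a simple Pearson correlation between the rating and a later behavioral outcome. The sketch below is illustrative only; the ratings and visit counts are invented, and this is not OdinText's analysis or dataset.

```python
# Illustrative sketch (invented data, not the Jiffy Lube study):
# Pearson correlation between a satisfaction rating and later return visits.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

# Hypothetical per-customer data: 0-10 NPS-style rating, visits in next year.
nps_ratings = [10, 9, 3, 8, 2, 10, 6, 7, 1, 9]
return_visits = [1, 0, 2, 1, 2, 0, 1, 3, 1, 0]

r = pearson(nps_ratings, return_visits)
print(round(r, 2))
```

A value of r near zero on real data would be consistent with the finding that the rating alone carries little predictive signal about return behavior.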

The solution to better understanding and modeling both return behavior and sales lies in leveraging both structured data and unstructured text, something OdinText is uniquely built to do.
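One simple way to combine the two sources is to derive structured features from the open-end comments and place them alongside the rating. The sketch below is a toy illustration of that idea, not OdinText's software: the keyword list, comments, and ratings are all invented for the example.

```python
# Illustrative sketch (not OdinText's software): turning open-end comments
# into a structured feature that can sit alongside a satisfaction rating.
# The keyword list and the customer records below are invented.

EASE_TERMS = ("easy", "ease", "quick", "fast", "convenient")

def ease_mentions(comment: str) -> int:
    """Count how many ease-related terms appear in a free-text comment."""
    text = comment.lower()
    return sum(text.count(term) for term in EASE_TERMS)

customers = [
    {"rating": 9, "comment": "Quick and easy, in and out fast."},
    {"rating": 9, "comment": "Friendly staff but the wait was long."},
    {"rating": 4, "comment": "Convenient location, easy scheduling."},
]

# Each record becomes (rating, text-derived feature): two customers with the
# same rating can now be distinguished by what they actually said.
features = [(c["rating"], ease_mentions(c["comment"])) for c in customers]
print(features)
```

Features like these could then feed any downstream model of visits or sales; the point of the sketch is only that text adds a dimension the rating alone does not capture.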

You can read the abbreviated case study on Greenbook’s site here.

Feel free to contact us with any questions or for a slightly more in-depth write-up.

OdinText’s software has recently been updated and is now even more powerful at handling predictive analytics for any customer experience metric, whether OSAT, NPS or anything else. You may request information, as well as early access to our upcoming release, here.

Thank you for reading, and thank you to Greenbook for selecting and sharing this interesting case study.


8 thoughts on “Predicting Return Behavior and Sales with CX Ratings or NPS”

  1. Hi Tom,

    This is an interesting blog post. Are you able to say more about the relationship between NPS and your findings with the text analytics? In the Greenbook article, it says:

    “This finding suggested that while the NPS rating alone may not be sufficient to predict actual customer visit behavior or sales, looking at the NPS rating in combination with text provides a meaningful context in which the NPS rating can be used to understand visits and sales.”

    But the actual results discussed don’t show any connection to NPS (although perhaps to other scale measures).

    Looking forward to seeing the new release!


  2. Thanks, Marc.
    NPS is no different, no more or less magical per se than any other satisfaction Likert scale. [In fact, Jiffy Lube has decided to move away from NPS.] One of the things we see is that a lot of companies ask a whole battery of Likert-scale questions and get little to no additional information; they are just making their surveys a lot longer. That said, a single OSAT, NPS or other Likert-scale metric is not completely useless. One good metric can be leveraged together with the text in a number of ways to provide additional information and valence (something we have several patent claims around). I’m happy to discuss this in more detail offline 😉

  3. This case struck a particular chord with me. I have concerns about how firms misuse NPS scores, and I see in this study the potential to understand the negative impact of that misuse. I am not saying the misuse is malicious or even deliberate. Rather, I think the irrational exuberance over NPS and other scores is an attempt to fit customer experience into existing corporate performance management models. The danger, now I suspect a reality, is a disconnect between the real voice of the customer and corporate financial performance.
    I admire your approach and have learned a lot reading your posts.

  4. This is great. I anticipate our new CEO will ask similar questions about why we do not leverage NPS. How can we take a look at the full case study?

  5. This is an interesting post. The Greenbook article doesn’t explain how the NPS score is even needed in your analysis. Can’t you just identify key drivers from the open end comments? How does the NPS score fit in?

  6. Very interesting results.

    In the case study you say that Ease of Use was not correlated with number of visits, but later that more comments referring to ease were. So is there not a strong correlation between how people rate Ease of Use and their comments about ease?

    Have you done anything comparing Customer Effort Score to your comments based results?

  7. Interesting. Revenue per location would be a combination of population density, amount of competition, and share, along with actually satisfying the customer. I can see how people can be equally happy with a low-volume store as with a high-volume store.
    The insight delivered by “ease” is intriguing.

Comments are closed.