Posts tagged Overall Satisfaction (OSAT)
What Does the Co-Occurrence Graph Tell You?

Text Analytics Tips by Gosia

The co-occurrence graph in OdinText may look simple at first sight, but it is in fact a rather complex visualization. Using an example, we will show you how to read and interpret this graph. See the attached screenshots of a single co-occurrence graph based on a satisfaction survey of 500 car dealership customers (Fig. 1-4).

The co-occurrence graph is based on multidimensional scaling techniques that let you view the similarity between individual cases of data (e.g., automatic terms) while taking into account several aspects of the data (i.e., frequency of occurrence, co-occurrence, and relationship with the key metric). The graph represents the co-occurrence of words by the spatial distance between them, i.e., it plots, as well as it can, terms that are often mentioned together right next to each other (aka approximate overlap/concurrence).

Figure 1. Co-occurrence graph (all nodes and lines visible).

The attached graph (Fig. 1 above) is based on the 50 most frequently occurring automatic terms (words) mentioned by the car dealership customers. Each node represents one term. The node’s size corresponds to the number of occurrences, i.e., in how many customer comments a given word was found (the larger the node, the greater the number of occurrences). In this example, green nodes correspond to higher overall satisfaction and red nodes to lower overall satisfaction among customers who mentioned a given term, whereas brown nodes reflect satisfaction scores close to the metric midpoint. Finally, the thickness of the line connecting two nodes shows how often the two terms are mentioned together (aka actual overlap/concurrence); the thicker the line, the more often they appear together in a comment.
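The node and edge statistics described above are easy to picture with a small sketch. The comments, term list, and counts below are purely illustrative (OdinText computes these measures internally):

```python
# Sketch: counting term occurrences (node size) and term co-occurrences
# (edge thickness) from raw comments. All data here is made up.
from collections import Counter
from itertools import combinations

comments = [
    "the waiting room was luxurious and the coffee was the best",
    "unprofessional manager and unprofessional employees",
    "staff always nice and caring",
]

terms = ["waiting", "room", "coffee", "unprofessional", "manager", "staff"]

occurrence = Counter()      # node size: number of comments containing each term
co_occurrence = Counter()   # edge thickness: comments containing both terms

for comment in comments:
    present = {t for t in terms if t in comment.split()}
    occurrence.update(present)
    co_occurrence.update(combinations(sorted(present), 2))

print(occurrence["unprofessional"])                  # 1 comment mentions it
print(co_occurrence[("manager", "unprofessional")])  # mentioned together once
```

Note that a term repeated within one comment still counts once, which matches the "in how many customer comments a given word was found" definition above.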

Figure 2. Co-occurrence graph (“unprofessional” node and lines highlighted).

So what are the most interesting insights based on a quick look at the co-occurrence graph of the car dealership customer satisfaction survey?

  • “Unprofessional” is the most negative term (red node) and it is most often mentioned together with “manager” or “employees” (Fig. 2 above).
  • “Waiting” is a relatively frequently occurring (medium-sized node) and neutral term (brown node). It is often mentioned together with “room” (another neutral term) as well as “luxurious”, “coffee”, and “best”, which correspond to high overall satisfaction (light green nodes). Thus, it seems that the luxurious waiting room with coffee available is highly appreciated by customers and makes the waiting experience less negative (Fig. 3 below).
  • The dealership “staff” is often mentioned together with such positive terms as “always”, “caring”, “nice”, “trained”, and “quick” (Fig. 4 below). However, staff is also mentioned alongside more negative terms, including “unprofessional”, “trust”, and “helpful”, suggesting a few negative customer evaluations related to these terms that may need attention and improvement.

    Figure 3. Co-occurrence graph (“waiting” node and lines highlighted).

    Figure 4. Co-occurrence graph (“staff” node and lines highlighted).

    Hopefully, this quick example helps you extract valuable insights from your own data!

Gosia

Text Analytics Tips with Gosia

[NOTE: Gosia is a Data Scientist at OdinText Inc. Experienced in text mining and predictive analytics, she is a Ph.D. with extensive research experience in mass media’s influence on cognition, emotions, and behavior.  Please feel free to request additional information or an OdinText demo here.]

Customer Satisfaction: What do satisfied vs. dissatisfied customers talk about?

Text Analytics Tips by Gosia

In this post we are going to discuss one of the first questions most researchers tend to explore using OdinText: what do satisfied versus dissatisfied customers talk about? Many market researchers want to know not only what the entire population of their survey respondents mentions; it is even more critical for them to understand the strengths mentioned by customers who are happy and the problems mentioned by those who are less happy with the product or service.

To perform this kind of analysis you first need to identify “satisfied” and “dissatisfied” customers in your data. The best way to do this is with a satisfaction or satisfaction-related metric, e.g., Overall Satisfaction or NPS (Net Promoter Score) Rating (i.e., likelihood to recommend). In this example, satisfied customers are those who answered 4 – “Somewhat satisfied” or 5 – “Very satisfied” on the Overall Satisfaction question (scale 1-5), and dissatisfied customers are those who answered 1 – “Very dissatisfied” or 2 – “Somewhat dissatisfied”.

Next, you can compare the content of comments provided by the two groups of customers (Group Comparison tab). I suggest you first select the frequency-of-occurrence statistic for your comparison. You can use a dictionary or create your own issues that are meaningful to you and see whether the two groups discuss these issues with different frequency, or you can look at differences in the frequency of the most commonly mentioned automatic terms (which OdinText has generated automatically for you).
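Outside of OdinText, the same kind of group comparison can be sketched in a few lines with pandas. Everything here (the column names "osat" and "comment", the scores, comments, and issue list) is made up for illustration:

```python
# Sketch: split respondents into satisfied (4-5) vs dissatisfied (1-2)
# groups and compare how often each group mentions an issue.
import pandas as pd

df = pd.DataFrame({
    "osat": [5, 4, 2, 1, 5, 2],
    "comment": [
        "friendly staff and quick service",
        "nice waiting room",
        "long waiting time, unprofessional manager",
        "unprofessional employees",
        "great coffee while waiting",
        "waiting forever",
    ],
})

issues = ["waiting", "unprofessional", "staff"]

satisfied = df[df["osat"] >= 4]
dissatisfied = df[df["osat"] <= 2]

for issue in issues:
    # Percentage of comments in each group mentioning the issue
    sat_pct = satisfied["comment"].str.contains(issue).mean() * 100
    dis_pct = dissatisfied["comment"].str.contains(issue).mean() * 100
    print(f"{issue}: satisfied {sat_pct:.0f}% vs dissatisfied {dis_pct:.0f}%")
```

A large gap between the two percentages for an issue is exactly the kind of difference the Group Comparison tab surfaces.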

Figure 1. Frequency of issues mentioned by satisfied (Overall Satisfaction 4-5) versus dissatisfied (Overall Satisfaction 1-2) customers. Descending order of frequency for satisfied customers.

In the attached figure you can see a chart based on a simple group comparison using a dictionary of terms for a sample service company. There you go: lots of exciting insights to present to your colleagues based on a very quick analysis!

Gosia


How to Increase the Amount of Text Data for Analysis

Text Analytics Tips by Gosia

If you find yourself slightly disappointed by the quantity or quality of text comments provided by your respondents, you are definitely not alone. This is a common problem, especially when survey respondents are not compensated for their answers and when they are allowed to leave open-ended questions unanswered.

However, don’t give up and immediately start collecting more data or designing a new survey. Your current dataset may still contain valuable information in the form of text comments. A good practice is to pool together all text comments from a number of text variables in your dataset. You can select all of them or just a subset that makes the most sense to analyze together.


Figure 1. Pooling text data for a richer analysis.

In the attached figure, the bubble on the left represents probably the most frequently analyzed question in customer satisfaction surveys – the open-ended question following a key rating (e.g., Overall Satisfaction Rating or Net Promoter Score Rating). Most of these surveys will have at least one or more very good questions that can complement the answers given to the open-ended question on the left (see the remaining bubbles on the right of the figure). So why not analyze them all together? To do that, simply merge these text variables in your data editor, remembering to leave a blank space between the content of the columns you are merging.
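The merge step above can be sketched in pandas. The column names ("why_rating", "what_improve") are illustrative; the key details are replacing missing answers with an empty string and joining the columns with a single blank space, as described above:

```python
# Sketch: pool several open-ended text columns into one variable,
# with a blank space between the merged columns.
import pandas as pd

df = pd.DataFrame({
    "why_rating": ["great service", None, "slow checkout"],
    "what_improve": ["nothing", "more staff", None],
})

text_cols = ["why_rating", "what_improve"]

# Replace missing answers with "" so the join does not fail, merge the
# columns with a single space, and trim any leftover edge whitespace.
df["pooled_text"] = (
    df[text_cols].fillna("").agg(" ".join, axis=1).str.strip()
)

print(df["pooled_text"].tolist())
# ['great service nothing', 'more staff', 'slow checkout']
```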

Conclusion: Enriching your data can be simple and powerful.

This very simple pooling of text data from various open-ended questions will allow you to significantly enrich your analysis in OdinText.

Gosia

 


What Your Customer Satisfaction Research Isn’t Telling You and Why You Should Care

Why most customer experience management surveys aren’t very useful

 

Most of your customers, hopefully, are not unhappy with you. But if you’re relying on traditional customer satisfaction research—or Customer Experience Management (CXM) as it’s come to be known—to track your performance in the eyes of your customers, you’re almost guaranteed not to learn much that will enable you to make a meaningful change that will impact your business.

That’s because the vast majority of companies are almost exclusively listening to happy customers. And this is a BIG problem.

Misconception: Most Customer Feedback Is Negative

To understand what’s going on here, we first need to recognize that the notion that most customer feedback is negative is a widespread myth. Most of us assume incorrectly that unhappy customers are proportionately far more likely than satisfied customers to give feedback.


In fact, the opposite is true. In the average customer satisfaction survey, most customers who respond to a feedback program are actually likely to be very happy with the company.

Generally speaking, for OdinText users who conduct research using conventional customer satisfaction scales and the accompanying comments, about 70-80% of the scores from their customers land in the Top 2 or 3 boxes. In other words, on a 10-point satisfaction scale or 11-point likelihood-to-recommend scale (i.e., Net Promoter Score), customers are giving either a perfect or very good rating.

That leaves only 20% or so of customers, of which about half are neutral and half are very dissatisfied.

So My Survey Says Most of My Customers Are Pretty Satisfied. What’s the Problem?

Our careful analyses of both structured (Likert scale) satisfaction data and unstructured (text comment) data have revealed a couple of important findings that most companies and customer experience management consultancies seem to have missed.

We first identified these issues when we analyzed almost one million Shell Oil customers using OdinText over a two-year period  (view the video or download the case study here), and since then we have seen the same trends again and again, which frankly left us wondering how we could have missed these patterns in earlier work.

1.  Structured/Likert scale data is duplicative and nearly meaningless

We’ve seen that there is very little real variance in structured customer experience data. Variance is what companies should really be looking for.

The goal, of course, is to better understand where to prioritize scarce resources to maximize ROI, and to use multivariate statistics to tease out more complex relationships. Yet we hardly ever tie this data to real behavior or revenue. If we did, we would probably discover that it usually does NOT predict real behavior. Why?

2.  Satisficing: Everything gets answered the same way

The problem is that customers look at surveys very differently than we do. We hope our careful choice of which attributes to measure is going to tell us something meaningful. But the respondent has either had the pleasant experience she expected with you  OR in some (hopefully) rare instances a not-so-pleasant experience.


In the former case her outlook will be generally positive. This outlook will carry over to just about every structured question you ask her. Consider the typical set of customer sat survey questions…

  • Q. How satisfied were you with your overall experience?
  • Q. How likely to recommend the company are you?
  • Q. How satisfied were you with the time it took?
  • Q. How knowledgeable were the employees?
  • Q. How friendly were the employees? Etc…

Jane's Experience: Jane, who had a positive experience, answers the first two or three questions with some modicum of thought, but they really ask the same thing in a slightly different way, and therefore they get very similar ratings. Very soon the questions—none of which is especially relevant to Jane—dissolve into one single, increasingly boring exercise.

But since Jane did have a positive experience and she is a diligent and conscientious person who usually finishes what she starts, she quickly completes the survey with minimal thought giving you the same Top 1, 2 or 3 box scores across all attributes.

John's Experience: Next is John, who belongs to the fewer than 10% of customers who had a dissatisfying experience. He basically straightlines the survey like Jane did; only he checks the lower boxes. But he really wishes he could just tell you in a few seconds what irritated him and how you could improve.

Instead, he is subjected to a battery of 20 or 30 largely irrelevant questions until he finally gets an opportunity to tell you his problem in the single text question at the end. If he gets that far and has any patience left, he’ll tell you what you need to know right there.

Sadly, many companies won’t do much if anything with this last bit of crucial information. Instead they’ll focus on the responses from the Likert scale questions, all of which Jane and John answered with a similar lack of thought and differentiation between the questions.

3.  Text Comments Tell You How to Improve

So, structured data—that is, again, the aggregated responses from Likert-scale-type survey questions—won’t tell you how to improve. For example, a restaurant customer sat survey may help you identify a general problem area—food quality, service, value for the money, cleanliness, etc.—but the only thing that data will tell you is that you need to conduct more research.

For those who really do want to improve their business results, no other variable in the data can be used to predict actual customer behavior (and ultimately revenue) better than the free-form text response to the right open-ended question, because text comments enable customers to tell you exactly what they feel you need to hear.

4.  Why Most Customer Satisfaction or NPS Open-End Comment Questions Fail

Let’s assume your company appreciates the importance of customer experience management and you’ve invested in the latest text analytics software and sentiment tools. You’ve even shortened your survey because you recognize that the best and most predictive answers come from the text questions and not from the structured data.

You’re all set, right? Wrong.

Unfortunately, we see a lot of clients make one final, common mistake that can be easily remedied. Specifically, they ask the recommended Net Promoter Score (NPS) or Overall Satisfaction (OSAT) open-end follow-up question: “Why did you give that rating?” And they ask only this question.

There’s nothing ostensibly wrong with this question, except that you get back what you ask for. So when you ask the 80% of customers who just gave you a positive rating why they gave you that rating, you will at best get a short positive comment about your business. The fewer than 10% who slammed you will certainly give you a problem area, but this gives you very little to work with other than a few pronounced problems that you probably knew were important anyway.

What you really need is information that you didn’t know and that will enable you to improve in a way that matters to customers and offers a competitive advantage.

An Easy Fix

The solution is actually quite simple: Ask a follow-up probe question like, “What, if anything, could we do better?”

This can then be text analyzed separately or, better yet, combined with the original comment question, which, as mentioned earlier, usually reads “Why did you give the satisfaction score you gave?” and, due to the skewed (Poisson-like) distribution of customer satisfaction scores, yields almost only positive comments with few ideas for improvement. When text analyzed together, this one-two question combination gives a far more complete picture of how customers view your company and how you can improve.

Final Tip: Make the comment question mandatory. Everyone should be able to answer this question, even if it means typing an “NA” in some rare cases.

Good luck!

P.S. To learn more about how OdinText can help you learn what really matters to your customers and predict real behavior, please contact us or request a Free Demo here >

 

[NOTE: Tom H. C. Anderson is Founder of Next Generation Text Analytics software firm OdinText Inc. Click here for more Text Analytics Tips ]

Peaks and Valleys or Critical Moments Analysis

Text Analytics Tips by Gosia

 

How can you gain interesting insights just from looking at descriptive charts based on your data?

  1. Select a key metric of interest, like Overall Satisfaction (scale 1-5), and, using text analytics software that can plot text data as well as numeric data longitudinally (e.g., OdinText), view your metric averages across time.
  2. View the plot using different time intervals (e.g., the plot could display daily, weekly, bi-weekly, or monthly overall satisfaction averages) and look for obvious “peaks” (sudden increases in the average score) or “valleys” (sudden decreases in the average score).
  3. Note down the time periods in which you observed any peaks or valleys and try to identify reasons or events associated with these trends, e.g., changes in management, a new advertising campaign, customer service quality, etc.
  4. Plot average overall satisfaction scores for selected themes and see how they relate to the identified “peaks” or “valleys”, as these themes may provide potential answers to the critical moments in your longitudinal analysis.
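The interval-switching and valley-spotting steps above can be sketched outside OdinText with pandas. The dates, scores, and the drop threshold of one scale point are all made up for illustration:

```python
# Sketch: aggregate a key metric at different time intervals and flag
# sudden drops ("valleys") versus the previous day.
import pandas as pd

df = pd.DataFrame({
    "date": pd.to_datetime(
        ["2016-03-01", "2016-03-02", "2016-03-02", "2016-03-09"]
    ),
    "osat": [5.0, 3.0, 3.2, 4.5],
})

series = df.set_index("date")["osat"]
daily = series.resample("D").mean()    # daily averages
weekly = series.resample("W").mean()   # the same data, weekly averages

# A crude "valley" detector: days whose average dropped by more than
# one scale point versus the previous day.
valleys = daily[daily.diff() < -1.0]
print(valleys)
```

Swapping the `resample` rule ("D", "W", "2W", "M") is all it takes to view the metric at daily, weekly, bi-weekly, or monthly intervals.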

In the figure below you can see how the average overall satisfaction of a sample company varied during approximately one month of time (each data point/column represents one day in a given month). Whereas no “peaks” were found in the average overall satisfaction curve, there was one significant “valley” visible at the beginning of the studied month (see plot 1 in Figure 1). It represented a sudden drop from the average satisfaction of 5.0 (day 1) to 3.1 (day 2) and 3.5 (day 3) before again rising up and oscillating around the average satisfaction of 4.3 for the rest of the days that month. So what could be the reason for this sudden and deep drop in customer satisfaction?


Figure 1. Annotated OdinText screenshots showing an example of an exploratory analysis using longitudinal data (Overall Satisfaction).

Whereas a definite answer requires more advanced predictive analyses (also available in OdinText), a quick and very easy way to explore potential answers is possible simply by plotting the average satisfaction scores associated with a few themes identified earlier. In this sample scenario, average satisfaction scores among customers who mentioned “customer service” (green bar; second plot) overlap very well with the overall satisfaction trendline (orange line) suggesting that customer service complaints may have been the reason for lowered satisfaction ratings on days 2 and 3. Another theme plotted, “fast service” (see plot 3), did not at all follow the overall satisfaction trendline as customers mentioning this theme were highly satisfied almost on every day except day 6.

This kind of simple exploratory analysis can be very powerful in showing you which factors might affect customer satisfaction, and it may serve as a crucial step toward subsequent quantitative analysis of your text and numeric data.

 


A Text Analytics Question

How many survey questions are needed? Yesterday's Q&A with BRB prompted an interesting discussion on one of the research-related LinkedIn groups I belong to. As a result, today I have a semi-hypothetical question I'd like to put out to the marketing research community.

Let's assume that a customer satisfaction survey has these four quite common questions:

  1. What is your likelihood to recommend the brand/product? (10 point scale)
  2. What is your likelihood to try the brand/product again? (10 point scale)
  3. What is your overall satisfaction with the brand/product? (10 point scale)
  4. Please explain why you are satisfied/dissatisfied with the brand/product. (text comment)

If you could predict the average of questions 1-3 (with, say, 80% accuracy) by analyzing just Q4, would you bother asking Q1-Q3?

What if, together with Q4 and any single one of the other Likert scale questions, you could predict the remaining two questions with >90% accuracy? Would you still ask them?
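As a back-of-the-envelope illustration of what "predicting a Likert score from text" means, here is a tiny bag-of-words linear model. The comments, scores, and vocabulary are entirely made up, and with so few examples the fit is trivially perfect; a real analysis would use held-out data and far richer features:

```python
# Sketch: fit a bag-of-words least-squares model of comment text
# against a 10-point rating, then measure in-sample fit (R^2).
import numpy as np

comments = [
    "great friendly staff",
    "great service",
    "slow unprofessional staff",
    "slow waiting unprofessional",
]
scores = np.array([9.0, 8.0, 3.0, 2.0])  # e.g., likelihood to recommend

# Word-count features plus an intercept column.
vocab = sorted({w for c in comments for w in c.split()})
X = np.array([[c.split().count(w) for w in vocab] for c in comments], float)
X = np.hstack([X, np.ones((len(comments), 1))])

coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
predicted = X @ coef

# In-sample R^2 (with 4 rows and 8 columns this is trivially ~1.0;
# it says nothing about out-of-sample accuracy).
r2 = 1 - ((scores - predicted) ** 2).sum() / ((scores - scores.mean()) ** 2).sum()
print(round(r2, 2))
```

Whether such a model reaches 80% accuracy on new respondents, as the hypothetical asks, can only be judged with proper cross-validation on real data.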

@TomHCAnderson @OdinText