Posts tagged surveys
What Does the Co-Occurrence Graph Tell You?

Text Analytics Tips by Gosia

The co-occurrence graph in OdinText may look simple at first sight, but it is in fact a rich and complex visualization. Using an example, we are going to show you how to read and interpret this graph. See the attached screenshots of a single co-occurrence graph based on a satisfaction survey of 500 car dealership customers (Fig. 1-4).

The co-occurrence graph is based on multidimensional scaling, a technique that lets you view the similarity between individual cases of data (e.g., automatic terms) while taking several aspects of the data into account (frequency of occurrence, co-occurrence, and relationship with the key metric). The graph plots the co-occurrence of words as the spatial distance between them; that is, as well as the layout allows, terms that are often mentioned together are plotted right next to each other (aka approximate overlap/concurrence).

Figure 1. Co-occurrence graph (all nodes and lines visible).

The attached graph (Fig. 1 above) is based on the 50 most frequently occurring automatic terms (words) mentioned by the car dealership customers. Each node represents one term. The node’s size corresponds to the number of occurrences, i.e., the number of customer comments in which a given word was found (the larger the node, the more occurrences). In this example, green nodes correspond to higher overall satisfaction and red nodes to lower overall satisfaction given by customers who mentioned a given term, whereas brown nodes reflect satisfaction scores close to the metric midpoint. Finally, the thickness of the line connecting two nodes indicates how often the two terms are mentioned together (aka actual overlap/concurrence): the thicker the line, the more often they appear together in a comment.
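For the analytically curious, here is a rough, hypothetical Python sketch of the general idea behind such a graph (not OdinText’s actual implementation): term frequency drives node size, mean satisfaction of mentioners drives node color, joint mentions drive edge thickness, and multidimensional scaling turns co-occurrence into spatial distance. All data and names below are illustrative.

```python
# A toy co-occurrence layout: frequent terms, joint mentions, and MDS positions.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.manifold import MDS

comments = ["the manager was unprofessional",        # illustrative comments only
            "nice waiting room with coffee",
            "staff always helpful and quick",
            "long waiting but great coffee"]
satisfaction = np.array([2, 9, 10, 7])                # key metric per comment

vec = CountVectorizer(binary=True, max_features=50)   # top terms by frequency
X = vec.fit_transform(comments).toarray()
terms = vec.get_feature_names_out()

freq = X.sum(axis=0)                                  # node size: occurrences
cooc = X.T @ X                                        # edge weight: joint mentions
avg_sat = (X.T @ satisfaction) / freq                 # node color: mean satisfaction

# MDS: more co-occurrence -> smaller distance -> terms plotted closer together
dist = 1.0 / (1.0 + cooc)
np.fill_diagonal(dist, 0.0)
xy = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)

for t, (x, y), f, s in zip(terms, xy, freq, avg_sat):
    print(f"{t:15s} pos=({x:+.2f},{y:+.2f}) occurrences={f} mean_satisfaction={s:.1f}")
```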

Figure 2. Co-occurrence graph (“unprofessional” node and lines highlighted).

So what are the most interesting insights based on a quick look at the co-occurrence graph of the car dealership customer satisfaction survey?

  • “Unprofessional” is the most negative term (red node) and it is most often mentioned together with “manager” or “employees” (Fig. 2 above).
  • “Waiting” is a relatively frequently occurring term (medium-sized node) and a neutral one (brown node). It is often mentioned together with “room” (another neutral term) as well as “luxurious”, “coffee”, and “best”, which correspond to high overall satisfaction (light green nodes). Thus, it seems that the luxurious waiting room with coffee available is highly appreciated by customers and makes the waiting experience less negative (Fig. 3 below).
  • The dealership “staff” is often mentioned together with such positive terms as “always”, “caring”, “nice”, “trained”, and “quick” (Fig. 4 below). However, “staff” is also mentioned together with terms such as “unprofessional”, “trust”, and “helpful”, suggesting a few negative customer evaluations around these themes that may need attention and improvement.

    Figure 3. Co-occurrence graph (“waiting” node and lines highlighted).

    Figure 4. Co-occurrence graph (“staff” node and lines highlighted).

Hopefully, this quick example helps you extract valuable insights from your own data!

Gosia

Text Analytics Tips with Gosia

[NOTE: Gosia is a Data Scientist at OdinText Inc. Experienced in text mining and predictive analytics, she is a Ph.D. with extensive research experience in mass media’s influence on cognition, emotions, and behavior.  Please feel free to request additional information or an OdinText demo here.]

Five Reasons to NEVER Design a Survey without a Comment Field

Marketing Research Confessions Part II - Researchers Say Open-Ends Are Critical!

My last post focused on the alarmingly high number of marketing researchers (~30%) who, as a matter of policy, either do not include a section for respondent comments (a.k.a. “open-ended” questions) in their surveys or who field surveys with a comment section but discard the responses.

The good news is that most researchers do, in fact, understand and appreciate the value of comment data from open-ended questions.

Indeed, many say feedback in consumers’ own words is indispensable.

Among researchers we recently polled:

  • 70% would NEVER launch a tracker survey without a comment field, and 66% say the same even for ad hoc surveys
  • 80% DO NOT agree that analyzing only a subset of the comment data is sufficient
  • 59% say comment data is AT LEAST as important as the numeric ratings data (and many state it is the most important data they collect)
  • 58% ALWAYS allocate time to analyze comment data after fielding

In Their Own Words: “Essential”

In contrast to the flippancy we saw in comments from those who don’t see any need for open-ended survey questions, researchers who value open-ends felt pretty strongly about them.

Consider these two verbatim responses, which encapsulate the general sentiment expressed by researchers in our survey:

“Absolutely ESSENTIAL. Without [customer comments] you can easily draw the wrong conclusion from the overall survey.”

“Open-ended questions are essential. There is no easy shortcut to getting at the nuanced answers and ‘ah-ha!’ findings present in written text.”

As it happens, respondents to our survey provided plenty of detailed and thoughtful responses to our open-ended questions.

We, of course, ran these responses through OdinText and our analysis identified five common reasons for researchers’ belief that comment data from open-ended questions is critically important.

So here’s why, ranked in ascending order by preponderance of mentions, and in their own words:

 Top Five Reasons to Always Include an Open-End

 

#5 Proxy for Quality & Fraud

“They are essential in sussing out fraud—in quality control.”

“For data quality to determine satisficing and fraudulent behavior…”

“…to verify a reasonable level of engagement in the survey…”

 

#4 Understand the ‘Why’ Behind the Numbers

“Very beneficial when trying to identify cause and effect…”

“Open ends are key to understand the meaning of all the other answers. They provide context, motivations, details. Market Research cannot survive without open ends”

“Extremely useful to understand what is truly driving decisions. In closed-end questions people tend to agree with statements that seem a reasonable, logical answer, even if they have not considered them before at all.”

“It's so critical for me to understand WHY people choose the hard codes, or why they behave the way the big data says they behave. Inferences from quant data only get you so far - you need to hear it from the horse’s mouth...AT SCALE!”

“OEs are windows into the consumer thought process, and I find them invaluable in providing meaning when interpreting the closed-ended responses.”

 

#3 Freedom from Quant Limitations

“They allow respondents more freedom to answer a question how they want to—not limited to a list that might or might not be relevant.”

“Extremely important to gather data the respondent wants to convey but cannot in the limited context of closed ends.”

“Open-enders allow the respondent to give a full explanation without being constrained by pre-defined and pre-conceived codes and structures. With the use of modern text analytics tools these comments can be analyzed and classified with ease and greater accuracy as compared to previous manual processes.”

“…fixed answer options might be too narrow.  Product registration, satisfaction surveys and early product concept testing are the best candidates…”

“…allowing participants to comment on what's important to them.”

 

#2 Avoiding Wrong Conclusions

“We code every single response, even on trackers [longitudinal data] where we have thousands of responses across 5 open-end questions… you can draw the wrong conclusion without open-ends. I've got lots of examples!”

“Essential - mitigate risk of (1) respondents misunderstanding questions and (2) analysts jumping to wrong conclusions and (3) allowing for learnings not included in closed-ended answer categories”

“Open ended if done correctly almost always generate more right results than closed ended.  Checking a box is cheap, but communicating an original thought is more valuable.”

 

#1 Unearthing Unknowns – What We Didn’t Know We Didn’t Know

“They can give rich, in-depth insights or raise awareness of unknown insights or concerns.”

“This info can prove valuable to the research in unexpected ways.”

“They are critical to capture the voice of the customer and provide a huge amount of insight that would otherwise be missed.”

“Extremely useful.  I design them to try and get to the unexpected reasons behind the closed-end data.”

“To capture thoughts and ideas, in their own words, the research may have missed.”

“It can give good complementary information. It can also give information about something the researcher missed in his other questions.”

“Highly useful. They allow the interviewee to offer unanticipated and often most valuable observations.”

 

PS. Additional Reasons…

Although it didn’t make the top five, several researchers cited one other notable reason for valuing open-ended questions, summarized in the following comment:

“They provide the rich unaided insights that often are the most interesting to our clients.”

 

Next Steps: How to Get Value from Open-Ended Questions

I think we’ve established that most researchers recognize the tremendous value of feedback from open-ended questions and the reasons why, but there’s more to be said on the subject.

Conducting good research takes knowledge and skill. I’ve spent the last decade working with unstructured data and will be among the first to admit that while the quality of tools to tackle this data has radically improved, understanding what kind of analysis to undertake, and how to better ask the questions, is just as important as the technology.

Sadly, many researchers, and just about all text analytics firms I’ve run into, understand very little about these techniques for actually collecting better data.

Therefore, over the next few weeks I aim to devote at least one post, if not more, to delving into some of the problems in working with unstructured data brought up by our researchers.

Stay tuned!

@TomHCAnderson

 

Ignoring Customer Comments: A Disturbing Trend

One-Third of Researchers Think Survey Ratings Are All They Need

You’d be hard-pressed to find anyone who doesn’t think customer feedback matters, but it seems an alarming number of researchers don’t believe they really need to hear what people have to say!

 


In fact, almost a third of market researchers we recently polled either don’t give consumers the opportunity to comment or flat out ignore their responses.

  • 30% of researchers report they do not include an option for customer comments in longitudinal customer experience trackers because they “don’t want to deal with the coding/analysis.” Almost as many (34%) admit the same for ad hoc surveys.
  • 42% of researchers also admit launching surveys that contain an option for customer comments with no intention of doing anything with the comments they receive.

Customer Comments Aren’t Necessary?


Part of the problem—as the first bullet indicates—is that coding/analysis of responses to open-ended questions has historically been a time-consuming and labor-intensive process. (Happily, this is no longer the case.)

But a more troubling issue, it seems, is a widespread lack of recognition for the value of unstructured customer feedback, especially compared to quantitative survey data.

  • Two in five researchers (41%) said actual voice-of-customer comments are of secondary importance to structured rating questions.
  • Of those who do read/analyze customer comments, 20% said it’s sufficient to just read/code a small subset of the comments rather than each and every one.

In short, we can conclude that many researchers omit or ignore customer comments because they believe they can get the same or better insights from quantitative ratings data.

This assumption is absolutely WRONG.

Misconception: Ratings Are Enough

I’ve posted here before on the serious problems with relying exclusively on quantitative data for insights.

But before I discovered text analytics, I used to be in the same camp as the researchers highlighted in our survey.

My first mistake was that I assumed I would always be able to frame the right questions and conceive of all possible relevant answers.

I also believed, naively, that respondents actually consider all questions equally and that the decimal-point differences in mean ratings from (frequently onerous) attribute batteries are meaningful, especially if we can apply a t-test and the 0.23% difference is deemed “significant” (even if only at a directional 80% confidence level).

Since then, I have found time and time again that nothing predicts actual customer behavior better than the comment data from a well-crafted open-end.

For a real world example, I invite you to have a look at the work we did with Jiffy Lube.

There are real dollars attached to what our customers can tell us if we let them use their own words. If you’re not letting them speak, your opportunity cost is probably much higher than you realize.

Thank you for your readership,

I look forward to your COMMENTS!

@TomHCAnderson

[PS. Over 200 marketing research professionals completed the survey in just the first week in field (statistics above), and the survey is still fielding here. Ironically, what has impressed me most so far is the quality and thoughtfulness of the responses to the two open-ended questions. I will be doing initial analysis and reporting here on the blog over the next few days, so come back soon for part II, and maybe even a part III, of the analysis of this very short but interesting survey of research professionals.]

Let’s Connect at IIEX 2016!

OdinText Presentations at 2016 Insight Innovation Exchange

I’m looking forward to the Insight Innovation Exchange (IIEX) in Atlanta this coming week.

In just a few years it’s become one of the best marketing research trade events and probably my favorite when it comes to meeting those interested in Next Generation Market Research.


If you’re attending, please let me know; I’d love to say hello in person. My colleague Sean Timmins and I would be glad to meet up briefly, hear what you’re working on, and see whether OdinText might be something that could help you get to better insights faster.

[PSST If you would like to attend IIEX feel free to use our Speaker discount code ODINTEXT!]

There are so many cool sessions at the conference, and the venue and the neighborhood are great (love the Atlanta food options). In case you are still considering which sessions to attend, I’d love to invite you to ours:

1. Monday 2:00-3:00 pm / Making Data Science More Accessible

Monday, 2:00-3:00 pm, in the Grand Ballroom: please come support our mission of making data science more accessible in the Insight Innovation Competition. If you are at IIEX, this is THE session you don’t want to miss! [We blogged about this exciting session earlier here.]

2. Tuesday 12:00-2:00 pm / Interactive Roundtable

Tuesday, 12:00-2:00 pm, also in the Grand Ballroom, I will be hosting an interactive roundtable on Text Analytics & Text Mining: an informative and lively discussion on where and how this very powerful technology is best deployed now and how it will change the future of analytics. This affects everything from social media monitoring and survey data to email and call center log analysis and a whole lot more…

3. Tuesday 5:00 pm / Special Panel 

On Tuesday at 5:00 pm I will be joining Kerry Hecht Labsuirs, Director of Research Services at Recollective, and Jessica Broome, Research Guru at Jessica Broome Research, for a special investigation of survey panelists. The session is entitled Exploring the Participant Experience. (Sneak peek here!)

OdinText was used to analyze the unstructured data from this research, and so I will help by reviewing some of those findings briefly. You can read about some of the initial results here on the blog. We plan to follow up with a second post after the conference.

Again, we really hope to see you at the conference. Please reach out ahead of time and let us know if you’ll be there so we can plan to grab a coffee. If you can’t make it to the event and any of the above interests you, let us know; I’d be happy to schedule a call.

See you in Atlanta!


Tom H.C. Anderson

@TomHCanderson @OdinText


To learn more about how OdinText can help you understand what really matters to your customers and predict actual behavior,  please contact us or request a Free Demo here >

[NOTE: Tom H. C. Anderson is Founder of Next Generation Text Analytics software firm OdinText Inc. Click here for more Text Analytics Tips ]

 

Preventing Customer Churn with Text Analytics

3 Ways You Can Improve Your Lost Customer Analysis


Lapsed customers, customer churn, customer attrition, customer defection, lost customers, non-renewals: whatever you call them, this kind of customer research is becoming more relevant everywhere, and we are seeing more and more companies turning to text analytics to better answer how to retain more customers longer. Why are they turning to text analytics? Because no structured survey data predicts customer behavior as well as actual voice-of-customer text comments!

Today’s post will highlight 3 mistakes we often see being made in this kind of research.

1. Most Customer Loss/Churn Analysis is done on the customers who leave, in isolation from customers who stay. Understandable since it would make little sense to ask a customer who is still with you a survey question such as “Why have you stopped buying from us?”. But customer churn analysis can be so much more powerful if you are able to compare customers who are still with you to those who have left. There are a couple of ways to do this:

  • Whether or not you conduct a separate lapsed customer survey among those who are no longer purchasing, also consider doing a separate post-hoc analysis of your customer satisfaction survey data. It doesn’t have to be current. Just take a time period of, say, the last 6-9 months and analyze the comment data from those customers who have left vs. those who are still with you. What did the two groups say differently just before the lapsed customers left? Can these results be used to predict who is likely to churn ahead of time? The answer is very likely yes, and in many cases you can do something about it! (A rough sketch of this comparison follows at the end of this point.)
  • Whenever possible text questions should be asked of all customers, not just a subgroup such as the leavers. Here sampling as well as how you ask the questions both come into play.

Consider expanding your sampling frame to include not just customers who are no longer purchasing from you, but also customers who are still purchasing from you (especially those who are purchasing more) as well as those still purchasing, but purchasing less. What you really want to understand, after all, is what is driving purchasing; who gives a damn if they claim they are more or less likely to recommend you? Promoter and detractor analysis is overhyped!


You may also consider casting an even wider sampling net than just past and current customers. Why not use a panel sample provider and try to include some competitors’ customers as well? You will need to draw the line somewhere for scope and budget, but you get the idea. The survey should be short and concise and should have the text questions up front, starting very broad (top of mind, unaided) and then probing.

Begin with a question such as “Q. How, if at all, has your purchasing of Category X changed over the last couple of months?” and/or “Q. You indicated your purchasing of Category X has changed. Why? (Please be as specific as possible.)”. Or, perhaps even better, “Q. How, if at all, has your purchasing of Category X changed over the past couple of months? If it has not changed, please also explain why it hasn’t. (Please be as specific as possible.)”. As you can see, almost anyone can answer these questions no matter how much or how little they have purchased. This is exactly what is needed for predictive text analytics! Having only leavers’ data will be insufficient!
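To make the leaver-vs-stayer comparison concrete, here is a minimal Python sketch. The file and column names are hypothetical placeholders, not output from any particular tool:

```python
# Compare which terms leavers vs. stayers used in their comments.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

df = pd.read_csv("satisfaction_survey.csv")  # hypothetical columns: comment, churned (0/1)

vec = CountVectorizer(binary=True, max_features=500, stop_words="english")
X = vec.fit_transform(df["comment"].fillna(""))
terms = vec.get_feature_names_out()

churned = df["churned"].values.astype(bool)
rate_leavers = X[churned].mean(axis=0).A1    # share of leavers mentioning each term
rate_stayers = X[~churned].mean(axis=0).A1   # share of stayers mentioning each term

# Terms over-represented among leavers hint at churn drivers worth probing
gap = pd.Series(rate_leavers - rate_stayers, index=terms)
print(gap.sort_values(ascending=False).head(10))
```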

2. Include other structured (real behavior) data in the analysis. Some researchers analyze their survey data in isolation. Mixed data usually adds predictive power, especially if it’s real behavior data from your CRM database, and not just stated/recalled behavior from your survey. In either case, the key to unlocking meaning and predictability is likely to come from the unstructured comment data. Nothing else can do a better job of explaining what happened.
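As one illustration of mixing the two data types, here is a hedged sketch of combining text features with CRM behavior data in a single churn model; the file and column names are illustrative assumptions:

```python
# Combine TF-IDF text features with real behavior data in one churn model.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

df = pd.read_csv("customers.csv")  # hypothetical: comment, orders_last_6m, tenure_months, churned
df["comment"] = df["comment"].fillna("")

features = ColumnTransformer([
    ("text", TfidfVectorizer(max_features=1000), "comment"),        # unstructured
    ("crm", "passthrough", ["orders_last_6m", "tenure_months"]),    # structured behavior
])
model = Pipeline([("features", features),
                  ("clf", LogisticRegression(max_iter=1000))])
model.fit(df[["comment", "orders_last_6m", "tenure_months"]], df["churned"])
```

In line with the point above, the text features often carry much of the predictive weight, but the structured behavior data helps anchor the model in what customers actually did.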

3. PLEASE, PLEASE resist the urge to start your leaver survey with a structured question asking a battery of “check all that apply” reasons for leaving/shopping less. Your various pre-defined reasons, even if you include an “Other (Specify) _____”, will have several negative effects on your data quality.

First, on seeing your list, customers will often forget the primary reason for their change in purchase frequency; they will assume, incorrectly, that you are most interested in the reasons you have pre-identified. Second, there will be no way for you to tell which of the several reasons they check is truly the most important to them. Third, some customers will repeat themselves in the “Other (Specify)” box, while others will decide not to answer it at all since they already checked so many of your boxes. Either way, you’ve just destroyed your best chance of accurately understanding why your customers’ purchasing has changed!

There are many other ways to improve your insights in lapsed customer survey research by asking fewer yet better comment questions in the right order. I hope the above tips have given you some things to consider. We’re happy to give you additional tips if you like, and we often find that as customers begin using OdinText, their use of survey data, both structured and unstructured, improves greatly, along with their understanding of their customers.

@TomHCanderson

Look Who’s Talking, Part 1: Who Are the Most Frequently Mentioned Research Panels?

Survey Takers Average Two Panel Memberships and Name Names

Who exactly is taking your survey?

It’s an important question, for reasons beyond the obvious, and odds are your screener isn’t providing all of the answers.

Today’s blog post will be the first in a series previewing some key findings from a new study exploring the characteristics of survey research panelists.

The study was designed and conducted by Kerry Hecht, Director of Research at Ramius. OdinText was enlisted to analyze the text responses to the open-ended questions in the survey.

Today I’ll be sharing an OdinText analysis of results from one simple but important question: Which research companies are you signed up with?

Note: The full findings of this rather elaborate study will be released in June in a special workshop at IIEX North America (Insight Innovation Exchange) in Atlanta, GA. The workshop will be led by Kerry Hecht, Jessica Broome and yours truly. For more information, click here.

About the Data

The dataset we’ve used OdinText to analyze today is a survey of research panel members with just over 1,500 completes.

The sample was sourced in three equal parts from leading research panel providers Critical Mix and Schlesinger Associates and from third-party loyalty reward site Swagbucks, respectively.

The study’s author opted to use an open-ended question (“Which research companies are you signed up with?”) instead of a “select all that apply” variation for a couple of reasons, not least because the latter would have needed to list more than a thousand possible panel choices.

Only those panels that were mentioned by at least five respondents (0.3%) were included in the analysis. As it turned out, respondents identified more than 50 panels by name.

How Many Panels Does the Average Panelist Belong To?

The overwhelming majority of respondents—approx. 80%—indicated they belong to only one or two panels. (The average number of panels mentioned among those who could recall specific panel names was 2.3.)

Less than 2% told us they were members of 10 or more panels.

Finally, even fewer respondents told us they were members of as many as 20+ panels; others could not recall the name of a single panel when asked. Some declined to answer the question.
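If you are curious how counts like these can be pulled from a single open-ended question, here is a simplified Python sketch; the file name, column name, and panel list are illustrative assumptions, not the study’s actual code:

```python
# Count how many known panels each respondent names in a free-text answer.
import pandas as pd

df = pd.read_csv("panelists.csv")  # hypothetical column: which_panels (free text)
panels = ["swagbucks", "critical mix", "schlesinger"]  # extend with your full list of known panel names

text = df["which_panels"].fillna("").str.lower()
for p in panels:
    df[p] = text.str.contains(p, regex=False).astype(int)  # 0/1 indicator per panel

df["n_panels"] = df[panels].sum(axis=1)
recalled = df[df["n_panels"] > 0]  # those who could name at least one panel
print("Mean panels named:", round(recalled["n_panels"].mean(), 1))
print(df[panels].sum().sort_values(ascending=False))  # mentions per panel
```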

Naming Names…Here’s Who


Figure 1 shows the 50 panel companies most frequently mentioned by respondents in this survey.

It is interesting to note that even though every respondent was signed up with at least one of the three companies from which we sourced the sample, a third of respondents failed to name that company.

Who Else? Average Number of Other Panels Mentioned


As expected (and, again, taking into account that the sample was sourced from just the three firms mentioned earlier), larger panels are more likely than smaller, niche panels to contain respondents who belong to other panels (Figure 2).

Panel Overlap/Correlation

Finally, we correlate the mentions of panels (Figure 3) and see that while there is some overlap everywhere, it looks to be relatively evenly distributed.


In a few cases where the correlation is higher, it may be that these panels tend to recruit in the same places online or that there is a relationship between the companies.
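For readers who want to reproduce this kind of overlap view on their own data: with one 0/1 indicator column per panel (as in the earlier sketch), the overlap matrix is approximately a correlation matrix of those columns. A minimal continuation under the same assumptions:

```python
# Pearson correlation of 0/1 membership indicators (the phi coefficient);
# higher values flag pairs of panels whose members tend to overlap.
overlap = df[panels].corr()
print(overlap.round(2))
```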

What’s Next?

Again, all of the data provided above are the result of analyzing just a single, short open-ended question using OdinText.

In subsequent posts, we will look into what motivates these panelists to participate in research, as well as what they like and don’t like about the research process. We’ll also look more closely at demographics and psychographics.

You can also look forward to deeper insights from a qualitative leg provided by Kerry Hecht and her team in the workshop at IIEX in June.


Thank you for your readership. As always, I encourage your feedback and look forward to your comments!

@TomHCanderson @OdinText

Tom H.C. Anderson

PS. Just a reminder that OdinText is participating in the IIEX 2016 Insight Innovation Competition!

Voting ends Today! Please visit MAKE DATA ACCESSIBLE and VOTE OdinText!

 

[If you would like to attend IIEX feel free to use our Speaker discount code ODINTEXT]

To learn more about how OdinText can help you understand what really matters to your customers and predict actual behavior,  please contact us or request a Free Demo here >

[NOTE: Tom H. C. Anderson is Founder of Next Generation Text Analytics software firm OdinText Inc. Click here for more Text Analytics Tips ]

 

How to Increase the Amount of Text Data for Analysis

Text Analytics Tips by Gosia

If you find yourself slightly disappointed by the quantity or quality of text comments provided by your respondents you are definitely not alone. This is a common problem especially when survey respondents are not compensated for their answers and when they are allowed to leave open-ended questions unanswered.

However, don’t give up and immediately start collecting more data or designing a new survey. Your current dataset may still contain valuable information in the form of text comments. A good practice is to pool together the text comments from a number of text variables in your dataset. You can select all of them or just a subset that makes the most sense to analyze together.


Figure 1. Pooling text data for a richer analysis.

In the attached figure, the bubble on the left represents probably the most frequently analyzed question in customer satisfaction surveys: the open-ended question following a key rating (e.g., an Overall Satisfaction rating or Net Promoter Score rating). Most of these surveys will have at least one or more very good questions that can complement the answers given to that open-ended question (see the remaining bubbles on the right of the figure). So why not analyze them all together? To do that, simply merge these text variables in your data editor, remembering to leave a blank space between the contents of the columns you are merging.
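If you keep your data in pandas, the merge takes only a couple of lines. A minimal sketch, assuming hypothetical column names for your open-ended variables:

```python
# Pool several open-ended columns into one text variable for analysis.
import pandas as pd

df = pd.read_csv("survey.csv")  # hypothetical survey export
open_ends = ["why_rating", "what_improve", "other_comments"]  # illustrative names

# Join the columns, leaving a blank space between each column's content
df["pooled_text"] = df[open_ends].fillna("").agg(" ".join, axis=1).str.strip()
df.to_csv("survey_pooled.csv", index=False)  # ready to load into your analysis tool
```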

Conclusion: Enriching your data can be simple and powerful.

This very simple pooling of text data from various open-ended questions will allow you to significantly enrich your analysis in OdinText.

Gosia

 

Text Analytics Tips with Gosia

[NOTE: Gosia is a Data Scientist at OdinText Inc. Experienced in text mining and predictive analytics, she is a Ph.D. with extensive research experience in mass media’s influence on cognition, emotions, and behavior.  Please feel free to request additional information or an OdinText demo here.]

What Your Customer Satisfaction Research Isn’t Telling You and Why You Should Care

Why most customer experience management surveys aren’t very useful

 

Most of your customers, hopefully, are not unhappy with you. But if you’re relying on traditional customer satisfaction research—or Customer Experience Management (CXM) as it’s come to be known—to track your performance in the eyes of your customers, you’re almost guaranteed not to learn much that will enable you to make a meaningful change that will impact your business.

That’s because the vast majority of companies are almost exclusively listening to happy customers. And this is a BIG problem.

Misconception: Most Customer Feedback Is Negative

To understand what’s going on here, we first need to recognize that the notion that most customer feedback is negative is a widespread myth. Most of us assume incorrectly that unhappy customers are proportionately far more likely than satisfied customers to give feedback.


In fact, the opposite is true. The mix of satisfied to dissatisfied customers in the results of the average customer satisfaction survey is typically very different from what this myth suggests. Indeed, most customers who respond in a customer feedback program are actually likely to be very happy with the company.

Generally speaking, for OdinText users that conduct research using conventional customer satisfaction scales and the accompanying comments, about 70-80% of the scores from their customers land in the Top 2 or 3 boxes. In other words, on a 10-point satisfaction scale or 11-point likeliness-to-recommend scale (i.e. Net Promoter Score), customers are giving either a perfect or very good rating.

That leaves only 20% or so of customers, of which about half are neutral and half are very dissatisfied.

So My Survey Says Most of My Customers Are Pretty Satisfied. What’s the Problem?

Our careful analyses of both structured (Likert scale) satisfaction data and unstructured (text comment) data have revealed a couple of important findings that most companies and customer experience management consultancies seem to have missed.

We first identified these issues when we analyzed almost one million Shell Oil customers using OdinText over a two-year period  (view the video or download the case study here), and since then we have seen the same trends again and again, which frankly left us wondering how we could have missed these patterns in earlier work.

1.  Structured/Likert scale data is duplicative and nearly meaningless

We’ve seen that there is very little real variance in structured customer experience data. Variance is what companies should really be looking for.

The goal, of course, is to better understand where to prioritize scarce resources to maximize ROI, and to use multivariate statistics to tease out more complex relationships. Yet we hardly ever tie this data to real behavior or revenue. If we did, we would probably discover that it usually does NOT predict real behavior. Why?

2.  Satisficing: Everything gets answered the same way

The problem is that customers look at surveys very differently than we do. We hope our careful choice of which attributes to measure is going to tell us something meaningful. But the respondent has either had the pleasant experience she expected with you or, in some (hopefully) rare instances, a not-so-pleasant experience.


In the former case her outlook will be generally positive. This outlook will carry over to just about every structured question you ask her. Consider the typical set of customer sat survey questions…

  • Q. How satisfied were you with your overall experience?
  • Q. How likely to recommend the company are you?
  • Q. How satisfied were you with the time it took?
  • Q. How knowledgeable were the employees?
  • Q. How friendly were the employees? Etc…

Jane's Experience: Jane, who had a positive experience, answers the first two or three questions with some modicum of thought, but they really ask the same thing in a slightly different way, and therefore they get very similar ratings. Very soon the questions—none of which is especially relevant to Jane—dissolve into one single, increasingly boring exercise.

But since Jane did have a positive experience and she is a diligent and conscientious person who usually finishes what she starts, she quickly completes the survey with minimal thought giving you the same Top 1, 2 or 3 box scores across all attributes.

John's Experience: Next is John, who belongs to the fewer than 10% of customers who had a dissatisfying experience. He basically straightlines the survey like Jane did; only he checks the lower boxes. But he really wishes he could just tell you in a few seconds what irritated him and how you could improve.

Instead, he is subjected to a battery of 20 or 30 largely irrelevant questions until he finally gets an opportunity to tell you his problem in the single text question at the end. If he gets that far and has any patience left, he’ll tell you what you need to know right there.

Sadly, many companies won’t do much if anything with this last bit of crucial information. Instead they’ll focus on the responses from the Likert scale questions, all of which Jane and John answered with a similar lack of thought and differentiation between the questions.

3.  Text Comments Tell You How to Improve

So, structured data—that is, again, the aggregated responses from Likert-scale-type survey questions—won’t tell you how to improve. For example, a restaurant customer sat survey may help you identify a general problem area—food quality, service, value for the money, cleanliness, etc.—but the only thing that data will tell you is that you need to conduct more research.

For those who really do want to improve their business results, no other variable in the data can be used to predict actual customer behavior (and ultimately revenue) better than the free-form text response to the right open-ended question, because text comments enable customers to tell you exactly what they feel you need to hear.

4.  Why Most Customer Satisfaction or NPS Open-End Comment Questions Fail

Let’s assume your company appreciates the importance of customer experience management and you’ve invested in the latest text analytics software and sentiment tools. You’ve even shortened your survey because you recognize that, beyond the Overall Satisfaction (OSAT) rating, the most predictive answers come from text questions and not from the structured data.

You’re all set, right? Wrong.

Unfortunately, we see a lot of clients make one final, common mistake that can be easily remedied. Specifically, they ask the recommended Net Promoter Score (NPS) or Overall Satisfaction (OSAT) open-end follow-up question: “Why did you give that rating?” And they ask only this question.

There’s nothing ostensibly wrong with this question, except that you get back what you ask for. So when you ask the 80% of customers who just gave you a positive rating why they gave you that rating, you will at best get a short positive comment about your business. Those fewer than 10% who slammed you will certainly give you a problem area, but this gives you very little to work with other than a few pronounced problems that you probably knew were important anyway.

What you really need is information that you didn’t know and that will enable you to improve in a way that matters to customers and offers a competitive advantage.

An Easy Fix

The solution is actually quite simple: Ask a follow-up probe question like, “What, if anything, could we do better?”

This can then be text analyzed separately or, better yet, combined with the original text comment, which, as mentioned earlier, usually answers “Why did you give the satisfaction score you gave?” and, given the Poisson-like distribution of customer satisfaction scores, yields almost only positive comments with few ideas for improvement. This one-two question combination, when text analyzed together, gives a far more complete picture of how customers view your company and how you can improve.
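Combining the two comments is the same pooling trick described in the Text Analytics Tips post above. A two-line sketch with hypothetical column names:

```python
# Merge the "why" comment with the improvement probe before analysis.
import pandas as pd

df = pd.read_csv("nps_survey.csv")  # hypothetical columns: why_rating, do_better
df["combined_text"] = (df["why_rating"].fillna("") + " " + df["do_better"].fillna("")).str.strip()
```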

Final Tip: Make the comment question mandatory. Everyone should be able to answer this question, even if it means typing an “NA” in some rare cases.

Good luck!

PS. To learn more about how OdinText can help you learn what really matters to your customers and predict real behavior, please contact us or request a Free Demo here >

 

[NOTE: Tom H. C. Anderson is Founder of Next Generation Text Analytics software firm OdinText Inc. Click here for more Text Analytics Tips ]

OdinText Wins American Marketing Association Lavidge Global Marketing Research Prize

AMA Honors Cloud-Based Text Analytics Software Provider OdinText for Making Data Science Accessible to Marketers

OdinText Inc., developer of the Next Generation Text Analytics SaaS (software-as-a-service) platform of the same name, today was named winner of the American Marketing Association’s  2016 Robert J. Lavidge Global Marketing Research Prize for innovation in the field.

The Lavidge Prize, which includes a $5000 cash award, globally recognizes a marketing research/consumer insight procedure or solution that has been successfully implemented and has a practical application for marketers.

According to Chris Chapman, President of the AMA Marketing Insights Council, OdinText earned the award for its contribution to advancing the practice of marketing by making data science accessible to non-data scientists.

“Consumers are creating oceans of unstructured text data, but putting this tremendously valuable information to practical use has posed a significant challenge for marketers and companies,” said Chapman.

“The nominations for OdinText highlighted how the company has distilled very complex applied analytics processes into an intuitive tool that enables marketers to run sophisticated predictive analyses and simulations by themselves, quickly and easily. This is exactly the kind of practical advancement we look for in awarding the Lavidge Prize,” added Chapman.

The cloud-based OdinText software platform enables marketers with no advanced training or data science expertise to harness vast quantities of complex, unstructured text data—survey open-ends, call center transcripts, email, social media, discussion boards—and to rapidly mine valuable insights that would not have been otherwise obtainable without a data scientist.

“Marketing is evolving, getting both broader and deeper in terms of skill sets needed to succeed,” said FreshDirect Vice President of Business Intelligence and Analytics Jim DeMarco, who nominated OdinText for the Lavidge Prize.

“OdinText provides marketers with the capability to access more advanced analysis faster and helps the business they work on gain an information advantage. This is exactly the kind of innovation our industry needs right now,” DeMarco said.

The Lavidge Prize was presented in a special ceremony today at the AMA’s 2016 Analytics with Purpose Conference in Scottsdale, AZ. OdinText CEO Tom H. C. Anderson—a recognized authority and pioneer in the field of text analytics with more than two decades of experience in market research—accepted the award on behalf of the firm.

“One of our goals in creating OdinText was to build the tool from an analyst’s perspective, not a software developer’s, so that a marketer armed with OdinText could derive the same insights but faster than a data scientist using traditional techniques and tools,” said Anderson.

“To be recognized for this achievement by the AMA—one of the largest and most prestigious professional associations for marketers in the world, which has devoted itself to leading the way forward into a new era of marketing excellence—is deeply gratifying,” said Anderson.

 

ABOUT ODINTEXT

OdinText is a patented SaaS (software-as-a-service) platform for natural language processing and advanced text analysis. Fortune 500 companies such as Disney and Shell Oil use OdinText to mine insights from complex, unstructured text data easily and rapidly. The technology is available through the venture-backed Stamford, CT firm of the same name founded by CEO Tom H. C. Anderson, a recognized authority and pioneer in the field of text analytics with more than two decades of experience in market research. He tweets under the handle @tomhcanderson.

For more information, visit OdinText Info Request

ABOUT THE AMERICAN MARKETING ASSOCIATION

With a global network of over 30,000 members, the American Marketing Association (AMA) serves as one of the largest marketing associations in the world.  The AMA is the leading professional association for marketers and academics involved in the practice, teaching, and study of marketing worldwide.  Members of the AMA count on the association to be their most credible marketing resource, helping them to establish valuable professional connections and stay relevant in the industry with knowledge, training, and tools to enhance lifelong learning.

For more information, visit www.ama.org

Attensity, Clarabridge vs. OdinText: What’s the Difference?

The Printing Press Still Prints, But Who Would Want To?

I’m always a bit reluctant to talk about competitors because I don’t want to disparage anyone, but people often ask me: what differentiates OdinText from your two big, well-known text analytics software competitors, Attensity and Clarabridge?

A RULE-BASED APPROACH

Attensity and Clarabridge are traditional text analytics tools that adhere to an outmoded, rules-based approach. This means they require costly and time-consuming expert customization before they can be useful to a client.

Furthermore, once these rules-based dictionaries are created, they only apply to the data used to create the rules. So, if you attempt to use the tool in another industry, category or company or for a different data set, critical exceptions to these rules creep up that render them useless.


ODINTEXT - LESS SETUP, FASTER INSIGHT

In contrast, we built OdinText from an analyst’s perspective—not a developer’s—so that it’s intuitive, adaptive, data agnostic and fast. It doesn’t need all of this extensive priming and it works great right out of the box, which cuts the speed to insight dramatically.

The platform is easy to use, learnable by everyone, and flexible enough to provide long-term value across an organization. This is part of the reason why we refer to our solution as Next Generation Text Analytics™.

BUILT BY ANALYSTS, FOR ANALYSTS

OdinText is the culmination of more than a decade of applied text analytics experience as a user of multiple text mining software platforms for large clients, including social media giants like Facebook and LinkedIn.

We realized that all of these platforms were built on an approach that required custom dictionaries and linguistic rules—they are more similar than different—and on the analytics side they all lacked fundamental capabilities to perform the tasks for which researchers like us needed them.

CLEANER DATA, BETTER INSIGHT

The exclusive advantages to OdinText’s empirically-based, patented approach include what we refer to as Contextual Sentiment and ESC (Noise Reduction). Put simply, OdinText automatically filters out noise and brings important verbatim issues and relationships in the data to the user’s attention, allowing them to easily discover what they may not otherwise even have known to look for.

[Contact us for additional information on OdinText Contextual Sentiment/Noise Reduction]


The real innovation in word processing was not the technology, but its impact: Word processing simplified and democratized publishing.

OdinText doesn’t require a team of linguists, data scientists or expert consultants to set up before you can use it or reuse it. OdinText enables anyone in your organization to quickly, easily conduct sophisticated analyses of any unstructured text data—survey open-ends, call center transcripts, email, social media, discussion boards—to deliver immediate insights.

IN SUMMARY

In short, OdinText is built with the analyst in mind: faster setup, cleaner data, better insight, all within a simple interface everyone, especially analysts, can use.

Find out for yourself. Contact us for a demo.

 

Yours fondly, @TomHCAnderson

 

[NOTE: Tom is Founder and CEO of OdinText Inc. A long-time champion of text mining, in 2005 he founded Anderson Analytics LLC, the first consumer insights/marketing research consultancy focused on text analytics. He is a frequent speaker and data science guest lecturer at university and research industry events.]