Posts tagged unstructured data
Whose story are you telling?

In an age when “storytelling” is king, is yours based on substance? As researchers, we love a good quote. We use quotes to emphasize a point in the findings; they allow us to tell stories and dimensionalize our implications. But sometimes quotes taken from research can take on a life of their own. Without the right context, they can be misinterpreted. It might be difficult for clients to trust that a verbatim is representative of the sample (let alone the greater population). And then there's bias … how easy it is for any human to create a biased narrative with just a little data and some imagination.

Yes, a biased narrative: the plague that can hit anyone in the research field. Researchers must mitigate the risk of bias in their work. They work hard to avoid selection bias (especially challenging within segmentation research, but necessary), cognitive bias, and analytical bias. And then there's confirmation bias.

Need a refresher? Confirmation bias is the tendency to interpret new evidence as confirmation of one's existing beliefs or theories. With little information, an observer can create a very complex narrative about motivations and emotions that might not be accurately representative of the research. (An example of how easy it is to create narratives is the Heider-Simmel experiment [Heider-Simmel Animation]. Go ahead, watch and reflect).


How does this relate to open-ended survey questions?

Traditionally, researchers must read (or rely on others to read and report to them) every open-ended response in a study and create a narrative of the key points and themes contained in that unstructured text. Their reports include verbatims, or quotes from respondents, that support their findings and conclusions. That's why text analytics is so helpful. A researcher, who knows all about biases and does their best to mitigate them, can rely on the very statistical measures they tout to help them do their work better. With text analytics, researchers can find the important themes in the open-ended responses and quantify and validate that data.

Text analytics in general, and OdinText in particular, lets researchers add both power and validity to their selected verbatims. They can avoid bias by confirming that the stories they tell from the unstructured data are actually supported by that data.

In an age where so much emphasis is put on storytelling, let’s not forget that stories are just that unless they are supported by data.

Next time you find yourself overwhelmed with unstructured data and tempted to just “tell a story,” please reach out. I’d love to show you, with your own data, how modern text analytics can ensure your story is based on fact!

Tim Lynch

@OdinText

Top 2017 New Year’s Resolutions Text Analyzed (In Their Own Words)

Will it Unstructure? Part I of a New Series of Text Analytics Tests

Happy New Year!

As I was preparing to celebrate the New Year with my family and pondering the year ahead, my mind wandered to all of those Top New Year’s Resolutions lists that you see the last week in December every year. It seems to me that the same resolutions with very similar incidence populate those lists each year, usually with something around diet and/or exercise as the most popular resolution.

After spending several minutes investigating, it occurred to me that these lists are almost always compiled using quantitative instruments with static choice answers pre-defined by researchers—therefore limited in options and often biased.

Here’s a good example of a study that has been repeated now for a few years by online financial institution GOBankingRates.com.

While their 2017 survey was focused solely on financial resolutions, their 2016 survey was broader and determined that “Live Life to The Fullest” was the most popular resolution (45.7%), followed by “Live a Healthier life” (41.1%) etc. [see chart below].

[Figure: Top 2016 New Year’s resolutions, structured survey results]

The question I had, of course, was what would this look like if you didn’t force people to pick from a handful of arbitrary, pre-defined choices?

Will It Unstructure?

You may be familiar with the outlandish but wildly popular “Will it Blend?” video series by Blendtec, where founder Tom Dickson attempts to blend everything from iPhones to marbles. It’s a wacky, yet compelling way to demonstrate how sturdy these blenders are!

Well, today I’m announcing a new series of experiments that we’re calling “Will it Unstructure?”

The idea here is to take structured questions from surveys, polls and so forth we come across and ask: Will it Unstructure? In other words, will asking the same question in an open-ended fashion yield the same or different results?

(In the future, we’ll cover more of these. Please send us suggestions for structured questions you’d like us to test!)

Will New Year’s Resolutions Unstructure? A Text Analytics Poll™

So, back to those Top New Year’s Resolution lists. Let’s find out: Will it Unstructure?

Over New Year’s weekend we surveyed n=1,536 respondents*, asking them the same question that was asked in the GoBankingRates.com example I referenced earlier: “What are your 2017 resolutions?”

*Representative online general population sample sourced via Google Surveys.

Below is a table of the text comments quickly analyzed by OdinText.

[Figure: OdinText analysis of open-ended 2017 resolution mentions]

As you can see, there’s a lot more to be learned when you allow people to respond unaided and in their own words. In fact, we see a very different picture of what really matters to people in the coming year.

Note: The GoBankingRates.com survey allowed people to select more than one answer.

Predictably, Health (Diet and/or Exercise) came in first, but with a staggeringly lower incidence of mentions compared to the percent of respondents who selected it on the GoBankingRates.com survey: 19.4% vs. 80.7%.

Moreover, we found that ALL of the top resolution categories in the GoBankingRates.com example actually appeared DRAMATICALLY less frequently when respondents were given the opportunity to answer the same question unaided and in their own words:

  • “Living life to the fullest” = 1.1% vs. 45.7%

  • Financial Improvement (make/save more and/or cut debt) = 2.9% vs. 57.6%

  • Spend more time with family/friends = 0.2% vs. 33.2%

Furthermore, the second most-mentioned resolution in our study didn’t even appear in the GoBankingRates.com example!

What we’ll call “Spirituality” here—a mix of sentiments around being kinder to others, staying positive, and finding inner peace—appeared in 8.3% of responses, eclipsing each of the top resolutions from the GoBankingRates.com example except diet/exercise.

After that we see a wide variety of equally often mentioned and sometimes contradictory resolutions. Now, bear in mind that some of these responses—“Drink more alcohol,” for example—were probably made tongue-in-cheek. Interestingly, even in those cases, more than one person said the same thing, which suggests it may mean something more. (I.e., could this have been filed under “Have Fun/Live Life to the Fullest”?)

These replies are all low incidence, sure, but they certainly provide a fuller picture. For instance, who would’ve predicted that “getting a driver’s license/permit” or “getting married” would be a New Year’s resolution? I would add that among these low-incidence mentions, a text analysis provides a way to understand the relative differences in frequency between various answers.
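Tallying unaided mentions like these boils down to mapping each free-text response to one or more themes and counting respondents per theme. Here is a minimal sketch in Python; the keyword rules and responses are hypothetical stand-ins, not OdinText’s actual (far richer) categorization:

```python
from collections import Counter
import re

# Hypothetical keyword rules for a few resolution themes (illustrative only;
# a production categorization would be far more sophisticated).
THEMES = {
    "health": ["diet", "exercise", "weight", "gym", "healthy"],
    "finances": ["money", "save", "debt", "budget"],
    "spirituality": ["kind", "positive", "peace", "pray"],
}

def tag_themes(response):
    """Return the set of themes whose keywords appear in a free-text response."""
    words = set(re.findall(r"[a-z']+", response.lower()))
    return {theme for theme, kws in THEMES.items() if words & set(kws)}

def theme_incidence(responses):
    """Percent of respondents mentioning each theme, unaided."""
    counts = Counter()
    for r in responses:
        counts.update(tag_themes(r))  # each respondent counted once per theme
    n = len(responses)
    return {t: round(100 * c / n, 1) for t, c in counts.items()}

responses = [
    "lose weight and exercise more",
    "save money and pay off debt",
    "be kinder and stay positive",
    "get my driver's license",
]
print(theme_incidence(responses))
```

Because respondents answer unaided, a theme’s incidence here is naturally lower than forced-choice percentages, which is exactly the gap the comparison above illustrates.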

Disturbingly, 0.3% (five people) said their 2017 resolution is to die. Whether these responses were in jest or serious is debatable. Our figure is coincidentally not far off from estimates by reputable sources with expertise on the subject. For example, according to Emory University, in the past year approx. 1.0% of the U.S. population (2.3 million people) developed a suicide plan and 0.5% (1 million people) attempted suicide.

All of this said, obviously the GoBankingRates.com survey was not a scientific instrument. We selected it at random from the many similar “Top New Year’s Resolutions” surveys available.

These results are, of course, subject to interpretation, and we can debate them on a number of fronts. But at the end of the day it’s unmistakably clear that a quantitative instrument with a finite set of choices tells an entirely different story than people do when they have the opportunity to respond unaided and in their own words.

Bonus: Top Three Most Important Events of 2016

Since the whole New Year’s resolutions topic is a little overdone, I ran an additional question just for fun: “Name the Three Most Important Things That Happened in 2016.”

Here are the results from OdinText, ranked in order of occurrence.

[Figure: Most important events of 2016, text analysis]

If I had to answer this question myself I would probably say Donald Trump winning the U.S. Presidential Election, Russian aggression/hacking and Brexit.

But, again, not everyone places the same weight on events. So here’s yet another example of how much more we can learn when we ask people to reply unaided, in their own words.

Thanks for reading!

REMINDER: Let me know what questions you would like us to use for future posts on the “Will it Unstructure?” series!

Wishing you and yours a happy, healthy new year!

@TomHCAnderson

Tom H. C. Anderson

What Does the Co-Occurrence Graph Tell You?

Text Analytics Tips by Gosia

The co-occurrence graph in OdinText may look simple at first sight but it is in fact a very complex visualization. Based on an example we are going to show you how to read and interpret this graph. See the attached screenshots of a single co-occurrence graph based on a satisfaction survey of 500 car dealership customers (Fig. 1-4).

The co-occurrence graph is based on multidimensional scaling techniques that let you view the similarity between individual cases of data (e.g., automatic terms) while taking into account various aspects of the data (i.e., frequency of occurrence, co-occurrence, relationship with the key metric). The graph represents the co-occurrence of words by the spatial distance between them: as far as possible, terms that are often mentioned together are plotted right next to each other (approximate overlap/concurrence).

Figure 1. Co-occurrence graph (all nodes and lines visible).

The attached graph (Fig. 1 above) is based on the 50 most frequently occurring automatic terms (words) mentioned by the car dealership customers. Each node represents one term. The node’s size corresponds to its number of occurrences, i.e., in how many customer comments a given word was found (the larger the node, the greater the number of occurrences). In this example, green nodes correspond to higher overall satisfaction and red nodes to lower overall satisfaction among customers who mentioned a given term, whereas brown nodes reflect satisfaction scores close to the metric midpoint. Finally, the thickness of the line connecting two nodes shows how often the two terms are mentioned together (actual overlap/concurrence): the thicker the line, the more often they appear together in a comment.
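For readers curious about the mechanics, the two core quantities behind the graph (node size and line thickness) can be computed directly from the comments, and the 2-D layout then comes from multidimensional scaling over distances derived from the co-occurrence counts. A minimal sketch with toy comments (OdinText’s actual algorithm is not public; this shows only the generic technique):

```python
from collections import Counter
from itertools import combinations

# Toy comments standing in for the 500 dealership responses
comments = [
    "the waiting room had coffee",
    "luxurious waiting room with coffee",
    "unprofessional manager",
    "manager and employees were unprofessional",
]
terms = ["waiting", "room", "coffee", "manager", "unprofessional", "employees"]

freq = Counter()   # node size: number of comments mentioning each term
cooc = Counter()   # line thickness: comments mentioning both terms of a pair

for c in comments:
    present = [t for t in terms if t in c.split()]
    freq.update(present)
    cooc.update(combinations(sorted(present), 2))

print(freq["waiting"], cooc[("room", "waiting")])
# The 2-D node positions would then come from multidimensional scaling
# (e.g., classical MDS) over distances that shrink as co-occurrence grows.
```

Satisfaction coloring simply averages the key metric over the customers whose comments contain each term.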

Figure 2. Co-occurrence graph (“unprofessional” node and lines highlighted).

So what are the most interesting insights based on a quick look at the co-occurrence graph of the car dealership customer satisfaction survey?

  • “Unprofessional” is the most negative term (red node) and it is most often mentioned together with “manager” or “employees” (Fig. 2 above).
  • “Waiting” is a relatively frequently occurring (medium-sized node) and neutral (brown node) term. It is often mentioned together with “room” (another neutral term) as well as “luxurious”, “coffee”, and “best”, which correspond to high overall satisfaction (light green nodes). Thus, it seems that the luxurious waiting room with available coffee is highly appreciated by customers and makes the waiting experience less negative (Fig. 3 below).
  • The dealership “staff” is often mentioned together with such positive terms as “always”, “caring”, “nice”, “trained”, and “quick” (Fig. 4 below). However, staff is also mentioned with more negative terms, including “unprofessional”, “trust”, and “helpful”, suggesting a few negative customer evaluations related to these terms that may need attention and improvement.

    Figure 3. Co-occurrence graph (“waiting” node and lines highlighted).

    Figure 4. Co-occurrence graph (“staff” node and lines highlighted).

    Hopefully, this quick example can help you extract quick and valuable insights based on your own data!

Gosia

Text Analytics Tips with Gosia

[NOTE: Gosia is a Data Scientist at OdinText Inc. Experienced in text mining and predictive analytics, she is a Ph.D. with extensive research experience in mass media’s influence on cognition, emotions, and behavior.  Please feel free to request additional information or an OdinText demo here.]

Your Candid Thoughts on Open Ends in Surveys?


Hi readers. Today I’m just sharing a very short survey. If you are a user of OdinText’s software this survey is not for you. It’s for marketing researchers in general to get their thoughts on voice of customer comment data and how they deal with it. So if you’re on this site browsing around do feel free to take the short survey here:

https://www.surveymonkey.com/r/6JFC8FN

The survey is only 5 questions and completely anonymous. It’s being fielded in a few marketing research related LinkedIn groups as well, including the NGMR (Next Gen Marketing Research Group).

I’ll be sharing results with you here week after next.

Happy Friday!


Tom H.C. Anderson | @TomHCanderson @OdinText

Tom H.C. Anderson

To learn more about how OdinText can help you understand what really matters to your customers and predict actual behavior,  please contact us or request a Free Demo here >

[NOTE: Tom H. C. Anderson is Founder of Next Generation Text Analytics software firm OdinText Inc. Click here for more Text Analytics Tips ]

 

Let’s Connect at IIEX 2016!

OdinText Presentations at 2016 Insight Innovation Exchange

I’m looking forward to the Insight Innovation Exchange (IIEX) in Atlanta this coming week.

In just a few years it’s become one of the best marketing research trade events and probably my favorite when it comes to meeting those interested in Next Generation Market Research.

IIeX 2016

If you’re attending please let me know. I’d love to meet up briefly and say hello in person. My colleague Sean Timmins and I would love to meet up, hear what you’re working on and see whether OdinText might be something that could help you get to better insights faster.

[PSST If you would like to attend IIEX feel free to use our Speaker discount code ODINTEXT!]

There are so many cool sessions at the conference, and the venue and the neighborhood are great (love the Atlanta food options).  In case you are still considering which sessions to attend I’d love to invite you to our sessions:

1. Monday 2:00-3:00 pm / Making Data Science More Accessible

Monday 2:00-3:00 in the Grand Ballroom: please come support our mission of making data science more accessible in the Insight Innovation Competition. If you are at IIEX, this is THE session you don’t want to miss! [We blogged about this exciting session earlier here.]

2. Tuesday 12:00-2:00 pm / Interactive Roundtable

Tuesday 12:00-2:00, also in the Grand Ballroom, I will be hosting an interactive roundtable on Text Analytics & Text Mining: an informative and lively discussion on where and how this very powerful technology is best deployed now and how it will change the future of analytics. This affects everything from social media monitoring and survey data to email and call center log analysis and a whole lot more…

3. Tuesday 5:00 pm / Special Panel 

Tuesday 5:00, I will be joining Kerry Hecht Labsuirs, Director of Research Services at Recollective, and Jessica Broome, Research Guru at Jessica Broome Research, for a special investigation of survey panelists. The session is entitled Exploring the Participant Experience. (Sneak peek here!)

OdinText was used to analyze the unstructured data from this research, and so I will help by reviewing some of those findings briefly. You can read about some of the initial results here on the blog. We plan to follow up with a second post after the conference.

Again, we really hope to see you at the conference. Please reach out ahead of time and let us know if you’ll be there so we can plan to grab a coffee. If you can’t make it to the event and any of the above interests you, let us know; I’d be happy to schedule a call.

See you in Atlanta!


Tom H.C. Anderson

@TomHCanderson @OdinText

Tom H.C. Anderson

To learn more about how OdinText can help you understand what really matters to your customers and predict actual behavior,  please contact us or request a Free Demo here >

[NOTE: Tom H. C. Anderson is Founder of Next Generation Text Analytics software firm OdinText Inc. Click here for more Text Analytics Tips ]

 

Look Who’s Talking, Part 1: Who Are the Most Frequently Mentioned Research Panels?

Survey Takers Average Two Panel Memberships and Name Names

Who exactly is taking your survey?

It’s an important question beyond the obvious reasons and odds are your screener isn’t providing all of the answers.

Today’s blog post will be the first in a series previewing some key findings from a new study exploring the characteristics of survey research panelists.

The study was designed and conducted by Kerry Hecht, Director of Research at Ramius. OdinText was enlisted to analyze the text responses to the open-ended questions in the survey.

Today I’ll be sharing an OdinText analysis of results from one simple but important question: Which research companies are you signed up with?

Note: The full findings of this rather elaborate study will be released in June in a special workshop at IIEX North America (Insight Innovation Exchange) in Atlanta, GA. The workshop will be led by Kerry Hecht, Jessica Broome and yours truly. For more information, click here.

About the Data

The dataset we’ve used OdinText to analyze today is a survey of research panel members with just over 1,500 completes.

The sample was sourced in three equal parts from leading research panel providers Critical Mix and Schlesinger Associates and from third-party loyalty reward site Swagbucks, respectively.

The study’s author opted to use an open-ended question (“Which research companies are you signed up with?”) instead of a “select all that apply” variation for a couple of reasons, not the least of which being that the latter would’ve needed to list more than a thousand possible panel choices.

Only those panels that were mentioned by at least five respondents (0.3%) were included in the analysis. As it turned out, respondents identified more than 50 panels by name.

How Many Panels Does the Average Panelist Belong To?

The overwhelming majority of respondents—approx. 80%—indicated they belong to only one or two panels. (The average number of panels mentioned among those who could recall specific panel names was 2.3.)

Less than 2% told us they were members of 10 or more panels.

Finally, even fewer respondents told us they were members of as many as 20+ panels; others could not recall the name of a single panel when asked. Some declined to answer the question.
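The “panels per respondent” figures come from counting how many distinct panel names each open-ended answer contains. A minimal sketch, assuming a small hypothetical name dictionary (the real study matched more than 50 panels):

```python
# Hypothetical panel-name dictionary (illustrative; the study matched 50+ names)
PANELS = ["swagbucks", "critical mix", "schlesinger", "toluna", "e-rewards"]

def panels_mentioned(response):
    """Count how many known panel names appear in one open-ended answer."""
    text = response.lower()
    return sum(1 for p in PANELS if p in text)

responses = [
    "Swagbucks and Toluna",
    "I'm on Critical Mix, Schlesinger and e-Rewards",
    "can't remember",
]
counts = [panels_mentioned(r) for r in responses]
recalled = [c for c in counts if c > 0]   # only those who named at least one
avg = sum(recalled) / len(recalled)
print(avg)
```

Note the average is computed only over respondents who could recall at least one name, matching the definition used above.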

Naming Names…Here’s Who

Caption: To see the data more closely, please click this screenshot for an Excel file. 

In Figure 1 we have the 50 most frequently mentioned panel companies by respondents in this survey.

It is interesting to note that even though every respondent was signed up with at least one of the three companies from which we sourced the sample, a third of respondents failed to name that company.

Who Else? Average Number of Other Panels Mentioned

Caption: To see the data more closely, please click this screenshot for an Excel file.

As expected (and, again, taking into account that the sample comes from just the three firms mentioned earlier), larger panels are more likely than smaller, niche panels to contain respondents who belong to other panels (Figure 2).

Panel Overlap/Correlation

Finally, we correlate the mentions of panels (Figure 3) and see that while there is some overlap everywhere, it looks to be relatively evenly distributed.

Caption: To see the data more closely, please click this screenshot for an Excel file.

In a few cases where the correlation is higher, it may be that these panels tend to recruit in the same places online or that there is a relationship between the companies.
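Panel overlap of this kind is typically measured by correlating binary mention indicators across respondents; for two 0/1 vectors, the Pearson correlation reduces to the phi coefficient. A small sketch with toy data (not the study’s actual figures):

```python
def phi(a, b):
    """Phi coefficient: Pearson correlation of two 0/1 membership vectors."""
    n = len(a)
    n11 = sum(1 for x, y in zip(a, b) if x and y)   # members of both panels
    n1 = sum(a)   # members of panel A
    m1 = sum(b)   # members of panel B
    num = n * n11 - n1 * m1
    den = (n1 * (n - n1) * m1 * (n - m1)) ** 0.5
    return num / den if den else 0.0

# 1 = respondent mentioned the panel, 0 = did not (toy data)
panel_a = [1, 1, 0, 0, 1, 0]
panel_b = [1, 0, 0, 0, 1, 0]
print(round(phi(panel_a, panel_b), 2))
```

An evenly distributed overlap would show up as a correlation matrix with no pair standing far above the rest.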

What’s Next?

Again, all of the data provided above are the result of analyzing just a single, short open-ended question using OdinText.

In subsequent posts, we will look into what motivates these panelists to participate in research, as well as what they like and don’t like about the research process. We’ll also look more closely at demographics and psychographics.

You can also look forward to deeper insights from a qualitative leg provided by Kerry Hecht and her team in the workshop at IIEX in June.


Thank you for your readership. As always, I encourage your feedback and look forward to your comments!

@TomHCanderson @OdinText

Tom H.C. Anderson

PS. Just a reminder that OdinText is participating in the IIEX 2016 Insight Innovation Competition!

Voting ends Today! Please visit MAKE DATA ACCESSIBLE and VOTE OdinText!

 

[If you would like to attend IIEX feel free to use our Speaker discount code ODINTEXT]

To learn more about how OdinText can help you understand what really matters to your customers and predict actual behavior,  please contact us or request a Free Demo here >

[NOTE: Tom H. C. Anderson is Founder of Next Generation Text Analytics software firm OdinText Inc. Click here for more Text Analytics Tips ]

 

Support OdinText - Make Data Science Accessible!

Take 7 Seconds to Support the OdinText Mission: Help Make Data Science Accessible! I’m excited to announce that OdinText will participate in the IIEX2016 Insight Innovation Competition!

The competition celebrates innovation in market research and provides a platform for young companies and startups to showcase truly novel products and services with the potential to transform the consumer insights field.

Marketing and research are becoming increasingly complex, and the skills needed to thrive in this environment have changed.

To that end, OdinText was designed to make advanced data analytics and data science accessible to marketers and researchers.

Help us in that mission. It only takes 7 seconds.

Please visit http://www.iicompetition.org/idea/view/387 and cast a ballot for OdinText!

You can view and/or vote for the other great companies here if you like.

Thank you for your consideration and support!

Tom

Tom H. C. Anderson Founder - OdinText Inc. www.odintext.com Info/Demo Request

ABOUT ODINTEXT OdinText is a patented SaaS (software-as-a-service) platform for natural language processing and advanced text analysis. Fortune 500 companies such as Disney and Coca-Cola use OdinText to mine insights from complex, unstructured text data. The technology is available through the venture-backed Stamford, CT firm of the same name founded by CEO Tom H. C. Anderson, a recognized authority and pioneer in the field of text analytics with more than two decades of experience in market research. The company is the recipient of numerous awards for innovation from industry associations such as ESOMAR, CASRO, the ARF and the American Marketing Association. Anderson tweets under the handle @tomhcanderson.

 

Beyond Sentiment - What Are Emotions, and Why Are They Useful to Analyze?
Text Analytics Tips - Branding

Text Analytics Tips by Gosia

Emotions - Revealing What Really Matters

Emotions are short-term intensive and subjective feelings directed at something or someone (e.g., fear, joy, sadness). They are different from moods, which last longer, but can be based on the same general feelings of fear, joy, or sadness.

3 Components of Emotion: Emotions result from arousal of the nervous system and consist of three components: subjective feeling (e.g., being scared), physiological response (e.g., a pounding heart), and behavioral response (e.g., screaming). Understanding human emotions is key in any area of research because emotions are one of the primary causes of behavior.

Moreover, emotions tend to reveal what really matters to people. Therefore, tracking primary emotions conveyed in text can have powerful marketing implications.

The Emotion Wheel - 8 Primary Emotions

OdinText can analyze any psychological content of text, but primary attention has been paid to the power of emotions conveyed in text.

8 Primary Emotions: OdinText tracks the following eight primary emotions: joy, trust, fear, surprise, sadness, disgust, anger, and anticipation (see attached figure; primary emotions in bold).

Sentiment Analysis


Bipolar Nature: These primary emotions have a bipolar nature; joy is opposed to sadness, trust to disgust, fear to anger, and surprise to anticipation. Emotions in the blank spaces are mixtures of the two neighboring primary emotions.

Intensity: The color intensity dimension suggests that each primary emotion can vary in intensity, with darker hues representing a stronger emotion (e.g., terror > fear) and lighter hues representing a weaker emotion (e.g., apprehension < fear). The analogy between the theory of emotions and the theory of color is adopted from the seminal work of Robert Plutchik in the 1980s. [All 32 emotions presented in the figure above are a basis for OdinText’s Emotional Sentiment tracking metric.]
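A dictionary-based approach along these lines maps each word to a primary emotion and an intensity level, then tallies the result per comment. This tiny sketch uses a handful of hypothetical lexicon entries (OdinText’s actual lexicon is proprietary):

```python
# Tiny illustrative lexicon following Plutchik's wheel: word -> (primary
# emotion, intensity level). Entries are hypothetical examples only.
PLUTCHIK = {
    "terror":       ("fear", 3),   # darker hue = stronger emotion
    "fear":         ("fear", 2),
    "apprehension": ("fear", 1),   # lighter hue = weaker emotion
    "ecstasy":      ("joy", 3),
    "joy":          ("joy", 2),
    "serenity":     ("joy", 1),
}

# The wheel's bipolar pairs described above
OPPOSITES = {"joy": "sadness", "trust": "disgust",
             "fear": "anger", "surprise": "anticipation"}

def score(text):
    """Tally primary emotions in a comment, weighted by intensity."""
    totals = {}
    for word in text.lower().split():
        if word in PLUTCHIK:
            emotion, intensity = PLUTCHIK[word]
            totals[emotion] = totals.get(emotion, 0) + intensity
    return totals

print(score("pure joy then sudden terror"))
```

Aggregating these per-comment tallies over a dataset is what turns raw text into a trackable emotion metric.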

Stay tuned for more tips giving details on each of the above emotions.

Gosia

Text Analytics Tips with Gosia

[NOTE: Gosia is a Data Scientist at OdinText Inc. Experienced in text mining and predictive analytics, she is a Ph.D. with extensive research experience in mass media’s influence on cognition, emotions, and behavior.]

What Your Customer Satisfaction Research Isn’t Telling You and Why You Should Care

Why most customer experience management surveys aren’t very useful

 

Most of your customers, hopefully, are not unhappy with you. But if you’re relying on traditional customer satisfaction research (or Customer Experience Management, CXM, as it’s come to be known) to track your performance in the eyes of your customers, you’re almost guaranteed not to learn much that will enable you to make a meaningful change that will impact your business.

That’s because the vast majority of companies are almost exclusively listening to happy customers. And this is a BIG problem.

Customer Satisfaction Distribution - Misconception: Most Customer Feedback is Negative

To understand what’s going on here, we first need to recognize that the notion that most customer feedback is negative is a widespread myth. Most of us assume incorrectly that unhappy customers are proportionately far more likely than satisfied customers to give feedback.

... the vast majority of companies are almost exclusively listening to happy customers. And this is a BIG problem.

In fact, the opposite is true. The distribution of satisfied to dissatisfied customers in the average customer satisfaction survey typically looks very different from what people expect. Indeed, most customers who respond in a customer feedback program are actually likely to be very happy with the company.

Generally speaking, for OdinText users that conduct research using conventional customer satisfaction scales and the accompanying comments, about 70-80% of the scores from their customers land in the Top 2 or 3 boxes. In other words, on a 10-point satisfaction scale or 11-point likeliness-to-recommend scale (i.e. Net Promoter Score), customers are giving either a perfect or very good rating.

That leaves only 20% or so of customers, of which about half are neutral and half are very dissatisfied.
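The top-box calculation behind those figures is simple to state precisely: count the share of ratings that fall in the top N scale points. A quick sketch with toy scores:

```python
def top_box_share(scores, top_n=2, scale_max=10):
    """Share of respondents whose rating falls in the top N boxes."""
    cutoff = scale_max - top_n + 1   # e.g. 9+ counts as top-2 on a 10-pt scale
    return sum(1 for s in scores if s >= cutoff) / len(scores)

ratings = [10, 9, 9, 10, 8, 7, 3, 10, 9, 9]   # toy satisfaction scores
print(top_box_share(ratings))
```

For an 11-point likeliness-to-recommend scale, pass `scale_max=10` with scores from 0 to 10, or adjust `scale_max` accordingly.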

So My Survey Says Most of My Customers Are Pretty Satisfied. What’s the Problem?

Our careful analyses of both structured (Likert scale) satisfaction data and unstructured (text comment) data have revealed a couple of important findings that most companies and customer experience management consultancies seem to have missed.

We first identified these issues when we analyzed almost one million Shell Oil customers using OdinText over a two-year period  (view the video or download the case study here), and since then we have seen the same trends again and again, which frankly left us wondering how we could have missed these patterns in earlier work.

1.  Structured/Likert scale data is duplicative and nearly meaningless

We’ve seen that there is very little real variance in structured customer experience data. Variance is what companies should really be looking for.

The goal, of course, is to better understand where to prioritize scarce resources to maximize ROI, and to use multivariate statistics to tease out more complex relationships. Yet we hardly ever tie this data to real behavior or revenue. If we did, we would probably discover that it usually does NOT predict real behavior. Why?

2.  Satisficing: Everything gets answered the same way

The problem is that customers look at surveys very differently than we do. We hope our careful choice of which attributes to measure is going to tell us something meaningful. But the respondent has either had the pleasant experience she expected with you or, in some (hopefully) rare instances, a not-so-pleasant experience.

The problem is that customers look at surveys very differently than we do. We hope our careful choice of which attributes to measure is going to tell us something meaningful.

In the former case her outlook will be generally positive. This outlook will carry over to just about every structured question you ask her. Consider the typical set of customer sat survey questions…

  • Q. How satisfied were you with your overall experience?
  • Q. How likely to recommend the company are you?
  • Q. How satisfied were you with the time it took?
  • Q. How knowledgeable were the employees?
  • Q. How friendly were the employees? Etc…

Jane's Experience: Jane, who had a positive experience, answers the first two or three questions with some modicum of thought, but they really ask the same thing in a slightly different way, and therefore they get very similar ratings. Very soon the questions—none of which is especially relevant to Jane—dissolve into one single, increasingly boring exercise.

But since Jane did have a positive experience and she is a diligent and conscientious person who usually finishes what she starts, she quickly completes the survey with minimal thought giving you the same Top 1, 2 or 3 box scores across all attributes.

John's Experience: Next is John, who belongs to the fewer than 10% of customers who had a dissatisfying experience. He basically straightlines the survey like Jane did; only he checks the lower boxes. But he really wishes he could just tell you in a few seconds what irritated him and how you could improve.

Instead, he is subjected to a battery of 20 or 30 largely irrelevant questions until he finally gets an opportunity to tell you his problem in the single text question at the end. If he gets that far and has any patience left, he’ll tell you what you need to know right there.

Sadly, many companies won’t do much if anything with this last bit of crucial information. Instead they’ll focus on the responses from the Likert scale questions, all of which Jane and John answered with a similar lack of thought and differentiation between the questions.

3.  Text Comments Tell You How to Improve

So, structured data—that is, again, the aggregated responses from Likert-scale-type survey questions—won’t tell you how to improve. For example, a restaurant customer sat survey may help you identify a general problem area—food quality, service, value for the money, cleanliness, etc.—but the only thing that data will tell you is that you need to conduct more research.

For those who really do want to improve their business results, no other variable in the data predicts actual customer behavior (and ultimately revenue) better than the free-form text response to the right open-ended question, because text comments let customers tell you exactly what they feel you need to hear.
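As a rough illustration of the idea (the comments, return flags, and complaint lexicon below are all hypothetical, and real text analytics goes far beyond keyword matching), even a crude pass over free-form comments can line up with actual customer behavior in a way a wall of straightlined Likert scores cannot:

```python
# Hypothetical survey responses: (open-end comment, did the customer return?)
responses = [
    ("Great food and friendly staff, will be back", True),
    ("Waited 40 minutes and my order was wrong", False),
    ("Loved the atmosphere", True),
    ("Food was cold and the table was dirty", False),
]

# Illustrative complaint lexicon -- a real system would use far richer
# linguistic features than a handful of keywords.
COMPLAINT_TERMS = {"waited", "wrong", "cold", "dirty", "rude", "slow"}

def complaint_score(comment: str) -> int:
    """Count how many complaint terms appear in a comment."""
    words = set(comment.lower().split())
    return len(words & COMPLAINT_TERMS)

# Even this naive score separates the returners from the defectors
# in the toy data above.
for comment, returned in responses:
    print(complaint_score(comment), returned)
```

In this toy sample, every customer with a nonzero complaint score is also a customer who did not return, which is the kind of text-to-behavior link the structured questions above miss.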

4.  Why Most Customer Satisfaction or NPS Open-End Comment Questions Fail

Let’s assume your company appreciates the importance of customer experience management and you’ve invested in the latest text analytics software and sentiment tools. You’ve even shortened your survey because you recognize that the best and most predictive answers come from text questions and not from the structured data.

You’re all set, right? Wrong.

Unfortunately, we see a lot of clients make one final, common mistake that can be easily remedied. Specifically, they ask the recommended Net Promoter Score (NPS) or Overall Satisfaction (OSAT) open-end follow-up question: “Why did you give that rating?” And they ask only this question.

There’s nothing ostensibly wrong with this question, except that you get back what you ask for. So when you ask the 80% of customers who just gave you a positive rating why they gave you that rating, you will at best get a short, positive comment about your business. The fewer than 10% who slammed you will certainly point to a problem area, but this gives you very little to work with other than a few pronounced problems you probably already knew were important.

What you really need is information that you didn’t know and that will enable you to improve in a way that matters to customers and offers a competitive advantage.

An Easy Fix

The solution is actually quite simple: Ask a follow-up probe question like, “What, if anything, could we do better?”

This can then be text analyzed separately or, better yet, combined with the original comment question, which, as mentioned earlier, usually reads, “Why did you give the satisfaction score you gave?” Because of the Poisson-like distribution of customer satisfaction scores, that first question yields almost exclusively positive comments with few ideas for improvement. Text analyzed together, this one-two question combination gives a far more complete picture of how customers view your company and how you can improve.
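A minimal sketch of the one-two combination (field names, answers, and the theme lexicon are all illustrative, not OdinText's method): concatenate each respondent's "why" answer and "what could we do better" answer before tallying themes, so the improvement ideas are not analyzed in isolation:

```python
from collections import Counter

# Hypothetical respondent records with both open-end answers
records = [
    {"why_rating": "Everything was great", "improve": "Maybe add more parking"},
    {"why_rating": "Staff were friendly", "improve": "Faster checkout would help"},
    {"why_rating": "Slow service", "improve": "Hire more staff at lunch"},
]

# Illustrative theme lexicon; real theme discovery is data-driven.
THEMES = {
    "speed": {"slow", "faster", "wait", "checkout"},
    "staffing": {"staff", "hire"},
    "parking": {"parking"},
}

def tag_themes(text: str) -> set:
    """Return the set of themes whose keywords appear in the text."""
    words = set(text.lower().split())
    return {theme for theme, terms in THEMES.items() if words & terms}

# Combine both answers per respondent, then tally themes across the sample.
theme_counts = Counter()
for r in records:
    combined = f"{r['why_rating']} {r['improve']}"
    theme_counts.update(tag_themes(combined))

print(theme_counts.most_common())
```

Analyzing the concatenated text means a respondent who praised the staff in the first question and asked for faster checkout in the second contributes to both themes, rather than having the improvement idea stranded in a separate, thinly populated dataset.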

Final Tip: Make the comment question mandatory. Everyone should be able to answer this question, even if it means typing an “NA” in some rare cases.

Good luck!

P.S. To learn more about how OdinText can help you learn what really matters to your customers and predict real behavior, please contact us or request a Free Demo here >

 

[NOTE: Tom H. C. Anderson is Founder of Next Generation Text Analytics software firm OdinText Inc. Click here for more Text Analytics Tips ]

OdinText Wins American Marketing Association Lavidge Global Marketing Research Prize

AMA Honors Cloud-Based Text Analytics Software Provider OdinText for Making Data Science Accessible to Marketers

OdinText Inc., developer of the Next Generation Text Analytics SaaS (software-as-a-service) platform of the same name, today was named winner of the American Marketing Association’s 2016 Robert J. Lavidge Global Marketing Research Prize for innovation in the field.

The Lavidge Prize, which includes a $5000 cash award, globally recognizes a marketing research/consumer insight procedure or solution that has been successfully implemented and has a practical application for marketers.

According to Chris Chapman, President of the AMA Marketing Insights Council, OdinText earned the award for its contribution to advancing the practice of marketing by making data science accessible to non-data scientists.

“Consumers are creating oceans of unstructured text data, but putting this tremendously valuable information to practical use has posed a significant challenge for marketers and companies,” said Chapman.

“The nominations for OdinText highlighted how the company has distilled very complex applied analytics processes into an intuitive tool that enables marketers to run sophisticated predictive analyses and simulations by themselves, quickly and easily. This is exactly the kind of practical advancement we look for in awarding the Lavidge Prize,” added Chapman.

The cloud-based OdinText software platform enables marketers with no advanced training or data science expertise to harness vast quantities of complex, unstructured text data—survey open-ends, call center transcripts, email, social media, discussion boards—and to rapidly mine valuable insights that would otherwise be unobtainable without a data scientist.

“Marketing is evolving, getting both broader and deeper in terms of skill sets needed to succeed,” said FreshDirect Vice President of Business Intelligence and Analytics Jim DeMarco, who nominated OdinText for the Lavidge Prize.

“OdinText provides marketers with the capability to access more advanced analysis faster and helps the business they work on gain an information advantage. This is exactly the kind of innovation our industry needs right now,” DeMarco said.

The Lavidge Prize was presented in a special ceremony today at the AMA’s 2016 Analytics with Purpose Conference in Scottsdale, AZ. OdinText CEO Tom H. C. Anderson—a recognized authority and pioneer in the field of text analytics with more than two decades of experience in market research—accepted the award on behalf of the firm.

“One of our goals in creating OdinText was to build the tool from an analyst’s perspective, not a software developer’s, so that a marketer armed with OdinText could derive the same insights but faster than a data scientist using traditional techniques and tools,” said Anderson.

“To be recognized for this achievement by the AMA—one of the largest and most prestigious professional associations for marketers in the world, which has devoted itself to leading the way forward into a new era of marketing excellence—is deeply gratifying,” said Anderson.

 

ABOUT ODINTEXT

OdinText is a patented SaaS (software-as-a-service) platform for natural language processing and advanced text analysis. Fortune 500 companies such as Disney and Shell Oil use OdinText to mine insights from complex, unstructured text data easily and rapidly. The technology is available through the venture-backed Stamford, CT firm of the same name founded by CEO Tom H. C. Anderson, a recognized authority and pioneer in the field of text analytics with more than two decades of experience in market research. He tweets under the handle @tomhcanderson.

For more information, visit OdinText Info Request

ABOUT THE AMERICAN MARKETING ASSOCIATION

With a global network of over 30,000 members, the American Marketing Association (AMA) serves as one of the largest marketing associations in the world.  The AMA is the leading professional association for marketers and academics involved in the practice, teaching, and study of marketing worldwide.  Members of the AMA count on the association to be their most credible marketing resource, helping them to establish valuable professional connections and stay relevant in the industry with knowledge, training, and tools to enhance lifelong learning.

For more information, visit www.ama.org