Posts tagged consumer insight
Of Tears and Text Analytics

An OdinText User Story - Text Analytics Tips Guest Post (AI Meets VOC)

Today on the blog we have another first in what will soon be an ongoing series: we’re inviting OdinText users to contribute to the Text Analytics Tips blog. Today we have Kelsy Saulsbury guest blogging. Kelsy is a relatively new user of OdinText, though she’s jumped right in and is doing some very interesting work.

In her post she ponders an apropos topic: whether automation via artificial intelligence may make some tasks too easy, and what, if anything, might be lost by not having to read every customer comment verbatim.

 

Of Tears and Text Analytics By Kelsy Saulsbury Manager, Consumer Insights & Analytics

“Are you ok?” the woman sitting next to me on the plane asked.  “Yes, I’m fine,” I answered while wiping the tears from my eyes with my fingers.  “I’m just working,” I said.  She looked at me quizzically and went back to reading her book.

I had just spent the past eight hours in two airports and on two long flights, which might make anyone cry.  Yet the real reason for my tears was that I had been reading hundreds of open-end comments about why customers had decided to buy less from us or stop buying from us altogether.  Granted, eight hours of hand-coding open ends wasn’t the most accurate way to quantify the comments, but it did allow me to feel our customers’ pain, from the death of a spouse to the financial hardship of a lost job.  Other reasons for buying less food weren’t quite as sad — children off to college or eating out more after retirement and a lifetime of cooking.

I could also hear the frustration in their voices on the occasions when we let them down.  We failed to deliver when we said we would, leaving the dessert missing from a party.  They took off work to meet us, and we never showed.  Anger at time wasted.

Reading their stories allowed me to feel their pain and better share it with our marketing and operations teams.  However, I couldn’t accurately quantify the issues or easily tie them to other questions in the attrition study.  So this year when our attrition study came around, I utilized a text analytics tool (OdinText) for the text analysis of our open ends around why customers were buying less.

It took 1/10th of the time to see more accurately how many people talked about each issue.  It allowed me to better see how the issues clustered together and how they differed based on levels of overall satisfaction.  It was fast, relatively easy to do, and directly tied to other questions in our study.

I’ve seen the benefits of automation, yet I’m left wondering how we best take advantage of text analytics tools without losing the power of the emotion in the words behind the data.  I missed hearing and internalizing the pain in their voices.  I missed the tears and the urgency they created to improve our customers’ experience.

 

Kelsy Saulsbury Manager, Consumer Insights & Analytics Schwan's Company

 

A big thanks to Kelsy for sharing her thoughts on OdinText's Text Analytics Tips blog. We welcome your thoughts and questions in comment section below.

If you’re an OdinText user and have a story to share please reach out. In the near future we’ll be sharing more user blog posts and case studies.

@OdinText

When Oprah is President We Can Celebrate Family Day While Skiing!

Text Analytics Poll™ Shows What We’d Like Instead of Presidents Day It’s been less than a week since our Valentine’s Day poll unearthed what people dislike most about their sweethearts, and already another holiday is upon us! Though apparently for most of us it’s not much of a holiday at all; well over half of Americans say they do nothing to commemorate ‘Presidents Day.’

You’ll note I put the holiday in single quotes. That’s because there’s some confusion around the name. Federally, it’s recognized as Washington’s Birthday. At the state level, it’s known by a variety of names—President’s Day, Presidents’ Day, Presidents Day and others, again, depending on the state.

But the name is not the only inconsistency about Presidents Day. If you’re a federal employee OR you happen to be a state employee in a state where the holiday is observed OR you work for an employer who honors it, you get the day off work with pay. Schools may or may not be closed, but that again depends on where you live.

As for what we’re observing exactly, well, that also depends on the state, but people generally regard the holiday as an occasion to honor either George Washington, alone, or Washington and Abraham Lincoln, or U.S. presidents, in general.

Perhaps the one consistent aspect of this holiday is the sales? It’s particularly popular among purveyors of automobiles, mattresses, and furniture.

Yes, it’s a patriotic sort of holiday, but on the whole, we suspected that ‘Presidents Day’ fell on the weaker end of the American holiday spectrum, so we investigated a little bit…

About this Text Analytics Poll™

In this example for our ongoing series demonstrating the efficiency, flexibility, and practicality of the Text Analytics Poll™ for insights generation, we opted for a light-hearted poll using a smaller sample* than usual. While text analytics have obvious value when applied to larger-scale data where human reading or coding is impossible or too expensive, you’ll see here that OdinText also works very effectively with smaller samples!

I’ll also emphasize that the goal of these little Text Analytics Polls™ is not to conduct a perfect study, but to very quickly design and field a survey with only one open-ended question, analyze the results with OdinText, and report the findings here on the blog. (The only thing that takes a little time—usually 2-3 days—is the data collection.)

So while the research is representative of the general online population, and the text analytics coding was applied with 100% consistency throughout the entire sample, this very speedy exercise is meant to inspire users of OdinText to use the software in new ways to answer new questions. It is not meant to be an exhaustive exploration of the topic. We welcome our users to comment, suggest improvements in the questions we ask, and propose future topics!

Enough said, on to the results…

A Holiday In Search of a Celebrant in Search of a Holiday…

Poll I: Americans Celebrate on the Slopes, Not in Stores

When we asked Americans how they typically celebrate Presidents Day, the vast majority told us they don’t. And those few of us lucky enough to have the day off from work tend to not do much outside of sleeping.

But the surprise came from those few who actually said they do something on Presidents Day!

We expected people to say they go shopping on Presidents Day, but the most popular activity mentioned (after nothing and sleeping) was skiing! And skiing was followed by 2) barbecuing and 3) spending time with friends—not shopping.

Poll II: Change it to Family Day?

So, maybe as far as holidays go, Presidents Day is a tad lackluster? Could we do better?

We asked Americans:

Q. If we could create a new holiday instead of Presidents Day, what new holiday would you suggest we celebrate?

While some people indicated Presidents Day is fine as is, among those who suggested a new holiday there was no shortage of creativity!

By large margins, the three most frequently mentioned replacements for Presidents Day were 1) Leaders/Heroes Day, 2) Native American Day (this holiday already exists, so maybe it could benefit from some publicity?) and 3) Family Day (which is celebrated in parts of Canada and other countries).

People also seemed to like the idea of shifting the date and making a holiday out of other important annually occurring events that lent themselves to a day off in practical terms like Election Day, Super Bowl Monday and, my personal favorite, Taxpayer Day on April 15!

Poll III: From Celebrity Apprentice to Celebrity POTUS

Donald Trump isn’t the first person in history to have not held elected office before becoming president, but he is definitely the first POTUS to have had his own reality TV show! Being Presidents Day, we thought it might be fun to see who else from outside of politics might interest Americans…

 Q: If you could pick any celebrity outside of politics to be President, who would it be?

 

Looks like we could have our first female president if Oprah ever decides to run. The media mogul’s name just rolled off people’s tongues, followed very closely by George Clooney, with Morgan Freeman in a respectable third.

Let Them Tell You in Their Own Words

In closing, I’ll remind you that none of these data were generated by a multiple-choice instrument, but via unaided text comments from people in their own words.

What never ceases to amaze me about these exercises is how even when we give people license to say whatever crazy thing they can think up—without any prompts or restrictions—people often have the same thoughts. And so open-ends lend themselves nicely to quantification using a platform like OdinText.

If you’re among the lucky folks who have the holiday off, enjoy the slopes!

Until next time, Happy Presidents Day!

@TomHCAnderson

PS.  Do you have an idea for our next Text Analytics Poll™? We’d love to hear from you. Or, why not use OdinText to analyze your own data!

[*Today’s OdinText Text Analytics Poll™ sample of n=500 U.S. online representative respondents was sourced through Google Surveys. The sample has a confidence interval of +/- 4.38 percentage points at the 95% confidence level. Larger samples have a smaller confidence interval. Subgroup analyses within the sample have a larger confidence interval.]
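(For the curious, the +/- 4.38 figure matches the standard margin-of-error formula for a proportion at p = 0.5. A quick sketch, where the function name is my own:)

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the confidence interval for a proportion,
    in percentage points (z = 1.96 for the 95% level)."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(500), 2))    # n = 500 gives ~4.38
print(round(margin_of_error(2000), 2))   # a larger sample gives a smaller interval
```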

About Tom H. C. Anderson

Tom H. C. Anderson is the founder and managing partner of OdinText, a venture-backed firm based in Stamford, CT whose eponymous, patented SaaS platform is used by Fortune 500 companies like Disney, Coca-Cola and Shell Oil to mine insights from complex, unstructured and mixed data. A recognized authority and pioneer in the field of text analytics with more than two decades of experience in market research, Anderson is the recipient of numerous awards for innovation from industry associations such as CASRO, ESOMAR, and the ARF. He was named one of the “Four under 40” market research leaders by the American Marketing Association in 2010. He tweets under the handle @tomhcanderson.

What Does the Co-Occurrence Graph Tell You?

Text Analytics Tips - Branding: What does the co-occurrence graph tell you? Text Analytics Tips by Gosia

The co-occurrence graph in OdinText may look simple at first sight but it is in fact a very complex visualization. Based on an example we are going to show you how to read and interpret this graph. See the attached screenshots of a single co-occurrence graph based on a satisfaction survey of 500 car dealership customers (Fig. 1-4).

The co-occurrence graph is based on multidimensional scaling techniques that let you view the similarity between individual cases of data (e.g., automatic terms) while taking into account several aspects of the data (i.e., frequency of occurrence, co-occurrence, and relationship with the key metric). The graph represents the co-occurrence of words as spatial distance: as far as possible, terms that are often mentioned together are placed right next to each other (aka approximate overlap/concurrence).

Figure 1. Co-occurrence graph (all nodes and lines visible).

The attached graph (Fig. 1 above) is based on the 50 most frequently occurring automatic terms (words) mentioned by the car dealership customers. Each node represents one term. The node’s size corresponds to its number of occurrences, i.e., how many customer comments a given word was found in (the larger the node, the greater the number of occurrences). In this example, green nodes correspond to higher overall satisfaction and red nodes to lower overall satisfaction among customers who mentioned a given term, whereas brown nodes reflect satisfaction scores close to the metric midpoint. Finally, the thickness of the line connecting two nodes shows how often the two terms are mentioned together (aka actual overlap/concurrence); the thicker the line, the more often they appear together in a comment.
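For readers who like to see the mechanics, the three statistics the graph encodes (term frequency for node size, pairwise co-occurrence for line thickness, and mean satisfaction for node color) can be tallied in a few lines of Python. This is a toy sketch with made-up comments, not OdinText’s actual implementation:

```python
from collections import Counter
from itertools import combinations

# Toy data: (customer comment, overall satisfaction on a 1-5 scale)
survey = [
    ("the waiting room was luxurious", 5),
    ("unprofessional manager", 1),
    ("long waiting but nice staff", 4),
]

freq = Counter()   # node size: number of comments mentioning each term
cooc = Counter()   # line thickness: number of comments mentioning a pair
scores = {}        # node color: satisfaction scores behind each term

for comment, score in survey:
    terms = set(comment.split())
    freq.update(terms)
    cooc.update(combinations(sorted(terms), 2))
    for term in terms:
        scores.setdefault(term, []).append(score)

# Mean satisfaction per term drives the red/brown/green coloring
mean_sat = {t: sum(s) / len(s) for t, s in scores.items()}

print(freq["waiting"])              # mentioned in two comments
print(cooc[("room", "waiting")])    # co-mentioned in one comment
print(mean_sat["unprofessional"])   # low satisfaction, so a red node
```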

Figure 2. Co-occurrence graph (“unprofessional” node and lines highlighted).

So what are the most interesting insights based on a quick look at the co-occurrence graph of the car dealership customer satisfaction survey?

  • “Unprofessional” is the most negative term (red node) and it is most often mentioned together with “manager” or “employees” (Fig. 2 above).
  • “Waiting” is a relatively frequently occurring (medium-sized node) and neutral term (brown node). It is often mentioned together with “room” (another neutral term) as well as “luxurious”, “coffee”, and “best”, which correspond to high overall satisfaction (light green nodes). Thus, it seems that the luxurious waiting room with coffee available is highly appreciated by customers and makes the waiting experience less negative (Fig. 3 below).
  • The dealership “staff” is often mentioned together with such positive terms as “always”, “caring”, “nice”, “trained”, and “quick” (Fig. 4 below). However, “staff” is also mentioned with more negative terms, including “unprofessional”, “trust”, and “helpful”, suggesting a few negative customer evaluations related to these terms which may need attention and improvement.

Figure 3. Co-occurrence graph (“waiting” node and lines highlighted).

Figure 4. Co-occurrence graph (“staff” node and lines highlighted).

Hopefully, this quick example helps you extract valuable insights from your own data!

Gosia

Text Analytics Tips with Gosia

[NOTE: Gosia is a Data Scientist at OdinText Inc. Experienced in text mining and predictive analytics, she is a Ph.D. with extensive research experience in mass media’s influence on cognition, emotions, and behavior.  Please feel free to request additional information or an OdinText demo here.]

Five Reasons to NEVER Design a Survey without a Comment Field

Marketing Research Confessions Part II - Researchers Say Open-Ends Are Critical!

My last post focused on the alarmingly high number of marketing researchers (~30%) who, as a matter of policy, either do not include a section for respondent comments (a.k.a. “open-ended” questions) in their surveys or who field surveys with a comment section but discard the responses.

The good news is that most researchers do, in fact, understand and appreciate the value of comment data from open-ended questions.

Indeed, many say feedback in consumers’ own words is indispensable.

Among researchers we recently polled:

  • 70% would NEVER launch a tracker (or even an ad-hoc survey, 66%) without a comment field
  • 80% DO NOT agree that analyzing only a subset of the comment data is sufficient
  • 59% say comment data is AT LEAST as important as the numeric ratings data (and many state they are the most important data points)
  • 58% ALWAYS allocate time to analyze comment data after fielding

In Their Own Words: “Essential”

In contrast to the flippancy we saw in comments from those who don’t see any need for open-ended survey questions, researchers who value open-ends felt pretty strongly about them.

Consider these two verbatim responses, which encapsulate the general sentiment expressed by researchers in our survey:

“Absolutely ESSENTIAL. Without [customer comments] you can easily draw the wrong conclusion from the overall survey.”

“Open-ended questions are essential. There is no easy shortcut to getting at the nuanced answers and ‘ah-ha!’ findings present in written text.”

As it happens, respondents to our survey provided plenty of detailed and thoughtful responses to our open-ended questions.

We, of course, ran these responses through OdinText and our analysis identified five common reasons for researchers’ belief that comment data from open-ended questions is critically important.

So here’s why, ranked in ascending order by number of mentions and in their own words:

 Top Five Reasons to Always Include an Open-End

 

#5 Proxy for Quality & Fraud

“They are essential in sussing out fraud—in quality control.”

“For data quality, to determine satisficing and fraudulent behavior”

“…to verify a reasonable level of engagement in the survey…”

 

#4 Understand the ‘Why’ Behind the Numbers

“Very beneficial when trying to identify cause and effect”

“Open ends are key to understand the meaning of all the other answers. They provide context, motivations, details. Market Research cannot survive without open ends”

“Extremely useful to understand what is truly driving decisions. In closed-end questions people tend to agree with statements that seem a reasonable, logical answer, even if they have not considered them before at all”

“It's so critical for me to understand WHY people choose the hard codes, or why they behave the way the big data says they behave. Inferences from quant data only get you so far - you need to hear it from the horse’s mouth...AT SCALE!”

“OEs are windows into the consumer thought process, and I find them invaluable in providing meaning when interpreting the closed-ended responses.”

 

#3 Freedom from Quant Limitations

“They allow respondents more freedom to answer a question how they want to—not limited to a list that might or might not be relevant.”

“Extremely important to gather data the respondent wants to convey but cannot in the limited context of closed ends.”

“Open-enders allow the respondent to give a full explanation without being constrained by pre-defined and pre-conceived codes and structures. With the use of modern text analytics tools these comments can be analyzed and classified with ease and greater accuracy as compared to previous manual processes.”

“…fixed answer options might be too narrow.  Product registration, satisfaction surveys and early product concept testing are the best candidates…”

“…allowing participants to comment on what's important to them”

 

#2 Avoiding Wrong Conclusions

“We code every single response, even on trackers [longitudinal data] where we have thousands of responses across 5 open-end questions… you can draw the wrong conclusion without open-ends. I've got lots of examples!”

“Essential - mitigate risk of (1) respondents misunderstanding questions and (2) analysts jumping to wrong conclusions and (3) allowing for learnings not included in closed-ended answer categories”

“Open ended if done correctly almost always generate more right results than closed ended.  Checking a box is cheap, but communicating an original thought is more valuable.”

 

#1 Unearthing Unknowns – What We Didn’t Know We Didn’t Know

“They can give rich, in-depth insights or raise awareness of unknown insights or concerns.”

“This info can prove valuable to the research in unexpected ways.”

“They are critical to capture the voice of the customer and provide a huge amount of insight that would otherwise be missed.”

“Extremely useful.  I design them to try and get to the unexpected reasons behind the closed-end data.”

“To capture thoughts and ideas, in their own words, the research may have missed.”

“It can give good complementary information. It can also give information about something the researcher missed in his other questions.”

“Highly useful. They allow the interviewee to offer unanticipated and often most valuable observations.”

 

Ps. Additional Reasons…

Although it didn’t make the top five, several researchers cited one other notable reason for valuing open-ended questions, summarized in the following comment:

“They provide the rich unaided insights that often are the most interesting to our clients”

 

Next Steps: How to Get Value from Open-Ended Questions

I think we’ve established that most researchers recognize the tremendous value of feedback from open-ended questions and the reasons why, but there’s more to be said on the subject.

Conducting good research takes knowledge and skill. I’ve spent the last decade working with unstructured data and will be among the first to admit that while the quality of tools to tackle this data have radically improved, understanding what kind of analysis to undertake, or how to better ask the questions are just as important as the technology.

Sadly, many researchers, and just about all text analytics firms I’ve run into, understand very little about these techniques for actually collecting better data.

Therefore I aim to devote at least one post, if not more, over the next few weeks to delving into some of the problems in working with unstructured data brought up by our researchers.

Stay tuned!

@TomHCAnderson

 

Ignoring Customer Comments: A Disturbing Trend

One-Third of Researchers Think Survey Ratings Are All They Need

You’d be hard-pressed to find anyone who doesn’t think customer feedback matters, but it seems an alarming number of researchers don’t believe they really need to hear what people have to say!

 


In fact, almost a third of market researchers we recently polled either don’t give consumers the opportunity to comment or flat out ignore their responses.

  • 30% of researchers report they do not include an option for customer comments in longitudinal customer experience trackers because they “don’t want to deal with the coding/analysis.” Slightly more (34%) admit the same for ad hoc surveys.
  • 42% of researchers also admit launching surveys that contain an option for customer comments with no intention of doing anything with the comments they receive.

Customer Comments Aren’t Necessary?


Part of the problem—as the first bullet indicates—is that coding/analysis of responses to open-ended questions has historically been a time-consuming and labor-intensive process. (Happily, this is no longer the case.)

But a more troubling issue, it seems, is a widespread lack of recognition for the value of unstructured customer feedback, especially compared to quantitative survey data.

  • Two in five (41%) researchers said actual voice-of-customer comments are of secondary importance to structured rating questions.
  • Of those who do read/analyze customer comments, 20% said it’s sufficient to just read/code a small subset of the comments rather than each and every one.

In short, we can conclude that many researchers omit or ignore customer comments because they believe they can get the same or better insights from quantitative ratings data.

This assumption is absolutely WRONG.

Misconception: Ratings Are Enough

I’ve posted on the serious problems with relying exclusively on quantitative data for insights before here.

But before I discovered text analytics, I used to be in the same camp as the researchers highlighted in our survey.

My first mistake was that I assumed I would always be able to frame the right questions and conceive of all possible relevant answers.

I also believed, naively, that respondents actually consider all questions equally and that the decimal-point differences in mean ratings from (frequently onerous) attribute batteries are meaningful, especially if we can apply a t-test and the 0.23% difference is deemed “significant” (even if only at a directional 80% confidence level).

Since then, I have found time and time again that nothing predicts actual customer behavior better than the comment data from a well-crafted open-end.

For a real world example, I invite you to have a look at the work we did with Jiffy Lube.

There are real dollars attached to what our customers can tell us if we let them use their own words. If you’re not letting them speak, your opportunity cost is probably much higher than you realize.

Thank you for your readership,

I look forward to your COMMENTS!

@TomHCAnderson

[PS. Over 200 marketing research professionals completed the survey in just the first week in field (statistics above), and the survey is still fielding here. What impressed me most so far was, ironically, the quality and thoughtfulness of the responses to the two open-ended questions. I will be doing initial analysis and reporting here on the blog over the next few days, so come back soon for part II and maybe even part III of the analysis of this very short but interesting survey of research professionals.]

Customer Satisfaction: What do satisfied vs. dissatisfied customers talk about?

Text Analytics Tips - Branding: What do satisfied versus dissatisfied customers talk about? - Group Comparison Example. Text Analytics Tips by Gosia

In this post we are going to discuss one of the first questions most researchers tend to explore using OdinText: what do satisfied versus dissatisfied customers talk about? Many market researchers not only seek to find out what the entire population of their survey respondents mentions but it is even more critical for them to understand the strengths mentioned by customers who are happy and the problems mentioned by those who are less happy with the product or service.

To perform this kind of analysis you need to first identify “satisfied” and “dissatisfied” customers in your data. The best way to do it is based on a satisfaction or satisfaction-related metric, e.g., Overall Satisfaction or NPS (Net Promoter Score) Rating (i.e., likelihood to recommend). In this example, satisfied customers are going to be those who answered 4 – “Somewhat satisfied” or 5 – “Very satisfied” to the Overall Satisfaction question (scale 1-5). And dissatisfied customers are those who answered 1 – “Very dissatisfied” or 2 – “Somewhat dissatisfied”.

Next, you can compare the content of the comments provided by the two groups of customers (Group Comparison tab). I suggest you first select the frequency-of-occurrence statistic for your comparison. You can use a dictionary, or create your own issues that are meaningful to you and see whether the two groups of customers discuss these issues with different frequency, or you can look at differences in the frequency of the most commonly mentioned automatic terms (which OdinText generates automatically for you).

Figure 1. Frequency of issues mentioned by satisfied (Overall Satisfaction 4-5) versus dissatisfied (Overall Satisfaction 1-2) customers. Descending order of frequency for satisfied customers.

In the attached figure you can see a chart based on a simple group comparison using a dictionary of terms of a sample service company. There you go, lots of exciting insights to present to your colleagues based on a very quick analysis!
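If you want to reproduce the underlying numbers outside OdinText, the split and the frequency comparison look roughly like this. A stdlib Python sketch with invented data; OdinText’s Group Comparison tab does all of this for you:

```python
from collections import Counter

# Toy records: (overall satisfaction 1-5, open-end comment)
responses = [
    (5, "friendly staff and quick service"),
    (4, "quick oil change and nice waiting room"),
    (1, "rude staff and long wait"),
    (2, "long wait and unprofessional service"),
]

# Split on the satisfaction metric: 4-5 satisfied, 1-2 dissatisfied
satisfied    = [c for s, c in responses if s >= 4]
dissatisfied = [c for s, c in responses if s <= 2]

def term_freq(comments):
    """Share of comments in the group that mention each term."""
    counts = Counter(t for c in comments for t in set(c.split()))
    return {t: n / len(comments) for t, n in counts.items()}

sat_f, dis_f = term_freq(satisfied), term_freq(dissatisfied)
for term in ("quick", "long"):
    print(term, sat_f.get(term, 0), dis_f.get(term, 0))
```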

Gosia

Text Analytics Tips with Gosia

[NOTE: Gosia is a Data Scientist at OdinText Inc. Experienced in text mining and predictive analytics, she is a Ph.D. with extensive research experience in mass media’s influence on cognition, emotions, and behavior.  Please feel free to request additional information or an OdinText demo here.]

Preventing Customer Churn with Text Analytics

3 Ways You Can Improve Your Lost Customer Analysis

Preventing Customer Churn with Text Analytics

Lapsed Customers, Customer Churn, Customer Attrition, Customer Defection, Lost Customers, Non-Renewals — whatever you call them, this kind of customer research is becoming more relevant everywhere, and we are seeing more and more companies turning to text analytics to better answer how to retain more customers longer.  Why are they turning to text analytics? Because no structured survey data predicts customer behavior as well as actual voice-of-customer text comments!

Today’s post will highlight 3 mistakes we often see being made in this kind of research.

1. Most customer loss/churn analysis is done on the customers who leave, in isolation from the customers who stay. Understandable, since it would make little sense to ask a customer who is still with you a survey question such as “Why have you stopped buying from us?” But customer churn analysis can be much more powerful if you are able to compare customers who are still with you to those who have left. There are a couple of ways to do this:

  • Whether or not you conduct a separate lapsed customer survey among those who are no longer purchasing, also consider doing a separate post-hoc analysis of your customer satisfaction survey data. It doesn’t have to be current. Just take a time period of say the last 6-9 months and analyze the comment data from those customers who have left VS those who are still with you. What did the two groups say differently just before the lapsed customers left? Can these results be used to predict who is likely to churn ahead of time? The answer is very likely yes, and in many cases you can do something about it!
  • Whenever possible text questions should be asked of all customers, not just a subgroup such as the leavers. Here sampling as well as how you ask the questions both come into play.
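One simple way to operationalize the first bullet is a “lift” check: for each term in the satisfaction-survey comments, compare the churn rate among customers who mentioned it to the overall churn rate. A hypothetical stdlib sketch with toy data (the field names and numbers are mine):

```python
from collections import Counter

# Toy data: (comment from the satisfaction survey, did the customer later churn?)
history = [
    ("delivery was late again", True),
    ("driver never showed up", True),
    ("great service as always", False),
    ("love the desserts, always on time", False),
]

base_rate = sum(ch for _, ch in history) / len(history)

mentions, churned = Counter(), Counter()
for comment, ch in history:
    for term in set(comment.split()):
        mentions[term] += 1
        churned[term] += ch

# Lift > 1 flags terms that precede churn more often than average
lift = {t: (churned[t] / mentions[t]) / base_rate for t in mentions}
print(lift["late"], lift["always"])
```

Terms with high lift in past data become early-warning signals you can watch for in current customers’ comments.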

Consider expanding your sampling frame to include not just customers who are no longer purchasing from you, but also customers who are still purchasing (especially those who are purchasing more) as well as those who are still purchasing, but purchasing less. What you really want to understand, after all, is what is driving purchasing – who gives a damn if they claim they are more or less likely to recommend you – promoter and detractor analysis is overhyped!

Reducing Customer Churn

You may also consider casting an even wider sampling net than just past and current customers. Why not use a panel sample provider and try to include some competitors’ customers as well? You will need to draw the line somewhere for scope and budget, but you get the idea. The survey should be short and concise and should have the text questions up front, starting very broad (top of mind, unaided) and then probing.

Begin with a question such as “Q. How, if at all, has your purchasing of Category X changed over the last couple of months?” and/or “Q. You indicated your purchasing of Category X has changed. Why? (Please be as specific as possible)”. Or perhaps even better, “Q. How, if at all, has your purchasing of Category X changed over the past couple of months? If it has not changed, please also explain why it hasn’t. (Please be as specific as possible)”. As you can see, almost anyone can answer these questions no matter how much or little they have purchased. This is exactly what is needed for predictive text analytics! Having only leavers’ data will be insufficient!

2. Include other structured (real behavior) data in the analysis. Some researchers analyze their survey data in isolation. Mixed data usually adds predictive power, especially if it’s real behavior data from your CRM database and not just stated/recalled behavior from your survey. In either case, the key to unlocking meaning and predictability is likely to come from the unstructured comment data. Nothing else does a better job of explaining what happened.

3. PLEASE, PLEASE resist the urge to start your leaver survey with a structured question asking a battery of “check all that apply” reasons for leaving/shopping less. Your various pre-defined reasons, even if you include an “Other Specify_____”, will have several negative effects on your data quality.

First, customers will often forget their primary reason for their change in purchase frequency; they will assume, incorrectly, that you are most interested in the reasons you have pre-identified. Second, there will be no way for you to tell which of the several reasons they check is truly the most important to them. Third, some customers will repeat themselves in the “Other Specify” field, while others will decide not to answer it at all since they already checked so many of your boxes. Either way, you’ve just destroyed the best chance you had of accurately understanding why your customers’ purchasing has changed!

There are many other ways to improve your insights in lapsed-customer survey research by asking fewer but better comment questions in the right order. I hope the tips above have given you some things to consider. We're happy to share additional tips if you like, and we often find that as customers begin using OdinText, their use of survey data, both structured and unstructured, improves greatly, along with their understanding of their customers.

@TomHCanderson

Look Who’s Talking, Part 1: Who Are the Most Frequently Mentioned Research Panels?

Survey Takers Average Two Panel Memberships and Name Names

Who exactly is taking your survey?

It’s an important question beyond the obvious reasons and odds are your screener isn’t providing all of the answers.

Today’s blog post will be the first in a series previewing some key findings from a new study exploring the characteristics of survey research panelists.

The study was designed and conducted by Kerry Hecht, Director of Research at Ramius. OdinText was enlisted to analyze the text responses to the open-ended questions in the survey.

Today I’ll be sharing an OdinText analysis of results from one simple but important question: Which research companies are you signed up with?

Note: The full findings of this rather elaborate study will be released in June in a special workshop at IIEX North America (Insight Innovation Exchange) in Atlanta, GA. The workshop will be led by Kerry Hecht, Jessica Broome and yours truly. For more information, click here.

About the Data

The dataset we’ve used OdinText to analyze today is a survey of research panel members with just over 1,500 completes.

The sample was sourced in three equal parts from leading research panel providers Critical Mix and Schlesinger Associates and from the third-party loyalty rewards site Swagbucks.

The study’s author opted to use an open-ended question (“Which research companies are you signed up with?”) instead of a “select all that apply” variation for a couple of reasons, not least because the latter would have needed to list more than a thousand possible panel choices.

Only those panels that were mentioned by at least five respondents (0.3%) were included in the analysis. As it turned out, respondents identified more than 50 panels by name.
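A simple way to tabulate an open-end question like this is to match known panel names against each lowercased response and keep only panels that clear a minimum-mention cutoff (five respondents in the study above). The sketch below is illustrative only; the responses and the tiny panel lookup are hypothetical, and the real study matched against far more names.

```python
from collections import Counter

# Hypothetical open-end answers to "Which research companies are you signed up with?"
responses = [
    "Swagbucks and Critical Mix",
    "swagbucks",
    "Schlesinger, Swagbucks",
    "Critical Mix",
    "I don't remember",
]

# Small illustrative lookup; the real analysis matched 1,000+ panel names.
known_panels = ["swagbucks", "critical mix", "schlesinger"]

def count_panel_mentions(responses, panels, min_mentions=1):
    """Count how many responses mention each panel, dropping rare mentions."""
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        for panel in panels:
            if panel in lowered:
                counts[panel] += 1
    return {p: n for p, n in counts.items() if n >= min_mentions}

# With a cutoff of 2, panels named by only one respondent drop out
counts = count_panel_mentions(responses, known_panels, min_mentions=2)
```

Substring matching like this is deliberately naive (it would conflate panels with overlapping names); a production text-analytics tool would use more careful entity matching.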

How Many Panels Does the Average Panelist Belong To?

The overwhelming majority of respondents—approx. 80%—indicated they belong to only one or two panels. (The average number of panels mentioned among those who could recall specific panel names was 2.3.)

Fewer than 2% told us they were members of 10 or more panels.

Finally, a small handful of respondents told us they were members of 20 or more panels; others could not recall the name of a single panel when asked, and some declined to answer the question.

Naming Names…Here’s Who

Caption: To see the data more closely, please click this screenshot for an Excel file. 

Figure 1 shows the 50 panel companies most frequently mentioned by respondents in this survey.

It is interesting to note that even though every respondent was signed up with at least one of the three companies from which we sourced the sample, a third of respondents failed to name that company.

Who Else? Average Number of Other Panels Mentioned

Caption: To see the data more closely, please click this screenshot for an Excel file.

As expected, and again taking into account that the sample comes from just the three firms mentioned earlier, larger panels are more likely than smaller, niche panels to contain respondents who belong to other panels (Figure 2).

Panel Overlap/Correlation

Finally, we correlate the mentions of panels (Figure 3) and see that while there is some overlap everywhere, it looks to be relatively evenly distributed.

Caption: To see the data more closely, please click this screenshot for an Excel file.

In the few cases where correlation is higher, it may be that these panels tend to recruit in the same places online or that there is a relationship between the companies.
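The overlap analysis above amounts to correlating 0/1 membership indicators across respondents; on binary vectors, the ordinary Pearson correlation is the phi coefficient. Here is a minimal stdlib sketch with made-up membership vectors (not the study's data):

```python
from math import sqrt

# Hypothetical 0/1 membership indicators for two panels, one entry per respondent
panel_a = [1, 1, 0, 1, 0, 1, 0, 0]
panel_b = [1, 0, 0, 1, 0, 1, 1, 0]

def pearson(x, y):
    """Pearson correlation; on binary vectors this equals the phi coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(panel_a, panel_b)  # positive r means shared membership beyond chance
```

Computing this for every pair of panels yields a correlation matrix like the one summarized in Figure 3.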

What’s Next?

Again, all of the data provided above are the result of analyzing just a single, short open-ended question using OdinText.

In subsequent posts, we will look into what motivates these panelists to participate in research, as well as what they like and don’t like about the research process. We’ll also look more closely at demographics and psychographics.

You can also look forward to deeper insights from a qualitative leg provided by Kerry Hecht and her team in the workshop at IIEX in June.


Thank you for your readership. As always, I encourage your feedback and look forward to your comments!

@TomHCanderson @OdinText

Tom H.C. Anderson

PS. Just a reminder that OdinText is participating in the IIEX 2016 Insight Innovation Competition!

Voting ends Today! Please visit MAKE DATA ACCESSIBLE and VOTE OdinText!

 

[If you would like to attend IIEX feel free to use our Speaker discount code ODINTEXT]

To learn more about how OdinText can help you understand what really matters to your customers and predict actual behavior,  please contact us or request a Free Demo here >

[NOTE: Tom H. C. Anderson is Founder of Next Generation Text Analytics software firm OdinText Inc. Click here for more Text Analytics Tips ]

 

Support OdinText - Make Data Science Accessible!

Take 7 Seconds to Support the OdinText Mission: Help Make Data Science Accessible!

I’m excited to announce that OdinText will participate in the IIEX2016 Insight Innovation Competition!

The competition celebrates innovation in market research and provides a platform for young companies and startups to showcase truly novel products and services with the potential to transform the consumer insights field.

Marketing and research are becoming increasingly complex, and the skills needed to thrive in this environment have changed.

To that end, OdinText was designed to make advanced data analytics and data science accessible to marketers and researchers.

Help us in that mission. It only takes 7 seconds.

Please visit http://www.iicompetition.org/idea/view/387 and cast a ballot for OdinText!

You can view and/or vote for the other great companies here if you like.

Thank you for your consideration and support!

Tom

Tom H. C. Anderson Founder - OdinText Inc. www.odintext.com Info/Demo Request

ABOUT ODINTEXT OdinText is a patented SaaS (software-as-a-service) platform for natural language processing and advanced text analysis. Fortune 500 companies such as Disney and Coca-Cola use OdinText to mine insights from complex, unstructured text data. The technology is available through the venture-backed Stamford, CT firm of the same name founded by CEO Tom H. C. Anderson, a recognized authority and pioneer in the field of text analytics with more than two decades of experience in market research. The company is the recipient of numerous awards for innovation from industry associations such as ESOMAR, CASRO, the ARF and the American Marketing Association. Anderson tweets under the handle @tomhcanderson.

 

Beyond Sentiment - What Are Emotions, and Why Are They Useful to Analyze?
Text Analytics Tips - Branding


Text Analytics Tips by Gosia

Emotions - Revealing What Really Matters

Emotions are short-term intensive and subjective feelings directed at something or someone (e.g., fear, joy, sadness). They are different from moods, which last longer, but can be based on the same general feelings of fear, joy, or sadness.

3 Components of Emotion: Emotions result from arousal of the nervous system and consist of three components: subjective feeling (e.g., being scared), physiological response (e.g., a pounding heart), and behavioral response (e.g., screaming). Understanding human emotions is key in any area of research because emotions are one of the primary causes of behavior.

Moreover, emotions tend to reveal what really matters to people. Therefore, tracking primary emotions conveyed in text can have powerful marketing implications.

The Emotion Wheel - 8 Primary Emotions

OdinText can analyze virtually any psychological content in text, but particular attention has been paid to the power of the emotions conveyed in text.

8 Primary Emotions: OdinText tracks the following eight primary emotions: joy, trust, fear, surprise, sadness, disgust, anger, and anticipation (see attached figure; primary emotions in bold).

Sentiment Analysis


Bipolar Nature: These primary emotions have a bipolar nature; joy is opposed to sadness, trust to disgust, fear to anger, and surprise to anticipation. Emotions in the blank spaces are mixtures of the two neighboring primary emotions.

Intensity: The color-intensity dimension suggests that each primary emotion can vary in intensity, with darker hues representing a stronger emotion (e.g., terror > fear) and lighter hues representing a weaker emotion (e.g., apprehension < fear). The analogy between the theory of emotions and the theory of color is adopted from the seminal work of Robert Plutchik in the 1980s. [All 32 emotions presented in the figure above are the basis for OdinText’s Emotional Sentiment tracking metric.]
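The structure described above (four bipolar pairs of primary emotions, each with a weaker and a stronger variant) can be encoded as a simple lookup table. This is an illustrative sketch of Plutchik's wheel, not OdinText's internal representation; the variant names follow Plutchik's standard terminology.

```python
# Plutchik's wheel: each primary emotion, its bipolar opposite, and its
# lighter-hue (weaker) and darker-hue (stronger) intensity variants.
PLUTCHIK = {
    "joy":          {"opposite": "sadness",      "weaker": "serenity",     "stronger": "ecstasy"},
    "trust":        {"opposite": "disgust",      "weaker": "acceptance",   "stronger": "admiration"},
    "fear":         {"opposite": "anger",        "weaker": "apprehension", "stronger": "terror"},
    "surprise":     {"opposite": "anticipation", "weaker": "distraction",  "stronger": "amazement"},
    "sadness":      {"opposite": "joy",          "weaker": "pensiveness",  "stronger": "grief"},
    "disgust":      {"opposite": "trust",        "weaker": "boredom",      "stronger": "loathing"},
    "anger":        {"opposite": "fear",         "weaker": "annoyance",    "stronger": "rage"},
    "anticipation": {"opposite": "surprise",     "weaker": "interest",     "stronger": "vigilance"},
}

def opposite(emotion):
    """Return the bipolar opposite of a primary emotion."""
    return PLUTCHIK[emotion]["opposite"]
```

Because the pairs are bipolar, the `opposite` relation is symmetric: the opposite of an emotion's opposite is the emotion itself.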

Stay tuned for more tips giving details on each of the above emotions.

Gosia

Text Analytics Tips with Gosia


[NOTE: Gosia is a Data Scientist at OdinText Inc. Experienced in text mining and predictive analytics, she holds a Ph.D. and has extensive research experience in mass media’s influence on cognition, emotions, and behavior.]