Five Reasons to NEVER Design a Survey without a Comment Field

Marketing Research Confessions Part II - Researchers Say Open-Ends Are Critical!

My last post focused on the alarmingly high number of marketing researchers (~30%) who, as a matter of policy, either do not include a section for respondent comments (a.k.a. “open-ended” questions) in their surveys or field surveys with a comment section but discard the responses.

The good news is that most researchers do, in fact, understand and appreciate the value of comment data from open-ended questions.

Indeed, many say feedback in consumers’ own words is indispensable.

Among researchers we recently polled:

  • 70% would NEVER launch a tracker survey, and 66% would never launch even an ad hoc survey, without a comment field
  • 80% DO NOT agree that analyzing only a subset of the comment data is sufficient
  • 59% say comment data is AT LEAST as important as the numeric ratings data (and many say it is the most important data in the survey)
  • 58% ALWAYS allocate time to analyze comment data after fielding

In Their Own Words: “Essential”

In contrast to the flippancy we saw in comments from those who don’t see any need for open-ended survey questions, researchers who value open-ends felt pretty strongly about them.

Consider these two verbatim responses, which encapsulate the general sentiment expressed by researchers in our survey:

“Absolutely ESSENTIAL. Without [customer comments] you can easily draw the wrong conclusion from the overall survey.”

“Open-ended questions are essential. There is no easy shortcut to getting at the nuanced answers and ‘ah-ha!’ findings present in written text.”

As it happens, respondents to our survey provided plenty of detailed and thoughtful responses to our open-ended questions.

We, of course, ran these responses through OdinText, and our analysis identified five common reasons researchers believe comment data from open-ended questions is critically important.

So here’s why, in researchers’ own words, ranked in ascending order by number of mentions:

 Top Five Reasons to Always Include an Open-End

 

#5 Proxy for Quality & Fraud

“They are essential in sussing out fraud—in quality control.”

“For data quality to determine satisficing and fraudulent behavior”

“…to verify a reasonable level of engagement in the survey…”

 

#4 Understand the ‘Why’ Behind the Numbers

“Very beneficial when trying to identify cause and effect”

“Open ends are key to understand the meaning of all the other answers. They provide context, motivations, details. Market Research cannot survive without open ends”

“Extremely useful to understand what is truly driving decisions. In closed-end questions people tend to agree with statements that seem a reasonable, logical answer, even if they have not considered them before at all.”

“It's so critical for me to understand WHY people choose the hard codes, or why they behave the way the big data says they behave. Inferences from quant data only get you so far - you need to hear it from the horse’s mouth...AT SCALE!”

“OEs are windows into the consumer thought process, and I find them invaluable in providing meaning when interpreting the closed-ended responses.”

 

#3 Freedom from Quant Limitations

“They allow respondents more freedom to answer a question how they want to—not limited to a list that might or might not be relevant.”

“Extremely important to gather data the respondent wants to convey but cannot in the limited context of closed ends.”

“Open-enders allow the respondent to give a full explanation without being constrained by pre-defined and pre-conceived codes and structures. With the use of modern text analytics tools these comments can be analyzed and classified with ease and greater accuracy as compared to previous manual processes.”

“…fixed answer options might be too narrow.  Product registration, satisfaction surveys and early product concept testing are the best candidates…”

“…allowing participants to comment on what’s important to them”

 

#2 Avoiding Wrong Conclusions

“We code every single response, even on trackers [longitudinal data] where we have thousands of responses across 5 open-end questions… you can draw the wrong conclusion without open-ends. I've got lots of examples!”

“Essential - mitigate risk of (1) respondents misunderstanding questions and (2) analysts jumping to wrong conclusions and (3) allowing for learnings not included in closed-ended answer categories”

“Open ended if done correctly almost always generate more right results than closed ended.  Checking a box is cheap, but communicating an original thought is more valuable.”

 

#1 Unearthing Unknowns – What We Didn’t Know We Didn’t Know

“They can give rich, in-depth insights or raise awareness of unknown insights or concerns.”

“This info can prove valuable to the research in unexpected ways.”

“They are critical to capture the voice of the customer and provide a huge amount of insight that would otherwise be missed.”

“Extremely useful.  I design them to try and get to the unexpected reasons behind the closed-end data.”

“To capture thoughts and ideas, in their own words, the research may have missed.”

“It can give good complementary information. It can also give information about something the researcher missed in his other questions.”

“Highly useful. They allow the interviewee to offer unanticipated and often most valuable observations.”

 

P.S. Additional Reasons…

Although it didn’t make the top five, several researchers cited one other notable reason for valuing open-ended questions, summarized in the following comment:

“They provide the rich unaided insights that often are the most interesting to our clients.”

 

Next Steps: How to Get Value from Open-Ended Questions

I think we’ve established that most researchers recognize the tremendous value of feedback from open-ended questions and the reasons why, but there’s more to be said on the subject.

Conducting good research takes knowledge and skill. I’ve spent the last decade working with unstructured data, and I’ll be among the first to admit that while the quality of the tools for tackling this data has radically improved, understanding what kind of analysis to undertake, and how to ask better questions, is just as important as the technology.

Sadly, many researchers, and just about all of the text analytics firms I’ve run into, understand very little about these techniques for actually collecting better data.

Therefore, over the next few weeks I aim to devote at least one post, and probably more, to some of the problems in working with unstructured data that our researchers brought up.

Stay tuned!

@TomHCAnderson

 

Ignoring Customer Comments: A Disturbing Trend

One-Third of Researchers Think Survey Ratings Are All They Need

You’d be hard-pressed to find anyone who doesn’t think customer feedback matters, but it seems an alarming number of researchers don’t believe they really need to hear what people have to say!

 


In fact, almost a third of market researchers we recently polled either don’t give consumers the opportunity to comment or flat out ignore their responses.

  • 30% of researchers report they do not include an option for customer comments in longitudinal customer experience trackers because they “don’t want to deal with the coding/analysis.” Almost as many (34%) admit the same for ad hoc surveys.
  • 42% of researchers also admit launching surveys that contain an option for customer comments with no intention of doing anything with the comments they receive.

Customer Comments Aren’t Necessary?

[Chart: 2 in 5 researchers: “It is sufficient to analyze only a small subset of my customers’ comments”]

Part of the problem—as the first bullet indicates—is that coding/analysis of responses to open-ended questions has historically been a time-consuming and labor-intensive process. (Happily, this is no longer the case.)

But a more troubling issue, it seems, is a widespread lack of recognition for the value of unstructured customer feedback, especially compared to quantitative survey data.

  • Four in ten (41%) researchers said actual voice-of-customer comments are of secondary importance to structured rating questions.
  • Of those who do read/analyze customer comments, 20% said it’s sufficient to read/code just a small subset of the comments rather than each and every one.

In short, we can conclude that many researchers omit or ignore customer comments because they believe they can get the same or better insights from quantitative ratings data.

This assumption is absolutely WRONG.

Misconception: Ratings Are Enough

I’ve posted here before about the serious problems with relying exclusively on quantitative data for insights.

But before I discovered text analytics, I used to be in the same camp as the researchers highlighted in our survey.

My first mistake was that I assumed I would always be able to frame the right questions and conceive of all possible relevant answers.

I also believed, naively, that respondents actually consider all questions equally and that the decimal-point differences in mean ratings from (frequently onerous) attribute batteries are meaningful, especially if we can apply a t-test and a 0.23% difference is deemed “significant” (even if only at a directional 80% confidence level).
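
To see why this was naive, here is a minimal sketch with hypothetical numbers (Python with NumPy/SciPy) of how a trivially small gap in mean ratings comes out “statistically significant” once samples get large enough:

```python
# A minimal sketch with hypothetical numbers: a trivially small difference
# in mean ratings becomes "statistically significant" once n is large.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 500_000  # pooled tracker-sized sample per group (hypothetical)

# True means differ by only 0.02 points on a 10-point rating scale.
brand_a = rng.normal(loc=7.50, scale=1.5, size=n)
brand_b = rng.normal(loc=7.52, scale=1.5, size=n)

t_stat, p_value = stats.ttest_ind(brand_a, brand_b)
print(f"Observed gap: {brand_b.mean() - brand_a.mean():.3f}")
print(f"p-value: {p_value:.2g}")  # tiny p-value, yet the gap is practically meaningless
```

Statistical significance here says nothing about practical significance, which is exactly the trap the comment data helps you avoid.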

Since then, I have found time and time again that nothing predicts actual customer behavior better than the comment data from a well-crafted open-end.

For a real world example, I invite you to have a look at the work we did with Jiffy Lube.

There are real dollars attached to what our customers can tell us if we let them use their own words. If you’re not letting them speak, your opportunity cost is probably much higher than you realize.

Thank you for your readership,

I look forward to your COMMENTS!

@TomHCAnderson

[P.S. Over 200 marketing research professionals completed the survey in just the first week in field (the statistics above), and the survey is still fielding here. Ironically, what impressed me most so far was the quality and thoughtfulness of the responses to the two open-ended questions. I will be doing initial analysis and reporting here on the blog over the next few days, so come back soon for Part II, and maybe even a Part III, of the analysis of this very short but interesting survey of research professionals.]

Code by Hand? The Benefits of Automated and User-Guided Automated Customer Comment Coding

Text Analytics Tips by Gosia: Why you should not code text data by hand, and the benefits of automated and user-guided automated coding

Most researchers know very well that coding text data manually (using human coders who read the text and assign codes) is very expensive, both in the time coders need and in the money needed to compensate them for that effort.

However, the major advantage of human coding is coders’ ability to understand the complex meaning of text, including sarcasm and jokes.

Usually at least two coders are required to code any type of text data, and calculating inter-rater reliability (inter-rater agreement) is a must. This statistic shows how similarly the coders have coded the data, i.e., how often they agreed on using the exact same codes.
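
As an illustration, here is a minimal sketch of such an agreement check in Python, using scikit-learn’s cohen_kappa_score; the coders, comments, and codes are hypothetical:

```python
# A minimal sketch of an inter-rater agreement check, assuming two coders
# each assigned one code per comment. The codes are hypothetical.
from sklearn.metrics import cohen_kappa_score

coder_a = ["price", "service", "price", "quality", "service", "price"]
coder_b = ["price", "service", "quality", "quality", "service", "price"]

# Raw percent agreement: how often the two coders chose the same code.
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

# Cohen's kappa corrects raw agreement for agreement expected by chance.
kappa = cohen_kappa_score(coder_a, coder_b)

print(f"Percent agreement: {agreement:.2f}")  # 0.83
print(f"Cohen's kappa:     {kappa:.2f}")      # 0.75
```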

Often, even with the simplest codes, the accuracy of human coding is low. No two human coders consistently code larger amounts of data the same way, because of differing interpretations of the text or simply due to error. The latter is why even a single coder will not code the same text data identically a second time (in theory, perfect reliability for a single coder could be achieved, e.g., on very small datasets that can be proofread multiple times).

Another limitation is that human coders can keep only a limited number of codes in working memory while reading the text. Finally, any change to the codes requires repeating the entire coding process from the beginning. Because manual coding of larger datasets is expensive and unreliable, automated coding using computer software was introduced.

Automated or algorithm-based text coding solves many of the issues of human coding:

  1. It is fast (thousands of text comments can be read in seconds).
  2. It is cost-effective (automated coding should always be cheaper than human coding because it requires much less time).
  3. It offers perfect consistency (the same rules are applied every time, without errors).
  4. It supports an unlimited number of codes, at least in theory (some software might have limitations).
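
To make algorithm-based coding concrete, here is a minimal sketch of dictionary-based coding in Python. The codes and keywords are hypothetical, and real tools use far richer rules, but the principle is the same: identical rules applied to every comment, every time.

```python
# A minimal sketch of rule-based automated coding with a keyword dictionary.
# The codes and keywords are hypothetical; real tools use far richer rules.
CODE_DICTIONARY = {
    "price":   ["price", "expensive", "cheap", "cost"],
    "service": ["staff", "service", "helpful", "rude"],
    "speed":   ["fast", "slow", "wait", "quick"],
}

def code_comment(comment: str) -> list[str]:
    """Return every code whose keywords appear in the comment."""
    text = comment.lower()
    return [code for code, keywords in CODE_DICTIONARY.items()
            if any(kw in text for kw in keywords)]

comments = [
    "The staff was helpful but the wait was slow.",
    "Too expensive for what you get.",
]
for c in comments:
    print(code_comment(c), "-", c)
# ['service', 'speed'] - The staff was helpful but the wait was slow.
# ['price'] - Too expensive for what you get.
```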

However, this process also has disadvantages. As mentioned above, only humans can fully understand the complex meaning of text, and simple algorithms are likely to fail at this (although some newer algorithms under development come close to human performance). Moreover, most software available on the market offers little flexibility, as codes cannot be viewed or changed by the user.

Figure 1. Comparison of OdinText with “human coding” and “automated coding” approaches.

Therefore, OdinText’s developers decided to let users guide the automated coding. Users can view and edit the default codes and dictionaries, create and upload their own, or build custom dictionaries based on the exploratory results of the automated analysis. The codes can be very complex and specific, producing a good understanding of the meaning of the text, which is the key goal of any text analytics software.
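
Conceptually, the workflow might look like the following sketch (illustrative Python only; it does not represent OdinText’s actual interface, and all data and names are hypothetical): an exploratory pass suggests candidate codes, which the user then curates before the final automated pass.

```python
# A conceptual, hypothetical sketch of user-guided coding (illustrative
# Python only; it does not represent OdinText's actual interface).
from collections import Counter

comments = [
    "Checkout was slow and the site kept crashing.",
    "Love the new app, checkout is quick now.",
    "The site crashed twice during checkout.",
]

# Step 1: an exploratory pass suggests frequent terms as candidate codes.
terms = Counter(w.strip(".,").lower() for c in comments for w in c.split())
candidates = [t for t, n in terms.items() if n > 1 and len(t) > 3]
print("Candidate codes:", candidates)  # ['checkout', 'site']

# Step 2: the user curates the dictionary, merging variants into one code.
user_dictionary = {
    "checkout":  ["checkout"],
    "stability": ["crash", "crashing", "crashed"],  # user-defined complex code
}

# Step 3: the final automated pass applies the curated rules consistently.
for c in comments:
    codes = [k for k, kws in user_dictionary.items()
             if any(kw in c.lower() for kw in kws)]
    print(codes, "-", c)
```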

OdinText is a user-guided automated text analytics solution that combines the benefits of fully automated and human coding. Like many other automated text analytics tools, it is fast, cost-effective, and accurate, and it allows an unlimited number of codes. However, OdinText surpasses other software by providing highly flexible, customizable codes/dictionaries, and thus a better understanding of the meaning of the text. Moreover, OdinText lets you conduct statistical analyses and create visualizations of your data in the same software.

Try switching from human coding to user-guided automated coding and you will be pleasantly surprised how easy and powerful it is!

Gosia

Text Analytics Tips with Gosia

[Gosia is a Data Scientist at OdinText Inc. Experienced in text mining and predictive analytics, she is a Ph.D. with extensive research experience in mass media’s influence on cognition, emotions, and behavior.  Please feel free to request additional information or an OdinText demo here.]

[NOTE: OdinText is NOT a tool for human-assisted coding. It is a tool used by analysts for better and faster insights from mixed (structured and unstructured) data.]