Celebrating Innovative Companies in Marketing Research
It's that time of year again when Greenbook fields its biannual GRIT market research industry survey.
Thankfully it looks like the Greenbook team has made the survey a bit shorter than last year. I do encourage fellow researchers to take the survey, as it does give everyone some direction in terms of where things seem to be heading.
P.S. This is the GRIT survey that looks for the most innovative insights companies, both supplier side and client side. We encourage you to give some thought to this section as well. It's nice to recognize up-and-coming companies, as well as your go-to favorites.
I also want to take this time to thank everyone who voted for OdinText in the most innovative supplier category last year. We were very encouraged by the support and have been working harder than ever to release a brand-new version of the software next month!
How Your Customers Speak - OdinText Indexes Top Slang and Buzz Words for 2018
Understanding how your key customer demographic communicates about your category and your product is key to learning how to communicate with them most effectively.
One of the posts I’ve come to enjoy most, yet also find the most difficult to write, is our annual list of slang words. OdinText has been indexing unusual/new/slang terms for three years now, so we have a good understanding not just of which slang and buzz words are most popular, but also of how terms are moving up or down in popularity.
The interesting thing about all trends, including buzz words and especially slang, is change. Just because you think you know what a word means today doesn’t mean that same word won’t have a completely different meaning tomorrow. For this reason, even trusted sources such as Urban Dictionary can fail you, because they list the most popular definition first, not the most recent one. To understand slang you really have to understand movement. If a slang word has been in decline for a while and then picks up in usage, it may well be that there’s a new meaning for it.
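As a rough illustration of this movement idea, a sketch like the following flags a term whose usage declined for several months and then rebounded, a possible sign of a new meaning. The function, its thresholds, and the counts are all invented for illustration; this is not OdinText's actual method or data.

```python
# Hypothetical sketch: flag slang terms whose usage declined and then
# rebounded, which can signal that a term has taken on a new meaning.
# Counts below are illustrative, not actual index data.

def meaning_shift_candidate(monthly_counts, decline_months=3, rebound_ratio=1.5):
    """Return True if usage fell for `decline_months` straight months
    and the latest month rebounded by `rebound_ratio` over the trough."""
    if len(monthly_counts) < decline_months + 2:
        return False
    # the run of months ending at the trough, just before the latest month
    window = monthly_counts[-(decline_months + 2):-1]
    declining = all(a > b for a, b in zip(window, window[1:]))
    trough = window[-1]
    rebounded = trough > 0 and monthly_counts[-1] >= rebound_ratio * trough
    return declining and rebounded

# steady decline, then a sharp pickup -> possible new meaning
print(meaning_shift_candidate([90, 70, 55, 40, 85]))   # True
# steady decline with no rebound -> no flag
print(meaning_shift_candidate([90, 70, 55, 40, 38]))   # False
```

In practice a flag like this would just be a prompt to go look at recent usage in context, not a verdict.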
Understanding what a new slang word means is often more difficult than it sounds. Context in any single comment is often not enough, and various comments may use the same term very differently. Nor is relying on any one source such as Urban Dictionary sufficient; far from it, we’ve found. Triangulating on the most current definition by considering multiple sources, including online videos/song lyrics, internet memes, and social media comments, and weighing the date of each, is often the best way to arrive at a more current definition. Often it’s the success of a certain artist, and how they use the word, that propels it. If you default to looking something up in Urban Dictionary, know that the #1 ranked most popular definition may well be quite dated and incorrect.
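The triangulation approach described above can be sketched as a recency-weighted vote across dated sightings from multiple sources. Everything here is hypothetical: the half-life weighting, the fixed analysis date, and the example sightings are assumptions for illustration, not how OdinText actually resolves definitions.

```python
from datetime import date

# Hypothetical sketch: triangulate the "current" definition of a slang
# term by weighting each sourced usage by how recent it is, rather than
# taking the all-time most popular definition (the Urban Dictionary trap).

def current_definition(sightings, half_life_days=180):
    """sightings: list of (definition, date) pairs gathered from lyrics,
    memes, social comments, etc. Recent sightings count more."""
    today = date(2018, 2, 1)  # assumed analysis date for this example
    scores = {}
    for definition, seen in sightings:
        age = (today - seen).days
        # each sighting's vote halves every `half_life_days`
        scores[definition] = scores.get(definition, 0.0) + 0.5 ** (age / half_life_days)
    return max(scores, key=scores.get)

sightings = [
    ("agreement",    date(2016, 5, 1)),
    ("agreement",    date(2016, 8, 1)),
    ("sarcastic no", date(2017, 11, 1)),
    ("sarcastic no", date(2018, 1, 15)),
]
print(current_definition(sightings))  # "sarcastic no"
```

Even though "agreement" has as many sightings, the two recent ones dominate, which matches the intuition that the newest well-attested usage wins.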
ABOUT THE INDEX: We define our slang/buzz word index as terms or phrases that have entered public awareness, usually not at the general population level, but often, though not always, in important youth or online subgroups. These terms occur in social and mainstream media and are often used by artists, youth, and sometimes in digital speak. This year we have started allowing a few more general and political terms into the index. You can think of the terms in the index as some of the most dynamic, in proportion of use/awareness and sometimes in the way they are defined, as many of them have multiple meanings and are in a state of ongoing flux. The majority of the words in our index can be classified as slang.
#10 Dog & Yeet
Dog. As with most slang, dog has multiple meanings; some of the better known are “a man who can’t commit to one woman”, “a close buddy”, and to “fornicate”, among others. A newer meaning, which we believe helped dog just make our top 10 this year, is that dog is becoming more general in use and may soon be more gender neutral, as female rapper Toni Romiti says: “If he a dog, I’m a Dog too!”
Yeet. Tied for 10th this year is a term we indexed and started tracking back in 2016, also in 10th place then. Its popularity was due to a new dance move and an internet video meme. You can check out our definition from last year here. But as with other slang, it tends to transform and take on multiple meanings, including being used simply as an expletive connoting excitement.
Bruh reached #10 back in 2015/2016 and has been holding steady, even gaining slightly. There are now some female or gender-neutral offshoot variants like Bra, promoted in part by advertising related to breast cancer (someone who supports you when you have breast cancer). The meaning of Bruh has been changing for some time, from a term of endearment (brother) to “Bruh?!” meaning “Oh no… why did you do that?!”
Bet moved from #18 to #8 this year. A jump like that usually has to do with new usage and/or inclusion in some popular lyrics or meme. From a simple term indicating agreement, e.g. “Want to go to the movies?” “Sure, Bet!”, bet shifted to simply mean “yes”, and then, ironically, the total opposite of agreement: doubt, sarcasm, or simply the opposite of what someone wants. “Yo, can you help me clean my room?” “Bet” (walks out the door). It has even come to be used as a sort of replacement for Yolo, but the newest and most popular meaning currently is the opposite of the older ones: a negative sign of disbelief, basically a sarcastic “No”.
Woke moved into 20th place about a year ago and has increased in popularity tremendously this year. Woke means being intellectually aware, on point, and in the know, but can have broader meaning as well: “After taking that class in feminism, he’s really woke to gender issues.”
In the past we’ve purposely kept political words out of these lists. Political terminology is more mainstream and behaves differently from youth slang. That said, these terms too enter the general vernacular seemingly from nowhere and become very popular, perhaps now more than ever. So this year we’ve decided to include a few of them; in 6th place (and 2nd) we have political words.
Snowflake is generally used to describe liberals who are overly sensitive and too easily offended by some general term or belief that doesn’t take into account that everyone is as individual as a snowflake, though it is sometimes used to describe anyone who is overly sensitive.
Unlike most of our slang terms that tend to skew more urban and lower income, this one actually skews slightly higher income and older, and is in fact often used to describe millennials or Gen Z.
Fleek did a good job of maintaining its position this year, but is still down from its high (our #2 word) at the end of 2015. Fleek skews female quite a bit and generally means “on point” (“on fleek”); it is often used when describing eyebrows.
Fetch climbed from 6th place at the end of 2015 and has held at 4th place this year. After “Bae”, it’s the second most female-skewed slang term we track. This term was popularized by the movie Mean Girls and means cool/chic.
Dope moved up three spots from last year. It is used a number of ways (see last year’s definition), including as a synonym for Lit: basically high in quality or mind-blowing.
#2 Fake News
Here’s our other political word in this year’s list. Again, it is unlike slang in a number of ways, including the fact that it skews older and higher income. Still, it hit 10th place on our new-term index last year even though we decided not to report political terms. As the name suggests, this term denotes political propaganda and false, unscientific information, which is becoming ever more prevalent online.
Holding at #1 this year, Lit is literally still “cool” for now. It was in 4th place back at the beginning of 2016; its position may be challenged by Dope or something else later this year.
Top 5 Gainers
Each year these trendier terms compete with each other; some enter our list temporarily and then go off to die, sometimes to be resurrected years later, while others become so mainstream that they enter our common lexicon and earn a place in the dictionary.
Here are the 5 terms which moved up the most over the past 12 months.
Mainstream media grabbed hold of this word this past year, and our indexing classified it as the biggest single mover in our index. I’m guessing you’re already well aware of this political term; if not, check out the Wikipedia definition here: https://en.wikipedia.org/wiki/Fake_news
Our second biggest upward mover this year was Snowflake. It will be interesting to see how this one does over the longer term.
You might say Bitcoin is a brand or product name, but this term, which is neither slang nor political, still made our list because of its fast movement from obscurity to global mainstream. I wouldn’t be totally surprised if the term is split, morphed, and transformed into some alternate meanings in the future.
Finally, our fourth biggest mover of the year is one of our slang terms. It’s interesting as slang terms go in its positive connotation of reaching a more enlightened level of cognition.
Fifth biggest mover this year was Bruh which continued to expand and change in meaning (see above) after being somewhat down last year.
Top 6 Losers
Just as there are winners, there are also losers. Here are the six that dropped the most this year.
Looks like ‘Fetch’ may have peaked. It was always a bit of an outlier among slang terms in that it was so closely tied to a single movie, ‘Mean Girls’, and, while obviously female oriented, it was also, unlike most slang, less ethnic and higher income.
Swag seemed to come and go last year. It just never gained the foothold needed for stickiness.
This somewhat odd female slang term with a definite online meme-ish component had its peak two years ago and has decreased in popularity by half each year since.
The meme-able dance move known as ‘dabbing’ seems to have spawned, and given way to, a term of similar origin, “Yeet”, which has taken on more meaning than dab. While the future looks dim for this term, its use in connection with marijuana could give it more life in the future.
Another big loser over the past couple of years is “One” meaning goodbye. It may be time to bid “One!” to ‘One’.
A somewhat surprising, sudden drop for Bae this year is accompanied by plenty of annoyance at those using the term, which refers to a "boyfriend" or "girlfriend".
Bonus Terms to look out for
Here are some fun new terms that crept onto our radar/index this year. They didn’t enter our top list, but we’ll continue to track their movement over the course of the year. They could suddenly become even more popular, as they are on an upward trajectory, though not quite as extreme as our top 5 above.
Squad may end up being replaced by ‘gang’. Since calling someone a “friend” is simply never cool enough, this term has come to replace the now somewhat corny ‘squad’. Believe it or not, one friend is a "gang".
“Gang gang”, saying gang twice, has also become popular as a plural, and it is often used as a hashtag on social media, especially Twitter: #ganggang
This term, which can be traced to Philly, originally meant joint, then morphed to mean literally anything, usually any noun: a person, place, or thing (but quite often a sexy woman).
Cringey (or cringy) is a fun term for something that makes you want to cringe. It has often been used in regard to internet videos, especially amateur homemade videos on YouTube by very young performers, sometimes younger siblings.
We noted “bra” as a feminine derivative of sorts for bro, meaning “someone who’s always there for support”.
Awesome, cool, or good. Here’s an example of Gucci paired with Gang, mentioned above.
gucci gang (Lil Pump)
Savage means "brutal yet awesome", a useful combination ;)
"Dad" & "Mom"
Not what you think. Rather new, this term is given to the highest ranking person in a given environment. For instance, if 4 guys are playing an online video game, the "Dad" is the person with the highest level game character.
These are our top movers for the year. There are many other pop terms, of course; these are US focused and more general in nature. If you put additional lenses in play, such as geography, age, gender, category, and source of data, other, quite different terms may top the list.
Of course, unless your industry is dead, the way your stakeholders talk about a topic can be very dynamic even without involving much slang or many buzz words. Often which competitors are mentioned, what items are seen as benefits or barriers, and which emotions surround these topics can be just as interesting, or even more so. Longitudinal voice-of-customer data can be found in a variety of sources, from phone logs, emails, chat logs, and surveys to social media, to name just a few.
If you’re curious about tracking what your customers say, how they say it, and how you can use that to better connect with them, and even predict their future behavior, including satisfaction and repurchase, please reach out. We're happy to show you what Your Data + OdinText looks like.
Net Promoter Score (NPS) +OdinText = Predictive NPS
It was nice to see Research Business Report cover one of our Net Promoter (NPS) case studies today. We’ve found that, contrary to popular belief, NPS and other customer satisfaction ratings like Overall Satisfaction don’t correlate much with important KPIs like return behavior and sales revenue.
In this case study, by adding OdinText to NPS, it was possible to better understand and predict these far more important KPIs: Predictive NPS, if you will.
If you have NPS or any other customer satisfaction data and would like to better understand the more important KPIs like repurchase, churn, and revenue, please reach out. We would be happy to send you more information on our NPS case studies and Key Driver Reporting. Add OdinText to NPS and Predict What Matters!
ad testing +OdinText
[Author's note: As I write this blog post early Monday morning after the Super Bowl, I have already completed the initial ad testing analysis. It’s a case where modern AI and analytics software (OdinText) is faster than the data collection process/vendor we’re relying on. We asked an open-ended comment question among n=3,000 respondents about which Super Bowl ads they like/dislike and why. Eager to have the analysis complete as soon as possible, I have already finished it and written the blog based on the n=1,011 initial responses received. But since 1,998 more are expected, I’m painfully waiting to publish the results until the rest of the fielding comes in. The bad part is waiting for the sample. The good part is knowing that repeating the analysis will literally take less than a minute: just upload the data into OdinText, and the brand names and advertisement likes and dislikes will automatically be coded, analyzed, and charted in seconds. I just have to review whether anything has changed materially and make small updates to my copy below in that case. As it turned out, more data did change the findings, so I did have to change my blog copy. Ah, the joys of modern analytics!]
The advertising pundits weighed in on which ads were best and worst even before the Super Bowl aired. We tend to do things a little differently at OdinText and let data, not opinion, drive.
Of course, for "best" and "worst" not to be subjective, we need some definition of desired outcomes. Last year we looked at a simple formula to evaluate efficacy consisting of Awareness + Positive Sentiment/Liking of the ads.
For instance, you may not remember this because of the low sentiment, but last year 84 Lumber was one of the companies with the highest awareness after the Super Bowl. However, it also had low sentiment and relevancy (as it dealt with the explosive issue of immigration/Trump's wall in a somewhat ambiguous way). It probably ended up doing better among its core customer segments than among the general population, but since Super Bowl ads are expensive, I argued that, all things equal, a strategy with a broader target in mind, which aims to leave a positive impact on this broader group, should provide a better ROI. Looking at it another way, to have the most significant positive impact we want to maximize both awareness and sentiment almost equally.
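As a toy illustration of this "weight awareness and sentiment almost equally" idea, the sketch below ranks ads by an equally weighted combination of the two metrics. The scores are made-up placeholders chosen only to echo the ordering discussed below, not our actual survey results, and the formula is a simplification of the model.

```python
# Illustrative sketch of the simple efficacy idea described above:
# combine awareness and net positive sentiment with roughly equal weight.
# All scores are invented placeholders on a 0..1 scale.

def ad_score(awareness, sentiment, w_awareness=0.5):
    """Equal weights by default, per the 'maximize both almost
    equally' assumption."""
    return w_awareness * awareness + (1 - w_awareness) * sentiment

ads = {
    "NFL Dirty Dancing": (0.70, 0.90),   # high awareness AND high liking
    "Amazon Alexa":      (0.50, 0.98),   # modest awareness, extreme liking
    "Tide":              (0.95, 0.50),   # highest awareness, middling liking
    "Diet Coke Mango":   (0.20, 0.15),   # low on both
}
ranked = sorted(ads, key=lambda a: ad_score(*ads[a]), reverse=True)
print(ranked)  # best combined awareness + sentiment first
```

Note that with a higher awareness weight (say `w_awareness=0.8`), Tide would jump ahead, which is exactly the "if you believe awareness is everything" caveat below.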
With those assumptions and comments from over 3,000 respondents, OdinText's AI predicted which of the Super Bowl Ads were successful, and which were not. Below I've shown 10 Brands/ads, the best performing 6 and the worst 4.
OdinText Ad Ratings
#1. THE NFL
In a year when there has been a lot of controversy surrounding NFL players taking a knee, and with a few of our respondents explicitly stating that they boycotted the Super Bowl this year, it's interesting to see the NFL advertising, and doing it so well. The NFL's Dirty Dancing ad with Manning and Beckham performed best, I believe in part because of its high relevance to the audience, but also for garnering high awareness together with very high positive sentiment/liking. In fact, only one other ad came close in sentiment.
#2 AMAZON ALEXA
That second most well-liked ad was Amazon's Alexa. Not as much because of its awareness (which was rather low in comparison), but because of its extremely high sentiment. The audience loved the various famous actors playing the voice of Alexa at least as much as they enjoyed NFL players Dirty Dancing.
In 3rd place we have Tide. They earned the spot less for sentiment (though viewers did like the ad) than for the awareness the ad garnered: Tide had THE HIGHEST awareness of any Super Bowl ad. However, contrary to some of the advertising pundits' opinions, it just wasn't quite as consistently well liked by viewers as Dirty Dancing and Alexa.
If you are in the camp who believes awareness is everything, then Tide deserves an even higher spot.
Doritos + Mountain Dew Ice were so close in our model that I’m going to give them a tie for 3rd. Not for awareness like Tide, but for balancing both positive sentiment and awareness perfectly. Their mix of awareness and liking was in the same proportions as NFL Dirty Dancing, just at a slightly smaller scale.
Obviously, considering the audience and occasion, just like the NFL ad, Doritos especially is a highly relevant product, and, as importantly, the humorous approach with two extremely popular yet not commonly seen together stars (namely Morgan Freeman and Peter Dinklage/Tyrion from Game of Thrones) succeeded in the unique combo messaging of Fire & Ice.
#5 BUD-LIGHT (NOT BUDWEISER)
Budweiser is almost expected to do well. So, in a way, it may be surprising to see it doesn’t make it into our analysis (beer, in general, did poorly, especially Miller Ultra). It really should be easy for Budweiser, though. Here's a case where the occasion is more than just relevant; it's almost as if the brand has a historic Super Bowl halo effect. That said, their performance was less than impressive.
While the idea of stopping the Budweiser line to make water in an emergency may be touching for some, reminding consumers you have a good, fun product may be a safer strategy than asking for kudos for merely being a good corporate citizen.
And that’s where Bud-Light’s Knight did better. Beer should be about fun…
Here's a case where awareness was quite low, but the ad was still better liked than average compared to the other brands. Toyota and our #8 brand just barely made the list. While the setting was right, the Super Bowl, in the end the ‘Priest, an Imam, a Monk and a Rabbi' may have felt a bit less like a joke and more like preaching…
Like Budweiser, we expect a lot from Coca-Cola when it comes to advertising. They’ve been pushing the diversity message for a few years now. It may be that pulling at heartstrings is far harder than making people laugh. Coca-Cola had lower-than-average sentiment coupled with relatively low awareness. Not a winning combination.
#2 DODGE RAM
Dodge Ram did better than Coca-Cola, at least, especially on awareness, but also somewhat on sentiment/liking.
The negative aspect, of course, was in large part the appropriateness of using Martin Luther King's message at the beginning.
When it comes to ads like these, though, I think we must assume, as was the case for 84 Lumber last year, that perhaps the brand knows what it's doing. They aren't there to please everyone (as you would hope is the goal of Pepsi and Coca-Cola), but to message their core audience with a ‘We Get You, Even if Everyone Else Doesn't'. And so, awareness-wise, Ram did better than Amazon, Bud-Light, and Pepsi. But on an overall basis, they get dinged by the overall sentiment due to the, some would say, clumsy ‘MLK + Patriotic' messaging. Only time and sales will tell…
T-Mobile was less well liked than you’d think; who doesn’t like babies, right? It turns out people are getting tired of “social responsibility ads” in their entertainment, at least that’s what they told us.
#4 DIET COKE (TWISTED MANGO)
The booby prize. OK, so here is a bad ad. In PR they used to say any PR is good PR, but Diet Coke didn't do well on either of our metrics. It had low awareness combined with even lower sentiment/liking. Diet Coke Mango, because… just no.
A WORD ABOUT PEPSI
Pepsi, what can I say? You may be surprised that, yet again, Pepsi performed poorly compared to the other brands I mentioned, considering that their name was all over the Super Bowl during the Half Time Show. And yet, it may be that the real winner of Halftime is the brand of the performer, which this year was Justin Timberlake. We saw a similar pattern last year as well.
State of The POTUS - Text Analytics Reveals the Reasons Behind Trump's Approval Ratings
Over the past few weeks we’ve heard political pundits on all major news networks chime in on how Trump is doing one year after taking office. Part of the discussion is around what he has and hasn’t done, but an even bigger part continues to be about how he is perceived, both domestically and abroad, and some very grim opinion/approval polling is available. Many polls have Trump as the President with the lowest approval ratings in history.
Sadly, political polling, including approval ratings, tells us absolutely nothing about the underlying causes of the ratings. Therefore, I thought I’d share our findings in this area. Utilizing our text analytics software, OdinText, we have been tracking not just sentiment related to Trump, but more importantly the positioning of 40+ topics/themes that are important predictors of that sentiment. In the brief analysis below I will not have time to go into each of the attributes we have identified as important drivers; I will focus on a few of the areas that have seen the most change for Trump during the past year.
How has the opinion of Trump changed in the minds of the American people?
By looking at Trump’s positioning just before he took office (with all the campaign positioning fresh in the minds of the people), comparing it to half a year into his term, and again now a full year in, we can get a good idea of the impact various issues have on approval ratings and, even more importantly, positioning.
Let’s start by looking back to just before he was elected. OdinText’s AI uncovered the 15 most significant changes in perception between just before Trump won the election and now. Trump has fallen on 11 of these attributes and risen on 4.
Trump Pre Election Positioning VS One Year In
If we compare Trump just before the election versus Trump today, we see several key differences. More recently, four themes have become more important in describing what Trump stands for in the minds of Americans, when we include everyone (both those who like and those who dislike him). These newer positions are “Less Regulation”, “Healthcare Reform”, “Money/Greed”, and “Dishonesty”. Interestingly, text analytics reveals that the framing of one of the important issues seems to be changing: Trump's supporters are now more likely to use the term “Healthcare Reform” rather than the previous “Repeal Obamacare”.
Other than the repeal-of-Obamacare issue, prior to the election, in the minds of Americans Trump was more likely to be associated with “Gun Rights”, “Honesty”, “Trade Deals”, “Change”, supporting “Pro Life”, pro and con “Immigration” related issues including “The Wall”, and finally his slogan “MAGA” (Make America Great Again).
The decreased relevance of many of these issues has to do with pre-election positioning, both by Trump and the Republican Party and by the Democrats' counter-positioning of him. Since the election, some of these, like ‘Gun Rights’, have seemingly become less important for various reasons.
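A rise or fall in an attribute's share of comments between two waves, like the shifts described above, can be checked with a simple two-proportion z-test. The sketch below is illustrative only: the counts are invented, and this is not presented as OdinText's actual significance method.

```python
import math

# Hypothetical sketch: test whether a theme's share of comments shifted
# significantly between two tracking waves. Counts are invented.

def proportion_shift(mentions_a, total_a, mentions_b, total_b):
    """Two-proportion z-test. Returns (change, z); an |z| above ~1.96
    marks a significant rise or fall at the 95% level."""
    p1, p2 = mentions_a / total_a, mentions_b / total_b
    pooled = (mentions_a + mentions_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return p2 - p1, (p2 - p1) / se

# e.g. a theme mentioned in 120 of 2,000 pre-election comments
# but only 45 of 2,000 comments one year in:
change, z = proportion_shift(120, 2000, 45, 2000)
print(change, z)  # negative change, |z| well above 1.96
```

A wave-over-wave test like this is what separates a real positioning shift from sampling noise, especially with open-ended data where individual themes can be sparse.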
Five Months from Record Low
If we look at changes between this past summer and now, there has been significantly less movement in his positioning in American minds. He has seen a slight but significant bump in overall positive emotional sentiment/Joy and in the MAGA positioning, as well as on Taxes, the Economy, and The Wall, while also seeing a decrease in “Anger” and “Hate/Racism”, which peaked this summer.
His lowest point so far in the minds of Americans came during the August 12th, 2017 white nationalist rally in Charlottesville. Trump’s positioning as a hate monger was almost as high as the weekend before the election, while positive emotional sentiment and ‘MAGA’ among his supporters were simultaneously at an all-time low.
Since the August low, Trump does appear to have rebounded somewhat, and while one year into office many believe the one thing Trump now stands for is himself, greed and money are a lesser evil in America than hate and racism.
It seems that one year into office, at least for now, the economy and tax cuts are giving Trump a bit of a bump back to pre-election levels in the minds of many Americans.
I’m not sure what the future holds in this case, but I hope you, like me, found some of the underlying reasons for his approval ratings of interest. These are, after all, more important than simple ratings, because these reasons are levers that can be pulled to affect the final outcomes and positioning of any brand, including that of a POTUS.
A Summary of the 2018 Insights Association CEO Summit
Last year I summarized the CEO Summit theme as ‘Technology Partnering’. This year the two words I’d choose would be ‘Change’ and ‘Partnering’.
It was widely agreed that successful companies can’t stand still in a changing industry. Changing doesn’t necessarily mean adding more technology; in my opinion it means doing something completely new. If profits aren’t increasing and your team isn’t happy, stop and think.
Personally, I believe change can even be backward looking. Sometimes we’ve done something successful in the past that more recently we have forgotten to do. A conference like the Insights Association’s CEO Summit can remind you of these things: you hear stories about what is working for others and think, hey, I did that a while back and had forgotten about it; it’s time to try it again, perhaps in a slightly new way that matches your current conditions.
The theme of the day, expressed by different CEOs in different ways, had to do with incremental change: changing a bit at a time. “Changing 1% per day”, or my personal long-time favorite answer to the question “How do you eat an elephant?”: “One bite at a time.”
I like the new “1% per day” though, because of its focus on the present and the need for continuous improvement and change. [Zain Raj, CEO of Shapiro + Raj, really drove this home.]
Partnering was a theme I wrote about last year as well. I do think if you come to a conference like this, and don’t have it in mind, you’re missing a big opportunity.
As usual at conferences, there were many little side meetings. A good partnership, in my experience, doesn’t have to be some grand M&A, but it must be more than words; there must be execution.
The CEOs of Nielsen, Kantar, TNS, and IPSOS don’t attend the Insights Association Summit. This is a chance for start-ups and smaller and mid-sized firms to learn from each other, to begin partnerships, and to offer clients better, more innovative products and services than the larger and somewhat slower-moving firms can.
Jamin Brazil, formerly CEO of two successful research firms, Decipher and FocusVision, spoke on a different type of partnership than those between companies. He drew on his experience with long-term business partner Jayme Plunkett. His humble yet undeniably successful story is an interesting one.
As part of his talk, he surveyed the attendees at the CEO Summit. As with most surveys, the data was “mixed” (structured and unstructured), so he used OdinText to analyze the results. I’ll include two of his slides below.
First, comparing the market research industry data to other industries, he found that we as an industry seem more likely to partner, and tend to do so longer and more successfully, than CEOs in other industries.
While the sample size here was very small, OdinText’s AI was still able to detect some directional patterns in the data. For instance, when considering the pros and cons of partnering, marketing research CEOs who had partnered longer were much less likely to be concerned with ‘Decision Making’ issues and agreeing on specific ‘Goals and Roles’, and more likely to focus on ‘Sharing’ and ‘Finance’, while those in shorter relationships tended to be more focused on the former and less on the latter.
Also, perhaps not surprisingly, those who were more favorable toward and successful in partnering had a very different, more positive and productive outlook on the idea of partnering. This manifested in several ways, including tone and word choice. In fact, those who had more difficulty with the idea of partnering tended to use more formal terminology, like the word “Partner”, instead of more familiar and affectionate terminology such as “best friend”, or describing partnering “as a marriage”. As one of the many CEOs who responded to the survey put it, “You Fight and are Challenged to Make Decision – Best Decision Ever”; that certainly sounds like a marriage to me!
I for one can see the benefits of partnering, and have seen it work well in many other research companies. One such company is Critical Mix, where attending Co-CEO Keith Price and his co-founder Hugh Davis have also had a very long and successful relationship. Keith did a great job on the now infamous ‘CEO Summit Hot Seat’ and echoed some of these findings.
Ultimately, partnerships and partnering are to some degree about timing. But if we aren’t on the lookout for good partners, whether inside our business or outside with another business, we’re likely to miss these chances. Clearly, based on what I saw, partnering offers the opportunity not just for more profit and less risk and stress, but also to make our journeys more fun.
How do you plan to change or partner in 2018? I'm looking forward to hearing your thoughts. At OdinText we’re always looking to partner with researchers who have good data and want to improve their insights.
A Customer Experience Case Study Utilizing OdinText's Text and Predictive Analytics (Predicting Actual Return Behavior and Sales with CX Ratings or NPS)
We were honored today to have one of our case studies featured by Greenbook. Though we have several other cases like it, this remains one of our favorite uses of customer satisfaction/customer experience data (whether NPS or any other rating scales are used). The final analysis involved close to a million customers over a two-year period.
In the case study, which features Jiffy Lube, we found that, contrary to what Bain Consulting has been claiming in Harvard Business Review for over a decade, customer satisfaction ratings (whether NPS, OSAT, or anything else) have very little correlation with actual return behavior/repurchase, and absolutely NO correlation with sales/revenue (business growth).
The solution to better understanding and modeling both return behavior and sales lies in leveraging both the structured and unstructured text data, something OdinText is uniquely built to do.
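To make the correlation point concrete, here is a minimal sketch in Python of how one might check whether a satisfaction rating tracks observed return behavior. The `pearson` helper and all the numbers are made up for illustration; this is not the actual Jiffy Lube analysis, just a toy case where high raters are no more likely to return than low raters.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical toy data: 0-10 likelihood-to-recommend ratings vs.
# whether each customer actually came back (1) or not (0).
ratings  = [9, 9, 2, 2]
returned = [1, 0, 1, 0]

r = pearson(ratings, returned)
print(round(r, 3))  # 0.0: in this toy data the rating tells you nothing
```

The same function applied to a real CX file would simply take the rating column and the observed-return column; the case study's point is that in practice the result lands far closer to this toy 0.0 than to the strong link the ratings are assumed to carry.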
You can read the abbreviated case study on Greenbook's site here.
Feel free to contact us with any questions or for a slightly more in-depth write up.
OdinText's software has recently been updated and is now even more powerful at handling predictive analytics for any customer experience metric, whether OSAT, NPS, or anything else. You may request information, as well as early access to our upcoming release, here.
Thank you for reading, and thank you to Greenbook for selecting and sharing this interesting case study.
Almost Half of Market Researchers Are Doing Market Research Wrong! - My Interview with the QRCA (and a Quiet New Trend: Science-Based Qualitative)
Two years ago I shared some research-on-research about how market researchers view quantitative and qualitative research, and I stated that almost half of researchers don't understand what good data is. Some 'Quallies' work almost exclusively with comment data from extremely small samples (about 25% of market researchers surveyed). Conversely, there is a large group of 'Quant Jockeys' who, while working with larger, more representative samples, purposefully avoid any unstructured data such as open-ended comments, either because they don't want to deal with coding and analyzing it or because they don't believe in its accuracy and ability to add to the research objectives. In my opinion, both groups have it totally wrong and are doing a tremendous disservice to their companies and clients. Today I'll focus on just the first group, those who tend to rely primarily on qualitative research for decisions.
Note that today's blog post relates to a recent interview I was asked to take part in by the QRCA's (Qualitative Research Consultants Association) Views Magazine. When they contacted me, I told them that in most cases (with some exceptions) text analytics really isn't a good fit for qualitative researchers, and I asked if they were sure they wanted to include someone with that opinion in their magazine. I was told that yes, they were OK with sharing different viewpoints.
I'll share a link to the full interview in the online version of the magazine at the bottom of this post. But first, a few thoughts to explain my issues with qualitative data and how it's often applied, as well as some of my recent experiences with qualitative researchers licensing our text analytics software, OdinText.
The Problem with Qualitative Research
If qual research were really used the way it's often positioned, 'as a way to inform quant research,' that would be fine. The fact of the matter, though, is that qual often isn't used that way, but rather as an end in and of itself. Let me explain.
First, there is one exception to this rule of using qual only as pilot feedback for quant. If you had a product made specifically for US state governors, then your total population is only N=50. It is of course highly unlikely that you would ever get the governors of each and every US state to participate in any research (which would be a census of all governors), so if you were fortunate enough to have, say, five governors willing to give you feedback on your product or service, you would and should hang on to and over-analyze every single comment they gave you.
If, however, you have even a slightly more mainstream product, take a very common one like hamburgers, and you are relying on 5-10 focus groups of n=12 to determine how different parts of the USA (Northeast, Midwest, South, and West) like their burgers, and rather than feeding into a quantitative research instrument with a greater sample you issue a 'report' that you share with management, then you've probably just wasted a lot of time and money for some extremely inaccurate and dangerous findings. Yet surprisingly, this happens far more often than one would imagine.
Cognitive Dissonance Among Qual Researchers when Using OdinText
How do I know this, you may ask? Good text analytics software is really about data mining and pattern recognition. When I first launched OdinText, we had a lot of inquiries from qualitative researchers who wanted some way to make their lives easier. After all, they had "a lot" of unstructured text comment data that was time-consuming for them to process, read, organize, and analyze. Surely software made to "analyze text" must be the answer to their problems.
The problem was that the majority of qual researchers work with tiny projects/samples: interviews and groups between n=1 and n=12. Even if they do a couple of groups, as in the hamburger example above, we're still talking about a total of only around n=100 representing four or more regional groups of interest, and therefore fewer than n=25 per group. It is impossible to get meaningful, statistically comparable findings and identify real patterns between the key groups of interest in this case.
The Little Noticed Trend In Qual (Qual Data is Getting Bigger)
However, slowly over the past couple of years, for the first time I've seen a movement of some 'qualitative' shops and researchers toward quant. They have started working with larger data sets than before. In some cases it is because they have been pulled in to manage larger ongoing communities/boards, in some cases larger social media projects; in others, they have started mixing survey data with qual or, even better, employing qualitative techniques in quant research (think better open-ends in survey research).
For this reason, we now have a small but growing group of 'former' qual researchers using OdinText. These researchers aren't our typical mixed-data or quantitative researchers, but qualitative researchers who are working with larger samples.
And guess what: "qualitative" has nothing to do with whether data is in text or numeric format; it has everything to do with sample size. So, perhaps unknowingly, these 'qualitative researchers' have stepped across the line into quantitative territory, where, often for the first time in their careers, statistics can actually be used. And it can be shocking!
My Experience with 'Qualitative' Researchers Going Quant/Using Text Analytics
Let me explain what I mean. Recently, several researchers from a clear 'qual' background have become users of our software, OdinText. The reason is that the amount of data they had was quickly getting "bigger than they were able to handle." They believe they are still dealing with "qualitative" data because most of it is text based, but because of the volume they are now quant researchers whether they know it or not (whether the data is text or numeric is irrelevant).
Ironically, for this reason we also see much smaller data sets/projects than ever before being uploaded to the OdinText servers. No, not typically single focus groups with n=12 respondents, but still projects that are often right on the line between quant and qual (n=100+).
The discussions we’re having with these researchers as they begin to understand the quantitative implications of what they have been doing for years are interesting.
Let me preface this with the fact that I have a great amount of respect for the 'qualitative' researchers who begin using OdinText. Ironically, the simple fact that we have mutually determined that an OdinText license is appropriate for them means that they are no longer 'qualitative' researchers (as I explained earlier). They are in fact crossing into quant territory, often for the first time in their careers.
The data may be primarily text based, though usually mixed. But there's no doubt in their minds or ours that one of the most valuable aspects of the data is the customer commentary in the text, and this can be a strength.
The challenge lies in getting them to quickly accept and come to terms with quantitative/statistical analysis, and thereby also the importance of sample size.
What do you mean my sample is too small?
Once you have licensed OdinText, you can upload pretty much any data set you have. So even though a researcher may have initially licensed OdinText to analyze projects with, say, 3,000+ comments, there's nothing to stop them from uploading that survey or set of focus groups with just n=150 or so.
Here’s where it sometimes gets interesting. A sample size of n=150 is right on the borderline. It depends on what you are trying to do with it of course. If half of your respondents are doctors (n=75) and half are nurses (n=75), then you may indeed be able to see some meaningful differences between these two groups in your data.
But what if these n=150 respondents are hamburger customers, and your objective is to understand the difference between the four US regions in the hamburger example I referenced earlier? Then you have about n=37 in each subgroup of interest, and you are likely to find very few, IF ANY, meaningful patterns or differences.
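As a rough sketch of why n=37 per region is so limiting, here is a pooled two-proportion z-test in Python. The mention rates (20 of 37 versus 13 of 37 customers mentioning some theme) are invented for illustration; the point is that even a 19-point gap fails to clear the conventional |z| > 1.96 significance bar at these sizes, while the identical gap clears it easily at n=500 per group.

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Pooled two-proportion z-statistic: is group 1's rate
    meaningfully different from group 2's?"""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical mention rates for one theme in two regions.
z_small = two_prop_z(20, 37, 13, 37)      # 54% vs 35% with n=37 per region
z_large = two_prop_z(270, 500, 175, 500)  # same 54% vs 35% with n=500

print(round(z_small, 2))  # ~1.64, below 1.96: not significant
print(z_large > 1.96)     # True: the same gap is now clearly significant
```

In other words, with n=37 per subgroup the focus-group-sized data simply cannot separate a real regional difference from noise, which is exactly the shock described below.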
Here's where that cognitive dissonance can happen, and, if we are lucky, the breakthroughs.
A former 'qual researcher' who has spent the last 15 years of their career making 'management-level recommendations' on how to market burgers differently in different regions based on data like this may be in shock when, for the first time, the software tells them there are maybe just two or three small differences, or, even worse, NO MEANINGFUL PATTERNS OR DIFFERENCES WHATSOEVER in their data!
How can this be? They've analyzed data like this many times before, and they were always able to write a good report with lots of rich, detailed examples of how Northeastern hamburger consumers preferred this or that, because of this and that. And here we are, looking at the same kind of data, and we realize there is very little here other than completely subjective thoughts and quotes.
Opportunity for Change
This is where, to their credit, most of our users start to understand the quantitative nature of data analysis. Unlike the few 'Quant-Only Jockeys' I referenced at the beginning of the article, they already understand that many of the best insights come from text data: free-form, unaided, non-leading, yet creative questions.
They only need to start thinking about their sample sizes before fielding a project; to understand the quantitative nature of sampling; to think about the handful of structured data points they perhaps hadn't considered much in previous projects, and how those can be leveraged together with the unstructured data. They realize they need to think about all this first, before the data has been collected and the project is nearly over and ready for the most important step, the analysis, where the rubber hits the road and garbage in really does mean garbage out.
If we're lucky, they quickly understand: it's not about quant and qual anymore. It's about mixed data; it's about having the right data; it's about having enough data to generate robust findings and then superior insights!
Final Thoughts on the Two Nearly Meaningless Terms 'Quant' and 'Qual'
As I've said many times before, here and on the NGMR blog, the terms "qualitative" and "quantitative," at least the way they are commonly used in marketing research, are already passé.
The future is Mixed Data. I’ve known this to be true for years, and almost all our patent claims involve this important concept. Our research shows time and time again, that when we use both structured and unstructured data in our analysis, models and predictions, the results are far more accurate.
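As a minimal sketch of what "mixed data" means in practice, the hypothetical helper below combines one structured field (a 0-10 rating) with a few keyword flags derived from the unstructured open-end into a single feature vector that any downstream model could consume. The keywords, scaling, and function name are invented for illustration; they are not OdinText's actual feature set.

```python
def mixed_features(rating, comment):
    """Combine a structured 0-10 rating with simple keyword flags
    pulled from the unstructured open-end comment."""
    text = comment.lower()
    return [
        rating / 10.0,                    # structured: rating, normalized
        1.0 if "wait" in text else 0.0,   # unstructured: hypothetical signals
        1.0 if "price" in text else 0.0,
        1.0 if "staff" in text else 0.0,
    ]

# One respondent: a promoter-level rating whose comment still
# flags a price complaint that the rating alone would hide.
vec = mixed_features(9, "Great staff, but the price keeps creeping up")
print(vec)  # [0.9, 0.0, 1.0, 1.0]
```

Feeding both kinds of signal into the same vector is the whole idea: the rating column and the comment column stop being separate "quant" and "qual" worlds and become one input to the model.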
For this reason we've been hard at work developing the first truly mixed data analytics platform. We'll officially launch it three months from now, but many of our current customers already have access. [For those who are interested in learning more or would like early access, you can inquire here: OdinText.com/Predict-What-Matters].
In the meantime, if you're wondering whether you have enough data to warrant advanced mixed data and text analysis, check out the online version of the article in QRCA Views magazine here. Robin Wedewer at QRCA did an excellent job asking some really pointed questions that forced me to answer more honestly and clearly than I might otherwise have.
I realize not everyone will agree with today's post or my interview with QRCA, and I welcome your comments here. I only ask that you read both the post above and the interview in QRCA before commenting solely based on the title of this post.
Thank you for reading. As always, I welcome questions, publicly in the comments below or privately via LinkedIn or our inquiry form.
Our Top 10 Most Read Data and Text Mining Posts of 2017
Thank you for reading our blog this year. The OdinText blog has quickly become even more popular than the Next Gen Market Research blog, and I really appreciate the thoughtful feedback we've received here on the blog, via Twitter, and by email.
In case you’re curious, here are the most popular posts of the year: