Posts tagged Marketing Research
A New Trend in Qualitative Research

Almost Half of Market Researchers are doing Market Research Wrong! - My Interview with the QRCA (And a Quiet New Trend - Science Based Qualitative).

Two years ago I shared some research-on-research about how market researchers view quantitative and qualitative research. I stated that almost half of researchers don't understand what good data is. Some 'Quallies' (about 25% of market researchers surveyed) rely almost exclusively on comment data from extremely small samples. Conversely, there is a large group of 'Quant Jockeys' who, while working with larger, more representative sample sizes, purposefully avoid unstructured data such as open-ended comments because they don't want to deal with coding and analyzing it, or don't believe in its accuracy and ability to add to the research objectives. In my opinion both groups have it totally wrong, and are doing a tremendous disservice to their companies and clients. Today, I'll be focusing on just the first group, those who tend to rely primarily on qualitative research for decisions.

Note that today's blog post relates to a recent interview I was asked to take part in by the QRCA's (Qualitative Research Consultants Association) Views Magazine. When they contacted me, I told them that in most cases (with some exceptions) text analytics really isn't a good fit for qualitative researchers, and asked if they were sure they wanted to include someone with that opinion in their magazine. I was told that yes, they were OK with sharing different viewpoints.

I'll share a link to the full interview in the online version of the magazine at the bottom of this post. But first, a few thoughts to explain my issues with qualitative data and how it's often applied, as well as some of my recent experiences with qualitative researchers licensing our text analytics software, OdinText.

The Problem with Qualitative Research

If qual research were really used the way it's often positioned, 'as a way to inform quant research,' that would be fine. The fact of the matter is, though, qual often isn't being used that way, but instead as an end in and of itself. Let me explain.

First, there is one exception to this rule of only using qual as pilot feedback for quant. If you had a product made specifically for US state governors, for instance, then your total population would be only N=50. It is of course highly unlikely that you would ever get the governors of each and every state to participate in any research (which would be a census of all governors), so if you were fortunate enough to have a group of, say, five governors who were willing to give you feedback on your product or service, you would and should obviously hang on to and over-analyze every single comment they gave you.

If, however, you have even a slightly more mainstream product (take a very common one like hamburgers), and you are relying on 5-10 focus groups of n=12 to determine how different parts of the USA (Northeast, Midwest, South and West) like their burgers, and rather than feeding directly into a quantitative research instrument with a larger sample you issue a 'report' that you share with management, then you've probably just wasted a lot of time and money on some extremely inaccurate and dangerous findings. Yet surprisingly, this happens far more often than one would imagine.

Cognitive Dissonance Among Qual Researchers when Using OdinText

How do I know this, you may ask? Good text analytics software is really about data mining and pattern recognition. When I first launched OdinText we had a lot of inquiries from qualitative researchers who wanted some way to make their lives easier. After all, they had "a lot" of unstructured text comment data that was time consuming for them to process, read, organize and analyze. Certainly, software made to "analyze text" must therefore be the answer to their problems.

The problem was that the majority of qual researchers work with tiny projects/samples: interviews and groups between n=1 and n=12. Even if they do a couple of groups, as in the hamburger example above, we're still talking about a total of around n=100 representing four or more regional groups of interest, and therefore fewer than n=25 per group. It is impossible to get meaningful, statistically comparable findings and identify real patterns between the key groups of interest in this case.

The Little Noticed Trend In Qual (Qual Data is Getting Bigger)

However, slowly over the past couple of years, for the first time I've seen a movement of some 'qualitative' shops and researchers toward quant. They have started working with larger data sets than before. In some cases it's because they've been pulled in to manage larger ongoing communities/boards, in some cases larger social media projects; in others, they have started mixing survey data with qual or, even better, employing qualitative techniques in quant research (think better open-ends in survey research).

For this reason, we now have a small but growing group of ‘former’ Qual researchers using OdinText. These researchers aren’t our typical mixed data or quantitative researchers, but qualitative researchers that are working with larger samples.

And guess what: "qualitative" has nothing to do with whether data is in text or numeric format; it has everything to do with sample size. And so, perhaps unknowingly, these 'qualitative researchers' have taken the step across the line into quantitative territory, where, often for the first time in their careers, statistics can actually be used. And it can be shocking!

My Experience with ‘Qualitative’ Researchers going Quant/using Text Analytics

Let me explain what I mean. Recently several researchers from a clear 'qual' background have become users of our software, OdinText. The reason is that the amount of data they had was quickly getting "bigger than they were able to handle." They believe they are still dealing with "qualitative" data because most of it is text based, but because of the volume they are now quant researchers whether they know it or not (text or numeric format is irrelevant).

Ironically, for this reason, we also see much smaller data sizes/projects than ever before being uploaded to the OdinText servers. No, not typically single focus groups with n=12 respondents, but still projects that are often right on the line between quant and qual (n=100+).

The discussions we’re having with these researchers as they begin to understand the quantitative implications of what they have been doing for years are interesting.

Let me preface this with the fact that I have a great amount of respect for the ‘Qualitative’ researchers that begin using OdinText. Ironically, the simple fact that we have mutually determined that an OdinText license is appropriate for them means that they are no longer ‘Qualitative’ researchers (as I explained earlier). They are in fact crossing the line into Quant territory, often for the first time in their careers.

The data may be primarily text based, though usually mixed, but there's no doubt in their minds nor ours that one of the most valuable aspects of the data is the customer commentary in the text, and this can be a strength.

The challenge lies in getting them to quickly accept and come to terms with quantitative/statistical analysis, and thereby also the importance of sample size.

What do you mean my sample is too small?

When you have licensed OdinText you can upload pretty much any data set you have. So even though they may have initially licensed OdinText to analyze some projects with say 3,000+ comments, there’s nothing to stop them from uploading that survey or set of focus groups with just n=150 or so.

Here’s where it sometimes gets interesting. A sample size of n=150 is right on the borderline. It depends on what you are trying to do with it of course. If half of your respondents are doctors (n=75) and half are nurses (n=75), then you may indeed be able to see some meaningful differences between these two groups in your data.

But what if those n=150 respondents are hamburger customers, and your objective was to understand the differences between the four US regions in the example I referenced earlier? Then you have about n=37 in each subgroup of interest, and you are likely to find very few, IF ANY, meaningful patterns or differences.
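To make the sample-size point concrete, here is a minimal sketch (plain Python, not OdinText) of a two-proportion z-test. The proportions and group sizes are hypothetical: suppose 40% of one region's respondents mention a theme versus 25% of another's.

```python
import math

def two_prop_p(p1, n1, p2, n2):
    """Two-sided p-value for a two-proportion z-test."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)            # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))        # normal tail, both sides

# A 40% vs 25% gap is clearly significant with n=500 per group...
print(two_prop_p(0.40, 500, 0.25, 500))  # far below 0.05
# ...but with n=37 per region, the same gap is indistinguishable from noise
print(two_prop_p(0.40, 37, 0.25, 37))    # well above 0.05
```

The same observed difference that would be a solid finding at quant-scale samples simply cannot be separated from chance at focus-group scale.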

Here’s where that cognitive dissonance can happen --- and the breakthroughs if we are lucky.

A former 'qual researcher' who has spent the last 15 years of their career making 'management level recommendations' on how to market burgers differently in different regions based on data like this, and who for the first time is looking at software that says there are maybe just two or three small differences, or, even worse, NO MEANINGFUL PATTERNS OR DIFFERENCES WHATSOEVER in their data, may be in shock!

How can this be? They’ve analyzed data like this many times before, and they were always able to write a good report with lots of rich detailed examples of how North Eastern Hamburger consumers preferred this or that because of this and that. And here we are, looking at the same kind of data, and we realize, there is very little here other than completely subjective thoughts and quotes.

Opportunity for Change

This is where, to their credit, most of our users start to understand the quantitative nature of data analysis. Unlike the few 'quant-only jockeys' I referenced at the beginning of the article, they already understand that many of the best insights come from text data: free-form, unaided, non-leading, yet creative questions.

They only need to start thinking about their sample sizes before fielding a project; to understand the quantitative nature of sampling; to think about the handful of structured data points they perhaps hadn't thought much about in previous projects, and how these can be leveraged together with the unstructured data. They realize they need to start thinking about this first, before the data has all been collected and the project is nearly over and ready for the most important step, the analysis, where the rubber hits the road and garbage in really does mean garbage out.

If we're lucky, they quickly understand: it's not about quant and qual anymore. It's about mixed data, about having the right data, about having enough data to generate robust findings and then superior insights!

Final Thoughts on Two Nearly Meaningless Terms: 'Quant' and 'Qual'

As I've said many times before, here and on the NGMR blog, the terms "qualitative" and "quantitative", at least the way they are commonly used in marketing research, are already passé.

The future is Mixed Data. I’ve known this to be true for years, and almost all our patent claims involve this important concept. Our research shows time and time again, that when we use both structured and unstructured data in our analysis, models and predictions, the results are far more accurate.

For this reason we've been hard at work developing the first truly mixed data analytics platform. We'll be officially launching it three months from now, but many of our current customers already have access. [For those interested in learning more or who would like early access, you can inquire here: OdinText.com/Predict-What-Matters].

In the meantime, if you're wondering whether you have enough data to warrant advanced mixed data and text analysis, check out the online version of the article in QRCA Views magazine here. Robin Wedewer at QRCA did an excellent job of asking pointed questions that forced me to answer more honestly and clearly than I might otherwise have.

I realize not everyone will agree with today's post or my interview with QRCA, and I welcome your comments here. I ask only that you please read both the post above and the interview in QRCA before commenting solely based on the title of this post.

Thank you for reading. As always, I welcome questions publicly in the comments below or privately via LinkedIn or our inquiry form.

@TomHCAnderson

2018 Predictions for Market Research and Analytics

What Kind of Researcher are You?

It's that time of year again when RFL Communications and Greenbook request predictions from market researchers on the trends they expect to see in the new year. Of course no one knows for sure, but the predictions are interesting and fun to read, and I always like searching for the overall patterns, if any.

That said, here's the one I submitted this year. I'm curious to hear yours as well.

 

2018 The Best of Times & The Worst of Times

 The gap between what I’ll call ‘Just Traditional Research’ and more flexible, fluid ‘Advanced Analytics Generalists’ will continue to grow.

 There are three groups of marketing researchers along this dimension. Some ‘Just Traditional’ researchers and companies will not be able to adapt and will want to continue doing just the focus groups or panel surveys they have been doing and will become increasingly out of touch.

 A second group will feign expertise in these not-so-new areas of data and text mining (advanced analytics); they will prefer to call it "AI and Machine Learning," of course, but without any meaningful change to their products, services or analysis. It will be a sales and marketing treatment only.

 Both these groups are rather process oriented. The former doesn't want to change their process; the latter just wants a shiny new process. In either case, the end goal suffers. For both of these groups the future is dim indeed.

 A third group of researchers, the group OdinText is invested in, doesn't try to improve and change because they think they must in order to survive; they were already doing it because they are genuinely curious and ambitious. They don't just want to run that survey a little faster and a little cheaper; they want much more than that. They want to add significant value for their company via their analysis.

 They will invest in learning new tools and techniques, yet will not expect these tools to magically do the work for them after they push a button. These are not lazy employees/managers; they are Type A employees, and they are the future of what 'Marketing Research/Analytics' is to become.

 They realize their own ingenuity and sweat need to be coupled with the new technology to achieve a competitive advantage and surpass management expectations and their competition. They are excited by those prospects, not scared.

 I too am very excited about meeting and working with more of these true 'Advanced Analytics Generalists' and the marketing research supplier firms who serve them, firms that realize co-opetition with other companies with key strengths they don't have makes more sense than buzzwords and feigned expertise in all categories.

 For these 'New Data Scientists', no, these 'Next Gen Market Researchers', 2018 will be the best of times!

It’s a BIT lengthy and general for a prediction. But I believe it’s a real trend that will continue to accelerate. Do you agree or disagree?  What are your predictions?

If you subscribe to the RFL Communications Business Report you'll receive the annual writeup on this topic there, and you can check out the Greenbook version from 36 CEOs online here.

While you can tell the participants take this with varying degrees of seriousness and answer from different points of view, I believe reading all of them, and deciding what patterns, if any, are detectable across them, is well worth the 30 minutes or so it takes.

Again, very much appreciate YOUR thoughts and predictions as well, so please feel free to comment below.

@TomHCAnderson

Artificial Intelligence in Consumer Insights

A Q&A session with ESOMAR's Research World on Artificial Intelligence, Machine Learning, and implications for Marketing Research  [As part of an ESOMAR Research World article on Artificial Intelligence, OdinText founder Tom H. C. Anderson recently took part in a Q&A style interview with ESOMAR's Annelies Verheghe. For more thoughts on AI, check out other recent posts on the topic, including Why Machine Learning is Meaningless and Of Tears and Text Analytics. We look forward to your thoughts or questions via email or in the comments section.]

 

ESOMAR: What is your experience with Artificial Intelligence & Machine Learning (AI)? Would you describe yourself as a user of AI or a person with an interest in the matter but with no or limited experience?

TomHCA: I would describe myself as both a user of Artificial Intelligence as well as a person with a strong interest in the matter even though I have limited mathematical/algorithmic experience with AI. However, I have colleagues here at OdinText who have PhD's in Computer Science and are extremely knowledgeable as they studied AI extensively in school and used it elsewhere before joining us. We continue to evaluate, experiment, and add AI into our application as it makes sense.

ESOMAR: For many people in the research industry, AI is still unknown. How would you define AI? What types of AI do you know?

TomHCA: Defining AI is a very difficult thing to do because people, whether they are researchers, data scientists, in sales, or customers, will each have a different definition. A generic definition of AI is a set of processes (whether hardware, software, mathematical formulas, algorithms, or something else) that give anthropomorphically cognitive abilities to machines. This is evidently a wide-ranging definition. A more specific definition of AI pertaining to market research is a set of knowledge representation, learning, and natural language processing tools that simplifies, speeds up, and improves the extraction of meaningful data.

The most important type of AI for market research is Natural Language Processing. While extracting meaningful information from numerical and categorical data (e.g., whether there is a correlation between gender and brand fidelity) is essentially an easy and now-solved problem, doing the same with text data is much more difficult and still an open research question studied by PhDs in the field of AI and machine learning. At OdinText, we have used AI to solve various problems such as Language Detection, Sentence Detection, Tokenizing, Part of Speech Tagging, Stemming/Lemmatization, Dimensionality Reduction, Feature Selection, and Sentence/Paragraph Categorization. The specific AI and machine learning algorithms that we have used, tested, and investigated range across a wide spectrum, from Multinomial Logit to Principal Component Analysis, Principal Component Regression, Random Forests, Minimum Redundancy Maximum Relevance, Joint Mutual Information, Support Vector Machines, Neural Networks, and Maximum Entropy Modeling.
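As a rough illustration of the earliest steps in such an NLP pipeline (tokenizing and stemming comments into a bag of words, the raw material for later feature selection and categorization), here is a toy sketch in plain Python. The comments and the crude suffix-stripping stemmer are invented for illustration; this is not OdinText's implementation, and real systems use far more sophisticated methods.

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split into word tokens (a crude tokenizer)."""
    return re.findall(r"[a-z']+", text.lower())

def stem(token):
    """Naive suffix-stripping stemmer, for illustration only."""
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

comments = [
    "The burgers tasted great",
    "Great tasting burger, will buy again",
    "Delivery was late and the burger was cold",
]

# Bag-of-words counts after tokenizing + stemming; note that
# "burgers", "burger" and "tasted"/"tasting" collapse together
features = Counter(stem(t) for c in comments for t in tokenize(c))
print(features.most_common(3))
```

Each of the pipeline stages named above (language detection, POS tagging, lemmatization, and so on) refines this same basic idea: turning free text into structured features a model can work with.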

AI isn't necessarily something everyone needs to know a whole lot about. I blogged recently about how I felt it was almost comical how many people were mentioning AI and machine learning at MR conferences I was speaking at, seemingly without any idea what it means. http://odintext.com/blog/machine-learning-and-artificial-intelligence-in-marketing-research/

In my opinion, a little AI has already found its way into a few of the applications out there, and more will certainly come. But if it is successful, it won't be called AI for long. If it's any good it will just be a seamless integration helping to make certain processes faster and easier for the user.

ESOMAR: What concepts should people that are interested in the matter look into?

TomHCA: Unless you are an Engineer/Developer with a PhD in Computer Science, or someone working closely with someone like that on a specific application, I’m not all that sure how much sense it makes for you to be ‘learning about AI’. Ultimately, in our applications, they are algorithms/code running on our servers to quickly find patterns and reduce data.

Furthermore, as we test various algorithms from academia, and develop our own to test, we certainly don’t plan to share any specifics about this with anyone else. Once we deem something useful, it will be incorporated as seamlessly as possible into our software so it will benefit our users. We’ll be explaining to them what these features do in layman’s terms as clearly as possible.

I don’t really see a need for your typical marketing researcher to know too much more than this in most cases. Some of the algorithms themselves are rather complex to explain and require strong mathematical and computer science backgrounds at the graduate level.

ESOMAR: Which AI applications do you consider relevant for the market research industry? For which task can AI add value?

TomHCA: We are looking at AI in areas of Natural Language Processing (which includes many problem subsets such as Part of Speech Tagging, Sentence Detection, Document Categorization, Tokenization, and Stemming/Lemmatization), Feature Selection, Data Reduction (i.e., Dimensionality Reduction) and Prediction. But we've gone well beyond that. As a simple example, take key driver analysis: if we have a large number of potential predictors, which are the most important in driving a KPI like customer satisfaction?
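A minimal illustration of the key-driver idea, with entirely hypothetical data: rank candidate predictors by the strength of their Pearson correlation with the KPI. This is a deliberate simplification; real key-driver analysis typically uses multivariate models (regressions, random forests, and the like) rather than one-at-a-time correlations.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

satisfaction = [9, 7, 3, 8, 2, 10, 4, 6]        # the KPI (hypothetical)
drivers = {                                      # candidate predictors
    "mentions_speed": [1, 1, 0, 1, 0, 1, 0, 1],
    "mentions_price": [0, 1, 1, 0, 1, 0, 1, 1],
    "comment_length": [40, 55, 80, 35, 90, 30, 70, 50],
}

# Rank drivers by absolute correlation with the KPI, strongest first
ranked = sorted(drivers, key=lambda k: -abs(pearson(drivers[k], satisfaction)))
print(ranked)
```

The point is only to show the shape of the problem: many candidate inputs, one KPI, and an automated ranking of which inputs matter most.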

ESOMAR: Can you share any inspirational examples from this industry or related industries (advertisement, customer service) that illustrate these opportunities?

TomHCA: As one quick example, a user of OdinText I recently spoke to used the software to investigate which text comments were most likely to drive membership in one of several predefined important segments. The nice thing about AI is that it can be very fast. The not-so-nice thing is that at first glance some of the items identified in the output can either be too obvious or, at the other extreme, not make any sense whatsoever. The gold is in the items somewhere in the middle. The trick is to find a way for the human to interact with the output that gives them confidence in and understanding of the results.

A human is not capable of correctly analyzing thousands, hundreds of thousands, or even millions of comments/datapoints, whereas AI will do it correctly in a few seconds. The downside of AI is that some outcomes are correct but not humanly insightful or actionable. It's easier for me to give examples of when it didn't work so well, since it's hard for me to share info on how our clients are using it. But for instance, AI recently found that people mentioning 'good' three times in their comments was the best driver of NPS score; this is evidently correct but not useful to a human.

In another project, a new AI approach we were testing reported that one of the most frequently discussed topics was "colons". But this wasn't medical data! It turns out "cola" is a plural of "colon"; I didn't know that. Anyway, people were discussing Coca-Cola, and the AI read that as colons. This is exactly the part of AI that needs work if it is to become more prevalent in market research.

Since I can't talk too much about how our clients use our software on their data, in a way it's easier for me to give a non-MR example. Imagine getting into a totally autonomous car (notice I didn't have to use the word AI to describe that). You know it's going to be traveling 65 mph down the highway, changing lanes, accelerating and stopping along with other vehicles, etc.

How comfortable would you be stepping into that car today if we had painted all the windows black so you couldn't see what was going on? Chances are you wouldn't want to do it. You would worry at every turn that you might be a casualty of oncoming traffic or a tree. I think that's partly what AI is like right now in analytics. Even if we're able to perfect the output to be 99 or 100% correct, without knowing how we got there it will make you feel a bit uncomfortable. Yet showing you exactly what the algorithm did to arrive at the solution is very difficult.

Anyway, the upside is that in a few years perhaps (not without some significant trial and error and testing), we'll all be comfortable enough to trust these things to AI. In my car example, you'd be perfectly fine getting into an autonomous car and never looking at the road, instead doing something else like working on your PC or watching a movie.

The same could be true of a marketing research question. Ultimately the end goal would be to ask the computer a business question in natural language, written or spoken, and the computer deciding what information was already available, what needed to be gathered, gathering it, analyzing it, and presenting the best actionable recommendation possible.

ESOMAR: There are many stories on how smart or stupid AI is. What would be your take on how smart AI is nowadays? What kinds of research tasks can it perform well? Which tasks are hard for bots to take over?

TomHCA: You know, I guess I think speed rather than smart. In many cases I can apply a series of other statistical techniques to arrive at a similar conclusion, but it will take A LOT more time. With AI, you can arrive at the same place within milliseconds, even with very big and complex data.

And again, the fact that we choose the technique based on which one takes a few milliseconds less to run, without losing significant accuracy or information really blows my mind.

I tell my colleagues working on this that hey, this can be cool, I bet a user would be willing to wait several minutes to get a result like this. But of course, we need to think about larger and more complex data, and possibly adding other processes to the mix. And of course, in the future, what someone is perfectly happy waiting for several minutes today (because it would have taken hours or days before), is going to be virtually instant tomorrow.

ESOMAR: According to an Oxford study, there is a 61% chance that the market research analyst job will be replaced by robots in the next 20 years. Do you agree or disagree? Why?

TomHCA: Hmm. 20 years is a long time. I’d probably have to agree in some ways. A lot of things are very easy to automate, others not so much.

We’re certainly going to have researchers, but there may be fewer of them, and they will be doing slightly different things.

Going back to my example of autonomous cars for a minute: I think it will take time for us to learn, improve and trust more in automation. At first autonomous cars will retain the human ability to take over at any time. It will be like cruise control is now, an accessory. Then we will move more and more toward trusting less in individual human actors, and we may even decide to take away the ability for humans to intervene in driving as a safety measure, once we've got enough statistics on computers being safe. They would have to reach a level of safety way beyond humans for this to happen, though, probably 99.99% or more.

Unlike cars though, marketing research usually can’t kill you. So, we may well be comfortable with a far lower accuracy rate with AI here.  Anyway, it’s a nice problem to have I think.

ESOMAR: How do you think research participants will react towards bot researchers?

TomHCA: Theoretically they could work well. Realistically, I'm a bit pessimistic. The ability to use bots for spam, phishing and fraud in a global online wild west (it cracks me up how certain countries think they can control the web and make it safer) is a problem no government or trade organization will be able to prevent.

I'm not too happy when I get a phone call or email about a survey now. But with the slower, more human approach, it seems a little less dangerous; you have more time to feel comfortable with it. I guess I'm playing devil's advocate here, but we already have so many ways to get various interesting data that I think I can wait on bots. If they truly are going to be useful and accepted, it will be proven in other industries well before marketing research.

But yes, theoretically it could work well. But then again, almost anything can look good in theory.

ESOMAR: How do you think clients will feel about the AI revolution in our industry?

TomHCA: So, we were recently asked to use OdinText to visualize what 3,000 marketing research suppliers and clients thought about why certain companies were or were not innovative for the 2017 GRIT Report. One of the analyses/visualizations we ran, which I thought was most interesting, showed the differences between why clients said a supplier was innovative vs. why suppliers said those firms were innovative.

I published the chart on the NGMR blog for those who are interested [ http://nextgenmr.com/grit-2017 ], and the differences couldn't have been starker. Suppliers kept using buzzwords like "technology", "mobile", etc., whereas clients used real end-result terms like "know-how", "speed", etc.

So I'd expect to see the same thing here. And certainly, as AI is applied and implemented as I said above, we'll stop thinking of it as a buzzword and just go back to talking about the end goal. Something will be faster and better and get you something extra; how it gets there doesn't matter.

Most people have no idea how a gasoline engine works today. They just want a car that will look nice and get them there with comfort, reliability and speed.

After that it’s all marketing and brand positioning.

 

[Thanks for reading today. We’re very interested to hear your thoughts on AI as well. Feel free to leave questions or thoughts below, request info on OdinText here, or Tweet to us @OdinText]

Congratulations 2017 NGMR Award Winners!

In case you weren't at The Market Research Event (TMRE) last week and missed the news, here are the NGMR Award winners for 2017. Winners across the three categories (Most Innovative Research Method, Industry Change Agent, and Outstanding Disruptive Start-Up) were: Merck – Lisa Courtade, InsightsNow – David Lundahl, and IncognitoResearch – Greg Weston.

OdinText was proud to co-sponsor this year’s award ceremony with VoxPopMe.

Please join us in congratulating this year’s winners!

@OdinText

Of Tears and Text Analytics

An OdinText User Story - Text Analytics Tips Guest Post (AI Meets VOC)

Today on the blog we have another first in what will soon be an ongoing series: we're inviting OdinText users to contribute to the Text Analytics Tips blog. Today Kelsy Saulsbury is guest blogging. Kelsy is a relatively new user of OdinText, though she's jumped right in and is doing some very interesting work.

In her post she ponders an apropos topic: whether automation via artificial intelligence may make some tasks too easy, and what, if anything, might be lost by not having to read every customer comment verbatim.

 

Of Tears and Text Analytics By Kelsy Saulsbury Manager, Consumer Insights & Analytics

“Are you ok?” the woman sitting next to me on the plane asked.  “Yes, I’m fine,” I answered while wiping the tears from my eyes with my fingers.  “I’m just working,” I said.  She looked at me quizzically and went back to reading her book.

I had just spent the past eight hours in two airports and on two long flights, which might make anyone cry.  Yet the real reason for my tears was that I had been reading hundreds of open-end comments about why customers had decided to buy less from us or stop buying from us altogether.  Granted, eight hours hand-coding open ends wasn't the most accurate way to quantify the comments, but it did allow me to feel our customers' pain, from the death of a spouse to financial hardship from a lost job.  Other reasons for buying less food weren't quite as sad — children off to college or eating out more after retirement and a lifetime of cooking.

I could also hear the frustration in their voices on the occasions when we let them down.  We failed to deliver when we said we would, leaving the dessert missing from a party.  They took off work to meet us, and we never showed.  Anger at time wasted.

Reading their stories allowed me to feel their pain and better share it with our marketing and operations teams.  However, I couldn’t accurately quantify the issues or easily tie them to other questions in the attrition study.  So this year when our attrition study came around, I utilized a text analytics tool (OdinText) for the text analysis of our open ends around why customers were buying less.

It took 1/10th of the time to see more accurately how many people talked about each issue.  It allowed me to better see how the issues clustered together and how they differed based on levels of overall satisfaction.  It was fast, relatively easy to do, and directly tied to other questions in our study.

I’ve seen the benefits of automation, yet I’m left wondering how we best take advantage of text analytics tools without losing the power of the emotion in the words behind the data.  I missed hearing and internalizing the pain in their voices.  I missed the tears and the urgency they created to improve our customers’ experience.

 

Kelsy Saulsbury, Manager, Consumer Insights & Analytics, Schwan's Company

 

A big thanks to Kelsy for sharing her thoughts on OdinText's Text Analytics Tips blog. We welcome your thoughts and questions in the comment section below.

If you’re an OdinText user and have a story to share please reach out. In the near future we’ll be sharing more user blog posts and case studies.

@OdinText

OdinText Voted #1 Most Innovative Market Research Company in North America and #4 Worldwide!

GRIT Industry Survey Ranks OdinText #1 Most Innovative Market Research Services Provider in North America and #4 Worldwide, Up 32 Places to Become Fastest Rising Company!

I am thrilled to announce that the GreenBook Research Industry Trends (GRIT) Report just came out today and the industry has ranked OdinText the #1 most innovative research services provider in North America and the #4 most innovative research company in the world!

GRIT Top 5 Marketing Research Firms 2017

It’s only been one year since we first debuted on the list—and only two years since OdinText launched—and we’ve already jumped 32 spots, making us the fastest rising company on the list.

As a start-up, it’s a huge honor to appear alongside venerable research giants like Nielsen and Ipsos. We’ve come a long way in an incredibly short time, but to be ranked as the most innovative research provider in North America by members of the industry really raises the bar for us.

I’m so very grateful to our users and fans for voting for us, but honestly our research industry clients are the real innovators. We simply provide the tool; you make the magic happen. I’m frequently blown away by the creativity many of you bring to bear using OdinText to unearth insights in ways even I hadn’t thought of.

Thanks also to GreenBook Blog’s Editor-in-Chief and Publisher of GRIT Lenny Murphy for all of his hard work and for calling OdinText “a stand-out example of a technology-enabled solution based on established, applied research principles” and “definitely one to watch!” (Check out our press release for more details.)

Lastly, my congratulations to the other fantastic and innovative research providers named to the GRIT Top 50. All of the companies on this list are worth a close look, and some of the new up-and-comers may top the list in coming years (check out all 50 firms and the full report here).

Thanks again for your support and congratulations to the GRIT Top 50!

@TomHCAnderson

About Tom H. C. Anderson

Tom H. C. Anderson is the founder and managing partner of OdinText, a venture-backed firm based in Stamford, CT whose eponymous, patented SAS platform is used by Fortune 500 companies like Disney, Coca-Cola and Shell Oil to mine insights from complex, unstructured and mixed data. A recognized authority and pioneer in the field of text analytics with more than two decades of experience in market research, Anderson is the recipient of numerous awards for innovation from industry associations such as CASRO, ESOMAR and the ARF. He was named one of the "Four under 40" market research leaders by the American Marketing Association in 2010. He tweets under the handle @tomhcanderson.

You Asked for It. Here’s a Chance to Learn More about Our International Culture Poll…

It’s True: You Only Need One Open-Ended Question and Language Doesn’t Matter!

First of all, thank you all so much for the incredible response to this week’s multi-country, multilingual Text Analytics Poll!

I’ve received a flood of email and calls for additional information and I’m always happy to share, so if you have questions or want to geek out with me, please feel free to contact me on our website, LinkedIn or Twitter.

While so many of you thought the findings of our poll were remarkable, I was pleased that the implications for researchers weren’t lost on anyone, notably:

  • A single analyst, speaking English only, can today analyze data in eight different languages,

and

  • In an age of steeply declining response rates, one can gather deep insights on a multi-dimensional subject with just a single question!

This analysis of more than 15,500 text comments spanning 11 cultures, 10 countries and eight languages really showcased the power and practicality of modern text analytics.

So much so, in fact, that I am delighted to announce that I’ve been invited by the Insights Association to present on this topic at their inaugural analytics conference, NEXT: Advancing Insights Through Innovation & Research, May 9-10 in New York.

For what it’s worth, I really got a lot out of attending the Insights Association’s CEO conference earlier this year (I blogged about it here).

Anyone interested in conducting international, multilingual research on the scale of our poll this week easily, quickly and affordably will not want to miss my presentation. Please feel free to use my speaker code [NEXTTA15] to register at a 15% discount.

If you won’t be able to attend NEXT, or you can’t wait until May to learn more about what OdinText can do for YOU, please request additional info or a demo here.

Thanks again for your readership, support and interest in what we are doing!

@TomHCAnderson


Text Analytics: It's Not Just for BIG Data

In a world focused on the value of Big Data, it's important to realize that Small Data is meaningful, too, and worth analyzing to gain understanding. Let me show you with a personal example. If you're a regular reader of the OdinText blog, you probably know that our company President, Tom Anderson, writes about performing text analytics on large data sets.  And yes, OdinText is ideal for understanding data after launching a rapid survey and collecting thousands of responses.

However, for this blog post, I'm going to focus on the use of Text Analytics for a smaller, nontraditional data set: emails.

SMALL Data (from email) Text Analytics

I recently joined OdinText as Vice President, working closely with Tom on all our corporate initiatives. I live in a small town in Connecticut with an approximate population of 60,000.  Last year I was elected to serve our town government as a member of the RTM (Representative Town Meeting) along with 40 other individuals.  Presently, our town's budget is $290M, and the RTM is designing the budget for the next year.

Many citizens email elected members to let them know how they feel about the budget.  To date, I have received 280 emails. (Before you go down a different path with this, please know that I respond personally to each one -- people who take the time to write me deserve a personal response.  I did not and will not include in this blog post how I intend to vote on the upcoming budget, nor will I include anything about party affiliations. And I certainly will not share names.)

As the emails were coming in, I started to wonder … what if I ran the data I was receiving through OdinText?  Would I be able to use the tool to identify, understand and quantify the themes in people’s thoughts on how I should vote on the budget?

The Resulting Themes from Small Data Analytics

A note about the methodology:  Each email that I received contained the citizen's name, their email address and content in open text format.  Without a key driver metric like OSAT, CSAT or NPS to analyze the text against, I chose to use overall sentiment. Here is what I learned:


Emails about the town budget show that our citizens feel Joy but RTM members need to recognize their Sadness, Fear and Anger


Joy:

“I have been a homeowner in Fairfield for 37 years, raised 4 kids here and love the community.”

Sadness:

“I am writing you to tell you that I am so unhappy with the way you have managed our town.”

Fear:

“My greatest concern seems to be the inability of our elected members to cut spending and run the town like a business”

Anger:

“We live in a very small house and still have to pay an absurd amount of money in taxes.”

Understanding the resulting themes in their own words

Reduce Taxes (90.16%)

“Fairfield taxes are much higher than surrounding communities.”

“Fairfield taxes are out of line with similar communities”

“The town has to stop raising taxes at such a feverish rate.”

“High taxes are slowly eroding the town of Fairfield.”

Moving if Taxes are Increased (25.13%)

“I am on a fixed income at 64, and cannot afford Fairfield’s taxes now. Please recognize that I cannot easily sell my house, due to the economy & the amount of homes on the market here”

“regret to say most of our colleagues and friends have an "exit strategy" to leave Fairfield”

“Our town is losing residents who are fed up and have moved or are moving to Westport and other towns with lower mil rates”

Reduce Spending (33.33%)

“... bring spending under control”

“Stop the spending please”

“... needs to trim fat at the local level, cut services, stop spending money”

“We need to keep taxes down as much as possible - even if it means spending cuts.”

Education ‘don’t cut’ (8.74%)

“… takes great pride in its education system”

“… promise of an excellent public education”

“… fiscal responsibility; however, not at the expense of the children and their right to an excellent education.”

Education ‘please cut’ (9.83%)

“Let's shave funding from all programs including education”

“... deeply questioning our education budget”

“... reduce the Education budget”

“I have a cherished budgetary item that I want protected--the library. Cut that last, after you cut education, police, official salaries”
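For readers curious how theme percentages like the ones above get produced, here is a minimal keyword-lexicon sketch. To be clear, this is an illustration only: the theme names and keyword lists are hypothetical, and OdinText's actual models are far more sophisticated than substring matching.

```python
from collections import Counter

# Hypothetical keyword lexicons -- illustrative only.
THEMES = {
    "reduce_taxes": ["taxes", "tax rate", "mil rate"],
    "reduce_spending": ["spending", "cut services", "trim fat"],
    "education": ["education", "school"],
}

def tag_email(text):
    """Return the set of themes whose keywords appear in an email."""
    lowered = text.lower()
    return {theme for theme, words in THEMES.items()
            if any(w in lowered for w in words)}

def theme_percentages(emails):
    """Percentage of emails mentioning each theme (one email may hit several)."""
    counts = Counter()
    for email in emails:
        counts.update(tag_email(email))
    n = len(emails)
    return {theme: round(100 * c / n, 2) for theme, c in counts.items()}

emails = [
    "The town has to stop raising taxes at such a feverish rate.",
    "Stop the spending please",
    "We need to keep taxes down - even if it means spending cuts.",
]
print(theme_percentages(emails))
```

Note that the percentages can sum to more than 100% because a single email often touches several themes, exactly as in the results above.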

Big Value from Small Data in Little Time

I performed this text analysis in 30 minutes. Ironically, it has taken me longer to write this blog post than it did to quantify the text from all those emails. Yet the information and understanding I have gleaned will empower me as I make decisions on this important topic. A small investment in small data has paid off in a BIG way.

Tim Lynch - @OdinText

Poll: What Other Countries Think of Trump’s Immigration Order

Text Analytics Poll™ Shows Australians, Brits, and Canadians Angry About Executive Order Temporarily Barring Refugees (Part II of II)

In my previous post, we compared text analysis of results from an open-ended survey instrument with a conventional Likert-scale rating poll to assess where 3,000 Americans really stand on President Trump’s controversial executive order temporarily barring refugees and people from seven predominately-Muslim countries from entering the U.S.

Today, we’re going to share results from an identical international study that asked approx. 9,000 people—3,000 people from each of three other countries—what they think about the U.S. immigration moratorium ordered by President Trump.

But first, a quick recap…

As I noted in the previous post, polling on this issue has been pretty consistent insomuch as Americans are closely divided in support/opposition, but the majority position flips depending on the poll. Consequently, the accuracy of polling has again been called into question by pundits on both sides of the issue.

By fielding the same question first in a multiple-choice response format and a second time providing only a text comment box for responses, and then comparing results, we were able to not only replicate the results of the former but gain a much deeper understanding of where Americans really stand on this issue.

Text analysis confirmed a deeply divided America, with those opposing the ban just slightly outnumbering those who support the order (42% vs. 39%, a gap of less than 3%). Almost 20% of respondents had no opinion or were ambivalent on this issue.

Bear in mind that text analysis software such as OdinText enables us to process and quantify huge quantities of comments (in this case, more than 1,500 replies from respondents using their own words) in order to arrive at the same percentages that one would get from a conventional multiple-choice survey.

But the real advantage to using an open-ended response format (versus a multiple-choice) to gauge opinion on an issue like this is that the responses also tell us so much more than whether someone agrees/disagrees or likes/dislikes. Using text analytics we uncovered people’s reasoning, the extent to which they are emotionally invested in the issue, and why.

Today we will be looking a little further into this topic with data from three additional countries: Australia, Canada and the UK.

A note about multi-lingual text analysis and the countries selected for this project…

Different software platforms handle different languages with various degrees of proficiency. OdinText analyzes most European languages quite well; however, analysis of Dutch, German, Spanish or Swedish text requires proficiency in said language by the analyst. (Of course, translated results, including and especially machine-translated results, work very well with text analytics.)

Not inconspicuously, each of the countries represented in our analysis here has an English-speaking population. But this was not the primary reason that we chose them; each of these countries has frequently been mentioned in news coverage related to the immigration ban: The UK because of Brexit, Australia because of a leaked telephone call between President Trump and its Prime Minister, and Canada due to its shared border and its Prime Minister’s comments on welcoming refugees affected by the immigration moratorium.

Like our previous U.S. population survey, we used a nationally-representative sample of n=3000 for each of these countries.

Opposition Highest in Canada, Lowest in the UK

It probably does not come as a surprise to anyone who’s been following this issue in the media that citizens outside of America are less likely to approve of President Trump’s immigration moratorium.

I had honestly expected Australians to be the most strongly opposed to the order in light of the highly-publicized and problematic telephone call transcript leaked last week between President Trump and the Australian Prime Minister (which, coincidentally, involved a refugee agreement). But interestingly, people from our close ally and neighbor to the north, Canada, were most strongly opposed to the executive order (67%). The UK had proportionately fewer opposing the ban than Australia (56% vs. 60%), but the numbers of people opposed to the policy in both countries significantly lagged the Canadians.

Emotions Run High Abroad

Deriving emotions from text is an interesting and effective measure for understanding people’s opinions and preferences (and more useful than the “sentiment” metrics often discussed in text analytics and, particularly, in social media monitoring circles).

The chart below features OdinText’s emotional analysis of comments for each of the four countries across what most psychologists agree constitute the eight major emotion categories:

We can see that while the single highest emotion in American comments is joy/happiness, the highest emotion in the other three countries is anger. Canadians are angriest. People in the UK and Australia exhibit somewhat greater sadness and disgust in their comments. Notably, disgust is an emotion that we typically see only rarely, usually in food categories. Here it takes the form of vehement rejection, with terms such as “sickened,” “revolting,” “vile,” and, very often, “disgusted.” It is also worth noting that in many cases people directed their displeasure at President Trump personally.

Examples:

"Trump is a xenophobic, delusional, and narcissistic danger to the world." – Canadian (anger)

“Most unhappy - this will worsen relationships between Muslims and Christians.” – Australian (sadness)

"It's disgusting. You can't blame a whole race for the acts of some extremists! How many white people have shot up schools and such? Isn't that an act of terror? Ban guns instead. He's a vile little man.” – Australian (disgust)

UK comments contain the highest levels of fear/anxiety:

"I am outraged. A despicable act of racism and a real worry for what political moves may happen next." – UK (fear/anxiety)

That said, it is also important to point out that there is a sizeable group in each country whose expressions of agreement rise to the level of joy:

“Great move! He should stop all people that promote beating of women” – Australian (joy)

“Sounds bloody good would be ok for Australia too!” – Australian (joy)

“EXCELLENT. Good to see a politician stick by his word” – UK (joy)

“About time, I feel like it's a great idea, the United States needs to help their own people before others. If there is an ongoing war members of that country should not be allowed to migrate as the disease will spread.” – Canadian (joy)
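As a toy illustration of how comment-level emotion labels roll up into the per-country comparisons discussed above, here is a minimal sketch. The labels and counts are invented for the example; in practice OdinText assigns the emotion categories automatically from the comment text.

```python
from collections import Counter, defaultdict

# Toy (country, emotion) pairs, as an emotion classifier might emit them.
labeled = [
    ("Canada", "anger"), ("Canada", "anger"), ("Canada", "joy"),
    ("UK", "fear"), ("UK", "anger"),
    ("Australia", "disgust"), ("Australia", "sadness"),
]

def emotion_profile(labels):
    """Percentage of each emotion within each country's comments."""
    by_country = defaultdict(Counter)
    for country, emotion in labels:
        by_country[country][emotion] += 1
    return {
        country: {emo: round(100 * n / sum(tally.values()), 1)
                  for emo, n in tally.items()}
        for country, tally in by_country.items()
    }

print(emotion_profile(labeled))
```

With real data the same rollup runs over thousands of labeled comments per country, which is what makes cross-country emotion comparisons like the chart above possible.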

Majority of Canadians Willing to Take Refugee Overflow

Given Canada’s proximity to the U.S., and since people from Canada were the most strongly opposed to President Trump’s executive order, this raised the question of whether Canadians would then support a measure to absorb refugees that would be denied entrance to the U.S., as Prime Minister Justin Trudeau appears to support.

(Note: In a Jan. 31 late-night emergency debate, the Canadian Parliament did not increase its refugee cap of 25,000.)

 

A solid majority of Canadians would support such an action, although it’s worth noting that there is a significant difference between the numbers of Canadians who oppose the U.S. immigration moratorium (67%) and the number who indicated they would be willing to admit the refugees affected by the policy.

When asked a follow-up question on whether “Canada should accept all the refugees which are turned away by USA's Trump EO 13769,” only 45% of Canadians agreed with such a measure, 33% disagreed and 22% said they were not sure.

Final Thoughts: How This Differs from Other Polls

Both the U.S. and the international versions of this study differ significantly from any other polls on this subject currently circulating in the media because they required respondents to answer the question in a text comment box in their own words, instead of just selecting from options on an “agree/disagree” Likert scale.

As a result, we were able to not only quantify support and opposition around this controversial subject, but also to gauge respondents’ emotional stake in the matter and to better understand the “why” underlying their positions.

While text analysis allows us to treat qualitative/unstructured data quantitatively, it’s important to remember that including a few quotes in any analysis can help profile and tell a richer story about your data and analysis.

We also used a substantially larger population sample for each of the countries surveyed than any of the conventional polls I’ve seen cited in the media. Because of our triangulated approach and the size of the sample, these findings are in my opinion the most accurate numbers currently available on this subject.

I welcome your thoughts!

@TomHCAnderson - @OdinText


Why Machine Learning is Meaningless

Beware These Buzzwords! The Truth About "Machine Learning" and "Artificial Intelligence"

Machine learning, artificial intelligence, deep learning… Unless you’ve been living under a rock, chances are you’ve heard these terms before. Indeed, they seem to have become a must for market researchers.

Unfortunately, so many precise terms have never meant so little!

For computer scientists these terms entail highly technical algorithms and mathematical frameworks; to the layman they are synonyms; but as far as most of us should be concerned, increasingly, they are meaningless.

My engineers would severely chastise me if I used these words incorrectly—an easy mistake to make since there is technically no correct or incorrect way to use these terms, only strict and less strict definitions.

Nor, evidently, is there any regulation about how they’re used for marketing purposes.

(To simplify the rest of this blog post, let’s stick with the term “machine learning” as a catch-all.)

Add to this ambiguity the fact that no sane company would ever divulge the specifics underpinning their machine learning solution for fear of intellectual property theft. Still others may just as easily hide behind an IP claim.

Bottom line: It is simply impossible for clients to know what they are actually getting from companies that claim to offer machine learning unless the company is able and chooses to patent said algorithm.

It’s an environment that is ripe for unprincipled or outright deceitful marketing claims.

A Tale of Two Retailers

Not all machine learning capabilities are created equal. To illustrate, let’s consider two fictitious competing online retailers who use machine learning to increase their add-on sales:

  • The first retailer suggests other items that may be of interest to the shopper by randomly picking a few items from the same category as the item in the shopper’s cart.

 

  • The second retailer builds a complex model of the customer, incorporating spending habits, demographic information and historical visits, then correlates that information with millions of other shoppers who have a similar profile, and finally suggests a few items of potential interest by analyzing all of that data.

In this simplistic example, both retailers can claim they use machine learning to improve shoppers’ experiences, but clearly the second retailer employs a much more sophisticated approach. It’s simply a matter of the standard to which they adhere.
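The gap between the two fictitious retailers can be made concrete with a small sketch. Both functions below are hypothetical illustrations, not anyone's production system; the catalog and customer profiles are invented for the example.

```python
import random
from collections import Counter

# Hypothetical product catalog for illustration.
CATALOG = {
    "kitchen": ["pan", "whisk", "spatula", "timer"],
    "garden": ["trowel", "gloves", "hose"],
}

def naive_recommend(category, k=2, seed=0):
    """Retailer 1: random picks from the same category.
    Marketable as 'machine learning' under a loose definition,
    but barely more than a table lookup."""
    random.seed(seed)
    return random.sample(CATALOG[category], k)

def profile_recommend(customer, similar_customers, k=2):
    """Retailer 2 (sketch): rank items by how often customers with a
    similar profile bought them -- a simple collaborative-filtering flavor."""
    counts = Counter()
    for other in similar_customers:
        counts.update(other["purchases"])
    # Suggest the most popular peer purchases the customer doesn't own yet.
    return [item for item, _ in counts.most_common()
            if item not in customer["purchases"]][:k]

customer = {"purchases": {"pan"}}
peers = [{"purchases": ["whisk", "timer"]},
         {"purchases": ["whisk", "spatula"]}]
print(profile_recommend(customer, peers))  # "whisk" ranks first
```

Both sellers can truthfully claim they "use machine learning to improve shoppers' experiences"; only the second actually learns anything from the shopper's data.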

This is precisely what I’m seeing in the insights marketplace today.

At the last market research conference I attended, I was stunned by how many vendors—no matter what they were selling—claimed their product leveraged advanced machine learning and artificial intelligence.

Many of the products being sold would not even benefit from what I would classify as machine learning because the problems they are solving are so simple.

Why run these data through a supercomputer and subject them to very complicated algorithms only to arrive at the same conclusions you could come to with basic math?

Even if all these companies actually did what they claimed, in many cases it would be silly or wasteful.

Ignore Buzzwords, Focus on Results

In this unregulated, buzzword-heavy environment, I urge you to worry less about what it’s called and focus instead on how the technology solves problems and meets your needs.

At OdinText, we use advanced algorithms that would be classified as machine learning/AI, yet we refrain from using these buzzwords because they don’t really say anything.

Look instead for efficacy, real-world results and testimonials from clients who have actually used the tool.

And ALWAYS ask for a real-time demo with your ACTUAL data!

Yours truly,

@TomHCanderson

P.S. See firsthand how OdinText can help you learn what really matters to your customers and predict real behavior. Contact us for a demo using your own data here!
