Analítica de Texto en Español – Spanish Text Analysis
"Analítica de Texto en Español" – I didn’t write that; it’s a machine translation of "Text Analytics in Spanish."
Mathematics has often been called the Universal Language, but in an age of instant machine translation, any text, or text data, is as understandable as math.
That’s one of the reasons I was very happy to take part in a special series of interviews in celebration of the Spanish Association of Market Research’s 50th Anniversary.
Several of our clients are analyzing non-English text with OdinText, but in some ways a single monolingual analyst being able to instantly analyze the comments of millions of customers speaking multiple foreign languages is even more exciting. And this isn’t science fiction; many of our global clients have been doing this for some time now.
The current issue of AEDEMO’s magazine (Asociación Española de Estudios de Mercado, Marketing y Opinión) celebrates technology in the world of research, and several prominent researchers have been invited to write on their core areas of expertise. I was honored to give an interview on text analytics.
If you don’t get their magazine you can read our Q&A on their blog here in Spanish or English.
Their Editor Xavier Moraño asked some very interesting and pertinent questions.
I’d love to hear your thoughts and questions.
Tom H. C. Anderson
Chief Research Officer @OdinText
What You Missed at IIEX 2018 – 3 Takeaways
Walking the floor at the Insights Innovation Exchange (IIEX) for a day and a half with our new CEO, Andy Greenawalt, we spoke to several friends, client- and supplier-side partners, and ducked into quite a few exciting startup sessions.
Three things struck me this year:
-Insights Technology is Finally Getting More Innovative. By that I mean there are no longer just slight, immaterial modifications to existing ways of doing things, but actual innovation with disruptive implications (passive monitoring, blockchain, image recognition, more intelligent automation…).
As expected most of this innovation is coming from startups, many of which, while they have interesting ideas, have little to no experience in marketing research - and have yet to prove their use cases.
-A Few Marketing Research Suppliers are picking up their consulting game. Surprisingly perhaps, in this area it seems that change is coming from the qualitative side. For a while qualitative looked like a race to the bottom in terms of price, even more so than what was happening in quantitative research. But there are now a handful of Image/Brand/Ideation ‘Agencies’ whose primary methodologies are qualitative and who are leading the way to a higher value proposition. I will mention the two I’ve been most impressed with specifically: Brandtrust and Shapiro+Raj. Bravo!
-The Opportunity. I think the larger opportunity, if there is one, lies in the ability of the traditional players to partner with and help prove the use cases of some of these newer startup technologies, incorporating them into consulting processes with higher-end value propositions, similar to what the qualitative agencies I noted above have done.
This seems to be both an opportunity and a real challenge. Can Old help New, and New help Old? It may be more likely that the end clients, especially those that are more open to DIY processes, will be the ones that select and prove the use cases of these new technologies offered by the next generation of startups, and therefore benefit the most.
While this too is good, I fear that by leaving some of the traditional companies behind we will lose some institutional thinking and sound methodology along the way.
Either way, I’m more optimistic on new Marketing Research Tech than I’ve ever been.
Keep in mind though, Innovation in Marketing Research should be about more than just speed and lower cost (automation). It should be even more about doing things better, giving the companies and clients we work for an information advantage!
Andy Greenawalt to lead OdinText accelerated growth phase
We are happy to announce serial Inc. 500 entrepreneur Andy Greenawalt as CEO effective June 1. OdinText founder and current CEO Tom H.C. Anderson will transition to the roles of Chief Research Officer and Chairman.
An accomplished tech entrepreneur and leader, Greenawalt has successfully built two Inc. 500 SaaS (software as a service) businesses. Most recently, he was CEO of Continuity, a pioneer in the Regulatory Technology industry, and he remains chairman of its board. Prior to Continuity, Greenawalt founded Perimeter eSecurity, now part of BAE Systems, serving as CEO and CTO and on its board. He is a graduate of the University of Massachusetts, Amherst with a degree in Philosophy and Cognitive Linguistics.
“With more Fortune 500 companies choosing OdinText, Andy Greenawalt’s credentials in innovation, his successful record of building SaaS businesses, and his singular focus on creating customer value make him a perfect fit to lead OdinText through its next phase of growth,” said Anderson.
“OdinText is a truly rare startup with Fortune 500 enterprise customers — the most sophisticated buyers in the world,” said Greenawalt. “This is a testament to the vision and team that Tom Anderson has assembled and it’s a great position to be starting from as a pioneer in the text analytics market. The company is very well positioned to bring a new platform to bear and serve as a cornerstone to the smart enterprise of the future.”
Alison Malloy, the lead investor in OdinText from Connecticut Innovations, stated, “Connecticut Innovations has worked with Andy Greenawalt for 20 years. We have absolute confidence that he’s the right person to realize the market potential of OdinText — which has pioneered the next generation of text analytics — allowing Tom Anderson to focus on the research needed to continue to develop and lead the market with industry-leading products.”
“OdinText has developed patented IP, raised pre-seed funding and created an MVP product,” Greenawalt said. “OdinText is a transformative solution that is now poised to redefine how businesses improve satisfaction, retention and revenue. We expect to grow dramatically.”
7 Things to Know About AI/Machine Learning (Boiled Down to two Cliff Notes that are even more important).
In case you missed our session on Artificial Intelligence and Machine Learning (AI/ML) at the Insights Association’s NEXT conference last week, I thought I would share a bit on the blog about what you missed. We had a full room, with some great questions both during and after the session. However, 30 minutes wasn’t enough time to cover everything thoroughly. In the end we agreed on four takeaways:
AI is part of how research & insights pros will address the ever-increasing demand for fast research results
AI Helps focus on the most important data
AI can’t compensate for bad data
AI isn’t perfect
So today I thought I would share seven additional points about AI/ML that I often get questions on, and then at the end of this post I’m going to share the ‘Cliff Notes’, i.e. just the two most important things you really need to know. So, unless you want to geek out with me a bit, feel free to scroll to the bottom.
OK, first, before we can talk about anything, we need to define what Artificial Intelligence (AI) is and isn’t.
1. AI/ML definition is somewhat fuzzy
AI, and more specifically machine learning (ML), is a term that is abused almost as often as it is used. On the one hand this is because a lot of folks are inaccurately claiming to use it, but it is also because, not unlike big data, its definitions can be a bit unclear and don’t always make perfect sense.
Let’s take this common 3-part regression analysis process:
Data prep (pre-processing, including cleaning, feature identification, and dimension reduction)
Analysis (fitting the model)
Reporting of results
This process, even if automated, would not be considered machine learning. However, switch out regression for a machine learning technique like Neural Nets, SVMs, Decision Trees or Random Forests and bang, it’s machine learning. Why?
Regression models are also created to predict something, and they also require training data. If the data is linear, then there is no way any of these other models will beat regression in terms of ROI. So why would regression not be considered machine learning?
Who knows. Probably just because the authors of the first few academic papers on ML referenced these techniques, and not regression, as ML. It really doesn’t make much sense.
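To make the arbitrariness concrete, here is a minimal Python sketch (scikit-learn on synthetic data; the data and names are illustrative, not from any real project). The three-step workflow is identical for both models; only the estimator changes, yet by convention only the second run "counts" as machine learning.

```python
# Same three-step process -- prep, fit, report -- for both models.
# Swapping LinearRegression for RandomForestRegressor is, by convention,
# what turns this pipeline into "machine learning."
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                        # step 1: prepped features
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=200)

scores = {}
for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    pipe = make_pipeline(StandardScaler(), model)    # step 2: fit the model
    pipe.fit(X, y)
    scores[type(model).__name__] = pipe.score(X, y)  # step 3: report R^2
print(scores)
```

On linear data like this, plain regression fits essentially perfectly, which underlines the point: the workflow, not the label, is what matters.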
2. There are basically 2 types of ML
Some ML approaches, like SVM (Support Vector Machines), are binary, for predicting something like male or female, while others, like Decision Trees, handle multi-class classification.
If you are using decision trees to predict an NPS rating on an 11-point scale, that’s a multi-class problem. However, you can ‘trick’ binary techniques like SVM into solving a multi-class problem by setting them up to run multiple times.
Either way, you are predicting something.
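As a sketch of that trick (scikit-learn on synthetic data; the dataset and parameters are illustrative), a one-vs-rest wrapper turns a binary SVM into a multi-class classifier by training one binary model per class and picking the class whose model scores highest:

```python
# One-vs-rest: train one binary SVM per class, then predict the class
# whose binary classifier returns the highest decision score.
from sklearn.datasets import make_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=300, n_classes=3, n_informative=6,
                           random_state=0)
ovr = OneVsRestClassifier(LinearSVC(max_iter=10000))
ovr.fit(X, y)

n_binary = len(ovr.estimators_)   # one binary SVM per class
accuracy = ovr.score(X, y)
print(n_binary, "binary SVMs,", f"training accuracy {accuracy:.2f}")
```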
3. ML can be slow
Depending on the approach used, like Neural Nets for instance, training a model can take several days on a normal computer. There are other issues with Neural Nets as well, like the difficulty for humans to understand and control what they are doing.
But let’s focus on speed for now. Of course, if you can apply a previously trained model to very similar data, then results will be very fast indeed. This isn’t always possible though.
If your goal is to insert ML into a process to solve a problem which a user is waiting for, then training an algorithm might not be a very good solution. If another technique, ‘machine learning’ or not, can solve the problem much faster with similar accuracy, then that should be the approach to use.
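A common way around the speed problem is the train-once, apply-many pattern: train the slow model ahead of time, persist it, and only run predictions while the user waits. A rough sketch with scikit-learn and joblib (the model, sizes, and file path are all arbitrary examples):

```python
# Training an iterative model is the slow part; applying a trained
# model is fast. Persist it once and reload it for future predictions.
import os
import tempfile
import time

import joblib
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

t0 = time.perf_counter()
model = MLPClassifier(hidden_layer_sizes=(50,), max_iter=200,
                      random_state=0).fit(X, y)      # slow: many iterations
train_s = time.perf_counter() - t0

path = os.path.join(tempfile.mkdtemp(), "model.joblib")
joblib.dump(model, path)                             # persist once
reloaded = joblib.load(path)

t0 = time.perf_counter()
reloaded.predict(X)                                  # fast: one forward pass
predict_s = time.perf_counter() - t0
print(f"train: {train_s:.2f}s, predict: {predict_s:.4f}s")
```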
4. Neural Nets are not like the brain
I’ll pick on Neural Nets a bit more, because they are almost a buzz word unto themselves. That’s because a lot of people have claimed they work like the human brain. This isn’t true. If we’re going to be honest, we’re not sure how the human brain works. In fact, what we do know about the human brain makes me think the human brain is quite different.
The human brain contains nearly 90 billion neurons, each with thousands of synapses. Some of these fire and send information for a given task, some will not fire, and yet others fire and do not send any information. The fact is we don’t know exactly why. This is something we are still working on with hopes that new more powerful quantum computers may give us some insight.
We can however map some functions of the brain to robotics to do things like lift arms, without knowing exactly what happens in between.
There is one problematic similarity between the brain and Neural Nets though: we’re not quite sure how Neural Nets work either. When running a Neural Net, we cannot easily control or explain what happens in the intermediary nodes. So this (along with the speed issue I mentioned earlier) is one more reason to be cautious about using Neural Nets.
5. Not All Problems are best solved with Machine Learning
Are all problems best solved with ML? No, probably not.
Take pricing as an example. People have solved for this problem for years, and there are many different solutions depending on your unique situation. These solutions can factor in everything from supply and demand, to cost.
Introducing machine learning, or even just a simpler non-ML automated technique, can sometimes cause unexpected problems. As an example, consider the automated real-time pricing model from Uber, which takes supply and demand as inputs. When fares skyrocketed to over $1,000 as drunk people were looking for rides on New Year’s Eve, the model created a lot of angry customers and bad press.
More on dangers of AI/ML in a bit…
6. It’s harder to beat humans than you think
One of the reasons ML is often touted as a solution is because of how much better than humans computers allegedly are. While theoretically there is truth to this, when applied to real world situations we often see a less ideal picture.
Take self driving cars as an example. Until recently they were touted as “safer than humans”. That was until they began crashing and blowing up.
Take the recent Tesla crash as an example. The AI/ML accidentally latched onto an older, faded lane line rather than the newly painted correct lane line and proceeded without braking, at full speed, into a head-on collision with a divider. A specific fatal mistake no human would have been likely to make.
The truth is if we remove driving under the influence and falling asleep from the statistics (two things that are illegal anyway), then human accident statistics are incredibly low.
7. ML is Context Specific!
This is an important one. IBM Watson might be able to Google Lady Gaga’s age quickly, but Watson will be completely useless in identifying her in a picture. Machine learning solutions are extremely context specific.
This context specificity also comes into play when training any type of model. The model will only be as good as the training data used to create it, and its similarity to the future data it is used on for predictions.
Model validation methods only test the accuracy of the model on the exact same type of data (typically a random portion of the same data set); they do not test the quality of the data itself, nor the performance of the model on future data that differs from the training data.
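A toy illustration of that limitation (scikit-learn on synthetic data; the "drift" is contrived purely for the example): the held-out score looks excellent, yet it says nothing about how the model fares once the data, and the underlying rule, shift.

```python
# Validation on a random holdout only measures performance on data drawn
# like the training data; a distribution shift can wreck the model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)              # today's rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
holdout = clf.score(X_te, y_te)                      # looks excellent

X_new = rng.normal(loc=3.0, size=(1000, 2))          # tomorrow's data drifts...
y_new = (X_new[:, 0] - X_new[:, 1] > 0).astype(int)  # ...and so does the rule
drifted = clf.score(X_new, y_new)                    # far worse than holdout
print(f"holdout accuracy {holdout:.2f}, after drift {drifted:.2f}")
```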
Be wary of anyone who claims their AI does all sorts of things well, or does so with 100% accuracy.
My final point about Machine Learning & two Cliff Notes…
If some of the above points make it sound as if I’m not bullish on machine learning, I want to clarify that in fact I am. At OdinText we are continuously testing and implementing ML when it makes sense. I’m confident that we as an industry will get better and better at machine learning.
In the case of Tesla above, there are numerous ways to make the computers more effective, including special paint that would be easier for computer cameras to see, and traffic lights that send signals telling the computer “I am red,” “I am green,” etc., rather than making it guess via color/light sensing. Things will certainly change, and AI/ML will play an important part.
However, immediately after my talk at the Insights Association I had two very interesting conversations about how to “identify the right AI solution.” In both instances, the buyer was evaluating vendors that made a lot of claims. Way too many, in my opinion.
If you forget everything else from today’s post, please remember these two simple Cliff Notes on AI:
You don’t buy AI; you buy a solution that does a good job solving your need (which may or may not involve AI)
Remember AI is context specific, and not perfect. Stay away from anyone who says anything else. Select vendors you know you can trust.
There’s no way to know whether something is AI or not without looking at the code.
Unlike academics, who share everything under peer review, companies protect their IP, trade secrets and code, so there will technically be no way for you to evaluate whether something actually is “AI” or not.
However, the good news is that this makes your job easier. Rather than reviewing someone’s code, your job is simply to decide whether the product solves your needs well or not.
In fact, in my opinion it is far more important to choose a vendor who is honest with you about what they can do to solve your problems. If a vendor claims they have AI everywhere that solves all kinds of various needs, and does so with 100% accuracy, run!
Insight practitioners from Aon, Conagra and Verizon speak out on what they think about AI and Machine Learning
Artificial Intelligence and Machine Learning are hot topics today in many fields, and marketing research is no exception. At the Insights Association’s NEXT conference on May 1 in NYC I've been asked to take part in a practitioner panel on AI to share a bit about how we are using AI in natural language processing and analytics at OdinText.
While AI is an important part of what data mining and text analytics software providers like OdinText do, before the conference I thought I’d reach out to a few client-side colleagues to see what they think about the subject.
With me today I have David Lo, Associate Partner at the Scorpio Partnership (a collaboration between McLagan and the Aon Hewitt Corporation), Thatcher Schulte, Sr. Director, Strategic Insights at Conagra Brands, and Jonathan Schwedel, Consumer & Marketplace Insights at Verizon, all of whom will also be speaking at NEXT.
THCA: Artificial Intelligence means different things to different people and companies. What does it mean to you, and how, if at all, are you planning to use it in your departments?
Thatcher Schulte – Conagra:
Artificial intelligence is like many concepts we discuss in business: it’s a catch-all that loses its meaning as more and more people use it. I’ve even heard people refer to “Macros” as AI. To me it means trying to make machines make decisions like people would, though that raises the question of whether it would be “intelligent.” I make stupid decisions all the time.
We’re working with Voice to make inferences on what help consumers might need as they make decisions around food.
Jonathan Schwedel – Verizon:
I'm not a consumer insight professional - I'm a data analyst who works in the insights department, so my perspective is different. There are teams in other parts of Verizon who are doing a lot with more standard artificial intelligence and machine learning approaches, so I want to be careful not to conflate the term with broader advanced analytics. I have this image of cognitive scientists sitting in a lab, and am tempted to reduce "AI" to that.
For our specific insights efforts, we work on initiatives that are AI-adjacent - with automation, predictive modeling, machine learning, and natural language processing, but with a few exceptions those efforts are not scaled up, and are ad hoc on a project by project basis. We dabble with a lot of the techniques that are highlighted at NEXT, but I'm not knowledgeable enough about our day to day custom research efforts to speak well to them. One of the selling points of the knowledge management system we are launching is that it's supposed to leverage machine learning to push the most relevant content to our researchers and partners around our company.
David Lo – Scorpio Partnership/McLagan:
Working in the financial services space, and specifically within wealth management, AI is a hot topic as it relates to how it will change advice delivery.
[we are looking at using it for] Customer journey mapping through the various touchpoints they have with an organization.
THCA: There’s a lot of hype these days around AI. What is your impression on what you’ve been hearing, and about the companies you’ve been hearing it from, is it believable?
Thatcher Schulte - Conagra:
I don’t get pitched on AI a lot except through email, which frankly hurts the purpose of those people pitching me solutions. I don’t read emails from vendors.
Jonathan Schwedel – Verizon:
It's easy to tell if someone does not have a minimum level of domain expertise. The idea that any tool or platform can provide instant shortcuts is fiction. Most of the value in these techniques is very matter-of-fact and practical. Fantastic claims demand a higher level of scrutiny. If instead the conversation is about how much faster, cheaper, or easier they are, those are at least claims that can be quickly evaluated.
David Lo – Scorpio Partnership/McLagan:
Definitely a lot of hype. I think as it relates to efficiency, the hype is real. We will continue to see complex tasks such as trade execution optimized through AI.
THCA: For the Insights function specifically, how ready do you think the idea of completely unsupervised vs. supervised/guided AI is? In other words, do you think the one-size-fits-all AI provided by the likes of Microsoft, Amazon, Google and IBM is very useful for research, or does AI need to be more customized and fine-tuned/guided before it can be very useful to you?
And related to this, which areas of market research do you think AI is currently better suited to?
Thatcher Schulte - Conagra:
Data sets are more important to me than the solutions that are in the market. Food decision making is specialized and complex and it varies greatly by what life stage you are in and where you live. Valid data around those factors are frankly more important than the company we push the data through.
David Lo – Scorpio Partnership/McLagan:
Guard rails are always important, particularly as it relates to unique customer needs.
Jonathan Schwedel – Verizon:
Most custom quantitative research studies use small sample sizes, making it often not feasible to do bespoke advanced analytics. When you are working with much larger data sets (the kind you'd see in analytics as a function as opposed to insights), AWS and Azure let you scale, especially with limited resources. It's a good general approach to use algorithmic type approaches with brand new data sets, and then start customizing when you hit the point of diminishing returns, in a way that your work can later be automated at scale.
[In regard to marketing research] It depends how you're defining research - are we broadening that to customer experience? Then text analytics is the most prominent area, because there are many prominent use cases for large companies at the enterprise level. If "market research" covers broader buckets of customer data, then there's potentially a lot you can do.
THCA: OK, so which areas are currently less well suited to AI?
David Lo – Scorpio Partnership/McLagan:
Hard to say, but probably less suited toward qualitative research. In my line of business we do a lot of work among UHNW investors where sample sizes are very small and there isn’t a lot of activity in the online space.
Jonathan Schwedel – Verizon:
I think sample size is often an issue when talking about research studies. Then it comes down to the research design. Is the machine learning component going to be baked in from the start, or is it just bolted on? A lot of these efforts are difficult to quantify. Verizon's insights group learns things all the time from talking to and observing consumers that we would not have otherwise thought to ask.
THCA: Does anyone have thoughts on usefulness of chat bots and/or other social media/twitter bots currently?
Jonathan Schwedel – Verizon:
They could potentially allow you to collect a lot more data, and reach under-represented consumer groups in the channels they want to be in. A lot of our team's focus at Verizon is on the user experience and building a great digital experience for our customers. I think they will be important tools to understand and improve in that area.
THCA: Realistically where do you see AI in market research being 3-4 years from now?
David Lo – Scorpio Partnership/McLagan:
Integrated more fully with traditional quantitative research techniques, with researchers re-focusing their efforts on the more creative and thoughtful interpretations of the output.
Jonathan Schwedel – Verizon:
They will provide some new techniques that will be important for specific use cases, but I think the bulk of the fruitful efforts will come from automation and improved scalability. The desire to do more with less is pretty universal, and there's a good roadmap there. The prospect of genuinely groundbreaking insights offers a lot more uncertainty, but it would be great if we do see that level of innovation.
Big thanks to Jonathan, David and Thatcher for sharing their insights and opinions on AI.
If you’re interested in further discussion on AI and Machine Learning, please feel free to post a comment here, or join me for the 'What’s New & What’s Ahead for AI & Machine Learning?' panel on May 1st. I will be joined by John Colias of Decision Analyst, Andrew Konya of Remesh, and moderator Kathryn Korostoff of Research Rockstar.
PS. If you would like to learn more about how OdinText can help you better understand your customers and employees, feel free to request more info here. If you’re planning on attending the conference, feel free to use my speaker code for a $150 discount [ODINTEXT]. I look forward to seeing some of you at the event!
Celebrating Innovative Companies in Marketing Research
It's that time of year again when Greenbook fields their biannual GRIT market research industry survey.
Thankfully it looks like the Greenbook team has made the survey a bit shorter than last year. I do encourage fellow researchers to take the survey, as it does give everyone some direction in terms of where things seem to be heading.
PS. This is the GRIT survey which looks for the most innovative insights companies, both supplier side and client side. We encourage you to give some thought to this section as well. It’s nice to recognize up-and-coming companies, as well as your go-to favorites.
I also want to take this time to thank everyone who voted for OdinText in the most innovative supplier category last year. We were very encouraged by the support and have been working harder than ever to release a brand new version of the software next month!
How Your Customers Speak - OdinText Indexes Top Slang and Buzz Words for 2018
Understanding how your key customer demographic communicates about your category and your product is key to learning how to communicate with them most effectively.
One of the posts I’ve come to enjoy most, yet also find the most difficult to write, is our annual list of slang words. OdinText has been indexing unusual/new/slang terms now for three years, so we have a good understanding not just of which slang and buzz words are most popular, but also of how terms are moving up or down in popularity.
The interesting thing about all trends including buzz words and especially slang is change. Just because you think you know what one word means today, doesn’t mean that same word won’t have a completely different meaning tomorrow. For this reason, even trusted sources such as Urban Dictionary can fail you, because they list the most popular definition first, not the most recent ones. To understand slang you really have to understand movement. If a slang word has been in decline for a while, and picks up in usage, it may well be that there’s a new meaning for it.
Understanding what a new slang word means is often more difficult than it sounds. Context in any one comment is often not enough, and various comments may use the same term very differently. Nor is relying on any one source such as Urban Dictionary sufficient; far from it, we’ve found. Triangulating on the most current definition by considering multiple sources, including online videos/song lyrics, internet memes and social media comments, and considering the date of each, is often the best way to arrive at a more current definition. Often it’s the success of a certain artist and how they use the word that propels it. If you default to looking something up in Urban Dictionary, know that the #1-ranked most popular definition may well be quite dated and incorrect.
ABOUT THE INDEX: We define our slang/buzz word index as terms or phrases that have entered public awareness, usually not at the general-population level, but often, though not always, in important youth or online subgroups. These terms occur in social and mainstream media and are often used by artists, youth and sometimes in digital speak. This year we have started allowing a few more general and political terms into the index. You can think of the terms in the index as some of the most dynamic in either proportion of use/awareness or, sometimes, in the way they are defined, as many of them have multiple meanings and are in a state of ongoing flux. The majority of the words in our index can be classified as slang.
#10 Dog & Yeet
Dog. As with most slang, it has multiple meanings; some of the better known are “a man who can’t commit to one woman,” “a close buddy,” and to “fornicate,” among others. A newer meaning, which we believe helped dog just make our top 10 this year, is that dog is becoming more general in use, and may soon be more gender neutral, as female rapper Toni Romiti says: “If he a dog, I’m a Dog too!”
Yeet, tied for 10th this year, is a term we indexed and started tracking back in 2016, also in 10th place then. Its popularity was due to a new dance move and an internet video meme. You can check out our definition from last year here. But as with other slang, it tends to transform and take on multiple meanings, including being used simply as an expletive connoting excitement.
#9 Bruh
Bruh reached #10 back in 2015/2016 and has been holding steady, even gaining slightly. There are now even some female or gender-neutral offshoot variants like Bra, in part promoted by advertising related to breast cancer (someone who supports you when you have breast cancer). The meaning of Bruh has been changing for some time, from a term of endearment (brother) to “Bruh?!”, meaning “Oh no… why did you do that?!”
#8 Bet
Bet moved from #18 to #8 this year. That usually has to do with new usage and/or inclusion in some popular lyrics or meme. Starting as a simple term indicating agreement, e.g. “Want to go to the movies?” “Sure, Bet!”, bet first shifted to mean simply “yes,” and has even come to be used as a sort of replacement for Yolo. But the newest and most popular meaning is the total opposite of the older ones: doubt, sarcasm, or simply the opposite of what someone wants. Basically a sarcastic “No”: “Yo, can you help me clean my room?” “Bet.” (walks out the door)
#7 Woke
Woke moved into 20th place about a year ago and has increased in popularity tremendously this year. Woke means being intellectually aware, on point and in the know, but can have broader meanings as well: “After taking that class in feminism, he’s really woke to gender issues.”
In the past we’ve purposely kept political words out of these lists. Political terminology is more mainstream and behaves differently from youth slang. That said, these terms too enter the general vernacular seemingly from nowhere and become very popular, perhaps now more than ever. So this year we’ve decided to include a few of them; in 6th place (and in 2nd) we have political words this year.
#6 Snowflake
Snowflake is generally used to describe liberals who are overly sensitive and too easily offended by some general term or belief that doesn’t take into account that everyone is as individual as a snowflake, though it is sometimes used to describe anyone who is overly sensitive.
Unlike most of our slang terms that tend to skew more urban and lower income, this one actually skews slightly higher income and older, and is in fact often used to describe millennials or Gen Z.
#5 Fleek
Fleek did a good job of maintaining its position this year, but is still down from its high (our #2 word) at the end of 2015. Fleek skews female quite a bit and generally means ‘on point’ (“on fleek”); it is often used when describing eyebrows.
#4 Fetch
Fetch climbed from 6th place back at the end of 2015 and has maintained 4th place this year. After “Bae,” it’s the second most female-skewed slang term we track. This term was popularized by the movie Mean Girls and means cool/chic.
#3 Dope
Dope moved up three spots from last year. It is used a number of ways (see last year’s definition), including as a synonym for Lit: basically high in quality, or mind-blowing.
#2 Fake News
Here’s the other political word on this year’s list. Again, it is not like slang in a number of ways, including the fact that it skews older and higher income. Still, it hit 10th place on our new-term index last year even though we decided not to report political terms. As the name suggests, this term denotes political propaganda and unfactual, unscientific information, which is becoming ever more prevalent online.
Holding at #1 this year, Lit is literally still “cool” for now. It was in 4th place at the beginning of 2016; its position may be challenged by Dope or something else later this year.
Top 5 Gainers
Each year these trendier terms compete with each other, some enter our list temporarily and then go off to die, sometimes to be resurrected years later, others get so mainstream that they enter our common lexicon and earn a place in the dictionary.
Here are the 5 terms which moved up the most over the past 12 months.
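As a toy illustration of how such a list of movers can be computed, here is a minimal Python sketch. The per-term frequencies below are made-up numbers chosen for demonstration, not our actual index values.

```python
# Hypothetical term frequencies (mentions per 100k posts) for two periods.
# The numbers are illustrative assumptions, not OdinText's real index data.
counts_2017 = {"fake news": 12.0, "snowflake": 8.0, "bitcoin": 3.0,
               "woke": 6.0, "bruh": 9.0, "fetch": 14.0}
counts_2018 = {"fake news": 55.0, "snowflake": 30.0, "bitcoin": 10.0,
               "woke": 15.0, "bruh": 18.0, "fetch": 7.0}

def movers(prev, curr, top_n=5, gainers=True):
    """Rank terms by relative change in frequency between two periods."""
    change = {t: (curr[t] - prev[t]) / prev[t] for t in prev if t in curr}
    return sorted(change.items(), key=lambda kv: kv[1], reverse=gainers)[:top_n]

top_gainers = movers(counts_2017, counts_2018)           # biggest upward movers
top_losers = movers(counts_2017, counts_2018, gainers=False)  # biggest drops
```

With these illustrative numbers the ranking comes out in the same order as the list below, with "fake news" as the biggest gainer and "fetch" as the biggest loser.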
Mainstream media grabbed hold of this word this past year, and our indexing classified it as the single biggest mover in our index. I’m guessing you’re already well aware of this political term. If not, check out the Wikipedia definition here: https://en.wikipedia.org/wiki/Fake_news
Our second biggest upward mover this year was Snowflake. It will be interesting to see how this one does over the longer term.
You might say Bitcoin is a brand or product name, but this term, which is neither slang nor political, still made our list because of its fast movement from obscurity to the global mainstream. I wouldn’t be totally surprised if the term is split, morphed and transformed into some alternate meanings in the future.
Finally, our fourth biggest mover of the year is one of our slang terms. It’s interesting as far as slang terms go in its positive connotation of reaching a more enlightened level of cognition.
Fifth biggest mover this year was Bruh which continued to expand and change in meaning (see above) after being somewhat down last year.
Top 6 Losers
Just as there are winners, there are also losers. Here are the 6 that dropped the most this year.
Looks like ‘Fetch’ may have peaked. It was always a bit of an outlier among slang terms in that it was so closely tied to a single movie, ‘Mean Girls’, and while obviously female oriented, it was also, unlike most slang, less ethnic and higher income.
Swag seemed to come and go last year. It just never gained the foothold needed for stickiness.
This somewhat odd female slang term with a definite online meme-ish component had its peak two years ago and has decreased in popularity by half each year since.
The meme-able dance move known as ‘dabbing’ seems to have spawned and given way to a term of similar origin “Yeet”, which has taken on more meaning than dab. While the future looks dim for this term, its use together with marijuana could give it more life in the future.
Another big loser over the past couple of years is “One” meaning goodbye. It may be time to bid “One!” to ‘One’.
A somewhat surprising sudden drop for Bae this year is accompanied by lots of annoyance among those using the term, which refers to a "boyfriend" or "girlfriend".
Bonus Terms to look out for
Here are some fun new terms that crept onto our radar/index this year. They didn’t enter our top list, but we’ll continue to track their movement over the course of the year. They could suddenly become even more popular, as they are on an upward trajectory, though not quite as extreme as our top 5 above.
Squad may end up being replaced by ‘gang’. Since calling someone a “friend” is simply never cool enough, this term has come to replace the now somewhat corny ‘squad’. Believe it or not, 1 friend is a "gang".
“gang gang”, saying gang twice has also become popular, as plural and often used as a hashtag on social media, especially twitter #ganggang
This term, which can be traced to Philly, originally meant joint, then morphed to mean literally anything, usually any noun: person, place or thing (but quite often a sexy woman).
Cringey or cringy is a fun term for something that makes you want to cringe. It has often been used in regard to internet videos, especially amateur homemade videos on YouTube by very young performers, sometimes younger siblings.
We noted “bra” as a feminine derivative of sorts for bro, meaning “someone who’s always there for support”.
Awesome, cool, or good. Here’s an example of Gucci paired with Gang, mentioned above.
gucci gang (Lil Pump)
Savage, means "Brutal yet awesome", a useful combination ;)
"Dad" & "Mom"
Not what you think. Rather new, this term is given to the highest ranking person in a given environment. For instance, if 4 guys are playing an online video game, the "Dad" is the person with the highest level game character.
These are our top movers for the year. There are many other pop terms of course. These are US focused and more general in nature. If you put additional lenses, such as geography, age, gender, category and source of data in play, other quite different terms may top the list.
Of course, unless your industry is dead, the way your stakeholders talk about a topic may be very dynamic without involving much slang or buzzwords. Often, which competitors are mentioned, which items are seen as benefits or barriers, and which emotions surround these topics can be just as interesting, or even more so. Longitudinal voice-of-customer data can be found in a variety of sources, from phone logs, emails, chat logs, surveys and social media, to name just a few.
If you’re curious about tracking what your customers say, how they say it, and how you can use that to better connect with them and even predict their future behavior, including satisfaction and repurchase, please reach out. We're happy to show you what Your Data + OdinText looks like.
Net Promoter Score (NPS) +OdinText = Predictive NPS
It was nice to see Research Business Report cover one of our Net Promoter (NPS) Case Studies today. We’ve found that, contrary to popular belief, NPS and other Customer Satisfaction ratings like Overall Satisfaction don’t correlate much with important KPIs like Return Behavior and Sales Revenue.
In this case study, by adding OdinText to NPS, it was possible to better understand and predict these far more important KPIs: Predictive NPS, if you will.
If you have NPS or any other Customer Satisfaction data and would like to better understand the more important KPIs like Repurchase, Churn, and Revenue, please reach out. We would be happy to send you more information on our NPS Case Studies and Key Driver Reporting. +OdinText to NPS and Predict What Matters!
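As an illustration of the kind of check behind the claim that ratings alone correlate weakly with behavior, here is a minimal Python sketch that computes the correlation between a satisfaction rating and a behavioral KPI. The respondent data below is invented for demonstration; it is not from our case studies.

```python
# Pearson correlation between a stated rating and an observed behavior.
# All data below is made up purely for illustration.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

nps = [9, 10, 6, 8, 3, 7, 10, 2, 9, 5]        # stated likelihood to recommend
repurchased = [1, 0, 1, 0, 1, 0, 1, 1, 0, 1]  # actual return behavior (0/1)
r = pearson(nps, repurchased)
```

In a real analysis you would of course want far more respondents and a proper model; the point is simply that a weak correlation here is what motivates adding text-based drivers to the rating.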
ad testing +OdinText
[Author's note: As I am writing this blog post early Monday morning after the Super Bowl, I have already completed the initial ad testing analysis. It’s a case where modern AI and analytics software (OdinText) is faster than the data collection process/vendor we’re relying on. We asked an open-ended comment question among n=3,000 respondents about which Super Bowl ads they liked/disliked and why. Eager to have the analysis complete as soon as possible, I finished the analysis and wrote the blog based on the n=1,011 initial responses received. But since 1,998 more were expected, I painfully waited to publish the results until the rest of the fielding came in. The bad part was waiting for the sample. The good part was knowing that repeating the analysis would literally take less than 1 minute: just upload the data into OdinText, and the brand names and advertisement likes and dislikes are automatically coded, analyzed and charted in seconds. I just had to review whether anything had changed materially and make small updates in my copy below in that case. As it turned out, more data did change the findings, and so I did have to change my blog copy. Ah, the joys of modern analytics!]
The advertising pundits weighed in on which ads were best and worst even before the Super Bowl aired. We tend to do things a little differently at OdinText and allow data, not opinion, to drive.
Of course, for "best" and "worst" not to be subjective, we need some definition of desired outcomes. Last year we looked at a simple formula to evaluate efficacy consisting of Awareness + Positive Sentiment/Liking of the ads.
For instance, you may not remember this because of the low sentiment, but last year 84 Lumber was one of the companies with the highest awareness after the Super Bowl. However, it also had low sentiment and relevancy (as it dealt with the explosive issue of immigration/Trump's wall in a somewhat ambiguous way), so it scored poorly overall. It probably ended up doing better among its core customer segments than among the general population, but since Super Bowl ads are expensive, I argued that, all things equal, a strategy with a broader target in mind, which aims to leave a positive impact among this broader group, should provide a better ROI. Looking at it another way, to have the most significant positive impact we want to maximize both awareness and sentiment almost equally.
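To make the scoring idea concrete, here is a minimal sketch under those assumptions. The equal weights and the awareness/sentiment numbers are purely illustrative placeholders, not our actual measurements or model.

```python
# Toy efficacy score: weight awareness and sentiment (roughly) equally.
# All numbers below are illustrative assumptions, not OdinText's data.
def ad_score(awareness, sentiment, w_awareness=0.5, w_sentiment=0.5):
    """Both inputs on a 0-1 scale; returns a combined 0-1 efficacy score."""
    return w_awareness * awareness + w_sentiment * sentiment

ads = {                              # (awareness, sentiment), made-up values
    "NFL Dirty Dancing": (0.75, 0.92),
    "Amazon Alexa":      (0.60, 0.97),
    "Tide":              (0.95, 0.58),
}
ranked = sorted(ads, key=lambda a: ad_score(*ads[a]), reverse=True)
```

Note how, with equal weights, a very high awareness (Tide) can still be outranked by a balanced awareness/sentiment mix; shifting the weights toward awareness would reorder the list.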
With those assumptions and comments from over 3,000 respondents, OdinText's AI predicted which of the Super Bowl Ads were successful, and which were not. Below I've shown 10 Brands/ads, the best performing 6 and the worst 4.
OdinText Ad Ratings
#1. THE NFL
In a year where there has been a lot of controversy surrounding NFL players taking a knee, and with a few of our respondents explicitly stating that they had boycotted the Super Bowl this year, it was interesting to see the NFL advertising, and doing it so well. The NFL's Dirty Dancing ad with Manning and Beckham performed best, I believe in part because of its high relevance to the audience, but also because it garnered high awareness together with very high positive sentiment/liking. In fact, only one other ad came close in sentiment.
#2 AMAZON ALEXA
That second most well-liked ad was Amazon's Alexa. Not as much because of its awareness (which was rather low in comparison), but because of its extremely high sentiment. The audience loved the various famous actors playing the voice of Alexa at least as much as they enjoyed NFL players Dirty Dancing.
In 3rd place we have Tide. They earned the spot less for sentiment (though viewers did like the ad) than for the awareness it garnered: Tide had THE HIGHEST awareness of any Super Bowl ad. However, contrary to some of the advertising pundits' opinions, it just wasn't quite as consistently well liked by viewers as Dirty Dancing and Alexa.
If you are in the camp who believe Awareness is everything, then Tide should have an even higher spot.
Doritos + Mountain Dew Ice was so close in our model that I’m going to give them a tie for 3rd. Not for awareness like Tide, but for balancing both positive sentiment and awareness perfectly. Its mix of awareness and liking was in the same proportions as NFL Dirty Dancing, just at a slightly smaller scale.
Obviously, considering the audience and occasion, just like the NFL ad, Doritos especially is a highly relevant product, and as importantly, the humorous approach with two extremely popular yet not commonly seen together stars (namely Morgan Freeman and Peter Dinklage/Tyrion from Game of Thrones) succeeded in the unique combo messaging of Fire & Ice.
#5 BUD-LIGHT (NOT BUDWEISER)
Budweiser is almost expected to do well. So, in a way, it may be surprising to see it doesn’t make it into our analysis (Beer, in general, did poorly especially Miller Ultra). It really should be so easy for Budweiser though. Here's a case where the occasion is more than just relevant, it's almost as if the brand has a historic Super Bowl halo effect. That said, their performance was less than impressive.
While the idea of stopping the Budweiser line to make water in an emergency could be touching for some, reminding consumers you have a good fun product may be a safer strategy than asking for kudos for merely being a good corporate citizen?
And that’s where Bud-Light’s Knight did better. Beer should be about fun…
Here's a case where awareness was quite low, but the ad was still more liked than average compared to the other brands. Toyota, our #8 brand, just barely made the list. While the setting was right, "The Super Bowl," in the end the ‘Priest, an Imam, a Monk and a Rabbi' premise may have felt a bit less like a joke and more like preaching…
Like Budweiser, we expect a lot from Coca-Cola when it comes to advertising. They’ve been pushing the diversity message for a few years now. It may be that pulling at heart strings is far harder to do than making people laugh. Coca-Cola had lower than average sentiment coupled with relatively low awareness. Not a winning combination.
#2 DODGE RAM
Dodge Ram did better than Coca-Cola at least, especially on awareness, but also even on sentiment/liking.
The negative aspect of course in large part was the appropriateness of Martin Luther King's message at the beginning.
When it comes to ads like these though, I think we must assume, as was the case for 84 Lumber last year, that perhaps the brand knows what it's doing. They aren't there to please everyone (as you would hope is the goal of Pepsi and Coca-Cola), but to message their core audience with a ‘We Get You – Even if Everyone Else Doesn't'. And so, awareness-wise, Ram did better than Amazon, Bud-Light, and Pepsi. But on an overall basis, they get dinged on sentiment due to what some would call the clumsy ‘MLK + Patriotic' messaging. Only time and sales will tell…
T-Mobile was less well liked than you’d think; who doesn’t like babies, right? Turns out people are getting tired of “social responsibility ads” in their entertainment, at least that’s what they told us.
#4 DIET COKE (TWISTED MANGO)
The Booby Prize. Ok, so here is a bad ad. In PR they used to say, any PR is good PR. But Diet Coke didn't do too well on either of our metrics. It had low awareness combined with even lower sentiment/liking. Diet Coke Mango, because, just no…
A WORD ABOUT PEPSI
Pepsi, what can I say? You may be surprised that, yet again, Pepsi performed poorly compared to the other brands I mentioned, considering that their name was all over the Super Bowl during the Halftime Show. And yet, it may be that the real winner of Halftime is the brand of the performer, which this year was Justin Timberlake. We saw a similar pattern last year as well.
State of The POTUS - Text Analytics Reveals the Reasons Behind Trump's Approval Ratings
Over the past few weeks we’ve heard political pundits on all major news networks chime in on how Trump is doing one year after taking office. Part of the discussion is around what he has and hasn’t done, but an even bigger part continues to be about how he is perceived, both domestically and abroad, and some very grim opinion/approval polling is available. Many polls have Trump as the President with the lowest approval ratings in history.
Sadly, political polling, including approval ratings, tells us absolutely nothing about the underlying causes of the ratings. Therefore, I thought I’d share our findings in this area. Utilizing our text analytics software, OdinText, we have been tracking not just sentiment related to Trump, but more importantly, the positioning of 40+ topics/themes that are important predictors of that sentiment. In the brief analysis below, I will not have time to go into each of the attributes we have identified as important drivers; instead, I will focus on a few of the areas which have seen the most change for Trump during the past year.
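As a toy illustration of the driver idea, here is a minimal Python sketch ranking topics by how far the sentiment of comments mentioning them sits from the overall mean. The comments, topics and sentiment values below are hypothetical stand-ins, not our tracking data.

```python
# Toy key-driver ranking: which topics co-occur with above- or
# below-average sentiment. All data below is invented for illustration.
from statistics import mean

comments = [
    {"sentiment": 0.8, "topics": {"economy", "taxes"}},
    {"sentiment": 0.2, "topics": {"dishonesty"}},
    {"sentiment": 0.7, "topics": {"taxes", "maga"}},
    {"sentiment": 0.1, "topics": {"dishonesty", "hate"}},
    {"sentiment": 0.9, "topics": {"economy"}},
]

def driver_scores(comments):
    """Mean sentiment of comments mentioning each topic, minus overall mean."""
    overall = mean(c["sentiment"] for c in comments)
    topics = {t for c in comments for t in c["topics"]}
    return {t: mean(c["sentiment"] for c in comments if t in c["topics"]) - overall
            for t in topics}

scores = driver_scores(comments)
```

A positive score marks a topic associated with more positive sentiment, a negative score the reverse; a production analysis would use a regression model rather than simple mean differences, but the intuition is the same.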
How has the opinion of Trump changed in the minds of the American people?
By looking at Trump’s positioning just before he took office (with all the campaign positioning fresh in the minds of the people), and comparing it to half a year into his office, and again now a full year into office, we can get a good idea about the impact various issues have on approval ratings and even more importantly, positioning.
Let’s start by looking back to just before he was elected. OdinText’s AI uncovered the 15 most significant changes in perception between just before Trump won the election and now. Trump has fallen on 11 of these attributes and risen on 4.
Trump Pre Election Positioning VS One Year In
If we compare Trump just before the election vs. Trump today, we see several key differences. More recently, four themes have become more important in terms of describing what Trump stands for in the minds of Americans when we include everyone (both those who like and dislike him). These newer positions are “Less Regulation”, “Healthcare Reform”, “Money/Greed”, and “Dishonesty”. Interestingly, text analytics reveals that one of the important issues seems to be changing: Trump's supporters are now more likely to use the term “Healthcare Reform” rather than the previous “Repeal Obamacare”.
Other than the repeal of Obamacare issue, prior to the election, in the minds of Americans Trump was more likely to be associated with “Gun Rights”, “Honesty”, “Trade Deals”, “Change”, Supporting “Pro Life”, pro and con “Immigration” related issues including “The Wall”, and finally his slogan “MAGA” (Make America Great Again).
The decrease in relevance of many of these issues has to do with pre-election positioning, both by Trump/the Republican Party and by the Democrats' counter-positioning of him. After the election, some of these, like ‘Gun Rights’, seemingly became less important for various reasons.
Five Months from Record Low
If we look at changes between this past summer and now, there has been significantly less movement in his positioning in American minds. He has seen a slight but significant bump in overall positive emotional sentiment/Joy and in the MAGA positioning, as well as on Taxes, the Economy, and The Wall, while also seeing a decrease in “Anger” and “Hate/Racism”, which peaked this summer.
His lowest point so far in the minds of Americans came around the August 12th, 2017 White Nationalist Rally in Charlottesville. Trump’s positioning as a hate monger was almost as high as the weekend before the election, while positive emotional sentiment and ‘MAGA’ among his supporters were simultaneously at an all-time low.
Since the August low Trump does appear to have rebounded some, and while one year into office many believe the one thing Trump now stands for is himself, greed and money are a lesser evil in America than hate and racism.
It seems that one year into office, at least for now, the economy and tax cuts are giving Trump a bit of a bump back to pre-election levels in the minds of many Americans.
I’m not sure what the future holds in this case, but I hope you, like me, found some of the underlying reasons for his approval ratings of interest. These are, after all, more important than simple ratings, because these reasons are levers that can be changed to affect the final outcomes and positioning of any brand, including that of a POTUS.