Posts tagged AI
What You Need to Know Before Buying AI/Machine Learning

7 Things to Know About AI/Machine Learning (Boiled Down to Two Cliff Notes That Are Even More Important)

In case you missed our session on Artificial Intelligence and Machine Learning (AI/ML) at the Insights Association's NEXT conference last week, I thought I would share a bit on the blog about what you missed. We had a full room, with some great questions both during and after the session. However, 30 minutes wasn't enough time to cover everything thoroughly. In the end we agreed on four takeaways:

  • AI is part of how research & insights pros will address the ever-increasing demand for fast research results
  • AI helps focus on the most important data
  • AI can’t compensate for bad data
  • AI isn’t perfect

So today I thought I would share seven additional points about AI/ML that I often get questions on, and then at the end of this post I'm going to share the 'Cliff Notes', i.e. just the two most important things you really need to know. So, unless you want to geek out with me a bit, feel free to scroll to the bottom.

OK, first, before we can talk about anything, we need to define what Artificial Intelligence (AI) is and isn’t.

1. The AI/ML definition is somewhat fuzzy

AI, and more specifically machine learning (ML), is a term that is abused almost as often as it is used. On the one hand this is because a lot of folks are inaccurately claiming to use it; on the other, not unlike big data, its definitions can be a bit unclear and don't always make perfect sense.

Let’s take this common 3-part regression analysis process:

  1. Data Prep (pre-processing including cleaning, feature identification, and dimension reduction)
  2. Regression
  3. Analysis of process & reporting

This process, even if automated, would not be considered machine learning. However, swap out regression for a machine learning technique like Neural Nets, SVM, Decision Trees or Random Forests and, bang, it's machine learning. Why?

Regression models are also created to predict something, and they also require training data. If the data is linear, then there is no way any of these other models will beat regression in terms of ROI. So why would regression not be considered machine learning?

Who knows. Probably just because the authors of the first few academic papers on ML referenced these techniques, and not regression, as ML. It really doesn't make much sense.
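To make the point concrete, here is a rough sketch (using scikit-learn with made-up data, not anything from a real project) of that same three-step process. The only thing that changes between the "regression" version and the "machine learning" version is the estimator plugged into step two; the data prep and reporting around it stay identical.

```python
# A minimal sketch (scikit-learn, simulated data) of the 3-part process above.
# The only difference between the "not ML" and "ML" versions is the estimator.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA              # dimension reduction
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

X = np.random.rand(500, 20)                        # pretend survey features
y = X[:, 0] * 3 + np.random.randn(500) * 0.1       # pretend KPI

def build_pipeline(estimator):
    # 1. Data prep (cleaning/feature work omitted), 2. model, 3. reporting happens elsewhere
    return Pipeline([("scale", StandardScaler()),
                     ("reduce", PCA(n_components=5)),
                     ("model", estimator)])

for est in (LinearRegression(), RandomForestRegressor(n_estimators=100)):
    pipe = build_pipeline(est).fit(X, y)
    print(type(est).__name__, round(pipe.score(X, y), 3))
```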

2. There are basically two types of ML

Some ML approaches, like SVM (Support Vector Machines), are binary classifiers, good for predicting something like male or female; others, like Decision Trees, handle multi-class classification.

If you are using decision trees to predict an NPS rating on an 11-point scale, that's a multi-class problem. However, you can 'trick' binary techniques like SVM into solving a multi-class problem by setting them up to run multiple times (for example, one run per class against all the rest).

Either way, you are predicting something.
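For the curious, here is roughly what that looks like in practice: a minimal sketch in scikit-learn with simulated data (the predictors and ratings are invented for illustration).

```python
# Sketch (scikit-learn, simulated data): a binary technique like SVM handling an
# 11-class problem (NPS 0-10) by being run once per class (one-vs-rest).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(1000, 10)                 # pretend predictors
y = np.random.randint(0, 11, size=1000)      # pretend NPS ratings 0-10

# Decision trees handle multi-class natively...
tree = DecisionTreeClassifier().fit(X, y)

# ...while the SVM is wrapped so it trains 11 binary "this class vs. the rest" models.
svm = OneVsRestClassifier(LinearSVC(max_iter=5000)).fit(X, y)
print(len(svm.estimators_))                  # 11 underlying binary classifiers
```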

3. ML can be slow

Depending on the approach used (Neural Nets, for instance), training a model can take several days on a normal computer. There are other issues with Neural Nets as well, like how difficult it is for humans to understand and control what they are doing.

But let's focus on speed for now. Of course, if you can apply a previously trained model to very similar data, then results will be very fast indeed. This isn't always possible, though.

If your goal is to insert ML into a process to solve a problem a user is actively waiting on, then training a model on the fly might not be a very good solution. If another technique, 'machine learning' or not, can solve the problem much faster with similar accuracy, then that is the approach to use.
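In practice, the usual workaround is to do the slow training offline and only apply the saved model inside the time-sensitive process. Here is a minimal sketch of that pattern using scikit-learn and joblib with simulated data (the file name is arbitrary).

```python
# Sketch: train once (slow, offline), then reuse the saved model for fast scoring.
import numpy as np
import joblib
from sklearn.neural_network import MLPClassifier

X = np.random.rand(2000, 30)                 # pretend features
y = (X[:, 0] > 0.5).astype(int)              # pretend binary outcome

model = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500).fit(X, y)  # the slow part
joblib.dump(model, "trained_model.joblib")

# Later, inside the user-facing process: loading and predicting is near-instant.
loaded = joblib.load("trained_model.joblib")
print(loaded.predict(X[:5]))
```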

4. Neural Nets are not like the brain

I'll pick on Neural Nets a bit more, because they are almost a buzzword unto themselves. That's because a lot of people have claimed they work like the human brain. This isn't true. If we're going to be honest, we're not sure how the human brain works. In fact, what we do know about the human brain makes me think it is quite different.

The human brain contains nearly 90 billion neurons, each with thousands of synapses. Some of these fire and send information for a given task, some will not fire, and yet others fire and do not send any information. The fact is we don’t know exactly why. This is something we are still working on with hopes that new more powerful quantum computers may give us some insight.

We can however map some functions of the brain to robotics to do things like lift arms, without knowing exactly what happens in between.

There is one problematic similarity between the brain and Neural Nets though. That is, we’re not quite sure how Neural Nets work either. When running a Neural Net, we cannot easily control or explain what happens in the intermediary nodes. So, this (along with speed I mentioned earlier) is more of a reason to be cautious about using Neural Nets.
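To see what I mean about the intermediary nodes, here is a toy sketch (scikit-learn, simulated data): even a tiny network boils down to stacks of weight matrices with no obvious human reading, unlike a regression coefficient table.

```python
# Sketch: even for a small neural net, the "reasoning" lives in weight matrices
# that are hard for a human to read or control.
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.random.rand(500, 8)                       # pretend features
y = (X[:, 0] + X[:, 1] > 1).astype(int)          # pretend outcome

net = MLPClassifier(hidden_layer_sizes=(10, 10), max_iter=1000).fit(X, y)
for i, w in enumerate(net.coefs_):
    print(f"layer {i} weight matrix shape: {w.shape}")
# Nothing here maps cleanly to "feature 3 raises the prediction by X"
# the way a regression coefficient does.
```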

5. Not all problems are best solved with Machine Learning

Are all problems best solved with ML? No, probably not.

Take pricing as an example. People have been solving this problem for years, and there are many different solutions depending on your unique situation. These solutions can factor in everything from supply and demand to cost.

Introducing machine learning, or even just a simpler non-ML automated technique, can sometimes cause unexpected problems. As an example, consider the automated real-time pricing model Uber used, with supply and demand as inputs. When fares skyrocketed to over $1,000 as drunk people were looking for rides on New Year's Eve, the model created a lot of angry customers and bad press.
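To illustrate the kind of guard rail that helps here, below is a toy sketch with entirely made-up numbers (not Uber's actual system): an automated surge multiplier that a human has explicitly capped.

```python
# Toy sketch (hypothetical numbers): an automated surge price with a simple
# human-set guard rail, the kind of sanity check the story above argues for.
def surge_price(base_fare: float, demand: int, supply: int, cap: float = 3.0) -> float:
    multiplier = demand / max(supply, 1)           # naive supply/demand ratio
    multiplier = min(max(multiplier, 1.0), cap)    # never below 1x, never above the cap
    return round(base_fare * multiplier, 2)

print(surge_price(base_fare=20.0, demand=900, supply=30))   # capped at 3x -> 60.0
```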

More on dangers of AI/ML in a bit…

6. It’s harder to beat humans than you think

One of the reasons ML is often touted as a solution is how much better than humans computers allegedly are. While there is some theoretical truth to this, when applied to real-world situations we often see a less ideal picture.

Take self-driving cars as an example. Until recently they were touted as "safer than humans". That was until they began crashing and blowing up.

Take the recent Tesla crash as an example. The AI/ML accidentally latched onto an older, faded lane line rather than the newly painted correct lane line and proceeded without braking, at full speed, into a head-on collision with a divider. A specific, fatal mistake no human would have been likely to make.

The truth is, if we remove driving under the influence and falling asleep at the wheel from the statistics (two things that are illegal anyway), then human accident rates are incredibly low.

7. ML is Context Specific!

This is an important one. IBM Watson might be able to Google Lady Gaga’s age quickly, but Watson will be completely useless in identifying her in a picture. Machine learning solutions are extremely context specific.

This context specificity also comes into play when training any type of model. The model will only be as good as the training data used to create it, and its similarity to the future data it is used on for predictions.

Model validation methods only test the accuracy of the model on the exact same type of data (typically a random hold-out portion of the same data set); they do not test the quality of the data itself, nor how the model will perform on future data that differs from the training data.
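A quick sketch of that limitation (scikit-learn, simulated data): a hold-out split reports near-perfect accuracy, but the same "validated" model falls apart on future data that is collected or distributed a bit differently.

```python
# Sketch: a hold-out split only measures accuracy on data drawn from the same
# place as the training data. Shift the data and the "validated" accuracy no
# longer applies.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(2000, 5)                  # pretend features
y = (X[:, 0] > 0.5).astype(int)              # pretend outcome

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print("same-distribution accuracy:", model.score(X_test, y_test))

# "Future" data collected differently (here: shifted features) tells another story.
X_future = X_test + 0.4
print("shifted-data accuracy:", model.score(X_future, y_test))
```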

Be wary of anyone who claims their AI does all sorts of things well, or does them with 100% accuracy.

My final point about Machine Learning & two Cliff Notes…

If some of the above points make it sound as if I’m not bullish on machine learning, I want to clarify that in fact I am. At OdinText we are continuously testing and implementing ML when it makes sense. I’m confident that we as an industry will get better and better at machine learning.

In the case of Tesla above, there are numerous ways to make the computers more effective, including special paint that is easier for computer cameras to see, and traffic lights that send signals telling the computer "I am red", "I am green", etc., rather than having it guess via color/light sensing. Things will certainly change, and AI/ML will play an important part.

However, immediately after my talk at the Insights Association I had two very interesting conversations about how to "identify the right AI solution". In both instances, the buyer was evaluating vendors that made a lot of claims. Way too many, in my opinion.

If you forget everything else from today’s post, please remember these two simple Cliff Notes on AI:

  1. You don't buy AI; you buy a solution that does a good job solving your need (which may or may not involve AI)
  2. Remember that AI is context specific and not perfect. Stay away from anyone who says otherwise. Select vendors you know you can trust.

There’s no way to know whether something is AI or not without looking at the code.

Unlike academics, who share everything under peer review, companies protect their IP, trade secrets and code, so there is technically no way for you to evaluate whether something actually is "AI" or not.

However, the good news is that this makes your job easier. Rather than reviewing someone's code, your job is still simply to decide whether the product solves your needs well or not.

In fact, in my opinion it is far more important to choose a vendor who is honest with you about what they can do to solve your problems. If a vendor claims they have AI everywhere, solving all kinds of needs with 100% accuracy, run!

@TomHCAnderson

AI and Machine Learning NEXT at The Insights Association
Insight practitioners from Aon, Conagra and Verizon speak out on what they think about AI and Machine Learning

Artificial Intelligence and Machine Learning are hot topics today in many fields, and marketing research is no exception. At the Insights Association's NEXT conference on May 1 in NYC, I've been asked to take part in a practitioner panel on AI to share a bit about how we are using AI in natural language processing and analytics at OdinText.

While AI is an important part of what data mining and text analytics software providers like OdinText do, before the conference I thought I'd reach out to a few client-side colleagues to see what they think about the subject.

With me today I have David Lo, Associate Partner at the Scorpio Partnership (a collaboration between McLagan and the Aon Hewitt Corporation); Thatcher Schulte, Sr. Director, Strategic Insights at Conagra Brands; and Jonathan Schwedel, Consumer & Marketplace Insights at Verizon, all of whom will also be speaking at NEXT.

THCA: Artificial Intelligence means different things to different people and companies. What does it mean to you, and how, if at all, are you planning to use it in your departments?

Thatcher Schulte – Conagra:

Artificial intelligence is like many concepts we discuss in business: it's a catch-all that loses its meaning as more and more people use it. I've even heard people refer to "macros" as AI. To me it means trying to make machines make decisions like people would, but that begs the question of whether it would be "intelligent." I make stupid decisions all the time.

We’re working with Voice to make inferences on what help consumers might need as they make decisions around food.

Jonathan Schwedel – Verizon:

I'm not a consumer insight professional - I'm a data analyst who works in the insights department, so my perspective is different. There are teams in other parts of Verizon who are doing a lot with more standard artificial intelligence and machine learning approaches, so I want to be careful not to conflate the term with broader advanced analytics. I have this image of cognitive scientists sitting in a lab, and am tempted to reduce "AI" to that.

For our specific insights efforts, we work on initiatives that are AI-adjacent - with automation, predictive modeling, machine learning, and natural language processing, but with a few exceptions those efforts are not scaled up, and are ad hoc on a project by project basis. We dabble with a lot of the techniques that are highlighted at NEXT, but I'm not knowledgeable enough about our day to day custom research efforts to speak well to them. One of the selling points of the knowledge management system we are launching is that it's supposed to leverage machine learning to push the most relevant content to our researchers and partners around our company.

David Lo – Scorpio Partnership/McLagan:

Working in the financial services space and specifically within wealth management, AI is a hot topic as it relates to how it will change advice delivery.

[We are looking at using it for] customer journey mapping through the various touchpoints customers have with an organization.

 

THCA: There's a lot of hype these days around AI. What is your impression of what you've been hearing, and of the companies you've been hearing it from? Is it believable?

Thatcher Schulte - Conagra:

I don't get pitched on AI a lot except through email, which frankly defeats the purpose for the people pitching me solutions. I don't read emails from vendors.

Jonathan Schwedel – Verizon:

It's easy to tell if someone does not have a minimum level of domain expertise. The idea that any tool or platform can provide instant shortcuts is fiction. Most of the value in these techniques is very matter-of-fact and practical. Fantastic claims demand a higher level of scrutiny. If instead the conversation is about how much faster, cheaper, or easier they are, those are at least claims that can be quickly evaluated.

David Lo – Scorpio Partnership/McLagan:

Definitely a lot of hype.  I think as it relates to efficiency, the hype is real.  We will continue to see complex tasks such as trade execution optimized through AI.

 

THCA: For the Insights function specifically, how ready do you think the idea of completely unsupervised vs. supervised/guided AI is? In other words, do you think the one-size-fits-all AI provided by the likes of Microsoft, Amazon, Google and IBM is very useful for research, or does AI need to be more customized and fine-tuned/guided before it can be very useful to you?

And related to this, what areas of Market Research do you think are currently better suited to AI?

 Thatcher Schulte - Conagra:

Data sets are more important to me than the solutions that are in the market. Food decision making is specialized and complex, and it varies greatly by what life stage you are in and where you live. Valid data around those factors is frankly more important than the company we push the data through.

David Lo – Scorpio Partnership/McLagan:

Guard rails are always important, particularly as it relates to unique customer needs.

[In terms of usefulness to market research] Data mining.

Jonathan Schwedel – Verizon:

Most custom quantitative research studies use small sample sizes, making it often infeasible to do bespoke advanced analytics. When you are working with much larger data sets (the kind you'd see in analytics as a function, as opposed to insights), AWS and Azure let you scale, especially with limited resources. A good general approach is to use algorithmic approaches with brand-new data sets, and then start customizing when you hit the point of diminishing returns, in a way that your work can later be automated at scale.

[In regard to marketing research] It depends how you're defining research. Are we broadening that to customer experience? Then text analytics is the most prominent area, because there are many prominent use cases for large companies at the enterprise level. If "market research" covers broader buckets of customer data, then there's potentially a lot you can do.

 

THCA: OK, so which areas are currently less well suited to AI?

David Lo – Scorpio Partnership/McLagan:

Hard to say, but probably less suited toward qualitative research.  In my line of business we do a lot of work among UHNW investors where sample sizes are very small and there isn’t a lot of activity in the online space.

Jonathan Schwedel – Verizon:

I think sample size is often an issue when talking about research studies. Then it comes down to the research design. Is the machine learning component going to be baked in from the start, or is it just bolted on? A lot of these efforts are difficult to quantify. Verizon's insights group learns things all the time from talking to and observing consumers that we would not have otherwise thought to ask.

 

THCA: Does anyone have thoughts on usefulness of chat bots and/or other social media/twitter bots currently?

Jonathan Schwedel – Verizon:

They could potentially allow you to collect a lot more data, and reach under-represented consumer groups in the channels they want to be in. A lot of our team's focus at Verizon is on the user experience and building a great digital experience for our customers. I think they will be important tools to understand and improve in that area.

 

THCA: Realistically where do you see AI in market research being 3-4 years from now?

David Lo – Scorpio Partnership/McLagan:

Integrated more fully with traditional quantitative research techniques, with researchers re-focusing their efforts on the more creative and thoughtful interpretations of the output.

Jonathan Schwedel – Verizon:

They will provide some new techniques that will be important for specific use cases, but I think the bulk of the fruitful efforts will come from automation and improved scalability. The desire to do more with less is pretty universal, and there's a good roadmap there. The prospect of genuinely groundbreaking insights offers a lot more uncertainty, but it would be great if we do see that level of innovation.

 

Big thanks to Jonathan, David and Thatcher for sharing their insights and opinions on AI.

If you're interested in further discussion on AI and Machine Learning, please feel free to post a comment here, or join me for the 'What's New & What's Ahead for AI & Machine Learning?' panel on May 1st. I will be joined by John Colias of Decision Analyst, Andrew Konya of remesh, and moderator Kathryn Korostoff of Research Rockstar.

-Tom H. C. Anderson @OdinText

 

PS. If you would like to learn more about how OdinText can help you better understand your customers and employees, feel free to request more info here. If you're planning on attending the conference, feel free to use my speaker code for a $150 discount [ODINTEXT]. I look forward to seeing some of you at the event!

 

Artificial Intelligence in Consumer Insights

A Q&A session with ESOMAR's Research World on Artificial Intelligence, Machine Learning, and the implications for Marketing Research

[As part of an ESOMAR Research World article on Artificial Intelligence, OdinText Founder Tom H. C. Anderson recently took part in a Q&A-style interview with ESOMAR's Annelies Verheghe. For more thoughts on AI, check out other recent posts on the topic, including Why Machine Learning is Meaningless and Of Tears and Text Analytics. We look forward to your thoughts or questions via email or in the comments section.]

 

ESOMAR: What is your experience with Artificial Intelligence & Machine Learning (AI)? Would you describe yourself as a user of AI or a person with an interest in the matter but with no or limited experience?

TomHCA: I would describe myself as both a user of Artificial Intelligence and a person with a strong interest in the matter, even though I have limited mathematical/algorithmic experience with AI. However, I have colleagues here at OdinText who have PhDs in Computer Science and are extremely knowledgeable, as they studied AI extensively in school and used it elsewhere before joining us. We continue to evaluate, experiment, and add AI into our application as it makes sense.

ESOMAR: For many people in the research industry, AI is still unknown. How would you define AI? What types of AI do you know?

TomHCA: Defining AI is a very difficult thing to do because people, whether they are researchers, data scientists, salespeople, or customers, will each have a different definition. A generic definition of AI is a set of processes (whether they are hardware, software, mathematical formulas, algorithms, or something else) that give anthropomorphically cognitive abilities to machines. This is evidently a wide-ranging definition. A more specific definition of AI pertaining to Market Research is a set of knowledge representation, learning, and natural language processing tools that simplifies, speeds up, and improves the extraction of meaningful data.

The most important type of AI for Market Research is Natural Language Processing. While extracting meaningful information from numerical and categorical data (e.g., whether there is a correlation between gender and brand fidelity) is essentially an easy and now-solved problem, doing the same with text data is much more difficult and still an open research question studied by PhDs in the field of AI and machine learning. At OdinText, we have used AI to solve various problems such as Language Detection, Sentence Detection, Tokenizing, Part of Speech Tagging, Stemming/Lemmatization, Dimensionality Reduction, Feature Selection, and Sentence/Paragraph Categorization. The specific AI and machine learning algorithms that we have used, tested, and investigated range a wide spectrum from Multinomial Logit to Principal Component Analysis, Principal Component Regression, Random Forests, Minimum Redundancy Maximum Relevance, Joint Mutual Information, Support Vector Machines, Neural Networks, and Maximum Entropy Modeling.
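As a toy, self-contained illustration (not OdinText's actual code) of two of the simpler steps named above, here is what tokenizing and a crude stemming pass might look like in a few lines of Python:

```python
# Toy illustration of tokenizing and a crude, rule-based stemming pass.
import re

comment = "Customers kept mentioning pricing and prices going up."

tokens = re.findall(r"[a-z']+", comment.lower())        # tokenizing

def crude_stem(word: str) -> str:
    # Strip a few common English suffixes; real stemmers/lemmatizers do far more.
    for suffix in ("ing", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 1:
            return word[: -len(suffix)]
    return word

print([crude_stem(t) for t in tokens])
# ['customer', 'kept', 'mention', 'pric', 'and', 'pric', 'go', 'up']
```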

AI isn't necessarily something everyone needs to know a whole lot about. I blogged recently about how I felt it was almost comical how many people were mentioning AI and machine learning at the MR conferences I was speaking at, seemingly without any idea what it means. http://odintext.com/blog/machine-learning-and-artificial-intelligence-in-marketing-research/

In my opinion, a little AI has already found its way into a few of the applications out there, and more will certainly come. But if it is successful, it won't be called AI for long. If it's any good, it will just be a seamless integration helping to make certain processes faster and easier for the user.

ESOMAR: What concepts should people who are interested in the matter look into?

TomHCA: Unless you are an Engineer/Developer with a PhD in Computer Science, or someone working closely with someone like that on a specific application, I’m not all that sure how much sense it makes for you to be ‘learning about AI’. Ultimately, in our applications, they are algorithms/code running on our servers to quickly find patterns and reduce data.

Furthermore, as we test various algorithms from academia, and develop our own to test, we certainly don’t plan to share any specifics about this with anyone else. Once we deem something useful, it will be incorporated as seamlessly as possible into our software so it will benefit our users. We’ll be explaining to them what these features do in layman’s terms as clearly as possible.

I don’t really see a need for your typical marketing researcher to know too much more than this in most cases. Some of the algorithms themselves are rather complex to explain and require strong mathematical and computer science backgrounds at the graduate level.

ESOMAR: Which AI applications do you consider relevant for the market research industry? For which task can AI add value?

TomHCA: We are looking at AI in areas of Natural Language Processing (which includes many problem subsets such as Part of Speech Tagging, Sentence Detection, Document Categorization, Tokenization, and Stemming/Lemmatization), Feature Selection, Data Reduction (i.e., Dimensionality Reduction) and Prediction. But we've gone well beyond that. As a simple example, take key driver analysis. If we have a large number of potential predictors, which are the most important in driving a KPI like customer satisfaction?
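To make the key driver example concrete, here is a rough sketch with scikit-learn on simulated survey data (the driver names and weights are invented; this is not a client data set): a random forest's feature importances surface the strongest drivers of a satisfaction KPI.

```python
# Sketch of the key driver idea: given many candidate predictors, which ones
# matter most for a KPI like satisfaction? Data and column names are invented.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
drivers = pd.DataFrame(rng.random((1000, 4)),
                       columns=["wait_time", "price", "staff_friendliness", "cleanliness"])
satisfaction = (3 * drivers["staff_friendliness"] - 2 * drivers["wait_time"]
                + rng.normal(0, 0.1, 1000))

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(drivers, satisfaction)

for name, importance in sorted(zip(drivers.columns, model.feature_importances_),
                               key=lambda x: -x[1]):
    print(f"{name:>20}: {importance:.2f}")
# staff_friendliness and wait_time should surface as the strongest drivers here.
```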

ESOMAR: Can you share any inspirational examples from this industry or related industries (advertising, customer service) that illustrate these opportunities?

TomHCA: As one quick example, a user of OdinText I recently spoke to used the software to investigate which text comments were most likely to drive membership in any of several predefined important segments. The nice thing about AI is that it can be very fast. The not-so-nice thing is that sometimes, at first glance, some of the items identified in the output can either be too obvious or, at the other extreme, not make any sense whatsoever. The gold is in the items somewhere in the middle. The trick is to find a way for the human to interact with the output that gives them confidence in and understanding of the results.

A human is not capable of correctly analyzing thousands, hundreds of thousands, or even millions of comments/data points, whereas AI will do it correctly in a few seconds. The downside of AI is that some outcomes are correct but not humanly insightful or actionable. It's easier for me to give examples of when it didn't work so well, since it's hard for me to share info on how our clients are using it. But, for instance, AI recently found that people mentioning 'good' three times in their comments was the best driver of NPS score. This is evidently correct, but not useful to a human.

In another project, a new AI approach we were testing reported that one of the most frequently discussed topics was "colons". But this wasn't medical data! It turns out the plural of "colon" is "cola"; I didn't know that. Anyway, people were discussing Coca-Cola, and the AI read that as colons… This is exactly the part of AI that needs work for it to become more prevalent in Market Research.

Since I can't talk too much about how our clients use our software on their data, in a way it's easier for me to give a non-MR example. Imagine getting into a totally autonomous car (notice I didn't have to use the word AI to describe that). Anyway, you know it's going to be traveling 65 mph down the highway, changing lanes, accelerating and stopping along with other vehicles, etc.

How comfortable would you be stepping into that car today if we had painted all the windows black so you couldn't see what was going on? Chances are you wouldn't want to do it. You would worry at every turn that you might become a casualty of oncoming traffic or a tree. I think that's partly what AI is like right now in analytics. Even if we're able to perfect the output to be 99 or 100% correct, not knowing what happened or how we got there will make you feel a bit uncomfortable. Yet showing you exactly what the algorithm did to arrive at the solution is very difficult.

Anyway, the upside is that in a few years perhaps (not without some significant trial and error and testing), we’ll all just be comfortable enough to trust these things to AI. In my car example, you’d be perfectly fine getting into an Autonomous car and never looking at the road, but instead doing something else like working on your pc or watching a movie.

The same could be true of a marketing research question. Ultimately the end goal would be to ask the computer a business question in natural language, written or spoken, and have the computer decide what information is already available and what needs to be gathered, then gather it, analyze it, and present the best actionable recommendation possible.

ESOMAR: There are many stories about how smart or stupid AI is. What would be your take on how smart AI is nowadays? What kinds of research tasks can it perform well? Which tasks are hard for bots to take over?

TomHCA: You know I guess I think speed rather than smart. In many cases I can apply a series of other statistical techniques to arrive at a similar conclusion. But it will take A LOT more time. With AI, you can arrive at the same place within milliseconds, even with very big and complex data.

And again, the fact that we choose the technique based on which one takes a few milliseconds less to run, without losing significant accuracy or information really blows my mind.

I tell my colleagues working on this that hey, this can be cool, I bet a user would be willing to wait several minutes to get a result like this. But of course, we need to think about larger and more complex data, and possibly adding other processes to the mix. And of course, in the future, what someone is perfectly happy waiting for several minutes today (because it would have taken hours or days before), is going to be virtually instant tomorrow.

ESOMAR: According to an Oxford study, there is a 61% chance that the market research analyst job will be replaced by robots in the next 20 years. Do you agree or disagree? Why?

TomHCA: Hmm. 20 years is a long time. I’d probably have to agree in some ways. A lot of things are very easy to automate, others not so much.

We’re certainly going to have researchers, but there may be fewer of them, and they will be doing slightly different things.

Going back to my example of autonomous cars for a minute, I think it will take time for us to learn, improve and trust more in automation. At first, autonomous cars will let a human take over at any time. It will be like cruise control is now, an accessory at first. Then we will move more and more toward trusting the individual human actors less and less, and we may even decide to take away the ability for humans to intervene in driving the car, as a safety measure, once we've got enough statistics showing computers are safe. They would have to reach a level of safety way beyond humans for this to happen though, probably 99.99% or more.

Unlike cars though, marketing research usually can’t kill you. So, we may well be comfortable with a far lower accuracy rate with AI here.  Anyway, it’s a nice problem to have I think.

ESOMAR: How do you think research participants will react towards bot researchers?

TomHCA: Theoretically they could work well. Realistically I'm a bit pessimistic. It seems that bots being used for spam, phishing and fraud in a global online wild west (it cracks me up how certain countries think they can control the web and make it safer) is a problem no government or trade organization will be able to prevent.

I'm not too happy when I get a phone call or email about a survey now. But with the slower, more human aspect, it seems a little less dangerous; you have more time to get comfortable with it. I guess I'm playing devil's advocate here, but we already have so many ways to get various interesting data that I think I have time to wait regarding bots. If they truly are going to be very useful and accepted, it will be proven in other industries way before marketing research.

But yes, theoretically it could work well. But then again, almost anything can look good in theory.

ESOMAR: How do you think clients will feel about the AI revolution in our industry?

TomHCA: So, we were recently asked to use OdinText to visualize what the 3,000 marketing research suppliers and clients surveyed for the 2017 GRIT Report thought about why certain companies were or were not innovative. One of the analyses/visualizations we ran, which I thought was most interesting, showed the differences between why clients claimed a supplier was innovative vs. why a supplier said these firms were innovative.

I published the chart on the NGMR blog for those who are interested [ http://nextgenmr.com/grit-2017 ], and the differences couldn’t have been starker. Suppliers kept on using buzzwords like “technology”, “mobile” etc. whereas clients used real end result terms like “know how”, "speed" etc.

So I'd expect to see the same thing here. And certainly, as AI is applied as I said above, and is implemented, we'll stop thinking about it as a buzzword and just go back to talking about the end goal. Something will be faster and better and get you something extra; how it gets there doesn't matter.

Most people have no idea how a gasoline engine works today. They just want a car that will look nice and get them there with comfort, reliability and speed.

After that it’s all marketing and brand positioning.

 

[Thanks for reading today. We’re very interested to hear your thoughts on AI as well. Feel free to leave questions or thoughts below, request info on OdinText here, or Tweet to us @OdinText]

Shop Talk on Research Trends: Our Interview with the Industry’s Top Pundit!

GreenBook Interview Covers Partnering, AI/Machine Learning and the Latest Insights Applications for Text Analytics

“We should be less worried about each other and more worried about the potential new entrants to this industry.”

That's what I told GreenBook Blog Editor-in-Chief Leonard Murphy in an interview recently when he asked me about the trend toward partnering and collaboration between research providers.

It’s not often that one gets to talk shop at length with the industry’s top pundit, so Tim Lynch and I were delighted when Lenny invited us for a frank and broad-based discussion that covered some important ground, including:

  • Why partnering and collaboration among research companies is becoming a critically important factor in today’s marketplace;
  • What the buzz around AI and machine learning is really about and what researchers need to know;
  • How text analytics are being deployed in powerful and novel ways to produce insights that either were not accessible or couldn’t be obtained practically in the past.

Check out Lenny’s post about it here and have a look at the interview below:

 

Special thanks again to Lenny Murphy for a great interview and for your efforts to keep us all informed and to help us get better at what we do!

@TomHCAnderson  - @OdinText

P.S. Want to know more about anything we covered in the interview? Contact us here.

 

About Tom H. C. Anderson

Tom H. C. Anderson is the founder and managing partner of OdinText, a venture-backed firm based in Stamford, CT whose eponymous, patented SaaS platform is used by Fortune 500 companies like Disney, Coca-Cola and Shell Oil to mine insights from complex, unstructured and mixed data. A recognized authority and pioneer in the field of text analytics with more than two decades of experience in market research, Anderson is the recipient of numerous awards for innovation from industry associations such as CASRO, ESOMAR and the ARF. He was named one of the "Four under 40" market research leaders by the American Marketing Association in 2010. He tweets under the handle @tomhcanderson.

 

Why Machine Learning is Meaningless

Beware These Buzzwords! The Truth About "Machine Learning" and "Artificial Intelligence"

Machine learning, artificial intelligence, deep learning… Unless you’ve been living under a rock, chances are you’ve heard these terms before. Indeed, they seem to have become a must for market researchers.

Unfortunately, so many precise terms have never meant so little!

For computer scientists these terms entail highly technical algorithms and mathematical frameworks; to the layman they are synonyms; but as far as most of us should be concerned, increasingly, they are meaningless.

My engineers would severely chastise me if I used these words incorrectly—an easy mistake to make since there is technically no correct or incorrect way to use these terms, only strict and less strict definitions.

Nor, evidently, is there any regulation about how they’re used for marketing purposes.

(To simplify the rest of this blog post, let’s stick with the term “machine learning” as a catch-all.)

Add to this ambiguity the fact that no sane company would ever divulge the specifics underpinning their machine learning solution for fear of intellectual property theft. Still others may just as easily hide behind an IP claim.

Bottom line: It is simply impossible for clients to know what they are actually getting from companies that claim to offer machine learning unless the company is able and chooses to patent said algorithm.

It’s an environment that is ripe for unprincipled or outright deceitful marketing claims.

A Tale of Two Retailers

Not all machine learning capabilities are created equal. To illustrate, let’s consider two fictitious competing online retailers who use machine learning to increase their add-on sales:

  • The first retailer suggests other items that may be of interest to the shopper by randomly picking a few items from the same category as the item in the shopper’s cart.

 

  • The second retailer builds a complex model of the customer, incorporating spending habits, demographic information and historical visits, then correlates that information with millions of other shoppers who have a similar profile, and finally suggests a few items of potential interest by analyzing all of that data.

In this simplistic example, both retailers can claim they use machine learning to improve shoppers’ experiences, but clearly the second retailer employs a much more sophisticated approach. It’s simply a matter of the standard to which they adhere.
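To make the contrast concrete, here is a toy sketch (with an invented catalog and purchase history) of the two approaches. Note that both could be marketed under the same "machine learning" label, even though only the second learns anything from data.

```python
# Toy sketch of the two (hypothetical) retailers above.
import random
from collections import Counter

CATALOG = {"kettle": "kitchen", "toaster": "kitchen", "mug": "kitchen",
           "pillow": "bedroom", "duvet": "bedroom"}

# Retailer 1: random picks from the same category as the cart item.
def recommend_simple(cart_item: str, n: int = 2) -> list[str]:
    same_category = [i for i, c in CATALOG.items()
                     if c == CATALOG[cart_item] and i != cart_item]
    return random.sample(same_category, min(n, len(same_category)))

# Retailer 2 (sketched): co-purchase counts from shopper history, a first step
# toward the profile-based model described above.
HISTORY = [["kettle", "mug"], ["kettle", "toaster"], ["kettle", "mug"], ["pillow", "duvet"]]

def recommend_learned(cart_item: str, n: int = 2) -> list[str]:
    co_bought = Counter(other for basket in HISTORY if cart_item in basket
                        for other in basket if other != cart_item)
    return [item for item, _ in co_bought.most_common(n)]

print(recommend_simple("kettle"), recommend_learned("kettle"))
```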

This is precisely what I’m seeing in the insights marketplace today.

At the last market research conference I attended, I was stunned by how many vendors—no matter what they were selling—claimed their product leveraged advanced machine learning and artificial intelligence.

Many of the products being sold would not even benefit from what I would classify as machine learning because the problems they are solving are so simple.

Why run these data through a supercomputer and subject them to very complicated algorithms only to arrive at the same conclusions you could come to with basic math?

Even if all these companies actually did what they claimed, in many cases it would be silly or wasteful.

Ignore Buzzwords, Focus on Results

In this unregulated, buzzword-heavy environment, I urge you to worry less about what it’s called and focus instead on how the technology solves problems and meets your needs.

At OdinText, we use advanced algorithms that would be classified as machine learning/AI, yet we refrain from using these buzzwords because they don’t really say anything.

Look instead for efficacy, real-world results and testimonials from clients who have actually used the tool.

And ALWAYS ask for a real-time demo with your ACTUAL data!

Yours truly,

@TomHCanderson

P.S. See firsthand how OdinText can help you learn what really matters to your customers and predict real behavior. Contact us for a demo using your own data here!


Why Communicating with Aliens is Easier than You Think – And What It Means for Your Company

The Movie “Arrival,” Text Analytics and Machine Translation

When I speak with prospective OdinText users who’ve been exposed to other text analytics software providers, I find they tend to mention and ask about things like POS tagging, taxonomies, ontologies, etc.

These terms come from linguistics, the discipline upon which many of the text analytics software platforms in the market today are predicated.

But you may be surprised to learn that as a basis for text analytics, linguistics is shockingly inefficient compared to approaches that rely on mathematics/statistics.

One of the most popular movies in theaters right now, “Arrival,” inadvertently makes this case rather well.

Understanding Alien Languages is Easy (Provided You’re Not a Linguist)


“Arrival” begins with a flock of spaceships touching down in locations around the world. Linguistics professor Louise Banks (Amy Adams) is then recruited to lead an elite team of experts in a race against time to find a way to communicate with the extraterrestrial visitors and avert a global war.

The film proceeds to build a lot of drama around a pretty minor problem of language analysis and translation—conveniently consuming several months during which the plot can thicken—when, in fact, the task of understanding an alien language like in the movie would be quite EASY.

I daresay in all modesty that I could have done this in a fraction of the time with OdinText and with a much smaller team than Adams’ character had!


It Only Takes a Few Words

In her first conversation with the aliens, Louise introduces herself by writing the word “human” on a little whiteboard she carries, to which the aliens respond by introducing themselves in their language.

After this initial exchange, in the real world, only a few more words would be necessary to start creating and applying a code book (a taxonomy or ontology in linguistics speak), which would allow one to quickly translate anything else said and to then communicate via a small, imperfect but highly effective vocabulary.

For example, a little later in the movie, one of the aliens tells Louise that another alien who is missing from their meeting that day is “in the death process,” which, of course, means the other alien is absent because he is dying.

Everyone in the audience gets what the alien means by “in the death process.” Indeed, communicating successfully with a small, imperfect vocabulary like this is far more efficient and reliable than one might assume. My two-year-old son and I are quite good at communicating in these sorts of two- or three-word phrases. And no part-of-speech tagging is necessary (nor would it be very helpful here).

I’ll come back to this idea of small, imperfect but surprisingly efficient vocabularies in a bit. But first, let’s consider a related but more challenging matter: breaking code.

How the Allies Used Text Analytics to Break the German Code

Compared to translating an alien language, it would be only slightly more difficult (though honestly not that much more) to crack the Nazi Enigma code today using OdinText, the code whose breaking helped the Allies win WWII.

Why more difficult? Because unlike the aliens in “Arrival,” who actually want the humans to learn their language in order to communicate, the Nazis wanted their encrypted language to stay indecipherable.

BENEDICT CUMBERBATCH stars in THE IMITATION GAME

In the 2014 movie “The Imitation Game,” Benedict Cumberbatch stars as Alan Turing, the genius British mathematician, logician, cryptologist and computer scientist who led the effort to crack the German code.

In contrast to “Arrival,” the drama in “The Imitation Game” centers on Turing’s determination to build a decryption machine, instead of attempting to decode Enigma by hand like every other scientist assigned to the task.

When his boss refuses to fund his machine’s construction, Turing writes to Churchill, who arranges the funding and names him team leader. Turing subsequently fires the key linguists from the project and the linguistic approach to this text analysis (i.e., code breaking) is chucked in favor of computational mathematics.

Turing’s machine is, of course, critical to the solution (though the technology is simple by today’s standards), but the real breakthrough happens when the scientists realize that the machine can be sped up by recognizing routinely used phrases like “Heil Hitler” (again providing a basic code frame or taxonomy).

The Turing Test: Did You Know You Were Talking to a Computer?

In computer engineering classes on artificial intelligence there is an oft-mentioned thought experiment called “The Chinese Room,” which is used to think about the differences between human and computer cognition. It’s often referenced when discussing the Turing Test, which assesses computer intelligence based on whether a human being can distinguish between a computer and a human being’s replies to the same questions.

Going back now to my earlier point about a small taxonomy being sufficient for communication, and keeping in mind that today’s far more powerful computers running Google Translate or OdinText can process unstructured text data in any language orders of magnitude faster than any human or Turing’s machine, I think The Chinese Room analogy is not just an interesting AI thought experiment, but a good way to explain why translating the alien language in “Arrival” should have been so much easier than the film made it out to be.

The Chinese Room

Imagine for a moment a room with no windows, only a door with a small mail slot.

In the room, we find an average English speaker recruited randomly off the street, someone without any advanced education or background in foreign languages or linguistics.

This person has been paid to spend the day in this room and given a code book for a “squiggly language” he/she has been tasked with translating. In the story, it’s typically Chinese, but it could be any foreign language with which the person is totally unfamiliar. Let’s assume Chinese to stay close to the original story.

After giving him/her this code book—basically an English-to-Chinese/Chinese-to-English dictionary—we tell this person that on occasion we may pass them a note written in Chinese and that they will need to use the code book to figure out what the message means in English. Likewise, if they need anything—water, food, bathroom break, etc.—they will need to pass the request in a note written in Chinese back through the mail slot to us.

Note that this person has ABSOLUTELY NO TRAINING in the syntax or grammar of Chinese. His/her notes may be rudimentary, but certainly they will still be understood.

What’s more, if a native Chinese speaker walked by and observed the notes coming out, they would probably assume that there was a Chinese speaker in the room.

Now, instead of a code book, suppose the person in the room was using a computer program like Google Translate or OdinText, which can instantaneously translate or otherwise process any number of words coming out of the room, making it even more likely that the Chinese-speaking passerby assumes the person in the room speaks Chinese.

Think about this the next time you’re wondering whether data translated by machine—which is so much faster and cheaper than human translation—is sufficient for text analytics purposes (i.e. understanding what hundreds or hundreds of thousands of humans are saying in some foreign language).

My strong belief is yes, definitely. Whether I’m looking at Swedish or Chinese, I’m always rather impressed by how on point today’s computer translation is, and how little the lost nuance matters, especially at the aggregate level, which is usually where we need to be.

You don’t need a team of NASA scientists, nor a month to do it. You can have it ready by morning! The technology is already here!

@TomHCAnderson

To learn more about how OdinText can help you learn what really matters to your customers and predict real behavior here on Earth, please contact us or request a FREE demo using your own data here!

[Key Terms: AI, Artificial Intelligence, Machine Translation, Text Analytics, Linguistics, Computational Linguistics, Taxonomies, Ontologies, Natural Language Processing, NLP]


Tom H. C. Anderson OdinText Inc. www.odintext.com

ABOUT ODINTEXT

OdinText is a patented SaaS (software-as-a-service) platform for advanced analytics. Fortune 500 companies such as Disney and Shell Oil use OdinText to mine insights from complex, unstructured text data. The technology is available through the venture-backed Stamford, CT firm of the same name founded by CEO Tom H. C. Anderson, a recognized authority and pioneer in the field of text analytics with more than two decades of experience in market research. Anderson is the recipient of numerous awards for innovation from industry associations such as ESOMAR, CASRO, the ARF and the American Marketing Association. He tweets under the handle @tomhcanderson.