Posts tagged Political Polling
Trump’s Brand Positioning One Year In

State of the POTUS: Text Analytics Reveals the Reasons Behind Trump's Approval Ratings

Over the past few weeks we’ve heard political pundits on all the major news networks chime in on how Trump is doing one year after taking office. Part of the discussion is about what he has and hasn’t done, but an even bigger part continues to be about how he is perceived, both domestically and abroad, and the available opinion/approval polling is very grim. Many polls have Trump as the president with the lowest approval ratings in history.

Sadly, political polling, including approval ratings, tells us absolutely nothing about the underlying causes of the ratings. Therefore, I thought I’d share our findings in this area. Utilizing our text analytics software, OdinText, we have been tracking not just sentiment related to Trump but, more importantly, the positioning of 40+ topics/themes that are important predictors of that sentiment. In the brief analysis below I won’t have time to go into each of the attributes we have identified as important drivers; instead, I will focus on a few of the areas that have seen the most change for Trump during the past year.
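(For the analytically inclined: below is a rough, purely illustrative sketch of what “topics as predictors of sentiment” can look like mechanically—binary topic flags regressed against a binary sentiment label. The topic names, weights, and data are all made up, and this is not OdinText’s actual model, which is proprietary.)

```python
# A minimal, hypothetical key-driver sketch: 0/1 topic flags predicting a
# binary sentiment label via logistic regression. All names and numbers here
# are invented for illustration; this is not OdinText's algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 3000
topics = ["maga", "taxes", "the_wall", "dishonesty", "hate_racism"]
true_weights = np.array([0.8, 0.5, 0.3, -1.2, -1.5])    # assumed effects

X = rng.integers(0, 2, size=(n, len(topics)))           # did comment i mention topic j?
p_positive = 1 / (1 + np.exp(-(X @ true_weights - 0.2)))
y = rng.random(n) < p_positive                          # 1 = positive sentiment

model = LogisticRegression().fit(X, y)
for topic, coef in sorted(zip(topics, model.coef_[0]), key=lambda t: -abs(t[1])):
    print(f"{topic:12s} {coef:+.2f}")   # sign = direction, size = strength of driver
```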

How has the opinion of Trump changed in the minds of the American people?

By looking at Trump’s positioning just before he took office (with all the campaign positioning fresh in people’s minds), comparing it to half a year into his term, and again now a full year in, we can get a good idea of the impact various issues have on approval ratings and, even more importantly, on positioning.

Let’s start by looking back to just before he was elected. OdinText’s AI uncovered the 15 most significant changes in perception between the eve of Trump’s election win and now. Trump has fallen on 11 of these attributes and risen on 4.

Trump Pre-Election Positioning vs. One Year In

[Chart: Trump pre-election positioning vs. one year in]

If we compare Trump just before the election with Trump today, we see several key differences. More recently, four themes have become more important in describing what Trump stands for in the minds of Americans when we include everyone (both those who like and dislike him). These newer positions are “Less Regulation,” “Healthcare Reform,” “Money/Greed,” and “Dishonesty.” Interestingly, text analytics reveals that one of the important issues seems to be changing: Trump’s supporters are now more likely to use the term “Healthcare Reform” rather than the earlier “Repeal Obamacare.”

Other than the Obamacare repeal issue, prior to the election Trump was more likely to be associated in the minds of Americans with “Gun Rights,” “Honesty,” “Trade Deals,” “Change,” support for “Pro Life,” pro and con “Immigration”-related issues including “The Wall,” and finally his slogan “MAGA” (Make America Great Again).

The decrease in relevance of many of these issues has to do with pre-election positioning, both by the Trump/Republican Party and by the Democrats’ counter-positioning of him. After the election, some of these, like “Gun Control,” have seemingly become less important for various reasons.

Five Months from Record Low

If we look at changes between this past summer and now, there has been significantly less movement in his positioning in American minds. He has seen a slight but significant bump in overall positive emotional sentiment/joy and in the MAGA positioning, as well as on taxes, the economy, and The Wall, while “Anger” and “Hate/Racism,” which peaked this summer, have decreased.

[Chart: Trump positioning over time (OdinText)]

His lowest point so far in the minds of Americans came around the white nationalist rally in Charlottesville on August 12, 2017. Trump’s positioning as a hatemonger was almost as high as the weekend before the election, while positive emotional sentiment and “MAGA” among his supporters were simultaneously at an all-time low.

Since the August low, Trump does appear to have rebounded somewhat. And while one year into office many believe the one thing Trump now stands for is himself, greed and money are a lesser evil in America than hate and racism.

It seems that one year into office, at least for now, the economy and tax cuts are giving Trump a bit of a bump back to pre-election levels in the minds of many Americans.

I’m not sure what the future holds in this case, but I hope you, like me, found some of the underlying reasons for his approval ratings interesting. These are, after all, more important than the ratings themselves, because these reasons are levers that can be pulled to affect the final outcomes and positioning of any brand, including that of a POTUS.

@TomHCAnderson

[Note: Curious whether OdinText’s new AI can help you understand what drives your brand’s ratings? Request more info or early access to our brand new release here]

How Fear of Frexit Helped Macron Win the French Presidential Election
NEW Text Analytics Poll™ Shows a Trump-Style Le Pen Upset May Have Been Averted by Overwhelming Opposition to a Frexit

Last week on this blog, I reported findings from a Text Analytics Poll™ of 3,000 French citizens showing that Marine Le Pen’s positioning going into the runoff looked remarkably similar to that of another recent underdog candidate, Donald Trump, just days before his stunning U.S. election upset.

Indeed, a similar set of circumstances appeared to be in play, as noted by the New York Times in an article on Election Day: “Populist anger at the political establishment; economic insecurity among middle class voters; public alienation toward mainstream political parties; rising resentment toward immigrants.”

Yet on Sunday, the French people elected Emmanuel Macron president over Le Pen by about 66/34. So why wasn’t the race closer?

The answer may lie in data we collected from French and British respondents, which show that the prospect of a Le Pen “Frexit” probably figured heavily in Macron’s victory.

Positioning: Voting Against a Candidate

Our data in the French presidential poll were eerily reminiscent of data we collected prior to the U.S. election, which suggested a victory may not so much amount to an endorsement of one candidate as a rejection of the other.

Our analysis showed that first and foremost, the French associated Le Pen with bigotry and hatemongering, but text analysis also showed that among the French she was strongly positioned around immigration reform and putting France first—a platform that worked effectively for Trump, who had also been labeled a bigot in the minds of many Americans. In fact, the perception of Trump as a bigot was only slightly lower among Americans than the perception of Le Pen as a bigot among the French (11% vs 15%, respectively).

In contrast, respondents most frequently associated Macron with “liberalism”—meaning economic liberalism favoring free markets—followed by capitalism, neither of which is necessarily an asset in French politics, particularly for a wealthy investment banker at a time when job security is a major concern among middle-class voters.

But the main platform issue that people associated with Macron—which trailed just behind people’s view of him as a proponent of free markets/capitalism—was Europe/EU, in stark contrast to Le Pen, who was well known to strongly favor an EU “Frexit.” The EU is also synonymous with the free movement of commerce and people, which, of course, stands in contrast to the dual protectionist/anti-immigration platform championed by Le Pen.

This, naturally, raised the question: How important is EU membership to the French population?

If the mood of the French electorate were anything like that of British Brexit voters, then favoring EU membership could be a liability. So, just days ahead of the election, we ran a second Text Analytics Poll™—once again a single question—only this time we polled 3,000 voters each in France and the UK:

  1. “What does the European Union mean to you?” (or “Qu'est ce que l'Union Européenne représente pour vous?” in French).

EU Membership Means “Hope”

It’s worth noting that turnout for this election was reportedly the lowest in 36 years. The abstainers were presumably voters who never would’ve cast a ballot for Le Pen, but who also could not be mobilized for Macron. In short, they were Macron’s to lose.

This new poll data helps explain why, in spite of inspiring lackluster confidence and support from anti-Le Pen voters, Macron nonetheless won the election by a sizable margin.

[Chart: What the EU means to respondents—UK vs. France]

While a significant number of the French tell us the EU means nothing to them, that share is significantly lower than among the Brits.

Conversely, the French are more than five times as likely as Brits to say the EU means “Everything/A Lot” to them. The French are also far less likely than their UK counterparts to criticize the EU for corruption, wastefulness and such.

Instead, the French are extremely optimistic about the EU, with many indicating it provides “future hope” and keeps them out of wars and at “Peace” —something Brits are more likely to attribute to NATO.

High Positive Emotions for EU

Ultimately, emotions are what really drive behavior, and in the end, the French electorate’s highly positive emotional disposition toward the EU—notably their “Anticipation” and hopefulness—may have countered Macron’s relatively weak positioning in this election.

[Chart: Emotions toward the EU]

Closing Thoughts

I read some responses to our original analysis that I’d characterize as emotionally overwrought. I understand that this is an occupational hazard for anyone conducting political opinion research, but our duty is to present and report objectively what the data tells us—even if what we’re seeing in the data isn’t necessarily pleasant.

The job of these polls was to assess the candidates’ brand positioning in the minds of voters, and to review the potential opportunities and threats in the “marketplace” as we would for any brand.

I want to stress that I am not discounting people’s distaste for Marine Le Pen’s perceived bigotry as being a key factor behind her loss in this election, but I’ll emphasize again that it was only slightly higher (15% vs 11%) than what we saw for Donald Trump, who, as you know, is now the President of the United States.

And at the end of the day, the hard truth is that more than a third of those who voted in this election voted for a right-wing nationalist—a candidate whose background makes Donald Trump look like a civil rights activist by comparison. Moreover, 25% of the electorate were not sufficiently affronted by Madame Le Pen’s politics to at least vote against her by voting for Macron; instead, they just abstained.

Like many people, I am relieved by the outcome of this election. But it seems clear from the positioning of both candidates—as reported by French citizens, unaided, in their own words—and from the EU membership data in our second poll that the French people did not simply reject Marine Le Pen because she is positioned as a racist/hatemonger; she was also on the wrong side of Frexit.

@TomHCAnderson

*Note: n=3,000 responses were collected via Google Surveys, 3/3-5/5 2017. Google Surveys allow researchers to reach a validated, French general-population-representative sample by intercepting people attempting to access high-quality online content or who have downloaded the Google Opinion Rewards mobile app. Results have a margin of error of +/- 2.51% at the 95% confidence level.
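(A side note for fellow researchers: the +/- figures quoted in notes like this one follow the standard margin-of-error formula for a proportion, MOE = z·sqrt(p(1−p)/n), with the conservative p = 0.5 and z = 1.96. A quick sketch—worth noting that the quoted 2.51% is closest to what the formula yields for a sample of roughly 1,500 rather than 3,000:)

```python
# Margin of error for a reported proportion at 95% confidence, using the
# conservative p = 0.5: MOE = z * sqrt(p * (1 - p) / n), with z = 1.96.
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    return z * sqrt(p * (1 - p) / n)

for n in (3000, 1500, 500):
    print(f"n={n:4d}: +/- {margin_of_error(n):.2%}")
# n=3000: +/- 1.79%   n=1500: +/- 2.53%   n= 500: +/- 4.38%
```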


About Tom H. C. Anderson

Tom H. C. Anderson is the founder and managing partner of OdinText, a venture-backed firm based in Stamford, CT whose eponymous, patented SaaS platform is used by Fortune 500 companies like Disney, Coca-Cola and Shell Oil to mine insights from complex, unstructured and mixed data. A recognized authority and pioneer in the field of text analytics with more than two decades of experience in market research, Anderson is the recipient of numerous awards for innovation from industry associations such as CASRO, ESOMAR and the ARF. He was named one of the “Four under 40” market research leaders by the American Marketing Association in 2010. He tweets under the handle @tomhcanderson.

When Oprah is President We Can Celebrate Family Day While Skiing!

Text Analytics Poll™ Shows What We’d Like Instead of Presidents Day

It’s been less than a week since our Valentine’s Day poll unearthed what people dislike most about their sweethearts, and already another holiday is upon us! Though apparently for most of us it’s not much of a holiday at all; well over half of Americans say they do nothing to commemorate ‘Presidents Day.’

You’ll note I put the holiday in single quotes. That’s because there’s some confusion around the name. Federally, it’s recognized as Washington’s Birthday. At the state level, it’s known by a variety of names—President’s Day, Presidents’ Day, Presidents Day and others, again, depending on the state.

But the name is not the only inconsistency about Presidents Day. If you’re a federal employee OR you happen to be a state employee in a state where the holiday is observed OR you work for an employer who honors it, you get the day off work with pay. Schools may or may not be closed, but that again depends on where you live.

As for what we’re observing exactly, well, that also depends on the state, but people generally regard the holiday as an occasion to honor either George Washington, alone, or Washington and Abraham Lincoln, or U.S. presidents, in general.

Perhaps the one consistent aspect of this holiday is the sales? It’s particularly popular among purveyors of automobiles, mattresses, and furniture.

Yes, it’s a patriotic sort of holiday, but on the whole, we suspected that ‘Presidents Day’ fell on the weaker end of the American holiday spectrum, so we investigated a little bit…

About this Text Analytics Poll™

In this example for our ongoing series demonstrating the efficiency, flexibility, and practicality of the Text Analytics Poll™ for insights generation, we opted for a light-hearted poll using a smaller sample* than usual. While text analytics have obvious value when applied to larger-scale data where human reading or coding is impossible or too expensive, you’ll see here that OdinText also works very effectively with smaller samples!

I’ll also emphasize that the goal of these little Text Analytics Polls™ is not to conduct a perfect study, but to very quickly design and field a survey with only one open-ended question, analyze the results with OdinText, and report the findings here on this blog. (The only thing that takes a little time—usually 2-3 days—is the data collection.)

So while the research is representative of the general online population, and the text analytics coding was applied with 100% consistency throughout the entire sample, this very speedy exercise is meant to inspire users of OdinText to use the software in new ways to answer new questions. It is not meant to be an exhaustive exploration of the topic. We welcome our users to comment, suggest improvements in the questions we ask, and propose future topics!

Enough said, on to the results…

A Holiday In Search of a Celebrant in Search of a Holiday…

Poll I: Americans Celebrate on the Slopes, Not in Stores

When we asked Americans how they typically celebrate Presidents Day, the vast majority told us they don’t. And those few of us lucky enough to have the day off from work tend to not do much outside of sleeping.

But the surprise came from those few who actually said they do something on Presidents Day!

We expected people to say they go shopping on Presidents Day, but the most popular activity mentioned (after nothing and sleeping) was skiing! And skiing was followed by 2) barbecuing and 3) spending time with friends—not shopping.

Poll II: Change it to Family Day?

So, maybe as far as holidays go, Presidents Day is a tad lackluster? Could we do better?

We asked Americans:

Q. If we could create a new holiday instead of Presidents Day, what new holiday would you suggest we celebrate?

While some people indicated Presidents Day is fine as is, among those who suggested a new holiday there was no shortage of creativity!

The three most frequently mentioned ideas by large margins for replacement of Presidents Day were 1) Leaders/Heroes Day, 2) Native American Day (this holiday already exists, so maybe it could benefit from some publicity?) and 3) Family Day (which is celebrated in parts of Canada and other countries).

People also seemed to like the idea of shifting the date and making a holiday out of other important annually occurring events that lend themselves to a day off in practical terms, like Election Day, Super Bowl Monday and, my personal favorite, Taxpayer Day on April 15!

Poll III: From Celebrity Apprentice to Celebrity POTUS

Donald Trump isn’t the first person in history to have not held elected office before becoming president, but he is definitely the first POTUS to have had his own reality TV show! Since it’s Presidents Day, we thought it might be fun to see who else from outside of politics might interest Americans…

 Q: If you could pick any celebrity outside of politics to be President, who would it be?

 

Looks like we could have our first female president if Oprah ever decides to run. The media mogul’s name just rolled off people’s tongues, followed very closely by George Clooney, with Morgan Freeman in a respectable third.

Let Them Tell You in Their Own Words

In closing, I’ll remind you that none of these data were generated by a multiple-choice instrument, but via unaided text comments from people in their own words.

What never ceases to amaze me about these exercises is how even when we give people license to say whatever crazy thing they can think up—without any prompts or restrictions—people often have the same thoughts. And so open-ends lend themselves nicely to quantification using a platform like OdinText.

If you’re among the lucky folks who have the holiday off, enjoy the slopes!

Until next time, Happy Presidents Day!

@TomHCAnderson

PS.  Do you have an idea for our next Text Analytics Poll™? We’d love to hear from you. Or, why not use OdinText to analyze your own data!

[*Today’s OdinText Text Analytics Poll™ sample of n=500 U.S. online representative respondents was sourced through Google Surveys. The sample has a margin of error of +/- 4.38% at the 95% confidence level. Larger samples have a smaller margin of error; subgroup analyses within the sample have a larger one.]


Poll: What Other Countries Think of Trump’s Immigration Order

Text Analytics Poll™ Shows Australians, Brits, and Canadians Angry About Executive Order Temporarily Barring Refugees (Part II of II)

In my previous post, we compared text analysis of results from an open-ended survey instrument with a conventional Likert-scale rating poll to assess where 3,000 Americans really stand on President Trump’s controversial executive order temporarily barring refugees and people from seven predominantly Muslim countries from entering the U.S.

Today, we’re going to share results from an identical international study that asked approximately 9,000 people—3,000 from each of three other countries—what they think about the U.S. immigration moratorium ordered by President Trump.

But first, a quick recap…

As I noted in the previous post, polling on this issue has been pretty consistent insomuch as Americans are closely divided in support/opposition, but the majority position flips depending on the poll. Consequently, the accuracy of polling has again been called into question by pundits on both sides of the issue.

By fielding the same question first in a multiple-choice response format and a second time providing only a text comment box for responses, and then comparing results, we were able to not only replicate the results of the former but gain a much deeper understanding of where Americans really stand on this issue.

Text analysis confirmed a much divided America, with those opposing the ban just slightly outnumbering those who support the order (42% vs. 39%). Almost 20% of respondents had no opinion or were ambivalent on this issue.

Bear in mind that text analysis software such as OdinText enables us to process and quantify huge quantities of comments (in this case, more than 1,500 replies from respondents using their own words) in order to arrive at the same percentages one would get from a conventional multiple-choice survey.

But the real advantage to using an open-ended response format (versus a multiple-choice) to gauge opinion on an issue like this is that the responses also tell us so much more than whether someone agrees/disagrees or likes/dislikes. Using text analytics we uncovered people’s reasoning, the extent to which they are emotionally invested in the issue, and why.
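(To make this concrete, here is a deliberately crude sketch of how open-ended comments can be rolled up into support/oppose/unsure percentages. The keyword cues are hypothetical and far simpler than a real coding taxonomy; OdinText’s actual approach is proprietary.)

```python
# A crude sketch of quantifying stance from open-ended comments with keyword
# rules. The cue lists below are hypothetical and for illustration only.
import re
from collections import Counter

SUPPORT = re.compile(r"\b(agree|support|great|good idea|about time|love it)\b", re.I)
OPPOSE = re.compile(r"\b(disagree|oppose|wrong|disgust\w*|racist|un-?american)\b", re.I)

def stance(comment):
    s, o = bool(SUPPORT.search(comment)), bool(OPPOSE.search(comment))
    if s and not o:
        return "support"
    if o and not s:
        return "oppose"
    return "unsure/mixed"   # no cues, or conflicting cues

comments = ["I agree, about time!", "It's disgusting and wrong.", "not my circus"]
counts = Counter(stance(c) for c in comments)
for label, n in counts.items():
    print(f"{label:13s} {n / len(comments):.0%}")
```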

Today we will be looking a little further into this topic with data from three additional countries: Australia, Canada and the UK.

A note about multi-lingual text analysis and the countries selected for this project…

Different software platforms handle different languages with various degrees of proficiency. OdinText analyzes most European languages quite well; however, analysis of Dutch, German, Spanish or Swedish text requires proficiency in said language by the analyst. (Of course, translated results, including and especially machine-translated results, work very well with text analytics.)

Not coincidentally, each of the countries represented in our analysis has an English-speaking population. But this was not the primary reason we chose them; each of these countries has frequently been mentioned in news coverage related to the immigration ban: the UK because of Brexit, Australia because of a leaked telephone call between President Trump and its Prime Minister, and Canada because of its shared border and its Prime Minister’s comments welcoming refugees affected by the immigration moratorium.

As with our previous U.S. population survey, we used a nationally representative sample of n=3,000 for each of these countries.

Opposition Highest in Canada, Lowest in the UK

It probably does not come as a surprise to anyone who’s been following this issue in the media that citizens outside of America are less likely to approve of President Trump’s immigration moratorium.

I had honestly expected Australians to be the most strongly opposed to the order in light of the highly publicized, problematic telephone call transcript leaked last week between President Trump and the Australian Prime Minister (which, coincidentally, involved a refugee agreement). But interestingly, people from our close ally and neighbor to the north, Canada, were most strongly opposed to the executive order (67%). The UK had proportionately fewer opposing the ban than Australia (56% vs. 60%), but opposition in both countries significantly lagged the Canadians.

Emotions Run High Abroad

Deriving emotions from text is an interesting and effective measure for understanding people’s opinions and preferences (and more useful than the “sentiment” metrics often discussed in text analytics and, particularly, in social media monitoring circles).

The chart below features OdinText’s emotional analysis of comments for each of the four countries across what most psychologists agree constitute the eight major emotion categories:

We can see that while the single highest emotion in American comments is joy/happiness, the highest emotion in the other three countries is anger. Canadians are angriest. People in the UK and Australia exhibit somewhat greater sadness and disgust in their comments. Notably, disgust is an emotion we rarely see outside of food categories. Here it takes the form of vehement rejection, with terms such as “sickened,” “revolting,” “vile,” and, very often, “disgusted.” It is also worth noting that in some cases people directed their displeasure at President Trump personally.

Examples:

"Trump is a xenophobic, delusional, and narcissistic danger to the world." – Canadian (anger) “Most unhappy - this will worsen relationships between Muslims and Christians.” – Australian (sadness) "It's disgusting. You can't blame a whole race for the acts of some extremists! How many white people have shot up schools and such? Isn't that an act of terror? Ban guns instead. He's a vile little man.” –Australian (disgust)

UK comments contain the highest levels of fear/anxiety:

"I am outraged. A despicable act of racism and a real worry for what political moves may happen next." – UK (fear/anxiety)

That said, it is also important to point out that there is a sizeable group in each country whose expressions of agreement rise to the level of joy:

“Great move! He should stop all people that promote beating of women” – Australian (joy)

“Sounds bloody good would be ok for Australia too!” – Australian (joy)

“EXCELLENT. Good to see a politician stick by his word” – UK (joy)

“About time, I feel like it's a great idea, the United States needs to help their own people before others. If there is an ongoing war members of that country should not be allowed to migrate as the disease will spread.” – Canadian (joy)
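(For readers curious how comments like these get mapped to Plutchik’s eight emotion categories, here is a minimal lexicon-based sketch. The tiny inline lexicon is hypothetical; a real analysis would use a full resource such as the NRC Emotion Lexicon, and OdinText’s own method may differ.)

```python
# Minimal sketch of lexicon-based emotion tagging over Plutchik's categories.
# The inline lexicon is illustrative only; use a full emotion lexicon in practice.
from collections import Counter

LEXICON = {
    "sickened": "disgust", "revolting": "disgust", "vile": "disgust",
    "disgusted": "disgust", "outraged": "anger", "worry": "fear",
    "unhappy": "sadness", "great": "joy", "excellent": "joy", "hope": "anticipation",
}

def emotions(comment):
    tokens = comment.lower().replace(",", " ").replace(".", " ").split()
    return Counter(LEXICON[t] for t in tokens if t in LEXICON)

corpus = ["I am outraged, a real worry.", "EXCELLENT. Great move!", "He's a vile little man."]
totals = Counter()
for c in corpus:
    totals += emotions(c)
print(totals)  # e.g. Counter({'joy': 2, 'anger': 1, 'fear': 1, 'disgust': 1})
```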

Majority of Canadians Willing to Take Refugee Overflow

Given Canada’s proximity to the U.S., and since people from Canada were the most strongly opposed to President Trump’s executive order, this raised the question of whether Canadians would support a measure to absorb refugees who would be denied entry to the U.S., as Prime Minister Justin Trudeau appears to support.

(Note: In a Jan. 31 late-night emergency debate, the Canadian Parliament did not increase its refugee cap of 25,000.)

 

A solid majority of Canadians would support such an action, although it’s worth noting that there is a significant gap between the share of Canadians who oppose the U.S. immigration moratorium (67%) and the share who indicated they would be willing to admit the refugees affected by the policy.

When asked a follow-up question on whether “Canada should accept all the refugees which are turned away by USA's Trump EO 13769,” only 45% of Canadians agreed with such a measure, 33% disagreed and 22% said they were not sure.

Final Thoughts: How This Differs from Other Polls

Both the U.S. and the international versions of this study differ significantly from any other polls on this subject currently circulating in the media because they required respondents to answer the question in a text comment box in their own words, instead of just selecting from options on an “agree/disagree” Likert scale.

As a result, we were able to not only quantify support and opposition around this controversial subject, but also to gauge respondents’ emotional stake in the matter and to better understand the “why” underlying their positions.

While text analysis allows us to treat qualitative/unstructured data quantitatively, it’s important to remember that including a few quotes in any analysis can help profile and tell a richer story about your data and analysis.

We also used a substantially larger population sample for each of the countries surveyed than any of the conventional polls I’ve seen cited in the media. Because of our triangulated approach and the size of the sample, these findings are in my opinion the most accurate numbers currently available on this subject.

I welcome your thoughts!

@TomHCAnderson - @OdinText


What Americans Really Think about Trump’s Immigration Ban and Why

Text Analysis of What People Say in Their Own Words Reveals More Than Multiple-Choice Surveys

It’s been just over a week since President Trump issued his controversial immigration order, and the ban continues to dominate the news and social media.

But while the fate of Executive Order 13769—“Protecting the Nation from Foreign Terrorist Entry into the United States”—is being hashed out in federal court, another fierce battle is being waged in the court of public opinion.

In a stampede to assess where the American people stand on this issue, the news networks have rolled out a parade of polls. And so, too, once again, the accuracy of polling data has been called into question by pundits on both sides of the issue.

Notably, on Monday morning the president, himself, tweeted the following:

Any negative polls are fake news, just like the CNN, ABC, NBC polls in the election. Sorry, people want border security and extreme vetting.

— Donald J. Trump (@realDonaldTrump) February 6, 2017

Majority Flips Depending on the Poll

It’s easy to question the accuracy of polls when they don’t agree.

Although on the whole these polls all indicate that support is pretty evenly divided on the issue, the all-important sound bite of where the majority of Americans stand on the Trump immigration moratorium flips depending on the source:

  • NBC ran with an Ipsos/Reuters poll that found the majority of Americans (49% vs. 41%) support the ban.

  • Fox News went with similar results from a poll by Quinnipiac College (48% in favor vs. 42% opposed).

  • CNN publicized results from an ORC Poll with the majority opposed to the ban (53% vs. 47%).

  • A widely reported Gallup poll found the majority of Americans oppose the order (55% to 42%).

There are a number of possible reasons for these differences, of course. It could be the way the question was framed (as suggested in this Washington Post column); it could be the timing (much has transpired and has been said between the dates these polls were taken); maybe the culprit is sample; perhaps modality played a part (some were done online, others by phone with an interviewer), etc.

My guess is that all of these factors to varying degrees account for the differences, but the one thing all of these polls share is that the instrument was quantitative.

So, I decided to see what if anything happens when we try to “unstructure” this question, which seemingly lends itself so perfectly to a multiple-choice format. How would an open-ended version of the same question compare with the results from the structured version? Would it add anything of value?

Part I: A Multiple-Choice Benchmark

The first thing we did was to run a quantitative poll as a comparator using a U.S. online nationally representative sample* of n=1,531 (a larger sample, by the way, than any of the aforementioned polls used).

In carefully considering how the question was framed in the other polls and how it’s being discussed in the media, we decided on the following wording:

“Q. How do you personally feel about Trump's latest Executive Order 13769 ‘Protecting the Nation from Foreign Terrorist Entry into the United States’ aka ‘A Muslim Ban’”?

We also went with the simplest and most straightforward closed-ended Likert scale—a standard five-point agreement scale. Below are the results:

[Chart: Likert-scale results—Executive Order 13769]

Given a five-point scale, the most popular answer by respondents (36%) was “strongly disagree.” Interestingly, the least popular choice was “somewhat disagree” (6.6%).

Collapsing “strongly” and “somewhat” (see chart below), we found that 4 percentage points more Americans (43%) disagree with Trump’s Executive Order than agree with it (39%). A sizeable number (18%) indicated they aren’t sure/don’t know.

[Chart: Collapsed agree/disagree results]
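(The collapsing step itself is trivial; here is a minimal sketch with made-up counts chosen to roughly match the shares reported above:)

```python
# Minimal sketch of collapsing a five-point Likert item into net agree/disagree.
# The counts are invented to approximate the percentages reported in this post.
import pandas as pd

raw = pd.Series({"Strongly agree": 420, "Somewhat agree": 177, "Not sure": 276,
                 "Somewhat disagree": 101, "Strongly disagree": 557})  # n = 1,531
net = {
    "Agree": raw["Strongly agree"] + raw["Somewhat agree"],
    "Not sure": raw["Not sure"],
    "Disagree": raw["Somewhat disagree"] + raw["Strongly disagree"],
}
for label, n in net.items():
    print(f"{label:9s} {n / raw.sum():.0%}")   # Agree 39%, Not sure 18%, Disagree 43%
```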

Will It Unstructure? - A Text Analytics Poll™

Next, we asked another 1,500 respondents from the same U.S. online nationally representative source* EXACTLY the same question, but instead of providing choices to select from, we asked them to reply in an open-ended comment box in their own words.

We ran the resulting comments through OdinText, with the following initial results:

[Chart: Open-ended results analyzed with OdinText]

As you can see, the results from the unstructured responses were remarkably close to those from the structured question. In fact, the open-ended responses suggest Americans are slightly closer to equally divided on the issue, though slightly more disagree (a statistically significant difference given the sample size).
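(To make “remarkably close” concrete, here is a sketch of the standard two-proportion z-test applied to the disagree shares of the two independent samples. The counts are approximations backed out of the percentages reported in these posts—43% of the 1,531 structured respondents and roughly 42% of the 1,500 open-ended respondents.)

```python
# A quick two-proportion z-test: is the gap between the two samples' disagree
# shares bigger than sampling noise? Counts are approximate reconstructions.
from math import sqrt

def two_prop_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                     # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))    # pooled standard error
    return (p1 - p2) / se

print(f"z = {two_prop_z(658, 1531, 630, 1500):.2f}")
# z comes out around 0.5, well under 1.96: the two formats agree within sampling error.
```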

This, however, is where the similarities between unstructured and structured data end.

While there is nothing more to be done with the Likert scale data, the unstructured question data analysis has just begun…

Low-Incidence Insights are Hardly Incidental

It’s worth noting here that OdinText was able to identify and quantify many important but low-incidence insights—positive and negative—that would have been treated as outliers in a limited code frame and dismissed by human coders:

  • “Embarrassment/Shame” (0.2%)

  • “Just Temporary” (0.5%)

  • “Un-American” (0.9%)

  • “Just Certain/Specific Countries” (0.9%)

  • “Unconstitutional/Illegal” (2%)

  • “Not a Muslim Ban/Stop Calling it that” (2.9%)

An Emotionally-Charged Policy

[Chart: Emotional sentiment analysis—Trump immigration order]

It shouldn’t come as a surprise to anyone that emotions around this particular policy run exceptionally high.

OdinText quickly quantified the emotions expressed in people’s comments, and you can see that while there certainly is a lot of anger—negative comments are spread across anger, fear/anxiety and sadness—there is also a significant amount of joy.

What the heck does “joy” entail, you ask? It means that enough people expressed unbridled enthusiasm for the policy along the lines of, “I love it!” or “It’s about time!” or “Finally, a president who makes good on his campaign promises!”

Understanding the Why Behind People’s Positions

Last, but certainly not least, asking the same question in an open-ended format where respondents can reply in their own words enables us to also understand why people feel the way they do.

We can then quantify those sentiments using text analytics and see the results in context in a way that would not have been possible using a multiple-choice format.

Here are a few examples from those who disagree with the order:

  • “Just plain wrong. It scored points with his base, but it made all Americans look heartless and xenophobic in the eyes of the world.”

  • “Absolutely and unequivocally unconstitutional. The foundation, literally the reason the first European settlers came to this land, was to escape religious persecution.”

  • “I don't like and it was poorly thought out. I understand the need for vetting, but this was an absolute mess.”

  • “I think it is an overly confident action that will do more harm than good.”

  • “I understand that Trump's intentions mean well, but his order is just discriminating. I fear that war is among us, and although I try my best to stay neutral, it's difficult to support his actions.”

Here are a few from those who agree:

  • “I feel it could have been handled better but I agree. Let’s make sure they are here documented correctly and backgrounds thoroughly checked.”

  • “I feel sometimes things need to be done to demonstrate seriousness. I do feel bad for the law abiding that it affects.”

  • “Initially I thought it was ridiculous, but after researching the facts associated with it, I'm fine with it. Trump campaigned on increasing security, so it shouldn't be a surprise. I think it is reasonable to take a period of time to standardize and enforce the vetting process.”

  • “I feel that it is not a bad idea. The only part that concerns me is taking away from living the American Dream for those that aren’t terrorists.”

  • “good but needed more explanation”

  • “OK with it - waiting to see how it pans out over the next few weeks”

  • “I think it is good, as long as it is temporary so that we can better vet those who would come to the U.S.”

And, just as importantly yet often overlooked, those who aren’t completely sure:

  • “not my circus”

  • “While the thought is good and just for our safety, the implementation was flawed, much like communism.”

Final Thoughts: What Have we Learned?

First of all, we saw that the results in the open-ended format replicated those of the structured question. With a total sample of 3,000, these results are statistically significant.

Second, we found that while emotions run high for people on both sides of this issue, comments from those who disagree with the ban tended to be more emotionally charged than those from people who agreed with it. I would add here that some of the former group tended not to distinguish between their feelings about President Trump and the policy.

We also discovered that supporters of the ban appear to be better informed about the specifics of the order than those who oppose it. In fact, a significant number of the former group in their responses took the time to explain why referring to the order as “a Muslim ban” is inaccurate and how this misconception clouds the issue.

Lastly, we found that both supporters and detractors are concerned about the order’s implementation.

Let me know what you think. I’d be happy to dig into this data a bit more. In addition, if anyone is curious and would like to do a follow-up analysis, please contact me to discuss the raw data file.

@TomHCAnderson

Ps. Stay tuned for Part II of this study, where we’ll explore what the rest of the world thinks about the order!

*Note: Responses (n=3,000) were collected via Google Surveys. Google Surveys allow researchers to reach a validated (U.S. General Population Representative) sample by intercepting people attempting to access high-quality online content—such as news, entertainment and reference sites—or who have downloaded the Google Opinion Rewards mobile app. These users answer up to 10 questions in exchange for access to the content or Google Play credit. Google provides additional respondent information across a variety of variables including source/publisher category, gender, age, geography, urban density, income, parental status, response time, as well as Google-calculated weighting. Results have a margin of error of +/- 1.79% at the 95% confidence level.

How Did The Media Get It So Wrong?

And why your research may be just as inaccurate... (An OdinText Text Analytics Poll™)

[Missed ARF’s post-election podcast? Watch the free video: Special Event Video Explores Election Poll Fail and Implications for Market Research]

I know many of you weren’t able to get into the ARF’s post-election podcast this week due to limited space, but now you can watch the video!

On Tuesday, I had the honor to participate along with a number of industry experts in a special ARF/Greenbook podcast event exploring why pollsters were unable to predict the outcome of the 2016 Presidential Election and the implications for those of us working in marketing and research.

The ARF has now released a free recording of the event complete with presentations and the panel session here.

 

Receive My Deck: New Insights Included!

I’ve previously blogged about research we conducted right before the election that indicated the Clinton campaign was in more trouble from a positioning standpoint than anyone realized.

My presentation took a deeper dive into those data with some additional insights I haven’t covered before and also analyzed some of the key weaknesses in conventional polling that may have accounted for the dramatic differences between projections and actual voter behavior.

You can watch my presentation in the video; if you would also like a copy of the deck, you can request it via this link (just type “please send me the Election Analysis” in the comment box).

I cannot emphasize enough that the reasons for the miscalculations we saw in the polls are not much different from those we routinely see in the commercial/consumer sphere. There are lessons to be learned for anyone whose job entails understanding and predicting consumer behavior.

I hope you’ll watch the video and let me know what you think.

Thanks!

@TomHCAnderson

Ps. To learn more about how OdinText can help you bridge the gap between research and actual in-market behavior, contact us for info or a free demo using your own data!


Tom H. C. Anderson

OdinText Inc. www.odintext.com

 

ABOUT ODINTEXT

OdinText is a patented SaaS (software-as-a-service) platform for advanced analytics. Fortune 500 companies such as Disney and Shell Oil use OdinText to mine insights from complex, unstructured text data. The technology is available through the venture-backed Stamford, CT firm of the same name founded by CEO Tom H. C. Anderson, a recognized authority and pioneer in the field of text analytics with more than two decades of experience in market research. Anderson is the recipient of numerous awards for innovation from industry associations such as ESOMAR, CASRO, the ARF and the American Marketing Association. He tweets under the handle @tomhcanderson.

What’s Really Wrong with Polling

What Can Researchers Learn From Yet Another Major Polling Fail? (Text Analytics Polling™)

Whatever your politics, I think you’ll agree that Tuesday’s election results were stunning. What is now being called an historic upset victory for Donald Trump apparently came as a complete shock to both of the campaigns, the media and, not least, the polling community.

The question everyone seems to be asking now is how could so many projections have been so far off the mark?

Some pretty savvy folks at Pew Research Center took a stab at some reasonable guesses on Wednesday—non-response bias, social desirability bias, etc.—all of which probably played a part, but I suspect there’s more to the story.

I believe the real problem lies with quantitative polling, itself. It just is not a good predictor of actual behavior.

Research Told Us Monday that Clinton Was In Trouble

On Monday I ran a blog post highlighting responses to what was inherently a question about the candidates’ respective positioning:

“Without looking, off the top of your mind, what issues does [insert candidate name] stand for?”

Interestingly, for both candidates, rather than naming a political issue or policy supported by the candidate, respondents frequently offered up a critical comment about his or her character (reflecting a deep-seated, negative emotional disposition toward that candidate). [See chart below]

[Chart: Top unaided responses—what Trump and Clinton stand for]

Our analysis strongly suggested that Hillary Clinton was in more trouble than any of the other polling data to that point indicated.

Why?

  1. The #1 most popular response for Hillary Clinton involved the perception of dishonesty/corruption.

  2. The #1 and #2 most popular responses for Donald Trump related to platform (immigration, followed by pro-USA/America First), followed thirdly by perceived racism/hatemongering.

Bear in mind, again, that these were unaided, top-of-mind responses to an open-ended question.

So for those keeping score, the most popular response for Clinton was an emotionally-charged character dig; the two most popular responses for Trump were related to political platform.

This suggested that not only was Trump’s campaign messaging “Make America Great Again” resonating better, but that of the two candidates, the negative emotional disposition toward Hillary Clinton was higher than for Trump.

Did We Make a Mistake?

What I did not mention in that blog post was that initially my colleagues and I suspected we might have made a mistake.

Essentially, what these responses were telling us didn’t jibe with any of the projections available from pollsters, with the possible exception of the highly respected Nate Silver, who was actually criticized for being too generous with Trump in weighting poll numbers up (about a 36% chance of winning—better than the 25% chance of flipping tails twice with a coin).

How could this be? Had we asked the wrong question? Was it the sample*?

Nope. The data were right. I just couldn’t believe everyone else could be so wrong.

So out of fear that I might look incompetent and/or just plain nuts, I decided to downplay what this data clearly showed.

I simply wrote, “This may prove problematic for the Clinton camp.”

The Real Problem with Polls

Well, I can’t say I told you so, because what I wrote was a colossal understatement; however, this experience has reinforced my conviction that conventional quantitative Likert-scale survey questions—the sort used in every poll—are generally not terrific predictors of actual behavior.

If I ask you a series of questions with a set of answers or a ratings scale I’m not likely to get a response that tells me anything useful.

We know that consumers (and, yes, voters) are generally not rational decision-makers; people rely on emotions and heuristics to make most of their decisions.

If I really want to understand what will drive actual behavior, the surest way to find out is by allowing you to tell me unaided, in your own words, off the top of your head.

“How important is price to you on a scale of 1-10?” is no more likely to predict actual behavior than “How important is honesty to you in a president on a scale of 1-10?”

It applies to cans of tuna and to presidents.

@TomHCAnderson

 

[*Note: N=3,000 responses were collected via Google Surveys 11/5-11/7 2016. Google Surveys allow researchers to reach a validated (U.S. General Population Representative) sample by intercepting people attempting to access high-quality online content—such as news, entertainment and reference sites—or who have downloaded the Google Opinion Rewards mobile app. These users answer up to 10 questions in exchange for access to the content or Google Play credit. Google provides additional respondent information across a variety of variables including source/publisher category, gender, age, geography, urban density, income, parental status, response time, as well as Google-calculated weighting. All 3,000 comments were then analyzed using OdinText to understand frequency of topics, emotions and key topic differences. Of the 65 total topics identified using OdinText, 19 were mentioned significantly more often for Clinton and 21 significantly more often for Trump. Results have a margin of error of +/- 2.51% at the 95% confidence level.]

 

Who Are You Voting Against?

Text Analysis Shows Dislike May Decide Presidential Election (A Text Analytics Poll™)

Exit pollsters today will ask thousands of Americans “Who did you vote for?” when they probably should be asking “Who did you vote against?”

A survey we just completed suggests that the outcome of the 2016 U.S. Presidential Election may hinge on which candidate is disliked more intensely by the other side.

One simple question, posed about each candidate in turn, produced such an unexpectedly visceral emotional reaction that one could reasonably conclude that, in many cases, a vote for either candidate may be primarily about preventing the other from being elected.

More than Just the Lesser of Two Evils

They’re both unpopular. We knew that already.

A slew of polls going back to the start of the general election—most recently one by Washington Post/ABC News—has repeatedly indicated that Hillary Clinton and Donald Trump are the two least popular candidates for U.S. president in the history of political polling.

What conventional, multiple-choice polling does not reveal, although it certainly supports this conclusion, is that apparently this election will not just be a matter of holding one’s nose and voting for the lesser of two evils.

Unaided responses to one open-ended question analyzed using OdinText suggest that what may drive many voters to cast their ballots for either candidate today is an intense distaste for the alternative.

People’s distaste for each candidate is so intense that when asked to tell us what Hillary Clinton or Donald Trump stands for, respondents often didn’t name a policy issue; they named a character flaw.

Top of Mind: The Crook and the Hatemonger

We took a general population sample* of 3,000 Americans via Google Surveys, split it in half randomly, and asked each half the same single question, substituting only the candidate’s name:

“Without looking, off the top of your mind, what issues does [insert candidate name] stand for?”

The comments—presumably the issues that are truly top of mind for people in this election—were analyzed with OdinText and are captured in the chart below.

[Chart: Top unaided responses—what Trump (red) and Clinton (blue) stand for]

You’ll note that for each candidate (red for Trump, blue for Clinton), respondents frequently offered a negative character perception instead of naming a political issue or policy supported by the candidate.

Indeed, the most popular response for Hillary Clinton involved the perception of dishonesty/corruption and the third most popular response for Donald Trump was perceived racism/hatemongering.

In both cases, the data tell us that people are unusually fixated on perceived problems they have with the candidates personally.

More Joy, Less Anger for Trump

Though the responses tended to be short and direct, a look at the words used to describe the candidates provides a pretty clear picture of the emotions associated with each candidate.

The OdinText visualization below shows the most striking emotional differences between Clinton and Trump around respondents’ levels of joy and anger. [See OdinText Emotions Plot Below, Trump Red, Clinton Blue]

[Chart: OdinText emotions plot—Trump (red) vs. Clinton (blue)]

While descriptions of both candidates exhibit a lot of anger, the proportion of anger in comments about Clinton is significantly higher (16.4% vs. 12.3% for Trump).

The higher level of joy identified in this analysis is partly due to Trump’s positive campaign slogan “Make America Great Again,” which had significantly higher recall among respondents than Clinton’s slogan “Stronger Together.” Thirty-three people in our sample specifically referenced the Trump slogan, while only one person referenced Clinton’s—a notable difference in percentage terms (2.2% vs. 0.07%, respectively).
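(A methods aside: with mentions this rare, the usual normal-approximation significance test is unreliable; Fisher’s exact test is the standard tool. A minimal sketch using SciPy on the counts reported above:)

```python
# With mentions this rare (33 vs. 1 out of ~1,500 per half-sample), a
# normal-approximation test is shaky; Fisher's exact test is the usual choice.
# Counts are taken from the figures reported above.
from scipy.stats import fisher_exact

table = [[33, 1500 - 33],   # Trump half-sample: slogan mentioned / not
         [1, 1500 - 1]]     # Clinton half-sample
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio ~ {odds_ratio:.0f}, p = {p_value:.1e}")  # p << 0.05 -> significant
```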

More Effective Messaging for Trump

In terms of actual issues identified by respondents, Clinton was most often associated with championing women and civil rights, while Trump was identified with immigration and a pro-America, protectionist platform.

Here one could argue that the Trump campaign has done a more effective job of establishing a signature issue for the candidate.

While neither campaign has done a significantly better job than the other of educating voters on its candidate’s policies (8.2% vs. 8.6% answering “I don’t know” for Trump and Clinton, respectively), it may be that the simple message of “Make America Great Again” has clearer meaning for people than Clinton’s “Stronger Together.”

Indeed, the top issue identified for Trump was immigration (12.8% vs. 2.3% for Clinton), while the number one issue for Clinton was the negative trait “corruption/lies” (12.5% vs. 1.4% for Trump).

This may prove problematic for the Clinton camp.

When voters don’t like their choices, they tend to stay home. If voter turnout is high today, it won’t be because people are unusually enthusiastic about the candidates; it will be because one of these candidates is so objectionable that people can’t in good conscience abstain from voting.

@TomHCAnderson

 

[*Note: N=3,000 responses were collected via Google Surveys 11/5-11/7 2016. Google Surveys allow researchers to reach a validated (U.S. General Population Representative) sample by intercepting people attempting to access high-quality online content—such as news, entertainment and reference sites—or who have downloaded the Google Opinion Rewards mobile app. These users answer up to 10 questions in exchange for access to the content or Google Play credit. Google provides additional respondent information across a variety of variables including source/publisher category, gender, age, geography, urban density, income, parental status, response time, as well as Google-calculated weighting. All 3,000 comments were then analyzed using OdinText to understand frequency of topics, emotions and key topic differences. Of the 65 total topics identified using OdinText, 19 were mentioned significantly more often for Clinton and 21 significantly more often for Trump. Results have a margin of error of +/- 2.51% at the 95% confidence level.]