Marketing Research Blooper Reveals Lots of Surprises and Two Important Lessons

Tom H. C. Anderson
April 1st, 2017

April Foolishness: What Happens When You Survey People in the Wrong Language?

I’m going to break with convention today and, in lieu of an April Fool’s gag, I’m going to tell you about an actual goof we recently made that yielded some unexpected but interesting results for researchers.

As you know, last week on the blog we highlighted findings from an international, multilingual Text Analytics Poll™ we conducted around culture. This particular poll spanned 10 countries and eight languages, and when we went to field it we accidentally sent the question to our U.S. sample in Portuguese!

Shockingly, in many cases, people STILL took the time to answer our question! How?

First, bear in mind that these Text Analytics Polls™ consist of only one question and it’s open-ended, not multiple choice. The methodology we use intercepts respondents online and requires them to type an answer to our question before they can proceed to content they’re interested in.

Under the circumstances, you might expect someone to simply type “n/a” or “don’t understand” or even some gibberish in order to move on quickly, and indeed we saw plenty of that. But in many cases, people took the time to thoughtfully point out the error, and even with wit.

Verbatim examples [sic]:

“Are you kidding me, an old american who can say ¡adios!”

“Tuesday they serve grilled cheese sandwiches.”
“What the heck is that language?”

“No habla espanol”

“i have no idea what that means”

“2 years of Spanish class and I still don’t understand”

Others expressed themselves more…colorfully…

“No, I don’t speak illegal immigrant.”

“Speak English! I’m switching to News 13 Orlando. They have better coverage than FT.”

Author’s note: I suspect that last quote was from someone who was intercepted while trying to access a Financial Times article. 😉

While a lot of people clearly assumed our question was written in Spanish, still others took the time to figure out what the language was and even to translate the question!

“I had to use google translate to understand the question.”

“what the heck does this mean i don’t speak Portuguese”

But what surprised me most was that a lot of Americans actually answered our question—i.e., they complied with what we had asked—even though it was written in Portuguese. And many of those replies were in Spanish!!!

We caught our mistake quickly enough: when we went to machine-translate the responses, the tool reported that replies to a question fielded in Portuguese were being translated from English to English. Still, two important lessons were learned here:

Takeaway One: Had we made this mistake with a multiple-choice instrument, we might not have caught it until after the analysis, or perhaps not at all. Respondents would have had no way to tell us we had made a mistake, and they would have had the easy option of just clicking a response at random. Unless those random clicks produced a conspicuous pattern in the data, we might well have taken the data as valid!
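As an aside for the technically inclined: a cheap automated sanity check on open-ended responses can surface a fielding error like ours early, by flagging replies whose apparent language doesn't match the language the question was served in. The sketch below is a toy illustration using tiny hand-picked stop-word lists (not what we actually ran, and not exhaustive); a production pipeline would use a proper language-identification library instead.

```python
# Toy language-mismatch check for open-ended survey responses.
# The stop-word lists are small illustrative samples, not real models.

STOPWORDS = {
    "en": {"the", "and", "i", "is", "what", "don't", "you", "this"},
    "pt": {"de", "que", "e", "o", "a", "não", "uma", "para"},
    "es": {"de", "que", "y", "el", "la", "no", "una", "para"},
}

def guess_language(text):
    """Return the language code whose stop-word list overlaps the text most."""
    words = set(text.lower().split())
    scores = {lang: len(words & sw) for lang, sw in STOPWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

def flag_mismatches(responses, expected_lang):
    """Return the responses that don't appear to be in the expected language."""
    return [r for r in responses if guess_language(r) != expected_lang]

# Responses to a question fielded in Portuguese ("pt"):
responses = [
    "i have no idea what that means",
    "what the heck does this mean i don't speak Portuguese",
]
print(flag_mismatches(responses, "pt"))  # both flagged as not Portuguese
```

Even a crude check like this, run on the first batch of completes, would have flagged our U.S. sample within minutes instead of at translation time.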

Takeaway Two: The notion that people will not take the time to thoughtfully respond to an open-ended question is total bunk. People not only took the time to answer our question in detail when it was correctly served to them in their own language, but they even spared a thought for us when they didn’t understand the language!

I want to emphasize here that if you're one of those researchers (and I used to be among this group, by the way) who thinks you can't include an open-ended question in a quantitative instrument, compel the respondent to answer it, and get a meaningful answer, you are not only mistaken but doing yourself and your client a huge disservice.

Take it from this April fool, open-ended questions not only tell you what you didn’t know; they tell you what you didn’t know you didn’t know.

Thanks for reading. I’d love to hear what you think!


P.S. Find out how much more value an open-ended question can add to your survey using OdinText. Contact us to talk about it.

About Tom H. C. Anderson

Tom H. C. Anderson is the founder and managing partner of OdinText, a venture-backed firm based in Stamford, CT whose patented SaaS platform is used by Fortune 500 companies like Disney, Coca-Cola and Shell Oil to mine insights from complex, unstructured and mixed data. A recognized authority and pioneer in the field of text analytics with more than two decades of experience in market research, Anderson is the recipient of numerous awards for innovation from industry associations such as CASRO, ESOMAR and the ARF. He was named one of the "Four under 40" market research leaders by the American Marketing Association in 2010. He tweets under the handle @tomhcanderson.

8 thoughts on “Marketing Research Blooper Reveals Lots of Surprises and Two Important Lessons”

  1. Now I’m just disappointed that my wife and daughter didn’t get the question, since they both are fluent in Portuguese. BTW was it real Portuguese, or that funny Brazilian type? 🙂

  2. It was specifically for Brazilian respondents. “How would you describe Brazilian culture…” (see last week’s post)

  3. Tom, here’s an interesting lesson I learned many years ago in England. I was doing a car clinic for Rover Cars at the time, and the head of security for Rover was an ex-policeman. He noticed a number of respondents that he thought were acting a little odd and finally chatted to one who was also an ex-cop. Being fraternal as cops can be, this respondent readily admitted he was part of a local ring of people recruited by local interviewers to have whatever car was needed for the quota of the day – we uncovered extensive cheating, in other words. Once this was clear, we offered to re-do the entire event with fresh (accurate this time) sample, and did; it cost us a fortune. The results? Not statistically different from the ‘bent’ sample at all! Kind of disappointing, in a way.

  4. I’ve always considered the Iridium Subscriber Trials project both my greatest success and my most spectacular failure. In essence, this project was a US$7 billion qualitative. However, the lessons learned were applied to the launch of IBM products on many occasions. I’ll never forget pitching that opportunity to NFO senior execs. One unnamed exec said to me, “I’m not sure what you bring to this project.” I said, “The Client.” The wild ride started that day in the boardroom at 2 Pickwick Plaza and ended for me when I joined TNS from NFO. The original Iridium failed, and the assets were acquired. I think some of those satellites still orbit the earth.

  5. Quite apart from this “ooops” language glitch (and thanks, Tom, for sharing it so openly), I thoroughly agree about using open-end questions judiciously within a “quantitative” survey. Indeed, I do it all the time, pre- and post- the Internet era, and have never really given it much contrary thought. All things can be badly or poorly done, and there’s then a price to pay, to be sure. But done well and right, open enders are great and I’m wholeheartedly on board with your larger points about them. Keep up the good work.

  6. @Jill Good for you! I remember and worked on the analysis of that project 😉 It was pretty cool, Iridium VS Globalstar. I believe they’re still used for US military when calling home to family.

  7. @Bart One of the more interesting posts on this blog last year, IMHO, was the survey we ran among marketing researchers asking how they used unstructured/open-ended questions. It was quite eye-opening for me.
    Researchers are quickly beginning to realize that open ends are not only AS important as their structured questions, but actually more important, and are beginning to allocate resources accordingly. The real benefit is the incredibly deep (and very predictive) insights you can get from them. A side benefit is that they are usually also the only way of knowing that something has gone wrong. Had this been a Likert scale, chances are no one would have noticed the gross error!

  8. I sent out a mail survey to 35,000 owners of a specific car asking about their ownership experience. The problem was I didn’t identify what car we were surveying. Got a few responses back saying “I assume you mean my XXXXX.”

    Feeling horrible, I offered to pay for the mistake. It turned out to be 50% of my annual salary, which the owner did not ask me to cover. Another “never again will I make that mistake” moment.

Comments are closed.