Posts tagged content analysis
A New Trend in Qualitative Research

Almost Half of Market Researchers are Doing Market Research Wrong! - My Interview with the QRCA (And a Quiet New Trend - Science-Based Qualitative).

Two years ago I shared some research-on-research about how market researchers view Quantitative and Qualitative research. I stated that almost half of researchers don't understand what good data is. Some 'Quallies' tend to work almost exclusively with comment data from extremely small samples (about 25% of market researchers surveyed). Conversely, there is a large group of 'Quant Jockeys' who, while working with larger, more representative sample sizes, purposefully avoid any unstructured data such as open-ended comments, either because they don't want to deal with coding and analyzing it or because they don't believe in its accuracy and ability to add to the research objectives. In my opinion both researcher groups have it totally wrong, and both are doing a tremendous disservice to their companies and clients. Today, I'll be focusing on just the first group above, those who tend to rely primarily on qualitative research for decisions.

Note that today's blog post is related to a recent interview I was asked to take part in by the QRCA's (Qualitative Research Consultants Association) Views Magazine. When they contacted me, I told them that in most cases (with some exceptions) text analytics really isn't a good fit for qualitative researchers, and asked if they were sure they wanted to include someone with that opinion in their magazine. I was told that yes, they were OK with sharing different viewpoints.

I’ll share a link to the full interview in the online version of the magazine at the bottom of this post. But before that, a few thoughts to explain my issues with qualitative data and how it’s often applied as well as some of my recent experiences with qualitative researchers licensing our text analytics software, OdinText.

The Problem with Qualitative Research

IF Qual research were really used the way it's often positioned, 'as a way to inform Quant research', that would be fine. The fact of the matter, though, is that Qual often isn't being used that way, but instead as an end in and of itself. Let me explain.

First, there is one exception to this rule of only using Qual as pilot feedback for Quant. If you had a product that was made specifically and only for US state governors, for instance, then your total population is only N=50. Of course, it is highly unlikely that you would ever get the governors of each and every US state to participate in any research (that would be a census of all governors), so if you were fortunate enough to have a group of, say, 5 governors who were willing to give you feedback on your product or service, you would and should obviously hang on to and pore over every single comment they gave you.

IF, however, you have even a slightly more common mainstream product (I'll take a very common product like hamburgers as an example), and you are relying on 5-10 focus groups of n=12 to determine how different parts of the USA (North East, Mid-West, South and West) like their burgers, and rather than feeding the results directly into some quantitative research instrument with a greater sample you issue a 'Report' that you share with management, well, then you've probably just wasted a lot of time and money on some extremely inaccurate and dangerous findings. Yet surprisingly, this happens far more often than one would imagine.

Cognitive Dissonance Among Qual Researchers when Using OdinText

How do I know this, you may ask? Good text analytics software is really about data mining and pattern recognition. When I first launched OdinText, we had a lot of inquiries from qualitative researchers who wanted some way to make their lives easier. After all, they had "a lot" of unstructured text comment data that was time consuming for them to process, read, organize and analyze. Certainly, software made to "Analyze Text" must therefore be the answer to their problems.

The problem was that the majority of Qual researchers work with tiny projects/samples: interviews and groups between n=1 and n=12. Even if they do a couple of groups, as in the hamburger example I gave above, we're still talking about a total of just around n=100 representing four or more regional groups of interest, and therefore fewer than n=25 per group. It is impossible to get meaningful, statistically comparable findings and identify real patterns between the key groups of interest in this case.

The Little-Noticed Trend in Qual (Qual Data is Getting Bigger)

However, slowly over the past couple of years, for the first time I've seen a movement of some 'Qualitative' shops and researchers toward Quant. They have started working with larger data sets than before. In some cases it has been because they have been pulled in to manage larger ongoing communities/boards, in some cases larger social media projects, and in others they have started using survey data mixed with qual or, even better, employing qualitative techniques in quant research (think better open-ends in survey research).

For this reason, we now have a small but growing group of ‘former’ Qual researchers using OdinText. These researchers aren’t our typical mixed data or quantitative researchers, but qualitative researchers that are working with larger samples.

And guess what: "Qualitative" has nothing to do with whether data is in text or numeric format; instead it has everything to do with sample size. And so, perhaps unknowingly, these 'Qualitative Researchers' have taken the step across the line into Quantitative territory, where, often for the first time in their careers, statistics can actually be used. And it can be shocking!

My Experience with ‘Qualitative’ Researchers going Quant/using Text Analytics

Let me explain what I mean. Recently several researchers who come from a clear 'Qual' background have become users of our software, OdinText. The reason is that the amount of data they had was quickly getting "bigger than they were able to handle". They believe they are still dealing with "Qualitative" data because most of it is text based, but actually, because of the volume, they are now Quant researchers whether they know it or not (text vs. numeric format is irrelevant).

Ironically, for this reason, we also see much smaller data sizes/projects than ever before being uploaded to the OdinText servers. No, not typically single focus groups with n=12 respondents, but still projects that are often right on the line between quant and qual (n=100+).

The discussions we’re having with these researchers as they begin to understand the quantitative implications of what they have been doing for years are interesting.

Let me preface this with the fact that I have a great amount of respect for the ‘Qualitative’ researchers that begin using OdinText. Ironically, the simple fact that we have mutually determined that an OdinText license is appropriate for them means that they are no longer ‘Qualitative’ researchers (as I explained earlier). They are in fact crossing the line into Quant territory, often for the first time in their careers.

The data may be primarily text based, though usually mixed, but there's no doubt in their minds or ours that one of the most valuable aspects of the data is the customer commentary in the text, and this can be a strength.

The challenge lies in getting them to quickly accept and come to terms with quantitative/statistical analysis, and thereby also the importance of sample size.

What do you mean my sample is too small?

When you have licensed OdinText, you can upload pretty much any data set you have. So even though a team may have initially licensed OdinText to analyze projects with, say, 3,000+ comments, there's nothing to stop them from uploading that survey or set of focus groups with just n=150 or so.

Here’s where it sometimes gets interesting. A sample size of n=150 is right on the borderline. It depends on what you are trying to do with it of course. If half of your respondents are doctors (n=75) and half are nurses (n=75), then you may indeed be able to see some meaningful differences between these two groups in your data.

But what if these n=150 respondents are hamburger customers, and your objective was to understand the difference between the 4 US regions in the hamburger example I referenced earlier? Then you have about n=37 in each subgroup of interest, and you are likely to find very few, IF ANY, meaningful patterns or differences.
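
To put numbers on this, here is a quick back-of-the-envelope sketch (in Python, though any stats calculator will do) of the margin of error around a proportion at the subgroup sizes we've been discussing:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of a 95% confidence interval for an observed proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Subgroup sizes from the examples above
for label, n in [("doctors or nurses", 75), ("one US region", 37)]:
    print(f"{label}: n={n}, margin of error = +/-{margin_of_error(n):.1%}")

# doctors or nurses: n=75, margin of error = +/-11.3%
# one US region: n=37, margin of error = +/-16.1%
# Comparing two regions of n=37 each, an observed gap needs to be roughly
# 23 percentage points before it is clearly more than noise.
```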

Here’s where that cognitive dissonance can happen --- and the breakthroughs if we are lucky.

A former 'Qual Researcher' who has spent the last 15 years of their career making 'management level recommendations' on how to market burgers differently in different regions based on data like this, and who for the first time is looking at software which says there are maybe just two or three small differences, or even worse, NO MEANINGFUL PATTERNS OR DIFFERENCES WHATSOEVER in their data, may be in shock!

How can this be? They've analyzed data like this many times before, and they were always able to write a good report with lots of rich, detailed examples of how North Eastern hamburger consumers preferred this or that because of this and that. And here we are, looking at the same kind of data, and we realize there is very little here other than completely subjective thoughts and quotes.

Opportunity for Change

This is where, to their credit, most of our users start to understand the quantitative nature of data analysis. Unlike the few 'Quant-Only Jockeys' I referenced at the beginning of the article, they already understand that many of the best insights come from text data: free-form, unaided, non-leading, yet creative questions.

They only need to start thinking about their sample sizes before fielding a project; to understand the quantitative nature of sampling; to think about the handful of structured data points that they perhaps hadn't thought much about in previous projects, and how those can be leveraged together with the unstructured data. They realize they need to think about all of this first, before the data has all been collected and the project is nearly over and ready for the most important step, the analysis, where the rubber hits the road and garbage in really does mean garbage out.

If we're lucky, they quickly understand it's not about Quant and Qual anymore. It's about Mixed Data; it's about having the right data; it's about having enough data to generate robust findings and then superior insights!

Final Thoughts on the Two Nearly Meaningless Terms 'Quant' and 'Qual'

As I've said many times before, here and on the NGMR blog, the terms "Qualitative" and "Quantitative", at least the way they are commonly used in marketing research, are already passé.

The future is Mixed Data. I’ve known this to be true for years, and almost all our patent claims involve this important concept. Our research shows time and time again, that when we use both structured and unstructured data in our analysis, models and predictions, the results are far more accurate.

For this reason we've been hard at work developing the first ever truly Mixed Data Analytics Platform. We'll be officially launching it three months from now, but many of our current customers already have access. [For those who are interested in learning more or would like early access, you can inquire here: OdinText.com/Predict-What-Matters].

In the meantime, if you're wondering whether you have enough data to warrant advanced mixed data and text analysis, check out the online version of the article in QRCA Views magazine here. Robin Wedewer at QRCA really did an excellent job of asking pointed questions that forced me to answer more honestly and clearly than I might otherwise have.

I realize not everyone will agree with today's post or my interview with QRCA, and I welcome your comments here. I just ask, please, that you read both the post above and the interview in QRCA before commenting solely based on the title of this post.

Thank you for reading. As always, I welcome questions publicly in the comments below or privately via LinkedIn or our Inquiry form.

@TomHCAnderson

Share Your Text Analytics Success with us at The Sentiment Analysis Symposium

Emotion—Influence—Activation: Call for Speakers, 2017 Sentiment Analysis Symposium

I'm writing today to OdinText users as well as other fellow practitioners, especially those on the client side.

I'm working with Seth Grimes, Chairman of the Sentiment Analysis Symposium, to get the call out for speakers, as well as panelists for an interesting and interactive discussion at the event this summer.

OdinText has been a long-time supporter of the event, which this year takes place June 27-28 in New York. The Sentiment Analysis Symposium tackles the business value of sentiment, opinion, and emotion in our big data world.

Emotion is one of the keys to customer (and patient, voter, and market) understanding. The symposium is _the_ place to stay current with the technologies and their research and insights applications. Please join us in June, as either an attendee or presenter...

The key to a great conference is great speakers. Whether you're a business visionary, experienced user, technologist, or consultant, please consider presenting. You may submit your proposal here. Choose from among the suggested topics or surprise us. Help us build on our track record of bringing attendees useful, informative technical and business content (along with excellent networking opportunities). Submit by January 31 if possible.

We're inviting talks that focus on customer experience, brand strategy, market research, media & publishing, social insights, healthcare, and financial markets. On the tech side, show off what you know about natural language processing, machine learning, speech and emotion AI, and the data economy.

Please help us create another great symposium! I look forward to seeing you at the event. Feel free to reach out if you have any questions. @TomHCAnderson

About Tom H. C. Anderson

Tom H. C. Anderson is the founder and managing partner of OdinText, a venture-backed firm based in Stamford, CT, whose eponymous, patented SaaS platform is used by Fortune 500 companies like Disney, Coca-Cola and Shell Oil to mine insights from complex, unstructured and mixed data. A recognized authority and pioneer in the field of text analytics with more than two decades of experience in market research, Anderson is the recipient of numerous awards for innovation from industry associations such as CASRO, ESOMAR and the ARF. He was named one of the "Four under 40" market research leaders by the American Marketing Association in 2010. He tweets under the handle @tomhcanderson.

Preventing Customer Churn with Text Analytics

3 Ways You Can Improve Your Lost Customer Analysis


Lapsed Customers, Customer Churn, Customer Attrition, Customer Defection, Lost Customers, Non-Renewals: whatever you call them, this kind of customer research is becoming more relevant everywhere, and we are seeing more and more companies turning to text analytics in order to better answer how to retain more customers longer. Why are they turning to text analytics? Because no structured survey data predicts customer behavior as well as actual voice-of-customer text comments!

Today’s post will highlight 3 mistakes we often see being made in this kind of research.

1. Most Customer Loss/Churn Analysis is done on the customers who leave, in isolation from customers who stay. Understandable, since it would make little sense to ask a customer who is still with you a survey question such as "Why have you stopped buying from us?" But customer churn analysis can be so much more powerful if you are able to compare customers who are still with you to those who have left. There are a couple of ways to do this:

  • Whether or not you conduct a separate lapsed customer survey among those who are no longer purchasing, also consider doing a separate post-hoc analysis of your customer satisfaction survey data. It doesn't have to be current. Just take a time period of, say, the last 6-9 months and analyze the comment data from those customers who have left vs. those who are still with you (see the sketch after this list). What did the two groups say differently just before the lapsed customers left? Can these results be used to predict who is likely to churn ahead of time? The answer is very likely yes, and in many cases you can do something about it!
  • Whenever possible text questions should be asked of all customers, not just a subgroup such as the leavers. Here sampling as well as how you ask the questions both come into play.
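
Here is a minimal sketch of that post-hoc comparison using generic Python tooling rather than OdinText itself; the file name and the comment/churned columns are hypothetical stand-ins for whatever your satisfaction survey export contains:

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

df = pd.read_csv("csat_survey.csv")  # hypothetical export with a churn flag

vec = CountVectorizer(stop_words="english", min_df=5)
vec.fit(df["comment"])

def mention_rate(texts):
    """Share of comments mentioning each term at least once."""
    hits = (vec.transform(texts) > 0).mean(axis=0)
    return pd.Series(hits.A1, index=vec.get_feature_names_out())

# Contrast the language of customers who later left vs. those who stayed
gap = (mention_rate(df.loc[df["churned"] == 1, "comment"])
       - mention_rate(df.loc[df["churned"] == 0, "comment"])).sort_values()
print("Said more by lapsed customers:", gap.tail(10), sep="\n")
print("Said more by retained customers:", gap.head(10), sep="\n")
```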

Consider expanding your sampling frame to include not just customers who are no longer purchasing from you, but also customers who are still purchasing from you (especially those who are purchasing more) as well as those still purchasing, but purchasing less. What you really want to understand, after all, is what is driving purchasing – who gives a damn if they claim they are more or less likely to recommend you? Promoter and detractor analysis is overhyped!

Reducing Customer Churn

You may also consider casting an even wider sampling net than just past and current customers. Why not use a panel sample provider and try to include some competitors' customers as well? You will need to draw the line somewhere for scope and budget, but you get the idea. The survey should be short and concise and should have the text questions up front, starting very broad (top of mind, unaided) and then probing.

Begin with a question such as "Q. How, if at all, has your purchasing of Category X changed over the last couple of months?" and/or "Q. You indicated your purchasing of Category X has changed. Why? (Please be as specific as possible)". Or perhaps even better: "Q. How, if at all, has your purchasing of Category X changed over the past couple of months? If it has not changed, please also explain why not. (Please be as specific as possible)". As you can see, almost anyone can answer these questions no matter how much or how little they have purchased. This is exactly what is needed for predictive text analytics! Having only leavers' data will be insufficient!

2. Include other structured (real behavior) data in the analysis. Some researchers analyze their survey data in isolation. Mixed data usually adds predictive power, especially if it's real behavior data from your CRM database and not just stated/recalled behavior from your survey. In either case, the key to unlocking meaning and predictability is likely to come from the unstructured comment data. Nothing else can do a better job explaining what happened.
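
To illustrate point 2, here is a sketch of a mixed-data churn model; the column names (comment, orders_90d, tenure_mo, churned) are invented for the example, and this generic scikit-learn pipeline is a stand-in for, not a description of, how OdinText does it:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

df = pd.read_csv("customers.csv")  # hypothetical CRM + survey join
X, y = df.drop(columns=["churned"]), df["churned"]

features = ColumnTransformer([
    ("text", TfidfVectorizer(min_df=5), "comment"),            # unstructured
    ("behavior", "passthrough", ["orders_90d", "tenure_mo"]),  # structured
])
model = Pipeline([("features", features),
                  ("clf", LogisticRegression(max_iter=1000))])

# Benchmark against a behavior-only model to see how much the text adds.
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```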

3. PLEASE, PLEASE resist the urge to start your leaver survey with a structured question asking a battery of "check all that apply" reasons for leaving/shopping less. Your various pre-defined reasons, even if you include an "Other Specify_____", will have several negative effects on your data quality.

First, customers will often forget their primary reason for their change in purchase frequency and incorrectly assume that you are most interested in the reasons you have pre-identified. Second, there will be no way for you to tell which of the several reasons they check is truly the most important to them. Third, some customers will repeat themselves in the other-specify box, while others will decide not to answer it at all since they already checked so many of your boxes. Either way, you've just destroyed the best chance you had of accurately understanding why your customers' purchasing has changed!

There are many other ways to improve your insights in lapsed customer survey research by asking fewer yet better comment questions in the right order. I hope the above tips have given you some things to consider. We're happy to give you additional tips if you like, and we often find that as customers begin using OdinText, their use of survey data, both structured and unstructured, improves greatly along with their understanding of their customers.

@TomHCanderson

Beyond Sentiment - What Are Emotions, and Why Are They Useful to Analyze?
Text Analytics Tips - Branding

Text Analytics Tips by Gosia

Emotions - Revealing What Really Matters

Emotions are short-term intensive and subjective feelings directed at something or someone (e.g., fear, joy, sadness). They are different from moods, which last longer, but can be based on the same general feelings of fear, joy, or sadness.

3 Components of Emotion: Emotions result from arousal of the nervous system and consist of three components: subjective feeling (e.g., being scared), physiological response (e.g., a pounding heart), and behavioral response (e.g., screaming). Understanding human emotions is key in any area of research because emotions are one of the primary causes of behavior.

Moreover, emotions tend to reveal what really matters to people. Therefore, tracking primary emotions conveyed in text can have powerful marketing implications.

The Emotion Wheel - 8 Primary Emotions

OdinText can analyze virtually any psychological content in text, but primary attention has been paid to the power of the emotions conveyed in text.

8 Primary Emotions: OdinText tracks the following eight primary emotions: joy, trust, fear, surprise, sadness, disgust, anger, and anticipation (see attached figure; primary emotions in bold).

Sentiment Analysis

Bipolar Nature: These primary emotions have a bipolar nature; joy is opposed to sadness, trust to disgust, fear to anger, and surprise to anticipation. Emotions in the blank spaces are mixtures of the two neighboring primary emotions.

Intensity: The color intensity dimension suggests that each primary emotion can vary in intensity, with darker hues representing a stronger emotion (e.g., terror > fear) and lighter hues representing a weaker emotion (e.g., apprehension < fear). The analogy between the theory of emotions and the theory of color was adopted from the seminal work of Robert Plutchik in the 1980s. [All 32 emotions presented in the figure above are a basis for OdinText's Emotional Sentiment tracking metric.]
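
For readers who like to see the wheel as data: the eight primaries, their polar opposites, and their stronger/weaker variants from Plutchik's published model can be written down as a simple lookup table, the kind of structure a dictionary-based emotion tagger can be built around (a sketch, not OdinText's actual implementation):

```python
# Plutchik's wheel: primary emotion -> (opposite, stronger variant, weaker variant)
PLUTCHIK = {
    "joy":          ("sadness",      "ecstasy",    "serenity"),
    "trust":        ("disgust",      "admiration", "acceptance"),
    "fear":         ("anger",        "terror",     "apprehension"),
    "surprise":     ("anticipation", "amazement",  "distraction"),
    "sadness":      ("joy",          "grief",      "pensiveness"),
    "disgust":      ("trust",        "loathing",   "boredom"),
    "anger":        ("fear",         "rage",       "annoyance"),
    "anticipation": ("surprise",     "vigilance",  "interest"),
}

opposite, stronger, weaker = PLUTCHIK["fear"]
print(f"fear: opposite={opposite}, stronger={stronger}, weaker={weaker}")
```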

Stay tuned for more tips giving details on each of the above emotions.

Gosia

Text Analytics Tips with Gosia

[NOTE: Gosia is a Data Scientist at OdinText Inc. Experienced in text mining and predictive analytics, she is a Ph.D. with extensive research experience in mass media's influence on cognition, emotions, and behavior.]

Text analysis answers: Is the Quran really more violent than the Bible?

by Tom H. C. Anderson

Part I: The Project

With the proliferation of terrorism connected to Islamic fundamentalism in the late-20th and early 21st centuries, the question of whether or not there is something inherently violent about Islam has become the subject of intense and widespread debate.

Even before 9/11—notably with the publication of Samuel P. Huntington's "Clash of Civilizations" in 1996—pundits have argued that Islam incites followers to violence on a level that sets it apart from the world's other major religions.

The November 2015 Paris attacks and the politicking of a U.S. presidential election year—particularly candidate Donald Trump's call for a ban on Muslims entering the country and President Obama's response in the State of the Union address last week—have reanimated the dispute in the mainstream media, and proponents and detractors alike have marshalled "experts" to validate their positions.

To understand a religion, it’s only logical to begin by examining its literature. And indeed, extensive studies in a variety of academic disciplines are routinely conducted to scrutinize and compare the texts of the world’s great religions.

We thought it would be interesting to bring to bear the sophisticated data mining technology available today through natural language processing and unstructured text analytics to objectively assess the content of these books at the surface level.

So, we’ve conducted a shallow but wide comparative analysis using OdinText to determine with as little bias as possible whether the Quran is really more violent than its Judeo-Christian counterparts.

A few words of caution…

Due to the sensitive nature of this subject, I must emphasize that this analysis is by no means exhaustive, nor is it intended to advance any agenda or to conclusively prove anyone’s point.

The topic and data sources selected for this project constitute a significant departure from the consumer intelligence use cases for which clients typically turn to text analytics, so we thought this would be an interesting opportunity to demonstrate how this tool can be much more broadly applied to address questions and issues outside the realm of market research and business intelligence.

Again, this is only a cursory analysis. I believe there is more than one Ph.D. thesis awaiting students of theology, literature or political science who want to take a much deeper dive into this data.

About the “Data” Sources

First off, it seemed sensible and appropriate to analyze the Old and New Testaments separately. (The Jewish Torah makes up the first five books of the Christian Old Testament, of course, while the New Testament is unique to Christianity.)

We decided to split them for analysis for a couple of reasons: 1) they were written hundreds of years apart, and 2) their combined size is large relative to the Quran.

Though all data (Old Testament, New Testament and Quran) were combined and read into OdinText as a single file, the Old Testament is the largest with over 23K verses and about 623K words, followed by the New Testament with just under 8K verses and 185K words, and then the Quran with just over 6K verses and less than 78K words.
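
If you want to reproduce the size comparison yourself with generic tools, a few lines of Python suffice; the file names are hypothetical, assuming plain-text exports with one verse per line:

```python
# Hypothetical file names; any one-verse-per-line plain-text export will do.
for name in ["old_testament.txt", "new_testament.txt", "quran.txt"]:
    with open(name, encoding="utf-8") as f:
        verses = [line.strip() for line in f if line.strip()]
    words = sum(len(v.split()) for v in verses)
    print(f"{name}: {len(verses):,} verses, {words:,} words")
```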

Secondly, there are obviously multiple versions and translations of the texts available for study. We’ve selected the ones that were most accessible and best suited for this kind of analysis.

With regard to the Christian Bible, instead of the King James version, we opted to use the New International Version (NIV) because the somewhat updated language should be easier to work with.

In selecting an English translation of the Quran, we considered the Tafsir-ul-Quran (1957) by the Indian scholar Abdul Majid Daryabad, but decided to go with The Holy Qur'an (1917, 4th rev. ed. 1951) by Maulana Muhammad Ali because this version is more widely used and the data are more easily accessed.

We do not believe the texts in either of these choices differ materially for our purposes.

Approach: A ‘Top-Down/Bottom-Up’ Inquiry

We recommend, and OdinText employs, a 'Top-Down/Bottom-Up' approach to text analysis.

This means that identification of issues for investigation will be partly a priori or ‘Top-Down’ (i.e. the analyst determines specific topic areas to explore such as “violence”).

But there will also be a data-driven or ‘Bottom-Up’ aspect in which the software helps to identify topics or areas that may not have occurred to the analyst, but which could be important given the data.

For example…

OdinText looks for sentiments and emotions in the data as soon as it has been uploaded to our servers; however, as this particular data set is rather unique, certain custom dictionary definitions—what we refer to as “issues”—will also need to be created through the Top-Down/Bottom-Up approach.

One simple and unbiased way to do this is to allow the process by which these definitions are created to be as data-driven as possible. There are several ways to look to the data for information. For instance, we might start by looking at the top words mentioned in each source to understand what concepts cut across our data, and how they might be defined. (See figure 1)

[Figure 1: 3-way text analytics comparison of top words in each source]

In this way, an overarching concept for comparison in each of the three sources can then be developed. For instance, a concept like “God” would need to include all common terms for this concept in each text source.

We can name such a concept something like "God All Inclusive," allowing all common definitions/terms for God in each of the texts to be picked up under this concept.

Accordingly, “God All Inclusive” would include any mention of “Lord” (28%) or “God” (11%) in the Old Testament, as well as any mentions of “Jesus” (17%), “God” (16%), “Lord” (8%) or “Christ” (7%) in the New Testament, and any mentions of “Allah” (30%) or “Lord” (14%) in the Quran.
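
In code, such a concept definition is just a per-source term list plus whole-word matching. The sketch below uses the term lists from the percentages above; OdinText's own dictionary handling is richer, so treat this as the bare idea:

```python
import re

# "God All Inclusive": common terms for the concept in each source
GOD_ALL_INCLUSIVE = {
    "old_testament": ["lord", "god"],
    "new_testament": ["jesus", "god", "lord", "christ"],
    "quran":         ["allah", "lord"],
}

def concept_rate(verses, terms):
    """Share of verses mentioning at least one term in the concept."""
    pattern = re.compile(r"\b(?:" + "|".join(map(re.escape, terms)) + r")\b",
                         re.IGNORECASE)
    return sum(bool(pattern.search(v)) for v in verses) / len(verses)
```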

As mentioned earlier, in order to keep this analysis as unbiased as possible (and in order to do it as quickly as possible), we will also rely on OdinText's built-in functionality to understand broader concepts such as positive and negative sentiment, as well as other psychological constructs and emotion in text. In other words, when we look at positive and negative emotion, we will be using this broad-based metric across the three texts without any customization at all.
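
OdinText's built-in emotion and sentiment metrics are proprietary, but the idea of an uncustomized, broad-based scorer can be illustrated with a generic lexicon-based tool such as NLTK's VADER (a stand-in for illustration, not the metric used in this analysis):

```python
# Requires: pip install nltk, then nltk.download("vader_lexicon") once.
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()

def mean_compound(verses):
    """Average compound polarity across verses, -1 (negative) to +1 (positive)."""
    return sum(sia.polarity_scores(v)["compound"] for v in verses) / len(verses)
```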

Now that I’ve laid the groundwork for this project, please join me tomorrow as we take a look at the initial results!

P.S. Considering many people take at least a year to read just one of these texts, you may find it interesting that it took OdinText less than 120 seconds to read, parse and analyze all three texts at once!

 

Up Next: Part II – One of these texts is angrier!

 

Text Analytics Tips

Text Analytics Tips, with your Hosts Tom & Gosia: Introductory Post

Today, we're blogging to let you know about a new series of posts starting in January 2016 called 'Text Analytics Tips'. This will be an ongoing series, and our main goal is to help marketers understand text analytics better.

We realize Text Analytics is a subject with incredibly high awareness, yet sadly also a subject with many misconceptions.

The first generation of text analytics vendors overhyped the importance of sentiment as a tool, as well as 'social media' as a data source, often preferring the even vaguer term 'Big Data' (usually just referring to tweets). They offered no evidence of the value of either, and they usually ignored the much richer techniques and sources of data for text analysis. Little to no information or training was offered on how to actually gain useful insights via text analytics.

What are some of the biggest misconceptions in text analytics?

  1. “Text Analytics is Qualitative Research”

FALSE – Text Analytics IS NOT qualitative. Text Analytics = Text Mining = Data Mining = Pattern Recognition = Math/Stats/Quant Research

  2. It's Automatic (artificial intelligence), you just press a button and look at the report / wordcloud

FALSE – Text Analytics is a powerful technique made possible thanks to tremendous processing power. It can be easy if you are using the right tool, but just like any other powerful analytical tool, it is limited by the quality of your data and the resourcefulness and skill of the analyst.

  3. Text Analytics is a Luxury (i.e. structured data analysis is of primary importance and unstructured data is an extra)

FALSE – Nothing could be further from the truth. In our experience, usually when there is text data available, it almost always outperforms standard available quant data in terms of explaining and/or predicting the outcome of interest!

There are several other text analytics misconceptions of course and we hope to cover many of them as well.

While various OdinText employees and clients may be posting in the ‘Text Analytics Tips’ series over time, Senior Data Scientist, Gosia, and our Founder, Tom, have volunteered to post on a more regular basis…well, not so much volunteered as drawing the shortest straw (our developers made it clear that “Engineers don’t do blog posts!”).

Kidding aside, we really value education at OdinText, and it is our goal to make sure OdinText users become proficient in text analytics.

Though Text Analytics, and OdinText in particular, are very powerful tools, we will aim to keep these posts light and fun, yet interesting and insightful. If you've just started using OdinText or are interested in applied text analytics in general, these posts will certainly be a good start for you.

During this long-running series we'll be posting tips, interviews, and various fun short analyses. Please come back in January for our first post, which will deal with the analysis of a very simple unstructured survey question.

Of course, if you’re interested in more info on OdinText, no need to wait, just fill out our short Request Info form.

Happy New Year!

Your friends @OdinText


[NOTE: Tom is Founder and CEO of OdinText Inc. A long-time champion of text mining, in 2005 he founded Anderson Analytics LLC, the first consumer insights/marketing research consultancy focused on text analytics. He is a frequent speaker and data science guest lecturer at university and research industry events.

Gosia is a Senior Data Scientist at OdinText Inc. A Ph.D. with extensive experience in content analytics, especially psychological content analysis (i.e., sentiment analysis and emotion in text), as well as predictive analytics using unstructured data, she is fluent in German, Polish and Spanish.]