Are We Over Collecting? Qualitative Data Overload, aka ‘Little Big Data’

A hot topic in the market research industry has been Big Data. Online technologies have made it easier than ever to collect huge amounts of qualitative data, but that presents a challenge when it comes to analysis… especially quick analysis.

Big Qualitative Data

The meme Big Data has rapidly taken hold over the past few years. Big Data refers to all the digital information from social media, connected devices, company CRMs, Enterprise Feedback Management systems, analytics systems, and on and on. The amount of data generated every day is estimated to be over 3 exabytes! That’s an immense amount of data.

Qualitative researchers have a different big data challenge when it comes to online and mobile qual. One of the great things about online Immersive Research is the ability to quickly capture a large amount of expression from engaged participants. However, this also presents a challenge for analysis. For example, in a recent project, seventy participants produced 400 pages of text and over 1,100 images in five days of activities.

While the size of all this data can be measured in mere gigabytes rather than exabytes (an exabyte is a billion gigabytes), to a qualitative researcher it is a huge mountain of in-depth information. We have a phrase for this phenomenon here at Revelation: we call it “Little Big Data.”

Faced with Little Big Data, qual researchers have understandably looked to technology, especially text analytics, for help. However, I have talked to researchers who have used various text analytics tools (some integrated into qualitative analysis packages, some fully dedicated text analysis applications), and quite a few reported being less than satisfied. The effort to set up and tune the tools to be effective was often greater than the perceived benefit. At the end of the day, many of the researchers I talked with felt they weren’t much better off than if they had printed out transcripts and reached for their trusty highlighters.

The problem with all of these tools isn’t the quality of the tools in their own right; it’s that moving data out of an online qualitative system into a text analysis system takes much more effort. Think about it: a well-designed online qualitative research app knows who the participants in the study are, which segment each belongs to, and which questions they have answered. Exporting everything out of an online qual app and importing it into another analysis tool means a researcher has to reassemble the context of the entire study before the value of the analysis tool is realized.
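To make the point concrete, here is a minimal sketch in Python (with hypothetical field names, not any particular platform’s actual export format) of the context an online qual app carries around for free, and that a researcher has to rebuild by hand once the text leaves the system:

```python
# A minimal sketch (hypothetical field names, not an actual export format) of
# the study context an online qual platform already knows about each response.
import csv

responses = [
    {"participant": "P07", "segment": "Frequent buyers",
     "question": "Q3: Describe your last purchase", "text": "I bought..."},
    {"participant": "P21", "segment": "Lapsed customers",
     "question": "Q3: Describe your last purchase", "text": "It had been a while..."},
]

# After a flat text export, lookups like this have to be rebuilt before any
# text-analysis tool can slice the results by segment or by question.
by_segment = {}
for r in responses:
    by_segment.setdefault(r["segment"], []).append(r["text"])

# Carrying the metadata along with every response is what preserves the context.
with open("responses_with_context.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["participant", "segment", "question", "text"])
    writer.writeheader()
    writer.writerows(responses)
```

Multiply that reassembly across hundreds of pages of text and a thousand images, and the overhead quickly eats into whatever time the analysis tool was supposed to save.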

The tools will continue to improve (the Word Tree contextual text visualization in Revelation, shown below, is one example), but I think it’s safe to say that a silver bullet has not emerged for the challenge of ‘Little Big Data.’

Revelation Word Tree and Little Big Data
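For those curious what a word tree actually does with the text, here is a rough sketch of the underlying idea: collect every occurrence of a root word and branch on the words that follow it, so recurring phrasing stands out. This is an illustration of the general technique only, not Revelation’s implementation:

```python
# A rough sketch of the idea behind a word tree: branch each occurrence of a
# root word by the words that follow it, so recurring phrasing groups together.
# Illustrative only; not Revelation's implementation.
from collections import defaultdict
import re

def word_tree(texts, root, depth=3):
    """Count each continuation (up to `depth` words) that follows `root`."""
    branches = defaultdict(int)
    for text in texts:
        words = re.findall(r"\w+", text.lower())
        for i, w in enumerate(words):
            if w == root:
                continuation = " ".join(words[i + 1 : i + 1 + depth])
                if continuation:
                    branches[continuation] += 1
    return dict(branches)

comments = [
    "The price was too high for what you get.",
    "The price was fair, honestly.",
    "I thought the price was too high as well.",
]
print(word_tree(comments, "price"))
# {'was too high': 2, 'was fair honestly': 1}
```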

However, there may be one strategy being left out.  That is, gasp!, collect less information.  This notion seems contrary to researchers’ natural desire to get as much data as possible.  But I will argue that managing data flow is now part of the online qual research gig.

In traditional face-to-face qualitative research, the conventions of project scoping are well established. Researchers can count on showing up at a facility or a home, knowing exactly how much time they will spend speaking with participants, and having a good sense of how much data they will be getting. In a sense, the durations of in-person interviews represent natural “time boxes” that also provide a basis for scoping the effort of the overall project.

But what happens when a project is “on” 24 hours a day? What happens when the “time boxes” of interview durations are not in play, yet project timelines still call for a quick turnaround on reporting?

Suddenly, paying very close attention to the amount of data you collect becomes a viable and necessary strategy for dealing with ‘Little Big Data.’ Being very thoughtful about the number of activities, the number of participants, and the duration of a study, as well as defining a clear analysis strategy upfront, makes a lot of sense.

 


5 Responses to “Are We Over Collecting? Qualitative Data Overload, aka ‘Little Big Data’”

  1. Pingback: Qualitative Data Overload, aka ‘Little Big Data’ | NewQualitative.org « analyticalsolution

  2. Susan Sweet says:

    December 12th, 2012 at 10:19 am

    Steve, love this post. It’s a huge issue (pun intended) with great consequences for QRCs who are literally swimming in data at the end of an online/immersive research project.

    I embraced the idea of ‘collect less data’ on my last online immersive project, and was SO glad that I did. By setting up fewer activities, with fewer questions/collectors in each activity, I was able to really home in on the salient issues, and clients were able to do some collaborative analysis on very specific tasks. Everyone was happier, and we got just as much ‘information’ and possibly greater insight into the issues at hand. There was simply none of that “Ooh, a shiny object! Oh, look at this one!” happening in the project.

    Just as we push back when a client asks us to ‘throw something in’ at the end of a focus group, we have to be careful to not over-collect when doing online qual. This was a great reminder for me, and confirmation that collecting less can actually provide more.

  3. Ian Roberts says:

    January 9th, 2013 at 1:15 pm

    Steve, qualitative researchers are slow to adapt to changes, and their typical reaction to the availability of such a large amount of data is to discredit its value and restate the importance of interview and focus group data. Word clouds and word trees are not the most useful tools for handling a large volume of text, which may explain in part your view on this subject. I have been using Provalis Research text analytics tools for quite some time now, and with a proper understanding of the possibilities and the limits of such tools, you can get a lot of useful information pretty quickly and efficiently. In some cases, you can even automate the analysis of online data. I would suggest reading Fern Harper’s “Top 5 Challenges of Text Analytics” for a more optimistic yet realistic view on the value of text mining.

    http://www.allanalytics.com/author.asp?section_id=2013&doc_id=252381

  4. Jacqui Greeff says:

    January 10th, 2013 at 12:56 am

    Well said, Steve! ‘Less is More’ is a mantra not easily embraced by either qual traditionalists or new agers. But it certainly works for me.

  5. Bernadette Wright says:

    March 18th, 2013 at 12:25 am

    Software is no substitute for the iterative process of reading the transcripts and interpreting what all the data mean. As Morse noted, “Over-reliance on technical devices such as computer packages for data analysis may also interfere with the quality of the end result. Computer software facilitates the organisation of data, placing it in configurations that enhance analysis–nothing more.” (Janice Morse, Ch. 6, “Biased reflections: principles of sampling and analysis in qualitative inquiry,” in J. Popay (Ed.), Moving Beyond Effectiveness in Evidence Synthesis, p. 57, http://www.nice.org.uk/aboutnice/whoweare/aboutthehda/hdapublications/moving_beyond_effectiveness_in_evidence_synthesis__methodological_issues_in_the_synthesis_of_diverse_sources_of_evidence.jsp)
