Quality Over Quantity in Online Panel-Based Research

In 5 Crucial Steps to Choosing a Panel Provider, we talked about what to consider when choosing a quality online panel provider. However, that is just the first step toward quality research results. Not every respondent, even from the best panel, is guaranteed to respond sensibly every time, so it is equally important to implement a reliable and efficient QA process. To do this, we built a bespoke QA system supported by survey design: a set of rules that maximizes the removal of “bad” responses whilst minimizing good-respondent fallout and, most importantly, rules that actually improve our results.

1. One size doesn’t fit all

Each study needs its own formula for identifying bad responses, depending on the target audience, the length and form of the questionnaire, the survey delivery platform, the country and language, and so on. At AdExperiments, we narrowed it down to the following checkpoints:

  • Speedsters and laggers, based on survey length (a minimal sketch follows this list)
  • Flatliners
  • Inconsistent responses
  • Nonsensical responses to open-ended questions
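
To make the first checkpoint concrete, here is a minimal sketch of how completion-time outliers might be flagged relative to the median survey length. The cutoffs (fast_frac, slow_mult) are illustrative assumptions, not our production values, and the right thresholds will differ per study.

```python
from statistics import median

def flag_speedsters_and_laggers(durations_sec, fast_frac=0.4, slow_mult=3.0):
    """Flag respondents who finished implausibly fast or slow.

    durations_sec: completion times in seconds, one per respondent.
    fast_frac / slow_mult: hypothetical cutoffs relative to the
    median completion time; tune these per study.
    """
    med = median(durations_sec)
    flags = []
    for t in durations_sec:
        if t < fast_frac * med:
            flags.append("speedster")  # too fast to have read the questions
        elif t > slow_mult * med:
            flags.append("lagger")     # likely idle or distracted mid-survey
        else:
            flags.append("ok")
    return flags

# Example: a ~10-minute survey with one speedster and one lagger.
print(flag_speedsters_and_laggers([610, 580, 95, 640, 2400]))
# -> ['ok', 'ok', 'speedster', 'ok', 'lagger']
```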

However, using any of these checkpoints in isolation proved insufficient: no single check is sensitive enough to catch all bad responses. Yet combining all of them removed many good responses as well, denting the precious sample size. We found the best balance by manually classifying a sample of respondents as good or bad, then searching for the set of rules that removes the most bad responses while minimizing false positives. Our rules remove close to 100% of bad responses while keeping the false positive rate below 30%.
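
As a rough illustration of that tuning step, the sketch below scores a simple combined rule (“remove a respondent if at least k checkpoints fire”) against a hand-labelled sample, reporting bad-response recall and the false positive rate on good respondents. The checkpoint names, the k-threshold form of the rule, and the toy sample are assumptions for illustration; our actual rule set is more nuanced.

```python
def evaluate_rule(flag_sets, labels, min_flags):
    """Score a combined rule: remove a respondent when at least
    `min_flags` of the individual checkpoints fired.

    flag_sets: one dict of boolean checkpoint flags per respondent.
    labels:    "bad" or "good", from manual classification.
    Returns (bad-response recall, false positive rate on good).
    """
    removed = [sum(flags.values()) >= min_flags for flags in flag_sets]
    caught = sum(r and lb == "bad" for r, lb in zip(removed, labels))
    false_pos = sum(r and lb == "good" for r, lb in zip(removed, labels))
    return caught / labels.count("bad"), false_pos / labels.count("good")

# Toy hand-labelled sample: hypothetical checkpoint flags per respondent.
sample = [
    ({"speedster": True,  "flatliner": True,  "inconsistent": False, "nonsense_open_end": False}, "bad"),
    ({"speedster": False, "flatliner": True,  "inconsistent": True,  "nonsense_open_end": True},  "bad"),
    ({"speedster": False, "flatliner": True,  "inconsistent": False, "nonsense_open_end": False}, "good"),
    ({"speedster": False, "flatliner": False, "inconsistent": False, "nonsense_open_end": False}, "good"),
]
flag_sets, labels = zip(*sample)
for k in (1, 2):  # sweep the threshold to trade recall against fallout
    recall, fpr = evaluate_rule(flag_sets, list(labels), k)
    print(f"min_flags={k}: recall={recall:.0%}, false positives={fpr:.0%}")
```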

2. Timing makes a difference

After the formula was locked down, we still needed to actually remove the “bad” responses. There are two ways to do this: in real time or via post-survey reconciliation. Post-survey is easier, but real-time has its advantages:

  • Identify recruitment issues sooner: I have to admit that I have had more than one panic attack in my career after finding out that recruitment was flawed with no time to rerun the study. With a real-time system, automation can alert us when the “bad” response rate is too high (see the sketch at the end of this section).
  • Better control of sample size: We have a rough idea of what percentage of the sample QA will remove, but it can fluctuate a lot from one study to the next. With post-survey reconciliation, there is a risk of missing the target sample size unless a generous recruitment buffer is built in. With a real-time system there is no such concern: what you see is what you get.
  • Less bias: With a real-time system, the sample can be balanced across all cohorts without worrying that QA will remove more sample from one bucket than another, which would introduce additional bias into the study.

One word of caution: if you're rejecting respondents in real time, it is vital that you are confident your false positive rate is low.
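
As a sketch of what a real-time gate could look like: each complete is scored on arrival, rejected if it fails QA, and a rolling bad-response rate triggers an alert when recruitment looks unhealthy. The qa_check callable, window size, and alert threshold are hypothetical placeholders.

```python
from collections import deque

class RealTimeQAGate:
    """Score each complete as it arrives and alert when the rolling
    bad-response rate suggests a recruitment problem."""

    def __init__(self, qa_check, window=200, alert_rate=0.25):
        self.qa_check = qa_check            # returns True if the response is bad
        self.recent = deque(maxlen=window)  # rolling window of recent outcomes
        self.alert_rate = alert_rate

    def submit(self, response):
        is_bad = self.qa_check(response)
        self.recent.append(is_bad)
        bad_rate = sum(self.recent) / len(self.recent)
        if len(self.recent) == self.recent.maxlen and bad_rate > self.alert_rate:
            self.alert(bad_rate)
        return not is_bad  # accept only responses that pass QA

    def alert(self, bad_rate):
        # In production this might page the team or pause recruitment.
        print(f"WARNING: rolling bad-response rate at {bad_rate:.0%}")

# Usage: plug in your combined rule and accept/reject completes as they stream in.
gate = RealTimeQAGate(qa_check=lambda resp: resp.get("flatliner", False))
accepted = gate.submit({"respondent_id": "r1", "flatliner": False})  # -> True
```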

3. Smart surveys support smart QA

A smartly designed survey goes a long way toward making the QA process easier and more accurate. Although every questionnaire is different, here are a few things that worked well for us:

  • Conflict questions are great for checking the consistency of responses. As our surveys are about opinions of advertisements, we ask whether a respondent finds the ad likable and, separately, whether they find it irritating. A respondent who rates the ad as both likable and irritating at the same time is very likely a bad response (see the sketch after this list).
  • Likert-scale questions are great for identifying flatliners. However, they can backfire if your survey is not well designed for them: flatlining on Likert questions is where we had our highest false positive rate, because even good respondents find long Likert batteries repetitive and boring. We counter this by designing our survey interface to be dynamic and interactive, so respondents are not just clicking/tapping down a list of radio buttons. (Let us know if you'd like a demo!)
  • Open-ended questions help filter out respondents who aren’t paying enough attention. While this is the most difficult checkpoint to automate, it has the lowest false positive rate of all the checkpoints we use.
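
To illustrate the first two points, the sketch below flags a conflicting likable/irritating pair and flatlining across a Likert battery, using the spread of the ratings. The question keys, the 1-5 scale, and both cutoffs are illustrative assumptions.

```python
from statistics import pstdev

def is_conflicting(answers, hi=4):
    """Bad sign: the ad is rated both highly likable and highly
    irritating on the same (hypothetical) 1-5 scale."""
    return answers["ad_likable"] >= hi and answers["ad_irritating"] >= hi

def is_flatliner(likert_ratings, min_spread=0.5):
    """Bad sign: near-zero variation across a Likert battery,
    e.g. straight 3s down the whole grid."""
    return pstdev(likert_ratings) < min_spread

print(is_conflicting({"ad_likable": 5, "ad_irritating": 5}))  # True
print(is_flatliner([3, 3, 3, 3, 3, 3]))                       # True
print(is_flatliner([2, 4, 3, 5, 1, 4]))                       # False
```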

4. Choose quality over quantity

Very strict QA processes are not always implemented in online panel-based research. That is understandable: we are always striving for a good sample size, and sample is expensive (and recruitment can be slow)! With the strict QA process we put in place, we remove up to 15% of completed responses. But this is a rare case where the bigger sample wasn’t the better sample: across a series of 10 recent control/exposed studies, we obtained conclusive results from statistical tests 40% more often after implementing the QA process. Removing bad responses cuts the noise, making it easier for statistical tests to detect true differences across test cells.
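
The effect on statistical power can be illustrated with a toy simulation: mix a fraction of random “bad” responders into control and exposed cells that have a modest true lift, and compare how often a t-test reaches significance before and after removing them. The lift, scale, and bad-response fraction below are illustrative assumptions, not our study data.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def simulate(n=400, lift=0.15, bad_frac=0.15, runs=2000):
    """Share of runs where a t-test detects a true control/exposed
    difference, with and without random 'bad' responders mixed in."""
    hits_noisy, hits_cleaned = 0, 0
    n_bad = int(bad_frac * n)
    for _ in range(runs):
        control = rng.normal(3.0, 1.0, n)
        exposed = rng.normal(3.0 + lift, 1.0, n)
        # Bad responders answer uniformly at random on a 1-5 scale.
        ctrl_noisy = np.concatenate([control[n_bad:], rng.uniform(1, 5, n_bad)])
        expo_noisy = np.concatenate([exposed[n_bad:], rng.uniform(1, 5, n_bad)])
        hits_noisy += ttest_ind(ctrl_noisy, expo_noisy).pvalue < 0.05
        hits_cleaned += ttest_ind(control[n_bad:], exposed[n_bad:]).pvalue < 0.05
    print(f"conclusive with bad responses kept: {hits_noisy / runs:.0%}")
    print(f"conclusive after QA removal:        {hits_cleaned / runs:.0%}")

simulate()  # the cleaned, smaller sample wins despite losing 15% of completes
```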

In summary, developing a rigorous, customized QA process for online panel-based research is not as easy as it sounds and requires a decent investment of both time and resources. But the investment pays off in the long run, not just in cost savings from more efficient recruitment and averted recruitment risks, but more importantly in the improved quality of the research.

Please feel free to share your experience or questions.