The coronavirus pandemic has changed most things. For behavioral scientists, whose job it is to study human behavior, that includes the nature of data collection. Many researchers are looking for ways to move data collection online. In this blog, we provide an overview of online participant recruitment practices and some advice for getting started with online research.
Sources of Online Participants
Much of the world is connected to the internet, which means there are many ways to recruit research participants online. Researchers have, for example, recruited participants from the following sources, among others:
- Facebook (Rife, Cate, Kosinski, & Stillwell, 2016)
- Reddit (Jamnik & Lane, 2017)
- Online volunteer laboratories (Strange, Enos, Hill, & Lakeman, 2019)
- Market research panels (Coppock & McClellan, 2019; Chandler, Rosenzweig, Moss & Litman, 2019)
- Amazon Mechanical Turk (Buhrmester, Talaifar, & Gosling, 2018)
Recruiting volunteers is attractive because they participate for free. However, soliciting volunteers often means data collection will be slower than on a participant recruitment platform, and there are limits to the length and types of tasks volunteers will complete. For these reasons, researchers commonly turn to participant recruitment platforms.
In the new book Conducting Online Research on Amazon Mechanical Turk and Beyond, my co-author Jonathan Robinson and I provide an overview of commonly used participant recruitment platforms and give new users a guide for how to get started. Much of the book is focused on Amazon Mechanical Turk (MTurk)—one of the most popular sources of online participants within academic research over the last 10 years. We also describe how market research panels can complement platforms like MTurk and outline several best practices for maintaining data quality and avoiding sources of sampling bias.
Differences Between Participant Platforms
Mechanical Turk’s popularity among behavioral scientists began skyrocketing in 2011 following the publication of a seminal paper by Buhrmester, Kwang, and Gosling (2011) that showed high-quality data can be collected on MTurk quickly and inexpensively. Since then, papers with MTurk data have appeared in more than 1,000 different journals. Why has MTurk been so popular among academics?
There are several answers to that question, but two very important ones are cost and control. Generally speaking, conducting a study on MTurk is more affordable than alternatives like market research panels. In addition, platforms like MTurk give researchers more control over the data collection process.
When using MTurk, the researcher decides how much to compensate participants, whether certain participants should be included or excluded from the study, and when to start and stop data collection. When using market research panels, these decisions have historically been made by the sample provider. For obvious reasons then, many researchers prefer the flexibility of MTurk.
Beyond differences in how participant recruitment platforms operate, there are important differences in how various platforms were built that have implications for research. For example, MTurk was not built as a platform for social science research. Instead, it was built as a microtasking platform where “requesters” can post tasks that require human intelligence to complete. As a result, things like participant diversity, platform size, and mechanisms for targeting certain participants either were not considered beforehand, arose organically, or only emerged after third-party applications were developed to expand the flexibility of MTurk.
The same is not true of market research panels. Market research panels were intentionally designed to meet the needs of companies in the market research industry. As a result, online panels tend to be much larger than Mechanical Turk and to allow for more complicated demographic targeting. Using a market research panel, researchers could easily gather responses from tens of thousands of people, target participants in specific US zip codes, set quotas to match the US census, or sample internationally. All of these things would be difficult or impossible to do on MTurk.
What Kinds of Studies Can I Conduct on MTurk? What about Market Research Panels?
The history behind MTurk and market research panels affects the types of studies researchers can conduct. Because MTurk is still a microtask platform, participants are willing to engage in more complicated and demanding tasks than market research participants. People on MTurk are willing to:
- write essays for open-ended qualitative research
- participate in video interviews
- engage with other participants in interactive games and group-based social experiments
- participate in longitudinal studies, including studies that require intensive, daily tracking
- complete a wide variety of social and behavioral experiments, including those that measure reaction times
On market research panels, participants are used to shorter research surveys. Studies that last longer than 30 minutes become challenging, and participants are less willing to take part in longitudinal studies with multiple waves or in complex studies that require downloading software to measure reaction time. Although participants on market research panels do complete a wide variety of studies, it is harder to get the same level of engagement in long or complex studies as is possible on MTurk.
Putting this information together reveals that MTurk and market research panels can be complementary sources of research participants. When researchers want to run a study that contains a difficult task or requires lots of engagement, MTurk is probably the best bet. However, when researchers want to gather a very large sample, target a narrow demographic group, or collect data internationally, a market research panel is the better bet. Conducting successful online studies requires knowing when a participant source is, and is not, fit for the purposes of a study.
Find more details about these topics in Conducting Online Research on Amazon Mechanical Turk and Beyond.
Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon’s Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6(1), 3–5. https://doi.org/10.1177/1745691610393980
Chandler, J., Rosenzweig, C., Moss, A. J., & Litman, L. (2019). Online panels in social science research: Expanding sampling methods beyond Mechanical Turk. Behavior Research Methods, 51, 2022–2038. https://doi.org/10.3758/s13428-019-01273-7
Coppock, A., & McClellan, O. A. (2019). Validating the demographic, political, psychological, and experimental results obtained from a new source of online survey respondents. Research & Politics. https://doi.org/10.1177/2053168018822174
Jamnik, M. R., & Lane, D. J. (2017). The use of Reddit as an inexpensive source for high quality data. Practical Assessment, Research, and Evaluation, 22, 1–10. https://doi.org/10.7275/swgt-rj52
Rife, S. C., Cate, K. L., Kosinski, M., & Stillwell, D. (2016). Participant recruitment and data collection through Facebook: The role of personality factors. International Journal of Social Research Methodology, 19, 69–83. https://doi.org/10.1080/13645579.2014.957069
Strange, A. M., Enos, R. D., Hill, M., & Lakeman, A. (2019). Online volunteer laboratories for human subjects research. PLoS ONE, 14, e0221676. https://doi.org/10.1371/journal.pone.0221676