Online Surveys

Categories: Data Collection, Online Research, Other, Research, Research Methods, Research Skills


We will define our April focus broadly to include any qualitative or quantitative methods that involve questioning, prompting, or working with participants to collect or generate data. Find the unfolding series here.


An Introduction to Web Surveys

What is a survey? Allen (2017) defined this research method as:

A survey is a set of questions or statements to which participants give responses. A survey provides one of the best methods to obtain a large amount of data from participants. In this manner, survey data can provide a quantitative, qualitative, and/or numeric description of trends, attitudes, or opinions of a population by studying a sample of that population. 

While research terms are often used differently across disciplinary and methodological contexts, I use the term survey to describe a way to collect data for quantitative research, using a tested instrument. I use the term questionnaire to describe a way to collect data for qualitative studies based on questions devised by the researcher. By this definition, questionnaires can include more open-ended questions that invite narrative responses, while surveys include more check-box and Likert-scale options. The definition from Groves et al. (2004) aligns with this way of thinking:

A survey is a systematic method for gathering information from (a sample of) entities for the purpose of constructing quantitative descriptors of the attributes of the larger population of which the entities are members (p. 2).
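
To make the distinction between closed-ended survey items and open-ended questionnaire items concrete, here is a minimal sketch in Python of how the two kinds of responses might be represented for analysis. The item wordings, response labels, and numeric coding are hypothetical illustrations, not taken from any published instrument.

```python
# A minimal sketch: closed-ended (Likert) survey items yield numeric codes
# suited to quantitative description, while open-ended questionnaire items
# yield narrative text stored verbatim for later qualitative coding.
# Item wordings, labels, and coding are hypothetical.

LIKERT_SCALE = {
    "strongly disagree": 1,
    "disagree": 2,
    "neither agree nor disagree": 3,
    "agree": 4,
    "strongly agree": 5,
}

def code_likert(responses):
    """Convert Likert response labels to numeric codes (None if missing or unknown)."""
    return [LIKERT_SCALE.get(r.strip().lower()) if r else None for r in responses]

# Closed-ended survey item: responses map directly onto a numeric scale.
survey_item = {
    "text": "I find web surveys easy to complete.",
    "responses": ["agree", "strongly agree", "disagree", ""],
}
codes = code_likert(survey_item["responses"])
valid = [c for c in codes if c is not None]
print("numeric codes:", codes)                     # [4, 5, 2, None]
print("mean response:", sum(valid) / len(valid))   # about 3.67

# Open-ended questionnaire item: narrative text, kept as-is for qualitative analysis.
questionnaire_item = {
    "text": "Describe your experience of completing surveys on a phone.",
    "responses": ["The text was too small to read comfortably."],
}
```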

This excerpt from an entry in the SAGE Research Methods Foundations by Stern, LeClere & Fordyce (2019) discusses surveys conducted online:

Since the early 2000s, surveys completed via the Internet have become an important part of the data collection lexicon. It is the case that web surveys are now used for large multimillion-dollar survey efforts with tens of thousands of survey units as well as for small-scale evaluation and appraisal work. While Internet use is ubiquitous, the best practices for designing, implementing, and administering web surveys still remain in flux because of the evolving nature of technology use as well as rapid changes in the technology itself. The majority of respondents no longer simply use desktop computers tied to a fixed position to navigate through a survey hosted on the Internet. Surveys are now completed on smartphones, tablets, laptops using a mouse, touch screen, voice, or linked device. The design process thus also requires designing the new as well as extant technologies.

As a result of this rapid evolution, there is uncertainty surrounding the very definition of what constitutes a web survey; that is, with the multitude of ways in which researchers can deliver surveys that are technically “over the web,” the web survey has been transformed from a mode of data collection to a “multidimensional” concept (Couper, 2011, p. 892). For the purposes of this discussion, a web survey is defined as a survey whereby respondents can be reached and complete a questionnaire via location-based broadband Internet access, Internet-enabled mobile devices, social media, or text messaging.
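
Because a web survey defined this broadly can reach respondents on desktops, laptops, tablets, and smartphones, researchers often record the responding device so that device effects can be examined later. The sketch below is a rough, hypothetical illustration of that kind of classification from a browser user-agent string; the keyword rules are simplified assumptions rather than a production-grade parser.

```python
# A rough sketch of classifying the responding device from the browser's
# user-agent string, so device type can be stored alongside each answer.
# The keyword rules are simplified assumptions, not a robust parser.

def classify_device(user_agent: str) -> str:
    ua = user_agent.lower()
    if "ipad" in ua or "tablet" in ua:
        return "tablet"
    if "mobile" in ua or "iphone" in ua or "android" in ua:
        return "smartphone"
    return "desktop/laptop"

print(classify_device("Mozilla/5.0 (iPhone; CPU iPhone OS 15_0 like Mac OS X) Mobile"))
# -> smartphone
print(classify_device("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))
# -> desktop/laptop
```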

Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2004). Survey methodology. Wiley.

Stern, M. J., LeClere, F., & Fordyce, E. (2019). Web Surveying Design and Implementation. In P. Atkinson, S. Delamont, A. Cernat, J.W. Sakshaug, & R.A. Williams (Eds.), SAGE Research Methods Foundations. https://www.doi.org/10.4135/9781526421036827920


Open-Access Articles about Conducting Web Surveys

Burnett, C. M. (2016). Exploring the difference in participants’ factual knowledge between online and in-person survey modes. Research & Politics. https://doi.org/10.1177/2053168016654326

Abstract. Over the past decade, an increasing number of scholars and professionals have turned to the Internet to gather samples of subjects for research ranging from public opinion surveys to experiments in the social sciences. While there has been a focus on whether online samples are representative and accurate, fewer studies examine the behavioral differences between individuals who participate in surveys and experiments on a computer versus in-person. Here, I use an experiment to gauge whether respondents who self-complete surveys online are more likely to register higher knowledge scores compared with respondents who self-complete surveys with pen and paper in a laboratory. The results show that subjects in the online group are significantly more likely to answer knowledge questions correctly across a range of topics. Patterns in the data imply respondents are researching some answers.

Decorte, T., Malm, A., Sznitman, S. R., Hakkarainen, P., Barratt, M. J., Potter, G. R., Werse, B., Kamphausen, G., Lenton, S., & Asmussen Frank, V. (2019). The challenges and benefits of analyzing feedback comments in surveys: Lessons from a cross-national online survey of small-scale cannabis growers. Methodological Innovations. https://doi.org/10.1177/2059799119825606

Abstract. It is common practice in survey questionnaires to include a general open and non-directive feedback question at the end, but the analysis of this type of data is rarely discussed in the methodological literature. While these open-ended comments can be useful, most researchers fail to report on this issue. The aim of this article is to illustrate and reflect upon the benefits and challenges of analyzing responses to open-ended feedback questions. The article describes the experiences of coding and analyzing data generated through a feedback question at the end of an international online survey with small-scale cannabis cultivators carried out by the Global Cannabis Cultivation Research Consortium. After describing the design and dataset of the web survey, the analytical approach and coding frame are presented. The analytical strategies chosen in this study illustrate the diversity and complexity of feedback comments which pose methodological challenges to researchers wishing to use them for data analyses. In this article, three types of feedback comments (political/policy comments, general comments of positive and negative appreciation, and methodological comments) are used to illustrate the difficulties and advantages of analyzing this type of data. The advantages of analyzing feedback comments are well known, but they seem to be rarely exploited. General feedback questions at the end of surveys are typically non-directive. If researchers want to use these data for research and analyses, they need a clear strategy. They ought to give enough thought to why they are including this type of question, and develop an analytical strategy at the design stage of the study.
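
As a rough illustration of the kind of coding frame the abstract describes, the sketch below sorts end-of-survey feedback comments into the three comment types the authors mention. The keyword lists and example comments are hypothetical and are not the consortium's actual coding scheme.

```python
# A rough sketch of a keyword-based first pass over end-of-survey feedback
# comments, using the three comment types named in the abstract. The keyword
# lists and example comments are hypothetical, not the study's coding frame.

CODING_FRAME = {
    "political/policy": ["legalis", "law", "policy", "government"],
    "appreciation": ["thank", "enjoyed", "great survey", "too long", "waste of time"],
    "methodological": ["unclear", "missing option", "didn't apply", "scale"],
}

def code_comment(comment):
    """Return every comment type whose keywords appear; an empty list means manual review."""
    text = comment.lower()
    return [label for label, keywords in CODING_FRAME.items()
            if any(keyword in text for keyword in keywords)]

comments = [
    "Thank you, I enjoyed taking part.",
    "The law in my country makes this topic hard to talk about.",
    "One question was unclear and had no option that applied to me.",
]
for comment in comments:
    print(code_comment(comment), "-", comment)
```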

Hargittai, E., Nguyen, M. H., Fuchs, J., Gruber, J., Marler, W., Hunsaker, A., & Karaoglu, G. (2020). From Zero to a National Data Set in 2 Weeks: Reflections on a COVID-19 Collaborative Survey Project. Social Media + Society. https://doi.org/10.1177/2056305120948196

Abstract. In March 2020, like much of the rest of the world, we went into lockdown. A week into our new reality, we decided to do a survey study about how people were experiencing the COVID-19 pandemic. In this piece, we describe what motivated us to do the study, how we went about it, and what others can learn from our experiences.

Kalleitner, F., Mühlböck, M., & Kittel, B. (2020). What’s the Benefit of a Video? The Effect of Nonmaterial Incentives on Response Rate and Bias in Web Surveys. Social Science Computer Review. https://doi.org/10.1177/0894439320918318

Abstract. Traditional survey research faces declining response rates due to changing cultural habits and technological developments. Researchers have developed novel approaches to increase respondents’ likelihood of participating in web surveys. However, we lack information about whether these methods indeed increase response rates and, if so, whether they bias the resulting data. This article focuses on the use of nonmaterial incentives in the form of a video that provides the invitees with information tailored to their life situation. Analysis of our experimental data shows that instead of increasing respondents’ probability of starting the survey, the video treatments actually decrease it. We provide evidence that the lower salience of the intrinsic benefits of survey participation in the invitation email significantly contributes to this reduction. Additionally, the effect of the nonmaterial incentive differs across subgroups, affecting nonresponse biases in line with employment status, gender, and migration background. We therefore conclude that using additional information in the form of a video as a nonmaterial survey incentive is only suitable under specific conditions.

Liu, M. (2020). Soliciting email addresses to re-contact online survey respondents: Results from web experiments. Methodological Innovations. https://doi.org/10.1177/2059799120937237

Abstract. There are many occasions where contact information needs to be collected from survey participants in order to make future contact and conduct follow-up surveys. This article reports findings from two experiments on collecting respondent emails and sending second-survey invitations. In the email collection experiment, when only one follow-up survey was mentioned, more respondents provided their emails, compared to when the emphasis was on the research purpose of the follow-up survey. However, the follow-up survey participation rates were similar among respondents who provided their emails, regardless of the wording of the request. The invitation email subject line experiment shows that a generic request for opinions reduces follow-up survey participation compared to subject lines emphasizing the survey sponsor and specialty opinions.

Liu, M., & Cernat, A. (2018). Item-by-item Versus Matrix Questions: A Web Survey Experiment. Social Science Computer Review, 36(6), 690–706. https://doi.org/10.1177/0894439316674459

Abstract. While the choice of matrix versus item-by-item questions has received considerable attention in the literature, it is still unclear in what situation one is better than the other. Building upon previous findings, this study expands this line of research by examining whether the difference between the two question types is moderated by the number of response options. Through a web survey experiment, this study compares matrix and item-by-item questions with 2, 3, 4, 5, 7, 9, and 11 response options. Additionally, we investigate the impact of the device used to complete the survey on data quality. The results show that straightlining and response time are similar between the two question types across all response lengths, but item nonresponse tends to be higher for matrix than for item-by-item questions, especially among mobile respondents. Also, measurement models reveal measurement equivalence between the two question types when there are fewer than seven response options. For matrices with 9 or 11 response options, analyses reveal substantial differences compared to item-by-item questions.
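
To illustrate two of the data-quality indicators this study compares, here is a minimal sketch of how straightlining and item nonresponse might be computed from responses to a battery of items. The example responses are invented.

```python
# A minimal sketch of two data-quality indicators mentioned in the abstract:
# straightlining (identical answers across a battery of items) and item
# nonresponse (share of items left unanswered). The responses are invented.

def is_straightliner(answers):
    """True if every answered item in the battery received the same response."""
    answered = [a for a in answers if a is not None]
    return len(answered) > 1 and len(set(answered)) == 1

def item_nonresponse_rate(answers):
    """Share of items in the battery left unanswered (None = skipped)."""
    return sum(a is None for a in answers) / len(answers)

# Hypothetical answers of three respondents to a 5-item battery.
battery_answers = {
    "r1": [4, 4, 4, 4, 4],       # straightliner
    "r2": [2, 5, None, 3, 1],    # one skipped item
    "r3": [1, 2, 2, 3, 2],
}
for rid, answers in battery_answers.items():
    print(rid, is_straightliner(answers), round(item_nonresponse_rate(answers), 2))
```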

Maineri, A. M., Bison, I., & Luijkx, R. (2019). Slider Bars in Multi-Device Web Surveys. Social Science Computer Review. https://doi.org/10.1177/0894439319879132

Abstract. This study explores some features of slider bars in the context of a multi-device web survey. Using data collected among the students of the University of Trento in 2015 and 2016 by means of two web surveys (N = 6,343 and 4,124) including two experiments, we investigated the effect of the initial position of the handle and the presence of numeric labels on answers provided using slider bars. It emerged that the initial position of the handle affected answers and that the number of rounded scores increased with numeric feedback. Smartphone respondents appeared more sensitive to the initial position of the handle but also less affected by the presence of numeric labels resulting in a lower tendency to rounding. Yet, outcomes on anchoring were inconclusive. Overall, no relevant differences have been detected between tablet and PC respondents. Understanding to what extent interactive and engaging tools such as slider bars can be successfully employed in multi-device surveys without affecting data quality is a key challenge for those who want to exploit the potential of web-based and multi-device data collection without undermining the quality of measurement.
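
One outcome the authors examine is the tendency to give rounded scores on a slider. A minimal sketch of such a rounding indicator, using invented 0–100 slider answers, might look like this:

```python
# A minimal sketch of a rounding indicator for 0-100 slider answers: the share
# of responses falling on exact multiples of 5 or 10. The answers are invented.

def share_rounded(values, base=5):
    """Proportion of slider values that are exact multiples of `base`."""
    return sum(v % base == 0 for v in values) / len(values)

slider_answers = [50, 73, 65, 80, 41, 100, 35, 62]
print("multiples of 5: ", share_rounded(slider_answers, base=5))   # 0.625
print("multiples of 10:", share_rounded(slider_answers, base=10))  # 0.375
```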

Sakshaug, J. W., Vicari, B., & Couper, M. P. (2019). Paper, E-mail, or Both? Effects of Contact Mode on Participation in a Web Survey of Establishments. Social Science Computer Review, 37(6), 750–765. https://doi.org/10.1177/0894439318805160

Abstract. Identifying strategies that maximize participation rates in population-based web surveys is of critical interest to survey researchers. While much of this interest has focused on surveys of persons and households, there is a growing interest in surveys of establishments. However, there is a lack of experimental evidence on strategies for optimizing participation rates in web surveys of establishments. To address this research gap, we conducted a contact mode experiment in which establishments selected to participate in a web survey were randomized to receive the survey invitation with login details and subsequent reminder using a fully crossed sequence of paper and e-mail contacts. We find that a paper invitation followed by a paper reminder achieves the highest response rate and smallest aggregate nonresponse bias across all possible paper/e-mail contact sequences, but a close runner-up was the e-mail invitation and paper reminder sequence, which achieved a similarly high response rate and low aggregate nonresponse bias at about half the per-respondent cost. Following up undeliverable e-mail invitations with supplementary paper contacts yielded further reductions in nonresponse bias and costs. Finally, for establishments without an available e-mail address, we show that enclosing an e-mail address request form with a prenotification letter is not effective from a response rate, nonresponse bias, and cost perspective.
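
As a rough illustration of the two outcomes compared across contact-mode sequences, the sketch below computes a response rate and a simple nonresponse bias estimate using a variable known from the sampling frame for all selected establishments. The figures are invented and are not the study's data.

```python
# A rough sketch of the two outcomes compared across contact-mode sequences:
# the response rate, and nonresponse bias estimated as the gap between
# respondents and the full sample on a variable known from the sampling frame
# (here, establishment size). All figures are invented, not the study's data.

def response_rate(n_respondents, n_sampled):
    return n_respondents / n_sampled

def nonresponse_bias(frame_values, responded_flags):
    """Respondent mean minus full-sample mean on a frame variable."""
    full_mean = sum(frame_values) / len(frame_values)
    respondent_values = [v for v, r in zip(frame_values, responded_flags) if r]
    return sum(respondent_values) / len(respondent_values) - full_mean

# Hypothetical frame: six sampled establishments with known employee counts,
# and whether each one completed the web survey.
employee_counts = [12, 250, 40, 8, 120, 60]
responded = [True, False, True, True, False, True]

print("response rate:", round(response_rate(sum(responded), len(responded)), 2))      # 0.67
print("bias in mean size:", round(nonresponse_bias(employee_counts, responded), 1))   # -51.7
```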

