Survey Science

Academic research on use of incentives in surveys

If you are interested in understanding the impact of using incentives in surveys, the following is a summary of academic literature that covers the topic well.

Jamshid S. Kalantar, Nicholas J. Talley
Journal of Clinical Epidemiology, Volume 52, Issue 11, Pages 1117–1122 (November 1999)

ABSTRACT

Maximizing the response rate of self-administered questionnaires is key in survey research. We aimed to evaluate the effects of lottery incentive and length of questionnaire on health survey response rates when used in isolation or combined. A random sample of 440 residents in Western Sydney, Australia was randomly allocated to four equal groups to receive or not receive an instant lottery ticket and a long (seven page) or short (one page) questionnaire. The overall response rate was 71.8%. The final response rates were higher among those receiving the short, rather than the long, questionnaire (75.6% versus 68.2%) (P = 0.08); and among those receiving the lottery incentive compared with those not receiving the incentive (75% versus 68.2%) (P = 0.09). By logistic regression analysis, the success of obtaining a completed questionnaire without any follow-up reminders was significantly associated with the lottery incentive but not the questionnaire length (P = 0.03 and P = 0.54, respectively). The difference between lottery and no lottery groups decreased gradually during the follow-up. A lottery incentive is associated with an increased response after the first mailing. A small up-front cost for a lottery ticket may be worthwhile, since it can save further costs by obviating the need for repeated follow-ups.
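The 2×2 factorial comparison described above (lottery incentive × questionnaire length) can be sketched with simple response-rate arithmetic. The per-cell completion counts below are hypothetical placeholders chosen to roughly reproduce the marginal rates the abstract reports; they are not the study's actual cell counts.

```python
# Sketch of the 2x2 factorial comparison in Kalantar & Talley (1999).
# Cell counts are hypothetical; only the approximate marginal rates
# (short vs long questionnaire, lottery vs no incentive) come from
# the abstract's reported figures.

def response_rate(responded, total):
    """Return the response rate as a percentage."""
    return 100.0 * responded / total

# Hypothetical completed questionnaires per arm of 110 residents each.
groups = {
    ("short", "lottery"): 88,
    ("short", "none"): 78,
    ("long", "lottery"): 77,
    ("long", "none"): 72,
}
n_per_group = 110

def marginal(factor_index, level):
    """Marginal response rate for one factor level, pooled over the other factor."""
    responded = sum(v for k, v in groups.items() if k[factor_index] == level)
    return response_rate(responded, 2 * n_per_group)

print(f"short questionnaire: {marginal(0, 'short'):.1f}%")
print(f"long questionnaire:  {marginal(0, 'long'):.1f}%")
print(f"lottery incentive:   {marginal(1, 'lottery'):.1f}%")
print(f"no incentive:        {marginal(1, 'none'):.1f}%")
```

Pooling over the other factor in this way is what lets a 440-person sample support two separate comparisons at once, which is the appeal of the factorial design the authors used.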

Colleen Cook, Fred Heath, and Russel L. Thompson
Educational and Psychological Measurement, 2000; 60: 821

ABSTRACT

Incentives. Our results were particularly interesting with regard to the use of incentives. As reported in Figure 5, the use of incentives in Web surveys actually seemed to be associated with more homogeneous and lower response rates. This paradox may have occurred because persons implementing disproportionately long or tedious surveys may have recognized the necessity of providing substantial rewards for survey completions.

Stephen R. Porter and Michael E. Whitcomb
Research in Higher Education, Vol. 44, No. 4, August 2003

ABSTRACT

Unlike the inclusion of prepaid incentives with surveys, promises of payment on survey completion do not appear to affect respondent behavior. Several studies have been conducted that compare the impact of both pre- and post-payment of incentives, with the general finding that promised payments for survey completion in the range of $5–$20 (Berk, Mathiowetz, Ward, and White, 1987; Berry and Kanouse, 1987; Singer et al., 2000) and $50 (James and Bolstein, 1992) have no statistically significant impact on response rates. The same holds for non-monetary postpaid incentives (Jobber, Mirza, and Wee, 1991). Church (1993), in his widely cited meta-analysis of 74 surveys that used incentives, found that rewards contingent on the return of the survey had no significant effect on response rates. While the studies cited above used telephone or mail surveys, a meta-analysis of 68 electronic surveys found that incentives actually slightly depressed response rates (Cook, Heath, and Thompson, 2000). The authors of that analysis do not describe whether the types of incentives used by the surveys under study were pre- or post-payments, but given the logistical difficulties of pre-payments in electronic surveys (Couper, 2000), it is likely that these incentives were post-payments.

The one exception in the literature is the work of Collins, Ellickson, Hays, and McCaffrey (2000). In their study of pre- and post-payments in the eighth wave of a decade-long longitudinal study, they found a significant positive effect for a promise of payment on survey completion. However, they indicate this anomalous finding is probably the result of their sample. The respondents surveyed had participated in the study for many years and had developed a close relationship with the survey team. Thus, trust and a norm of reciprocity had already been established by the time of their study, and any promises of payment would be seen as much more credible than if the respondents had been contacted for the very first time.

Finally, some researchers have tested the effect of lottery post-payments, in which the incentive is contingent on both completing the survey and the outcome of a drawing. Similar to the literature on simple post-payments, these researchers have found no effect for lottery incentives on response rates (Golden, Anderson, and Sharpe, 1980; Hubbard and Little, 1988; Paolillo and Lorenzi, 1984; Warriner et al., 1996).

The survey administration consisted of an initial e-mail notification with an embedded survey link, and each group was administered the same survey. Although students were asked to enter their e-mail address during the survey, the sample groups were given links to five separate Web sites to ensure we could track differences between groups. The four incentive groups were informed that if they responded to the survey they would be entered into a drawing for a $50, $100, $150, or $200 gift certificate to the online retailer Amazon.com, depending on the experimental condition. The e-mails sent to each group were identical except for those sent to the incentive groups.
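The assignment-and-tracking scheme described above (identical e-mails, but each condition pointed at its own survey URL so responses can be attributed to a group) can be sketched as follows. The condition names mirror the study's incentive levels, but every URL, name, and group size here is a made-up illustration, not the study's actual setup.

```python
import random

# Hypothetical sketch of the assignment scheme in Porter & Whitcomb:
# each student is randomly assigned to one condition, and each condition
# gets its own copy of the survey at a distinct URL, so responses can be
# attributed to a condition without relying on self-reported e-mail
# addresses. All URLs below are invented for illustration.

CONDITIONS = {
    "control": "https://survey.example.edu/s/ctrl",
    "$50":     "https://survey.example.edu/s/a",
    "$100":    "https://survey.example.edu/s/b",
    "$150":    "https://survey.example.edu/s/c",
    "$200":    "https://survey.example.edu/s/d",
}

def assign(students, seed=0):
    """Shuffle students and deal them round-robin into equal-sized groups."""
    rng = random.Random(seed)
    shuffled = students[:]
    rng.shuffle(shuffled)
    names = list(CONDITIONS)
    return {s: names[i % len(names)] for i, s in enumerate(shuffled)}

assignments = assign([f"student{i}" for i in range(100)])
sizes = {c: sum(1 for g in assignments.values() if g == c) for c in CONDITIONS}
print(sizes)  # 100 students dealt evenly across 5 conditions
```

Shuffling before a round-robin deal gives equal group sizes by construction, which simple per-student coin flips would only achieve in expectation.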

Our results indicate that more is not better: increasing the size of the prize did not result in a linear increase in response rates. Researchers are thus faced with a quandary. If the prize is not valuable enough, the prize will not affect response rates. If the prize is too valuable, it also will not affect response rates. It is likely, then, that resources might be spent on prizes that in turn will not affect the survey response rate.

Anja S. Göritz, Hans-Georg Wolff
University of Erlangen–Nürnberg

ABSTRACT

Independent of the incentive, the response rate of 61.4% in Wave 1 rose to 78.0% in Wave 2, which is a result of inviting to Wave 2 only those people who had responded to Wave 1 (see Table 1). Throughout the remainder of the longitudinal study, the response rate fell to 58.8% in Wave 3 and to 55.1% in Wave 4. Table 1 lists the response and retention rates for the lottery and control conditions and the net effect of the lottery. The net effect of the lottery on response declines from a positive value of 12.7% to a negative value of –2.7%. Net effects of the lottery on retention do not yield a systematic pattern over time. Table 1 also shows the total effects of the lottery by combining the effects on response and retention (= usable data). The total effect of the lottery declines from a positive value of 12.2% to –1.8%.
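The "net effect" in the paragraph above is just the lottery condition's response rate minus the control condition's, computed per wave. The per-condition rates below are illustrative placeholders; only the resulting net effects (+12.7 points in Wave 1, –2.7 in Wave 4) match the figures Göritz and Wolff report.

```python
# Net effect of the lottery per wave: response rate in the lottery
# condition minus response rate in the control condition.
# Per-condition rates are hypothetical; only the differences match
# the net effects quoted from the abstract.

waves = {
    "Wave 1": {"lottery": 67.5, "control": 54.8},
    "Wave 4": {"lottery": 53.8, "control": 56.5},
}

def net_effect(rates):
    """Lottery minus control, in percentage points."""
    return round(rates["lottery"] - rates["control"], 1)

for wave, rates in waves.items():
    print(wave, net_effect(rates))
```

A positive value early and a negative value late is exactly the declining pattern the study describes: the lottery helps at first and stops helping (or slightly hurts) by the final wave.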