Survey Science

Survey responses: Can you have too much of a good thing?

So your friend Barney is single on Valentine’s Day (again). He insists, however, that it is not due to his lack of appeal, but simply to the fact that he doesn’t try to get dates. You want to tell him that’s ridiculous—but, tired of listening to him go on and on about it, you decide to settle this once and for all.

The plan is this: Barney will go out and “try” to get women to go on a date with him for an hour, and then he will go out and “not try,” like he usually does, for another hour. In each of these hours he has to approach 20 women (either “trying” or “not trying”). If he gets more dates when he’s “trying,” then clearly he’s right. If not, then you’re right. Case closed.

Here’s what happens: Barney first goes out for an hour and “tries” to get 20 women to agree to go on a date with him; miraculously, 17 agree (85%). Barney then goes out for an hour with his usual “not trying” strategy. Out of the 20 women he asks out, 8 agree to go on a date with him (40%). Feeling smug, Barney comes back telling you how wrong you were and how he’s sure he won’t be single next Valentine’s Day.

Barney thinks he’s proved you wrong because of the higher response rate he got. A response rate is just a fancy term for the proportion of people who actually respond to a request to do something, like going on a date with Barney or filling out your survey. Response rates are usually trumpeted as the most important measure of how good or useful a survey is: if more people choose to complete your survey, then your survey must be better!
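
If you like to see the arithmetic spelled out, here’s a minimal sketch of the calculation. The function name and the numbers come from Barney’s made-up story, not from any SurveyMonkey feature:

```python
def response_rate(responses: int, requests: int) -> float:
    """Return the response rate as a percentage: completed responses / total requests."""
    if requests <= 0:
        raise ValueError("requests must be a positive number")
    return 100 * responses / requests

# Barney's made-up numbers from the story
print(response_rate(17, 20))  # "trying": 85.0
print(response_rate(8, 20))   # "not trying": 40.0
```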

This metric makes sense; more responses are better than fewer. Still, what if different people reply when you use more aggressive survey tactics? What if pushing for responses somehow changes the actual responses that you get back? Going back to our example, what if the women who agreed to a date when Barney was trying were (on average) much less attractive, funny, or wealthy than the ones who agreed when he wasn’t trying? What if the dates he ultimately went on were (on average) worse?

So, when it comes to surveys, the question remains: does increasing the response rate somehow change the responses, or the respondents, that you get back?

In 2000, Keeter, a public opinion researcher, and his colleagues tested this very question. They administered an identical survey to two randomly selected groups. With Group A, the researchers made an effort to increase the response rate; with Group B, they didn’t. As a result, Group A’s survey had a 61% response rate and Group B’s survey had a 36% response rate. They found that the level of the response rate didn’t affect the actual responses given to the survey or change the demographic makeup of either group. In other words, people in the high-response group were no more likely to be brunette or to like shopping than people in the low-response group. The response rate simply did not change the characteristics of the respondents or the responses themselves.
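
To make that kind of comparison concrete, here’s a hedged sketch (not the study’s actual analysis; the counts below are invented for illustration and assume SciPy is available) of how you might check whether answers to a single question differ between a high-response-rate group and a low-response-rate group, using a chi-square test:

```python
from scipy.stats import chi2_contingency  # assumes SciPy is installed

# Invented counts of answers to one survey question, broken out by group.
# These numbers are for illustration only and are not from Keeter et al. (2000).
#                  "Yes"  "No"  "Not sure"
group_a_counts = [  310,   240,   60]   # Group A: 61% response rate
group_b_counts = [  180,   145,   35]   # Group B: 36% response rate

chi2, p_value, dof, expected = chi2_contingency([group_a_counts, group_b_counts])

# A large p-value means no detectable difference in how the two groups answered,
# which is the pattern Keeter and colleagues reported across their questions.
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
```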

The moral? Higher response rates are better, but not because they change your responses or your respondents; they’re better simply because they get the job done faster. As for Barney, whether or not he won the bet, he’ll probably still be single next Valentine’s Day.

Keeter, S., Miller, C., Kohut, A., Groves, R., & Presser, S. (2000). Consequences of reducing nonresponse in a large national telephone survey. Public Opinion Quarterly, 64, 125–148.