
Ad testing: Why it matters and how to successfully perform it

How confident are you in your advertisement?

Whether you want your ad to increase sales or raise awareness of your brand, you need one that you know will work. There’s no better way to be confident about an ad than to see that it’s tested well.

Ad testing is the process of putting different ads in front of a sample of your target audience and asking for feedback on them. You can run ad tests on an entire ad or on specific aspects of it, and collect feedback on anything from how much the ad stands out to how believable your audience finds it.

Throughout this page we’ll talk about how to perform ad testing and the best practices when performing it, but first, let’s review why it’s so important.

The amount brands spend on ads is astronomically high (more than $500 billion worldwide), and it's only growing.

Why are brands spending so aggressively on ads? Because they're effective. For example, consumer packaged goods (CPG) brands see a solid return on their ads across media types. When you incorporate pre-launch testing to home in on particular ad concepts, the chances of a strong return only grow.

Measuring advertising effectiveness through testing offers 4 additional benefits:

  1. It gives you data to back up your decisions. Nobody can argue against hard numbers. If you can prove which ad concepts are the best, you should have an easy time persuading colleagues to run with the winning ads.
  2. It provides ideas for further improvement. Your winning ad might not be perfect. For example, say you’re focused on copy testing. Your audience likes the winner, but they don’t find its message completely believable. This input gives you a chance to take already likeable copy and make it even better!
  3. It allows you to understand and segment different audiences. Once your responses come back, you can filter them to see how different groups (e.g. male vs. female) feel about each of your ads. These insights can help you pick specific ads for individual groups, or make a single ad that tries to incorporate elements each group likes.
  4. It helps your organization iterate quickly. Quick iteration is the basis of agile market research: a method of research that involves frequent rounds of data collection to account for an organization's changing needs over time. Agile research empowers your team to make better decisions more often, and rely less on insights organizations, agencies, or other outside firms.
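To make benefit 3 concrete, here's a minimal sketch of segmenting responses in plain Python. The field names (`gender`, `ad`, `rating`) and the sample records are hypothetical, standing in for whatever your survey exports:

```python
# Hypothetical response records: each has segment fields and a 1-5 rating.
responses = [
    {"gender": "female", "ad": "A", "rating": 5},
    {"gender": "male",   "ad": "A", "rating": 3},
    {"gender": "female", "ad": "B", "rating": 4},
    {"gender": "male",   "ad": "B", "rating": 5},
]

def mean_rating(records, **filters):
    """Average rating among records matching every segment filter."""
    hits = [r["rating"] for r in records
            if all(r[key] == value for key, value in filters.items())]
    return sum(hits) / len(hits)

# How did women rate Ad A versus men?
print(mean_rating(responses, gender="female", ad="A"))  # 5.0
print(mean_rating(responses, gender="male", ad="A"))    # 3.0
```

The same filtering idea scales to any segment your survey captures (age range, region, customer status, and so on).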

Our built-in research capabilities can handle every stage of testing at a fraction of the cost of a market research agency!

Now that you know why assessing advertising effectiveness is so important, you’re ready to dive into the process of testing your ads. Here are the 4 key steps you’ll need to follow:

(Image: the four key steps of ad testing)

Your stimuli can take the form of videos, images, copy, audio, or a combination of these elements. As you decide on the formats your stimuli take, also consider their content based on the places they'll be used. For instance, you might want to tailor LinkedIn ads to professional networkers, as opposed to Facebook ads or Google search ads.

Your team should try to use at least 3 stimuli per test to get a fair assessment of your target audience's preferences and opinions. But the number of stimuli you choose also depends on how you want to present them; in other words, whether you plan to use a monadic survey design or a sequential monadic survey design.

A monadic survey design is when you ask each respondent for feedback on a single stimulus. Once all your responses come back, you'd then compare them across your stimuli to pick the winning concept.

(Image: an example of a monadic survey design)

This survey design allows you to ask more questions about each stimulus. It's also more likely to result in a relatively short questionnaire, which benefits your survey's completion rate and prevents respondents from racing through your survey. However, since you're only showing each respondent one stimulus, you'll need to target a larger audience. This can prove costly, and it might not be feasible.

A sequential monadic survey design is when you present each respondent with 2 or more stimuli and ask the same questions on each. Once you’ve collected your responses, you can directly compare your stimuli using a single survey.

This type of design lets you target a relatively smaller audience than a monadic design, which makes it more cost-effective and feasible. However, if you want to keep your survey to a manageable length, you might not be able to ask as many questions about each stimulus.

Learn more about the pros and cons of each design, and how you can set them up in SurveyMonkey.

Bottom line: If you'd prefer to use a monadic design, you should probably stick to testing only a few stimuli; and if you want to use a sequential monadic design, you should feel comfortable testing more.
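The difference between the two designs can be sketched as an assignment rule. This is an illustrative helper, not a SurveyMonkey feature; the respondent and stimulus names are made up:

```python
import random

def assign_stimuli(respondents, stimuli, design="monadic", per_respondent=2):
    """Map each respondent to the stimuli they'll rate, under each design."""
    plan = {}
    for i, respondent in enumerate(respondents):
        if design == "monadic":
            # One stimulus per respondent, spread evenly (round-robin).
            plan[respondent] = [stimuli[i % len(stimuli)]]
        else:
            # Sequential monadic: several distinct stimuli per respondent,
            # shown in a random order to reduce order bias.
            plan[respondent] = random.sample(stimuli, per_respondent)
    return plan

ads = ["Ad A", "Ad B", "Ad C"]
print(assign_stimuli(["r1", "r2", "r3"], ads, design="monadic"))
print(assign_stimuli(["r1", "r2", "r3"], ads, design="sequential"))
```

Note how the monadic plan needs three respondents just to show each ad once, while the sequential plan covers more ground per respondent, which is exactly the cost trade-off described above.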

What makes one ad better than another? The metrics you measure can help you decide.

Here are the top ad metrics to consider:

  • Appeal: Are your ad’s imagery and message enticing?
  • Believability: Does your ad’s message seem credible?
  • Purchase intent: Will your ad motivate people to buy what it’s selling?
  • Relevance: Does the ad cater to your audience’s needs and wants?
  • Uniqueness: Is your ad different from others?

The value of each metric depends on your situation. Say the top goal for your ad is to influence sales. Purchase intent may then be the most important metric. However, if the focus is on brand differentiation, uniqueness may be more of a priority.

You can measure any metric in your ad testing survey using a Likert scale. It can follow the formula, “How (metric) is the ad?” where your answer choices range from “Extremely (metric)” to “Not at all (metric).”

For example, here’s how the question can look if our metric is believable:

How believable is the ad?

  1. Extremely believable
  2. Very believable
  3. Somewhat believable
  4. Not so believable
  5. Not at all believable
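Because every metric follows the same formula, you can generate the question and its answer choices from a template. Here's a minimal sketch (the function name is ours, not a SurveyMonkey API):

```python
def likert_question(metric):
    """Build a 5-point Likert question for a given ad metric."""
    return {
        "question": f"How {metric} is the ad?",
        "choices": [
            f"Extremely {metric}",
            f"Very {metric}",
            f"Somewhat {metric}",
            f"Not so {metric}",
            f"Not at all {metric}",
        ],
    }

q = likert_question("unique")
print(q["question"])   # How unique is the ad?
print(q["choices"][0]) # Extremely unique
```

Swapping in "believable" reproduces the example question above, and keeping the wording identical across metrics makes your ads directly comparable later.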

Your ad testing survey should also ask screener, category, and demographic questions. Learn about each of them in our most comprehensive resource for concept testing.

These resources can be just what you’re looking for!

Ultimate guide to running market research: This resource has everything you need to run market research, from planning your study to taking action!

Ad testing survey template: This expert-certified survey template can help you brainstorm your questions. You can also use it and edit it however you’d like.

SurveyMonkey Audience: Our global consumer panel allows you to survey people in more than 130 countries.

Market Research Services: From survey consultation to team trainings, our experts will ensure that your research on SurveyMonkey Audience runs smoothly.

You can only determine the quality of your ads once your target audience evaluates them. Find people who fit your target profile and give them an opportunity to provide feedback in one of 2 ways:

  1. A focus group—This option works great in that you can collect in-depth feedback on a variety of different questions. However, focus groups can be costly to organize, and you likely won’t be able to gather feedback from a representative sample of those who’ll ultimately see your ads.
  2. SurveyMonkey Audience—Our market research panel allows you to specify exactly who you want to reach and how many people you want to survey, then collect feedback from them within minutes. This option is more cost-effective than a focus group, as you’re able to reach more people at a lower cost, but you might not be able to gather feedback that’s as in-depth.

Once you’ve collected responses, you’re ready to compare the ads against each other. To help you focus on the data you care about most and to make your analysis more straightforward, we recommend you use Top 2 Box scores. This method of analysis combines the 2 most positive answer choices for each metric and turns them into a single percentage. For instance, if 30% of respondents said an ad was extremely believable and 20% said it was very believable, your Top 2 Box score would be 50%.

You can then take an ad’s Top 2 Box scores across every metric and compare it to the other ads’.

(Image: Top 2 Box scores for two ads across three metrics)

In the example above, we calculated the Top 2 Box scores for 2 ads across 3 metrics. Ad B wins on purchase intent and appeal, while Ad A wins on believability. Assuming you consider each metric equally important, you should pick Ad B over Ad A.
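The Top 2 Box arithmetic is simple enough to sketch in a few lines of Python. The scale labels and the sample responses below are illustrative (they reproduce the 30% + 20% = 50% example from earlier):

```python
from collections import Counter

# 5-point scale for one metric, ordered most to least positive.
SCALE = [
    "Extremely believable",
    "Very believable",
    "Somewhat believable",
    "Not so believable",
    "Not at all believable",
]

def top2box(responses, scale=SCALE):
    """Share of responses falling in the two most positive answer choices."""
    counts = Counter(responses)
    return (counts[scale[0]] + counts[scale[1]]) / len(responses)

# 30 of 100 respondents chose "Extremely", 20 chose "Very" -> 0.50
responses = (["Extremely believable"] * 30
             + ["Very believable"] * 20
             + ["Somewhat believable"] * 50)
print(top2box(responses))  # 0.5
```

Running this per ad and per metric gives you the grid of percentages you'd compare to pick a winner, as in the example above.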

Don’t forget to look at the responses from your open-ended questions (those that don’t include answer choices). Our word cloud can help you quickly spot the key takeaways on each ad.

(Image: word cloud of open-ended responses for each ad)

We’ve only scratched the surface on running a concept test with ads. To get a more comprehensive understanding of each step, read “The ultimate guide to concept testing.”

Alternatively, you can try in-market testing. This method pushes your concepts live and measures their performance through A/B testing (where a certain percentage of people see each ad). In-market testing is valuable because it tells you how people actually respond to the ads. But it can be a riskier and costlier approach, as some of your concepts may perform poorly. It’s a good idea to use both methods: Test ads before they go live, then A/B test the shortlist of winners to see how each actually performs.

When your product is in the process of being developed, you’ll have other testing priorities, like name or logo testing.

Begin ad testing once your product gets introduced. At this point, since everything about your ads will be brand new to your audience, you’ll want to look for broad, directional feedback. Home in on the messages that resonate the most, and move forward from there.

As your product develops, your testing should become increasingly nuanced. For instance, when your product is in the maturity phase, you might be testing different image concepts for the ad. Then, when your product reaches its saturation phase, you might have it so dialed in that you’re just testing the colors of the image you plan to use.

(Image: an outline of each step in the product life cycle)

The idea of running several ad tests throughout the product life cycle is rooted in agile market research—a research methodology that involves frequent rounds of data collection to account for an organization’s needs over time. It offers a manageable process for running research and it empowers your team to make better decisions more often.

Here are 5 things to stay on top of as you begin testing:

  1. Keep the metrics consistent across your ads. In other words, ask the same set of questions on each ad so you can compare them fairly.
  2. Use high-quality visuals. Ads that appear fuzzy—whether it’s a video or an image—can make it difficult for the respondent to give feedback, and they can worsen the perception of your brand.
  3. Survey a statistically significant number of people. If you have a hard time reaching enough people who fit your target audience specifications, you can use our panel.
  4. Track the number of questions you ask. We recommend limiting any ad testing survey to 30 questions. Any more increases the chances that respondents race through your questions with little thought or drop out of your survey.
  5. Benchmark your results. You can better determine the advertising effectiveness of any concept by seeing how it compares to competitors’ ads or to ones you’ve used previously. Simply include rivals’ ads and your older ones in the survey to see how they measure up.

Well done! You’ve learned why ad testing is important, discovered how to run your own test, and even got some tricks under your belt to ensure that your future tests run smoothly. Now you can rest easy knowing that the days of launching high-risk ads are over.
