As any marketer knows, having a clear indicator of success and maximizing it is critical to building an efficient marketing strategy. Iteration is a crucial part of understanding what raises your performance (or tanks it). And since our digital ecosystem changes so quickly, not everything that works today will work tomorrow, so having a clear testing process is crucial when working with media, especially on digital platforms, if you want to stay ahead of the curve.

But you may be asking yourself: how is a test built? What variables are available to iterate on? How long do I keep the test online? There is no single right answer to these questions, and there are many ways to run a creative test, but we'll go through three questions you need to answer in order to build a more effective creative test for your campaign.

  • What is the purpose of your test?
  • What type of test should I use in my case?
  • How to set up my testing structure?

What is the purpose of your test?

Of course, every test is run in order to maximize results on one specific KPI, but what assumptions are you trying to validate? Every hypothesis calls for a different testing approach, and we'll run through some of the most common ones.

Understand if my hypothesis is correct

What if I change my call to action from “see more” to “download now”, will I raise my app installs? What if I change my background from yellow to green, will I raise my CTR?

These are generally small tweaks to an asset, and you want to validate whether they drive performance. It's important to always run this type of test against the original asset and under the same conditions in order to avoid false positives/negatives. It is possible to run more than one asset variation at a time (an A/B/C test, for example), but always run variations of the same tweak.

Explore new territories

What if I use a celebrity in my next asset, will that bring me better results? What if I made assets focused on each NFL team rather than having one generic league asset, would that be better for my brand?

These hypotheses are much more complex and will need more probing to arrive at an answer. You will probably need not one test, but a series of smaller tests over a longer period of time. Other tests, such as audience tests, may be needed as well in order to drive the best results. The sky's the limit here, so it's good to keep things as simple as possible so you don't lose yourself in the middle of the data you'll extract.

New platforms and formats

Will this Instagram Feed asset work better if I format it for Stories? What about a version of it for YouTube, will that work? These hypotheses are very valuable when you think about expanding a campaign to new channels, but they are very complicated to test formally, since every channel and placement behaves differently from the others, with different baseline KPIs, different audiences, and so on. It's also important to note that you usually won't be able to segment the audiences for the test, so one asset can impact the other's performance if you let them run at the same time (segmenting is only possible if you're using your own audiences, and even then under certain conditions).

What type of test should I use in my case?

There are multiple testing structures that may be applicable to the same situation, so there is no single right answer. There are, though, wrong answers, so the idea here is to go through some simple testing types that can be applied in most cases:

A/B testing

The A/B test is the most common creative testing structure. You run two assets against each other, at the same time and under the same conditions. The two assets need to be nearly identical, differing in only one detail: the one you want to test. Smaller variations, such as changing the color of a button or using a different call to action, are great examples.

You can also run more than two assets at a time, as long as they all vary the same element: multiple colors of the same button, for example.
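Once an A/B test has run, you'll want to check whether the difference you see is real or just noise. Here is a minimal sketch of that check in Python, using a two-proportion z-test on CTR; all figures are hypothetical, and it assumes you've exported click and impression counts per variant from your platform.

```python
# A minimal sketch of checking A/B significance on CTR, assuming you've
# exported clicks and impressions per variant. All numbers are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

clicks = [420, 510]            # variant A, variant B
impressions = [30000, 31000]   # variant A, variant B

z_stat, p_value = proportions_ztest(count=clicks, nobs=impressions)

ctr_a = clicks[0] / impressions[0]
ctr_b = clicks[1] / impressions[1]
print(f"CTR A: {ctr_a:.2%} | CTR B: {ctr_b:.2%} | p-value: {p_value:.4f}")

# Common rule of thumb: p < 0.05 suggests the difference is real,
# not just noise in how the platform delivered the ads.
if p_value < 0.05:
    print("The CTR difference looks statistically significant.")
else:
    print("No strong evidence the variants differ; keep the test running.")
```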

On/Off testing

On/Off testing is a simple way to add a bit of statistical rigor to a very simple routine. It works by comparing the results of two assets running one after the other under the same conditions. You run one, wait for its performance to stabilize (from three days to a week, depending on media conditions), pause the ad, run the other one under the same conditions (investment, audience, placements, period), and compare results.

With this kind of test, you can compare assets that are very different from one another, while still being made with the same objective, placements, and audience in mind.
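As a quick illustration, here is a minimal sketch of how the final On/Off comparison might look, assuming both flights ran with the same budget, audience, and placements; the figures and field names are hypothetical.

```python
# A minimal sketch of an On/Off comparison, assuming equal spend and
# identical conditions across both flights. Figures are hypothetical.
flights = {
    "asset_a": {"spend": 5000.0, "conversions": 240},  # first flight
    "asset_b": {"spend": 5000.0, "conversions": 205},  # second flight
}

for name, f in flights.items():
    f["cpa"] = f["spend"] / f["conversions"]  # cost per result
    print(f"{name}: CPA = ${f['cpa']:.2f}")

# How much cheaper is the winner's cost per result?
best = min(flights, key=lambda n: flights[n]["cpa"])
worst = max(flights, key=lambda n: flights[n]["cpa"])
saving = 1 - flights[best]["cpa"] / flights[worst]["cpa"]
print(f"{best} delivers results {saving:.1%} cheaper than {worst}.")
```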

Natural selection of ads

This is not a formal way of testing, but it is a quick and easy way to infer the results you may need in order to make a decision. It's as simple as letting all assets on the same platform run at the same time within the same campaign and/or ad set and comparing the results.

The algorithms of the digital media platforms tend to leave all ads on for a while and, after what they call a “training phase”, choose the one or two ads showing the most promising results for your selected objective and prioritize them. You will end up with a couple of ads receiving far more impressions than the rest, and you can infer that these ads “won” over the others, meaning they should impact your performance positively.
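To make that inference concrete, here is a minimal sketch of spotting the "winners" in an ad-level export; the column names and numbers are hypothetical, so adapt them to your report.

```python
# A minimal sketch of reading the platform's "winners" out of an ad-level
# export. Column names and figures are hypothetical.
import pandas as pd

ads = pd.DataFrame({
    "ad_name":     ["celebrity_v1", "team_focus_v1", "generic_v1", "ugc_v1"],
    "impressions": [48000, 41000, 3500, 2900],
    "results":     [620, 575, 40, 33],
})

ads["impression_share"] = ads["impressions"] / ads["impressions"].sum()

# Ads the algorithm kept serving heavily are the de facto winners.
winners = ads[ads["impression_share"] > 0.25]
print(winners[["ad_name", "impressions", "impression_share"]])
```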

This is a very quick way to check whether your creative hypothesis or insight is valid, although we can't technically call it a creative test. But sometimes having a quick answer is more important than having a small margin of error. For example, when:

  • You need validation as soon as possible, with no time to pre-test creatives before airing them
  • You produce a large number of assets at a high frequency and can't stop to test every single one of them

Sometimes agility is more important than accuracy, and that is ok and needs to be accounted for.

How to set up my testing structure?

So you know why you are testing, and you have a testing structure in mind. Now, how do you set up the test? We can summarize it in three steps:

  • Choose the target KPI of the test, which will be verified throughout the experiment
  • Choose what variable to test and how many variations to build
  • Fix all other variables

The first and most important step is to fix the KPI that you will measure throughout your testing phase. Ideally, you should pick only one KPI and measure it across your tests, in order to set a baseline and understand which levers are bringing your performance up.

Next, you'll need to pick what you will vary in your test. Ideally, the less you vary per test, the better, so you can really understand what is impacting the result. There are multiple variables to choose from besides your creative, such as:

  • Your optimization method
  • Your audience
  • Your platform and placement
  • Your creative asset
  • Your copy

So you should pick which variable you will test and which options of that variable you will test, and then hold all other factors constant.

For example:

  • I am the marketing manager of an eCommerce company and want to optimize the call-to-action button on my creative in order to get more installs of my app. So I would:
    • Fix my KPI: cost per app install
    • Decide to vary my creative asset, making four variations of the same asset with different calls to action
    • Fix my optimization method to conversions, my audience to former visitors of my website, and my platform and placement to Facebook/Instagram Feed, and write one copy that works for all creative assets
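Here is a minimal sketch of that example written out as an explicit test plan, so every fixed and varied element is visible at a glance; all names and values are hypothetical.

```python
# A minimal sketch of the example above as a test plan config, making every
# fixed and varied element explicit. All names and values are hypothetical.
test_plan = {
    "kpi": "cost_per_app_install",        # fixed: the one metric we measure
    "variable_under_test": "call_to_action",
    "variations": ["Download now", "Install today", "Get the app", "Try it free"],
    "fixed_conditions": {
        "optimization": "conversions",
        "audience": "former_website_visitors",
        "placements": ["facebook_feed", "instagram_feed"],
        "copy": "one shared copy for every variation",
    },
}

# Quick sanity check before launch: a test needs at least two variations,
# and everything outside the tested variable stays constant.
assert len(test_plan["variations"]) >= 2, "Need at least two variations to test."
print(f"Testing '{test_plan['variable_under_test']}' with "
      f"{len(test_plan['variations'])} variations, "
      f"measured on '{test_plan['kpi']}'.")
```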