Don't fall into the A/B test trap
It's a great time to be a marketer. Since so much of our work is online, we can see how many people click the links on our landing pages, who signs up for our product after reading our content, and even what people do after they leave our websites.
All this data makes it tempting to test everything, refining your communications until they're as strong as they can be. But if you're not careful, that mindset ends up wasting your time and teaching you nothing. To get value out of your testing, you need to approach it in a strategic, disciplined way.
The wrong way to do it
The main issue with the “test everything” mindset is that testing gets bolted onto the end of a project as an afterthought. This results in testing something like whether people prefer red or green “buy now” buttons on a landing page. You gather your results (which one page alone can't make conclusive), then you move on.
Next project rolls around, and you test something else that you thought of at the last moment. Rinse and repeat.
The letter, not the spirit
The problem with the above example is that it follows the letter of “test everything” without following its spirit. You could technically test everything by picking a random element every time you build something, but you won't learn anything meaningful that way.
The smarter way takes a bit more work, but gives you better results.
Be a scientist
Testing your comms is basically running an experiment, so take your cues from the pros at running experiments: scientists. They even have a method named after them that you can follow: the scientific method.
To refresh your memory, it goes like this:
Question
Background research
Hypothesis
Experiment
Draw conclusions
Repeat
Now you can assign some meaning to your testing by applying a bit of rigour to it. By starting with a question, or a business problem, you’re already narrowing down what you’re testing. It could work like this:
Problem: Our customers aren’t opening our emails.
Background research: Emotive subject lines tend to be less effective than straight-to-the-point subject lines.
Hypothesis: If we make our subject lines less “fun” and more “to the point,” we will get higher open rates.
Experiment: We will A/B test our subject lines on the next 10 eDMs we send to our entire audience (e.g., newsletters). That gives us a large enough sample to see whether our hypothesis holds, provided every subscriber stays in the same test group across sends (see the sketch after this list).
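For a multi-send test like this to mean anything, each subscriber needs to stay in the same group across all 10 sends; otherwise you're comparing shifting audiences rather than subject-line styles. Here's a minimal sketch of one way to do that in Python, assuming your audience is a plain list of email addresses. The assign_variant helper and the test name are illustrative, not part of any particular email platform.

```python
import hashlib

def assign_variant(email: str, test_name: str) -> str:
    """Deterministically assign a subscriber to variant A or B.

    Hashing the email together with a per-test name keeps each
    subscriber in the same group for every send in a given test,
    while different tests get independent splits.
    """
    digest = hashlib.sha256(f"{test_name}:{email}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Example: split a (hypothetical) list for the subject-line test.
subscribers = ["ana@example.com", "ben@example.com", "cho@example.com"]
groups = {email: assign_variant(email, "subject-line-test") for email in subscribers}
print(groups)
```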
Now you have a clear variable to test and a clear hypothesis. You can turn this into a test plan for your next set of eDMs, then build on what you learn in the next round of tests.
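When the results come in, the “draw conclusions” step means checking whether the difference you see is bigger than what chance alone would produce. A common way to do that for open rates is a two-proportion z-test; here's a minimal sketch using only Python's standard library, with made-up send and open counts standing in for your real numbers.

```python
import math

# Hypothetical counts from one send (replace with your real numbers):
# variant A = "fun" subject line, variant B = "to the point".
opens_a, sends_a = 412, 5000
opens_b, sends_b = 488, 5000

rate_a = opens_a / sends_a  # observed open rate, variant A
rate_b = opens_b / sends_b  # observed open rate, variant B

# Pool the rates under the null hypothesis that the variants perform
# the same, then compute the standard error of the difference.
pooled = (opens_a + opens_b) / (sends_a + sends_b)
se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
z = (rate_b - rate_a) / se

# Two-sided p-value from the standard normal distribution.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  z = {z:.2f}  p = {p_value:.3f}")
```

A p-value under the conventional 0.05 cut-off suggests the difference is unlikely to be luck; anything above it means you don't yet have enough evidence to act, and should keep testing.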
This is a much more effective approach than just testing random variables every time you do something. Random variables give you random results, which are isolated and possibly spurious. You can’t connect results like this to a trend, hypothesis or business problem. Approaching your testing in a structured way gives you real data that you can use to drive real decisions. It takes only a little bit more time, and it’s significantly more valuable. It’s a no-brainer.