Conversion Testing Basics

Before you jump into the deep end of the testing pool, there are some important things to think about that will help you understand your results and what to do next.  

Develop a Hypothesis and Next Steps

The first thing to do before launching any test is to come up with a hypothesis. Put simply: what do you believe is true, and how will you go about proving (or disproving) it? For example, something as basic as a button color test has a hypothesis: the blue button will drive more conversions than the green one. This is important because it forces you to have a reason to run a test. Instead of saying, “it would be interesting to know…”, you commit to reaching a firm point of view once you have the data in hand.

With that hypothesis in mind, you can decide ahead of time exactly what you’ll do depending on the results you get. For example, if the blue button wins, you will change all of the buttons in your pop-ups to blue.

I know, that seems obvious, but trust me, it’s really helpful.

 
[Figure: DecisionTree.png, a decision tree of test outcomes and next steps]
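
To make that concrete, here is a minimal sketch in Python of writing the decision plan down before the test runs. The button-color hypothesis, the 10% minimum-lift threshold, and the next_step helper are all hypothetical, just to show the shape of the idea.

```python
# A minimal sketch of pre-registering decisions before a test runs.
# Everything here (thresholds, actions, rates) is a made-up example.

decision_plan = {
    "hypothesis": "The blue button will drive more conversions than the green one.",
    "blue_wins": "Change all pop-up buttons to blue.",
    "green_wins": "Keep green buttons and test button copy next.",
    "no_clear_winner": "Re-run with more traffic or a bolder variation.",
}

def next_step(blue_rate: float, green_rate: float, min_lift: float = 0.10) -> str:
    """Return the pre-registered action based on the relative lift."""
    if blue_rate >= green_rate * (1 + min_lift):
        return decision_plan["blue_wins"]
    if green_rate >= blue_rate * (1 + min_lift):
        return decision_plan["green_wins"]
    return decision_plan["no_clear_winner"]

print(next_step(blue_rate=0.042, green_rate=0.035))
# -> Change all pop-up buttons to blue.
```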
 

When you’re thinking about any test, it’s also important to limit the number of variables to one. Simply put, if you change a bunch of elements in your campaign all at once, you’ll never know exactly what drove the change in results. For example, if you’re running that same button color test but you also change the copy of your pop-up, how will you know whether it was the button or the copy that drove the difference in results?

Does that mean you can’t test full sets of creative (image + color + copy) against each other? No, it just means you need to be conscious of what you are learning and how you apply it to other related items. You can know that one full pop-up performed better than another, but you’ll want to avoid taking a single element of that pop-up, like the button color, and making the leap to site-wide changes.

Directional v Statistically Significant

Data purists will tell you that the only reliable tests are ones that are statistically significant. That basically means that enough people have been part of the test to make the results projectable more broadly. You can read way more info about that here. The important thing is that, whenever possible, you want a sample large enough for the results to be trustworthy.

Unfortunately, most of us simply do not have the volume of web traffic to make running those types of tests practical. So, we run tests that are directional in nature but can still be incredibly valuable, even if not 100% reliable.

In this case, think about it this way: would you be better off asking 1,000 friends a question to see what the majority thinks, or would you rather just trust your gut? While the results of that polling might not be bulletproof, they certainly should help shape your opinion about what to do next.
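
If you want to put a rough number on “trustworthy,” here is a minimal sketch of one common approach, a two-proportion z-test, using only Python’s standard library. The traffic and conversion counts are invented; the takeaway is that whether a result counts as significant depends on both the size of the lift and the size of the sample.

```python
# A minimal sketch of checking whether an A/B result is statistically
# significant or merely directional, via a two-proportion z-test.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z score, two-sided p-value) for two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Made-up numbers: 120 of 2,400 visitors converted on A, 90 of 2,400 on B.
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=90, n_b=2400)
verdict = "statistically significant" if p < 0.05 else "directional only"
print(f"z={z:.2f}, p={p:.3f} -> {verdict}")
```

Run the same lift with a tenth of the traffic (12 of 240 versus 9 of 240) and the verdict flips to “directional only,” which is exactly the situation most low-traffic sites are in.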

A/B tests v Sequential tests

If you’re new to the testing game, you might be wondering what exactly an A/B test is. It’s actually very straightforward: you create two versions of something like a pop-up or landing page (hopefully with just a single variable changed) and split your web traffic at random, sending a certain percentage of people to one version and the rest to the other. Then you evaluate which version drives more conversions and pick a winner.

The great thing about an A/B test is that it automatically accounts for all other factors, because the only difference between one set of visitors and another is what they are seeing on your site. The time of year is the same, your offer is the same, the weather is the same. You get the idea. You’re limiting the outside influences that impact the results of your test.
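
To show how simple the mechanics can be, here is a minimal sketch of a 50/50 split in Python. Hashing the visitor ID (rather than flipping a coin on every page view) is one common way to keep a returning visitor in the same bucket; the visitor IDs and test name are hypothetical.

```python
# A minimal sketch of deterministic 50/50 traffic assignment.
import hashlib

def assign_variant(visitor_id: str, test_name: str = "button-color") -> str:
    """Bucket a visitor so they always see the same variant of this test."""
    digest = hashlib.sha256(f"{test_name}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Sanity check: hypothetical visitor IDs land in roughly even buckets.
counts = {"A": 0, "B": 0}
for i in range(10_000):
    counts[assign_variant(f"visitor-{i}")] += 1
print(counts)  # roughly a 50/50 split
```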


Sequential testing, on the other hand, means that you do one thing for a period of time, then make some changes and leave them in place for the same period of time, and then compare the results. This is easy to execute but harder to analyze correctly, because any number of other things outside your control could have impacted the results.
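
The arithmetic of a sequential comparison is just as simple, which is part of its appeal. Here is a minimal sketch with invented numbers; note that nothing in it controls for the outside factors mentioned above, which is exactly why the analysis is the hard part.

```python
# A minimal sketch of a sequential (before/after) comparison.
# The periods and counts are made up for illustration.
before = {"visitors": 5200, "conversions": 182}  # two weeks, old pop-up
after = {"visitors": 4900, "conversions": 201}   # two weeks, new pop-up

rate_before = before["conversions"] / before["visitors"]
rate_after = after["conversions"] / after["visitors"]
lift = (rate_after - rate_before) / rate_before
print(f"before={rate_before:.2%}  after={rate_after:.2%}  lift={lift:+.1%}")
```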

So which is better?

In a perfect world, we would all be running statistically significant A/B tests and learning and improving rapidly. The next best scenario is to run directionally valid A/B tests. The last choice (still way better than nothing) is to run sequential tests, or as I like to call them, “let’s do something and see what happens.” You can still learn a lot; just make sure you’re combining your instincts with the results.