Let’s start by stating the obvious: basing marketing decisions purely on guesses and assumptions is never a good idea. Yet some marketers still rely on gut feeling and intuition when evaluating alternatives for their website design, call-to-action messages, email content and other online factors. Instead of predicting the outcome of these decisions by mere guesswork, run experiments like A/B tests and multivariate tests to optimize conversion rates and serve everyone a personalized experience across your touchpoints. In one of our previous blog posts we covered the what, why and how of A/B testing; now we’ll list 6 tips to get the most out of your online experiments.

1. Define a clear hypothesis and success metrics

Don’t launch an experiment out of the blue or base your test on so-called ‘best practices’ you find online; start with your own data instead. What works for other organizations doesn’t necessarily work for yours, so don’t assume that the outcome of someone else’s experiment automatically applies to your case. Rather, take a look at your analytics data to find testing opportunities and develop a substantiated hypothesis before you run an experiment.

Furthermore, predefining a clear hypothesis and accompanying key performance indicators (KPIs) will eventually make it a lot easier to determine the success of your tests.

2. Watch out for the flicker effect

The flicker effect occurs when the original page content is briefly displayed (sometimes for just milliseconds) on an A/B tested page before the test variant appears. It is caused by the time it takes the browser to load the testing script and process the modifications. You obviously don’t want your audience to know they’re in a test, and you definitely want to avoid the flicker effect influencing your test results.

Luckily there are a few ways to prevent this. Some popular tools, such as Google Optimize and Adobe Target, provide their users with anti-flicker scripts. It’s also recommended to avoid the WYSIWYG editors in these testing tools and edit the code directly to create variants. Other options include optimizing your site’s load time and not using a tag manager to fire your experiment tags.

3. Test your tests

This is an important one. Before setting an experiment live, ALWAYS verify the layout and functionality of your test variants in different browsers, devices and viewports. A design may look great on your computer screen but completely different on mobile devices, which could lead to a distorted test with skewed results.

4. Do not stop early

Before you run a test, always calculate how much time it will need to reach statistical significance. With a handy test calculator you can determine test duration and sample size while accounting for your desired confidence level, statistical power and estimated audience numbers. Always respect this predefined test duration and sample size. After all, ending a test prematurely may result in misleading or inconclusive insights.

Also make sure your sample is representative by running your tests in full weeks (or business cycles) at a time. This way you won’t miss time-dependent patterns, such as the difference between weekday and weekend behavior.

5. Don’t overanalyze

Digging into your test data and analyzing it on a deeper level can yield really useful insights. For example, you could apply different segments to your test results to examine how different audiences react to each test variant. This way you may uncover insights that don’t hold for the total test audience but do apply to certain segments. But be aware: don’t take it too far, and always keep an eye on the significance level and power of your tests; the more segments you inspect, the higher the chance of a false positive. Stick to your predefined KPIs and always keep in mind: “if you torture data long enough, it will confess to anything.”
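The segment analysis above can be sketched with a pooled two-proportion z-test. All numbers below are made-up illustrations, and the Bonferroni correction shown is just one simple way to account for checking multiple segments:

```python
from statistics import NormalDist
import math

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates,
    using a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical results per segment: (conversions A, visitors A,
#                                    conversions B, visitors B)
segments = {
    "desktop": (100, 2400, 160, 2400),
    "mobile":  (80, 2100, 88, 2150),
}

# Bonferroni correction: divide alpha by the number of segments compared,
# so that slicing the data doesn't inflate the false-positive rate.
alpha = 0.05 / len(segments)
for name, (ca, na, cb, nb) in segments.items():
    p = two_proportion_p_value(ca, na, cb, nb)
    print(name, round(p, 4), "significant" if p < alpha else "not significant")
```

In this made-up example the desktop segment shows a clear difference while mobile does not, which is precisely the kind of insight that would be invisible in the aggregate numbers.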

6. Standardize and document

Make your job a lot easier by defining a uniform way for you and your team to approach each step of the testing process and to report on your findings. A streamlined structure will speed up test deployment and ease the evaluation process.

To avoid repeating experiments you’ve already performed in the past, it’s strongly recommended to document your tests systematically. Building a library of your findings not only lets you archive your knowledge, it may also help educate your employees.

Interested in taking a deep dive into the wonderful world of A/B testing? Join our ‘A/B Testing & Conversion Training’ and become an optimization expert in just one day! More info here. Also, don’t hesitate to take a look at our services or contact us with any questions.