A/B testing & optimization

There are many possible pitfalls when conducting A/B tests. In this blog, I outline one of the most common ones, a mistake I have seen happen more than once.

Running A/B tests with too narrow a scope

I have closed A/B tests way too early.

Often it was exactly when I had found something to be statistically significant, for better or for worse.

The significant results convinced me enough to draw conclusions, which I then communicated to the client, creating excitement and expectations.

However, several times it turned out that the new design had generated high initial interest, a classic novelty effect that boosted engagement early on, only for it to drop off later in the experiment.

This was especially true in cases where the company had a high share of loyal customers, a.k.a. Returning Visitors, who would either flatly reject the newly added functionality or outright love it and interact heavily with it.
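
To make this concrete, here is a minimal simulation sketch of a novelty effect, assuming Python with numpy and scipy and entirely made-up conversion numbers: the new design gets a real boost in week one that fades away, so a week-one readout can look convincingly significant while the full four-week run does not.

```python
import numpy as np
from scipy import stats

# Hypothetical numbers purely for illustration: 2,000 visitors per arm per week,
# a 10% baseline conversion rate, and a novelty boost that decays to nothing.
rng = np.random.default_rng(7)
visitors_per_week = 2000
control_rate = 0.10
variant_rates = [0.125, 0.105, 0.10, 0.10]  # weeks 1..4

control = rng.binomial(1, control_rate, size=(4, visitors_per_week))
variant = np.stack([rng.binomial(1, r, size=visitors_per_week) for r in variant_rates])

def p_value(c, v):
    # Chi-squared test on the 2x2 table of conversions vs non-conversions.
    table = [[c.sum(), c.size - c.sum()], [v.sum(), v.size - v.sum()]]
    return stats.chi2_contingency(table)[1]

print("p-value after week 1 :", round(p_value(control[0], variant[0]), 4))
print("p-value after 4 weeks:", round(p_value(control.ravel(), variant.ravel()), 4))
```

With numbers like these, the week-one p-value will typically clear the 0.05 bar while the full-run p-value does not, which is exactly the trap described above.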

In cases like these, it can be tempting to stop the current experiment and move on to the next one, especially when the client wants to see quick results.

However, I urge you to resist the pull of that early result and stick to the original plan, which should include an agreed-upon running time for the experiment.

Lesson: Now I always make sure to agree upfront with the client that we will not call off any test within its first week, even if it already shows statistically significant results, and preferably we let it run longer than that.
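
Agreeing on that running time is easier when it is backed by a quick power calculation. A rough sketch, assuming Python with statsmodels and hypothetical baseline and traffic figures:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical inputs: a 10% baseline conversion rate, the smallest lift we care
# about is 1 percentage point, and ~5,000 visitors per week split across two variants.
baseline = 0.10
target = 0.11
weekly_visitors = 5000

effect_size = proportion_effectsize(target, baseline)
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.8, alternative="two-sided"
)

weeks_needed = (2 * n_per_group) / weekly_visitors
print(f"visitors needed per variant: {n_per_group:.0f}")
print(f"estimated running time: {weeks_needed:.1f} weeks")
```

A number like this, written into the test plan, makes it much easier to push back when a week-one result looks tempting.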

I make sure to follow the rate of Returning Visitors (with a cohort chart), and I run an additional iteration of the test to show that it keeps delivering value over time. A good argument for taking a bit longer is that the change will likely deliver value throughout the next year before its effect starts to decay.
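
The cohort chart itself does not need anything fancy. Here is a minimal pandas sketch, assuming a visit log with a visitor_id and a visit date (both hypothetical column names), showing what share of each weekly cohort comes back in later weeks:

```python
import pandas as pd

# Hypothetical visit log: one row per visit.
visits = pd.DataFrame({
    "visitor_id": [1, 1, 1, 2, 2, 3, 4, 4, 5],
    "date": pd.to_datetime([
        "2024-01-01", "2024-01-08", "2024-01-15",
        "2024-01-03", "2024-01-17",
        "2024-01-09",
        "2024-01-10", "2024-01-24",
        "2024-01-16",
    ]),
})

# Cohort = the week of a visitor's first visit; age = weeks since that first visit.
visits["week"] = visits["date"].dt.to_period("W").dt.start_time
visits["cohort"] = visits.groupby("visitor_id")["week"].transform("min")
visits["weeks_since_first"] = (visits["week"] - visits["cohort"]).dt.days // 7

# Share of each cohort that is back on site N weeks after their first visit.
cohort_size = visits.groupby("cohort")["visitor_id"].nunique()
returning = (
    visits.groupby(["cohort", "weeks_since_first"])["visitor_id"]
    .nunique()
    .unstack(fill_value=0)
)
print(returning.div(cohort_size, axis=0).round(2))
```

Plotted as a heatmap, a table like this makes it easy to see whether Returning Visitors keep engaging with the change or drop off after the initial excitement.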

Happy testing, and make sure not to call off your test too early 🤘