This article claims that "MOST WINNING A/B TEST RESULTS ARE ILLUSORY". I agree.
Please ignore the well-meaning advice about A/B testing and sample size that is often given on the internet.
For instance, a recent article recommended stopping a test after only 500 conversions. I've even seen tests
run on only 150 people, or stopped after only 100 conversions. Samples that small will not work. The truth is
that something nearer 6,000 conversion events (not necessarily purchase events) is needed.
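To see where numbers of this magnitude come from, here is a standard power calculation for a two-proportion z-test, using only the Python standard library. The baseline rate (5%), relative lift (10%), significance level, and power below are illustrative assumptions, not figures from this article:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_group(base_rate, relative_lift, alpha=0.05, power=0.80):
    """Visitors needed per group to detect a relative lift over base_rate
    with a two-sided two-proportion z-test at the given alpha and power."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# 5% baseline conversion rate, hoping to detect a 10% relative lift:
n = sample_size_per_group(0.05, 0.10)
```

With these assumptions the answer is roughly 31,000 visitors per group, i.e. around 1,600 conversions per group. Push the power up to 90%, or try to detect a smaller lift, and the total conversion count across both groups quickly climbs into the thousands, which is why a test stopped at a few hundred conversions is hopelessly underpowered.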
Run tests that small and almost two-thirds of your "winning" results will be completely bogus. Don't be
surprised if revenues stay flat, or even go down, after implementing a few changes like these.
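One reason so many winners are illusory is the habit of checking the results repeatedly and stopping as soon as significance appears. The sketch below simulates A/A tests, where there is no real difference between the arms, and peeks at the p-value every 500 visitors; the simulation parameters are illustrative assumptions, not data from this article:

```python
import random
from math import sqrt
from statistics import NormalDist

def z_test_p(conv_a, conv_b, n):
    """Two-sided p-value for a two-proportion z-test with equal group sizes n."""
    pooled = (conv_a + conv_b) / (2 * n)
    se = sqrt(pooled * (1 - pooled) * (2 / n))
    if se == 0:
        return 1.0
    z = abs(conv_a - conv_b) / n / se
    return 2 * (1 - NormalDist().cdf(z))

def simulate_peeking(n_tests=200, visitors=10_000, peek_every=500,
                     true_rate=0.05, alpha=0.05, seed=0):
    """Fraction of A/A tests (no real difference between arms) declared
    'winners' when the p-value is checked every peek_every visitors."""
    rng = random.Random(seed)
    false_wins = 0
    for _ in range(n_tests):
        conv_a = conv_b = 0
        for n in range(1, visitors + 1):
            conv_a += rng.random() < true_rate  # both arms convert at the
            conv_b += rng.random() < true_rate  # same true rate
            if n % peek_every == 0 and z_test_p(conv_a, conv_b, n) < alpha:
                false_wins += 1  # stopped early on a spurious "winner"
                break
    return false_wins / n_tests
```

Even though every simulated test compares two identical arms, `simulate_peeking()` typically declares a winner in well over 5% of tests, several times the nominal false positive rate. Early stopping on small samples manufactures illusory winners out of pure noise.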