Over the years, I’ve seen countless CRO experiments deliver underwhelming results, not because the ideas were weak, but because the wrong metrics were measured or critical insights were missed during testing.
It’s a tough lesson, but one that highlights why measuring micro and nano conversions is just as important as tracking the final conversion rate. Below, I’ve highlighted the most common mistakes I’ve encountered and how a small shift in focus can usually drive real, measurable improvements and successful results for CRO teams.
1. Focusing on the wrong metrics.
When I first encountered A/B testing, it all felt like smoke and mirrors to me, as I spent a considerable amount of time obsessively analysing behavioural data and metrics across various analytics tools. I’ve seen teams celebrate an increase in conversion rate from a CTA redesign on a product detail page, only to realise later that the Add-to-Cart rate during the same test period barely moved.
What I came to understand early in my experimentation journey is that CRO platform providers lead with headline KPIs when selling their platforms. It's what decision makers and budget holders understand: increase conversion rates, and revenue will follow. The tools themselves reinforce this, elevating core KPIs like conversion rate and revenue to show the broader impact of an A/B test.
However, if your experiment is aimed at improving a specific behaviour, like getting users to add items to their basket, then that’s the metric that matters. Revenue and conversion rate are important, but in this scenario, they’re secondary outcomes, not primary indicators of success.
2. Ignoring micro & nano conversions.
Micro conversions like wishlist additions, search usage, or email sign-ups are often dismissed because they don’t directly generate revenue. I’ve seen plenty of pushback from those on the front line of trading, even members of executive management, asking what the point is: “It hasn’t impacted revenue”. But I’ve learned that these behaviours are solid indicators of purchase intent, marking the awareness, consideration and nurture stages, a space I later discovered (thanks to a paid media colleague) is called the ‘Messy Middle’. Similarly, nano conversions, things like scroll depth, hover time on product images, or partial form completions, reveal user engagement and friction points, helping you diagnose why users don’t reach the finish line.
Read more about the importance of micro and nano conversions for driving eCommerce growth.
3. Rushing hypotheses without proper insight.
I hate to say it, but I’ve made this mistake myself… jumping into tests based on assumptions rather than data. The most impactful experiments I’ve run always come from pairing quantitative data (heatmaps, funnel analysis) with qualitative insights (session recordings, surveys). A solid hypothesis always answers, “What’s the user struggling with, and how will this change help?”
4. Giving up on tests too soon.
Ending a test early because a result looks promising or disappointing is a common trap. Again, I’ve learned the hard way that statistical significance takes time; this is especially true on low-traffic pages. Rushing to conclusions leads to false positives or negatives, undermining the entire experimentation process. The key here is understanding sample sizes and the traffic volumes required to reach statistical significance during planning and prioritisation, as you’ll quickly learn whether a hypothesis is even viable.
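To make that planning step concrete, here’s a rough sketch of the standard two-proportion sample-size estimate in Python (stdlib only). The baseline rate, uplift, and thresholds below are hypothetical examples, not figures from any specific test:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline, relative_uplift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a given
    relative uplift in a conversion rate (two-sided z-test)."""
    p1 = baseline
    p2 = baseline * (1 + relative_uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)

# A 3% baseline conversion rate and a hoped-for 20% relative uplift
# need roughly 14,000 visitors per variant, which is worth knowing
# before committing a low-traffic page to a multi-week test.
print(sample_size_per_variant(0.03, 0.20))
```

Running a calculation like this during prioritisation quickly shows whether a low-traffic page can realistically reach significance, or whether the hypothesis should be parked.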
5. Overcomplicating tests & losing focus.
Whilst it’s tempting to test everything at once, multiple CTAs, layout changes and so on, this simply creates more complexity and can cloud your results. The key is to test within the constraints of the primary metric being measured. There are factors that challenge this, like low traffic volumes and the inability to reach statistical significance; in those situations, less experimentation and more of a benchmarking approach is needed. In reality, though, the most successful tests I’ve run were simple, single-metric tests that answered one question clearly. Yes, I’ve made significant changes to PDPs and redesigned landing pages, but I've always focused on the primary metric that supports the hypothesis.
6. Neglecting audience segmentation.
It didn’t take me long to realise that a test that performs well on desktop might tank on mobile. I’ve seen experiments fail and get thrown by the wayside simply because results weren’t segmented correctly. Much like measuring the right metric, segmenting by traffic source, device, or even user type sharpens targeting and gives the experiment more opportunity to yield useful results. Segmentation is critical for discovering who your changes genuinely benefit.
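As a minimal illustration with made-up numbers, splitting a single blended result by device can reveal that a “flat” test actually won on one segment and lost on another:

```python
# Hypothetical session counts: {(segment, variant): (conversions, sessions)}
results = {
    ("desktop", "control"): (120, 4000),
    ("desktop", "variant"): (150, 4000),
    ("mobile",  "control"): (110, 6000),
    ("mobile",  "variant"): (80,  6000),
}

def rate(conversions, sessions):
    return conversions / sessions

# The blended result looks completely flat...
for variant in ("control", "variant"):
    conv = sum(c for (seg, v), (c, s) in results.items() if v == variant)
    sess = sum(s for (seg, v), (c, s) in results.items() if v == variant)
    print(f"overall {variant}: {rate(conv, sess):.2%}")

# ...but segmenting shows a desktop win masking a mobile loss.
for (segment, variant), (conv, sess) in sorted(results.items()):
    print(f"{segment} {variant}: {rate(conv, sess):.2%}")
```

With these numbers, both variants convert at 2.30% overall, yet the variant wins on desktop (3.75% vs 3.00%) and loses on mobile (1.33% vs 1.83%), exactly the kind of split that gets a useful test wrongly discarded.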
7. Skipping the review.
I’ve probably spent the majority of my time with CRO teams gathering insight and reviewing the impact of testing, and some of my most valuable insights have come from reviewing why a test failed. Yet I’ve seen teams skip post-test reviews altogether, failing to document outcomes and retaining only the raw CRO platform data. I’m a stickler for documenting results, storing benchmarks and making notes on performance; even the experiments that miss the mark build a knowledge base that sharpens future experiments.
Things to consider.
It took me a few years to learn, but CRO isn’t about chasing quick wins; it's about building a solid qualifying process where every test teaches you something. By aligning metrics with goals, focusing on micro and nano conversions, and committing to a disciplined review process, we drive more meaningful improvements over time.
If you’ve run into these pitfalls or are just starting your experimentation journey, I’d be happy to talk, so get in touch. I’ve been there, and I’d love to help you turn my hard-earned lessons into successful conversion opportunities for your eCommerce business.
