Mathew Vermilyer

How can I forecast the value of winning tests?

December 12, 2022

This interview is part of Kameleoon's Expert FAQs series, where we interview leaders in data-driven CX optimization and experimentation. Mathew Vermilyer is an experienced data and analytics leader & United States Marine Corps Veteran. He is currently the Senior Director of Analytics & Optimization at At Home, a home décor chain operating 255 stores in 40 states.

Should I forecast revenue uplifts based on the conversion increase of winning tests?

Yes, it's a great way to measure the overall program performance. There are a few things to keep in mind, however. 

Don’t use a 365-day forecast. You can't assume your “Blue vs. Light Blue button color” test will keep delivering the same incremental results it did during your testing window. Too many things change over a year, so I recommend using a 120-day forecast for the winning test.

Not all tests can be forecast the same way. If a test shows a statistically significant positive impact on conversion rate but a negative impact on revenue, you don't want your forecast to project a negative impact. It's essential to look at multiple metrics for each test.

Consider the cost of running all tests. Even the losing tests should be accounted for. You won't be forecasting losing tests, but they still had a cost. Each test you run costs money before it even starts (meetings, design, development, pre-test analytics). It's essential to know your internal operational costs.
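
Put together, those guidelines reduce to simple arithmetic. Here is a minimal sketch in Python; the test names, dollar figures, and flat 120-day projection are illustrative assumptions, not figures from the interview.

```python
FORECAST_DAYS = 120  # per the recommendation above, not a full 365-day projection

def forecast_test_value(daily_incremental_revenue: float, internal_cost: float,
                        forecast_days: int = FORECAST_DAYS) -> float:
    """Projected net value of a winning test over the forecast window.

    daily_incremental_revenue: average extra revenue per day observed during the test
    internal_cost: meetings, design, development, QA, and pre-test analytics
    """
    return daily_incremental_revenue * forecast_days - internal_cost

# Losing tests contribute no uplift, but their internal cost still counts.
tests = [
    {"name": "blue vs. light blue CTA", "won": True,  "daily_uplift": 250.0, "cost": 4_000.0},
    {"name": "homepage hero copy",      "won": False, "daily_uplift": 0.0,   "cost": 6_500.0},
]

program_value = sum(
    forecast_test_value(t["daily_uplift"], t["cost"]) if t["won"] else -t["cost"]
    for t in tests
)
print(f"Projected 120-day program value: ${program_value:,.0f}")  # $19,500 with these numbers
```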

Another way to measure the program's success is to calculate your win/loss ratio. An industry benchmark is that around 25% of all tests are successful.

Set a 33% win ratio as a goal for your testing program. If you are below that, there are flaws in how your testing program is set up. 33% may seem like a low winning record, but remember that means you are learning and potentially saving money from the 67% of tests that lost.
Mathew Vermilyer
Senior Director of Analytics & Optimization

How can I prevent personal biases from entering my testing program?

The primary way to prevent personal biases or too many opinions from overriding test ideas is to use data to support all tests. Every test needs a sample size estimation, a power analysis, and some kind of low, medium, or high estimate of the return on investment.

Those three metrics make prioritizing tests relatively simple, and they take a lot of the opinion out of the equation. 
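
For the sample size and power piece specifically, here is a minimal sketch assuming a standard two-sided, two-proportion z-test at 80% power; the baseline conversion rate and minimum detectable effect are illustrative inputs, not numbers from the interview or any particular testing tool.

```python
from statistics import NormalDist

def sample_size_per_variant(baseline_cr: float, mde_rel: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed per variant for a two-proportion z-test.

    baseline_cr: control conversion rate, e.g. 0.03 for 3%
    mde_rel:     minimum detectable relative lift, e.g. 0.10 for +10%
    """
    p1 = baseline_cr
    p2 = baseline_cr * (1 + mde_rel)
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# Illustrative inputs: 3% baseline conversion, aiming to detect a +10% relative lift.
print(sample_size_per_variant(baseline_cr=0.03, mde_rel=0.10))  # roughly 53,000 per variant
```

Comparing that required sample size against your actual traffic, alongside the low, medium, or high ROI estimate, is what makes the prioritization largely self-evident.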

Next time you have a test idea you’re not sure about, ask yourself, “Would I bet my paycheck that this test is a winner?” If the answer is no, don't run it. This is an excellent way to put some skin into the game and improve your win/loss ratio. It's also fun to ask executives when they suggest tests that have zero supporting data.
Mathew Vermilyer
Senior Director of Analytics & Optimization

How can I incorporate segmentation and targeting into my testing opportunities? 

I like to use the 80/20 rule for segmentation/personalization testing. 80% of your tests should focus on micro-conversion tactics and significant site changes that are on the roadmap. Then 20% of your tests should concentrate on segmentation/personalization.

Follow a crawl-walk-run philosophy, and start simple. New vs. returning users is always a good first segment; from there you can move on to personas and eventually 1-to-1 personalization.

What does the ideal A/B testing team look like? 

Here’s my ideal CRO testing team:

  • Conversion Rate Optimization Lead - Owns the testing roadmap and methodology. Presents test ideas and results. 
  • UI/UX Designer - Responsible for turning hypotheses into an optimal experience for the user when they interact with the website.
  • Frontend developer - Develops the test in the A/B testing tool or codebase.
  • Web Analyst/Business Intelligence - Identifies areas of opportunities, pulls results, visualizes findings, and ensures the test is set up to be tracked correctly. 
  • Quality Assurance - Ensures tests go live without bugs.
  • (Optional) Data Scientist/Statistician - You don't need a data scientist or a statistician to have a robust testing team. Good testing tools ensure you don't make decisions on insufficient data. As long as your CRO lead is an excellent A/B testing practitioner, then you can feel confident you aren't making decisions on false positives.

At what point does having gaps in data negatively impact experimentation? 

It depends. Do you care more about the precision of the outcome of your A/B test or making money? 

The world is full of noise, and you shouldn't worry about the gaps in the data. The success of your testing program comes down to the number of tests run, the percentage of winning tests, and the average impact per successful experiment. If you let a lack of data prevent you from testing, you are missing out.
Mathew Vermilyer
Senior Director of Analytics & Optimization
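
Taken literally, that framing gives a useful back-of-the-envelope estimate of program value. A minimal sketch, with made-up numbers rather than anything from the interview:

```python
def expected_program_impact(tests_run: int, win_rate: float,
                            avg_impact_per_win: float) -> float:
    """Expected impact = tests run x share of winners x average impact per winner."""
    return tests_run * win_rate * avg_impact_per_win

# Illustrative inputs: 40 tests a year at the 33% win-rate goal mentioned earlier,
# with each winner assumed to be worth $25,000 over its forecast window.
print(f"${expected_program_impact(tests_run=40, win_rate=0.33, avg_impact_per_win=25_000):,.0f}")
# -> $330,000
```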

Always remember that your competitors are running tests and learning how to beat you. Unless you are testing some lifesaving technology, test fast & fail fast.

Mathew, as a veteran, what leadership lessons did you learn from the Marine Corps? 

Throughout my military and professional career, I've found that decisiveness is one of the most powerful traits of a good leader. Although I'm not in the military anymore and the decisions I make now carry vastly different consequences, in the end the worst thing I can do is nothing at all.

‘Analysis Paralysis’ in A/B testing is a real thing. Sometimes you want the test to be a winner so badly that you slice and dice the data in various ways to find out who it might work for. Ultimately, you must decide on the test and move on to the next. 
 
