How can I forecast the value of winning tests?
This interview is part of Kameleoon's Expert FAQs series, where we interview leaders in data-driven CX optimization and experimentation. Mathew Vermilyer is an experienced data and analytics leader and a United States Marine Corps veteran. He is currently the Senior Director of Analytics & Optimization at At Home, a home décor chain operating 255 stores in 40 states.
Should I forecast revenue uplifts based on the conversion increase of winning tests?
Yes, it's a great way to measure the overall program performance. There are a few things to keep in mind, however.
Don’t use a 365-day forecast. You can’t assume your “Blue vs. Light Blue button color” test will yield the same incremental results it did during your testing time frame. Too many things change over a year, so I recommend using a 120-day forecast for each winning test.
Not all tests can be forecast the same way. If a test shows a statistically significant positive conversion-rate impact but a negative revenue impact, you don’t want to forecast that negative impact forward. It’s essential to look at multiple metrics for each test.
Consider the cost of running all tests. Even the losing tests should be accounted for: you won’t be forecasting them, but they still had a cost. Each test costs money before it even starts (meetings, design, development, pre-test analytics), so it’s essential to know your internal operational costs.
Another recommendation for measuring the program's success is to calculate your win/loss ratio. An industry benchmark is that around 25% of all tests are successful.
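To make that math concrete, here is a minimal sketch of a 120-day forecast that nets out program costs and reports the win rate. Every figure in it (traffic, conversion rate, lift, order value, cost per test, test counts) is a hypothetical placeholder, not a number from At Home or Kameleoon.

```python
# Rough sketch of a 120-day forecast for a winning test, net of program costs.
# All figures below are hypothetical placeholders; plug in your own numbers.

# Observed during the test window
baseline_daily_visitors = 20_000   # traffic eligible for the experience
baseline_conversion_rate = 0.030   # control conversion rate
observed_lift = 0.05               # relative lift from the winning variation (+5%)
average_order_value = 85.00        # revenue per conversion

# Forecast window: 120 days, not 365 -- too much changes over a year
forecast_days = 120

incremental_conversions_per_day = (
    baseline_daily_visitors * baseline_conversion_rate * observed_lift
)
forecast_revenue = incremental_conversions_per_day * average_order_value * forecast_days

# Account for the cost of the whole program, including the losing tests
cost_per_test = 4_000.00           # meetings, design, development, pre-test analytics
tests_run = 20
wins = 5                           # ~25% win rate is a common industry benchmark
program_cost = cost_per_test * tests_run
win_rate = wins / tests_run

net_forecast = forecast_revenue - program_cost

print(f"120-day incremental revenue forecast: ${forecast_revenue:,.0f}")
print(f"Program cost ({tests_run} tests): ${program_cost:,.0f}")
print(f"Net forecast value: ${net_forecast:,.0f}")
print(f"Win rate: {win_rate:.0%}")
```

Keeping the forecast window as a single parameter makes it easy to shorten further if your business changes even faster than a quarter.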
How can I prevent personal biases from entering my testing program?
The primary way to prevent personal biases or too many opinions from overriding test ideas is to use data to support all tests. Every test needs a sample size estimation, a power analysis, and some kind of low, medium, or high estimate of the return on investment.
Those three inputs make prioritizing tests relatively simple and take a lot of the opinion out of the equation.
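As a rough sketch of what those inputs can look like in practice, the example below uses Python's statsmodels for a power-analysis-driven sample-size estimate plus a simple low/medium/high value band. The baseline rate, detectable lift, traffic, and value-per-point figures are all hypothetical assumptions, and most testing tools will produce the sample-size number for you.

```python
# Pre-test numbers used to prioritize an idea: sample size from a power
# analysis, estimated duration, and a rough low/medium/high value band.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_cr = 0.030               # current conversion rate (hypothetical)
minimum_detectable_lift = 0.10    # smallest relative lift worth detecting (+10%)
target_cr = baseline_cr * (1 + minimum_detectable_lift)

# Effect size (Cohen's h) for a two-proportion comparison
effect_size = proportion_effectsize(target_cr, baseline_cr)

# Visitors needed per variation at 95% confidence and 80% power
n_per_variation = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, alternative="two-sided"
)

daily_traffic_per_variation = 10_000   # hypothetical eligible traffic per arm
days_to_run = n_per_variation / daily_traffic_per_variation

print(f"Sample size per variation: {n_per_variation:,.0f}")
print(f"Estimated test duration: {days_to_run:.1f} days")

# Rough low/medium/high return estimate (value per point of lift is an assumption)
value_per_lift_point = 50_000   # e.g., 120-day revenue per +1% relative lift
for label, lift_points in [("low", 1), ("medium", 3), ("high", 6)]:
    print(f"Estimated return ({label}): ${lift_points * value_per_lift_point:,.0f}")
```

Whatever tool produces the numbers, the point is that every idea enters the backlog with the same three data points attached, so prioritization starts from evidence rather than opinion.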
How can I incorporate segmentation and targeting into my testing opportunities?
I like to use the 80/20 rule for segmentation/personalization testing: 80% of your tests should focus on micro-conversion tactics and the significant site changes already on the roadmap, and the remaining 20% should concentrate on segmentation/personalization.
Follow a crawl-walk-run philosophy and start simple. New vs. returning users is always a good first segment; from there you can move on to personas and, eventually, 1-to-1 personalization.
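As a sketch of that first crawl step, the example below reads one (entirely hypothetical) test's results separately for new vs. returning visitors, running a two-proportion z-test from statsmodels on each segment.

```python
# Simple first segmentation cut: the same test's results broken out by
# new vs. returning visitors. All counts below are made-up placeholders.
from statsmodels.stats.proportion import proportions_ztest

segments = {
    # segment: (control conversions, control visitors, variant conversions, variant visitors)
    "new":       (310, 12_000, 365, 12_100),
    "returning": (540,  9_500, 545,  9_400),
}

for name, (conv_c, n_c, conv_v, n_v) in segments.items():
    cr_c, cr_v = conv_c / n_c, conv_v / n_v
    # Two-proportion z-test on this segment only
    z_stat, p_value = proportions_ztest([conv_v, conv_c], [n_v, n_c])
    print(f"{name:>9}: control {cr_c:.2%} vs variant {cr_v:.2%} (p = {p_value:.3f})")
```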
What does the ideal A/B testing team look like?
Here’s my ideal CRO testing team:
- Conversion Rate Optimization Lead - Owns the testing roadmap and methodology. Presents test ideas and results.
- UI/UX Designer - Responsible for turning hypotheses into an optimal experience for the user when they interact with the website.
- Frontend Developer - Develops the test in the A/B testing tool or codebase.
- Web Analyst/Business Intelligence - Identifies areas of opportunity, pulls results, visualizes findings, and ensures the test is set up to be tracked correctly.
- Quality Assurance - Ensures tests go live without bugs.
- (Optional) Data Scientist/Statistician - You don't need a data scientist or statistician to have a robust testing team. Good testing tools ensure you don't make decisions on insufficient data. As long as your CRO lead is an excellent A/B testing practitioner, you can feel confident you aren't making decisions on false positives.
At what point does having gaps in data negatively impact experimentation?
It depends. Do you care more about the precision of your A/B test's outcome, or about making money?
Always remember that your competitors are running tests and learning how to beat you. Unless you are testing some lifesaving technology, test fast & fail fast.
Mathew, as a veteran, what leadership lessons did you learn from the Marine Corps?
Throughout my military and professional career, I've found that decisiveness is one of the most powerful traits of a good leader. I'm no longer in the military, and the decisions I make now carry vastly different consequences, but in the end, the worst thing I can do is nothing.
‘Analysis Paralysis’ in A/B testing is a real thing. Sometimes you want a test to be a winner so badly that you slice and dice the data in various ways to find out who it might work for. Ultimately, you must make a decision on the test and move on to the next one.