This article helps you:
Add recommendation and guardrail metrics to your experiment
Create new metrics from scratch, and edit existing metrics
An experiment can’t tell you anything without events to track. Adding metrics to your experiment occurs in the Goals segment of the experiment design panel. Here, you’ll tell Amplitude Experiment what you want your recommendation metric to be, as well as define any secondary metrics. The recommendation metric determines whether your hypothesis is accepted or rejected, and therefore, whether your experiment has succeeded or failed.
There’s a lot riding on your recommendation metric, so it’s important to choose the right one. If you’re not experienced in A/B testing, it can be hard to know which one that is. But if you know what to look for, your odds of a successful variant improve dramatically:
One common mistake is defaulting to a revenue metric when it’s not appropriate. This happens when your variant introduces a change that's separate from the metric you’ve selected. If your variant changes how your product page looks and functions, you should choose a metric on that page as your recommendation metric, instead of a revenue metric that might not come into play for several more steps down the funnel.
Amplitude Experiment lets you define multiple metrics when running an experiment. Unlike the recommendation metric, secondary metrics aren't required, but they're often helpful. They can not only improve the quality of your analysis, but also help you evaluate whether it's worthwhile to roll out your experiment at all.
To set up the metrics for your experiment, follow these steps:
Turn on the Enable Recommendation option to enable recommendations, duration estimates, result takeaways, and statistical significance notifications.
The duration estimator estimates the time and sample size you need to achieve significant results in your experiment, given your metric settings. Amplitude Experiment pre-populates reasonable industry defaults based on historical data, but you can adjust the confidence level, statistical power, minimum detectable effect, standard deviation, and test type as needed.
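To see how those settings interact, the standard two-sample sample-size formula can be sketched as below. This is an illustration of the statistics only, not Amplitude's exact implementation, and the function name and defaults are assumptions:

```python
from math import ceil
from statistics import NormalDist

def estimate_sample_size(std_dev: float, mde: float,
                         confidence: float = 0.95, power: float = 0.80) -> int:
    """Per-variant sample size for a two-sided, two-sample z-test.

    Illustrative sketch only -- Amplitude's duration estimator may differ.
    """
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # ~1.96 at 95% confidence
    z_beta = NormalDist().inv_cdf(power)                      # ~0.84 at 80% power
    return ceil(2 * (std_dev ** 2) * (z_alpha + z_beta) ** 2 / mde ** 2)

# Halving the minimum detectable effect roughly quadruples the sample you need:
print(estimate_sample_size(std_dev=1.0, mde=0.2))   # 393 per variant
print(estimate_sample_size(std_dev=1.0, mde=0.1))   # 1570 per variant
```

This is why tightening the minimum detectable effect or raising the confidence level extends the estimated duration: both inflate the required sample size.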
If you don’t want to use any of the metrics in the drop-down list, you can create a new metric. To do so, follow these steps:
By default, the Retention metric doesn't support CUPED, exposure attribution settings, or calendar day windows. Instead, the metric attributes exposures using any exposure event and calculates the nth-day value in 24-hour window increments.
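The 24-hour window behavior can be sketched as follows. This is a hedged illustration of the windowing described above, not Amplitude's source; the function name is an assumption:

```python
from datetime import datetime, timedelta

def nth_day_window(exposure: datetime, n: int) -> tuple[datetime, datetime]:
    """Day-n retention window measured in 24-hour increments from the
    user's exposure time, not in calendar days (illustrative sketch)."""
    start = exposure + timedelta(hours=24 * n)
    return start, start + timedelta(hours=24)

# A user exposed at 3 p.m. has a day-1 window starting at 3 p.m. the next
# day -- not at midnight, as a calendar-day window would.
exposure = datetime(2024, 4, 30, 15, 0)
print(nth_day_window(exposure, 1))
```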
In your experiment, open the Design Experiment panel or the Analysis Settings, and choose the exposure event. When a user triggers this event, Amplitude Experiment buckets them into the experiment. The Amplitude exposure event is the most accurate and reliable way to track user exposure to your experiment's variants, so use it if possible.
Amplitude sends the Amplitude exposure event when your app calls .variant(). It sets the user properties Amplitude Experiment uses to conduct its analyses. When you use the Amplitude exposure event, you can be certain your app triggers the event at the correct time.
You can select a custom exposure event instead. To do so, click Custom Exposure, then Select event … However, there's a much greater risk of triggering a custom exposure event at the wrong time, which can lead to a sample ratio mismatch.
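A sample ratio mismatch can be detected with a chi-squared goodness-of-fit test on the observed bucket counts. A minimal sketch, assuming a two-variant experiment (the 3.841 threshold is the 95% critical value for one degree of freedom):

```python
def srm_detected(control: int, treatment: int,
                 expected_ratio: float = 0.5) -> bool:
    """Flag a sample ratio mismatch with a chi-squared test (df=1, alpha=0.05).

    Illustrative sketch -- not Amplitude's implementation.
    """
    total = control + treatment
    expected_control = total * expected_ratio
    expected_treatment = total * (1 - expected_ratio)
    chi2 = ((control - expected_control) ** 2 / expected_control
            + (treatment - expected_treatment) ** 2 / expected_treatment)
    return chi2 > 3.841  # 95% critical value for 1 degree of freedom

print(srm_detected(5130, 4870))  # True: a 51.3/48.7 split of 10,000 users is suspect
print(srm_detected(5010, 4990))  # False: small imbalances are expected noise
```

If a check like this fires, the exposure event is likely being triggered for users who never saw the variant, or skipped for users who did.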
For more information, see this article about exposure events in Amplitude Experiment.
The next step is defining your experiment's audience.
April 30th, 2024