A/B testing can be an effective method for increasing the total conversions of an app. It works by publishing two versions of an app’s product detail page (PDP) and comparing the conversions between them. This week we launched an A/B Testing tool in the Developer Dashboard that enables developers of Meta Quest Store apps to run A/B tests on a variety of PDP assets.
We’re excited about the value this tool provides in evaluating PDP asset strategy and understanding comparative improvements across Meta Quest Store conversion metrics. A/B testing allows you to gather statistically significant results quickly and determine a clear winner between assets so you can build on your hypotheses. Below we dive deeper into how you can use the tool effectively along with some best practices to help you achieve significant results.
Choose Your Assets & Define Your Audience
A/B tests allow you to compare assets within three PDP asset variables: Trailer Video, Description (Short and Long), and Cover Art (Square and Landscape). Once your test is created, submitted, and approved, your test assets are shown to either 10%, 25%, or 50% of the Meta Quest audience in the Store. While the audience split can be changed, a 50/50 split gives your test asset the most traffic, which increases the probability of statistically significant results and helps you reach them faster.
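To see why a larger split tends to reach significance sooner, here is a rough, illustrative sketch rather than the tool’s actual methodology: it approximates how many visitors the test variant would need for a standard two-proportion comparison and how long that takes at different splits. The baseline conversion rate, detectable lift, daily traffic figure, and the `sample_size_per_variant` helper are all hypothetical assumptions.

```python
# Illustrative sketch (not the A/B Testing tool's methodology) of why a
# 50/50 split reaches statistical significance faster: the test variant
# simply accumulates the required sample size sooner.
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, minimum_effect, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect minimum_effect
    (absolute lift over baseline_rate) with a two-proportion test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    p_avg = baseline_rate + minimum_effect / 2
    variance = 2 * p_avg * (1 - p_avg)
    return ceil((z_alpha + z_beta) ** 2 * variance / minimum_effect ** 2)

# Hypothetical numbers: 2.0% baseline conversion rate, a 0.5 percentage-point
# lift you want to detect, and 2,000 PDP visitors per day.
needed = sample_size_per_variant(baseline_rate=0.02, minimum_effect=0.005)
daily_visitors = 2_000
for split in (0.10, 0.25, 0.50):
    days = ceil(needed / (daily_visitors * split))
    print(f"{int(split * 100):>2}% split: ~{days} days for {needed} test-variant visitors")
```

With these assumed numbers, a 10% split would need well over twice the 30-day maximum test length, while a 50/50 split gets there in roughly two weeks; your own traffic and effect sizes will change these figures.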
Your assets are your main driver for drawing in audiences and purchases. When A/B testing, consider testing assets that are substantially different from your original. This encourages significant results and can help you evaluate audiences’ tastes. The A/B testing tool also allows you to localize your test descriptions to reach diverse audiences and more accurately measure performance among audiences who don’t speak English as their primary language.
To encourage statistically significant and meaningful results that can be used to verify your hypothesis, the A/B Testing tool only allows you to test one type of PDP variable at a time.
Dig Into Engagement & Conversion Metrics
A/B testing compares the performance of your original and test assets by measuring Conversion Rate, Reach, Unique Clicks, and Unique Conversions. The goal of the test is to generate statistically significant results, giving you confidence that differences in asset performance are attributable to the assets themselves rather than to chance or external variables.
Statistical significance is achieved when your PDP generates enough traffic to establish, at a 95% confidence level, a difference between assets. While A/B tests can last up to 30 days, your test may reach statistical significance sooner. Measurements are withheld for the first week to ensure test accuracy. The number of visitors and the error rate may vary, but these thresholds help ensure that your results are significant and can be used to verify your hypothesis.
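As a concrete illustration of what statistical significance at 95% confidence means for conversion rates, here is a minimal sketch using a standard two-proportion z-test. The A/B Testing tool’s exact statistical method isn’t specified here, and the conversion counts below are hypothetical.

```python
# Minimal sketch of evaluating significance between two conversion rates
# with a two-proportion z-test. This is an illustrative assumption, not
# the Meta Quest A/B Testing tool's documented methodology.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conversions_a, reach_a, conversions_b, reach_b):
    """Return the z statistic and two-sided p-value for the difference
    between two conversion rates."""
    rate_a = conversions_a / reach_a
    rate_b = conversions_b / reach_b
    pooled = (conversions_a + conversions_b) / (reach_a + reach_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / reach_a + 1 / reach_b))
    z = (rate_b - rate_a) / std_err
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical example: original PDP converts 400 of 20,000 visitors (2.0%),
# test PDP converts 480 of 20,000 visitors (2.4%).
z, p = two_proportion_z_test(400, 20_000, 480, 20_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 meets the 95% confidence threshold
```

In this made-up example the p-value lands below 0.05, which corresponds to the 95% confidence threshold the tool uses to declare significance.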
If statistically significant results are achieved, the better-performing asset is declared the winner. If your test asset is declared the winner, or if it performs better than your original asset in a test that doesn’t reach statistical significance, you have the option to publish it to the Meta Quest Store and display it to 100% of your audience.
Make the Most of Your Results
To gain the most value from your results, it’s important to create a hypothesis that can be measured by the parameters of the test. A specific and data-driven hypothesis can help you make incremental changes and understand asset performance. After your test has ended, build on what you’ve learned by running more tests with an altered hypothesis.
We also recommend avoiding marketing moments such as discounts, major updates, and new add-on or DLC releases for the duration of your test. These moments can influence behavior and make it difficult to attribute the test results to your PDP assets.
Get Started with A/B Tests
All developers with apps listed on the Meta Quest Store can now access the A/B Testing tool in the Developer Dashboard. Select your app and navigate to the “A/B Testing” tab on the left-hand side. Here you can view test results and manage upcoming, active, and completed tests in one location.
A/B tests are submitted and reviewed prior to publishing. During the review period, you may be asked to make changes to your submission. Please note that only one A/B test can run at a time per app, and you can’t publish other app metadata changes while a test is active. To help you manage your A/B tests, automated emails will notify you about requested changes, test approval, statistically significant results, and test completion.
Get started with A/B testing by visiting the documentation, which contains additional best practices and an FAQ.