A/B Testing and Control Groups with Mobile In-App Campaigns

This article covers the following topics for your Mobile In-App Campaigns -

  • Creating A/B or multivariate experiments
  • Static vs Dynamic Distribution for your tests
  • Adding a control group
  • Measuring the results of your experiment
  • Sample Experiments for Mobile In-App Campaigns

Creating A/B or multivariate experiments

To create an A/B or multivariate experiment with your mobile in-app campaigns, start with the campaign creation workflow for mobile in-app campaigns, as described in this article.

Next, on Step 2 of campaign creation, you can add an A/B test as shown below -

[Screenshot: Screen_Shot_2021-04-07_at_3.42.46_PM.png]

You can add up to five variations to your campaign, either by copying an existing variation or by creating a new one from scratch.

For more information on A/B and multivariate testing, please refer to this article.

Static vs Dynamic Distribution for your tests

While creating an A/B or multivariate experiment for your Mobile In-App Campaigns, you can distribute users across the variations either dynamically, using MoEngage's AI agent Sherpa, or manually, by defining a percentage distribution for each variation, as shown below -

[Screenshot: Screen_Shot_2021-04-07_at_5.38.23_PM.png]

  • Dynamically using Sherpa: Select this option if you want MoEngage's AI agent, Sherpa, to dynamically optimize the user distribution across the variations based on the performance of each variant.
    When you select this option, you will need to specify one of the following optimization metrics, which Sherpa uses to evaluate the performance of each variant and automatically push more users to the better-performing variations (see the sketch after this list).

    The following optimization metrics are available; depending on the goal of your campaign, select the most relevant one -

    - Engagement: Select this when you want to optimize the distribution across the variations based on the click-through rate of each individual variant.

    - Conversions: Select this when you want to optimize the distribution across the variations based on the conversion rate of each individual variant.

    - Revenue: Select this when you want to optimize the distribution across the variations based on the revenue generated by each individual variant.

    For more information on how Sherpa works, please refer to this article.

  • Static Distribution: If you select this option, you can define a static percentage distribution for each variation, which will be maintained throughout the campaign's lifetime.
    Please note that the actual user distribution can vary slightly from the defined percentages under the static distribution model.
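
For intuition, here is a minimal sketch contrasting the two models: the static path buckets each user deterministically by percentage, while the dynamic path uses Thompson sampling on click-through rate, a standard technique for this kind of allocation. This is an illustration of the general approach, not MoEngage's actual Sherpa implementation (which is not public); all names and numbers below are assumptions.

```python
import hashlib
import random

# --- Static distribution: a fixed percentage split, stable per user ---
def assign_static(user_id: str, splits: dict) -> str:
    """Deterministically bucket a user into a variation by percentage."""
    # Hash the user ID into a stable number in [0, 100).
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 10000 / 100.0
    cumulative = 0.0
    for variation, pct in splits.items():
        cumulative += pct
        if bucket < cumulative:
            return variation
    return list(splits)[-1]  # guard against rounding

# --- Dynamic distribution: Thompson sampling on click-through rate ---
def assign_dynamic(stats: dict) -> str:
    """Pick a variation by sampling each variant's CTR posterior.

    stats maps variation -> (clicks, impressions). Variations with a
    better observed CTR get sampled (and therefore served) more often.
    """
    draws = {
        v: random.betavariate(clicks + 1, impressions - clicks + 1)
        for v, (clicks, impressions) in stats.items()
    }
    return max(draws, key=draws.get)

print(assign_static("user-42", {"A": 50.0, "B": 30.0, "C": 20.0}))
print(assign_dynamic({"A": (40, 1000), "B": (65, 1000), "C": (52, 1000)}))
```

Hash-based bucketing like this would also explain the slight variance noted above: the observed split converges to the defined percentages only as the audience grows.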


Adding a control group

Control groups allow you to withhold the campaign from a subset of your target audience. With control groups, you can measure the impact of showing a campaign against not showing it.

To know more about control groups, please refer to this article.

You can add a control group to your mobile in-app campaign on Step 1 of campaign creation, as shown below -

[Screenshot: Screen_Shot_2021-04-07_at_3.42.16_PM.png]

Once a control group is added to your campaign, the defined percentage of users will not see the campaign's message.
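
Conceptually, a control group is a stable random holdout: a fixed percentage of the target audience is bucketed out before variation assignment and never receives the message, while their conversions are still tracked for comparison. Below is a minimal sketch assuming hash-based bucketing; the function and identifiers are illustrative, not MoEngage's API.

```python
import hashlib

def in_control_group(user_id: str, campaign_id: str, control_pct: float) -> bool:
    """Return True if this user is held out from the campaign.

    Hashing user + campaign keeps the holdout stable within a campaign
    while letting the same user fall into different groups across campaigns.
    """
    key = f"{campaign_id}:{user_id}".encode()
    bucket = int(hashlib.sha1(key).hexdigest(), 16) % 10000 / 100.0
    return bucket < control_pct

for user in ["user-1", "user-2", "user-3"]:
    if in_control_group(user, "campaign-123", control_pct=10.0):
        print(user, "-> control (message suppressed, conversions still tracked)")
    else:
        print(user, "-> eligible for a variation")
```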


Measuring the results of your experiment

Once you have created an A/B or multivariate experiment with Mobile In-App campaigns, you can analyze the performance of the experiment on your campaign analytics page (All Campaigns Page -> Campaign -> Analytics).

Please note the following key terms that you will come across while analyzing your A/B experiment -

  • Best Variant: This denotes the best-performing variant of your campaign. If the user distribution for the experiment is static, MoEngage uses the click-through rate of each variant to declare the winning variation. If the distribution is dynamic, MoEngage uses the selected optimization metric to declare the winning variation.

  • Chances to beat all other variations: This denotes the probability of each variation beating all the other variations. Again, this is calculated from the click-through rate for static user distribution and from the selected optimization metric for dynamic distribution (a sketch of this calculation follows this list).

  • Uplift: This denotes the uplift in the primary conversion goals of the campaign as compared to your control group. This metric tells you whether the campaign was successful. Please read this article for more information on how we measure uplift.
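
A "chances to beat all" figure like this is commonly estimated with Bayesian posterior sampling. The sketch below shows one standard way to compute it from clicks and impressions; it illustrates the general technique and is not necessarily the exact method MoEngage uses.

```python
import random

def chance_to_beat_all(stats: dict, n_draws: int = 100_000) -> dict:
    """Estimate P(variation beats all others) from (clicks, impressions).

    Models each variation's click-through rate as a Beta posterior and
    counts how often each variation wins across Monte Carlo draws.
    """
    wins = {v: 0 for v in stats}
    for _ in range(n_draws):
        draws = {
            v: random.betavariate(c + 1, n - c + 1)
            for v, (c, n) in stats.items()
        }
        wins[max(draws, key=draws.get)] += 1
    return {v: w / n_draws for v, w in wins.items()}

# Example: variation B has the highest observed CTR, so it should win most draws.
print(chance_to_beat_all({"A": (40, 1000), "B": (65, 1000), "C": (52, 1000)}))
```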

Sample Experiments for Mobile In-App Campaigns

Hypothesis: Having a solid red background color will improve the click-through rate of mobile in-app campaigns.

Experiment: Create an A/B test with three variations of the in-app message in which only the background color differs.

Measurement: Observe the click-through rate for each variation and decide whether the variation with the red background actually has the highest click-through rate of all the variations.
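
Before declaring the red background the winner, it helps to check that the gap in click-through rates is larger than random noise. Below is a minimal sketch using a two-proportion z-test; the click and impression counts are illustrative, not real results.

```python
from math import sqrt, erf

def two_proportion_p_value(clicks_a: int, n_a: int, clicks_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Convert |z| to a two-sided p-value via the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Red background (variation A) vs. the runner-up (variation B).
p = two_proportion_p_value(clicks_a=130, n_a=2000, clicks_b=98, n_b=2000)
print(f"p-value: {p:.4f}")  # a small p-value suggests the CTR gap is not chance
```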


Note: This feature is currently in closed beta and is available only for a few accounts. If you wish to have access to this feature, please contact your Customer Success Manager.
