Uplift allows you to:
- create A/B and Multivariate experiments to deliver messages that convert
- use a control group to measure the conversion uplift due to a notification
- analyze what works with your audiences
In this article, we will walk through how you can use Uplift to create and edit a multivariate experiment for Push Notifications. Before that, you might want to read up on what multivariate testing is.
Creating a Multivariate Experiment
Creating a multivariate experiment for pushes is simple and intuitive. Let us walk through a sample experiment:
Sample Hypothesis: Having an emoji in the message title gives better CTRs
Experiment: Create two message variations; use an emoji in the Variation 1 title and a plain title in Variation 2
Measure: Compare the CTRs of Variation 1 and Variation 2. If Variation 1's CTR is higher than Variation 2's, there is a good chance your hypothesis is valid
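To make the "Measure" step concrete, here is a minimal sketch (with hypothetical click and send counts, not data from any real campaign) that computes the two CTRs and runs a two-proportion z-test to check whether the difference is likely more than noise:

```python
from math import sqrt
from statistics import NormalDist

def ctr(clicks: int, sends: int) -> float:
    return clicks / sends

def z_test(clicks_a: int, sends_a: int, clicks_b: int, sends_b: int) -> float:
    """Return the two-sided p-value for the difference in CTRs."""
    p_a, p_b = clicks_a / sends_a, clicks_b / sends_b
    p_pool = (clicks_a + clicks_b) / (sends_a + sends_b)             # pooled CTR
    se = sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))   # standard error
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical results: Variation 1 (emoji title) vs Variation 2 (plain title)
v1_clicks, v1_sends = 540, 10_000   # CTR = 5.40%
v2_clicks, v2_sends = 480, 10_000   # CTR = 4.80%

print(f"Variation 1 CTR: {ctr(v1_clicks, v1_sends):.2%}")
print(f"Variation 2 CTR: {ctr(v2_clicks, v2_sends):.2%}")
print(f"p-value: {z_test(v1_clicks, v1_sends, v2_clicks, v2_sends):.3f}")
```

A small p-value (conventionally below 0.05) suggests the CTR difference is unlikely to be chance alone; with small send volumes, even a visibly higher CTR may not be conclusive.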
Step 1: Choose the platform and target audience
Step 2: Set up the messaging of the Push campaign
Variation 1: With Emoji
Once done with Variation 1, you can add another variation to create the A/B test by clicking:
You can either add a new variation or copy an existing one.
For this example, since we are only experimenting with the title, use Copy Variation 1, which copies all the payload fields from Variation 1. Then change the title for Variation 2 accordingly.
Variation 2: Without Emoji
In the example above, we created only two variations to test a single variable - the emoji. You can create up to five variations to test one or more hypotheses.
Read why you need a control group in your experiment. To add a control group, click the Add button.
You can then set the percentage of users you want in the control group (start your experiments with 2-5% allocated to control) and click ADD.
Once set, you will see a confirmation like the one below. After adding, you can remove or edit the control group split.
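As a quick illustration of why the control group matters (with made-up numbers): the control group gives you the baseline conversion rate of users who did not receive the push, and the uplift attributable to the campaign is the test group's conversion rate minus that baseline.

```python
# Made-up numbers: 1,000 of 50,000 targeted users (2%) are held out as control.
test_conversions, test_users = 1_150, 49_000      # received the push
control_conversions, control_users = 20, 1_000    # control holdout, no push

test_cr = test_conversions / test_users           # ~2.35%
control_cr = control_conversions / control_users  # 2.00% organic baseline

print(f"Test CR: {test_cr:.2%}, Control CR: {control_cr:.2%}")
print(f"Conversion uplift due to the notification: {test_cr - control_cr:.2%}")
```

Without the control baseline, the full 2.35% would look like campaign impact, when 2.00% of users would have converted anyway.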
Decide User Distribution Among Variations
Once you are done setting the message for each variation, click Next to set the split percentage that distributes users among the variations. There are currently two ways to do this:
1. Static Multivariate Testing - With this mechanism, the marketer distributes users across variations in a fixed ratio, e.g., sending 50% to Variation 1 and 50% to Variation 2 to test which variation performs best.
In the example below, we have divided the users equally, with 50% going to each variation (a minimal sketch of this assignment logic follows after the next point).
2. Dynamic Multivariate Testing (powered by Sherpa) - With this mechanism, the variation split is decided at run-time based on each variation's performance up to that point (see the second sketch below). Read how Sherpa dynamically optimizes your multivariate campaigns.
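Here is the promised sketch of the static mechanism (illustrative only, not platform code): every qualification event independently carves out the control holdout first, then picks a variation by fixed weights. The independent per-send draw also previews the non-sticky behaviour described in the note below.

```python
import random

VARIATION_WEIGHTS = {"Variation 1": 0.5, "Variation 2": 0.5}  # fixed 50/50 split
CONTROL_PCT = 0.02                                            # 2% control holdout

def assign() -> str:
    # Carve out the control group first, then split the rest by fixed weights.
    if random.random() < CONTROL_PCT:
        return "Control"  # held out; receives no push
    names, weights = zip(*VARIATION_WEIGHTS.items())
    return random.choices(names, weights=weights)[0]

# Each qualification event draws independently, so assignment is non-sticky.
print([assign() for _ in range(5)])
```

The exact algorithm behind Sherpa is not documented in this article, so the second sketch illustrates the general idea of dynamic allocation with Thompson sampling, a common multi-armed bandit strategy that gradually shifts traffic toward the better-performing variation. All CTR numbers below are hypothetical.

```python
import random

class Variation:
    def __init__(self, name: str):
        self.name = name
        self.clicks = 0  # observed clicks (successes)
        self.sends = 0   # observed sends (trials)

    def sample_ctr(self) -> float:
        # Draw a plausible CTR from a Beta posterior over the true CTR.
        return random.betavariate(1 + self.clicks, 1 + self.sends - self.clicks)

def pick(variations):
    # Route the next send to whichever variation sampled the highest CTR.
    return max(variations, key=lambda v: v.sample_ctr())

variations = [Variation("Variation 1"), Variation("Variation 2")]
true_ctr = {"Variation 1": 0.054, "Variation 2": 0.048}  # hypothetical ground truth

for _ in range(10_000):  # simulate 10,000 sends
    v = pick(variations)
    v.sends += 1
    v.clicks += true_ctr[v.name] > random.random()

for v in variations:
    print(f"{v.name}: {v.sends} sends, observed CTR {v.clicks / v.sends:.2%}")
```

Over the simulated sends, the bandit routes most traffic to the variation with the higher true CTR while still occasionally exploring the other one.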
Important Note: The allocation of a user to any group - test or control - is non-sticky, i.e., for any active campaign, if a user qualifies twice for receiving the campaign, we do not guarantee that a user who received Variation 1 once will (or won't) receive it again.
Complete Step 3 (Scheduling and Goal setting) to finish creating the campaign, then measure results by comparing variation performance on Campaign Analytics.
You should see analytics like the ones below, where you can compare variation performance platform-wise. The column with the crown represents the winning variation - the one with the maximum Click Through Rate.
You can also view a graphical comparison:
Editing a Multivariate Experiment
| Campaign Type | Status | Allowed Edit Options |
| --- | --- | --- |
| General Push | Scheduled but not sent yet | Add/remove variations |
| General Push - Periodic | Scheduled and not even one instance sent | Add/remove variations |
| General Push - Periodic | Scheduled but at least one instance executed | Edit variation message, but cannot add/remove a variation |
| Smart Triggers | Active/Paused | Edit variation message, but cannot add/remove a variation |
Next in series: Sample experiments that you can create