Multivariate and split testing are experiments in which marketers modify one or more variables of a campaign to determine the effect of the modification on the campaign's performance metrics.

Studying the effect of one or more independent variables on a dependent variable helps to:

  • Predict phenomena
  • Explain the causal relationships between variables, if any exist.

What is A/B testing?

A/B testing, also known as split testing, is an experimental setup that compares two versions (known as variations) of a message against each other by sending each to an equal number of randomly chosen users, to determine which one performs better on your engagement metrics. It is used when only one variable is being tested.

What is Multivariate testing?

Multivariate testing sends more than two variations of a communication to different randomly chosen user groups to compare the performance of the variations.

Differences between A/B and Multivariate testing

While creating an A/B or multivariate experiment, ensure that the only difference between the variations is the one that you want to test.

For example:

If you only want to test whether Android Push Notifications with images get better CTRs than the ones without, create two variations with identical content, the only difference being that one variation includes the image and the other does not.

Different types of user distribution

Manual User Distribution

In manual distribution, marketers send the variations to a fixed set of users, analyze the performance of each variation, and then send the best-performing (winner) campaign to all the users.

  • Marketers set the percentage of users receiving the campaign.
  • Marketers look at the analysis and modify the percentage as required. 

For more information, refer to Static User Distribution.

Dynamic User Distribution using Sherpa

In dynamic distribution, MoEngage Sherpa automatically decides the winning campaign and sends it to the users.

For more information, refer to Dynamic User Distribution.

How should I split my variations?

Your split should provide a minimum sample size per variation, which in turn is a function of your baseline numbers and the minimum difference between variations you want to detect before declaring the experiment successful.
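As a rough guide, the minimum sample size per variation can be estimated with the standard two-proportion formula. This is a minimal sketch using only the Python standard library; the function name and the example numbers are illustrative, not part of MoEngage's product.

```python
from statistics import NormalDist
import math

def min_sample_size(baseline_rate, min_detectable_effect, alpha=0.05, power=0.8):
    """Per-variation sample size for detecting an absolute lift over a
    baseline conversion rate (two-sided two-proportion test)."""
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_effect
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# e.g. 4% baseline CTR, and you want to detect a 1-point absolute lift
print(min_sample_size(0.04, 0.01))
```

Note how the required sample shrinks as the effect you want to detect grows: subtle differences between variations need far more users to confirm.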

A suitable split between your variations depends on:

  • The hypothesis or the objective of the experiment.
  • The probabilistic chance that each variation is expected to perform the best.

For example, if you believe a variant has an 80% chance of being successful, you should allocate 80% of the audience to it and the remaining 20% to the variation in which you have less confidence.

For the experiment:

  • Set the percentage split between your variations in the campaign.
  • Remove users who have uninstalled the app or breached frequency capping limits.

MoEngage randomly selects the members of the control group or multivariate experiment from all the active tokens.

Randomization ensures that the members under every variation are statistically similar to the members of the entire group and are thus exposed to the same set of conditions, except for the particular marketing campaign or campaign instance being tested.
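The weighted random assignment described above can be sketched in a few lines. This is an illustrative sketch, not MoEngage's implementation; the function name, seed, and the 80/20 weights are assumptions for the example.

```python
import random

def assign_variations(user_ids, weights, seed=42):
    """Randomly assign each user to a variation according to split weights.
    `weights` maps variation name -> fraction of the audience (sums to 1)."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    names = list(weights)
    probs = [weights[n] for n in names]
    return {uid: rng.choices(names, weights=probs)[0] for uid in user_ids}

users = [f"user_{i}" for i in range(10_000)]
split = assign_variations(users, {"Variation A": 0.8, "Variation B": 0.2})
counts = {v: list(split.values()).count(v) for v in ("Variation A", "Variation B")}
print(counts)  # roughly an 80/20 split
```

Because every user's assignment is an independent random draw, each variation's audience is statistically similar to the whole segment, which is exactly what makes the comparison between variations fair.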

Before Experiment

Before creating your experiment, you should:

  1. Know your experiment objective, i.e., the hypothesis you want to test.
  2. Decide whether you want to create a control group for baseline performance.
  3. Know the segment on which you want to test this hypothesis.
  4. Create the required number of variations (2^number of hypotheses, i.e., 2 variations for testing one variable and 4 variations for testing two variables) and finalize their content.

     Variant Name    Hypothesis 1    Hypothesis 2
     Variation 1     No              No
     Variation 2     Yes             No
     Variation 3     No              Yes
     Variation 4     Yes             Yes

  5. Clearly define the metrics that you want to impact and use them as your experiment's conversion goals. For example, do you want to drive higher open rates or get more conversions?


    While testing content as the variable in an A/B Testing Campaign, measure the winner by click rate, as better communication generally improves your click-through rates.
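The 2^n variation matrix from step 4 can be generated programmatically for any number of hypotheses. A minimal sketch (the function name is illustrative):

```python
from itertools import product

def variation_matrix(hypotheses):
    """Enumerate all 2^n on/off combinations of n hypotheses,
    one combination per variation."""
    rows = []
    for i, combo in enumerate(product([False, True], repeat=len(hypotheses)), start=1):
        combo = combo[::-1]  # vary the first hypothesis fastest
        rows.append({"variant": f"Variation {i}",
                     **{h: applied for h, applied in zip(hypotheses, combo)}})
    return rows

for row in variation_matrix(["Hypothesis 1", "Hypothesis 2"]):
    print(row)
```

With two hypotheses this yields four variations: a baseline with neither change, one variation per single hypothesis, and one combining both.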

During Experiment

While creating the experiment:

  1. Ensure that the users in each split are substantial in number, i.e., at least 5,000 users per variation, to get useful insights from your campaigns.
  2. Use the metric to be tracked as a primary conversion goal

After Experiment

While analyzing the experiment:

Check which variation has performed best over your baseline numbers and which combination of hypotheses it represents. For example, in the table above, Variation 4 represents both hypotheses. So if Variant 2 performs better than Variant 1, we know Hypothesis 1 is valid; if Variant 4 performs better than Variant 2, we know Hypothesis 2 also holds.
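One common way to check that a winning variation's lift is real rather than random noise is a two-proportion z-test. This sketch uses only the Python standard library; the click and send counts are made up for illustration.

```python
from statistics import NormalDist
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for whether variation B's conversion rate
    differs from variation A's."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Variant 1: 200 clicks out of 5,000 sends; Variant 2: 260 clicks out of 5,000
p = two_proportion_ztest(200, 5000, 260, 5000)
print(f"p-value = {p:.3f}")
```

A p-value below your chosen threshold (commonly 0.05) suggests the difference between the variants is unlikely to be chance, so the corresponding hypothesis is supported.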

Next Steps

When you identify the winning message, don't stop there. Use this learning to create the next round of experiments. The mantra is: Test. Measure. Learn. Repeat.
