Multivariate Testing

In an experiment, a marketer manipulates one or more variables while controlling or randomizing the rest, to determine the effect of the independent variables on a dependent variable.

The objective is to test hypotheses, predict outcomes, and explain the causal relationships between variables, if any exist.

What is A/B or Multivariate testing?

A/B testing, also known as split testing, is an experimental setup that compares two versions (known as variations) of a message copy against each other by sending each to an equal number of randomly chosen users, to determine which one performs better on your engagement metrics. It is used when only one variable is being tested.

Multivariate testing extends this to more than two variations of a communication, each sent to a different randomly chosen group of users so that the variations' performance can be compared.

While creating an A/B or multivariate experiment, ensure that the only differences between the variations are the differences you want to test.

e.g. If you only want to test whether Android Push Notifications with images get better CTRs than those without, create two variations with identical content, the only difference being that one includes the image and the other does not.

What is a control group?

A control group is a subset of the customers targeted by a particular campaign who will not receive it, and who hence serve as a baseline for comparing campaign performance. By comparing a variation with the control, you can measure the campaign's impact, i.e. whether customer engagement and conversions would have been better or worse if the campaign had not been sent to those customers.
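As a minimal sketch of that baseline comparison (the conversion counts below are illustrative, not taken from any real campaign):

```python
# Sketch: measuring a campaign's impact against a held-out control group.
# All numbers here are made up for illustration.

def uplift(variant_conversions, variant_size, control_conversions, control_size):
    """Relative lift of the variant's conversion rate over the control's."""
    variant_rate = variant_conversions / variant_size
    control_rate = control_conversions / control_size
    return (variant_rate - control_rate) / control_rate

# e.g. 600 conversions from 10,000 targeted users vs 50 from a 1,000-user control:
# 6% vs 5% conversion rate, i.e. a 20% relative lift
print(f"{uplift(600, 10_000, 50, 1_000):.0%}")
```

A positive lift suggests the campaign helped; a lift near zero (or negative) suggests the message did not move the metric for this segment.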

How should I split my variations?

Your split should account for the minimum sample size, which in turn is a function of your baseline numbers and the change you want to detect over them before declaring the experiment successful.
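To make that relationship concrete, here is a rough sketch using the standard two-proportion sample-size approximation; this is a textbook formula, not the calculation of any specific testing tool:

```python
import math
from statistics import NormalDist

def min_sample_size(baseline, mde, alpha=0.05, power=0.8):
    """Approximate users needed per variation to detect an absolute lift
    of `mde` over a `baseline` conversion rate (two-proportion z-test)."""
    p1, p2 = baseline, baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # significance threshold
    z_beta = NormalDist().inv_cdf(power)            # desired statistical power
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return math.ceil(numerator / mde ** 2)

# Detecting a 2-point absolute lift over a 10% baseline needs
# roughly 3,800+ users in each variation
print(min_sample_size(0.10, 0.02))
```

The smaller the change you want to detect, the larger each variation's audience must be, which is why tiny segments rarely produce conclusive tests.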

A suitable split between your variations will depend on your hypothesis and, consequently, on which variation you expect to perform best. For example, if you believe a variant has an 80% chance of being successful, you could allocate 80% of the audience to it and the remaining 20% to the variation in which you have less confidence.

Once you have set the percentage split between your variations in the campaign, users who uninstalled or breached frequency capping are removed, and the members of the control group or multivariate experiment are then randomly selected from the remaining active tokens. Randomization ensures that the members under every variation are statistically similar to the entire group and are thus exposed to the same set of conditions, except for the particular marketing campaign or campaign instance being tested.
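The weighted random assignment described above can be sketched as follows; the group names and weights are hypothetical, not part of any product's API:

```python
import random

def assign_variations(user_ids, weights, seed=None):
    """Randomly assign eligible users to variations according to split weights.
    `weights` is a mapping like {"control": 0.10, "variation_a": 0.45, ...};
    a fixed seed makes the assignment reproducible for auditing."""
    rng = random.Random(seed)
    names = list(weights)
    values = list(weights.values())
    return {uid: rng.choices(names, weights=values)[0] for uid in user_ids}

# Illustrative 10/45/45 split over 10,000 eligible users
groups = assign_variations(
    range(10_000),
    {"control": 0.10, "variation_a": 0.45, "variation_b": 0.45},
    seed=42,
)
```

Because each user is drawn independently, every group is a statistically similar random sample of the eligible audience, which is exactly what makes the later comparison fair.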

Before creating your experiment, you should:

  1. Know your experiment objective, i.e. the hypothesis you want to test
  2. Decide whether you want a control group for baseline performance
  3. Know the segment you want to test this hypothesis on
  4. Create the variations (2 ^ number of hypotheses, i.e. 2 variations for testing one variable and 4 variations for testing two variables) and finalize their content

    | Variant Name | Hypothesis 1 | Hypothesis 2 |
    |--------------|--------------|--------------|
    | Variation 1  | No           | No           |
    | Variation 2  | Yes          | No           |
    | Variation 3  | No           | Yes          |
    | Variation 4  | Yes          | Yes          |

  5. Clearly define the metrics you want to impact and use them as your experiment's conversion goals, e.g. do you just want higher open rates, or more conversions? Tip: when you test content as the variable in an A/B testing campaign, measure the winner by click rate, as better communication generally improves your click-through rates
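The 2^N variation grid from step 4 can be enumerated programmatically; this sketch uses the generic names from the table above, and the ordering is chosen to match it (Variation 2 carries Hypothesis 1 only, Variation 4 carries both):

```python
from itertools import product

def variation_grid(hypotheses):
    """Enumerate the 2^N on/off combinations for N hypotheses,
    producing one named variation per combination."""
    grid = {}
    for i, combo in enumerate(product([False, True], repeat=len(hypotheses)),
                              start=1):
        # Reverse the combo so the first hypothesis toggles fastest,
        # matching the table ordering used in this article
        grid[f"Variation {i}"] = dict(zip(hypotheses, combo[::-1]))
    return grid

for name, flags in variation_grid(["Hypothesis 1", "Hypothesis 2"]).items():
    print(name, flags)
```

This makes it easy to sanity-check that every combination of hypotheses is covered before you finalize content for each variation.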

While creating the experiment:

  1. Ensure that the users in each split are substantial in number, i.e. at least 5,000 per variation, to get useful insights from your campaigns
  2. Use the metric to be tracked as the primary conversion goal

While analyzing the experiment:

Check which variation performed best over your baseline numbers and what combination of hypotheses it represents; in the example above, Variation 4 represents both hypotheses. So if Variation 2 performs better than Variation 1, we know Hypothesis 1 is valid; if Variation 4 performs better than Variation 2, we know Hypothesis 2 holds as well.
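Before declaring a winner from such a comparison, it is worth checking that the difference is unlikely to be random noise. A minimal sketch using the standard pooled two-proportion z-test (the conversion numbers are illustrative):

```python
from statistics import NormalDist

def two_proportion_pvalue(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates,
    using the pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Illustrative: Variation 2 converts 600/5,000 (12%) vs
# Variation 1 at 500/5,000 (10%)
p = two_proportion_pvalue(600, 5_000, 500, 5_000)
print(f"p = {p:.4f}")
```

A p-value below your chosen significance level (commonly 0.05) supports treating the better-performing variation as a genuine winner rather than chance variation.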

Once the winning message is identified, don't just stop there. Use this learning to create the next round of experiments. The mantra is: Test. Measure. Learn. Repeat.

Next in series is: Creating multivariate experiment for Push
