A/B Test Analysis¶
The AB test calculator helps you design and analyze A/B tests on rate-based metrics such as click-through rate, conversion rate, or cure rate.
This capability is provided by the AB test calculator plugin, which you need to install. Please see Installing plugins.
How To Use¶
The capability is organized into two phases:
Experimental design: estimate sample sizes and split a population into two groups.
Results analysis: summarize experiment outcomes and assess statistical significance.
Design the experiment¶
1. A/B test sample size calculator¶
The AB test sample size calculator webapp computes the sample sizes required for an A/B test and can save the selected parameters to a managed folder.
To create the webapp:
Open the </> tab and select Webapps.
Create a new visual webapp.
Choose AB test sample size calculator.
In the webapp settings, choose an output managed folder for the saved parameters. You can select an existing folder or create a new one, then click Save and view webapp.
The sample size computation uses the following inputs:
Baseline success rate (%): Success rate of the control variant.
Minimal detectable effect (%): Smallest change you want the test to detect.
Statistical significance (%): Significance threshold for the test.
Power (%): Probability of detecting a real effect.
Size ratio (%): Relative size of group B compared to group A.
Two tailed test: Whether to test for differences in both directions or in a single direction.
Daily number of people exposed: Optional input used to estimate experiment duration.
Percentage of the traffic affected: Share of the traffic actually included in the experiment.
The webapp displays the computed sample sizes together with a visualization of the statistical test. It also includes a confusion matrix and a detailed derivation of the sample size formula.
When you click Save sizes in the folder, the webapp saves the selected parameters and computed sample sizes as a JSON file in the chosen managed folder.
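The sample sizes behind these inputs follow the standard two-proportion formula. As an illustrative sketch (not the plugin's actual implementation), a computation for equal-sized groups, with rates expressed as absolute proportions rather than percentages, might look like:

```python
import math
from statistics import NormalDist

def sample_size_per_group(baseline_rate, min_detectable_effect,
                          significance=0.05, power=0.80, two_tailed=True):
    """Classic two-proportion sample size for equal group sizes.

    baseline_rate and min_detectable_effect are absolute proportions
    (e.g. 0.10 and 0.02), not percentages.
    """
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_effect
    alpha = significance / 2 if two_tailed else significance
    z_alpha = NormalDist().inv_cdf(1 - alpha)  # significance quantile
    z_beta = NormalDist().inv_cdf(power)       # power quantile
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

# 10% baseline, 2-point minimal detectable effect, 5% significance, 80% power
print(sample_size_per_group(0.10, 0.02))  # roughly 3.8k users per group
```

Dividing this per-group size by the daily number of exposed users (times the percentage of traffic affected) gives the duration estimate the webapp displays.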
2. Population split¶
The Population split recipe assigns users to group A or group B, typically using sample sizes previously computed by the sample size calculator.
To create the recipe, go to the Flow, click + ADD ITEM > Recipe > AB test calculator > Population split.
Inputs¶
Population: Dataset containing the users involved in the experiment.
Folder (optional): Managed folder containing JSON parameter files saved by the sample size calculator webapp.
Output¶
Experiment dataset: Dataset containing the deduplicated population and an additional dku_ab_group column with the assigned A/B group.
Settings¶
User reference: Column containing the user identifier. It must uniquely identify each user.
Sample size definition: Choose whether to retrieve sizes from the webapp output or enter them manually.
Parameters (computed in the web app): JSON file to use when retrieving values from a managed folder.
Sample size for variation A: Size of group A in manual mode.
Sample size for variation B: Size of group B in manual mode.
Deal with leftover users: If the population is larger than the requested sample sizes, leftover users can be assigned to A, assigned to B, or left blank.
The recipe removes duplicate user references before splitting the population. It raises an error if the input population is smaller than the requested sample sizes.
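The recipe's documented behavior (deduplicate, check sizes, assign groups, handle leftovers) can be sketched in pandas as follows; the function and the `leftover` values here are illustrative assumptions, not the plugin's actual code, though the `dku_ab_group` output column name matches the recipe's:

```python
import pandas as pd

def split_population(df, user_col, size_a, size_b, leftover="blank", seed=42):
    """Deduplicate on the user reference, then randomly assign A/B groups.

    Illustrative sketch mirroring the recipe's documented behavior:
    errors out when the population is too small, and assigns leftover
    users to A, to B, or to a blank group.
    """
    deduped = df.drop_duplicates(subset=user_col).copy()
    if len(deduped) < size_a + size_b:
        raise ValueError("Population smaller than the requested sample sizes")
    shuffled = deduped.sample(frac=1, random_state=seed).reset_index(drop=True)
    groups = ["A"] * size_a + ["B"] * size_b
    remaining = len(shuffled) - size_a - size_b
    groups += {"A": ["A"], "B": ["B"], "blank": [""]}[leftover] * remaining
    shuffled["dku_ab_group"] = groups
    return shuffled

# Six rows, but u2 is duplicated: five unique users, one leftover
users = pd.DataFrame({"user_id": ["u1", "u2", "u2", "u3", "u4", "u5"]})
out = split_population(users, "user_id", size_a=2, size_b=2)
print(out)
```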
Analyze the results of the experiment¶
After the experiment has run, you can compute per-group statistics and then assess the statistical significance of the observed difference.
3. Experiment summary¶
The Experiment summary recipe computes the statistics needed for the A/B test analysis.
To create the recipe, go to the Flow, click + ADD ITEM > Recipe > AB test calculator > Experiment summary.
Input¶
Experiment results: Dataset containing one row per user, with a group column and a binary conversion column.
Output¶
AB testing statistics: Two-row dataset containing the group value, the sample size, and the success rate for each group.
Settings¶
User reference: Column containing the user identifier.
Conversion column: Column indicating whether the user converted. Values must be 0 or 1.
AB group column: Column indicating the group assignment.
The recipe drops rows with missing user, group, or conversion values. It requires exactly two groups and exactly one row per user.
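In pandas terms, the summary amounts to a group-by aggregation after the validation steps above. This is a hedged sketch under the same column conventions as the split recipe, not the recipe's actual implementation:

```python
import pandas as pd

def summarize_experiment(df, user_col, group_col, conversion_col):
    """Per-group sample size and success rate, matching the recipe's
    documented output: a two-row dataset with one row per group.

    Illustrative sketch: drops rows with missing values, then enforces
    one row per user and exactly two groups.
    """
    clean = df.dropna(subset=[user_col, group_col, conversion_col])
    if clean[user_col].duplicated().any():
        raise ValueError("Expected exactly one row per user")
    if clean[group_col].nunique() != 2:
        raise ValueError("Expected exactly two groups")
    return (clean.groupby(group_col)[conversion_col]
                 .agg(sample_size="size", success_rate="mean")
                 .reset_index())

results = pd.DataFrame({
    "user_id": ["u1", "u2", "u3", "u4"],
    "dku_ab_group": ["A", "A", "B", "B"],
    "converted": [1, 0, 1, 1],
})
summary = summarize_experiment(results, "user_id", "dku_ab_group", "converted")
print(summary)
```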
4. Results analysis¶
The AB test results analysis webapp analyzes the outcome of the experiment.
To create the webapp:
Open the </> tab and select Webapps.
Create a new visual webapp.
Choose AB test results analysis.
In the webapp settings, choose how the statistics are provided:
an input dataset: Load the statistics produced by the Experiment summary recipe.
this web app: Enter the values manually.
If you use a dataset, the webapp expects the exact output structure of the Experiment summary recipe: two rows, with the group column plus sample_size and success_rate columns.
The settings also include:
Dataset: Statistics dataset to load.
AB group column: Group column in that dataset.
Output folder for results: Managed folder where the computed results will be saved. You can select an existing folder or create a new one.
The analysis uses the following inputs:
Sample size of group A and group B.
Success rate of group A and group B.
Desired statistical significance.
One-tailed or two-tailed testing.
The webapp displays:
A text summary of the outcome.
The uplift between the two variants.
The Z-score and p-value.
A visualization of the null distribution, the significance threshold, and the observed test score.
When you click Save results, the webapp saves the displayed values as a JSON file in the selected managed folder.
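The displayed values correspond to a standard two-proportion z-test on the summary statistics. As a hedged sketch of that test (the pooled-variance form; not necessarily the webapp's exact implementation):

```python
import math
from statistics import NormalDist

def ab_test_result(n_a, rate_a, n_b, rate_b,
                   significance=0.05, two_tailed=True):
    """Two-proportion z-test on per-group sample sizes and success rates.

    Returns the uplift of B over A, the z-score, the p-value, and
    whether the result clears the chosen significance threshold.
    """
    pooled = (n_a * rate_a + n_b * rate_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_b - rate_a) / se
    tail = NormalDist().cdf(abs(z))
    p_value = 2 * (1 - tail) if two_tailed else 1 - tail
    return {
        "uplift": (rate_b - rate_a) / rate_a,
        "z_score": z,
        "p_value": p_value,
        "significant": p_value < significance,
    }

# Example with hypothetical summary statistics
res = ab_test_result(n_a=4000, rate_a=0.10, n_b=4000, rate_b=0.12)
print(res)
```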