
A/B test implementation checklist

This guide helps you avoid issues like:

  • A/A tests (where nothing changes between variants) reaching statistical significance
  • Tests never reaching statistical significance
  • An A/B test behaving opposite to expectations

Send valid events

Your events must be valid for A/B testing to work.

Identify your users

For successful A/B testing, Algolia needs to identify your users. Do this by generating a userToken that identifies each user, even across multiple devices. For example, you could base it on your internal user ID.

If you’re using InstantSearch or the API clients, you can set the userToken in the headers or search parameters.
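For example, with the JavaScript API client you can pass the token as a search parameter. A minimal sketch; the application ID, API key, and index name are placeholders:

// Minimal setup with the algoliasearch JavaScript client;
// credentials and index name are placeholders.
const algoliasearch = require('algoliasearch');

const client = algoliasearch('YOUR_APP_ID', 'YOUR_SEARCH_API_KEY');
const index = client.initIndex('YOUR_INDEX_NAME');

// Send the same userToken with every search so the engine can keep
// this user on a single A/B test variant.
index.search('query', { userToken: 'internal-user-42' })
  .then(({ hits }) => console.log(hits));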

By default, the Insights client generates a userToken for you and stores it in a cookie on your site. If you don’t override this behavior, you need to send this token with every search to link that search to the user.
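If you rely on the token the Insights client generates, you can read it back and forward it with each search. A sketch, assuming the search-insights library is loaded as aa and reusing the index from the previous snippet:

// Initialize the Insights client; with useCookie enabled it generates
// an anonymous token and stores it in a cookie.
aa('init', { appId: 'YOUR_APP_ID', apiKey: 'YOUR_SEARCH_API_KEY', useCookie: true });

// Read the stored token and send it with the search, linking the
// search to the same user.
aa('getUserToken', {}, (err, userToken) => {
  if (err) throw err;
  index.search('query', { userToken });
});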

Either of these options enables the engine to identify a user and keep them on a single variant during the test.

Forwarding from a server

If you search from your backend, don’t forward queries from multiple users without differentiating them: the engine will associate all users with one variant, which skews results. Instead, forward each user’s userToken or IP address.
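For example, on a Node.js backend you might forward the end user’s identity with each request. A sketch: req is the incoming HTTP request, and getUserToken(req) is a hypothetical helper that resolves your internal user ID.

// Forward the end user's identity instead of letting every query
// appear to come from the server itself.
function searchForUser(req, query) {
  return index.search(query, {
    // Preferred: a stable per-user token.
    userToken: getUserToken(req),
    // Alternative: forward the end user's IP address.
    headers: { 'X-Forwarded-For': req.ip },
  });
}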

Use the Personalization implementation help page on the dashboard to confirm your queries have associated userTokens.

Personalization is available on the Build and Premium pricing plans.

Anonymous users can skew results

Sometimes, you may have a mix of anonymous and identified users. For instance, you might want to assign an anonymous user token to users who haven’t yet accepted cookie consent.

However, when A/B testing with a mixture of anonymous and identified users, results may be inaccurate.

User tokens track which variant each user is assigned to: A or B. Because all anonymous users share the same user token, they appear to the engine as a single person. Even with similar numbers of anonymous and identified testers, that one “person” appears to carry out far more searches than anyone else, which can make one variant look better than the other when it isn’t.

The best way to fix this issue is to turn off A/B testing for anonymous users: create a contextual rule that disables A/B testing, then apply that rule to anonymous user searches.

Create a contextual rule

  1. Add a rule context. Call it something like “anonymous”.
  2. In the Consequences section of the rule, select Add Query Parameter and enter { "enableABTest": false } (see the API sketch after these steps).
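You can also create the rule programmatically. A minimal sketch using the JavaScript client’s saveRule method; the objectID is arbitrary, and it assumes enableABTest is accepted as a rule query parameter, matching the dashboard step above.

// Sketch: create the 'anonymous' contextual rule via the API.
// Assumes `index` is an initialized index (see the earlier setup).
index.saveRule({
  objectID: 'disable-ab-test-for-anonymous', // arbitrary identifier
  conditions: [{ context: 'anonymous' }],    // fires only when this context is sent
  consequence: {
    params: { enableABTest: false },         // turn off A/B testing for these queries
  },
}).then(() => console.log('Rule saved'));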

Assign contextual rule to anonymous user searches

The following sample code checks if the user token is anonymous (null, undefined, or equal to YOUR_ANONYMOUS_USER_TOKEN). If it is, it applies the anonymous contextual rule to the query. Otherwise, it sends the user’s token with the query.

// Minimal setup; credentials and index name are placeholders, and
// getUserToken() is your app's helper for the current user's token
const algoliasearch = require('algoliasearch');
const client = algoliasearch('YOUR_APP_ID', 'YOUR_SEARCH_API_KEY');
const index = client.initIndex('YOUR_INDEX_NAME');

// Set the query and get the current user token
const query = 'User search query';
const userToken = getUserToken();

// Start with empty request options
const requestOptions = {};

// Is the user token anonymous?
if (userToken === null || userToken === undefined || userToken === 'YOUR_ANONYMOUS_USER_TOKEN') {
  // Apply the contextual rule called 'anonymous'
  requestOptions.ruleContexts = ['anonymous'];
} else {
  // Send the identified user's token with the query
  requestOptions.userToken = userToken;
}

// Perform the search
index.search(query, requestOptions)
  .then(result => {
    // Handle the search results
    console.log(result);
  })
  .catch(error => {
    // Handle search errors
    console.error(error);
  });

Query parameters

Any parameters you send at query time override those set by the A/B test. For example, if you enable personalization on variant B but also enable it on every query, the test results will be meaningless because both variants end up personalized.
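For instance, the first call below silently overrides the test’s per-variant personalization setting, while the second leaves it under the test’s control. A sketch, reusing the index from earlier:

// Anti-pattern: forcing personalization at query time means both
// variants run personalized, regardless of the test configuration.
index.search('query', { enablePersonalization: true });

// Better: omit the parameter and let the A/B test control it.
index.search('query');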

To get accurate results from your A/B test, make sure that you only change the settings for the test itself, and not for individual queries.

Relevant searches

Part of your search implementation may send searches from a dashboard or internal page. You should exclude any searches not performed by your users from analytics by setting the analytics parameter to false.

To also exclude the search from A/B testing, set enableABTest to false.
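For example, a search issued from an internal dashboard might set both parameters. A minimal sketch:

// Internal or admin search: keep it out of analytics and A/B tests.
index.search('query', {
  analytics: false,    // exclude from Algolia analytics
  enableABTest: false, // don't assign this search to a variant
});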

Export A/B test data to an external analytics platform

You can use the getRankingInfo parameter to retrieve the A/B test ID and variant ID. This can help you to track user variants and behavior in third-party tools you already use, like Google Analytics.
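A sketch of reading those IDs from the response and forwarding them to a tracker; the gtag call and event name stand in for whatever analytics tool you use:

// Request ranking information along with the results.
index.search('query', { getRankingInfo: true })
  .then((response) => {
    const { abTestID, abTestVariantID } = response;
    if (abTestID !== undefined) {
      // Hypothetical event: forward the variant to your analytics tool.
      gtag('event', 'ab_test_assignment', {
        ab_test_id: abTestID,
        ab_test_variant_id: abTestVariantID,
      });
    }
  });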

By default, the engine creates an analytics tag for each A/B test variant. When exporting statistics to other systems, you can filter on these tags to view per-variant data.

Avoiding outliers

Outliers can be caused by bots crawling your web pages and performing thousands of searches. Because bots are counted as users, they can significantly skew metrics such as click-through rate. Although outliers are automatically removed from A/B test results, you can also limit bot traffic at the source with rate limits, HTTP referrers, and robots.txt.

Using rate limits

You can set up API keys with rate limits to cap the number of API calls per hour and per IP address. The default value is 0 (no rate limit); set a non-zero limit to prevent automated searches from skewing A/B test results.
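A sketch using the JavaScript client’s addApiKey method; adminClient is a client initialized with an admin API key, and 100 queries per IP per hour is an illustrative value, not a recommendation:

// Create a rate-limited search-only key (requires admin credentials).
adminClient.addApiKey(['search'], {
  description: 'Rate-limited search key for the storefront',
  maxQueriesPerIPPerHour: 100,
}).then(({ key }) => console.log('New key:', key));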

Using HTTP referrers

Most browsers send the originating page’s URL through the Referer or Origin HTTP headers. Like all HTTP headers, these can be spoofed, so you shouldn’t rely on them to secure your data: browsers set Referer automatically, but tools like curl can send any value, and some browsers, like Brave, don’t send these headers at all.
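That said, referrer restrictions on an API key are still useful as one deterrent among several. A sketch, again with a hypothetical admin client:

// Restrict a search key to requests whose referrer matches your domain.
// A deterrent, not a security boundary: referrers can be spoofed.
adminClient.addApiKey(['search'], {
  referers: ['https://www.example.com/*'],
}).then(({ key }) => console.log('New key:', key));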

Using robots.txt

Tell crawlers not to visit your search pages by configuring a robots.txt file. For example, some websites allow Google to crawl the main search page but disallow other search result pages.
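A robots.txt along these lines allows crawling of the search landing page but not query-specific result pages, assuming your search pages live under /search (Allow and the $ end-of-URL marker are extensions that Google supports):

User-agent: *
# Allow only the bare search page; block deeper search URLs.
Allow: /search$
Disallow: /search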
