
It was time to migrate our hefty Analytics data to another server. We had been processing our Analytics data on Citus Data’s Citus Cloud, hosted on AWS, for five years when we learned that Microsoft had acquired Citus Data. We also learned that the Citus co-founders and team had started working on a similar service on Microsoft Azure, called HyperScale (Citus). With their help, we decided to migrate our Citus Cloud database to Citus on Azure. 

One of the big advantages of migrating at this time was that the Citus team had ownership of both systems, so they would be able to perform parts of the migration process for us. 

The data, pipeline, and PostgreSQL

Our real-time Analytics data exists in two different regions: one in the US, one in Europe. To give you some perspective about the migration, those two clusters represent approximately 5TB of data spread over 20 machines.

Citus transforms PostgreSQL into a distributed database. Here’s our Analytics pipeline:

[Figure: analytics pipeline]

  1. At the top left, a user searches, which generates logs
  2. The logs are aggregated and sent to remote storage
  3. Whenever the storage receives a new log file, a message is added to the queue
  4. Our writer-workers read those messages and compute aggregates that are stored in our Citus database

The bottom left shows the customer-facing access to the Analytics data via our Analytics APIs. The APIs query our Citus database, performing additional aggregates on top of the aggregates already stored inside the database.
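
To make the read path concrete, here is a hedged sketch of the kind of “aggregates on top of stored aggregates” query the Analytics APIs might run. The table and column names are illustrative, not our actual schema:

   -- Illustrative only: roll 5-minute aggregates up into daily figures
   -- for one application over the last week.
   SELECT date_trunc('day', period) AS day,
          sum(search_count)         AS searches,
          sum(click_count)          AS clicks
   FROM   search_rollups_5min
   WHERE  app_id = 'MY_APP'
     AND  period >= now() - interval '7 days'
   GROUP  BY 1
   ORDER  BY 1;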

So how did we migrate this data without any downtime?

Upfront constraints: no disruption of services, 100% transparency

We started by defining guidelines and constraints:

  • Minimal downtime during the migration. Our analytics data is available globally, 24/7. It’s central to our product base and machine learning systems. With Analytics, our customers can scale and continuously improve and configure their solution.
  • Transparent to our users. We could not disturb their usage. Transparency is not only about avoiding downtime; it’s also about not introducing any performance or service differences or regressions.
  • No refactoring. We wanted to focus on the infrastructure and data, not on adding any new features or instability into the existing codebase.
  • No change in pricing. We did not want to add infrastructure costs to our customers under any circumstances.

Plan and design before doing: writing a playbook

Because of these no-service-disruption/transparency constraints, we decided to write down every action we would take before starting the project. We knew from experience that migrating a real-time, distributed big data system is not simply about moving the data onto a different machine and changing connection strings. 

The playbook

  • Comparison analysis:
    • Ensure there were no significant differences between the old and the new systems
  • Preliminary considerations:
    • Coordinate actions with suppliers and internal teams (e.g., synchronize with Microsoft and our infrastructure teams to list the actions they would take)
    • Establish verification steps at multiple stages of the process, to make sure no silent failures went undetected
    • Plan a recovery procedure that could be used at any point to revert and roll back to our previous instance in case anything went wrong
  • Migration steps:
    • Copy the old to the new
    • Test each piece of the migration process
    • Perform simulation tests
    • Test in production
    • Rollout / Go live
    • Monitor

For each item in our playbook, we wrote down the actions, tools, and people needed for the task. 

One thing to note. While a playbook should build confidence, it should not create a false sense of security. Every playbook will contain hidden biases and sometimes false assumptions. In our case, the playbook contained an important bias which we will discuss below. Luckily, we detected the bias early in the pre-testing phase. 

We’ll discuss two parts of our playbook: the comparative analysis and migration steps. 

Comparing the old with the new

To ensure a perfect switch-over, we examined each physical and software layer. We wanted to make sure that the target system would not introduce any different or new elements into the current workflow. 

Adhering to our constraints, we looked at the following:

  • Hardware: CPU, RAM, and disk performance
    • Good news: Azure had comparable machines, everything looked good on this side
  • Software
    • Good news: Same Citus & PostgreSQL versions, so we could migrate without having to bump to a new major version
    • Good news: Microsoft would manage the versioning during the migration process
  • Infrastructure
    • Good news: Same SLA
    • Good news: Same regions available for our servers
    • Good news: Same security & confidentiality features

One difference we noted was related to the file system. On AWS, we relied on ZFS for its compression capabilities; however, ZFS is not supported on Azure. We checked whether this would be a problem for the migration: without compression, we had to keep in mind that we would need more disk space on HyperScale – in our case, 2 to 3 times more. 
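
For anyone making a similar estimate, Postgres and Citus can report logical data sizes directly. This is a minimal sketch using standard Postgres and Citus sizing functions; the compression ratio you divide out is specific to your own data:

   -- Logical size of the current database...
   SELECT pg_size_pretty(pg_database_size(current_database()));

   -- ...and of each distributed table (citus_table_size comes from the Citus extension).
   SELECT logicalrelid,
          pg_size_pretty(citus_table_size(logicalrelid))
   FROM   pg_dist_partition;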

Thanks to this comparison analysis, we were sure that the new system would not differ from the old in any significant way.

So, with all that upfront design and analysis, we were ready to go. 

It’s best to separate the rest of this article into two parts:

  • Testing strategies
  • Replication and replacement

Testing strategy: test every link in the chain

Central to testing is breaking a process down into its most discrete parts. In our case, the natural breakdown was between reads and writes, and we wanted to test each of those processes individually.

Performance problems with our write operations

The read operations did not cause any concern – as you’ll see later, there was no regression. On the other hand, we immediately noticed a regression in writing to HyperScale (Citus): it was twice as slow and consumed twice as many resources.

[Figure: write timings]

Here we discovered an important bias. We had entered the testing phases mostly concerned about read regression, as our products are known for their speed. But the read part worked perfectly right out of the box. The delays were on the write side – which impacted the end-user experience as well, in both performance and service. 

So, here’s where we had to adjust the playbook: we needed additional steps and tools focused on the write processes. Up to that point, we only had tools to examine problems with reads. Now we had to add the following tools to examine writes:

  • Auditing the `pg_stat_statements` view, to track the performance of the queries being executed on the database
  • Reading SQL query plans with the `EXPLAIN ANALYZE` command, to compare the old and new performance

Using PostgreSQL’s `pg_stat_statements`, we were able to determine that our stored procedure `create_rollup` was slower on HyperScale than on Citus Cloud. In fact, its mean execution time was twice that of the Citus one. But why?
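
For reference, here is a sketch of the kind of query we ran against `pg_stat_statements` on both clusters. On PostgreSQL versions up to 12 the columns are `mean_time` and `total_time`; from version 13 onward they are named `mean_exec_time` and `total_exec_time`:

   -- Find the slowest statements on average, filtered down to our rollup procedure.
   SELECT query,
          calls,
          mean_time,
          total_time
   FROM   pg_stat_statements
   WHERE  query ILIKE '%create_rollup%'
   ORDER  BY mean_time DESC
   LIMIT  10;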

[Figure: pg_stat_statements output]

We had to dig deeper, using a query plan. There, we found out that HyperScale was triggering the JIT, whereas Citus Cloud wasn’t.

For those who don’t know it, JIT stands for “just-in-time compilation”. In short, its role is to optimize specific queries at runtime. In our case, JIT was constantly being executed – even for queries that were already optimized! This considerably slowed down the execution of this specific query. 
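
If you want to check whether JIT kicks in for one of your own queries, the output of `EXPLAIN ANALYZE` ends with a dedicated “JIT:” section (functions compiled, options, timings) whenever it does. The query below is purely illustrative and reuses the hypothetical schema from earlier:

   -- When JIT runs, the output of this statement includes a trailing "JIT:" block
   -- with compilation and optimization timings.
   EXPLAIN (ANALYZE, VERBOSE)
   SELECT app_id, sum(search_count)
   FROM   search_rollups_5min
   WHERE  period >= now() - interval '1 day'
   GROUP  BY app_id;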

[Figure: JIT timings]

The query plan also revealed the cause of the unnecessary triggering: JIT was not respecting a configuration option we had set. Specifically, we had disabled nested loops:

   -- Disable nested-loop joins for the citus role, on the coordinator and on every worker
   ALTER ROLE citus SET enable_nestloop = off;
   SELECT run_command_on_workers('ALTER ROLE citus SET enable_nestloop = off');

Indeed, nested loops hurt the performance of our insert queries on our Citus setup. However, as stated in the PostgreSQL documentation, nested loops can never be completely disabled. When looking at the query plans, we saw that the JIT-compiled plans still used some.

Our solution was to disable the JIT entirely:

   -- Turn JIT off for the citus role, on the coordinator and on every worker
   ALTER ROLE citus SET jit = off;
   SELECT run_command_on_workers('ALTER ROLE citus SET jit = off');

We then confirmed it resolved our issue: after disabling the JIT on the HyperScale instance, both systems ran with the same performance and consumed the same amount of resources.

An update failure 

However… slow queries weren’t our only problem. We also received this surprising error message in our logs:

[Figure: duplicate key error]

That “duplicate key” error was quite surprising to us, as our system was designed to only ever produce unique keys. We realized that our index had somehow become corrupt, but as before, we didn’t know why, so we had to dig in.

This time, we found the answer by searching online for people who had encountered a similar issue. It turned out that the operating system’s `libc` was responsible for the error.

The C standard library, or libc, is the standard library for the C programming language, as specified in the ISO C standard. It is used by almost all programs, including PostgreSQL. Postgres relies on the libc for the UTF-8 collations that determine the ordering of its indices. Unfortunately, the `libc` version was different on Azure, leading to this erroneous behavior.

As we learned from this blog post: Beware of the libc! Identical Citus and PostgreSQL versions don’t mean identical behavior if the underlying `libc` differs. 

We crafted a recovery procedure in case this happened during the migration: a simple reindex of the affected table if the error occurred. As always, it’s better to have that kind of operation worked out in advance. 
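
As a sketch, the recovery procedure amounts to a single command, with the caveat that on a distributed Citus table you should check whether your Citus version propagates `REINDEX` to the shards, or run it on the workers as well. The table name here is illustrative:

   -- Rebuild the indices of the affected table if the duplicate-key error shows up.
   -- REINDEX locks the table, so schedule it carefully on a live cluster.
   REINDEX TABLE search_rollups_5min;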

A final warning regarding this issue: it isn’t specific to migrations. For example, it could happen on your production database if your provider or your infrastructure team upgrades the operating system or the `libc` on the machine where your Postgres is running.
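
On recent PostgreSQL versions (13 and later for libc collations), you can detect such a mismatch before it corrupts anything by comparing the collation version recorded in the catalog with the one the running `libc` actually provides. This is a hedged sketch, not something we needed at the time:

   -- Collations whose recorded version no longer matches what libc provides.
   -- Any index on a text column using these collations should be rebuilt.
   SELECT collname,
          collversion,
          pg_collation_actual_version(oid) AS actual_version
   FROM   pg_collation
   WHERE  collprovider = 'c'
     AND  collversion IS DISTINCT FROM pg_collation_actual_version(oid);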

Migration: replicating & replacing

Now that we’ve covered how we tested all of the pieces, it’s important to explain how we went from one system that had been working fine for five years to another system that needed to run in exactly the same way on day one. 

Replicating

The replication involved:

  • Running the copy procedures to move the large volume of data from one system to the other
  • Testing the pieces (already discussed above)
  • Performing simulation tests on the whole process
  • Testing in production
  • Switching the pipeline

The goal here was to duplicate the Citus database on the HyperScale one. 

  • With the help of the Citus team at Microsoft, we created a HyperScale DB from the existing Citus cloud backups. At that point, we had a fully replicated HyperScale DB in Azure.

[Figure: copy data procedures]

  • On our side, once the DB was replicated, we “just” wired the duplicated services to this new instance.

[Figure: copy the pipeline]

It was that “simple”. Thanks to our testing, the redirected pipeline worked seamlessly with the new infra.

Simulating production

We already discussed how we tested each piece of the pipeline. For the last tests, we ran the whole process on our test server, to ensure that we had the same data and performance in both systems. 

We tested by simulating the same conditions and running our most popular requests. We logged the 1000 longest API calls our current instance received over a given time frame. We then replayed those on the new instance. As you can see in this graph, we had results that were similar enough for us to feel safe testing further:

[Figure: write speeds graph]

As for the overall user experience, we could see that reads performed at the same speed on both the old and the new systems:

[Figure: performance comparison]

Testing in production

To be absolutely sure that we were ready, we did something terrible: we tested in production. While, in general, this is something we avoid as much as possible, it felt like the right option in this case, as it allowed us to test part of our migration playbook.

The way it worked was simple:

  • As before, the Citus team created a HyperScale follower of the Citus Cloud instance. Unlike in our write testing, here we waited for the follower to be completely caught up with the live instance.
  • We then promoted the HyperScale instance to be autonomous. From that point on, the data on this new instance would no longer change. Since our customers mostly look at historical data spanning several days, the functionality wouldn’t change, and serving slightly stale analytics data for a short period was something we could afford. So we simply switched our read traffic to target the HyperScale instance for one hour, and watched our monitoring metrics during that time.

You can see the seamlessness of the migration on those graphs. Can you spot the switch-over? We could not.

[Figure: overall performance comparison]

Going live

Here are the steps we performed for the migration, as displayed in a GIF:

[Animation: migration steps]

All in all, it worked like a charm. The only serious hiccups took place in the pre-testing phases – where they should be. The whole migration process took 40 minutes. The following day, we ran the same migration on our US cluster, and it went exactly the same way.

Takeaways

Overall, we learned the importance of creating an exhaustive migration playbook, challenging our biases, and monitoring for errors and performance regressions.

There are other takeaways: 

  • If your system doesn’t require real-time consistency, consider using eventual consistency. Being able to queue messages during the migration is what enabled us to perform it with zero downtime.
  • The idempotence of our writes was a great ally when investigating performance issues. Instead of needing to recreate a testing environment that mimicked production, we could simply replay the same write queries in different scenarios (see the sketch after this list).
  • Be prepared for the unexpected. Even with as much planning as we did, we still encountered a few hiccups on migration day. For example, we saw some merging issues on GitHub with stacked pull requests. However, we knew we could correct this problem without changing anything in our playbook.
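
As an illustration of that second point, an idempotent rollup write can be as simple as an upsert keyed on the aggregation window; the schema below is hypothetical, not our actual one:

   -- Replaying the same message always produces the same row, so write
   -- queries could safely be re-run against the new cluster while we
   -- investigated performance.
   INSERT INTO search_rollups_5min (app_id, period, search_count, click_count)
   VALUES ('MY_APP', '2021-03-01 10:05:00+00', 42, 7)
   ON CONFLICT (app_id, period)
   DO UPDATE SET search_count = EXCLUDED.search_count,
                 click_count  = EXCLUDED.click_count;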

The final takeaway is about teamwork, or more specifically, working in pairs. In fact, that’s why there are two authors here: we performed this migration together, validating each other’s work at nearly every step of the way. Pairing was a great way to challenge ideas and ensure we didn’t make any manual mistakes. As a result, we felt confident during the whole process. If you have to tackle a complex operational project in the future, we can only recommend that you pair up!

About the authors
Matthieu Dumont

Senior Software Engineer

Antoine Hemery

Senior Software Engineer

