Behind the Scenes: A Successful Real-World Migration to CloudFront


Hey Reader 👋🏽

This issue is about a recent real-world experience that kicked off right with the new year! 🎉

Once upon a time... 🦄

It all started in September 2024, when Edgio, the main CDN provider we used for one of my large enterprise projects, filed for bankruptcy.

Edgio was natively integrated into Azure, allowing you to use it without leaving the Azure ecosystem. It also featured a powerful rules engine (allowing for all kinds of conditions, redirects and rewrites) and didn’t require any upfront or monthly fees, only charging for actual usage. With the native integration, it didn't feel like a third-party product in the first place. But it definitely was.

If your project was on Azure and you didn’t want to purchase a third-party tool like Cloudflare, it was the absolute best choice.

The sundown was scheduled for the last quarter of 2025, so there was plenty of time ahead. Microsoft also included a note in their announcement hinting that they could only try to keep the lights on, with no guarantee.

As is often the case with technology, this didn’t turn out as expected.

In this issue, I’ll explain how we managed to stay online despite the tight sundown schedule.

Brief Overview of the Situation

First things first: the project I’m discussing uses Azure as the main cloud provider. We hadn’t used AWS before and only a few small parts of GCP. This makes sense because managing multiple clouds adds extra costs due to extended governance processes.

Moreover, Azure is a great cloud platform, and Microsoft does many things well.

One of these advantages was the seamless integration with Edgio as a CDN solution that was much more powerful than Azure Frontdoor.

We’re making heavy use of Edgio's rules engine, as we’re running a lot of micro frontends (even better: single page apps) and other technical helpers behind a single domain.

This complicates matters, as we need to handle redirects and rewrites within our CDN.

All these rules rely heavily on regex. We match groups and use them in the following actions. Many of the rules even include negative lookaheads, which isn’t common in a typical CDN rule set. Adding to the challenge, most of the rules were written by former team members. Since few changes were needed in the past, the current team was unfamiliar with the existing rules.

All the rules had to be set up in a cumbersome web console with many dropdowns, and there was no way to test these rules beforehand. There was also no support for common Infrastructure-as-Code tools like Pulumi or Terraform, so changes had to be manually transferred to each stage. At the end of a new configuration, Edgio would generate an XML file that exactly matched the rules configured in the dropdowns. At least this was something that could be easily recorded for history.

In summary, even though the developer experience wasn’t ideal, the service performed its job exceptionally well. In the end, that’s what matters most.

The Crisis

After reading the initial announcement and seeing the reports about a potential acquisition by Akamai, we were quite confident that we had plenty of time to migrate.

Right in the middle of my vacation, just a few days before the Christmas holidays, Microsoft announced that Edgio would not shut down in Q4 2025, but on January 15th, 2025. So our timeline changed from “more than 9 months” to “less than 4 weeks” at the worst possible time of the year.

The “less than 4 weeks” didn’t consider that in Germany (where the project and most of its employees and suppliers are based), people typically don’t work between Christmas and New Year. It’s also common to take the first week of the year off.

This meant the realistic timeline was less than 2 weeks, or less than 10 working days. For replacing a major tool in a large project with many unknowns and no evaluated migration strategy, this isn’t just a tight schedule; it’s a perfect storm.

Decision to Migrate

Microsoft sent an email saying that all Edgio-based deployments will automatically join a “best-effort migration” to Azure Frontdoor “between the 7th and 14th of January.” Since we were already using Azure Frontdoor elsewhere, I was worried that this “best effort” approach wouldn’t work well for us. Our rule set, with over 90 conditions and more than 70 rewrite rules, was quite complex.

I doubted it was feasible to implement this with Frontdoor.

Evaluating Options

When I returned from vacation on December 30th, I immediately began looking into this, even though my regular work was set to start on January 2nd. I knew we couldn’t afford to waste any days.

Being a rational person, I considered our options:

  1. Wait for Microsoft to migrate our Edgio configuration to Frontdoor and hope for the best.
  2. Immediately search for a good alternative and rebuild everything there.

As mentioned earlier, I had almost no trust in option 1 because of Frontdoor's limited features and my extensive experience with Microsoft support, which is incredibly frustrating, even with an enterprise support plan.

Besides that, an improper in-place migration of existing DNS entries could essentially take us offline even before January 15th.

Why Amazon CloudFront Was Chosen

I have a long history with AWS services (who would have guessed?), so my first thought was: “Rebuilding these rules with simple JavaScript that runs on the edge with CloudFront will be the most efficient approach.”

I was confident AWS was already in use elsewhere in the company, making it a practical solution. Choosing a new third-party tool that hadn’t been purchased yet wasn’t an option, as the process would take more time than we had.

Migration Process

In my mind, the goal was clear from the start:

  1. All redirects could be easily managed with a CloudFront viewer request function. This function is triggered whenever a CloudFront URL is accessed. We can check the request URL and instantly send a redirect if it meets our conditions.
  2. URL rewrites will happen in a Lambda@Edge function at the origin request step of the process. This function is invoked when the origin is called and CloudFront doesn’t already have something cached.
  3. For debugging purposes, a viewer response Lambda@Edge function can be used. This is great for tracking our original request URL and the URL that was actually called at the origin.
  4. In the viewer response CloudFront function, we can define our caching behavior and handle everything else we might need to do.

The picture above illustrates what the final solution should look like when everything works together. By using CloudFront functions for viewer requests and responses, we can significantly save money and improve performance.

Step-by-Step Migration Process

The migration involved several steps:

  1. Reverse engineer all rules and convert them to JavaScript.
  2. Set up the necessary AWS infrastructure and configure everything correctly.
  3. Expand the existing tests to thoroughly cover every rule and scenario.
  4. Deploy the new solution without changing the DNS entry to ensure it works. By adding a single line to /etc/hosts, we can point an existing DNS name at a specific IP (one from our CloudFront distribution). This allowed us to safely test the new solution in our heavily integrated staging environment without affecting other teams.
  5. Switch the DNS record to CloudFront in the staging environment.
  6. Run smoke, end-to-end, and acceptance tests.
  7. Go live by switching the production DNS records.

With a proper timeline, this wouldn't be a big issue, but we had almost no time.
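The local override from step 4 boils down to a single /etc/hosts entry. The IP and domain below are placeholders; the actual edge IPs can be found by resolving the distribution’s domain name (e.g. with dig):

```
# /etc/hosts — placeholder IP for one CloudFront edge node
#   (resolve it first, e.g.: dig +short d1234abcd.cloudfront.net)
13.224.10.15   staging.example.com
```

With that entry in place, the browser and CLI tools resolve the staging domain to CloudFront while the public DNS record still points at the old CDN.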

In the end, everything worked out right on time, with Microsoft shutting down our Edgio deployment just one day after the migration.

Nevertheless, we faced a lot of challenges along the way, including difficulties in reverse engineering, AWS default Service Quotas, CloudFront limitations, and business requirements.

You can read about all of them in the full blog post, including our lessons learned and how Microsoft handled the "best effort" migration in the end (spoiler: it didn't go great 😅).

Tobias Schmidt & Sandro Volpicella from AWS Fundamentals
Cloud Engineers • Fullstack Developers • Educators

