Hi Reader 👋🏽,
We hope everything's well on your side - maybe you're already heading into vacation, or you're just enjoying the weather, which is hopefully not too hot! ☀️
This week we want to talk about Amazon Route 53 🌐
To understand what Route 53 is and how to work with it, we'll first go through the fundamentals of the worldwide DNS. Then we'll talk about Route 53's capabilities and why it's more than just a simple resolver.
This includes a look at all of its simple and advanced routing policies and more great features like health checks that can be tied to your records!
Before diving into this, we'd like to give you a wrap-up in the form of an infographic.
Naming is crucial in distributed systems for identifying entities.
The internet's Domain Name System (DNS) provides exactly that: a way to translate human-readable domain names into the IP addresses machines use.
A common example to visualize this is to think of it as a phone book for the internet. If you want to call somebody, you need some kind of directory that helps you find their number. 📖
DNS is decentralized by design: control is distributed across many organizations, which allows high scalability and avoids a single point of failure.
It has a hierarchical structure maintained by registries in different countries: root name servers sit at the top, followed by top-level domain (TLD) servers, and finally authoritative name servers, which hold the actual addresses for DNS records. Recursors walk this hierarchy on behalf of clients.
When resolving domain names, we need to distinguish between two types of services: recursive resolvers (recursors), which look up answers on behalf of clients, and authoritative name servers, which hold the records themselves.
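The iterative walk through the hierarchy can be sketched in a few lines of Python. All server names and addresses below are made up for illustration; a real recursor talks to these servers over the network.

```python
# Toy model of iterative DNS resolution: a recursor asks a root server,
# then the TLD server, then the authoritative server.
# All names and addresses here are illustrative.

ROOT_SERVER = {"com": "tld-server-com"}                        # root refers to TLD servers
TLD_SERVERS = {"tld-server-com": {"example.com": "ns1.example.com"}}  # TLD refers to authoritative servers
AUTH_SERVERS = {"ns1.example.com": {"www.example.com": "203.0.113.10"}}  # hold the actual records

def resolve(name):
    labels = name.split(".")
    tld = labels[-1]                                 # e.g. "com"
    domain = ".".join(labels[-2:])                   # e.g. "example.com"
    tld_server = ROOT_SERVER[tld]                    # 1) ask a root server
    auth_server = TLD_SERVERS[tld_server][domain]    # 2) ask the TLD server
    return AUTH_SERVERS[auth_server][name]           # 3) ask the authoritative server

print(resolve("www.example.com"))  # -> 203.0.113.10
```

A real resolver also handles retries, multiple record types, and caching, which is exactly what the next section is about.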
What is caching, and why would it be impossible to provide DNS on a global scale without it?
Caching is the process of saving information for a short period to reduce the load on up- or downstream systems that are part of a request.
Caching is crucial to keeping the internet's DNS resolvers healthy. 💪
Caching takes place in different locations. The two most obvious ones are your own machine (the browser and operating system) and your recursive resolver.
The closer the caching happens to your browser, the better, as it cuts out processing steps that would otherwise travel through several systems.
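A minimal sketch of such a cache: every answer is stored together with an expiry time derived from its TTL, and stale entries are dropped on lookup. The injectable clock is just there to make the expiry behavior easy to demonstrate.

```python
import time

# Minimal TTL cache, like the one a stub resolver or recursor keeps.
# Answers are served from memory until their TTL expires.
class DnsCache:
    def __init__(self, clock=time.monotonic):
        self._entries = {}   # name -> (address, expires_at)
        self._clock = clock  # injectable clock makes expiry testable

    def put(self, name, address, ttl_seconds):
        self._entries[name] = (address, self._clock() + ttl_seconds)

    def get(self, name):
        entry = self._entries.get(name)
        if entry is None:
            return None
        address, expires_at = entry
        if self._clock() >= expires_at:  # TTL expired: drop the stale answer
            del self._entries[name]
            return None
        return address
```

This is also why DNS changes don't take effect instantly: downstream caches keep serving the old answer until its TTL runs out.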
Now that we understand how DNS works, let's explore Amazon Route 53! 🚀
A hosted zone acts as a container for several records that specify how traffic should be routed for the root domain and its subdomains. Hosted zones come in two different flavors: public and private.
You can use Route 53 to route traffic to various AWS services, including CloudFront distributions, S3 static websites, Elastic Load Balancers, and API Gateway.
There are different routing policies that you can use for your domain names. Let's dive into them in the following paragraphs.
The policy determines how Amazon Route 53 responds to queries for those domain names. Which record types fit best depends heavily on your requirements.
The standard DNS record for a single resource.
Multiple records with the same name are not allowed.
Specify multiple values for one record. Route 53 returns all values, and the client selects one itself.
Weighted routing lets you define multiple records for the same domain name and, as the name suggests, control the traffic distribution among them by assigning a weight to each.
Prominent use cases are load balancing and testing new features or releases.
Weighted routing not only enables you to quickly scale your application, but also to build blue/green deployments or do traffic shifting that is fully under your control.
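The selection logic behind weighted routing can be sketched as a weighted random pick: each record's chance of being returned is its weight divided by the sum of all weights. The endpoint names and weights below are illustrative.

```python
import random

# Sketch of how weighted routing distributes answers: each record carries a
# weight, and the chance of being returned is weight / sum(weights).
# Endpoints and weights below are illustrative (an 80/20 blue/green split).
RECORDS = [
    ("blue.example.com", 80),   # current version keeps most of the traffic
    ("green.example.com", 20),  # new version gets a 20% share
]

def pick_endpoint(records, rng=random):
    names = [name for name, _ in records]
    weights = [weight for _, weight in records]
    # random.choices draws proportionally to the given weights
    return rng.choices(names, weights=weights, k=1)[0]
```

Setting a record's weight to zero takes it out of rotation entirely, which is exactly the mechanism used for blue/green switches.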
In a multi-region setup with latency-based routing, if the region closest to a customer becomes unavailable, we need to avoid routing requests to it.
That's where health checks come in. Health checks in Route 53 allow you to monitor the availability of AWS-native or external endpoints.
Those checks are configurable: you can choose the protocol and endpoint to probe, the request interval, and the failure threshold.
The exciting part: you can attach health checks to your latency-based records. If a location's health check becomes unhealthy, Route 53 stops returning the corresponding target for that DNS record.
In a multi-region setup, you want to route requests to the closest region for faster responses.
With Latency-based routing, create multiple records for a domain, each for a specific region. When queried, Route 53 chooses the record with the lowest latency.
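Combining both ideas, the routing decision boils down to: drop unhealthy regions, then pick the lowest-latency one. A minimal sketch, with illustrative regions and latency figures:

```python
# Sketch of latency-based routing combined with health checks: unhealthy
# regions are filtered out first, then the lowest-latency record wins.
# Region names and latencies are illustrative.

REGIONS = {
    "eu-central-1": {"latency_ms": 20, "healthy": True},
    "us-east-1": {"latency_ms": 95, "healthy": True},
    "ap-southeast-1": {"latency_ms": 180, "healthy": True},
}

def route(regions):
    healthy = {name: r for name, r in regions.items() if r["healthy"]}
    if not healthy:
        raise RuntimeError("no healthy region available")
    # among healthy regions, return the one with the lowest latency
    return min(healthy, key=lambda name: healthy[name]["latency_ms"])
```

If the closest region's health check fails, the next-fastest healthy region takes over automatically, which is the failover behavior described below.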
Geo-Location records allow routing based on the origin of your clients. This enables you to easily localize content or implement geo-restrictions to comply with regulations.
Locations can be specified at the granularity of a continent, a country, or even a US state.
For serverless architectures, deploying your ecosystem globally won't increase your costs significantly: unused resources don't contribute to your bill, so adding more regions barely affects your charges.
To build a multi-region setup with Route 53, create a latency-based record for each region and attach a health check to each of them.
If all regions are healthy, requests go to the region with the lowest latency. Even if a region receives little or no traffic, that doesn't cause any issues.
In case of a regional outage or accidental application breakage due to a faulty deployment, Route 53 quickly fails over. It won't return the unresponsive region for any DNS query, preventing significant outages.
Failing over may mean slower responses for clients that are now served by a more distant region, but the application's availability is only briefly affected.
Canary deployments roll out new app versions to a small user group first to avoid deploying errors to many users. This allows checking for issues before a wider rollout.
A quick jump into networking fundamentals: services like the Application Load Balancer (ALB) can route traffic based on headers or cookies, so you can send specific users to a dedicated deployment.
The ALB will terminate the TLS connection. The downstream service, such as an ECS task, communicates with the ALB over a separate HTTP or HTTPS connection, depending on the configuration of the ALB.
Some architectures can't use this kind of traffic routing, for example those that need a secure connection all the way to the application container.
If the TLS connection must be terminated inside the container (e.g. by an NGINX container) in an ECS cluster, you need a Network Load Balancer (NLB) instead of an ALB.
The NLB works on the transport layer and doesn't terminate the TLS connection; it forwards the request as is. This ensures end-to-end encryption from the client to the application, increasing security.
➡️ The obvious downside: for HTTPS requests, services in between (including the NLB) can't inspect request details to make routing decisions. Instead, we can set weights on target groups to route a percentage of the traffic to each group.
In serverless architectures, we use AWS API Gateway instead of load balancers to route requests to different Lambda function versions.
Setting up routing to different function versions can be challenging if the infrastructure is coupled with the function code. However, with serverless architectures, we can replicate infrastructure without worrying about costs.
To delegate routing decisions to Route 53, use weighted records and run multiple stacks, i.e. multiple replicas of the application.
Deploy multiple stacks of the application infrastructure, each including an AWS API Gateway and the Lambda function. Global infrastructure (e.g. SQS queues and DynamoDB tables) lives in its own stack, independent of the application stacks.
Each regional API gateway gets a weighted DNS record, with the inactive stack set to weight zero and the active one set to 100.
To roll out a new code deployment, deploy the inactive stack first. Once completed, adjust weights in the DNS record.
For example, set the new version stack to receive 20% of the traffic while the old one keeps 80%.
After we've run this configuration for a while and made sure there are no error spikes or other suspicious behavior in our application, we can switch the rest of the traffic to the new version.
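The weight shift described above can be simulated to see how the traffic split behaves. Stack names, weights, and request counts below are illustrative.

```python
import random
from collections import Counter

# Simulates the weight shift during a canary rollout: the new stack starts
# with a 20% share and, once it looks healthy, takes 100% of the traffic.
# Stack names and numbers are illustrative.

def serve_requests(weights, n, rng):
    stacks = list(weights)
    # draw n requests, each routed proportionally to the record weights
    return Counter(rng.choices(stacks, weights=[weights[s] for s in stacks], k=n))

rng = random.Random(42)  # seeded for reproducibility
canary = serve_requests({"old": 80, "new": 20}, n=10_000, rng=rng)
# roughly 20% of requests hit the new stack during the canary phase
full = serve_requests({"old": 0, "new": 100}, n=10_000, rng=rng)
# after the switch, the old stack receives no traffic at all
```

Because the decision is made per DNS query, the split is only approximate over short windows; client-side caching also smears the transition over the records' TTL.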
Route 53 has very transparent pricing.
What you'll be charged for is simple: a small monthly fee per hosted zone and the number of DNS queries Route 53 answers.
There are no upfront costs, so you don't have to worry about anything when starting to use Route 53.
That's it for Amazon Route 53 and for today.
We hope that you've learned something new and you're excited to use Route 53 for yourself!
Have a great week ✌️
- Sandro & Tobi
If you're interested in more, have a look at our AWS Fundamentals blog!
Join our community of over 8,800 readers delving into AWS. We highlight real-world best practices through easy-to-understand visualizations and one-pagers. Expect a fresh newsletter edition every two weeks.