AWS Edge Services - Getting started

Hands-on tutorials and content covering the basic concepts.

Achraf Souk
Amazon Employee
Published Apr 29, 2025
Last Modified May 12, 2025

AWS Edge Services in a nutshell

An AWS Region is a physical location where AWS clusters data centers and operates regional services, like EC2 and S3. In the specific case of online applications, user traffic may traverse multiple public networks to reach a regional infrastructure. If you want to address the drawbacks of traversing uncontrolled networks in terms of performance, reliability and security, you should consider adding AWS edge services to your architecture. AWS Edge Services like Amazon CloudFront and AWS Global Accelerator, operate across hundreds of worldwide distributed Points of Presence (PoPs) outside of AWS Regions. Users are served from these PoPs within tens of milliseconds on average, and, when needed, their traffic is carried back to your regional infrastructure over the AWS global network instead of going over the public internet. The AWS Global Infrastructure is a purpose-built, highly available, and low-latency private infrastructure built on a global, fully redundant, metro fiber network that is linked via terrestrial and trans-oceanic cables across the world.

CloudFront: the best reverse proxy for your HTTP(S) web applications

CloudFront is Amazon’s Content Delivery Network (CDN). CloudFront is used to accelerate HTTP(S) based web applications, and to enhance their availability and security. Typical use cases include full website delivery, API protection and acceleration, adaptive video streaming, and software downloads.
To use this service, create a CloudFront distribution, configure your origin (any origin with a publicly accessible domain name), issue and attach a valid TLS certificate using AWS Certificate Manager, and then configure your authoritative DNS server to point your web application’s domain name to the distribution’s generated domain name (xyz.cloudfront.net).
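As a rough illustration of what a minimal distribution setup looks like, here is a sketch of the configuration you would pass to the CloudFront API. The origin domain, origin ID, and comment are hypothetical placeholders; with boto3 installed and credentials configured, you would hand this dict to `cloudfront.create_distribution(DistributionConfig=config)`.

```python
import time

# Minimal DistributionConfig sketch (hypothetical origin domain and IDs;
# adapt to your application before using it with boto3).
config = {
    "CallerReference": str(time.time()),  # idempotency token, must be unique
    "Comment": "Getting-started distribution",
    "Enabled": True,
    "Origins": {
        "Quantity": 1,
        "Items": [{
            "Id": "my-origin",
            "DomainName": "app.example.com",  # publicly resolvable origin
            "CustomOriginConfig": {
                "HTTPPort": 80,
                "HTTPSPort": 443,
                "OriginProtocolPolicy": "https-only",
            },
        }],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "my-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        # ID of the AWS managed "CachingOptimized" cache policy
        "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
    },
}

# The default cache behavior must reference a declared origin:
assert config["DefaultCacheBehavior"]["TargetOriginId"] == config["Origins"]["Items"][0]["Id"]
```

After the distribution deploys, you would attach your ACM certificate and alternate domain name, then point your DNS at the generated `*.cloudfront.net` name as described above.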
When users navigate to your web application, DNS resolution dynamically routes their HTTP(S) request to the best CloudFront PoP in terms of latency and availability. Once the PoP is selected, the user’s TCP connection, including the TLS handshake, is terminated on one of the PoP’s servers, which then receives the HTTP request. If the content is cached in one of CloudFront’s cache layers, the request is fulfilled locally by CloudFront; otherwise, it is forwarded to the origin.
CloudFront’s infrastructure has two layers. The first layer consists of Edge locations, where users' connections are terminated and layer 3/4 DDoS attacks are mitigated. Edge locations provide caching capabilities and, if configured, execute CloudFront Functions and apply WAF rules. The second layer consists of Regional Edge Caches, hosted in AWS Regions. It provides longer cache retention times, improving cache hit ratios, and executes Lambda@Edge functions when configured.
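The two-tier lookup described above can be sketched as a simple function: check the edge cache first, fall back to the regional cache (populating the edge on the way back), and only then go to the origin. This is a toy model of the flow, not CloudFront's implementation.

```python
def fetch(key, edge_cache, regional_cache, origin):
    """Simplified two-tier cache lookup: edge location, then
    regional edge cache, then the origin on a full miss."""
    if key in edge_cache:
        return edge_cache[key], "edge-hit"
    if key in regional_cache:
        # Populate the edge cache as the object flows back to the user
        edge_cache[key] = regional_cache[key]
        return edge_cache[key], "regional-hit"
    value = origin(key)          # full miss: fetch from the origin
    regional_cache[key] = value  # retained longer at the regional layer
    edge_cache[key] = value
    return value, "miss"

edge, regional = {}, {}
origin = lambda key: f"content for {key}"
assert fetch("/index.html", edge, regional, origin)[1] == "miss"
assert fetch("/index.html", edge, regional, origin)[1] == "edge-hit"
```

The regional layer's longer retention is what improves the overall cache hit ratio: even after an object ages out of a given edge location, nearby PoPs can still be served from the regional cache instead of the origin.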
CloudFront dynamically optimizes the use of these layers for each HTTP request depending on its nature. For example, requests tagged as dynamic (e.g. caching disabled, POST/PUT/DELETE requests, or objects marked as non-cacheable via the Cache-Control response header) skip the caching layers and are sent from Edge locations directly to your origin. Watch this talk from re:Invent 2024 that dives into the life of an HTTP request across the layers of CloudFront.
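The dynamic-versus-cacheable distinction can be illustrated with a small classifier. This is an assumed simplification, not CloudFront's actual routing algorithm: it treats non-GET/HEAD methods and explicitly non-cacheable responses as dynamic traffic that should skip the cache layers.

```python
def is_dynamic(method: str, response_cache_control: str = "") -> bool:
    """Illustrative sketch: classify a request as dynamic (skip caching
    layers, go straight to the origin) or potentially cacheable."""
    # Writes are never served from cache
    if method.upper() not in ("GET", "HEAD"):
        return True
    # Responses marked non-cacheable by the origin bypass the cache too
    directives = {d.strip().lower() for d in response_cache_control.split(",")}
    return bool(directives & {"no-store", "private"})

assert is_dynamic("POST") is True
assert is_dynamic("GET", "no-store") is True
assert is_dynamic("GET", "public, max-age=86400") is False
```

In practice CloudFront's behavior is driven by your cache behavior settings and cache policies, but the intuition is the same: traffic that cannot benefit from caching is not charged the cost of the cache lookup path.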


AWS WAF to protect web applications at layer 7

AWS WAF is a Web Application Firewall that protects web applications from application-level threats, such as DDoS attacks (e.g. HTTP floods), exploitation of application-level vulnerabilities, and undesirable traffic originating from automated bots.
To use AWS WAF, create rules in a web ACL, then attach it to the resources that need protection. A global web ACL can be attached to CloudFront distributions, and regional web ACLs can be attached to resources within the same Region, such as ALBs and API Gateways. When a web ACL is attached to a resource, the underlying service (e.g. CloudFront or ALB) hands off a copy of the HTTP request to the AWS WAF service, which evaluates the configured rules within single-digit milliseconds. Based on the rule evaluation, AWS WAF instructs the underlying service how to process the request (e.g. block, forward, or challenge it).
A newly created web ACL contains only a default rule that allows all requests. To it you can add multiple rules of different types: custom rules based mainly on request attributes (e.g. IP, headers, cookies, URL), and Managed Rules from AWS (Anti-DDoS, Bot Control, etc.) or from vendors on the AWS Marketplace, which are added to your web ACL as configurable rule groups.
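To make the evaluation model concrete, here is a toy model of a web ACL: rules are evaluated in priority order, the first match decides the action, and the default action applies when nothing matches. The rule semantics and the example rules are assumptions for illustration, not the AWS WAF engine.

```python
def evaluate(web_acl: dict, request: dict) -> str:
    """Evaluate rules in priority order; first match wins,
    otherwise fall back to the web ACL's default action."""
    for rule in sorted(web_acl["rules"], key=lambda r: r["priority"]):
        if rule["matches"](request):
            return rule["action"]
    return web_acl["default_action"]

web_acl = {
    "default_action": "ALLOW",  # the default rule of a new web ACL
    "rules": [
        # Hypothetical custom rule: block a specific client IP
        {"priority": 1, "action": "BLOCK",
         "matches": lambda req: req["ip"] == "192.0.2.10"},
        # Hypothetical custom rule: block admin paths lacking an API key header
        {"priority": 2, "action": "BLOCK",
         "matches": lambda req: req["path"].startswith("/admin")
                                and "x-api-key" not in req["headers"]},
    ],
}

assert evaluate(web_acl, {"ip": "192.0.2.10", "path": "/", "headers": {}}) == "BLOCK"
assert evaluate(web_acl, {"ip": "198.51.100.7", "path": "/", "headers": {}}) == "ALLOW"
```

Managed rule groups slot into the same priority-ordered evaluation; each group bundles many vendor-maintained rules behind a single entry in your web ACL.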

Getting started

  1. Read this blog to understand the mental model of configuring CloudFront and WAF based on your application requirements.
  2. Read this blog to dive deeper into building your first AWS WAF web ACL.
  3. Get hands on with CloudFront and WAF using this self-guided workshop.
     

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
