Edge Caching: Next-Generation CDN Services
From their emergence in the late 1990s, content delivery networks, or CDNs, have been instrumental in reducing Internet bottlenecks and speeding delivery by caching content close to end users. However, the Internet of today is very different from the Internet of the 1990s; as a result, today’s site and application owners need CDNs capable of providing more services, processing more requests, and quickly delivering high-resolution images, video streaming, and dynamic content to a wide variety of devices.

To accomplish this, Azion’s Edge Application leverages the power of edge computing to perform complex computing tasks at the edge of the network, closer to end users. The standard module for Edge Application is Edge Caching, which uses a reverse proxy architecture to cache content and perform other crucial tasks needed to speed content delivery. This post will take a closer look at Edge Caching by reviewing what caching is and explaining the different types of caching, describing how reverse proxies work and the jobs they perform, and delving into the specifics of Edge Caching.

What is caching?

According to Mozilla’s MDN Web Docs, “Caching is a technique that stores a copy of a given resource and serves it back when requested.” Files that are cached locally can be quickly accessed in the future, enabling faster access and less resource use.

How do CDNs cache content?

In a previous post, we reviewed the history, purpose, and anatomy of a CDN in depth. The full post can be read here, but a brief review of the process CDNs use to cache content is included below:

  1. CDNs have points of presence, or PoPs, located in geographically distributed areas and composed of one or more proxy servers.
  2. When a user requests content from a site using the CDN, the request is routed to the nearest PoP.
  3. If a valid copy of the content is in the proxy server’s cache, the request is served from the proxy server; if not, the proxy forwards the request to the site’s origin server.
  4. The origin server returns the requested content to the proxy server along with its TTL (time to live), which specifies how long the content should remain in the cache before being purged.
  5. The proxy serves the content to the user who requested it, saving copies of the files in the cache for future requests.
  6. When a file’s TTL expires, it is cleared from the cache, and the next user who requests that file must wait for it to be fetched from the origin server.
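The lookup-and-expire behavior in steps 3 through 6 can be sketched as a tiny TTL cache. This is an illustrative toy, not how any particular CDN implements its cache:

```python
import time

class TTLCache:
    """Toy proxy cache: stores (value, expiry) pairs and purges entries
    whose TTL has elapsed. A simplified illustration of steps 3-6 above."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None  # cache miss: the proxy would forward to the origin
        value, expiry = entry
        if time.time() >= expiry:  # TTL expired: purge and report a miss
            del self._store[key]
            return None
        return value  # cache hit: served without contacting the origin

    def put(self, key, value, ttl_seconds):
        self._store[key] = (value, time.time() + ttl_seconds)

cache = TTLCache()
cache.put("/logo.png", b"<image bytes>", ttl_seconds=3600)
assert cache.get("/logo.png") == b"<image bytes>"   # hit
assert cache.get("/breaking-news") is None          # miss: fetch from origin
```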

Setting a file’s TTL is important because, as Mozilla notes in MDN Web Docs, caching “has to be configured properly as not all resources stay identical forever: it is important to cache a resource only until it changes, not longer.” However, today’s sites feature all kinds of content, from site headers and company logos, which are unlikely to change often, to breaking news stories and retail sales that may become quickly outdated.

In addition, some content may vary depending on who is accessing it. Much of today’s content is dynamic, meaning that it is personalized for various users depending on how long they visit a site, which links they click on, and other factors which are subject to change. As a result, properly configuring TTL can be a complex task, which is further complicated by the fact that content is not only cached once, but multiple times.

Types of caching

Caching can occur on either the server-side, via a proxy server, or on the client-side, via a browser cache. Although both types of caching speed up content delivery, they differ in both where files are stored and who controls and accesses the cache. MDN Web Docs distinguishes between these two groups, using the following terms and definitions:

  • Shared proxy cache: a server-side cache that stores popular resources for reuse by multiple users, often as part of local network infrastructure set up to reduce network traffic and latency.
  • Private browser cache: a client-side cache dedicated to a single user; it holds all documents the user downloads via HTTP, making previously accessed web pages available without an additional trip to the server.

A shared proxy cache (also known as a server cache or CDN cache) is the type of cache CDNs use to speed up content delivery and improve site availability. It serves multiple end users in a geographic region, reducing network latency by reducing the distance files have to travel. However, caching also occurs on the client side, to further speed content delivery. Whereas a proxy server stores copies of files to be quickly accessed by many users and devices, a browser cache is stored locally on a single device, and is accessible only from that specific device’s web browser.

How do browsers cache content?

As explained in a blog post on Google Developers, all browsers have some kind of built-in HTTP cache, which is provided through support for a set of web platform APIs. These APIs enable developers to determine how content on their site is cached via the configuration of request and response headers such as Cache-Control. However, Google notes, HTTP caching occurs even if a developer doesn’t specify how long content should stay in the cache; if the Cache-Control header is not configured, “browsers effectively guess what type of caching behavior makes the most sense for a given type of content.”

When a user visits a site, the request is initially routed to the browser cache, which checks to see if the request can be fulfilled through the cache. If there’s a match, the content will be served from the cache. If not, the content will be downloaded from that site and stored on the user’s hard drive for future use, resulting in a faster page load and lower data transfer costs.
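The freshness check a browser performs against the Cache-Control header can be sketched as follows. This is a deliberately simplified model; real browsers also consider directives and headers such as Expires, Age, and validators like ETag:

```python
def is_fresh(cached_at, now, cache_control):
    """Return True if a cached response is still fresh according to its
    Cache-Control max-age directive. A simplified freshness check for
    illustration, not a full implementation of HTTP caching rules."""
    for directive in cache_control.split(","):
        directive = directive.strip().lower()
        if directive in ("no-store", "no-cache"):
            return False  # must not be served from cache without revalidation
        if directive.startswith("max-age="):
            try:
                max_age = int(directive.split("=", 1)[1])
            except ValueError:
                return False
            return (now - cached_at) < max_age
    # No explicit lifetime: treat as stale here (real browsers apply heuristics)
    return False

assert is_fresh(cached_at=0, now=500, cache_control="public, max-age=600")
assert not is_fresh(cached_at=0, now=700, cache_control="public, max-age=600")
assert not is_fresh(cached_at=0, now=10, cache_control="no-store")
```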

Content is purged from the browser cache when files expire or when the browser cache is full. Alternately, end users can manually flush out a browser cache to clear space on a hard drive or troubleshoot other performance issues. Once a webpage’s content has been purged from the browser cache, all the page’s data will need to be downloaded from the site the next time it is visited, resulting in a slower page load.

Forward vs. Reverse Proxy Servers

Proxy servers come in two varieties: forward proxies and reverse proxies. Forward proxies sit in front of end users’ devices, acting as a tunnel or gateway between the user and the Internet that prevents users from receiving traffic directly from an origin server. This can be used as a gateway to set up browsing restrictions that prevent certain content from reaching users, or to obscure an end user’s identity when making requests. One example of a forward proxy is the Tor (The Onion Router) network, which routes traffic through different proxies to protect users’ anonymity.

Reverse proxies, in contrast, sit in front of origin servers and prevent them from directly receiving traffic from the Internet. Just as a forward proxy can be used to hide the identity of a client, reverse proxies hide the identity of origin servers by acting as the public face of a website.

  • Forward proxy: Sits in front of a client’s device and acts on behalf of a client or group of clients as a tunnel or gateway to the Internet.
  • Reverse proxy: Sits in front of origin servers and acts on their behalf, serving as the public face of a website and providing backend security, performance, and flexibility.

Reverse proxies provide flexibility and scalability by enabling companies to change their backend infrastructure without altering their site address. In addition to caching content, they may improve security and performance by providing services such as:

  • DDoS mitigation
  • Blacklisting
  • Compression
  • SSL termination
  • Load balancing
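One of the services above, load balancing, illustrates the reverse proxy’s role well: clients see a single public address while the proxy distributes their requests across a pool of hidden origin servers. Below is a minimal round-robin sketch; the class name and backend addresses are illustrative assumptions, not a real proxy implementation:

```python
import itertools

class ReverseProxy:
    """Toy reverse proxy front end: presents one public address and
    round-robins requests across a hidden pool of origin servers."""

    def __init__(self, backends):
        self._pool = itertools.cycle(backends)

    def route(self, request_path):
        backend = next(self._pool)  # pick the next origin server in rotation
        # A real proxy would open a connection to `backend`, forward the
        # request, and relay the response; here we just report the routing.
        return backend, request_path

proxy = ReverseProxy(["10.0.0.1", "10.0.0.2"])
assert proxy.route("/index.html")[0] == "10.0.0.1"
assert proxy.route("/index.html")[0] == "10.0.0.2"
assert proxy.route("/index.html")[0] == "10.0.0.1"  # wraps around the pool
```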

Azion Edge Caching

Azion Edge Caching is the standard module designed to reduce latency and improve availability for edge applications through a reverse proxy architecture that connects the Edge Nodes of our highly distributed global network to a website’s origin infrastructure.

With a high level of granularity necessary for effectively caching today’s complex websites, Edge Caching minimizes trips to the origin, resulting in better speed and availability than legacy CDNs. With Edge Caching, Azion users can configure separate cache settings for browser and CDN cache, enable a second layer of caching for long-lived object files, speed up the transfer of large files, and cache dynamic content to expedite its delivery.

Cache settings

Azion customers can configure cache settings separately for both the origin host server and the CDN cache. Using Edge Caching, edge applications can be configured to honor origin cache headers, preserving the TTL set by your origin host server. Alternatively, you can override your host server’s cache settings by manually configuring TTL.
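The honor-or-override decision can be sketched as a small function. Note that the function name, the override parameter, and the fallback behavior are illustrative assumptions, not Azion’s actual configuration API:

```python
def effective_ttl(origin_headers, override_ttl=None):
    """Decide how long to cache an object: honor the origin's
    Cache-Control max-age unless a manual override TTL is configured.
    Illustrative sketch only, not Azion's configuration interface."""
    if override_ttl is not None:
        return override_ttl  # manually configured TTL takes precedence
    cache_control = origin_headers.get("Cache-Control", "")
    for directive in cache_control.split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            return int(directive.split("=", 1)[1])  # honor the origin's TTL
    return 0  # no origin TTL and no override: do not cache

assert effective_ttl({"Cache-Control": "max-age=300"}) == 300              # honor origin
assert effective_ttl({"Cache-Control": "max-age=300"}, override_ttl=60) == 60  # override
```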

Azion’s L2 Cache

Today’s complex websites and applications are content rich, composed of numerous files with very different needs in terms of caching. While some objects are subject to frequent changes, others remain constant and, as a result, can be cached for a long period of time. For static images and other objects that can be cached for thirty days or longer, Azion’s L2 Cache provides an additional cache layer between the edge and the origin to further reduce load on origin infrastructure.

Slice Settings

High-definition video and other heavy content can result in long transfer times that end users will not wait for, particularly as performance expectations increase. Slice settings is a feature of Edge Caching that enables large amounts of data to be processed efficiently by splitting files into smaller pieces that are delivered gradually to end users, rather than transferring files all at once. This not only reduces latency, but ensures that large file transfers do not eat up bandwidth, resulting in network congestion that impacts speed and availability.
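The general technique of delivering a large file in pieces can be illustrated with HTTP Range requests, where each slice maps to a "bytes=start-end" range. This is a sketch of the underlying idea, with an assumed slice size, not Azion’s slice implementation:

```python
def slice_ranges(file_size, slice_size):
    """Split a large object into byte ranges suitable for HTTP Range
    requests, so it can be fetched and cached piece by piece rather
    than transferred all at once."""
    ranges = []
    start = 0
    while start < file_size:
        end = min(start + slice_size, file_size) - 1  # Range ends are inclusive
        ranges.append((start, end))
        start = end + 1
    return ranges

# A 1 MiB file fetched in 400 KiB slices:
assert slice_ranges(1_048_576, 409_600) == [
    (0, 409_599), (409_600, 819_199), (819_200, 1_048_575)
]
```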

Advanced Cache Key

Advanced Cache Key is a feature of Edge Caching that enables caching of dynamic content by grouping users according to various factors such as geographic location, browsing history, or shopping profile. Customized cache rules can be defined based on metadata, such as cookies or query strings.
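Grouping users this way amounts to building a cache key that varies only on the attributes that matter, so users in the same group share one cached copy of dynamic content. The sketch below illustrates the idea; the function and parameter names are assumptions for illustration, not Azion’s Advanced Cache Key API:

```python
import hashlib

def cache_key(path, query_params=(), cookies=(), vary_on_query=(), vary_on_cookies=()):
    """Build a cache key from the path plus only the selected query
    strings and cookies, so requests that differ in irrelevant ways
    (e.g. session IDs) still hit the same cached entry."""
    query = dict(query_params)
    jar = dict(cookies)
    parts = [path]
    parts += [f"q:{name}={query.get(name, '')}" for name in sorted(vary_on_query)]
    parts += [f"c:{name}={jar.get(name, '')}" for name in sorted(vary_on_cookies)]
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

# Two users with the same country cookie share one cached entry,
# even though their session cookies differ:
a = cache_key("/home", cookies=[("country", "BR"), ("session", "abc")],
              vary_on_cookies=["country"])
b = cache_key("/home", cookies=[("country", "BR"), ("session", "xyz")],
              vary_on_cookies=["country"])
assert a == b
c = cache_key("/home", cookies=[("country", "US")], vary_on_cookies=["country"])
assert a != c
```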

Open Caching

In addition, Edge Caching uses Open Caching, the open architecture developed and endorsed by the Streaming Video Alliance to scale the infrastructure needed for effective video delivery. Video is both the largest driver of growth in the CDN market and a particular challenge for fast, efficient content delivery. As noted by a 2020 Deloitte report on CDNs,

“Video files are particularly large, and tremendous technological wizardry is needed to compress them, break them apart into distributed pieces, and then dynamically reassemble and stream them on demand to hundreds of millions of requesters, all with high resolution and minimal latency. The growing quantity and sophistication of OTT video content means more traffic, more routing, and a greater need for management, optimization, and prediction across the CDNs responsible for delivering speedy and reliable content.”

Open Caching provides a solution to this problem via a unified, global CDN platform with open APIs that helps service providers easily deploy an edge CDN footprint. As noted by the Streaming Video Alliance’s “Open Cache Solution Functional Requirements Document,” this enables content providers and CDNs to:

  • Reduce overall network transportation costs;
  • Offload the heavy lifting of popular content onto the open caching layer;
  • Reach network locations not available for third-party cache deployment; and
  • Act as a first caching tier to accommodate planned and unplanned usage spikes.

Azion’s use of Open Caching open architecture and programmability at the edge of the network enables developers to cache websites as efficiently as possible, resulting in lower latency and better availability. See what Edge Caching can do for your website by creating a free account and trying out Azion’s edge computing products today.