Why Companies are Abandoning Legacy CDNs for Edge Computing

The frequent updates and dynamic content of today’s websites are problematic for legacy CDNs. It's time to move to the edge.

Rachel Kempf - Editor-in-Chief

Since their emergence in the 1990s, CDNs have been touted as a solution for reducing latency and improving site availability, factors that have a direct impact on websites’ bottom line. According to Deloitte’s 2020 report Milliseconds Make Millions, even a 0.1s improvement in latency improves retail conversion rates by 8.4%, average order value by 9.2%, bounce rates by 5.7%, and customer engagement by 5.2%.

However, the frequent updates and dynamic content of today’s websites are problematic for legacy CDNs, which can only cache static content and must forward requests to origin servers any time outdated content is purged from the cache.

As a result, a new type of distributed networking platform has emerged, one capable of delivering content with the agility today’s Internet demands. This post will explain what CDNs are and how they work, the factors that affect CDN performance, how CDNs have evolved and how edge computing is upending the market, and the benefits of using edge computing products such as Azion’s Edge Application, Edge Functions, and Edge Cache as a solution for modern applications.

What is a CDN?

A CDN, or content delivery network, is a type of computing service that stores popular content in highly distributed locations around the world, closer to users than centralized data centers. In doing so, CDNs reduce the load on origin servers and enable faraway users to receive content more quickly.

CDNs use cache servers, a type of reverse proxy server, to temporarily save copies of previously accessed files for a set period of time known as time-to-live, or TTL. The process of temporarily holding content is known as caching.

When the cache is purged, or when new content is requested, the cache servers forward those requests on to the website’s origin servers. The key terms are summarized below, followed by a short code sketch of TTL-based caching.

  • Origin server: the server maintained by a digital business that holds its website’s most up-to-date content
  • Cache server: geographically distributed servers that cache content and forward requests for new content to a site’s origin server
  • Cache: copies of files stored on a cache server for a set period of time after being fetched from a site’s origin server
  • TTL: the period of time files are to remain in a cache before being purged
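
To make these definitions concrete, here is a minimal sketch of TTL-based caching in TypeScript. The names (TtlCache, CacheEntry) and the use of an in-memory Map are illustrative assumptions, not how any particular CDN implements its cache.

```typescript
// Minimal sketch of a cache store with TTL-based expiry.
// TtlCache and CacheEntry are illustrative names, not a real CDN API.

interface CacheEntry {
  body: string;      // cached file contents
  expiresAt: number; // epoch ms after which the entry is stale
}

class TtlCache {
  private store = new Map<string, CacheEntry>();

  // Save a copy of a fetched file, stamped with its TTL.
  set(url: string, body: string, ttlSeconds: number): void {
    this.store.set(url, { body, expiresAt: Date.now() + ttlSeconds * 1000 });
  }

  // Return the cached copy, or undefined if missing or past its TTL.
  get(url: string): string | undefined {
    const entry = this.store.get(url);
    if (!entry) return undefined;       // cache miss
    if (Date.now() > entry.expiresAt) { // TTL expired: purge the entry
      this.store.delete(url);
      return undefined;
    }
    return entry.body;                  // cache hit
  }

  // Manual purge, e.g. after the origin publishes new content.
  purge(url: string): void {
    this.store.delete(url);
  }
}
```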

In addition to caching content, larger CDNs may offer security services, such as DDoS protection or web application firewalls. Because CDNs work as proxies positioned between a site’s users and its origin servers, they can also serve as a first line of defense against cybersecurity attacks, filtering out malicious traffic before it reaches a site’s origin servers. A simple sketch of that kind of filtering follows.
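
As a rough illustration only: the blocklist and checks below are invented for the example, not any vendor’s actual filtering rules.

```typescript
// Illustrative sketch of a proxy deciding whether to forward a request.
// The blocklist and checks are invented placeholders.
const BLOCKED_IPS = new Set(["203.0.113.7"]); // example address (RFC 5737 range)

function shouldForward(clientIp: string, path: string): boolean {
  if (BLOCKED_IPS.has(clientIp)) return false; // drop known-bad sources
  if (path.includes("../")) return false;      // naive path-traversal check
  return true;                                 // forward to cache or origin
}
```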

Evolution of the CDN

CDNs were first introduced in the late 1990s to alleviate bottlenecks from increased Internet traffic and improve the reliability of mission-critical services. Since then, Internet traffic, content, and infrastructure have changed considerably, resulting in:

  • The need for reliable, high-speed content delivery in real time
  • Rising service expectations and demand for a higher quality of experience
  • Websites featuring video content and rich media
  • Demand for low-latency gaming and video streaming
  • Widespread Internet use and adoption of mobile devices

These changes have caused a market shift from legacy CDNs to edge platforms that perform workloads at the edge through serverless technologies. Both edge computing and serverless are fast-growing markets, outstripping the 14.1% CAGR of CDNs: according to recent predictions from MarketsandMarkets, serverless is growing at a CAGR of 22.7% and edge computing at an astonishing 34.1% between 2020 and 2025. To keep pace, CDNs have had to evolve over time, with greater capacity, broader service offerings, and better underlying technology.

Capacity

Internet usage has increased substantially since Akamai introduced the first CDN in 1999. At the time, mobile phones were still using 2G networks, the smartphone had not yet been introduced, and the term IoT had only just been coined. Since then, the number of Internet-connected devices has exploded, requiring additional capacity to accommodate billions of devices and zettabytes of data. In addition, web application content is increasingly heavy, incorporating high-definition video and images. As a result, CDNs have had to expand capacity from hundreds of megabits per second and millions of requests per day to tens of terabits per second and trillions of requests per day.

Services

With the rise of IoT and mobile devices, CDNs have had to offer a broader range of services. A proliferation of different devices and screen sizes, along with an increase in high-quality images on websites, has driven a need for image optimization services. At the same time, IoT devices, which often have weaker security capabilities than PCs, and the broader attack surfaces of modern applications have driven a need for CDNs capable of mitigating DDoS attacks and securing web applications. The rise of IoT has also created a need for distributed data centers capable of real-time processing and analytics, resulting in perhaps the biggest step in CDN evolution to date: the transition from traditional CDNs to edge computing.

Technology

CDN infrastructure has evolved from PoPs consisting of cache servers to edge data centers capable of real-time analytics and data processing. This transition has become increasingly necessary as automation increases, replacing human speed with machine speed. As noted in LF Edge’s most recent State of the Edge report, “today’s Internet—while fast enough for most humans—appears glacial when machines talk to machines … As more and more machines come online, businesses will seek to apply the power of server-side processing to their behaviors. This will require an edge-enabled Internet that operates at machine speeds.”

In addition, low-latency applications such as VR, AR, video conferencing, and the ultra-reliable low-latency services enabled by 5G increasingly require powerful processing capabilities at the edge, replacing CDN PoPs with edge data centers. LF Edge’s Open Glossary of Edge Computing explains that edge data centers are “Capable of performing the same functions as centralized data centers although at smaller scale individually,” a feat enabled by “autonomic operation, multi-tenancy, distributed and local resiliency, and open standards.”

Anatomy of a CDN

PoPs

PoPs are often confused with data centers, as the two terms are sometimes used interchangeably. However, for a traditional CDN, PoPs are generally much smaller and less complex than data centers. LF Edge’s Open Glossary of Edge Computing provides the following definitions for the two terms:

  • PoP: A point in their network infrastructure where a service provider allows connectivity to their network by users or partners.
  • Data center: A purpose-designed structure that is intended to house multiple high-performance compute and data storage nodes such that a large amount of compute, data storage and network resources are present at a single location. This often entails specialized rack and enclosure systems, purpose-built flooring, as well as suitable heating, cooling, ventilation, security, fire suppression and power delivery systems.

More simply, a PoP is any network access point; it can be as small as a single server with limited resources, and even some higher-end devices and network equipment can serve as PoPs. Data centers, on the other hand, house many servers, sometimes thousands, in a centralized location.

Due to their size and scale, data centers can handle much more traffic than a single PoP and are capable of performing more complex computing functions. A PoP is designed to perform simpler tasks, such as caching content, serving cached content, and forwarding requests for content that is not stored in the cache.

Cache Servers

Each PoP is composed of one or more cache servers. Cache servers temporarily store files along with their TTL, which tells the server how long files should remain in the cache before being deleted. This allows stale content to be purged, makes room for new content, and ensures that any issues with cached content do not persist indefinitely. When the TTL expires, or when a site owner manually purges the cache, the cached files are deleted. The next time that content is requested, the cache server forwards the request to the origin server, returning up-to-date content, but at the cost of higher latency. The sketch below illustrates this request flow.
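
As a rough illustration of that flow, this sketch builds on the TtlCache from the earlier example; the origin URL and TTL value are hypothetical placeholders.

```typescript
// Illustrative request flow for a cache server: serve from cache on a hit,
// otherwise fetch from the origin and re-cache. Uses the TtlCache sketched
// earlier; the origin URL and TTL below are invented for the example.

const cache = new TtlCache();
const ORIGIN = "https://origin.example.com"; // hypothetical origin server
const DEFAULT_TTL = 300;                     // 5 minutes, for illustration

async function handleRequest(path: string): Promise<string> {
  const cached = cache.get(path);
  if (cached !== undefined) {
    return cached; // hit: low latency, no round trip to the origin
  }
  // Miss or expired TTL: forward to the origin, then store the fresh copy.
  const response = await fetch(`${ORIGIN}${path}`);
  const body = await response.text();
  cache.set(path, body, DEFAULT_TTL);
  return body; // up-to-date content, but at higher latency
}
```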

Server Resources

Server resources come in two varieties: persistent and in-memory. Random access memory (RAM) is used for in-memory storage, which the CPU can access quickly while files are in use. However, RAM is also volatile, meaning that files do not remain in memory when the machine powers down. In contrast, persistent storage has more capacity and is non-volatile, meaning that files stored on it remain in place even when the server is off. However, persistent storage takes longer to access; how much longer depends on the type of storage the machine is using.

Older servers use hard-disk drives (HDDs) for persistent storage, which use a mechanical process akin to a record player to read and write data. As a result, they are slower and more fragile than solid-state drives (SSDs), a newer technology that stores data on interconnected flash-memory chips. Because they have no moving parts, SSDs are faster and more reliable than HDDs. They are also better at accessing fragmented files, which are stored in different locations across a disk. Azion’s Edge Platform uses SSDs for enhanced performance and reliability.

  • RAM: In-memory storage that temporarily stores files while they are in use so they can be quickly accessed by the CPU
  • HDD: A method of persistent storage that uses a mechanical device to read and write files to a magnetic spinning disk
  • SSD: A newer method of persistent storage that stores and accesses data via computer chips

Virtualization

CDNs store data for many different customers, each of which must be isolated to prevent data leakage and security issues. In addition, resources like CPU and bandwidth must be segmented to ensure equitable access and prevent a single customer’s traffic from overwhelming a server. One way to do this is by segmenting a server into virtual machines, or VMs: separate and independent virtualized environments that each have their own networking interface and their own share of CPU, memory, and storage. This also enables CDN vendors to provide more elastic resources, as VMs can be spun up in a matter of minutes to meet increased demand.

However, today’s users will not wait minutes for a page to load; the average user will abandon a slow-loading page within seconds. To gain more elasticity, resources can instead be divided into containers, which have relaxed isolation properties that allow them to share the host’s OS kernel rather than requiring each instance to include its own operating system, as VMs do. As a result, new containers can be deployed in seconds and spun up in about half a second.

Factors Affecting CDN Performance

Although the goal of all CDNs is the same—to speed content delivery, reduce resource use, and improve reliability—their ability to execute these tasks varies considerably. For starters, CDN performance is highly dependent on the location, distribution, and number of the CDN vendor’s PoPs. Since CDNs improve performance by reducing the distance between users and the content they’re requesting, PoPs must be located as close as possible to where end users are concentrated.

In addition, PoPs are not created equal. A server sitting in a cabinet in an office building will not have the same reliability or security as one housed in a data center with built-in redundancies, on-site support to handle security and technical issues, and purpose-built systems to control temperature, humidity, and airflow.

Performance is also dependent on the quality and capacity of the equipment used. HDDs are not as fast or reliable as SSDs, and they consume more power. Server capacity also makes a difference, since servers with more storage and memory will have fewer cache misses, resulting in lower latency.

CDN performance is dependent on many factors:

  • Location and number of PoPs
  • Regional or geographic coverage
  • QoS and capacity of IBXs and data centers
  • Quality and capacity of equipment

Benefits of Edge Application

Azion Edge Application replaces legacy CDN services with edge computing capabilities, including modules for Edge Cache, application acceleration, image optimization, and load balancing. In addition, Edge Application can be extended with security and real-time analytics through the Edge Firewall and Edge Analytics products, and with the ability to create personalized content and edge-native applications through Edge Functions.

Edge Cache provides caching services designed for today’s Internet. Our software-defined network monitors and processes requests in real time, always ensuring the connection is secure and that end users are served by the healthiest, nearest Edge Node. With our serverless platform, Azion customers can write code, define business rules and clustering logic, and categorize cached data at the edge to support high volumes of requests and dynamic content delivery. A brief sketch of what such an edge function might look like follows.
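
As a hypothetical illustration only: the sketch below uses the service-worker-style fetch handler common to JavaScript edge runtimes; the exact Azion Edge Functions API may differ, and the device-segmentation rule and URL scheme are invented.

```typescript
// Hypothetical edge-function sketch in the service-worker style used by many
// JavaScript edge runtimes; not necessarily the exact Azion Edge Functions API.
addEventListener("fetch", (event: any) => {
  event.respondWith(personalize(event.request));
});

async function personalize(request: Request): Promise<Response> {
  // Invented business rule: segment cached content by device class.
  const ua = request.headers.get("User-Agent") ?? "";
  const segment = /Mobile/i.test(ua) ? "mobile" : "desktop";

  // Rewrite to a segment-specific cached asset (illustrative URL scheme).
  const url = new URL(request.url);
  url.pathname = `/${segment}${url.pathname}`;
  return fetch(url.toString());
}
```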

With Edge Cache, Azion customers gain:

  • First line of defense against attacks
  • Faster page loads
  • Optimized API processing
  • Optimized traffic management
  • Local jurisdiction and compliance
  • Managed control and flexibility
  • Better UX

To experience the benefits of Edge Cache firsthand and gain full access to all Azion products and features, create a free Azion account today.
