Anatomy of Edge Computing Infrastructure

This post will explain what the edge is, where it is located, how it works, its link with the serverless model and how Azion can help you move to the edge.

Mariana Bellorín Aguilera - Technical Researcher

Introduction

Edge computing has recently emerged as the natural next step in technology and innovation to power a hyper-connected economy. However, understanding abstract concepts such as the edge or serverless computing can be difficult for some organizations, especially if technology is not their core business. With this scenario in mind, we’ve started a series of blog posts to explain the fundamentals of edge computing for beginners.

In another blog post, we talked about the differences between cloud computing and edge computing and which is better for your business. This post will explain what the edge is, where it is located, how it works, its link with the serverless model and how Azion can help you move to the edge.

What’s the Edge?

For a better understanding, let’s draw an analogy between the edge and the communication process. In any communication process, you need a sender and a receiver, along with a message and a channel. Both parties must also speak the same language and provide feedback to complete the process. For effective communication, this cycle repeats as needed. Additionally, communication can be affected by factors such as noise and the quality of the channel.

If we understand networking as a communication process, we can use this model to explain how a request on the Internet works. For example, let’s imagine you need to find the lyrics of a new song you like. You’ll open your browser, type the name of the song and send the request. Immediately, you’ll receive a results page. It sounds fast and easy, right? Well, several processes and pieces of equipment are required to complete this task, or, as we prefer to call it, the journey of the request. In a very simplified way, this is what happens (see the sketch after this list):

  • The user (or sender) makes a request or sends information through an electronic device.
  • The message is encoded and encrypted in a language the equipment can understand.
  • The message is received and processed by a server (receiver), which creates a useful response and sends the message back to the user (feedback), following the same path in the opposite direction.
  • The user could make a new request, starting this process again.
  • Along the way, latency, poor connection, or problems with the type of device or Internet provider network could hinder smooth communication.
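
To make this journey concrete, here’s a minimal TypeScript sketch of the first three steps: a request goes out, a server answers, and we measure the round trip. The URL is just a placeholder, not a real endpoint.

```typescript
// A minimal sketch of the "journey of the request": send a request,
// wait for the server's response (the feedback), and measure the round trip.
async function journeyOfTheRequest(): Promise<void> {
  const start = Date.now();

  // The browser encodes the message and sends it to a server.
  const response = await fetch("https://example.com/search?q=song+lyrics");

  // The server processes the request and sends its response back.
  const body = await response.text();

  // The elapsed time covers the whole journey: encoding, network travel
  // in both directions, and server-side processing.
  console.log(`Status: ${response.status}, round trip: ${Date.now() - start} ms`);
  console.log(`Received ${body.length} characters`);
}

journeyOfTheRequest();
```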

In the current Internet infrastructure, servers are usually located inside data centers. The main problem is that those data centers are centralized and geographically far away from end users. This creates a series of issues related to response speed, hurting the quality of experience for users, who have to wait to get an answer or complete any online task, and increasing the risk of abandonment.

Going back to the concept of the communication process: if the sender and the receiver are closer, the process becomes more efficient and faster. The same happens when servers are physically closer to end users and devices: responses arrive faster, with fewer issues, higher quality and lower bandwidth use. This is the difference between edge computing and traditional Internet infrastructure.
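
We can put rough numbers on that intuition. Light in optical fiber travels at about 200,000 km/s, so distance alone sets a floor on round-trip time before any processing even begins. The distances below are illustrative:

```typescript
// Back-of-the-envelope floor on round-trip time (RTT) imposed by distance.
// Light in optical fiber travels at roughly 200,000 km/s, i.e. ~200 km/ms.
const FIBER_SPEED_KM_PER_MS = 200;

function minRoundTripMs(distanceKm: number): number {
  // The signal travels to the server and back, hence the factor of 2.
  return (2 * distanceKm) / FIBER_SPEED_KM_PER_MS;
}

// A faraway centralized data center vs. a nearby edge node:
console.log(minRoundTripMs(5000)); // ~50 ms floor before any processing
console.log(minRoundTripMs(50));   // ~0.5 ms floor
```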

In Edge Networking: An Introduction, the Linux Foundation explains that the edge refers to a new computing paradigm characterized by compute and storage happening as close as possible to the place where the data is generated and collected. Following this idea, the Linux Foundation also defines edge computing as “the delivery of computing capabilities to the logical extremes of a network in order to improve the performance, operating cost and reliability of applications and services.” If we consider the servers and the users as our two extremes, this leads to our next question…

Where is the Edge?

Physically, the edge could be anywhere: in a micro data center, at a cell site, in a specialized device or inside enterprise premises. What really matters is where it sits within the network’s infrastructure. So, three principles should be met:

  • in front of the cloud;
  • closer to the end user; and
  • in a decentralized and geographically distributed infrastructure.

Let’s try to diagram it¹ ²:

[Diagram: the layers of edge computing infrastructure, from endpoints through the edge and gateways to the cloud]

The first layer, the endpoints, includes all devices able to generate and process data, as well as connect to the edge computing infrastructure. The edge computing layer includes all the hardware and software deployed there (we’ll explain more about this later). The gateways are the assets that enable edge devices to communicate with distant servers or cloud services when required: for example, to check information not stored on the edge node because it’s sensitive data, during authentication tasks, or when the edge node’s cache has been purged and the original data must be retrieved from the origin. Finally, the cloud layer is composed of centralized, faraway data centers with resources to store and process data in larger quantities, a concept we are already familiar with.
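
As a mental model (not a formal standard), the four layers can be sketched as TypeScript types; every name below is ours, chosen purely for illustration:

```typescript
// A rough mental model of the four layers. All names are illustrative.
interface Endpoint {
  kind: "phone" | "sensor" | "laptop" | "camera"; // devices that generate data
}

interface EdgeNode {
  location: string;           // geographically close to the endpoints it serves
  cache: Map<string, string>; // stores frequently requested content locally
}

interface CloudDataCenter {
  centralized: true; // faraway, large-scale storage and processing
}

interface Gateway {
  // Bridges an edge node to distant servers or cloud services when needed,
  // e.g. for sensitive data, authentication, or after a cache purge.
  from: EdgeNode;
  to: CloudDataCenter;
}
```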

How Does the Edge Work?

A recent report from Statista confirmed that “the total amount of data created, captured, copied, and consumed globally is forecast to increase rapidly.” After reaching 64.2 zettabytes in 2020, this number is expected to surpass 180 zettabytes by 2025. Initially, this context pushed more web applications and services to the cloud. However, the increasing number of connected devices, and the volume of data they generate, can’t be efficiently managed by the traditional centralized model. Edge computing emerged as the answer to many of these problems and limitations, delivering better user experience, performance, security and services. But first, let’s look at its infrastructure.

An edge network is composed of several geographically distributed nodes. Every edge node is deployed to receive and deliver content as requested, and has hardware and software that enable its functions:

  • Hardware: the infrastructure usually includes racks of servers (sized according to the intended processing and networking capabilities and the volume of traffic the node must handle), storage capacity, and switches and routers to establish peering with different networks. These deployments are smaller and less resource-intensive than cloud data centers.
  • Software: all the programs, protocols and functions, including business logic, data analytics and security parameters.

Located in front of the cloud, the edge computing infrastructure brings computing and storage resources closer to the user. Remember the communication process? Now it can be completed faster and more efficiently, because requests are resolved at the edge, with less data traveling, rather than in the origin infrastructure or cloud.
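
Here’s a hedged sketch of what resolving a request at the edge can look like: answer from a local cache when possible, and travel to the origin only on a miss. The cache and the origin URL are stand-ins, not any specific product’s API.

```typescript
// Sketch of request resolution at an edge node: answer from the local
// cache when possible; travel to the faraway origin only on a cache miss.
const localCache = new Map<string, string>();
const ORIGIN = "https://origin.example.com"; // placeholder origin

async function handleAtEdge(path: string): Promise<string> {
  const cached = localCache.get(path);
  if (cached !== undefined) {
    // Resolved at the edge: no round trip to the origin at all.
    return cached;
  }

  // Cache miss: make one trip to the origin, then serve locally afterwards.
  const response = await fetch(`${ORIGIN}${path}`);
  const body = await response.text();
  localCache.set(path, body);
  return body;
}
```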

SDN and NFV: How They Enable The Edge

In order to provide its services, the edge computing model relies on two important tools to manage and organize its infrastructure: Software-Defined Networking (SDN) and Network Functions Virtualization (NFV).

SDN is an approach that allows the network to be controlled and managed in an intelligent, automated way through software applications using APIs. Because the network is software-defined, rules can be created that tell it what to do automatically in certain situations. For example, to avoid overloading an edge node, a rule that drives traffic to the closest healthy, available edge node can be the ideal solution, providing resilience and scalability. This automation helps simplify resource provisioning; at the same time, it reduces operational costs and effort for developers.
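
As a minimal sketch of such a rule, assuming the controller already knows each node’s health, load and distance (all values below are made up):

```typescript
// Minimal sketch of an SDN-style routing rule: send traffic to the
// closest edge node that is healthy and not overloaded.
interface EdgeNodeStatus {
  name: string;
  healthy: boolean;
  load: number;       // 0.0 (idle) to 1.0 (saturated)
  distanceKm: number; // distance to the requesting user
}

function pickNode(nodes: EdgeNodeStatus[], maxLoad = 0.8): EdgeNodeStatus | undefined {
  return nodes
    .filter((n) => n.healthy && n.load < maxLoad)    // skip unhealthy or overloaded nodes
    .sort((a, b) => a.distanceKm - b.distanceKm)[0]; // prefer the closest one
}

const chosen = pickNode([
  { name: "edge-a", healthy: true,  load: 0.9, distanceKm: 20 },  // too loaded
  { name: "edge-b", healthy: true,  load: 0.3, distanceKm: 120 }, // picked
  { name: "edge-c", healthy: false, load: 0.1, distanceKm: 60 },  // unhealthy
]);
console.log(chosen?.name); // "edge-b"
```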

The Open Glossary of Edge Computing describes NFV as the “migration of network functions from embedded services inside proprietary hardware appliances to software-based Virtualized Network Functions (VNF)”. In simpler words: using NFV, a developer can create software-based network devices that run as virtual elements on servers, such as virtual machines and containers, instead of buying and deploying new hardware. Each VNF has its own purpose and is programmed to work as a router, firewall, load balancer or other network device within the virtualized environment defined by the NFV.
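
To make the idea tangible, here’s a toy VNF: a round-robin load balancer written purely in software. A real VNF would run as a virtual machine or container inside the NFV environment, but the principle is the same: a network function as code instead of a hardware appliance. The backend addresses are placeholders.

```typescript
// Toy VNF: a round-robin load balancer implemented entirely in software,
// the kind of function NFV runs as a VM or container instead of
// a dedicated hardware appliance.
class RoundRobinBalancer {
  private next = 0;

  constructor(private backends: string[]) {}

  // Each call returns the next backend in rotation.
  pick(): string {
    const backend = this.backends[this.next];
    this.next = (this.next + 1) % this.backends.length;
    return backend;
  }
}

const lb = new RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"]);
console.log(lb.pick()); // 10.0.0.1
console.log(lb.pick()); // 10.0.0.2
```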

The Serverless Model

Companies frequently complain about the disadvantages and limitations of traditional computing services. The most common pain points at the enterprise level are:

  • high latency;
  • unreliable and inflexible infrastructure;
  • poor performance and user experience;
  • expensive services; and
  • operational complexity.

In this context, the edge can offer improvements on every front. Thanks to the edge’s proximity to end users and the distribution of its elements, this deployment provides the following benefits:

  • Low latency and less bandwidth use: faster responses due to the shorter trips needed to resolve requests.
  • Resilience: even when a node or a specific component fails, the system continues to operate, using other resources in the network to guarantee 100% availability.
  • Scalability: infrastructure can expand or shrink automatically to match the level of demand, reducing costs related to overprovisioning or lack of resources.
  • Reduced operational costs: with fewer trips to the origin infrastructure, costs associated with the use of cloud services go down.
  • New use cases: edge computing infrastructure enables projects with modern technologies such as AI, VR/AR, Machine Learning, IoT, video and Data Stream, 5G, and more.

Additionally, edge services use a serverless model that can increase these benefits and add more advantages. The Cloud Native Computing Foundation’s Serverless Whitepaper describes serverless computing as “a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled, and billed in response to the exact demand needed at the moment.”

The concept of “serverless” goes back to 2012, when Ken Fromm introduced it in his article “The Future of Computing is Serverless.” Fromm explained that it doesn’t mean servers are not involved; instead, developers don’t have to deploy and manage them, or even provision resources, because providers take on these responsibilities. As he put it, “going serverless lets developers shift their focus from the server level to the task level.”

This way, serverless is a model that abstracts away the provisioning and management of server infrastructure and compute power, and scales automatically on demand. This means you, as a developer or operator, don’t need to manage, configure, execute or maintain it. Additionally, with serverless services, you pay only for the resources you consume. Linking serverless with the edge model, servers are available “everywhere,” close to where services are needed and ready to be used to build, create or run applications and functions, at a reduced cost and without managing the infrastructure.
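
As an illustration, here’s what a serverless edge function often looks like, written in the Service-Worker style several edge platforms use. The exact API varies by provider, so treat this as a sketch rather than any vendor’s specific interface.

```typescript
// Sketch of a serverless edge function (assumes Service Worker type
// definitions). You upload only the function; the platform provisions,
// runs, scales and bills it per execution.
addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(handle(event.request));
});

async function handle(request: Request): Promise<Response> {
  const url = new URL(request.url);

  // Business logic runs at the edge node closest to the user.
  if (url.pathname === "/hello") {
    return new Response("Hello from the edge!", { status: 200 });
  }
  return new Response("Not found", { status: 404 });
}
```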

What are the edge’s advantages?

  • Decentralization: with edge nodes highly distributed geographically, the edge network can embrace more territories and be present in more places compared to the centralized cloud data centers.
  • Scalability: the size of the network and the resources used can be adapted automatically according to the demand.
  • Virtualization: resources can be virtualized at the edge through NFV, reducing the need to manage physical structure and reducing costs.
  • Orchestration: developers can deploy, manage, control, monitor and update resources on the edge in real time with a zero-touch approach, as well as integrate other services.
  • Automation: SDN enables these capabilities, so developers only need to worry about creating and executing code and workloads on the edge; no provisioning, no maintenance, and NoOps.
  • Data Stream and Observability: the edge computing model offers tools to track and monitor all the events and behaviors registered in the applications and networks. This allows the operators to get better business insights and understanding of their users.
  • Security: The provider is able to create a plan that includes perimeter and multilayer security, using resources such as Web Application Firewall, Network Layer and DDoS Protection.
  • Open standards: ideally, an edge computing platform should be compatible with different resources, languages and equipment, and able to integrate third-party solutions.
  • Serverless model: the provider runs and manages the servers and all the infrastructure, as well as the resources, abstracting scalability, capacity planning and maintenance operations from the developer or operator.

Azion’s Edge Computing Platform

Azion’s Edge Computing Platform is composed of a wide range of products and modules that provide the serverless infrastructure developers need to build, execute, and move their applications to the edge, without worrying about management. It has high availability and is fault-tolerant, open, extensible, and easy to connect to any cloud services.


The platform’s features include all the benefits and advantages of using edge computing:

  • Azion’s Edge Network has more than 100 edge locations highly distributed around the world.
  • With Edge Functions, developers can build, manage, configure and execute their serverless applications at the edge of the network, and let Azion automatically scale the resources needed.
  • Through our Load Balancer you can choose between multiple distribution algorithms and customize rules to balance traffic, avoiding network congestion and server overload.
  • Application Acceleration speeds up the performance of your applications and APIs, with no need for changes to your infrastructure.
  • You can also improve your observability practices with our tools: Data Stream, Edge Pulse, Real-Time Metrics, and Real-Time Events.
  • With Edge Firewall, protect all your applications, servers and resources against numerous threats, ranging from the OWASP Top 10 to bad bots and sophisticated zero-day attacks.
  • Our secure multilayer perimeter and Security Response Team (SRT) are additional tools to boost your security strategy.

Do you want to experience the benefits of serverless computing right now? All you need is to create a free account on our website, and you’ll gain $300 in service credits to use with any Azion product.

References:

¹ Hopkins, B. & Staten, J. (2019). A Decoder Ring For Edge Computing: How To Interpret What “Edge Computing” Is And How It Creates Value. Cambridge, MA: Forrester Research.

² LF Edge (2020). The New Open “Edge”. San Francisco, CA: The Linux Foundation.
