In recent years, serverless and edge computing have quickly risen to the forefront of technology discussions, driven by the widespread adoption of microservices, the emergence of 5G, and the ever-increasing demand for better user experiences. Despite all the interest, however, both concepts remain unclear to newcomers and IT professionals alike.
To help demystify this crucial component of modern computing, this post will take an in-depth look at edge computing, its relationship to serverless, and how Azion’s Edge Application compares to other serverless compute services on the market.
What is Edge Computing?
A common quip among IT professionals is that if you ask a dozen engineers what edge computing means, you’ll get a dozen different answers. This is one of the reasons why the Linux Foundation created LF Edge’s State of the Edge, a project to facilitate consensus and cross-industry collaboration around edge computing. Its Open Glossary of Edge Computing defines the term as “The delivery of computing capabilities to the logical extremes of a network in order to improve the performance, operating cost, and reliability of applications and services.”
The Market Drivers for Edge Computing
LF Edge’s 2020 State of the Edge report positions edge computing as “the third act of the Internet,” building on the trend toward regionalization that began in the 1990s, when CDNs and regional data centers emerged as a means of improving delivery speed and reducing the cost of transporting data. The need for more bandwidth and lower latency has only grown since then due to:
- our digital life, including work, communication, video streaming and gaming;
- an explosion of Internet-connected devices; and
- widespread use of cloud infrastructure and SaaS.
In addition, the recent completion of 5G commercial standards, which leverage edge computing for ultra-low-latency delivery and real-time processing of data, has enabled the rollout of both commercial and private 5G networks, accelerating the adoption of edge computing.
As a result of these driving forces, edge computing is quickly becoming a massive global market. According to the Worldwide Edge Spending Guide, published by IDC last year, worldwide spending on edge computing will reach $250 billion in 2024.
How Will Edge Computing Transform Applications?
To fully harness the power of edge computing, edge-native applications must be able to execute serverless compute logic when and where it’s needed. With serverless compute, customers do not operate or manage the underlying infrastructure, which in a highly distributed network architecture such as edge computing would be nearly impossible anyway. Combining edge computing with serverless makes edge resources easy to consume and puts market leaders and new entrants on equal technological footing.
With serverless and edge computing, customers can build and run microservices or event-driven functions that are faster, cheaper, more reliable, and locally compliant. Each function is designed to do one thing well and scales independently of other processes in response to event triggers. Functions can be stateful or stateless, enabling on-demand scaling for irregular workloads. In addition, executing code on a purpose-built edge-native runtime enables ultra-low-latency applications and a wide range of innovative use cases.
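The event-driven model described above can be sketched as a minimal, single-purpose function. The event shape and handler below are illustrative assumptions, not the exact API of any particular edge platform:

```typescript
// Hypothetical request/response shapes; real edge runtimes
// (typically Service Worker-style) expose richer objects.
interface EdgeRequest {
  url: string;
  headers: Record<string, string>;
}

interface EdgeResponse {
  status: number;
  body: string;
}

// A stateless function that does one thing: localize a greeting using
// a country header assumed to be injected by the edge node. The platform
// invokes it once per request and scales instances independently of
// any other service.
function handleRequest(request: EdgeRequest): EdgeResponse {
  const country = request.headers["x-client-country"] ?? "unknown";
  return {
    status: 200,
    body: `Hello from the edge, visitor from ${country}`,
  };
}
```

Because the function owns no server and holds no cross-request state, the platform can run as many or as few instances as current traffic requires.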
With serverless, companies can build and run functions at the edge, paying only for the resources they use. Developers can focus solely on writing code, which accelerates the launch of new applications and features. When edge workloads must run on remote devices or on-premises infrastructure, zero-touch orchestration becomes a key component, letting customers deploy and manage edge workloads at scale while keeping highly distributed edge nodes fully managed through a centralized control panel or APIs.
History of Serverless-based Computing
Serverless compute, as it is generally understood today, builds on concepts dating back more than 20 years, but it gained mainstream popularity with the introduction of AWS Lambda in 2014. Although its popularity is relatively recent, its emergence is best understood as the culmination of other technological developments, including the rise of new service models, such as IaaS and PaaS, and microservices architectures.
From IaaS to PaaS to Serverless Compute
The popularization of cloud computing arose in the mid-2000s with the emergence of Infrastructure-as-a-Service, or IaaS. IaaS provides on-demand virtualized compute resources over the Internet. This business model quickly gained popularity due to its increased elasticity, but nevertheless left some resources underutilized. As noted in an article in Communications of the ACM, “Unfortunately, the burden of scaling was left for developers and system designers that typically used overprovisioning techniques to handle sudden surges in service requests.”
Platform-as-a-Service, or PaaS, emerged shortly afterward as a means of further abstracting away complexity, providing the OS, execution runtime, and middleware. This left engineers to manage only their applications and data.
PaaS enabled the configuration of autoscaling, but, as noted by InfoQ, “Even if you set up your PaaS application to autoscale, you won’t be doing this to the level of individual requests.” In contrast, serverless compute enables code to be executed as lightweight ephemeral functions in response to event triggers, eliminating the need to provision dedicated servers.
As a result, serverless compute can be seen as the natural evolution of the trend toward less complexity, more efficient resource use, and pay-as-you-go services.
From Monolithic to Microservices
Cloud computing brought more scalability to applications, but fully leveraging its cost and resource efficiency necessitated a new kind of application architecture: microservices. Traditional monolithic applications are built as a single unit, hosted on its own operating system, and must be scaled and deployed as such. To reduce bottlenecks, the entire application must be scaled, wasting cost and resources.
To gain more granular control over scalability, applications must be atomized into smaller components that can be deployed and scaled independently. The release of Docker in 2013 facilitated this process by providing a way to package independent application modules, or microservices, with their dependencies into loosely coupled units that share the same OS kernel.
Serverless compute takes this trend even further with event-driven functions, which are even more lightweight and easier to deploy and scale. Serverless is the natural next step because no one can manage a highly distributed network of computers, such as an edge network, manually or with traditional tools. Serverless provides a full abstraction layer, so applications can be deployed wherever they need to run without concern for the underlying infrastructure.
How does Azion’s Edge Application Compare to Standard Serverless Compute Solutions?
Edge Application is Azion’s serverless compute service, which delivers the efficiency, cost-effectiveness, and scalability of serverless with the performance and reliability of edge computing. Combined with Edge Functions, it enables developers to execute their own code on all our globally available edge locations, or on their own remote-device, on-premises, or multi-cloud infrastructure via our Edge Orchestrator.
Because of our purpose-built edge computing runtime and the close proximity of edge nodes to end users, Edge Functions are faster and more resource-efficient than traditional serverless compute services such as AWS Lambda, Azure Functions, and Google Cloud Functions.
The biggest difference between traditional platforms and edge-based platforms is how the network is distributed. Lambda, Azure, and GCP concentrate compute in massive data centers separated by long geographic distances. Although these data centers are designed for high-volume traffic and a torrent of workloads, they lack the advantage of proximity and coverage: you must choose a single data center to run your functions, and every user connects to that exact location, no matter where they are.
Moreover, these traditional platforms have the disadvantage of executing functions in containers. Containers are a heavyweight cloud-native technology: they consume more memory, burn more CPU cycles, and add operational complexity at scale. And, of course, you pay for that inefficiency.
Unlike container-based serverless compute, Azion’s Edge Functions use Azion Cells, our proprietary runtime built on top of V8’s sandboxing features, to keep each function secure in a multitenant environment. This not only enables more efficient resource use than container-based solutions, it means that developers do not have to provision memory for each function ahead of time.
In addition, running serverless functions in containers results in less reliable performance. Because containers require dedicated servers, their elasticity comes from being spun up and down. When a function has not been called in a while, serverless compute providers spin down its container to conserve resources, so it must be spun up again the next time it’s invoked. During peak times, this can easily impair the user experience.
This scaling up and down adds significant latency to responses, a penalty known as a cold start. Edge Functions that haven’t been executed recently and are not yet in memory only need to load from a modern NVMe disk when requested, enabling consistently high performance and reliability, even for irregular workloads and during periods of peak usage.
Edge computing enables crucial capabilities, such as ultra-low-latency applications and real-time data processing, which are needed for next-generation applications and services. Overall, serverless at the edge is faster, cheaper, and more reliable than traditional serverless compute services, and it helps you comply with local regulations wherever you operate.
At Azion, we make it easier to build better applications faster. You can run your workloads on Azion’s globally distributed edge network, or wherever you need them: on remote devices, on-premises, or in multi-cloud infrastructure.
As noted by Forbes, “Implementing edge computing significantly cuts down costs of Internet bandwidth, data storage and computational power on servers, as it does most of the work on-board without the need to send and receive data from servers.”
These key benefits, along with the automation, elasticity, and cost-effectiveness provided by edge-native applications, are reflected in International Banker’s list of four reasons edge computing will be increasingly crucial moving forward:
- Powering the next industrial revolution in manufacturing and services
- Optimizing data capture and analysis to provide actionable business intelligence
- Making business processes and systems more flexible, scalable, secure and automated
- Promoting a more efficient, faster, cost-effective and easier to manage business ecosystem
With Azion’s Edge Functions, customers gain all the benefits of building and running serverless applications at the edge:
- Ease of Use
- Low Latency
- Cost Effectiveness
- Resource Efficiency
Edge Functions is available to all Azion users. Create a free trial account today to put Edge Functions to work for you.