Pushing Computation to the Edge with FaaS
In recent years, Function-as-a-Service (FaaS) and Edge Computing have quickly risen to the forefront of many technological discussions, driven by the widespread adoption of microservices, the emergence of 5G, and the ever-increasing demand for low-latency applications and devices. Yet despite all the buzz and interest around Edge Computing, the concept remains unclear to newcomers and IT professionals alike.

To help demystify this crucial component of modern computing, this post will take an in-depth look at Edge Computing, its relationship to FaaS, and how Azion’s Edge Functions compares to other FaaS solutions on the market.

What is Edge Computing?

A common quip among IT professionals is that if you ask a dozen engineers what Edge Computing means, you’ll get a dozen different answers. This is one of the reasons why the Linux Foundation created the LFE State of the Edge, a project to facilitate consensus and cross-industry collaboration around Edge Computing. Their Open Glossary of Edge Computing defines the term as “The delivery of computing capabilities to the logical extremes of a network in order to improve the performance, operating cost, and reliability of applications and services.”

The Market Drivers for Edge Computing

LFE’s 2020 State of the Edge Report posits Edge Computing as “the third act of the Internet,” which builds on the trends toward regionalization that began during the 1990s when CDNs and regional data centers emerged as a means of enhancing delivery speed and reducing the cost of transporting data. According to LFE, the need for more bandwidth and lower latency has only grown since then due to:

  • IoT devices replacing human speed with machine speed;
  • an explosion of Internet-connected devices;
  • widespread use of automated infrastructure and clouds; and
  • increased demand for AR, VR, and other multidimensional experiences.

In addition, 3GPP’s recent completion of 5G commercial standards, which leverage Edge Computing for ultra-low-latency delivery and real-time processing of IoT data, has enabled the rollout of both commercial and private 5G networks, accelerating the adoption of Edge Computing.

As a result of these driving forces, Edge Computing is quickly becoming a massive global market. According to the Worldwide Edge Spending Guide, published by IDC last year, worldwide spending on Edge Computing will reach $250 billion in 2024.

How Will Edge Computing Transform Applications?

To fully harness the power of Edge Computing, edge-native applications and services must be capable of executing processes when and where they’re needed. FaaS enables a way to do this through the creation of event-driven functions. Each function is designed to do one thing well and scales independently of other processes in response to event triggers. Functions are stateless and, as a result, short-lived, enabling on-demand scaling for irregular workloads. In addition, lightweight functions enable lower application latency, which is needed for real-time processing.

With FaaS, companies can build and run event-driven functions on a pay-as-you-go basis, paying only for the resources used, which are fully managed by FaaS providers. This means developers only need to focus on writing code and the business logic, accelerating market availability.
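To make the model concrete, here is a minimal sketch of what such an event-driven function looks like. The handler name and event fields are illustrative, not any particular provider's API: each FaaS platform defines its own signature, but the shape is the same everywhere.

```javascript
// A stateless, event-driven function: it does one thing, reads all of
// its input from the event, and keeps nothing between invocations.
// Because no local state survives a call, the platform is free to run
// zero, one, or thousands of copies in response to event triggers.
function handleEvent(event) {
  // Everything the function needs arrives in the event payload.
  const { name = "world" } = event;

  // The result is returned to the platform, which routes it back to
  // the caller (HTTP response, queue message, etc.).
  return {
    statusCode: 200,
    body: `Hello, ${name}!`,
  };
}
```

Statelessness is what makes the pay-as-you-go billing and independent scaling described above possible: since any instance can serve any request, the provider can bill per invocation and scale each function separately from the rest of the application.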

History of Serverless-based Computing

The arrival of FaaS as it is generally understood today occurred in 2014 with the introduction of AWS Lambda. But although FaaS is still relatively new, its emergence can be understood as a culmination of other technological developments, including the rise of new service models, such as IaaS and PaaS, and cloud-native microservices architectures.

From IaaS to PaaS to FaaS

The popularization of cloud computing arose in the mid-2000s with the emergence of Infrastructure-as-a-Service, or IaaS. IaaS provides on-demand virtualized compute resources over the Internet. This business model quickly gained popularity due to its increased elasticity, but nevertheless left some resources underutilized. As noted in an article in Communications of the ACM, “Unfortunately, the burden of scaling was left for developers and system designers that typically used overprovisioning techniques to handle sudden surges in service requests.”

Platform-as-a-Service, or PaaS, emerged shortly afterward as a means of further abstracting away complexity, providing the OS, execution runtime, and middleware. This arrangement left engineers to manage only their applications and data.

PaaS enabled the configuration of autoscaling, but, as noted by InfoQ, “Even if you set up your PaaS application to autoscale, you won’t be doing this to the level of individual requests.” In contrast, FaaS enables code to be executed as lightweight ephemeral functions in response to event triggers, eliminating the need to provision dedicated servers.

As a result, FaaS can be seen as the natural evolution of the trend toward less complexity, more efficient resource use, and pay-as-you-go services.

From Monolithic to Microservices

Cloud computing brought more scalability to applications, but fully leveraging its cost and resource efficiency necessitated a new kind of application architecture: microservices. Traditional monolithic applications are built as a single unit, and must be scaled and deployed as such, each hosted with its own operating system. As such, the entire application must be scaled in order to reduce bottlenecks, resulting in wasted costs and resources.

To gain more granular control over scalability, applications must be atomized into smaller components that can be deployed and scaled independently. The release of Docker in 2013 facilitated this process by providing a way to package independent application modules, or microservices, with their dependencies into loosely coupled units that share the same OS kernel.

FaaS takes this trend even further with the creation of stateless, event-driven functions which are even more lightweight and easier to deploy and scale. As Communications of the ACM puts it, “Serverless seems to be the natural progression following recent advancements and adoption of VM and container technologies, where each step up the abstraction layers led to more lightweight units of computation in terms of resource consumption, cost, and speed of development and deployment.”

How does Azion’s Edge Functions Compare to Standard FaaS Solutions?

Edge Functions is Azion’s FaaS solution that combines the cost-effectiveness and scalability of serverless with the performance, reliability, and efficiency of Edge Computing. It enables developers to write event-driven functions that automatically execute on the edge, at the node closest to the end users making the requests.

Because of this proximity, Edge Functions are faster and more resource efficient than traditional serverless solutions such as AWS Lambda, Microsoft Azure, and Google Cloud Platform.

The biggest difference between traditional and edge-based platforms is the distribution of the network. Lambda, Azure, and GCP are concentrated in massive data centers separated by long geographic distances. Although these data centers are designed for high-volume traffic and heavy workloads, they lack the advantages of proximity and coverage.

Moreover, these traditional platforms have the disadvantage of executing functions in containers. Containers are heavyweight: they consume more memory, use more CPU cycles, and may require additional configuration. Each container image must also be bundled with all the dependencies required to execute the function properly.

Unlike container-based FaaS, Azion’s Edge Functions use V8’s sandboxing features to keep each function secure in a multitenant environment. This not only enables more efficient resource use than container-based solutions, it means that developers do not have to provision memory for each function ahead of time.
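As an illustration, functions on V8-isolate platforms are typically written in the fetch-event style: a lightweight handler that inspects an incoming request and returns a response, with no container image to build. The routing below is a hypothetical sketch, not Azion's exact API surface.

```javascript
// Sketch of an edge function in the fetch-event style used by
// V8-isolate platforms (handler and route names are illustrative).
// The function runs inside a sandboxed V8 context, so there is no
// container to provision and no image to bundle with dependencies.
function buildResponse(request) {
  const url = new URL(request.url);

  // A routing decision made at the edge node closest to the user,
  // before any round trip to a centralized origin server.
  if (url.pathname === "/hello") {
    return { status: 200, body: "Hello from the edge!" };
  }
  return { status: 404, body: "Not found" };
}
```

Because the unit of isolation is a V8 sandbox rather than a container, many such functions can share one process safely in a multitenant environment, which is where the memory and CPU savings over container-based FaaS come from.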

In addition, running serverless functions in containers results in less reliable performance. Because containers require dedicated servers, their elasticity derives from being spun up and down. When a function has not been called in a while, FaaS vendors spin down the container to conserve resources, requiring the container to be spun up yet again the next time it’s invoked.

This setup and teardown can add needless milliseconds to responses, a latency penalty known as a cold start. When Edge Functions are requested, they only need to load from disk into memory, enabling consistently high performance, even for irregular workloads and periods of unusually high activity.

Edge Functions delivers superior performance, greater ease of use, and more resource efficiency through:

  • no need to build, run and deploy containers;
  • pre-set runtimes ready to run closest to users; and
  • distributed nodes located close to end users, not in centralized data centers.

Conclusion

By omitting the use of containers and processing data closer to end users, Edge Computing enables crucial capabilities such as ultra-low latency applications and real-time analytics, which are needed for next-generation applications and services. Overall, Edge Computing reduces long-term costs of provisioning and managing infrastructure.

As noted by Forbes, “Implementing Edge Computing significantly cuts down costs of Internet bandwidth, data storage and computational power on servers, as it does most of the work on-board without the need to send and receive data from servers.”

These crucial benefits, along with the automation, elasticity, and cost-effectiveness provided by edge-native applications, are reflected in International Banker’s list of four reasons Edge Computing will be increasingly crucial moving forward:

  • Powering the next industrial revolution in manufacturing and services
  • Optimizing data capture and analysis to provide actionable business intelligence
  • Making business processes and systems more flexible, scalable, secure and automated
  • Promoting a more efficient, faster, cost-effective and easier to manage business ecosystem

With Azion’s Edge Functions, customers gain all the benefits of building and running serverless applications at the edge:

  • Scalability
  • Low Latency
  • Reliability
  • Cost Effectiveness
  • Resource Efficiency
  • Ease of Use

Edge Functions is now in beta and available to all Azion users. Create a free trial account today to put Edge Functions to work for you.