Get More Capacity with Serverless

By providing managed resources and only charging for what is used, serverless enables companies to get more serving resources for their money.

Rachel Kempf - Editor-in-Chief

In many ways, serverless can be seen as a tool not only for abstracting away the complexity of server management, but for optimizing a company’s server usage as much as possible. The term “serverless” is often traced back to Ken Fromm’s 2012 article “Why the Future of Software and Apps Is Serverless,” which presents the idea as a natural outgrowth of the trend toward purchasing computing power in increasingly fine-grained units: from company-owned servers that would only be replaced after years of use, to renting servers by the month, hour, and second. Ultimately, Fromm predicted, compute resources would be fully managed and paid for on a task-by-task basis, which is what we have with serverless computing today.

Because serverless provides managed resources and charges only for what is actually used, it enables companies to get more serving capacity for their money. This post provides a beginner’s look at serverless computing: its evolution, its benefits and implications for enterprises, and how Azion’s Edge Functions brings serverless computing to the edge.

Evolution of Infrastructure Management

The evolution of infrastructure management is a story that has unfolded in two parts. Over the past decades, change has come not only to resource management, which has been increasingly outsourced to third parties, but to the resources themselves, which have been atomized into ever smaller units.

From In-house Data Centers to PaaS

With the advent of the Internet, digital businesses were born. Initially, content was served from personal computers. However, creating more content and serving more requests necessitated more storage space and computing power—in other words, more servers. As a result, companies have transitioned to hosting infrastructure in private data centers, colocation facilities, and the cloud.

Data Centers

Locating infrastructure in private data centers provides companies with complete control over their servers, but installing, operating, and maintaining those servers takes an enormous amount of time and technical skill. In order to deploy an application on bare-metal servers, companies need to purchase and rack hardware, install and configure the OS and software needed to serve the application, and install the application code. Companies are also responsible for operational tasks such as fixing or replacing aging hardware and upgrading software.

In addition to all this, companies need physical space for their equipment. For smaller companies, this may only require a closet with a few servers, but for larger companies, this may require a separate room or building to house server racks, run cable to connect equipment, and install networking, power, and cooling subsystems, as well as backup systems to avoid service interruptions—all of which need to be maintained and operated by the company.

Managing in-house data centers involves:

  • Purchasing and installing hardware
  • Licensing, installing, configuring, and upgrading software
  • Fixing and replacing aging hardware
  • Installing and managing networking, power, and cooling subsystems

Colocation Centers

With colocation centers, companies can pay to store servers and networking equipment off-premises in data centers owned and operated by third parties. This frees companies from having to maintain their own facilities, manage power and cooling subsystems, and run cable to connect equipment. Instead, companies using colocation facilities are only responsible for operating and maintaining the equipment they use.

Managing colocated equipment involves:

  • Purchasing and installing servers
  • Maintaining and replacing aging equipment
  • Deploying and upgrading software
  • Managing networking equipment

Cloud Computing

With the advent of cloud computing, companies no longer needed to purchase or operate servers at all; instead, on-demand virtualized resources could be leased over the Internet. Traditional cloud computing can be broken into two service models: Infrastructure-as-a-Service, or IaaS, and Platform-as-a-Service, or PaaS. With IaaS, storage, networking, servers, and virtualization are provided by the vendor as a service. PaaS further abstracts management complexities by providing not only on-demand resources but a fully managed environment, including a runtime, OS, and middleware for developing applications.

Managing cloud resources involves:

  • Configuring application runtime, OS, and middleware (with IaaS)
  • Managing software licenses
  • Configuring and managing VMs or containers
  • Configuring and managing container orchestration tools

From Bare-metal Servers to Containers

In order for infrastructure management to transition from a job requiring teams of on-site technicians to one that could be remotely executed by operations teams, computing resources needed to be virtualized. With the rise of virtual machines, companies no longer had to ship and manually install hardware; they could remotely deploy new compute instances in minutes by installing and configuring a new OS along with the software to serve the application.

Containers, like VMs, are virtualized compute resources, but they have more flexible isolation properties that allow them to share the same operating system. This enables both more efficient resource use and easier deployment, since developers do not have to install or configure a new OS for each new container. Instead, each container packages the application code and its dependencies, along with its allotted share of CPU, memory, and disk I/O, while sharing the host’s OS kernel. Configuring these resources complicates container management somewhat; however, many management tasks can be automated with orchestration tools like Kubernetes (see the sketch after the list below).

To summarize the progression:

  • Bare-metal servers must be installed and managed manually
  • VMs can be deployed and managed remotely
  • Containers can be deployed remotely, with orchestration tools to automate some management tasks
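
As a concrete illustration, the sketch below shows the kind of declarative specification an orchestration tool like Kubernetes consumes: the desired number of replicas and the per-container resource limits, which the orchestrator then works to maintain automatically. It is written here as a TypeScript object for readability, though real manifests are usually YAML; the names and values are examples, not a real deployment.

```typescript
// Illustrative Kubernetes Deployment spec, expressed as a TypeScript object.
// In practice this would be a YAML manifest; names and values are examples only.
const deployment = {
  apiVersion: "apps/v1",
  kind: "Deployment",
  metadata: { name: "example-app" }, // hypothetical application name
  spec: {
    replicas: 3, // desired number of identical containers
    selector: { matchLabels: { app: "example-app" } },
    template: {
      metadata: { labels: { app: "example-app" } },
      spec: {
        containers: [
          {
            name: "example-app",
            image: "registry.example.com/example-app:1.0.0", // hypothetical image
            resources: {
              // per-container CPU and memory limits the orchestrator enforces
              limits: { cpu: "500m", memory: "256Mi" },
            },
          },
        ],
      },
    },
  },
};

console.log(JSON.stringify(deployment, null, 2));
```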

Each step in the progression from bare-metal servers to containers abstracts away some of the configuration and management of infrastructure. In addition, each new step enables companies to make increasingly efficient use of server space. However, in doing so, companies have amassed far more virtual resources to manage. This trend has only accelerated with the rise of globalization and hyperscale digital businesses, which require numerous servers to be geographically dispersed across the world.

In an increasingly global and digital economy, managing a large enterprise might mean managing thousands or even millions of containers—a daunting task, even for companies with managed cloud services and orchestration tools. Fortunately, a new paradigm has arisen, combining highly efficient resource use with highly managed services: serverless computing.

What Does Serverless Mean?

The Cloud Native Computing Foundation’s Serverless Whitepaper defines serverless computing as “the concept of building and running applications that do not require server management. It describes a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled, and billed in response to the exact demand needed at the moment.” In doing so, serverless eliminates both the overhead from wasted resources and the need for companies to manage their own infrastructure.

Serverless platforms may include both managed services, such as Edge Cache, authentication, and databases, and a serverless compute service for building event-driven functions that are small, stateless, ephemeral, and—crucially—not dedicated to any specific server. As a result, edge computing companies can execute serverless functions at the edge location closest to end users, ensuring the lowest possible latency.
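
To make this concrete, here is a minimal sketch of a stateless, event-driven function in the service-worker style used by many edge runtimes, including Azion’s Edge Functions. The exact API surface varies by platform, and the declarations below merely stand in for globals and Fetch API types (Request, Response, URL) that the runtime would provide, so treat the details as illustrative.

```typescript
// Minimal event-driven edge function sketch (service-worker style).
// The runtime normally provides the "fetch" event; this declaration stands in for it.
interface FetchLikeEvent {
  request: Request;
  respondWith(response: Promise<Response> | Response): void;
}

declare function addEventListener(
  type: "fetch",
  handler: (event: FetchLikeEvent) => void
): void;

addEventListener("fetch", (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request: Request): Promise<Response> {
  // Stateless by design: everything needed arrives with the request,
  // so the platform can run this on whichever edge node is closest.
  const url = new URL(request.url);
  return new Response(`Hello from the edge! You asked for ${url.pathname}`, {
    headers: { "content-type": "text/plain" },
  });
}
```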

What Are the Implications of “Going” Serverless for Enterprises?

Going serverless is not only cost-effective and resource-efficient, it also enables more efficient programming. As Forrester explains in its report Serverless Development Best Practices, “Serverless eliminates the need for AD&D pros to write and manage dozens of XML config files or adopt infrastructure as code strictly to support operations.” [1] Serverless programming thus means less code, which in turn means a smaller attack surface and fewer bugs.

In addition, less code and less configuration result in a streamlined workflow with fewer operational tasks. This enables agile business practices like DevOps, which combines development and operations teams. Although DevOps is often discussed in the context of container design and management, serverless enables companies to take DevOps to a whole new level.

A recent blog post on the website DevOps.com promotes serverless as a means for supercharging DevOps culture through:

  • Automating DevOps using infrastructure-as-code (see the sketch after this list)
  • Easily switching between versions with minimal service interruptions
  • Enabling teams to work across a variety of locations and environments without significant impact on other teams
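
As a sketch of what infrastructure-as-code looks like in a serverless setting, the TypeScript below declares a function and its trigger as plain, reviewable code. Real tools such as Pulumi or the Serverless Framework follow the same declarative idea; the deploy() helper and its option names here are invented for illustration.

```typescript
// Hypothetical infrastructure-as-code sketch: the function and its trigger
// are declared as code, so deployments are repeatable and reviewable.
// deploy() and the FunctionSpec fields are invented for illustration.
interface FunctionSpec {
  name: string;
  route: string; // HTTP route that triggers the function
  handler: (req: { path: string }) => Promise<string>;
}

async function deploy(spec: FunctionSpec): Promise<void> {
  // A real tool would call the provider's API here; we just log the intent.
  console.log(`Deploying function "${spec.name}" on route ${spec.route}`);
}

const greeter: FunctionSpec = {
  name: "greeter",
  route: "/hello",
  handler: async (req) => `Hello from ${req.path}`,
};

deploy(greeter);
```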

Forrester’s report on Serverless Best Practices further elaborates on the positive impact serverless can have on companies’ DevOps culture:

  • Deploy apps with minimal effort; they scale automatically
  • Roll out updates and new features in minutes
  • Easily run business experiments
  • Scale to zero, with no cost for idle functions
  • Refine applications easily, with less risk of destabilization [1]

As a result, serverless computing is an increasingly popular option; in a recent blog post, Forrester predicts that 25% of developers will use serverless functions regularly by the end of 2021.

Azion Brings Serverless Computing to the Edge

With Azion’s Edge Functions, businesses get all the benefits of serverless, along with the power of edge computing. Unlike serverless solutions such as AWS Lambda or Microsoft Azure Functions, which deliver serverless functions from hyperscale data centers located far away from end users, Edge Functions are executed at the point of presence closest to end users, resulting in lower network latency than cloud vendors’ solutions.

In addition, Edge Functions do not run in containers, as cloud vendors’ functions typically do. With container-based platforms, dedicated resources are needed to keep functions warm, so vendors spin functions down after a period of inactivity. When a function is called again, its container must be spun back up, producing the added application latency known as a cold start. Running functions in containers also means customers must allocate memory to each function, requiring additional configuration and introducing the possibility of overprovisioning.
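
The toy TypeScript model below illustrates the effect: the first call pays a simulated container boot, while subsequent calls reuse the warm instance. The timings are invented for intuition only and are not measurements of any real platform.

```typescript
// Toy model of cold starts: invented timings, for intuition only.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

let warmInstance: ((name: string) => string) | null = null;

async function invoke(name: string): Promise<string> {
  let instance = warmInstance;
  if (instance === null) {
    await sleep(800); // simulated container boot: the "cold start"
    instance = (n: string) => `Hello, ${n}!`;
    warmInstance = instance; // keep the instance warm for later calls
  }
  return instance(name); // warm calls skip the boot entirely
}

async function main() {
  for (const call of ["first", "second", "third"]) {
    const start = Date.now();
    await invoke(call);
    console.log(`${call} call took ~${Date.now() - start} ms`);
  }
}

main();
```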

In contrast, Edge Functions run in a multitenant environment built on Google’s V8 engine, which sandboxes each function so that tenants stay isolated from one another. As a result, Edge Functions have zero cold starts, require less configuration, and use resources more efficiently than container-based solutions.
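
For intuition about isolate-style multitenancy, the sketch below uses Node’s built-in vm module to give each tenant’s code its own context within a single process. To be clear, vm is not a security boundary and is not what Azion uses; it merely illustrates the idea of many isolated execution contexts sharing one runtime, which V8 isolates provide with real sandboxing.

```typescript
// Illustration only: Node's vm module mimics the *shape* of isolate-based
// multitenancy (many contexts, one process) but is NOT a secure sandbox.
import * as vm from "node:vm";

function runTenantCode(tenant: string, code: string): unknown {
  // Each tenant gets its own global object; none can see the others' state.
  const context = vm.createContext({ tenant });
  return vm.runInContext(code, context, { timeout: 50 }); // cap CPU time
}

console.log(runTenantCode("alice", `"Hi from " + tenant`));
console.log(runTenantCode("bob", `"Hi from " + tenant`));
```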

Edge Functions is now in beta, enabling all Azion users to experience the benefits of serverless computing. New users can create a free account on our website and gain $300 of service credits to use with Edge Functions or any other Azion products.

References

[1] Hammond, J. S., & Rymer, J. R. (2019). Serverless Development Best Practices (pp. 3-4, Rep.). Cambridge, MA: Forrester Research.
