Containers and serverless are two options for deploying microservices, the architecture used in modern applications. Microservices provide the needed agility and performance for today’s applications and are optimized for the cloud, making them crucial to a business’s growth and readiness for the future as 5G and edge infrastructure deployments become increasingly prevalent. In addition, cloud adoption has played a key role in adapting to the increase in remote work and online shopping due to the COVID-19 pandemic, making cloud-native architectures like containers and serverless more important than ever to contemporary businesses. This post will discuss the similarities of the two architectures, their defining characteristics, and how they compare for cost, resource use, performance, and other considerations.
Similarities between serverless and containers
Although serverless and containers can be used together, they are most frequently discussed as two opposing options for breaking down monolithic applications or building cloud-native microservices applications. However, serverless and containers have more in common than their use in distributed architectures. A 2019 article from The New Stack neatly summarizes other key similarities between containers and serverless functions:
- Designed for small, independent tasks
- Can be deployed in seconds
- Use APIs to integrate with external resources
- Do not have built-in persistent storage
- Can be used to build immutable infrastructure
Despite these similarities, however, developers debating between the two options for microservices architecture would do well to consider the two concepts separately to fully understand their respective benefits and drawbacks.
What are containers?
Containers are units of software that package code with its underlying dependencies so it can be moved from one computing environment to another without affecting its execution. Containers are similar to VMs in that they provide a way to isolate applications that run on the same host. However, unlike VMs, which each contain their own operating system, containers share the same OS kernel, resulting in a more lightweight unit that uses fewer resources and starts faster.
When it comes to containers, two open-source tools are used so ubiquitously that their names are almost synonymous with container-based architecture: Docker and Kubernetes.
Docker is the open-source platform for building containers that sparked their widespread adoption and is considered the industry standard. Docker’s website states that Docker Engine creates “a universal packaging approach that bundles up all application dependencies inside a container which is then run on Docker Engine.” Docker containers are available for both Windows and Linux-based applications and are designed to ensure that containerized applications always run the same, regardless of infrastructure.
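As a minimal sketch of this packaging approach (the base image, file names, and start command are illustrative assumptions, not taken from Docker’s documentation), a Dockerfile declares everything the application needs so the resulting image runs the same on any host with Docker Engine:

```dockerfile
# Hypothetical example: bundle a small Python web service with its dependencies
FROM python:3.12-slim

WORKDIR /app

# Install dependencies in their own layer so rebuilds can reuse the cache
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and declare how the container starts
COPY app.py .
CMD ["python", "app.py"]
```

Building this with `docker build` and running the image then behaves identically on a laptop, a CI runner, or a production host, because the image carries its dependencies with it.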
Kubernetes is another open-source tool for containers, but unlike the Docker platform, it has a very narrow focus: orchestration. Since microservices are vastly more complex than monolithic applications, their day-to-day management is much more involved. Kubernetes allows developers to automate aspects of deployment, scaling, and other tasks to reduce the operational burden of managing containers.
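For example, a Kubernetes Deployment manifest (the names and values here are placeholders, not from any real cluster) declares a desired state, such as the number of container replicas, and Kubernetes works continuously to maintain it, restarting failed containers and rescheduling them as needed:

```yaml
# Hypothetical Deployment: ask Kubernetes to keep three replicas of the
# container image running at all times
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: my-service:1.0
          ports:
            - containerPort: 8000
```

Scaling then becomes a matter of changing `replicas` (or attaching an autoscaler) rather than manually starting and stopping containers.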
What is serverless?
Definition and characteristics
Serverless uses a pay-as-you-go model for running distributed applications on servers managed by the serverless provider. Unlike containers, which are dedicated to specific machines and packaged with individual shares of CPU, memory, and other resources which must be provisioned ahead of time, serverless functions execute when and where they’re needed in response to pre-programmed events, such as a change in a database or an HTTP request. As such, they scale up and down automatically, allowing companies to pay for resources on a per-use basis.
A 2019 Thoughtworks article describes six traits of serverless:
- Low barrier-to-entry: without the need for server management, it’s easy to get code up and running, resulting in a fast time to market
- Hostless: not dedicated to a specific host or server
- Stateless: functions are ephemeral, so nothing can be stored in memory
- Elasticity: automatically scales up and down
- Distributed: deployment units are small and distributed by default
- Event-driven: triggered by events and loosely coupled
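A function with these traits can be sketched in a few lines. The following is a minimal illustration in the style of an AWS Lambda handler; the event shape and field names are assumptions for the example, not any provider’s actual API:

```python
import json

# Sketch of a serverless function: the platform invokes it only when the
# triggering event fires (here, an HTTP request), and nothing persists in
# memory between invocations.
def handler(event, context):
    # Assumed event shape: an HTTP request whose body is a JSON payload
    # with an optional "name" field.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Because the function is stateless and event-driven, the provider can run any number of copies in parallel and scale to zero when there is no traffic.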
Origins and business drivers
One of the earliest discussions of serverless comes from a 2012 article by Ken Fromm, “Why the Future of Software and Apps is Serverless.” In it, Fromm lays out the trends driving serverless adoption, including the rise of cloud computing and distributed architectures. Fromm notes that, “Developers working in a distributed world are hard pressed to translate the things they’re doing into sets of servers. Their worldview is increasingly around tasks and process flows, not applications and servers—and their units of measures for compute cycles is in seconds and minutes, not hours.”
In addition to the rise of cloud computing, microservices, and backend-as-a-service, Forbes lists two other trends currently driving serverless adoption. First, it notes the increased use of APIs, which serve in serverless architecture as the glue holding services together. Second, it notes a shift in how teams are organized as companies transition toward continuous integration and continuous delivery (CI/CD), resulting in DevOps teams that bridge the gap between operations and development. With serverless, Forbes states, this trend moves even further, where “Except a few lines of code, neither developers nor operations have anything to manage, configure, tweak, and optimize.”
As serverless compute services in part emerged as a result of the increasing trend toward managed infrastructure, it is often confused with other cloud models that provide managed services:
- PaaS: Platform-as-a-service vendors provide software tools, infrastructure, and operating systems. This is similar to serverless compute services in that developers only have to write application code, but with PaaS, applications are not ephemeral; they still require dedicated servers to keep them running all the time, which must be provisioned by the client.
- IaaS: Infrastructure-as-a-service provides automated and scalable compute resources, but the operating system and database are not managed by the provider. This is different from serverless compute services, where functions are stateless and the provider manages everything but the application code.
- SaaS: Software-as-a-service refers to licensing out-of-the-box software over the Internet. In this case, the provider manages everything, from the infrastructure supporting the application to the application itself.
Comparing serverless and containers
Although serverless and containers share several benefits, such as fast deployment, support for CI/CD, and their suitability for the cloud, there are notable differences in their use of resources, costs, security, performance, and agility. Considering each of these metrics separately will give a good sense of the pros and cons of each model.
Resource use
When compared to serverless, which scales up and down automatically, containers use considerably more resources. Even with orchestration tools like Kubernetes, companies must choose between over- and under-provisioning (in other words, between cost and performance). In fact, a 2020 Datadog study found that over 45% of companies use less than 30% of requested CPU and memory.
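This provisioning trade-off is visible in the resource requests and limits that Kubernetes lets teams set per container (the values below are illustrative, not recommendations):

```yaml
# Illustrative fragment of a container spec: the scheduler reserves the
# "requests" amounts whether or not the workload uses them, which is how
# requested CPU and memory end up sitting idle
resources:
  requests:
    cpu: "500m"       # half a core reserved up front
    memory: "256Mi"
  limits:
    cpu: "1"          # throttled beyond one core
    memory: "512Mi"   # terminated if usage exceeds this
```

Request too much and the reserved capacity is wasted; request too little and the workload is throttled or evicted under load.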
In addition, serverless functions are smaller than containers. Since they are not packaged with their dependencies, but share resources with other functions (a model known as multitenancy), serverless allows for more efficient resource use than containers.
Cost
One of the key differences between serverless and containers is pricing. While serverless is pay-as-you-go, companies that use containers pay for resources ahead of time. This means serverless has lower upfront costs, another reason for its low barrier to entry. As discussed above, it can also reduce operational overhead, both through efficient resource use and less time spent managing resources, particularly for tasks with irregular workloads. However, autoscaling can make it difficult to gauge costs ahead of time. Serverless platforms that do not provide tools for monitoring or rate limiting can lead to unexpectedly high bills, particularly in the case of malicious traffic, such as brute force attacks.
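A back-of-envelope calculation shows why pay-per-use favors irregular, low-traffic workloads. All rates below are illustrative assumptions, not any provider’s actual prices:

```python
# Illustrative pay-per-use pricing (assumed rates, not a real price list)
PRICE_PER_MILLION_REQUESTS = 0.20    # dollars per million invocations
PRICE_PER_GB_SECOND = 0.0000167      # dollars per GB-second of compute
ALWAYS_ON_INSTANCE_PER_MONTH = 35.0  # dollars for one small container host

def serverless_monthly_cost(requests, avg_duration_s, memory_gb):
    """Estimate one month of pay-as-you-go function cost."""
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = requests * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# An irregular workload: 2 million requests/month, 100 ms each, 128 MB
cost = serverless_monthly_cost(2_000_000, 0.1, 0.125)  # well under $1
```

At this volume the function costs a fraction of an always-on instance, but the same linear scaling means a traffic spike, or a brute force attack, multiplies the bill with it, which is why monitoring and rate limiting matter.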
Performance
Containers run all the time, making them better suited for long-running processes. However, keeping containers running for short-lived processes and irregular workloads is resource intensive and costly. As a result, orchestration tools like Kubernetes spin containers down after periods of inactivity, resulting in a lag of several seconds, known as a “cold start,” when they are spun up again. Since users are known to abandon sites with poor performance, these few seconds can be crucial to retail conversions and sales; for low-latency and mission-critical applications that require fast, reliable performance, a long and unpredictable delay is unacceptable.
Security
Because serverless providers manage backend security for their customers, serverless customers are responsible only for securing the front end of their application. In this sense, serverless customers have fewer attack vectors to guard against than if their application ran in containers. However, the isolation properties of containers provide a degree of security that is not inherent to multitenant serverless architecture; as such, serverless providers who do not take care to isolate customers’ applications put them at risk.
Agility and portability
Both serverless and containers are much more agile than legacy applications, as they allow for CI/CD, rapid deployment, and greater elasticity. However, the simplicity of working with serverless, its “low barrier to entry,” makes it easier to develop applications, resulting in faster time to market. In addition, serverless scales automatically, although it should be noted that some serverless compute services, such as AWS Lambda, run inside of containers and require clients to apportion memory for each function, requiring some provisioning and guesswork in order to scale.
However, portability can be an issue for serverless customers. Cloud companies that tightly couple functions to other services put clients at risk of vendor lock-in. Since serverless provides customers with less control over their tools and programming languages, moving to another provider may not be possible without rewriting code.
With automatic scaling, fast deployment, low barrier to entry, efficient resource use, high performance, and cost-effective pricing, serverless is an ideal solution for:
- irregular workloads
- simple, short-running processes
- low-latency and IoT applications
- reducing operational costs
- quickly releasing new features
- breaking up monolithic applications
In addition, Edge Functions, Azion’s product for building serverless functions, addresses many of the challenges of serverless functions. Rather than running serverless functions in containers, like AWS Lambda and other cloud providers, Edge Functions combines security and performance by running in a multitenant environment where each function is isolated in a secure sandbox using V8 Isolate. Azion products are built with open standards to avoid vendor lock-in and allow customers to easily transition between our platform and other providers’. Finally, Edge Functions allows for monitoring through Real Time Metrics and can be configured to log usage or set up rate limiting to avoid unexpectedly high bills and ensure functions do not scale out of control. In the next post in this series, we’ll discuss best practices for serverless applications, including ideal use cases, development practices, and finding a serverless provider.