What Are the Basics of Serverless Computing, Containers, and VMs?

Luis Quezada LL. - Technical Researcher

The digital technologies market has shown high growth in the last decade as companies seek strategic partners to implement new virtualization technologies like containers and serverless.

More specifically, companies in the health, engineering, commerce, and education sectors have begun to deploy modern application architectures to achieve different objectives, such as reducing costs, increasing productivity, optimizing manufacturing processes, automating internal management, implementing zero-trust security, and creating new business models. In this way, companies carry out a digital transformation that involves not only software and hardware but also their staff at all hierarchical levels.

One decision companies must make is whether to totally or partially migrate legacy systems to achieve this digital transformation. Among the primary options employed by tech leaders and their teams are virtual machines (VMs), containers, and serverless computing. In this post, we will discuss these three types of architectures and help you understand their definitions, technical features, and how some of them are deployed at the edge.

Application Architecture Models for Digital Transformation

The rise of cloud computing brought on-demand computing services to users connected to the Internet. Between 2006 and 2008, computing resources such as storage and processing capacity began to be offered commercially to medium and large companies. Growing popularity then forced the original cloud computing model to evolve into new approaches and paradigms that provide more virtualized resources to a broader audience. Today, vast numbers of digital devices send large amounts of data at high speed, in real time, and remain connected to the network for long periods. This new reality requires virtualization with high computational capacity that is decentralized and easily scalable. Edge computing addresses these challenges by enabling new architectures.

Below, we share relevant information and insights on the three main architectures used for software development and execution, giving you a deeper technical perspective on each.

Virtualization and Virtual Machines

Virtualization is a software technology that creates virtual versions of a technological resource by taking a physical machine and distributing its capacity among smaller, isolated units. The objective of virtualization is to improve efficiency and address problems related to computer equipment, such as energy consumption and maintenance costs, while reducing the hardware required per user. Some of the most common types of virtualization are:

  • Server Virtualization: Servers have central processing units (CPUs) with multiple cores to run complex tasks. Server virtualization allows a dedicated server to be partitioned into multiple virtual servers, each with its own computational resources, IP address, and isolation level.
  • Desktop Virtualization: Desktop computers have a similar technical construction to servers, but with relatively lower performance. Desktop virtualization therefore works like server virtualization, allowing a dedicated computer to be partitioned into different virtual desktops, each with its own computational resources.
  • Network Virtualization: With network virtualization, computer hardware and software resources can be used to combine multiple physical networks into a single virtual network or divide one physical network into different virtual networks to provide independent environments for various purposes. A good example of this is the Virtual LAN (VLAN), a subdivision of a local area network (LAN) that improves the speed and performance of overloaded networks.

In the virtualization landscape, we can create and run virtual machines (VMs) on a physical computer or server, commonly referred to as the host. A virtual machine, referred to as a guest, is the emulation of a physical computer. Virtualization is carried out by a hypervisor, which manages the creation of one or more VMs on a host. The hypervisor then allows the manual or automatic installation of an operating system image (e.g., Linux or Windows) and the allocation of computational resources such as storage, processing, and memory to each VM.
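To make this allocation model concrete, here is a minimal TypeScript sketch of how a hypervisor-style scheduler might carve a host's fixed resources into guests. The types and names are hypothetical, invented purely for illustration; real hypervisors are far more sophisticated.

```typescript
// Hypothetical model of static resource allocation on a host (illustrative only).
interface Resources { cpuCores: number; memoryGiB: number; storageGiB: number; }

interface VirtualMachine { name: string; osImage: string; allocated: Resources; }

class Host {
  private free: Resources;
  readonly guests: VirtualMachine[] = [];

  constructor(total: Resources) { this.free = { ...total }; }

  // A VM's resources are reserved up front, whether or not the guest uses them.
  createVM(name: string, osImage: string, req: Resources): VirtualMachine {
    if (req.cpuCores > this.free.cpuCores ||
        req.memoryGiB > this.free.memoryGiB ||
        req.storageGiB > this.free.storageGiB) {
      throw new Error(`host cannot satisfy the allocation requested for ${name}`);
    }
    this.free.cpuCores -= req.cpuCores;
    this.free.memoryGiB -= req.memoryGiB;
    this.free.storageGiB -= req.storageGiB;
    const vm: VirtualMachine = { name, osImage, allocated: req };
    this.guests.push(vm);
    return vm;
  }
}

// A 32-core host accepts at most four guests of this size, even if they sit idle.
const host = new Host({ cpuCores: 32, memoryGiB: 128, storageGiB: 2000 });
host.createVM("web-01", "ubuntu-22.04", { cpuCores: 8, memoryGiB: 32, storageGiB: 200 });
```

The point of the sketch is the reservation step: capacity is subtracted from the host at creation time, which is exactly why idle VMs still consume resources, as discussed next.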

When creating a virtual machine, the administrator must allocate the server's computational resources according to a prediction of the application's typical use and maximum load. Most of the time, only a small fraction of these resources is actually used. Applications can also generate sudden peaks in resource usage without prior notice, which can cause a system outage, because virtual machines are unable to scale automatically. Another limitation is the difficulty of migrating VMs from one physical server to another without disruption: migrating between servers with different processors (e.g., Intel and AMD) requires temporarily shutting down the application's services.

Containers

Containers are another way to partition a computer or server into isolated units that run independently. Unlike virtual machines, containers share the host machine's kernel (the part of the operating system that mediates between software and hardware), so each container does not require the installation of an entire operating system image. Each application's container holds only the files necessary for its execution. This configuration makes containers lighter (megabytes rather than the gigabytes of a VM image) and faster: with so little to load, containers start in seconds, while virtual machines take minutes. Their small hardware footprint also makes them ideal for running microservices.
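As a concrete illustration, the sketch below uses dockerode, a popular Node.js client library for the Docker Engine API, to start a container; it assumes Docker is installed locally and its socket is accessible. Printing the kernel release from inside the container shows the host's kernel, since containers share it rather than booting their own OS.

```typescript
import Docker from "dockerode";

// Connects to the local Docker daemon (by default via /var/run/docker.sock).
const docker = new Docker();

async function main(): Promise<void> {
  // `uname -r` prints the kernel release. From inside the container it shows
  // the HOST's kernel, because containers share it instead of booting a guest OS.
  const [result] = await docker.run("alpine:latest", ["uname", "-r"], process.stdout);
  console.log(`container exited with status ${result.StatusCode}`);
}

main().catch(console.error);
```

Note that the image here is a few-megabyte Alpine filesystem, not a full OS installation, which is why the container starts in a fraction of the time a VM would need.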

On the other hand, developers must handle the creation, execution, and orchestration of containers, often through the open-source tools Docker and Kubernetes, and managing these independent units adds tasks for the company's technical team. In addition, security remains a significant concern: this architecture provides only lightweight, process-level isolation from the host and other containers, whereas with virtual machines the hypervisor and the guest's own operating system offer more robust separation.

Serverless

Serverless is a newer management paradigm for allocating virtualized resources and services. The provider performs server administration tasks such as provisioning, patching, and managing hardware resources, so software developers do not have to worry about the supporting infrastructure or back end. The client pays only for the resources the application demands at any given moment, unlike containers and virtual machines, which must be provisioned ahead of time. The serverless model is also well suited to modern development trends such as microservices: the source code for each microservice is deployed as a function hosted on the provider's servers, and these functions are invoked through APIs and executed in milliseconds.
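To show how little infrastructure appears in the code, here is a minimal sketch of a function-as-a-service handler written in the style of AWS Lambda's Node.js runtime; the event shape assumes an API Gateway-style HTTP trigger, and anything beyond that standard shape is illustrative.

```typescript
// Minimal FaaS handler in the style of AWS Lambda's Node.js runtime.
// There is no server, port, or OS configuration anywhere: the provider
// invokes this function on demand and bills only for the execution time.
interface HttpEvent {
  queryStringParameters?: { [key: string]: string | undefined };
}

export const handler = async (event: HttpEvent) => {
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```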

Containers and Serverless Computing Acting at the Edge

Edge computing is a distributed computing model that places computing resources closer to users. Unlike a centralized model such as the cloud, its servers, or edge nodes, are spread out all over the world. The larger the deployment of edge nodes, the shorter the data transfer time (known as network latency). This proximity enables new digital capabilities such as real-time data analysis and live video streaming delivery, along with the integration of new technologies such as 5G, AI, and IoT.
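A back-of-the-envelope calculation shows why proximity matters. The sketch below estimates the best-case round-trip propagation delay through optical fiber, where light travels at roughly 200,000 km/s; the distances are illustrative, and real latency adds routing, processing, and queuing overhead on top.

```typescript
// Best-case round-trip propagation delay over optical fiber.
// Light in fiber travels at roughly 200,000 km/s (about 2/3 of c),
// i.e., about 200 km per millisecond.
const FIBER_SPEED_KM_PER_MS = 200;

function roundTripMs(distanceKm: number): number {
  return (2 * distanceKm) / FIBER_SPEED_KM_PER_MS;
}

console.log(roundTripMs(8000).toFixed(1)); // ~80.0 ms to a cloud region 8,000 km away
console.log(roundTripMs(100).toFixed(1));  // ~1.0 ms to an edge node 100 km away
```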

Both serverless and container architectures can be applied at the edge. Containers at the edge have the advantage of being lightweight (compared to VMs), and because they are a mature, well-tested technology, developers can keep using the software tools they already know. The serverless model, on the other hand, is a strong candidate for executing tasks at the edge thanks to its automatic infrastructure management: applications scale with their workloads, and costs drop because you pay only for the resources used. It also offers the shortest rollout time across the network (milliseconds) for essential tasks such as updating source code or shipping a security patch.

Serverless and container architectures still face many challenges that remain open research objectives. One strategy research groups have adopted is to draw on fields such as machine learning and neural networks. These computing techniques are used for detailed data analysis, learning behaviors, and discovering patterns of interest, and they are being applied to improve computing architectures in edge networks and to solve common problems of the cloud computing model. For example, some research cases are:

  • Network Edge Cache: a study conducted by a research group from North China Electric Power University evaluated learning-based approaches to storing big data on the edge network. The study showed the benefits of this approach, with numerical simulations demonstrating significant advances in caching capacity.
  • Mitigating Cold Start: another research group, from The University of Melbourne, proposed a Reinforcement Learning (Q-Learning) agent that observes factors such as CPU utilization and invocation behavior. This approach helped determine function invocation patterns and reduce cold start frequency by preparing function instances in advance (a toy sketch of the idea follows this list).
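
To give a flavor of the cold-start idea, and not to reproduce the paper's actual algorithm, here is a toy Q-learning sketch in which an agent learns whether to pre-warm a function instance for the next time slot. The states, actions, rewards, and traffic model are all invented for illustration.

```typescript
// Toy Q-learning agent deciding whether to pre-warm a function instance.
// State: was the function invoked in the previous time slot? (0 or 1)
// Action: 0 = stay idle, 1 = pre-warm an instance for the next slot.
const ALPHA = 0.1;   // learning rate
const GAMMA = 0.9;   // discount factor
const EPSILON = 0.1; // exploration rate

const Q: number[][] = [[0, 0], [0, 0]]; // Q[state][action]

function chooseAction(state: number): number {
  if (Math.random() < EPSILON) return Math.random() < 0.5 ? 0 : 1; // explore
  return Q[state][1] > Q[state][0] ? 1 : 0;                        // exploit
}

// Invented reward: a cold start is costly; an unused warm instance mildly wasteful.
function reward(preWarmed: boolean, invoked: boolean): number {
  if (invoked) return preWarmed ? 1 : -5; // warm hit vs. cold start
  return preWarmed ? -1 : 0;              // wasted instance vs. nothing
}

// Train against a fabricated bursty workload: invocations tend to cluster.
let state = 0;
for (let step = 0; step < 10_000; step++) {
  const action = chooseAction(state);
  const invoked = Math.random() < (state === 1 ? 0.8 : 0.1);
  const nextState = invoked ? 1 : 0;
  const r = reward(action === 1, invoked);
  Q[state][action] += ALPHA * (r + GAMMA * Math.max(...Q[nextState]) - Q[state][action]);
  state = nextState;
}

// After training, the agent typically learns to pre-warm after an active slot.
Q.forEach((row, s) =>
  console.log(`state ${s}: best action = ${row[1] > row[0] ? "pre-warm" : "idle"}`));
```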

As these examples show, container-based and serverless architectures still present challenges that call for improvement. In the following section, we'll show how Azion addresses these challenges and compare our approach with the proposal offered by one of the leading cloud computing providers.

How Azion Offers a Better Serverless Architecture

Some cloud providers, like AWS, offer the benefits of containerized serverless functionality to reach a broader audience of developers. Their product, AWS Lambda, allows functions to be packaged as container images of up to 10 GB in size. This model extends container-based architecture to building and deploying heavy workloads (e.g., machine learning and other data-intensive tasks). However, given the computational resources containers inherently consume, deploying them at the edge can be inefficient for hosting simple applications.

This is one of the reasons why Azion does not employ a centralized, container-based architecture. Azion uses Google's open-source V8 engine as the basis for Edge Functions, our serverless compute product. Using V8 as the JavaScript execution engine allows us to execute functions in a multi-tenant environment that uses sandboxing to isolate applications and reduce potential security vulnerabilities. What's more, cold start problems are mitigated because serverless functions are stored on our NVMe devices, reducing latency and taking advantage of the internal parallelism of solid-state storage.
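For illustration, here is a minimal function in the fetch-event style common to V8-isolate serverless platforms, the Web-standards pattern that Edge Functions also follow; treat it as a sketch, since the exact APIs available differ from platform to platform.

```typescript
// Minimal edge function in the fetch-event style of V8-isolate platforms.
// Each request runs in a sandboxed isolate: no process or full OS boots,
// which is why start-up is measured in milliseconds rather than seconds.
addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request: Request): Promise<Response> {
  const url = new URL(request.url);
  return new Response(JSON.stringify({ path: url.pathname, ranAtEdge: true }), {
    headers: { "content-type": "application/json" },
  });
}
```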

Finally, you can modernize your applications by establishing an intelligent caching strategy on our edge computing platform, reducing the long wait times end users experience with cloud-based serverless functions. Azion provides two tools to implement this strategy. Edge Cache can be used with all solutions built on Azion's platform. Once integrated into your applications, Edge Cache reduces latency and increases the data transfer rate between the edge nodes in our distributed global network and end users: by storing a copy of your data on our edge nodes, the platform can deliver content from the node closest to each user.
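As a sketch of what such a caching strategy can look like at the application level, the function below sets standard HTTP caching headers, which edge caches consult to decide what to store and for how long; the TTL values are arbitrary examples, not recommended settings.

```typescript
// Illustrative caching policy expressed through standard HTTP headers.
// Edge caches read Cache-Control to decide what to keep and for how long.
async function handleWithCaching(request: Request): Promise<Response> {
  const response = await fetch(request); // forward the request to the origin
  const cacheable = new Response(response.body, response); // copy with mutable headers
  // Shared caches (edge nodes) may keep this for 1 hour; browsers for 1 minute.
  cacheable.headers.set("Cache-Control", "public, max-age=60, s-maxage=3600");
  return cacheable;
}
```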

A complementary tool is Azion Load Balancer. In case of incidents with your origin servers, Load Balancer ensures the availability of your apps and content by letting you select more than one origin and balance traffic between them. In addition, you can customize the host header to identify a virtual host and locate your content or apps, define a load-balancing method, and customize timeouts and error handling.

To find out how Azion’s Edge Computing Platform can improve the development and execution of your applications, contact Sales or create a free account to start using Azion’s benefits.

References

  1. Chang, Z., Lei, L., Zhou, Z., Mao, S., & Ristaniemi, T. (2018). Learn to cache: Machine learning for network Edge Cache in the big data era. IEEE Wireless Communications, 25(3), 28-35.
  2. Agarwal, S., Rodriguez, M. A., & Buyya, R. (2021, May). A Reinforcement Learning Approach to Reduce Serverless Function Cold Start Frequency. In 2021 IEEE/ACM 21st International Symposium on Cluster, Cloud and Internet Computing (CCGrid) (pp. 797-803). IEEE.
