Azion Introduces Layer 7 Load Balancer

Let’s talk load balancers. Load balancing has been essential to network traffic management since the 1990s. It’s the tool that makes sure your users are always getting the fastest connection possible, and that no single server on your network ever receives more traffic than it can handle. In this blog, we’ll talk about what load balancers are, how they work, and some of the different approaches to load balancing out there.

A load balancer is similar to a reverse proxy. That means it acts as a proxy for the server in client-server connections, as opposed to a forward proxy, which acts as a proxy for the client. When a client device attempts to connect to a service, a load balancer for that service accepts the connection and then decides which of its servers to forward the connection to, based on the rules of the load balancing algorithm it was programmed with (more on that in a moment). So when you click on Azion.com, your computer is actually connecting to one of our load balancers. The load balancer picks a server that has the content your computer needs; that server sends its response to the load balancer, which forwards it along to your computer to complete the request. All of this happens behind the scenes, and your computer and our server never communicate directly. To the computer, the load balancer appears to be the server it's drawing data from, while the server perceives the load balancer as the client, with neither ever aware that the other exists.
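
As a rough illustration of that flow, here is a minimal sketch using only the Python standard library. The backend addresses are hypothetical placeholders, and a real load balancer would also handle health checks, TLS, timeouts, connection pooling, and error handling.

```python
# A minimal sketch of the flow described above, using only the Python
# standard library. The backend addresses are hypothetical placeholders.
import http.client
from http.server import BaseHTTPRequestHandler, HTTPServer
from itertools import cycle

# Hypothetical pool of origin servers sitting behind the load balancer.
BACKENDS = cycle([("10.0.0.1", 8080), ("10.0.0.2", 8080)])

class LoadBalancerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 1. The client's request arrives here, at the load balancer.
        host, port = next(BACKENDS)               # 2. Pick a backend server.
        upstream = http.client.HTTPConnection(host, port, timeout=5)
        upstream.request("GET", self.path, headers=dict(self.headers))
        response = upstream.getresponse()         # 3. The backend answers the proxy...
        body = response.read()
        upstream.close()

        # 4. ...and the proxy relays that answer to the client, which
        # never talks to the backend directly.
        self.send_response(response.status)
        for name, value in response.getheaders():
            if name.lower() not in ("transfer-encoding", "connection"):
                self.send_header(name, value)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), LoadBalancerHandler).serve_forever()
```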

This may sound like the load balancer is an unnecessary middle man, but the control that the load balancer provides in directing traffic is vital for maintaining reliable and efficient connections. Without a load balancing algorithm to regulate traffic, a connection could be made to a faulty server, or too many connections could be made to a single server at one time, causing it to crash. With a load balancer in play, traffic can be spread more evenly among the available servers, preventing any single server from becoming overtaxed.

There are two principal types of load balancers, based on the OSI layer at which they operate. Layer 4 load balancers operate at the Transport Layer, the layer responsible for coordinating data transfers. They also draw on Layer 3, the Network Layer, where routers and IP addresses operate. L4 load balancers were the original load balancers, and they're still in popular use today. These machines regulate traffic to their servers by reading the TCP port numbers and IP addresses of incoming connections, then rewriting the destination IP address to that of the chosen server in order to forward the connection along. This is a very fast and very simple way of handling load balancing. Since the load balancer only looks at addresses and ports, it never needs to inspect the contents of the data being transferred, saving precious time when establishing connections. Its inability to see the data also makes security simpler: an L4 load balancer is a far less attractive target for attackers, since it never handles any information worth stealing.
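
For illustration only, the sketch below approximates L4 behavior in user space with two hypothetical backends. A real L4 device rewrites packet headers in the network path (via NAT or direct server return) rather than terminating TCP connections like this, but the defining property is the same: the balancer picks a backend from connection-level information only and never parses the payload.

```python
# User-space approximation of L4 forwarding; backend addresses are placeholders.
import socket
import threading

BACKENDS = [("10.0.0.1", 8080), ("10.0.0.2", 8080)]  # hypothetical pool

def pipe(src, dst):
    """Copy raw bytes from src to dst without ever interpreting them."""
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    except OSError:
        pass                      # one side went away; stop relaying
    finally:
        src.close()
        dst.close()

def handle(client_sock, client_addr):
    # The only facts consulted are the client's IP and port -- pure L4 data.
    backend = BACKENDS[hash(client_addr) % len(BACKENDS)]
    upstream = socket.create_connection(backend)
    # Relay bytes in both directions; the contents stay opaque to us.
    threading.Thread(target=pipe, args=(client_sock, upstream), daemon=True).start()
    threading.Thread(target=pipe, args=(upstream, client_sock), daemon=True).start()

def serve(listen_port=9000):
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", listen_port))
    listener.listen()
    while True:
        handle(*listener.accept())

if __name__ == "__main__":
    serve()
```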

A more modern approach to load balancing targets Layer 7, the Application Layer, instead of Layer 4 or 3. Layer 7 is the layer most users think of as the internet: the surface layer, containing the data we actually send and receive. The biggest differentiator between L7 and L4 load balancers is that instead of reading only IP addresses, L7s read the actual data being sent to their servers. This gives L7s access to far more information than their L4 counterparts, allowing them to make smart load balancing choices that an L4 simply doesn't have the data to make. Reading that data takes more work, requiring more processing power, and it creates a point of vulnerability to protect: unlike an L4, an L7 load balancer handles the contents of your traffic, which is exactly the kind of data you don't want stolen. Both L4s and L7s are widely used, and each has its own set of algorithms that control how traffic is directed.
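
To make the difference concrete, here is an invented contrast between what each kind of balancer can base a decision on; the pools and routing rules are purely illustrative assumptions.

```python
# An illustrative contrast between what each kind of balancer can see.
STATIC_POOL = ["10.0.1.1", "10.0.1.2"]               # tuned for static assets
GENERAL_POOL = ["10.0.2.1", "10.0.2.2", "10.0.2.3"]  # everything else

def pick_backend_l4(client_ip: str, client_port: int) -> str:
    # An L4 balancer sees only addresses and ports -- it cannot tell an
    # image download from a checkout request.
    pool = STATIC_POOL + GENERAL_POOL
    return pool[hash((client_ip, client_port)) % len(pool)]

def pick_backend_l7(method: str, path: str, headers: dict) -> str:
    # An L7 balancer has already read the HTTP request, so it can route
    # on the URL, the headers, cookies, and so on.
    if path.startswith("/static/") or path.endswith((".png", ".jpg", ".css")):
        return STATIC_POOL[hash(path) % len(STATIC_POOL)]
    return GENERAL_POOL[hash(path) % len(GENERAL_POOL)]

print(pick_backend_l4("203.0.113.7", 51834))
print(pick_backend_l7("GET", "/static/logo.png", {}))
```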

Algorithms:

Each load balancer follows a simple set of rules, called a load balancing algorithm, that determines which server it sends traffic to. Layer 7 load balancers are able to carry out more sophisticated instructions, so their algorithms can vary widely, but the strength of Layer 4 load balancers lies in their simplicity, so most are programmed with one of the following algorithms.

Round Robin:

Probably the simplest approach, a Round Robin algorithm distributes server requests cyclically, making sure each server in its system has received a request before sending another request to the first server. This is an easy way to ensure a measure of equal distribution among servers, but it’s far from perfect.
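
As a minimal sketch with placeholder server names, round robin can be as simple as cycling through a list:

```python
# A minimal round-robin selector over three placeholder servers.
from itertools import cycle

servers = cycle(["server-a", "server-b", "server-c"])

def next_server():
    # Each call returns the next server in the ring, wrapping around
    # after the last one, so requests are spread evenly by count.
    return next(servers)

# Nine requests land a, b, c, a, b, c, a, b, c:
print([next_server() for _ in range(9)])
```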

Weighted Round Robin:

One of Round Robin's flaws is that it assumes all servers can process the same number of requests, which can overload the smaller servers in a system while the largest servers sit partially unused. A Weighted Round Robin algorithm compensates for this problem by labeling each server in a system with a weight proportionate to its request capacity. So if there are three servers in a system, server A has five times the capacity of server B, and server C has twice server B's capacity, then a Weighted Round Robin would send 5 requests to server A, 1 request to B, and 2 to C, then send another 5 to A, 1 more to B, and so on.
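
Here is a sketch of that example (weights of 5, 1, and 2) with placeholder names; real implementations usually interleave the weighted picks more smoothly rather than sending them in runs.

```python
# Weighted round robin with the weights from the example above.
from collections import Counter
from itertools import cycle

WEIGHTS = {"server-a": 5, "server-b": 1, "server-c": 2}

# Expand the pool so each server appears as many times as its weight,
# then cycle through it like an ordinary round robin.
weighted_pool = [name for name, weight in WEIGHTS.items() for _ in range(weight)]
selector = cycle(weighted_pool)

# Over any 8 consecutive requests, A receives 5, B receives 1, C receives 2.
print(Counter(next(selector) for _ in range(8)))
```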

Least Connections:

Another issue with Round Robin algorithms is that not all connections last an equal length of time. If you're round-robining requests back and forth between two servers, so that connections 1, 3, and 5 go to server A while 2, 4, and 6 go to server B, it seems like both servers are fielding an equal number of requests. But if 2 and 6 are brief connections that end quickly, suddenly server A is dealing with connections 1, 3, and 5 while server B only has connection 4, and continuing to round-robin more connections may exacerbate the disparity. This is where a Least Connections algorithm becomes useful. Instead of focusing on how many connections each server has received, a Least Connections algorithm keeps track of how many connections each server is maintaining right now, and always forwards new requests to the one handling the fewest at that moment. This way, uneven connection lengths have no adverse impact: a server that is tied up with long-lived connections won't be burdened with more while other servers are better placed to take the load. Weighted Least Connections algorithms also exist, applying the same weight labeling principle used in Weighted Round Robin to a Least Connections algorithm.
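
A sketch of the bookkeeping involved, with placeholder server names: the balancer increments a counter when it assigns a server and decrements it when the connection ends, always picking the lowest count.

```python
# Least-connections bookkeeping over placeholder servers.
active = {"server-a": 0, "server-b": 0, "server-c": 0}

def acquire():
    # Pick the server with the fewest open connections right now.
    server = min(active, key=active.get)
    active[server] += 1
    return server

def release(server):
    # Call when the connection finishes, however long or short it was.
    active[server] -= 1

first = acquire()    # server-a
second = acquire()   # server-b
release(first)       # server-a's connection ended quickly
print(acquire())     # server-a again, since it is now the least loaded
```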

IP Hash:

IP Hash algorithms focus on another key load balancing issue: persistence. Some transactions, like e-commerce, depend on data cached locally on the specific server you're connecting to in order to function at maximum efficiency. If you load items into a digital cart but then close the browser before checking out, it's much more convenient for those items to still be there when you return. If the load balancing algorithm isn't paying attention to that and blindly forwards you to a new server that doesn't have your data cached, you'll need to start the whole process again. IP Hash solves this problem by taking the client IP address and server IP address and combining them to create a unique hash key that it can regenerate and read going forward. This way, on subsequent sessions with the system, the client is always directed back to the server that already knows them, allowing them to continue where they left off without retracing their steps.
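
A simplified sketch with placeholder addresses: this version hashes only the client address rather than combining client and server addresses as described above, which is enough to show the property that matters, namely that the same client is always sent to the same backend.

```python
# An IP-hash selector over placeholder backends.
import hashlib

SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def pick_server(client_ip: str) -> str:
    # A stable hash (md5 here purely for its even distribution, not for
    # security) keyed on the client IP gives the same answer every visit.
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

print(pick_server("198.51.100.23"))  # always the same backend...
print(pick_server("198.51.100.23"))  # ...for the same client
```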

Least Pending Requests:

We've already noted that Layer 7 load balancing offers too many options to summarize them all. But in addition to the Layer 4 algorithms listed above, there is one Layer 7-specific algorithm that has grown quite popular. Least Pending Requests (LPR) monitors the requests still waiting on a response and steers new ones to the servers most able to take them. It isn't following a simple fixed rule like Round Robin or Least Connections, because it doesn't need to. Unlike L4 load balancers, it can read the data it's forwarding, enabling it to evaluate its servers using more complex criteria than simply asking which one has the fewest open connections. That allows for tricks like configuring a set of your servers to serve exclusively as image repositories and sending all image requests there, while other queries are shunted elsewhere (see the sketch below). It's a very useful tool, but also a complex one, and many services still find the simpler load balancing algorithms sufficient for their needs. If you're handling extremely high traffic, however, the level of detail an LPR algorithm can work with is a godsend.
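
The sketch below combines the two ideas in this paragraph, content-aware routing plus pending-request counts, with invented pool names; a real L7 balancer would track far more state than this.

```python
# Content-aware dispatch plus pending-request counts (invented pool names).
IMAGE_POOL = {"img-1": 0, "img-2": 0}                # image repositories
GENERAL_POOL = {"app-1": 0, "app-2": 0, "app-3": 0}  # everything else

def dispatch(path):
    # Because this runs at Layer 7, the request itself can be inspected:
    # image requests go to the dedicated image servers...
    pool = IMAGE_POOL if path.startswith("/images/") else GENERAL_POOL
    # ...and within the chosen pool, the request goes to whichever
    # backend has the fewest requests still pending.
    backend = min(pool, key=pool.get)
    pool[backend] += 1
    return backend

def completed(backend):
    # Call when a backend finishes (or fails) a request.
    pool = IMAGE_POOL if backend in IMAGE_POOL else GENERAL_POOL
    pool[backend] -= 1

print(dispatch("/images/banner.png"))  # img-1
print(dispatch("/checkout"))           # app-1
```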

Open Source

This variety in network layer positioning and algorithms has led to the development of all kinds of open-source load balancing products, each with a unique approach to distributing server traffic. We'll dive into a few of the big names and see what each brings to the table.

Maglev

Maglev is Google's creation, so it's understandably built to handle extreme traffic loads. In fact, Google uses Maglev to balance traffic across its own global network, and the same technology underpins the load balancing it offers to cloud customers. Maglev is a software-based load balancer, which was a significant departure from dedicated load balancing hardware when it entered production in 2008, but has since become the norm among most major companies. Maglev is a smart load balancing technology, able to handle data at both the Application Layer and the Network Layer. (For those with a passion for cybersecurity, Google also offers a Layer 4 load balancer built with a focus on user privacy.) Maglev specializes in a more sophisticated relative of the IP Hash algorithm, called consistent hashing, which increases flexibility and scalability by minimizing the traffic disruption that comes from adding or removing servers in a system. Maglev has strong redundancy features and provides the stability that a lot of companies look for in a load balancer.
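
To illustrate the property being described rather than Maglev itself, here is a generic consistent-hashing ring; Maglev's actual algorithm builds a fixed-size lookup table instead of a ring, but the payoff is the same: removing a server only remaps the clients that were on it.

```python
# A generic consistent-hashing ring (not Maglev's actual algorithm),
# shown only to illustrate minimal disruption when the pool changes.
import bisect
import hashlib

def h(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, servers, replicas=100):
        # Each server gets many points on the ring so load stays even.
        self.ring = sorted((h(f"{s}#{i}"), s) for s in servers for i in range(replicas))
        self.points = [point for point, _ in self.ring]

    def lookup(self, client_ip: str) -> str:
        # Walk clockwise from the client's hash to the next server point.
        idx = bisect.bisect(self.points, h(client_ip)) % len(self.ring)
        return self.ring[idx][1]

clients = [f"203.0.113.{i}" for i in range(100)]
before = HashRing(["s1", "s2", "s3", "s4"])
after = HashRing(["s1", "s2", "s3"])  # s4 removed from the pool

moved = sum(before.lookup(c) != after.lookup(c) for c in clients)
print(f"{moved} of {len(clients)} clients changed servers")  # typically about a quarter
```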

Katran

For Facebook, the creators of Katran, connection speed is the top priority. This is a key reason why Facebook hasn't followed many other companies in shifting over to the more complex L7 load balancing technologies. L7, for all its bells and whistles, still isn't quite as fast at forwarding connections as a classic L4, so Katran is built as a Layer 4 load balancer, albeit a highly sophisticated one. Katran is another example of load balancing as software. Less common among L4s, this gives Facebook levels of scalability and flexibility competitive with the L7 users. Katran's algorithm uses a custom version of Maglev's consistent hashing; because all of these projects have been released to the public as open source, many of them build on each other's designs. Since speed is so integral to Katran, a lot of work has gone into pushing it beyond the already impressive forwarding speeds of basic L4s by incorporating an efficient kernel packet-processing framework called XDP (eXpress Data Path). Katran lacks the advanced customization of its L7 competitors, but it sure is fast.

Traefik

Traefik is a more specialized load balancer, built with a focus on microservices. It uses its Layer 7 architecture to provide moment-to-moment smart load balancing. Traefik's biggest selling point is its dynamic self-configuration: the load balancer manages itself in real time, drastically reducing the need for human maintenance and oversight. It's not the fastest at forwarding data, and its installation process can be a bit involved, but what it provides is detail-oriented smart load balancing at the container level. Traefik is also built with a focus on customization, with an array of optional settings that let you tune timeouts, certificate verification, and backend server routing. It isn't really built with the same goals as massive-traffic load balancers like Katran and Maglev. If Maglev is a chainsaw, Traefik is more of an X-Acto knife, and a very clever one at that.

HAProxy

HAProxy is the Wikipedia of load balancers: a completely free, open-source Layer 7 software project that subsists on community support. They also have a commercial offering called HAProxy ALOHA, which adds a few extra security features and integrated Layer 4 load balancing capabilities. That said, the open-source version is still an impressive piece of technology that is constantly being improved upon, and it provides stiff competition for the more commercial products. HAProxy is an across-the-board effective load balancer with a truly dazzling amount of community support. Most impressive of all is its track record on security. To compensate for Layer 7's security vulnerabilities, HAProxy has a singular focus on deflecting attacks. Its processing is difficult to predict from the outside, which turns any attempt to exploit a bug into a painful process, and its regex-based header controls block dangerous requests and prevent information leaks. Put together, HAProxy is able to boast 13 years of zero intrusions. If you prize information security above all else, HAProxy is about as secure as it gets.

Azion Layer 7 Load Balancer

We've developed our own load balancer to meet our unique needs of balancing serverless traffic at the network edge. Azion's is a Layer 7 load balancer, equipped with a full suite of customizable balancing algorithms, including Weighted Round Robin, Weighted Least Connections, and IP Hash. We don't currently offer Layer 4 capabilities, as our work at the edge requires a level of complexity and customization that calls for Application Layer technologies. The high-speed connections that edge computing makes possible compensate for the speed issues that can hamper some L7 products, enabling the Advanced Load Balancer to perform fast, content-aware balancing of requests. As always with our tools, we don't want users to lose steam in that all-important setup phase, so we've built the Advanced Load Balancer with a focus on user-friendliness, making it simple to install and configure.

Summary

This is only the tip of the load balancing iceberg. There are many other excellent load balancing tools out there, each with its own special areas of focus. Load balancing is a simple yet essential part of every network process, and if you want your request traffic to go from good to great, it's vital that you find the load balancer that best suits your needs. If your network depends on a swarm of smoothly coordinated microservices, you may want to check out Traefik. If you want your network to be a digital Fort Knox, then HAProxy could be a good fit. And if you're taking your network serverless and want a user-friendly tool designed specifically for that environment, or just can't decide between smarts and speed and are looking for the best of both worlds, come try the Azion Advanced Load Balancer and let us know what you think.