Have you ever thought about how the Internet works underneath all the interconnected hardware? It’s a complicated process involving not just hardware, but a menagerie of systems and software.
Let’s focus on two actors that communicate with each other on the Internet: the client and the server. At the top level of this exchange, a client (i.e., a user’s device) makes a request for content in the form of data, and the server is responsible for delivering that content back to the client.
Each request goes a long way: it is sent by your device, reaches the servers, and returns with a response. In this post, another in our blog series on the fundamentals of edge computing, we’ll explain this journey in more detail. We’ll also tell you how Azion’s Edge Computing Platform makes this process faster and more efficient than other solutions.
How a Request Is Resolved on the Internet: Step by Step
To build a network, it’s necessary to connect two or more computing devices. These devices can connect to a central device, such as a server, or to an intermediary system, such as a router. Like a web, a network can also connect to other networks to share information between them. Seen this way, the Internet is the largest network of all, one in which smaller networks, service providers and devices communicate with each other to meet all of today’s connectivity needs.
In order to create an interconnected network at a global scale, and to ensure that it works efficiently, it’s necessary to define models, concepts and protocols that standardize Internet services. These offer a “common language” to all the elements involved in this process, so they can understand each other and act appropriately.
One of these efforts is the OSI Model (Open Systems Interconnection), which divides the operation of the Internet into 7 layers:

- Layer 7: Application
- Layer 6: Presentation
- Layer 5: Session
- Layer 4: Transport
- Layer 3: Network
- Layer 2: Data Link
- Layer 1: Physical

According to this model, the interaction between users and computers occurs in layer 7 (also called the application layer). With this idea in mind, let’s start to trace the path our requests travel.
The Journey in a Simplified Way
To summarize, when you make a request, this is, in the most simplified way possible, the path it follows:
The Starting Point
The user sends a request. It can be anything from typing a query in a search engine, sending an email or making a bank transfer, to playing the latest episode of their favorite show.
Next Stop: DNS and HTTP Protocols
In the same way that you need to specify the recipient’s information for a letter to reach its destination, a request must contain all the data needed for it to begin its journey, be correctly resolved and obtain an adequate response on the Internet.
Two protocols are involved in this phase. One of them is DNS (Domain Name System). Machines are more comfortable with numbers, so every website, server or network is identified by an IP address made up of a series of numbers. Currently, IP addresses come in two formats, depending on the version the system is using:
- IPv4: the first version of IP addressing. These addresses are 32-bit numbers, written as four numbers from 0 to 255 separated by periods. For example: 203.0.113.42.
- IPv6: the latest version uses a 128-bit number, composed of eight 16-bit sections, separated by colons (:) and written in hexadecimal notation. For example: 2001:0db8:85a3:0000:0000:8a2e:0370:7334.
However, it would be really hard for users to remember those numbers. That’s why domains are used, with easy-to-remember names and structures, such as www.google.com or www.azion.com. Put simply, the DNS protocol “translates” the domain you typed into a series of numbers that machines can understand, and your request is then routed to the proper destination.
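To make this idea concrete, here is a minimal Python sketch of what a DNS lookup does: it turns a human-readable hostname into one or more IP addresses, which we can then classify as IPv4 or IPv6. It uses the system resolver via the standard library; `localhost` is used only because it resolves everywhere, and any resolvable name would work.

```python
import ipaddress
import socket

def resolve(hostname: str) -> list[str]:
    """Ask the system resolver (which speaks DNS) for the IP addresses of a hostname."""
    results = socket.getaddrinfo(hostname, None)
    # Each result is (family, type, proto, canonname, sockaddr); the address
    # string is the first element of sockaddr. Deduplicate, keeping order.
    addresses = []
    for *_, sockaddr in results:
        address = sockaddr[0]
        if address not in addresses:
            addresses.append(address)
    return addresses

# Every returned string is a valid IPv4 or IPv6 address.
for address in resolve("localhost"):
    parsed = ipaddress.ip_address(address)
    print(address, "is IPv" + str(parsed.version))
```

Running `resolve("www.azion.com")` on a connected machine would return the public addresses the browser actually connects to after this “translation” step.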
The HTTP protocol (Hypertext Transfer Protocol) also plays a fundamental role here. Its function is to enable the communication itself, using predefined rules and standards to carry out this exchange of information smoothly.
The first thing is to know “where you want to go”, or what you want to search for, and then type the URL (https://www.azion.com/, for example) into the browser bar. To facilitate communication between both parties, HTTP follows a request-response logic:
- After you type the URL (which DNS converts to an IP address so it can be understood by the machine), the request is sent with a header that contains information such as:
- the method, which indicates what you want to do: obtain, send, update or delete information;
- the path the request must follow to reach a page or complete a specific task;
- the version of the protocol being used;
- the request headers and body, which contain additional information, such as your location or language preference, used to show you specific or customized content.
- When the server has resolved your request, it sends back a response with a status code. For example, 200 shows the process was successful and you can now see the requested content; 401 means the request wasn’t authorized by the server; the famous 404 appears when the requested content doesn’t exist; and 500 indicates an internal server error.
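As a rough illustration of this request-response logic, the sketch below builds the text of a minimal HTTP/1.1 request by hand and maps the status codes mentioned above to their standard reason phrases. The host name is just an example; this models the wire format rather than opening a real connection.

```python
# A hand-built HTTP/1.1 request: method, path and protocol version on the
# first line, then headers, then a blank line that ends the header section.
request = (
    "GET / HTTP/1.1\r\n"
    "Host: www.azion.com\r\n"
    "Accept-Language: en-US\r\n"
    "\r\n"
)

# A few of the response status codes a server may answer with.
STATUS_REASONS = {
    200: "OK",
    401: "Unauthorized",
    404: "Not Found",
    500: "Internal Server Error",
}

def describe_status(code: int) -> str:
    return f"{code} {STATUS_REASONS.get(code, 'Unknown')}"

print(request)
print(describe_status(200))  # → 200 OK
```

A real browser builds exactly this kind of text (plus many more headers) for every page you visit, and parses the status line of the server’s reply to decide what to show you.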
But before the request reaches the server, it needs to complete the last step…
Where the Connections Happen: BGP and Anycast
Each of the smaller networks that make up the Internet can be considered an Autonomous System (AS). In order for them to communicate with each other (either internally, between ASs of the same provider, or with ASs in external networks), the Border Gateway Protocol (BGP) was developed. This protocol helps define the best path a request should take to reach the appropriate server, in accordance with the policies and rules configured by administrators for the exchange of information between ASs.
When you send a request, your device is connected to a network, and BGP dynamically decides (in other words, it can make decisions while the request is being routed) the paths the request takes, much like a GPS. With the help of routers, which are responsible for transporting the data packets carrying the request, it creates connections between the ASs that can communicate with each other until your request reaches the server holding the requested information.
To accomplish this task, different addressing methods can be used to route the request:
- unicast: there is only one route and one destination;
- multicast: the request is delivered to several destinations at once, but they form a limited, specific group;
- anycast: also routes the request toward several possible destinations, but uses additional criteria to choose the most efficient route to a server that is available, healthy and closest to where the request was generated.
The content of a website can be stored on several servers and, through the anycast method, a path is chosen that leads to the server that has the content and is closest to the user. This way, the request is resolved quickly, as the data travels a shorter distance.
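The selection logic behind anycast-style routing can be sketched as follows. This is a simplified, hypothetical model (the server names and distances are invented for illustration): among the servers holding the content, pick the healthy one closest to the user.

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    distance_km: int   # rough distance from the user (illustrative)
    healthy: bool      # can the server currently serve traffic?

def pick_anycast_destination(servers: list[Server]) -> Server:
    """Choose the closest healthy server, mimicking anycast route selection."""
    candidates = [s for s in servers if s.healthy]
    if not candidates:
        raise RuntimeError("no healthy server available")
    return min(candidates, key=lambda s: s.distance_km)

servers = [
    Server("sao-paulo", 30, healthy=False),   # closest, but down
    Server("rio", 350, healthy=True),
    Server("miami", 6600, healthy=True),
]
print(pick_anycast_destination(servers).name)  # → rio
```

Note how the unhealthy server is skipped even though it is nearest: availability and health are weighed alongside distance, exactly the trade-off described above.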
This is how the request finally reaches the server, which resolves it and sends you a response with an HTTP status code, following the same path in the opposite direction. And all of this happens in milliseconds!
Closer to the User and Faster
In the early stages of the Internet, requests had to travel to a central server where the information was stored. This implied delays in receiving responses, a tedious process with almost no communication between networks. Over the years, different solutions emerged to improve interconnectivity, along with new, modern equipment and devices. The use of data centers spread and, later, cloud computing rose, making it possible to accelerate and virtualize many tasks.
However, the main disadvantage of this computing model is centralization, which turns these facilities into a single point of failure: if they are affected by an issue, attack or failure, the connection is lost and the content can’t be accessed. In addition, even though they can store and process huge amounts of data, they are located far away from end users. This distance generates high latency, as well as high consumption of resources and bandwidth to respond to requests.
The introduction of content delivery networks (CDNs) at the end of the 1990s was a big step toward correcting some of those problems. A CDN is made up of a network of points of presence (PoPs) distributed across different locations and strategically placed closer to users than in the traditional models. With nodes or servers closer to the users, requests travel shorter distances, so responses reach users faster. But can you imagine something faster and more efficient still? That is the promise of edge computing.
The principle of edge computing is to distribute a large amount of computing resources closer to the users in order to serve their requests with lower latency, since the data completes shorter trips to be processed. To this end, the edge computing infrastructure is highly decentralized, made up of widely geographically distributed edge locations. With this design, the network can quickly serve users, automatically routing their requests to the closest node, or to a more efficient and healthier one in case of a peak in demand or if a server is unavailable. This way, requests no longer need to go to the cloud or origin infrastructure. In addition, real-time data processing becomes possible, an essential feature for time-sensitive and mission-critical applications.
How Is a Request Resolved the Azion Way?
One of the focuses of Azion’s Edge Computing Platform is optimizing content delivery, so your users can stay connected to your applications and websites and enjoy everything you offer quickly, safely and without interruptions, with an excellent experience. To do this, we offer our clients a suite of products and tools that allow them to deploy, control, monitor, scale and automate resources at the edge, in real time.
Decentralized and Distributed Infrastructure
If you order your favorite food and the restaurant that prepares it is on the other side of the city, it’ll surely take a long time to reach your house, and by then you’ll already be hungry and upset. The same happens on the Internet when a user’s request is answered by a faraway server: the user must wait for a response, causing friction and even abandonment of the website. Nobody wants to wait, especially today, when we’re used to completing tasks online in the blink of an eye.
Now imagine the restaurant opens several branches, strategically distributed around the city. Your favorite dish is now two blocks away, so when you order it from the branch in your area, it arrives at your door really fast. One of the main advantages of edge computing is precisely this: a highly distributed and decentralized infrastructure that makes it possible to be closer to the users. This way, requests can be served quickly, with low latency and less bandwidth consumption.
Azion’s Edge Network has more than 65 edge locations distributed around the world. No matter where your users are, with our network you can deliver content efficiently, because requests are always served by the closest available and healthiest edge location.
At Azion, we also use Intelligent DNS, taking advantage of our global infrastructure and using anycast to establish the most efficient paths and route each request to the best possible destination. This guarantees that when a request sent by one of our clients’ users begins its journey and reaches our network, the edge location closest to that user will take care of it.
Azion’s Load Balancer
The journey of a request on the Internet is not linear. A request can take shortcuts or zigzag across multiple sub-networks. We’ve already seen that BGP, together with anycast, is responsible for creating different paths so requests are served by the most suitable server. However, a number of factors can affect this journey, as well as how quickly a request can be answered. Two common situations give headaches to the administrators of any network: usage peaks, and when a server or any other component of the network stops working.
For example, events like Black Friday, Cyber Monday or Christmas can attract an exponentially larger number of visitors to your applications or website. Peak requests can overload your server, which could slow the service, and any additional requests can fail because of overloaded systems. A scenario where unexpected traffic exceeds the available resources can be catastrophic for your brand’s reputation and revenue, leaving a bad taste for many customers. One way to deal with this situation and avoid any issues is to set up a load balancer, which distributes workloads between different nodes so that no single server is overloaded.
Azion’s Load Balancer balances traffic across the edge locations closest to the users, while serving requests at the edge and preventing any traffic jam. To accomplish that, it offers multiple distribution algorithms that allow you to choose the best method for your servers. This way, your users don’t depend on a single server; their requests are distributed more intelligently throughout the network, without affecting the performance of the service.
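Two of the most common distribution algorithms can be sketched in a few lines. This is an illustrative model with invented node names, not Azion’s actual implementation: round-robin hands requests to each node in turn, while least-connections favors whichever node is least busy.

```python
import itertools

nodes = ["edge-a", "edge-b", "edge-c"]          # hypothetical edge nodes

# Round-robin: hand requests to each node in turn, cycling forever.
round_robin = itertools.cycle(nodes)
print([next(round_robin) for _ in range(5)])
# → ['edge-a', 'edge-b', 'edge-c', 'edge-a', 'edge-b']

# Least-connections: send each request to the node with the fewest
# active connections, then account for the new connection.
active = {"edge-a": 4, "edge-b": 1, "edge-c": 2}

def least_connections(active_counts: dict[str, int]) -> str:
    chosen = min(active_counts, key=active_counts.get)
    active_counts[chosen] += 1
    return chosen

print(least_connections(active))  # → edge-b
```

Round-robin is simplest when all nodes are equivalent; least-connections adapts better when some requests take much longer than others, since it routes around busy nodes automatically.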
Azion’s Edge Caching
Caching is another method that helps accelerate the journey of a request. It consists of storing copies of the content and resources your users access most frequently in easily accessible locations.
Having a copy of your content at hand prevents the request from traveling to the cloud or origin infrastructure, and avoids downloading the data every time the page or application is visited. Azion’s Edge Network also enables caching closer to the user, directly at the edge.
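The idea can be sketched as a simple time-to-live (TTL) cache: serve the stored copy while it is fresh, and go back to the origin only when the copy is missing or expired. This is an illustrative model, not Azion’s product code; `fetch_from_origin` is a stand-in for the trip to the origin infrastructure.

```python
import time

class TTLCache:
    """Serve cached copies while fresh; go to the origin only on a miss."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry timestamp)

    def get(self, key, fetch_from_origin):
        entry = self.store.get(key)
        now = time.monotonic()
        if entry and entry[1] > now:
            return entry[0]                     # cache hit: no trip to origin
        value = fetch_from_origin(key)          # cache miss: fetch and store
        self.store[key] = (value, now + self.ttl)
        return value

origin_calls = 0
def fetch_from_origin(key):
    global origin_calls
    origin_calls += 1                           # count trips to the origin
    return f"content for {key}"

cache = TTLCache(ttl_seconds=60)
cache.get("/home", fetch_from_origin)   # miss: goes to the origin
cache.get("/home", fetch_from_origin)   # hit: served from the cached copy
print(origin_calls)  # → 1
```

Two requests, one trip to the origin: that saved round trip is exactly the latency and bandwidth the edge cache eliminates for every subsequent visitor.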
Accelerate your Applications
We live in a hyper-connected world that moves faster and faster, and no user is willing to wait for a response from a brand; they won’t hesitate to switch to a competitor. Now that you understand how requests are answered on the Internet, you know it’s not an easy job. However, a partner like Azion will help you meet your users’ needs, with a wide range of tools and products that make this journey faster and more efficient. Learn more about our products or contact our team of experts and start creating the perfect service plan for your business.