You may have heard the phrase “no latency, gigabit experience” as a shorthand for what 5G will deliver. However, 5G’s capabilities extend far beyond improving the performance you’ll get on your mobile phone. It will also populate smart cities with high-density, low-energy-consumption IoTs that can improve public safety and conserve resources, and it will enable mission-critical uses like remote surgery, where a high-speed, uninterrupted network connection is literally a matter of life and death. But for that to happen, 5G networks, devices, and applications will need to meet strict performance requirements for density, availability, latency, and efficiency. Today’s blog post will examine 5G standards: what they are, how they’re developed, and how Multi-Access Edge Computing will help applications and devices achieve them.
Standards: What They Are and Why We Need Them
As ideas for new technology emerge, two important things must happen to translate those ideas into reality. First, we must define what the technology will do by determining the performance benchmarks it will need to meet. Then, engineers must determine the underlying protocols and architectures that will achieve those benchmarks. In other words, they need a definition (or standard) for the technology and a blueprint (or specification) for achieving that standard.
For the past few years, organizations and members from all over the world have converged to create the standards and specifications needed to make 5G a reality. The standards are set by the International Telecommunication Union (ITU), a UN agency that allocates global radio spectrum and ensures that networks around the world can interconnect. The specifications are created by the 3rd Generation Partnership Project (3GPP), which was formed to create the protocols for 3G networks and has been integral to the development of wireless technology ever since.
How 5G Standards are Created
To develop its specifications, 3GPP has divided its work into specialized areas. Sixteen working groups are organized under three technical specification groups: Radio Access Network (RAN); Service and System Aspects, responsible for the overall architecture, service requirements, and coordination of the project; and Core Network and Terminals. While working on various features, the groups get input from market partners such as GSMA, a global organization of mobile operators, and CTIA, which represents the U.S. wireless communications industry.
When the service requirements, underlying architecture, and protocol for implementing that architecture have been defined for a specific feature, such as New Radio (NR), 3GPP issues a release containing that feature. NR was included in 3GPP’s Release 15, which completed Phase 1 of its work on 5G, focused primarily on enhanced mobile broadband: essentially, enabling better-performing smartphones and personal devices. Release 16, the most recent, was issued in July 2020 and paves the way for industrial and enterprise uses that rely on massive numbers of interconnected IoTs and ultra-reliable, low-latency communication. Once 3GPP issues a release, the specifications are transposed into deliverables by 3GPP’s organizational partners (standards bodies like ETSI) and submitted for approval by the ITU.
IMT-2020: The Global Standard for 5G
In 2017, the ITU laid out the global benchmarks for commercial 5G deployments in IMT-2020. These benchmarks were based on the needs of various 5G use cases, spanning three broad categories: personal mobile use (enhanced mobile broadband, or eMBB); the large-scale deployment of energy-efficient IoTs (massive machine-type communication, or mMTC); and critical IoTs, which require ultra-reliable, low-latency communication (URLLC).
In order to address the needs of each of these categories, IMT-2020 breaks down minimum performance benchmarks into a variety of technical requirements. These include data rates, network energy efficiency, spectral efficiency, capacity, latency, connection density, mobility, reliability, and bandwidth. Needless to say, there’s a lot to unpack here, so for the purposes of this post, we’ll just give a few highlights. If you’d like all the nitty-gritty details, however, the ITU publishes the full list of IMT-2020 minimum requirements.
- Bandwidth: at least 100 MHz, or up to 1 GHz in higher frequency bands
- Connection density: 1,000,000 devices per square kilometer
- Peak data rate: downlink 20 Gbit/s, uplink 10 Gbit/s
- User plane latency: 4 ms for eMBB, 1 ms for URLLC
- Mobility: up to 500 km/h for high-speed vehicles
- User experienced data rate: downlink 100 Mbit/s, uplink 50 Mbit/s
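To make a few of these figures concrete, here’s a minimal sketch in Python that encodes a subset of the IMT-2020 minimums and checks a deployment against them. The `lte_like` numbers are invented for illustration, roughly evoking a strong 4G network, and are not measurements of any real deployment.

```python
# A subset of IMT-2020 minimum requirements, as listed above.
IMT_2020_MINIMUMS = {
    "peak_downlink_gbps": 20,              # peak data rate, downlink
    "peak_uplink_gbps": 10,                # peak data rate, uplink
    "connection_density_per_km2": 1_000_000,
    "urllc_latency_ms": 1,                 # user plane latency for URLLC
}

def failed_metrics(deployment: dict) -> list:
    """Return the names of any IMT-2020 metrics the deployment misses."""
    failures = []
    if deployment["peak_downlink_gbps"] < IMT_2020_MINIMUMS["peak_downlink_gbps"]:
        failures.append("peak downlink")
    if deployment["peak_uplink_gbps"] < IMT_2020_MINIMUMS["peak_uplink_gbps"]:
        failures.append("peak uplink")
    if deployment["connection_density_per_km2"] < IMT_2020_MINIMUMS["connection_density_per_km2"]:
        failures.append("connection density")
    # Latency is a ceiling, not a floor: the deployment must stay at or below it.
    if deployment["urllc_latency_ms"] > IMT_2020_MINIMUMS["urllc_latency_ms"]:
        failures.append("URLLC latency")
    return failures

# Hypothetical 4G-class figures: fast, but well short of the 5G targets.
lte_like = {
    "peak_downlink_gbps": 1,
    "peak_uplink_gbps": 0.5,
    "connection_density_per_km2": 100_000,
    "urllc_latency_ms": 30,
}
print(failed_metrics(lte_like))
```

Even this generous 4G-style profile misses every one of the four targets checked, which is a compact way to see how large a jump IMT-2020 demands.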
Because these standards are so ambitious, achieving them on a massive scale will require some assistance, and that’s where Multi-Access Edge Computing (MEC) comes into play. By moving processing and data closer to end users, MEC improves network efficiency, increases network capacity (allowing for faster data transmission and more connected devices), and significantly decreases latency. These improvements help 5G achieve several different performance benchmarks, but today we’re going to examine just one: latency.
MEC: Helping 5G Break the Latency Barrier
Regardless of a network’s capabilities, there are physical limits on how fast data can travel across it. For example, radio waves can travel no faster than the speed of light, about 300 km/ms. In addition, latency accumulates as data travels from devices through radio towers, into the core network, and finally on to the cloud and data centers. Each leg of the journey adds a few more milliseconds to the trip, creating a huge barrier to 5G’s ultra-low latency standards.
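As a back-of-the-envelope illustration, the sketch below computes round-trip propagation delay alone, before any processing or queuing delay is added. The two distances are invented for the example: one stands in for a distant cloud region, the other for an edge node near the radio tower.

```python
SPEED_OF_LIGHT_KM_PER_MS = 300  # approximate propagation speed of radio/light

def round_trip_propagation_ms(one_way_distance_km: float) -> float:
    """Round-trip propagation delay for a given one-way distance."""
    return 2 * one_way_distance_km / SPEED_OF_LIGHT_KM_PER_MS

far_data_center_km = 1500   # hypothetical distant cloud region
nearby_edge_node_km = 15    # hypothetical edge node close to the tower

print(round_trip_propagation_ms(far_data_center_km))   # 10.0 ms
print(round_trip_propagation_ms(nearby_edge_node_km))  # 0.1 ms
```

Even before a single router or server touches the packet, the 1,500 km path consumes ten times the entire 1 ms URLLC latency budget, while the 15 km path uses a tenth of it. Physics alone dictates that the lowest latency targets can only be met by shortening the path.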
Think of 5G as a super-fast highway for data transmission, with ample lanes and almost no speed limit. On a highway like that, you could drive very fast and experience very little congestion, no matter how many people are sharing the road. But no matter how fast you drive, travel will never be instantaneous, especially if you’re going a long distance and need to make a lot of pit stops. To bring your travel time as close to zero as possible, you need to be incredibly close to where you’re going.
This is the basic principle of MEC: decreasing the time it takes for data to be processed by shortening the distance that data has to travel. Rather than transmitting data from radio towers to faraway data centers, MEC allows data to be processed and stored in servers at nearby edge nodes, or even on the base station itself, shortening the trip and reducing the number of pit stops. The result is vastly improved end-to-end performance, cutting round-trip latency on today’s LTE networks from about 70 ms to 20-30 ms.
MEC was initially conceived as a self-contained system to be added on to 4G LTE networks in order to improve performance capabilities like latency or throughput. However, because MEC is necessary to fulfill the latency requirements of 5G applications and devices, the 5G system architecture specified by 3GPP is more fully integrated with MEC and designed to provide flexible support for different MEC deployments.
Azion: Supporting 5G through MEC
Clearly, MEC is incredibly important to delivering on 5G standards, so much so that the 5G specifications have built-in support for it. Furthermore, these standards were designed to enable more interconnected devices than ever before. In fact, a new feature of the latest 3GPP release is the sidelink, which lets 5G-connected vehicles communicate directly with each other rather than going through a base station. Although it was developed for V2X (vehicle-to-everything) communication, it could theoretically connect any kind of 5G devices to each other, enabling industrial mMTC use cases like factory robots. To facilitate this kind of interconnection, applications and devices will need to be designed with interoperability in mind.
Azion facilitates 5G deployment by helping service providers implement MEC and by providing a flexible platform for developers to build, scale, and run edge-native applications. In addition, our core technology, Azion Cells, is designed to be as interoperable as possible, allowing edge-native applications to be deployed on any hardware, on any infrastructure, and in any language. In our next blog post, we’ll take a look at the building blocks of edge computing through a discussion of serverless vs. container-based architectures and further discuss Azion Cells in that context.