Why Industrial Operators Need 5G URLLC and How They Can Get There

October 6, 2020 Perry Cohen

The 3GPP’s ultra-reliable low-latency communications (URLLC) will deliver sub-1-millisecond latencies and six-nines (99.9999%) reliability. That’s good enough to “cut the cord” in critical automation, manufacturing, transportation, and other applications that have relied on costly, failure-prone wiring for generations.

With those tethers removed, a new genus of wirelessly-enabled use cases becomes possible, from collaborative robots (co-bots) to fully autonomous vehicles.

However, a recent study titled “5G Edge Cloud Infrastructure: For vRAN, Industrial, and Automotive Applications” reports that 58% of respondents in these industries want to see 5G proven before jumping on the wireless bandwagon.

You can’t blame operators in critical industries like automation, manufacturing, and transportation for being careful in their transition to a new networking technology. With 5G there are still more questions than answers, particularly concerning how these networks will be implemented in industrial environments that have relied on private LTE deployments for years.

“The fundamental difference between a 5G architecture and LTE is that 5G is being enabled as a virtualized edge technology,” said Paul Miller, CTO at Wind River (Figure 1). “With LTE, typically you're going to have a monolithic appliance deployed to host that radio frequency entity at the endpoint. That means you cannot place third-party applications on it.

“In critical use cases, you need a real-time connection or near-real-time connection between those entities,” he continued. “Traversing back to the core of the network would bring in too much latency. These scenarios imply that the application that's managing those devices would need to be deployed in the edge node.

“So, if you need URLLC for an application such as vehicle-to-vehicle accident avoidance or autonomous machinery within factories and warehouses, an evolution to 5G is required.”

Figure 1. 5G network architectures distribute the core network at the edge (Source: Netmanias).

The Evolution & Advantages of 5G Network Slicing

Advances are needed on several technical fronts to realize the vision of 5G networks in both critical and non-critical domains. These include:

  • Dynamic spectrum sharing (DSS) allows operators with large 4G investments to transition their infrastructure to 5G much more efficiently by sharing existing spectrum across 4G and 5G systems.
  • mmWave investments and the publication of 3GPP Release 16 have made mobility a reality in the 5G standard, and have prompted the development of next-generation user equipment (UE) that can capitalize on the data rates this bandwidth makes possible.
  • Stand-alone (SA) operation permits next-generation use cases in which the 5G core network and radio infrastructure are deployed completely independently of 4G LTE.
  • O-RAN looks to extend virtualization into the 5G radio access network (RAN) on top of open standards, driving flexibility and cost efficiencies in RAN technology via economies of scale.

The biggest architectural difference in 5G networks is that the core network moves from a centralized location to distributed points at the edge. Network functions virtualization (NFV) and software-defined networking (SDN) technologies make it possible to accomplish this using standard networking hardware, which means that orchestration and management applications can run on or near the endpoints that rely on them.

As a result, latencies associated with communicating back to a centralized core network can be reduced to the sub-millisecond latencies of URLLC. This topology also means that network services and resources can be partitioned into “slices” that are reserved for specific use cases. These slices can then provide domain-specific operators with insight into application and network performance for continuous improvement.
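To see why the core has to move, consider propagation delay alone. The following sketch compares round-trip fiber propagation times at different core distances; the distances and the 1 ms budget are illustrative assumptions for this article, not measured figures.

```python
# Back-of-the-envelope latency budget: why URLLC pushes the core to the edge.
# Distances and the 1 ms budget are illustrative assumptions.

FIBER_SPEED_KM_PER_MS = 200.0  # light in fiber covers roughly 200 km per ms

def round_trip_ms(distance_km: float) -> float:
    """Propagation delay alone (no queuing or processing) for one round trip."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

URLLC_BUDGET_MS = 1.0

for label, km in [("centralized core, 400 km away", 400),
                  ("metro data center, 40 km away", 40),
                  ("on-prem edge node, 1 km away", 1)]:
    rtt = round_trip_ms(km)
    verdict = "fits" if rtt < URLLC_BUDGET_MS else "blows"
    print(f"{label}: {rtt:.3f} ms round trip -> {verdict} the 1 ms URLLC budget")
```

Even before queuing, processing, and radio scheduling are counted, a core hundreds of kilometers away consumes several times the URLLC budget on propagation alone, while an on-prem edge node leaves nearly the entire budget for the rest of the stack.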

“Network slicing is the idea of taking a portion of the network from the core to the edge and dedicating it to certain applications, such as industrial applications,” Miller says. “It's only available in 5G. The industrial sector would be a consumer of a network slice.

“A huge amount of analytics and data will be sourced from that new paradigm,” he continues. “With the ability to monitor these ultra-real-time applications, or to monitor latency across each step as traffic travels the network, comes a whole host of new analytics data that can help people better operationalize and maintain these environments.”

He continues, “You can look not only at the infrastructure, but also the applications contributing data in the edge devices themselves. By bringing in artificial intelligence algorithms and running them on an analytics platform, you have the ability to build new applications that do things like predictive outage prevention.”
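Conceptually, a slice is a reserved set of network resources with a service-level agreement attached. The Slice/Service Type (SST) values below (1 = eMBB, 2 = URLLC, 3 = MIoT) are standardized in 3GPP TS 23.501; the slice table and the selection logic, however, are illustrative assumptions, not a real 5G core API.

```python
# Toy model of 5G network-slice selection. SST values follow 3GPP TS 23.501;
# the SLA numbers and pick_slice() logic are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Slice:
    sst: int             # Slice/Service Type (3GPP-standardized values)
    name: str
    max_latency_ms: float
    reliability: float   # target probability of successful delivery

SLICES = [
    Slice(sst=1, name="eMBB",  max_latency_ms=50.0,  reliability=0.999),
    Slice(sst=2, name="URLLC", max_latency_ms=1.0,   reliability=0.999999),
    Slice(sst=3, name="MIoT",  max_latency_ms=500.0, reliability=0.99),
]

def pick_slice(needed_latency_ms: float, needed_reliability: float) -> Slice:
    """Return the least demanding slice that still meets the requirements."""
    candidates = [s for s in SLICES
                  if s.max_latency_ms <= needed_latency_ms
                  and s.reliability >= needed_reliability]
    if not candidates:
        raise ValueError("no slice satisfies the requested SLA")
    return max(candidates, key=lambda s: s.max_latency_ms)

# A factory co-bot needing 1 ms and six-nines lands on the URLLC slice:
print(pick_slice(1.0, 0.999999).name)  # URLLC
```

The selection rule mirrors the idea in Miller’s description: an industrial consumer states its requirements and is matched to a dedicated slice, rather than sharing best-effort capacity with everyone else.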

The Cold Hard(ware) Facts of Transitioning to 5G

Of course, installing or replacing physical networking equipment is a costly and time-consuming endeavor, particularly in industrial settings that could suffer from lost productivity, revenue, or safety during system downtime or exhibit an inherent resistance to updating legacy equipment.

Moreover, industry experts don’t expect chipsets and infrastructure that support all of the features of 3GPP Release 16 – the portion of the 5G standard that defines non-public network (NPN) features – to be robust or ubiquitous for at least another year.

A transitionary step is required.

“I don't think that it's such a rip-and-replace approach,” said Daniel Quant, MultiTech’s vice president of strategic development. “People are going to start off with 4G, dumb it down by centralizing it and wrapping it up in software that makes it really simple to use, and then use hyperscalers or Kubernetes or Docker containers to start running 5G workloads on edge computers.”

According to Quant, transitionary edge computing equipment capable of running 5G workloads is already available in the form of devices like AWS Snowcone (Video 1).

Video 1. AWS Snowcone is an edge computing, storage, and data transfer device with the resources needed to support 5G network deployments (Source: AWS).

“You've got processing power, you can put your core network on this, you can connect base stations to this, you can put it in your house, you can put it in a commercial building, you can put it in a mine or a factory,” Quant explains. “And what you've created is a completely standalone network deployment. This is exactly what they've created the Snowcone to do: to be able to deploy your own CBRS, completely on-prem network.

“That’s how a lot of these ultra-reliable low latency features have been manifested.”

Snowcone is a small, 4.5-lb rugged cube outfitted with dual processors, 4 GB of memory, 8 TB of storage, a trusted platform module (TPM) security chip, and Wi-Fi or wired 10 GbE network access, and it can be powered by a USB-C cable or battery (Figure 2). In other words, an operator could drop a cube onto the factory floor, connect it to their local network, and start hosting applications locally in an “edge cloud.”

Figure 2. AWS Snowcone is capable of running 5G core network services at the far edge to deliver URLLC performance in industrial environments (Source: AWS).

These Snowcone-based “edge clouds” can plug directly into industrial-grade on-prem “base stations” like the MultiTech MultiConnect rCell 600 series to bring an entire core network to the factory floor (Figure 3). As a result, data can be shared from one side of the factory floor to the other with URLLC speed and determinism without ever touching traditional elements of a core network or even leaving the factory environment.

Figure 3. The MultiConnect rCell 600 is a CBRS Cat 12 cellular router that can be easily configured as a private LTE broadband wireless and Ethernet router, and supports up to 128 concurrent Wi-Fi connections (Source: MultiTech).

And Amazon isn’t the only company creating such devices. Quant notes that MultiTech is a certified supplier of CBRS/private LTE infrastructure for Microsoft’s Azure Edge ecosystem.

Proving Out the Paradigm Shift with 5G Testing

The shutdown of legacy cellular networks, paired with the ultra-reliability, low latency, and lower cost per GB available from 5G, means that every organization that relies on cellular technology will undergo a network transition sooner or later. Network testing will be critical in all of these deployments, but in safety- and security-critical industries, it may be the most important phase of the development lifecycle.

The 3GPP and entities such as the O-RAN Alliance, 5G-ACIA, PTCRB, and GCF are defining test plans for RANs and UE. At the moment, these tests focus primarily on ensuring that air interface standards are implemented correctly in devices, gNodeB (gNB) radio nodes, and the like.

Of course, it’s still early, says Roger Nichols, 5G/6G Program Manager at Keysight Technologies.

“Now that Rel-16 is complete, the requests and demand for testing capability and validation of the RAN and user equipment is growing,” Nichols observed. “What will be more interesting is the end-to-end testing. How does the entire system work under normal loading and extreme circumstances? How will the system handle downtime of a gNB or some other network entity? How can one verify the system after new software loads?

“It is key to realize that latency and reliability are impacted by the entire network: Capacity/loading, UE performance, RAN performance, interference, network architecture, application-specific distributed computing, etc.,” Nichols continues. “All must be considered to implement a network that will meet the demands of mobile-edge computing and other capabilities unique to 5G.”

According to Nichols, systems that test the 5G air interface can be used to validate URLLC capabilities, but they must be able to address the fast state transitions in the air interface protocol. URLLC communication complicates this because its self-contained subframes and dynamic time-division duplex (TDD) duty cycles increase the demand on architectures and protocol stack performance.
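Six-nines reliability also raises a practical question for test engineers: how many transmissions must a test campaign run before it can say anything at that level? A rough “rule of three”-style estimate can be computed as below; this is an illustrative statistical calculation, not Keysight’s actual test methodology.

```python
# Estimating the test volume needed to demonstrate six-nines reliability.
# Illustrative calculation only (zero-failure binomial bound), not a
# vendor-specified test procedure.
import math

def trials_for_zero_failure_bound(max_failure_rate: float,
                                  confidence: float = 0.95) -> int:
    """Trials needed, with zero observed failures, to bound the failure rate
    below max_failure_rate at the given confidence:
    (1 - p)^n <= 1 - confidence  =>  n >= ln(1 - confidence) / ln(1 - p)."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - max_failure_rate))

six_nines = 1e-6  # 99.9999% reliability => failure rate <= 1 in a million
n = trials_for_zero_failure_bound(six_nines)
print(f"{n:,} failure-free transmissions needed")  # roughly 3 million
```

Roughly three million consecutive failure-free transmissions are needed just to bound a one-in-a-million failure rate at 95% confidence, which helps explain why emulating many UEs at scale, as LoadCore does, matters for URLLC validation.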

In order to test these devices, as well as how an entire network would operate under various loads, Keysight offers the LoadCore 5G core testing software (Figure 4). The solution is now being used to test SA 5G networks.

Figure 4. Keysight LoadCore is a comprehensive 5G testing software suite that simulates network performance from the core network, through the radio access network (RAN), and out to user equipment. It can scale to millions of devices (Source: Keysight Technologies).

“We must be in a position to test network performance in and beyond the RAN,” Nichols explains. “These demands require new thinking, since the system itself will have to meet some kind of service-level agreement (SLA) between the network operator and the client. Such testing will require higher level feature testing at the network and application level.

“Our user equipment emulation (UEE) systems have to emulate many UEs with varying demands on these URLLC capabilities,” he continues. “LoadCore is now functional to test SA networks, which are necessary for full 5G implementation of [the industrial IoT] ‘corner’ of the 5G triangle.”

Keysight has been an active participant in defining 3GPP, GCF, PTCRB, O-RAN, and 5G-ACIA UE and gNB test plans. Working to shape the 5G roadmap reassures customers that their investments in development infrastructure are not wasted during this period of rapid feature change and constant evolution in the standard itself.

5G: The Other 28%

Surprisingly, in the same study mentioned previously, some 28% of respondents in critical industries reported that they are already supporting 5G architectures. So, what is this 28% seeing that the 58% is not?

“The 28% will typically be the early adopters, the first market movers,” Miller said. “They see the technology’s value and gravitate toward testing and deploying it. The majority, however, want to see it proven. They are not necessarily against it, nor do they see something wrong with the technology; they just want validation that this is indeed a reliable and proven solution, and that it will truly produce the expected value.

“We expect that as the early adopters have success in this space, competitive pressures will emerge,” Miller continues. “The early adopters will experience lower operational costs, driving higher profit margins, more flexibility, and faster time to market. Then we will see the ‘show me’ crowd shift to execution as well.

“This is a typical, healthy new technology, early adopter paradigm.”

About the Author

Perry Cohen

Perry Cohen, associate editor for Embedded Computing Design, is responsible for web content editing and creation in addition to podcast production. He also assists with the publication’s social media efforts, which include strategic posting, follower engagement, and social media analysis. Before joining the ECD editorial team, Perry was published on both local and national news platforms, including KTAR.com (Phoenix), ArizonaSports.com (Phoenix), AZFamily.com, Cronkite News, and MLB/MiLB, among others. Perry received a BA in Journalism from the Walter Cronkite School of Journalism and Mass Communication at Arizona State University. He can be reached by email at <a href="mailto:perry.cohen@opensysmedia.com">perry.cohen@opensysmedia.com</a>. Follow Perry’s work and ECD content on his Twitter account @pcohen21.
