How Will 5G Impact AI Processing at the Edge?


This market research report was originally published at Tractica's website. It is reprinted here with the permission of Tractica.

5G wireless networks promise gigabit bandwidth, millisecond latencies, and ultra-reliable, ultra-fast wireless connectivity. At the same time, we are seeing a shift in AI processing: the paradigm of centralized architectures, with the cloud/servers as the primary hubs for AI models (both training and inference), is giving way to decentralized architectures where some or all AI processing is performed on the edge device. Tractica's latest report, Artificial Intelligence for Edge Devices, estimates that the AI edge device compute opportunity will reach $51.6 billion by 2025. The primary drivers for this transition include data privacy, bandwidth, cost, latency, and security, each of which contributes to a varying degree depending on the AI application and the edge device category. This transition becomes a bit blurry with 5G entering the picture.

Narrowing the Gap between Cloud and On-Device Processing

On the face of it, 5G could make edge computing largely irrelevant: if the network adds almost no delay, performing computation in the cloud would feel nearly the same as processing on the device. By narrowing the gap between cloud and on-device processing, 5G would in many ways make centralized processing the preferred option, thanks to its higher processing capabilities and less restrictive power budgets. 5G could essentially make the entire debate around AI edge computing moot.

Rather than treating 5G as a damper, the mobile telecom world (and its constituent ecosystem) is positioning 5G as an enabler for AI edge processing. This ecosystem sees 5G as the underlying communication fabric that lets devices connect to each other and to edge servers, offloading some of their content and processing and improving latency. The view is that while 5G will provide improved bandwidth and low latencies, these benefits do not have to extend all the way back to the centralized server; they can instead extend to the edge of the network, which in telecom speak is the base station or edge router, just before the last mile that connects the device. 5G would essentially create a middle layer, also known as fog computing, where some or even the bulk of the processing could be done: mainly inference, but potentially training as well. The fog computing architecture does support the requirement for ultra-low latency and high bandwidth, as the millisecond latencies and gigabit speeds are measured from the base station to the device, although 5G also introduces a flatter core network architecture that will improve "base station-to-cloud" latency as well.
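To make the device/fog/cloud trade-off concrete, here is a minimal tier-selection sketch. It is purely illustrative: the tier names, latency figures, and the pick_tier helper are hypothetical assumptions for this example, not measured 5G numbers or any carrier's API.

```python
# Illustrative sketch only: the latency figures below are hypothetical
# placeholders, not measured 5G performance numbers.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    round_trip_ms: float   # network round trip from device to this tier
    inference_ms: float    # model execution time on this tier's hardware

def pick_tier(tiers, latency_budget_ms):
    """Return the first tier (nearest to the device first) whose total
    latency fits within the application's budget."""
    for tier in tiers:
        if tier.round_trip_ms + tier.inference_ms <= latency_budget_ms:
            return tier.name
    return "no tier meets the budget"

tiers = [
    Tier("device", round_trip_ms=0.0, inference_ms=30.0),       # slow chip, no network
    Tier("fog (base station)", round_trip_ms=5.0, inference_ms=8.0),
    Tier("cloud", round_trip_ms=40.0, inference_ms=5.0),         # fast chip, long haul
]

print(pick_tier(tiers, latency_budget_ms=20.0))  # -> "fog (base station)"
```

Under these made-up numbers, the fog tier wins precisely because the millisecond-scale hop to the base station is short enough to amortize, while the haul to the cloud is not; that is the essence of the telecom argument.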

Reinforcing the Massive IoT Network Dream

This architecture ties in well with the dense, small-cell network architecture that carriers and wireless vendors have been promoting since the days of 3G in the 2000s. Since the advent of 3G, small cells have been viewed as the answer to improved bandwidth density, better indoor coverage, and a hub for wearables and Internet of Things (IoT) devices. However, the IoT dream of connecting multitudes of sensors, cameras, and devices never materialized with 3G or 4G; most of the demand came instead from consumer mobile broadband services, such as streaming video and music. As we enter the next transition phase for wireless networks, the dream of a massive IoT network has been revitalized, with AI edge processing becoming a convenient pillar for reinforcing it.

The problem with this fog computing approach is that it adds cost, both in infrastructure and in hardware processing elements. It also adds another services layer controlled by telecom carriers, which until now have been outside the AI cloud/data center ecosystem. For example, deploying an autonomous fleet of cars in a geofenced portion of a city under the fog computing approach would require a network of 5G small cells with AI processing capabilities built into the boxes to handle the additional AI workload. From a carrier perspective, AI brings new business models and revenue opportunities, but from the fleet operator's perspective, it adds another member to a value chain that was previously limited to car manufacturers and possibly the AI data center providers. In many ways, fog computing, especially in the context of AI at the edge, feels like an additional layer of complexity and cost that has not been proven from an engineering perspective. In fact, a much flatter network hierarchy, one consisting of just the server and the end device, is a much simpler architecture to work with. In the flatter architecture, AI processing is performed on the car itself, with only small model updates going back to the cloud servers, enabling federated learning across the fleet.
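As a rough sketch of the update flow in this flatter architecture, the snippet below shows federated averaging in the style of FedAvg: each car trains locally and sends back only a small weight update, which the server aggregates in proportion to how much data each car contributed. The weight vectors and sample counts are invented for illustration.

```python
# Minimal federated-averaging sketch (assumes NumPy); the updates and
# sample counts are made-up illustrations, not real fleet data.
import numpy as np

def federated_average(client_updates, client_sample_counts):
    """Weighted average of per-car model updates, FedAvg-style:
    cars train locally and send only weight updates to the server."""
    total = sum(client_sample_counts)
    return sum(update * (count / total)
               for update, count in zip(client_updates, client_sample_counts))

# Three cars each send a small update (here a toy 2-element weight vector).
updates = [np.array([0.10, -0.20]),
           np.array([0.12, -0.18]),
           np.array([0.08, -0.25])]
sample_counts = [100, 300, 50]  # how many local training samples each car used

global_update = federated_average(updates, sample_counts)
print(global_update)  # -> approximately [0.111, -0.192]
```

The point of the sketch is the bandwidth asymmetry: only the compact weight vectors cross the network, never the raw sensor data, which is why this pattern fits a flat server-to-device hierarchy without a fog layer in between.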

Moving Beyond the Distractions

In reality, the promise of 5G will take some time to come to fruition, let alone have a significant impact on AI processing. For starters, the rollout of 5G has not yet started, there is no consensus on what 5G stands for, and there is no clear timeline for getting anywhere close to a full-scale rollout or, more importantly, the shutdown of 4G networks. Depending on the region, it might be anywhere between 2025 and 2040 before we can rely fully on 5G with no 4G or 3G backup. The first wave of 5G deployments will run over the underlying 4G infrastructure, in what is called non-standalone (NSA) 5G. This interim version does not offer the full 5G experience or benefits, such as ultra-low-latency communication or massive IoT deployments. In fact, the full 5G standard, known as standalone (SA) 5G, was only recently finalized, so it is going to be some time before it is implemented in chipsets, hardware, and software, after which it must still be tested and deployed in networks. Even then, SA 5G is unlikely to be deployed nationwide anytime soon.

In the meantime, as AI applications are integrated into devices, the safe strategy is to rely on on-device AI processing rather than wait for 5G to be deployed. The push for 5G will nevertheless continue, with the telecom ecosystem, both vendors and operators, promoting 5G-enabled AI models. The debate around 5G-enabled cloud or fog processing versus device-based processing will only grow louder over time. As a result, the topic of 5G is likely to bring some confusion about what it means for AI processing; in many ways, it looks like a distraction rather than a way to move the needle forward.

Aditya Kaul
Research Director, Tractica
