You’ve heard it before—artificial intelligence (AI) at the edge is the coolest thing since sliced bread, and if you’re not taking advantage of it, you’re missing the boat. For better or worse, there is a lot of truth to those statements. The goal of AI, at least in an industrial environment, is to improve production capabilities, gain efficiencies, and reduce operating costs by pulling in real-time data from multiple points to produce actionable insights. That’s pointed out in a white paper from Avnet titled AI at the Edge: The next frontier of the Internet of Things.
In just about any application that requires real-time analytics, AI at the edge should be your “go-to.” Some industry insiders believe that AI at the edge will be omnipresent in the next wave of IoT connected solutions. The goal is to drive operational efficiency, not simply to collect data for its own sake.
While bringing AI into your system can be a boon for production, it’s not an absolute. There are times when employing AI at the edge makes sense, but there are circumstances when better options are available. You need to identify whether your AI should reside in the cloud or at the edge, and only then can you take the proper steps for implementation.
Cloud Versus Edge
Conventional thinking holds that when complex calculations are required, AI in the cloud makes more sense, as that’s typically where the higher-performing compute engine resides. The edge, by contrast, has been reserved for quick decisions based on simple machine-learning algorithms.
But times are changing. Edge-based computing can handle many of the AI-related tasks that previously required the compute power that was only found in the cloud. And processing at the edge offers an immediate two-fold advantage: you needn’t send your data to the cloud over an expensive medium, thereby reducing cost, and you remove the time lag associated with sending data off-site and then back for processing. While those latencies may seem minimal, real-time performance is sometimes required, and that’s simply not possible if you have to go to the cloud and back.
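The latency tradeoff described above can be sketched with a back-of-the-envelope comparison. All of the numbers here are illustrative assumptions, not measurements from any real deployment:

```python
# Hypothetical latency comparison: on-device inference vs. a cloud round trip.
# Every number below is an illustrative assumption, not a measurement.

EDGE_INFERENCE_MS = 15.0   # assumed inference time on modest edge hardware
CLOUD_INFERENCE_MS = 5.0   # assumed inference time on faster cloud hardware
NETWORK_RTT_MS = 80.0      # assumed network round-trip time to the cloud
SERIALIZATION_MS = 10.0    # assumed cost of packaging and unpacking the payload


def edge_latency_ms() -> float:
    """Total latency when the decision is made on the device itself."""
    return EDGE_INFERENCE_MS


def cloud_latency_ms() -> float:
    """Total latency when raw data is shipped to the cloud and back."""
    return NETWORK_RTT_MS + SERIALIZATION_MS + CLOUD_INFERENCE_MS


def meets_deadline(latency_ms: float, deadline_ms: float) -> bool:
    """Check a latency figure against a real-time deadline."""
    return latency_ms <= deadline_ms


if __name__ == "__main__":
    deadline = 33.0  # e.g., one frame budget at roughly 30 fps
    print(f"edge:  {edge_latency_ms():.1f} ms, meets deadline: "
          f"{meets_deadline(edge_latency_ms(), deadline)}")
    print(f"cloud: {cloud_latency_ms():.1f} ms, meets deadline: "
          f"{meets_deadline(cloud_latency_ms(), deadline)}")
```

Even though the cloud’s compute is faster in this sketch, the network round trip dominates, which is exactly why a real-time deadline can rule out the cloud path entirely.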
Examples of applications that will benefit from AI-infused edge computing include medical devices, manufacturing systems, and vehicles. Medical devices have a particular need for at-the-edge intelligence, for example, in the operating room, where data needs to be processed in a timely manner to provide the specific information for doctors to act upon.
Another potential area where AI at the edge can be beneficial is machine vision, using a combination of cameras and visual analytics. For example, a camera can be positioned in a distribution facility to monitor and manage goods that are moving between trucks and pallets. With current technology, this interaction can occur in near real time.
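The distribution-facility scenario above can be sketched as a simple edge vision loop. The camera and the detector here are hypothetical stand-ins; a real deployment would feed actual frames into a trained detection model running on the device:

```python
# Minimal sketch of an edge machine-vision loop for monitoring goods
# moving between trucks and pallets. The detector is a hypothetical
# stand-in for an on-device inference call, not a real model.

from dataclasses import dataclass


@dataclass
class Detection:
    label: str        # e.g., "pallet" or "forklift"
    confidence: float # model confidence in [0, 1]


def detect_objects(frame) -> list:
    """Hypothetical stand-in for an on-device inference call.

    A real implementation would run a quantized detection model
    on the camera frame; here we return fixed sample detections.
    """
    return [Detection("pallet", 0.92), Detection("forklift", 0.40)]


def process_frame(frame, threshold: float = 0.5) -> list:
    """Keep only confident detections so downstream logic acts on
    reliable results in near real time."""
    return [d.label for d in detect_objects(frame) if d.confidence >= threshold]


if __name__ == "__main__":
    labels = process_frame(frame=None)
    print(labels)  # only high-confidence labels survive the filter
```

Because the filtering and inference both happen on the device, the loop can keep pace with the camera without waiting on a round trip to the cloud.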
From a hardware standpoint, today’s edge-based AI devices are increasingly capable of supporting the power and capacity requirements needed to run those AI algorithms. It even goes down to the edge sensors themselves that now integrate significant memory and processing capacities within tiny footprints. And an obvious benefit of edge-based AI is the greater security it brings, as data needn’t be passed over the internet.
Software comes into play when you add analytics, which are key for edge-based AI. Members of groups like the OpenFog Consortium and the EdgeX Foundry offer tools that assist in edge-based analytics. A host of code for the EdgeX Foundry is available directly on GitHub.
OpenFog ecosystem partners include Microsoft, which recently acquired Express Logic and its popular ThreadX RTOS. Microsoft believes that its new OS and middleware add simplicity, safety, and security at the edge, while keeping the size of the code to a minimum, which suits smaller, battery-powered devices. Microsoft claims that it can now “seamlessly connect to Azure and enable new intelligent capabilities.”
The bottom line is that AI at the edge is likely in your future. And Avnet can be your one-stop resource for all your IoT needs when it comes to AI at the edge.