The Need for Edge Video Analysis

February 19, 2020

The past few years have seen a dramatic increase in the use of artificial intelligence (AI) systems to analyze images and videos, to detect and identify objects and people, and to derive actionable information from what they see.

AI-enabled video analysis requires a tremendous amount of computational resources, which explains why early video analysis predominantly took place in the cloud. Unfortunately, while cloud computing offers advantages for many applications, it's not well-suited to tasks in which latency (speed of response) is an issue. Performing video analysis via a cloud services supplier could result in latencies between 100 and 500 ms, which is unacceptable for the vast majority of mission-critical and safety-critical applications.

There's also the fact that modern vision systems may comprise large numbers of high-definition cameras running at resolutions of 4K or higher. If all of this video were uploaded to the cloud for analysis, it would consume a huge amount of bandwidth and incur significant costs.
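
To put this in perspective, the back-of-the-envelope sketch below estimates the uplink bandwidth and daily data volume for a modest camera installation; the camera count and per-stream bitrate are illustrative assumptions, not measured figures.

```python
# Rough, illustrative estimate of the uplink bandwidth needed to stream
# multiple 4K cameras to the cloud (assumed figures, not vendor data).

CAMERAS = 10          # hypothetical number of cameras on site
BITRATE_MBPS = 25     # assumed bitrate for a single compressed 4K/30fps stream

total_mbps = CAMERAS * BITRATE_MBPS
gb_per_day = total_mbps / 8 * 60 * 60 * 24 / 1000   # megabits/s -> gigabytes/day

print(f"Aggregate uplink: {total_mbps} Mbps")
print(f"Data uploaded per day: {gb_per_day:,.0f} GB")
```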

Another issue is that of security. Whenever you transmit data off the premises, you run the risk of it being compromised by hackers. One further and very important consideration is that of having a consistent Internet connection. In the case of a commercial application, losing your Internet connection is annoying. By comparison, in the case of industrial and transportation applications, for example, if you’re performing your AI-enabled video analysis in the cloud, then the loss of Internet access could result in injury or death.

Fortunately, advances in computing technology and AI algorithms have made it possible to perform edge video analysis (EVA); that is, to analyze the video on-location in real-time. Many AI algorithms, like those involving matrix operations, benefit from parallel processing. The capabilities of today's extremely powerful microprocessor units (MPUs) can be dramatically boosted with the addition of graphics processing units (GPUs), which boast thousands of small processors, each with its own local memory. In this case, however, the GPUs are not being used to manipulate graphical data for display, but rather to execute the video analysis AI algorithms in a massively parallel fashion.
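
As a minimal illustration of the speedup involved, the sketch below runs the same large matrix multiplication on the CPU and then on the GPU. It assumes a CUDA-capable GPU and the PyTorch library; any GPU-accelerated framework would demonstrate the same principle.

```python
# Minimal sketch: the same matrix multiplication on CPU and then on GPU.
# Assumes PyTorch with CUDA support; timings will vary with hardware.
import time
import torch

a = torch.rand(4096, 4096)
b = torch.rand(4096, 4096)

# CPU baseline
t0 = time.time()
c_cpu = a @ b
print(f"CPU matmul: {time.time() - t0:.3f} s")

# GPU version: thousands of small cores work on the multiplication in parallel
if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()
    t0 = time.time()
    c_gpu = a_gpu @ b_gpu
    torch.cuda.synchronize()   # wait for the asynchronous GPU kernel to finish
    print(f"GPU matmul: {time.time() - t0:.3f} s")
```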

Example Applications

The use of EVA is increasing rapidly and is expected to be ubiquitous in the not-so-distant future. The following three real-world examples, all built on ADLINK EVA systems, demonstrate what the technology can do:

Offshore drilling rigs present extremely hostile environments, including shock, vibration, noisy power supplies, wide-ranging temperature swings, high humidity, and saltwater. Furthermore, Internet connections on a rig are notoriously unreliable and support only low data bandwidths.


ADLINK-powered EVA improves safety and reduces accidents on oil rigs (Image source: pixabay.com)

High-resolution cameras augmented with ADLINK EVA systems can be used to monitor the main drill assembly, for example. In addition to issuing a warning if humans walk into an exclusion zone, the system can observe the speed and positioning of the clamps as they attach to the drilling apparatus, and immediately warn the human operators if anything is untoward.
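
Conceptually, the exclusion-zone warning reduces to a simple geometric test applied to every person the detector reports in each frame. The sketch below illustrates the idea; the zone coordinates and detection data are hypothetical stand-ins for whatever the deployed detector actually produces.

```python
# Hedged sketch: flag any detected person whose feet fall inside an exclusion zone.
# Zone coordinates and detections are hypothetical placeholders for detector output.

EXCLUSION_ZONE = (400, 300, 900, 700)   # (x_min, y_min, x_max, y_max) in pixels

def in_exclusion_zone(box, zone=EXCLUSION_ZONE):
    """Return True if the bottom-centre of a person's bounding box lies in the zone."""
    x_min, y_min, x_max, y_max = box
    foot_x, foot_y = (x_min + x_max) / 2, y_max   # feet are roughly the bottom centre
    zx_min, zy_min, zx_max, zy_max = zone
    return zx_min <= foot_x <= zx_max and zy_min <= foot_y <= zy_max

detections = [   # (label, bounding box) pairs - illustrative only
    ("person", (450, 200, 520, 650)),
    ("person", (1200, 100, 1280, 400)),
]

for label, box in detections:
    if label == "person" and in_exclusion_zone(box):
        print(f"WARNING: person inside exclusion zone at {box}")
```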

High-speed trains offer many benefits, including reducing congestion, improving mobility, and boosting productivity. Paradoxically, the problem with high-speed trains is that they travel at high speeds, which means the driver has very little time to respond to any obstructions on the track, such as animals, humans, fallen rocks, or even deformations in the tracks themselves.


ADLINK-powered EVA provides drivers of high-speed trains with "an extra pair of eyes" (Image source: pixabay.com)

Consider a high-speed train travelling at 300 km/h (186 mph), which equates to 83 m/s (273 ft/s). The fastest a human can respond to a perceived problem is around 200 ms (0.2 seconds), and that assumes the driver is looking straight ahead, not blinking, and not distracted by anything. With a more realistic response time of 500 ms, the train covers more than 40 m (over 130 ft) before the driver even begins to react, a distance that could mean the difference between life and death.
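
The arithmetic behind that claim is straightforward, as the short sketch below shows; the speed and reaction times are simply those quoted above.

```python
# Illustrative arithmetic: distance a 300 km/h train covers during the driver's
# reaction time (figures follow the text above; nothing here is measured data).

SPEED_KMH = 300
speed_ms = SPEED_KMH * 1000 / 3600   # roughly 83.3 m/s

for reaction_s in (0.2, 0.5):
    distance_m = speed_ms * reaction_s
    print(f"{reaction_s * 1000:.0f} ms reaction -> {distance_m:.1f} m travelled")
```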

High-resolution cameras augmented with ADLINK’s AVA-5500 AI-enabled video analytics platform equipped with an EGX-MXM-P3000 GPU module, which is powered by an NVIDIA® Quadro® Embedded GPU, can detect problems on the track as far as a kilometer away. The result is similar to providing the driver with an additional pair of eyes that never get tired, never get distracted, and never rest. Of course, these EVA systems have to be capable of dealing with a hostile railway environment that includes shocks, vibration, and noisy power supplies.

Today's airports have to handle an almost inconceivable amount of traffic. In 2019, for example, approximately 110 million passengers passed through Hartsfield-Jackson Atlanta International Airport; 100 million people were processed by Beijing Capital International Airport; while around 80 million experienced the delights of London Heathrow Airport. Not surprisingly, the number of people, vehicles, and planes moving around the airport, with other planes arriving and taking off, provides significant potential for problems.


ADLINK-powered EVA can constantly monitor busy airports to detect and identify potential problems. (Image source: pixabay.com)

Fortunately, ADLINK’s MVP-6100-MXM edge computing platforms equipped with EGX-MXM-P5000 GPU modules, which are powered by NVIDIA® Quadro® Embedded GPUs, can aid operators in the control tower by constantly monitoring the runways, taxiways, and terminals to detect and identify potential problems.

The live video feeds from ten cameras mounted around the top of the control tower, each with a 4K resolution, are stitched together to provide a seamless 360-degree panoramic view. The EVA's artificial intelligence system observes the movements of people, vehicles, and taxiing planes, immediately alerting the human operators as to any problems.
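
The stitching step can be pictured with OpenCV's high-level Stitcher API, as sketched below. This is purely an assumption about how such a pipeline might be prototyped; a production system would rely on fixed, pre-calibrated camera geometry rather than per-frame feature matching.

```python
# Hedged sketch: stitch one synchronized frame from each camera into a panorama
# using OpenCV's high-level Stitcher API. The file paths are hypothetical.
import cv2

frame_paths = [f"camera_{i:02d}.png" for i in range(10)]
frames = [f for f in (cv2.imread(p) for p in frame_paths) if f is not None]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama_360.png", panorama)   # wide view for the operators
else:
    print(f"Stitching failed with status {status}")
```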

This goes far beyond simply detecting that, say, a luggage cart is heading toward a taxiing plane. Because it is tied into the scheduling system, the EVA system also knows which planes have been instructed to land, take off, and taxi on which runways. If a plane has been instructed to take off on a certain runway but starts heading the wrong way, for example, the system can immediately sound an alarm. In the future, it may be that the EVA's AI system takes control of the situation and issues instructions to the humans operating the machines.
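
One way to picture the scheduling cross-check is as a comparison between each tracked aircraft's observed movement and its clearance, as in the sketch below; all flight numbers, runways, and data structures are hypothetical.

```python
# Hedged sketch: cross-check observed aircraft movements against clearances
# from the scheduling system. All identifiers and data are hypothetical.

clearances = {
    "BA117": {"runway": "27L", "action": "takeoff"},
    "DL205": {"runway": "09R", "action": "land"},
}

observations = [
    {"flight": "BA117", "runway": "27L"},
    {"flight": "DL205", "runway": "27L"},   # not where its clearance says it should be
]

for obs in observations:
    cleared = clearances.get(obs["flight"])
    if cleared is None or obs["runway"] != cleared["runway"]:
        expected = cleared["runway"] if cleared else "nothing"
        print(f"ALARM: {obs['flight']} observed on runway {obs['runway']}, "
              f"cleared for {expected}")
```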

The ADLINK Advantage

ADLINK Technology designs and manufactures a wide range of products for embedded computing, test and measurement, and automation applications. ADLINK's product line includes computer-on-modules, industrial motherboards, data acquisition modules, and complete systems. In the case of the company's EVA solutions, the capabilities of powerful Intel microprocessors are dramatically boosted by the addition of NVIDIA GPUs.

Most engineers are familiar with NVIDIA's off-the-shelf GPU cards with their integrated cooling fans. Unfortunately, while powerful, these cards are not suitable for EVA applications, partly because the off-the-shelf cards typically have a commercial lifespan of only two years. Furthermore, any system's cooling fans offer potential points of failure. If the fans stop working, the system stops working; and, if the system stops working, everything stops working.

As seen in the examples above, EVA systems are often deployed in harsh environments. To address these issues, ADLINK's EVA solutions feature ruggedized units that are fully certified for use in their target environments. Furthermore, these systems feature state-of-the-art passive cooling technology (no fans).

In the case of the EVA GPU subsystems, ADLINK's engineers have taken NVIDIA's GPUs and designed them onto boards with a much smaller form factor than traditional graphics cards. These boards provide equivalent processing capability while consuming less power, which allows them to employ passive cooling. Additionally, these cards are guaranteed by ADLINK to have a much longer commercial lifespan than conventional graphics subsystems.

In the next few years, systems employing edge video analytics will be deployed in myriad diverse locations to perform a vast array of applications—all designed to make our lives safer and more secure—and ADLINK will continue to be at the forefront of this exciting technology.