At the beginning of every technology hype cycle you can find organizations scrambling to find a seat on the bus. As the dust begins to settle, you realize that many of them were aimlessly instructing their developers and engineers to "create an [insert hyped technology here] strategy", which mostly resulted in dead-end proofs of concept and prototypes-turned-paperweights.
It's safe to say that we have reached that place with artificial intelligence. For more than a year now, organizations have been scrambling to collect and tag sample data sets, generate highly accurate or responsive models, and optimize neural network algorithms that could run efficiently in their use case and environment. Now that neural networks appear to have become the de facto algorithm of choice, software development tools and frameworks are maturing, and the first generations of AI-optimized logic devices are hitting the market, we face a stark realization: We have to deploy these things!
And, as is the case with any new technology, deploying an end-to-end, production-quality AI system that can capture, process, and act on analog inputs from the real world turns out to be exceedingly complex:
- Simply choosing AI endpoint infrastructure can be challenging, as you need hardware that is robust enough to withstand the rigors of deployment but advanced enough to support multiple inferencing workloads or changes to AI algorithms over time.
- Once an endpoint platform is chosen, data of interest must be transported over some sort of network so that it can be stored and analyzed in higher-order systems. This presents latency, cost, and privacy/security concerns, especially where streaming video is concerned.
- Finally, any AI platform worth deploying should be able to turn this process into some sort of action, which often requires communication back to the endpoint or the triggering of another integrated system.
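The three challenges above trace a capture, infer, transport, act loop. As a rough sketch of that loop in Python (every function name here is a hypothetical stand-in, not an ADLINK or OpenVINO API):

```python
# Minimal sketch of the capture -> infer -> publish -> act loop.
# All names are illustrative placeholders, not vendor APIs.

from dataclasses import dataclass


@dataclass
class Inference:
    label: str
    confidence: float


def capture_frame() -> bytes:
    """Stand-in for grabbing a frame from a camera on the endpoint."""
    return b"\x00" * 16  # placeholder pixel data


def run_inference(frame: bytes) -> Inference:
    """Stand-in for an on-device inferencing engine."""
    return Inference(label="person", confidence=0.92)


def publish(result: Inference) -> None:
    """Stand-in for transporting the result over a network to
    higher-order systems (the latency/cost/privacy step)."""
    print(f"publish: {result.label} ({result.confidence:.2f})")


def act_on(result: Inference, threshold: float = 0.8) -> bool:
    """Close the loop: decide whether to trigger an action."""
    return result.confidence >= threshold


frame = capture_frame()
result = run_inference(frame)
publish(result)
if act_on(result):
    print("action triggered")
```

Even this toy version makes the point: each stage maps to a different piece of infrastructure, which is why gluing them together in production is hard.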
And those are just the obvious, high-level deployment challenges.
AI at the Edge in Under 10 Minutes
Rob Boville, Head of Engineering & Architecture at ADLINK Technology, Inc., sees these not only as technical limitations to overcome, but also as opportunities to improve the user experience for AI systems integrators. In response, his team has developed an edge-to-cloud AI deployment stack based on the Vizi-AI Development Kit and ADLINK Edge software.
WATCH Dev Kit Weekly featuring ADLINK Technology’s Vizi-AI Development Kit:
In this episode of Embedded Toolbox, Boville demonstrates how to deploy a system that can capture, analyze, and act on data. In about ten minutes, Rob configures a system of systems that passes a video feed through an Intel OpenVINO inferencing engine running on a Vizi-AI platform, then over the ADLINK Data River using the company's Edge Training Streamer and into Stream Viewer, a solution that combines the video frame and inferencing result to create a streaming RTSP output. Rob then manages the RTSP stream, builds applications, and provisions devices with Edge Profile Builder – also part of the ADLINK Edge software stack – before using ADLINK Edge Node-RED to integrate a Twilio account that alerts his smartphone when specific actions occur.
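The last step of the demo, alerting a phone when specific detections occur, follows a simple detect-then-alert pattern. A hypothetical sketch of that rule, with the Node-RED/Twilio delivery stubbed out (the watch list, threshold, and function names are assumptions for illustration, not ADLINK's code):

```python
# Hypothetical detect-then-alert rule, mirroring the end of the demo.
# Labels, threshold, and all names here are illustrative assumptions.

def should_alert(label: str, confidence: float,
                 watch_labels=("person",), threshold: float = 0.85) -> bool:
    """Fire only on watched labels above a confidence threshold."""
    return label in watch_labels and confidence >= threshold


def send_alert(message: str) -> None:
    # In the demo this routes through Node-RED to Twilio; here we just
    # print. A real SMS would go through Twilio's REST messaging API.
    print(f"ALERT: {message}")


detections = [("car", 0.91), ("person", 0.70), ("person", 0.93)]
for label, conf in detections:
    if should_alert(label, conf):
        send_alert(f"{label} detected with confidence {conf:.2f}")
```

In Node-RED the equivalent logic lives in a function node wired between the inference input and the Twilio output node, but the filtering idea is the same.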
Tune in to see how you can turn edge AI from hype into reality with the ADLINK Edge ecosystem.
For more information, visit https://goto50.ai.
More Content by Brandon Lewis