The development of next-generation cloud-based functionality continues at a breakneck pace, fueled by the ongoing expansion of the IoT and compounded by the growth of edge computing. Edge-driven systems are the logical future of almost anything in the cloud, as smart devices at the point of application must address both latency and bandwidth issues effectively.
This horsepower at the edge depends heavily on efficient subsystem communications, the integration of sensor systems, and proper management of motion and logic. The biggest difference in the IoT is between true edge-computing devices and cloud-only devices. For example, an Alexa smart speaker is an edge-computing product that makes on-site logic decisions based on external stimuli before involving the cloud, whereas a smart security camera or vision sensor may only relay information to the cloud without first acting on it.
What distinguishes a “smart” cloud device from a “dumb” one is the amount of CPU power available on the client side, which has a significant impact on both the latency of the operation and the wireless bandwidth consumed. In industrial settings, for example, any timing problem impacts productivity, so purely cloud-based solutions often cannot be exploited. In a process that requires an on-site decision, such as sorting fruit on a conveyor, the time needed to get a response from a central server would make the task impossible.
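The timing constraint can be made concrete with a back-of-the-envelope budget. The sketch below compares a hypothetical cloud round trip against on-device inference for a conveyor sorting task; every number in it (belt speed, ejector distance, latencies) is an illustrative assumption, not a measured value from the article.

```python
# Hypothetical latency budget for a fruit-sorting line.
# All figures below are illustrative assumptions, not measurements.

CONVEYOR_SPEED_M_S = 2.0   # assumed belt speed, meters per second
CAMERA_TO_EJECTOR_M = 0.10  # assumed distance from camera to air-jet ejector

# Time available to make a reject/accept decision before the item
# passes the ejector:
decision_budget_s = CAMERA_TO_EJECTOR_M / CONVEYOR_SPEED_M_S  # 0.05 s

# A cloud round trip has several components (all assumed values):
cloud_latency_s = 0.020 + 0.030 + 0.020  # uplink + server queue/inference + downlink

# Local inference on an edge accelerator (assumed value):
edge_latency_s = 0.008

print(f"budget: {decision_budget_s * 1000:.0f} ms")
print(f"cloud meets budget:  {cloud_latency_s <= decision_budget_s}")
print(f"edge meets budget:   {edge_latency_s <= decision_budget_s}")
```

Under these assumptions the 70 ms cloud round trip overshoots the 50 ms decision window, while the 8 ms edge path fits comfortably, which is exactly why the decision has to live on-site.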
The ability to put more and more CPU power into edge devices makes it possible to perform image processing and other computing tasks directly on the edge. For another example, if you are scanning barcodes in a retail automated checkout system and you want to decode them in the cloud, you have to stream multiple cameras at full bandwidth. But with preprocessing on the edge device, you can reduce the bandwidth by as much as 95 percent.
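The scale of that reduction is easy to verify with simple arithmetic. The sketch below compares streaming raw camera frames against sending only decoded barcode results; the camera count, resolution, frame rate, and payload sizes are all illustrative assumptions chosen for the calculation, not figures from the article.

```python
# Back-of-the-envelope bandwidth comparison for barcode scanning at
# an automated checkout. All parameters are illustrative assumptions.

CAMERAS = 4
FRAME_W, FRAME_H = 1280, 720  # pixels
BYTES_PER_PIXEL = 1           # 8-bit grayscale
FPS = 30

# Option A: stream every raw frame to the cloud and decode there.
raw_bps = CAMERAS * FRAME_W * FRAME_H * BYTES_PER_PIXEL * FPS * 8

# Option B: decode on the edge device, send only the results.
SCANS_PER_SECOND = 5    # assumed successful reads across all cameras
BYTES_PER_RESULT = 64   # assumed barcode payload + timestamp + camera ID
edge_bps = SCANS_PER_SECOND * BYTES_PER_RESULT * 8

reduction = 1 - edge_bps / raw_bps
print(f"raw stream:  {raw_bps / 1e6:.0f} Mbit/s")
print(f"edge decode: {edge_bps / 1e3:.1f} kbit/s")
print(f"reduction:   {reduction:.4%}")
```

Even with generous allowances for compressed video instead of raw frames, sending decoded results rather than pixels cuts the link load by orders of magnitude, so a 95 percent reduction is a conservative claim.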
Among the most bandwidth- and processor-hungry aspects of an intelligent system is its vision, from advanced consumer gesture-capture applications to factory automation, service robotics, drones, and similar products. We reached out to Christoph Wagner from MVTec, a provider of machine vision software, to comment on this issue. According to Christoph, although the industry is migrating toward a group of common standards and communications protocols, there is no true unity yet.
Addressing this, MVTec Software and Hilscher Gesellschaft für Systemautomation initiated a technical partnership to enable easier integration of machine vision and process automation. Combining MVTec software products with Hilscher PC cards enables machine vision applications to be integrated easily and seamlessly into any process control system. For example, MVTec’s machine vision software MERLIC can communicate with all commercial programmable logic controllers (PLCs).
As for MVTec, the optimized process integration is largely based on the application programming interface (API) of the cifX PC card family from Hilscher. This API is a standardized interface for all PC cards, which are provided by the manufacturer for all common form factors. Users of MVTec MERLIC and MVTec HALCON can choose from among all popular Fieldbus and Real-Time Ethernet industrial protocols, including PROFINET, EtherCAT, and other standards.
This effort, and others like it across the industry, to integrate advanced functionality at the edge more seamlessly will yield more capable and cost-effective solutions in automation and other complex edge-computing applications. Greater standardization and modularity in communication and computation can only further this endeavor and make next-generation cloud-based functionality a reality.