As the Internet of Things (IoT) continues its rapid pace of growth, there’s been no shortage of inflated expectations. Platforms promising to ease the development, deployment, and management of IoT systems are now counted in the hundreds. You might be led to believe that all you have to do is pick the platform you like best, from the vendor you trust most, and go build your IoT system. Well, the story is not so simple.
In reality, nearly all of the IoT platforms available are designed to only support cloud-centric architectures. These platforms centralize the “intelligence” in the cloud and require data to be conveyed from the edge to do anything useful with it. Considering the success of this cloud-centric model in IT (and in some IoT applications like fleet management), you may wonder—what’s the big deal?
Simply put, cloud-centric architectures aren’t applicable to a large class of IoT applications. Most notably, cloud-centric architectures fall short in supporting Industrial IoT (IIoT) systems and struggle with more demanding Consumer IoT (CIoT) applications.
The scariest part is that the situation will only get worse with the predicted increase in the number of connected things. But the problem goes beyond the sheer number of things. There’s something more fundamental limiting cloud-centric architectures’ applicability for IoT systems. Let’s go through these limitations one by one.
Cloud-centric architectures assume that sufficient connectivity exists from the things to the cloud. This is necessary for collecting the data from the edge, and for pushing insight or control actions from the cloud to the edge. Yet, connectivity is hard to guarantee for several IoT/IIoT applications, such as smart autonomous consumer and agricultural vehicles. As you can imagine, connectivity may be taken for granted in metropolitan areas, but not so much in rural areas.
Cloud-centric computing assumes that sufficient bandwidth exists to ingest the data from the edge into the data center. The challenge here is that several IIoT applications produce incredible volumes of data. For instance, a factory can easily produce a terabyte of data per day. And these numbers will only grow with the continued digitalization of factories.
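To put that figure in perspective, here is a back-of-the-envelope calculation of the sustained uplink a single such factory would need. The 1 TB/day volume comes from the text; everything else is simple arithmetic:

```python
# Back-of-the-envelope: sustained uplink needed to ship 1 TB/day to the cloud.
TB = 10**12  # bytes
SECONDS_PER_DAY = 24 * 60 * 60

daily_volume_bytes = 1 * TB
required_mbps = daily_volume_bytes * 8 / SECONDS_PER_DAY / 10**6

print(f"Sustained uplink required: {required_mbps:.1f} Mbit/s")
# Roughly 92.6 Mbit/s, sustained around the clock, for a single factory.
```

And that is just one site; multiply by every factory in a group, and the ingestion problem becomes clear.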
Let’s assume that the connectivity and bandwidth problems are solved. All good now? Nope. There’s still a large class of IIoT systems for which the latency required to send data to the cloud, make decisions, and then push those decisions back toward the edge is simply incompatible with the dynamics of the underlying system. A key difference between IT and IoT/IIoT is that the latter deals with physical entities. As such, the reaction time can’t be arbitrary; it must be compatible with the dynamics of the physical entity or process with which the application interacts. Failing to react with the proper latency can lead to system instability, infrastructure damage, or even put human operators at risk.
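A toy calculation makes the mismatch concrete. All the numbers below are illustrative assumptions, not measurements, but they are in the right ballpark for a WAN round trip versus a tight control loop:

```python
# Illustrative latency-budget check for a closed control loop.
# All figures are assumptions chosen for the sake of the example.
cloud_round_trip_ms = 80   # edge -> cloud -> edge over a WAN
cloud_processing_ms = 20   # time to make the decision in the cloud
loop_deadline_ms = 10      # reaction time the physical process demands

total_ms = cloud_round_trip_ms + cloud_processing_ms
print(f"Cloud path: {total_ms} ms vs. deadline: {loop_deadline_ms} ms")
if total_ms > loop_deadline_ms:
    print("Cloud-centric control misses the deadline; decide at the edge.")
```

No amount of bandwidth fixes this: the speed of light and the round trip itself put the cloud out of the control loop.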
In the age of smartphones and very cheap data plans, most people assume that the cost of connectivity is negligible. The reality is quite different in IIoT, due to bandwidth requirements or to the sheer number of connectivity points. In consumer applications, the individual consumer pays for connectivity; in most IoT/IIoT applications, such as smart grids, it’s the operator who foots the bill. As a result, the cost is usually carefully accounted for, as it has a direct impact on OPEX and, consequently, on margins.
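To see why the bill is hard to ignore at fleet scale, here is a hedged back-of-the-envelope estimate. The per-device data volume and the tariff are purely illustrative assumptions, not real pricing:

```python
# Illustrative connectivity OPEX for a device fleet.
# Tariff and data volumes are assumptions, not actual figures.
devices = 100_000             # e.g., smart meters in a utility's grid
mb_per_device_per_month = 50  # assumed telemetry volume per device
usd_per_mb = 0.01             # assumed cellular IoT tariff

monthly_opex = devices * mb_per_device_per_month * usd_per_mb
print(f"Connectivity OPEX: ${monthly_opex:,.0f}/month")
# 100,000 devices x 50 MB x $0.01/MB = $50,000 every month.
```

A recurring cost of that magnitude, every month, for the lifetime of the deployment, is exactly the kind of line item an operator scrutinizes.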
Finally, even assuming that all of the issues listed above are addressed, there remains a large class of IIoT applications that either aren’t comfortable pushing their data to a cloud or are prohibited from doing so by regulation.
In summary, unless you can guarantee that the connectivity, bandwidth, latency, cost, and security requirements of your application are compatible with a cloud-centric architecture, you need a different paradigm, and 99.9% of the IoT platforms available on the market are not of much use.
Fog computing is emerging as the main paradigm to address the connectivity, bandwidth, latency, cost, and security challenges imposed by cloud-centric architectures. The main idea behind fog computing is to provide elastic compute, storage, and communication close to the things, so that data needn’t be sent all the way to the cloud, or at least not all data and not all the time. Moreover, the infrastructure is designed from the ground up to deal with cyber-physical systems (CPS) as opposed to IT systems. In other words, it is designed around the constraints imposed by interaction with the physical world in terms of latency, determinism, load balancing, and fault tolerance.
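As a minimal illustration of the "not all data, not all the time" idea (a sketch, not any particular fog platform's API), an edge node can reduce a window of raw sensor samples to a compact summary locally and forward only that summary, plus any out-of-range alarms, to the cloud. The function name, fields, and threshold below are illustrative assumptions:

```python
from statistics import mean

def summarize_window(readings, threshold):
    """Reduce a window of raw sensor samples to a compact summary.

    Only the summary (and any out-of-range alarm values) would leave
    the edge node; the raw samples stay local. Field names and the
    threshold are illustrative assumptions.
    """
    return {
        "count": len(readings),
        "mean": mean(readings),
        "max": max(readings),
        "alarms": [r for r in readings if r > threshold],
    }

# One second of (synthetic) 100 Hz vibration samples reduced to a few fields.
window = [0.1 * (i % 7) for i in range(100)]
print(summarize_window(window, threshold=0.55))
```

A hundred raw samples collapse into a handful of numbers, cutting the bandwidth to the cloud by orders of magnitude while keeping the latency-critical decision (the alarm check) at the edge.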
Angelo Corsaro, Ph.D., is the Chief Technology Officer at PrismTech, where he directs the company’s technology strategy, planning, evolution, and evangelism. He also leads strategic standardization at the Object Management Group, where he co-chairs the Data Distribution Service Special Interest Group and serves on its Architecture Board. Angelo earned a Ph.D. and an M.S. in Computer Science from Washington University in St. Louis, and a Laurea Magna cum Laude in Computer Engineering from the University of Catania, Italy.