We are lucky to live in an age where a person with an idea can create a smart, electronics-based solution to almost any existing problem. Aside from the social issues that provide the impetus for the “will,” we have all the tools necessary to address the “way.” In a complex electronic infrastructure, however, both are needed to deploy the right solutions in the best way possible.
In the modern world, the ability to communicate and exchange data is a staple like clothing and shelter. It is nearly impossible to function in society today without a connection to the network. This is becoming a reality for inanimate objects as well, as eventually every product that draws power will be networked in some way. In the future, it will be a poor toaster that can’t text you when the toast pops up, or at least notify you (or possibly the fire department) when a piece of toast gets stuck or some other failure occurs.
However, all this exponentially growing data volume places ever-greater pressure on the cloud and its wireless infrastructure. Every device with telemetry adds more traffic to an already crowded network. Optimum usage of the wireless spectrum is critical to the continued growth and utility of the cloud and the Internet of Things (IoT). Anyone who games online or streams video regularly knows the perils of insufficient bandwidth and the related latency issues.
That’s why Edge Computing is getting very hot right now. Putting more logic and processing power in the field device not only directly addresses network latency, it also provides more computing power at the point of application for additional functionality. Instead of sending everything to the server, an edge-computing device does its primary data processing on-site, eliminating the bulk of the data that would otherwise need to be sent back.
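To make the data-reduction argument concrete, here is a minimal sketch of the idea: instead of streaming every raw sensor sample to the cloud, an edge node condenses a window of readings into a compact summary and transmits only that. The function name, window size, and sample values below are hypothetical, chosen purely for illustration.

```python
import statistics

def summarize_window(samples):
    """Reduce a window of raw sensor samples to a compact summary
    suitable for transmission over a constrained uplink."""
    return {
        "min": min(samples),
        "max": max(samples),
        "mean": round(statistics.mean(samples), 2),
        "count": len(samples),
    }

# Simulated raw telemetry: one temperature reading per second for 5 minutes.
raw = [20.0 + (i % 7) * 0.1 for i in range(300)]

summary = summarize_window(raw)
print(summary)

# Payload comparison: 300 raw values on the wire vs. one 4-field summary.
print(f"raw values: {len(raw)}, summary fields: {len(summary)}")
```

The 300-sample window collapses to four numbers; the cloud still gets what it needs for trending and alarming, while the uplink carries roughly two orders of magnitude less data.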
When computers were first developed, they were deployed in buildings with the mainframe in the basement and dumb terminals at each work position, time-sharing the computing power. As computers grew smarter, this “thin client” approach was eventually replaced by a network of smart computers storing their data on the basement computer, which eventually became an offsite server farm. Now the cloud and IoT are making the final migration from a thin-client approach to one of networked smart systems that do their own onsite processing, reducing the burden on network traffic.
So what else can be done? The next thing to do is get more bandwidth. The situation is reminiscent of a scene in the animated television show “The Simpsons,” where the patriarch Homer is told by a financial advisor that he is too “stupid” to budget, and so “just” needs to make more money. We are not limited by stupidity (well, maybe sometimes), but we are limited by the technology we must deal with. We “just” need more bandwidth.
Enter 5G, stage left. The current label for the hodgepodge of technologies being integrated into the next generation of wireless infrastructure, 5G has been offered as a panacea for all our wireless issues. Occupying millimeter-wave bands from 24 to 86 GHz, 5G does offer blazing speeds and low latency, but the signals cannot travel very far and can’t penetrate most walls, setting us up for an additional layer of RF infrastructure just to cope with the shortcomings of the spectrum used.
The other problem with 5G is that it is being offered as a cure-all for every user in the system, animate and inanimate. That means you will be sharing your data volume with your car and your smart appliances. The network may well handle that load in the long run, but it presents issues during deployment and development. Add the need for a repeater or other gateway in homes to bridge walls, and one can see the potential for hardware problems in the infrastructure.
Another available solution is to use separate RF bands for devices and for people. This approach will probably wind up being part of the final mix, as there is already hot competition between Sigfox and LoRa to provide the RF infrastructure for the IoT. LoRa uses chirp spread spectrum (CSS), and Sigfox uses ultra-narrowband (UNB). The core hardware for LoRa is owned by Semtech, but the ecosystem is managed by a non-profit group, the LoRa Alliance. Sigfox is a corporate entity that shares its reference design with chip vendors.
The biggest difference between the two is that Sigfox charges a nominal fee for every device, while LoRa doesn’t charge a licensing fee. Another difference is that Sigfox is bigger in the USA and LoRa is bigger in Europe. Since they are intended for regional applications, this split isn’t a serious issue. Both use the industrial, scientific, and medical (ISM) radio bands, so the signals can be received by low-orbit satellites and can also penetrate walls, making them very useful for remote device applications (Figure 2).
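These device-oriented networks trade bandwidth for range and cost, so a first design question is whether your telemetry plan even fits the network's message budget. The sketch below checks a plan against Sigfox's commonly cited uplink limits (140 messages per day, 12-byte payload each); the application numbers used in the example are hypothetical.

```python
# Commonly cited Sigfox uplink limits (verify against your operator's terms).
SIGFOX_MAX_MSGS_PER_DAY = 140
SIGFOX_PAYLOAD_BYTES = 12

def fits_sigfox(readings_per_day: int, bytes_per_reading: int) -> bool:
    """Return True if a telemetry plan fits within Sigfox's daily
    message budget, packing as many readings per payload as possible."""
    readings_per_msg = SIGFOX_PAYLOAD_BYTES // bytes_per_reading
    if readings_per_msg == 0:
        return False  # a single reading won't fit in one payload
    msgs_needed = -(-readings_per_day // readings_per_msg)  # ceiling division
    return msgs_needed <= SIGFOX_MAX_MSGS_PER_DAY

# Example: a meter sending a 4-byte reading every 15 minutes (96/day)
# packs 3 readings per message, needing only 32 messages -- it fits.
print(fits_sigfox(96, 4))
# A chattier device sending 2,000 readings/day needs 667 messages -- it doesn't.
print(fits_sigfox(2000, 4))
```

The same back-of-the-envelope exercise applies to LoRaWAN, where the constraint is duty-cycle and airtime limits in the regional ISM band rather than a fixed message count.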
The final result will almost certainly be a mix of all of the above, with Edge Computing, 5G, and both LoRa and Sigfox providing the RF infrastructure for most industrial and IoT applications. The key is to have a solid grasp of all the available solutions, so that you can choose the best mix to create an optimized solution for your application(s).