AI isn't coming. It's already here.

By Ray Alderman

Executive Director

VITA

September 08, 2017

AI is the third biggest change in the history of military warfare, after gunpowder and nuclear weapons.

Now hear this: AI, or artificial intelligence, is going to take over. In fact, I could make the argument that it already has. AI breaks down into machine learning and, under that, deep learning.

There are both software and hardware versions of AI (software is slow; hardware is much faster and more elegant). Intel, ARM, Movidius, IBM, Google, Amazon, Facebook, the Catholic Church, Planned Parenthood, and the Freemasons are all developing AI “war algorithms” as you read this post.

I watched your Embedded Insiders video on this topic, and you guys obviously get it. But understand the hierarchy when defining AI, or more specifically machine learning versus deep learning: AI is the broad spectrum of intelligent machine processes. Under that is machine learning, and under that is deep learning.

Machine learning is “shallow AI”: what the machine learns is narrowly bounded, with only a few thousand or so events to learn from. Deep learning is “deep AI”: it uses millions of events to learn a more complex process or task.
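To make the shallow-versus-deep distinction concrete, here’s a minimal Python sketch; every size, rule, and layer width in it is hypothetical, chosen only to show the difference in scale and structure:

```python
# Toy contrast between "shallow" machine learning and deep learning.
# All sizes here are hypothetical; real deep learning trains on millions of events.
import numpy as np

rng = np.random.default_rng(0)

# Shallow machine learning: a few thousand narrowly bounded events,
# a single weight vector, and a simple structured rule to recover.
X = rng.normal(size=(2000, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w = np.zeros(8)
for _ in range(200):                          # plain logistic regression
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
print("shallow accuracy:", ((1 / (1 + np.exp(-X @ w)) > 0.5) == y).mean())

# Deep learning: layers stacked under one another; the depth is what
# lets the network learn a far more complex, unstructured task.
W1, W2, W3 = (rng.normal(scale=0.1, size=s) for s in [(8, 64), (64, 64), (64, 1)])

def deep_forward(x):
    h = np.maximum(x @ W1, 0.0)               # ReLU nonlinearity between layers
    h = np.maximum(h @ W2, 0.0)
    return h @ W3
```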

Machine learning is primarily found in IoT and industrial processes, which use a narrow spectrum of logically structured, observed events, where only a small set of samples is needed to learn from. In military applications, the spectrum of the task is much larger and uses millions of examples to learn from (like image analysis to identify tanks, troops, missiles, convoys, etc., from many video and still images).
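As a toy illustration of that imaging workload (the class labels and layer sizes below are mine, not any fielded system’s), a small convolutional network in PyTorch scores a frame against target categories:

```python
# Hypothetical sketch of a deep-learning image classifier for target recognition.
# Classes and sizes are illustrative only; a real system trains on millions of images.
import torch
import torch.nn as nn

CLASSES = ["tank", "troops", "missile", "convoy"]  # illustrative labels

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),    # learn low-level edges/textures
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),   # learn higher-level shapes
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, len(CLASSES)),                   # one score per target class
)

frame = torch.randn(1, 3, 224, 224)                # stand-in for one video frame
scores = model(frame)
print(CLASSES[scores.argmax().item()])             # most likely target class
```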

Another MIL app is electronic warfare (EW), where the EW machine has to see millions of examples of waveforms, different frequencies, different pulse widths, etc., coming from enemy radar in an unstructured, random fashion. This is necessary so that it can create countermeasures on the fly.
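Here’s a toy sketch of the idea; the emitter names and parameter values are invented for illustration. Each intercepted pulse is reduced to a few waveform features and matched against emitter profiles the machine has already learned:

```python
# Toy emitter identification: match an intercepted radar pulse against
# learned emitter profiles. All emitter names/parameters are hypothetical.
import numpy as np

# Profiles the EW machine would have learned from millions of example pulses:
# (center frequency in GHz, pulse width in microseconds)
profiles = {
    "search_radar_A":   np.array([3.0, 10.0]),
    "tracking_radar_B": np.array([9.5, 0.5]),
    "seeker_C":         np.array([16.0, 0.1]),
}

def identify(pulse):
    """Nearest-profile match; real systems learn far richer waveform features."""
    return min(profiles, key=lambda name: np.linalg.norm(pulse - profiles[name]))

intercept = np.array([9.3, 0.6])  # unstructured pulse arriving in real time
print(identify(intercept))        # the ID that drives the countermeasure choice
```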

A second concept to be aware of is soft-AI versus hard-AI. Soft-AI is an AI algorithm in software running on an antiquated von Neumann-architecture CPU, like an Intel processor (Intel has actually put AI instructions into its latest Xeon processors). Von Neumann machines only work well when we are CPU-bound (i.e., when the I/O can deliver more data than the CPU can process).

Since 1993, we’ve been I/O-bound (the CPU can process more data than the I/O links can deliver), and von Neumann machines don’t work well under these conditions. Today, the CPU utilization in a data center runs at about 10%, and the servers are typically waiting for memory or I/O.
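A rough way to see the I/O-bound condition for yourself (sizes and timings here are arbitrary and machine-dependent; this is a sketch, not a benchmark):

```python
# Rough illustration of CPU-bound vs. I/O-bound: compare the time to
# process a buffer already in memory with the time the I/O takes to deliver it.
import os, time, tempfile

SIZE = 100 * 1024 * 1024                # 100 MB test buffer

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(SIZE))          # stage the data on disk
    path = f.name

t0 = time.perf_counter()
with open(path, "rb") as f:
    data = f.read()                     # I/O: deliver the data
t_io = time.perf_counter() - t0

t0 = time.perf_counter()
total = sum(data[::4096])               # CPU: touch the data (trivial work)
t_cpu = time.perf_counter() - t0

# On typical hardware the CPU finishes long before the I/O could feed it again.
print(f"I/O time {t_io:.3f}s vs CPU time {t_cpu:.3f}s")
os.remove(path)
```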

While soft-AI is slow, it works fine for IoT and other pedestrian industrial applications. We’re seeing soft-AI in the military, running algorithms on GPUs and on von Neumann CPUs, but performance isn’t great. Running soft-AI does tell engineers how to design the hard-AI that’s needed, though, so soft-AI can be a stepping stone to hard-AI in complex applications.

Hard-AI, on the other hand, puts the neural network in hardware. Examples are Intel’s new VPU (announced last week) and the new Movidius chips for video and image processing. Look at hard-AI as the next generation of FPGAs, with the RTL code mapping out the hardware functions. Many semiconductor vendors are working on their hard-AI chips today. There are 27 different convolutional neural networks defined so far.
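To see what “putting the neural network in hardware” actually fixes in silicon, here’s a minimal sketch of the core convolution loop written out serially in plain Python; a hard-AI part unrolls these multiply-accumulates into parallel hardware, much as RTL maps logic onto an FPGA, instead of executing them one at a time:

```python
# The inner loop of a convolutional neural network, written out serially.
# A hard-AI chip maps these multiply-accumulates into parallel hardware
# instead of executing them one by one on a von Neumann CPU.
import numpy as np

def conv2d(image, kernel):
    """Naive 2-D convolution (valid padding), one output pixel at a time."""
    H, W = image.shape
    k = kernel.shape[0]
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # One multiply-accumulate window: the unit of work hard-AI parallelizes.
            out[i, j] = np.sum(image[i:i + k, j:j + k] * kernel)
    return out

edge_kernel = np.array([[1, 0, -1]] * 3, dtype=float)  # toy 3x3 edge detector
frame = np.random.rand(64, 64)                          # stand-in image
print(conv2d(frame, edge_kernel).shape)                 # (62, 62)
```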

Ray Alderman is Chairman of the Board at VITA.