2020 Embedded Processor Report: Back to the Future with Analog Computing

By Brandon Lewis

Editor-in-Chief

Embedded Computing Design

February 03, 2020


Analog computing – and even analog signal processing – appears to be making a comeback. Why?

While studying at MIT, Claude Shannon, widely regarded as “the father of information theory,” worked extensively with the Differential Analyzer developed a decade earlier. The Differential Analyzer was essentially the first general-purpose analog computer, and Shannon’s experience with the machine would have a seminal influence on later works such as A Mathematical Theory of Communication.

About the same time, Shannon’s contemporaries were making strides in digital computing systems that would be fully realized over the next two decades and culminate in the digital signal processing revolution of the 1970s and 80s.

But now, roughly 80 years after Shannon’s introduction to the Differential Analyzer, analog computing – and even analog signal processing – appears to be making a comeback.

Why?

“There are two significant reasons,” said Gene Frantz, VP of Engineering at Octavo Systems and member of the design team responsible for the first digital signal processor, the Texas Instruments TMS5100.

“First, IC technology has advanced over the last five decades to make many of the things that were impossible or impractical now very doable,” Frantz continued. “Second, we are finding new problems worth solving that digital solutions are not adequate for – specifically, the need for higher performance and at the same time the need for significantly lower power dissipation.”

As Moore’s law draws to a close, the need for lower power and higher performance will be felt in an increasing number of application areas. And it is already generating renewed interest in analog computation in tasks ranging from mixed-signal signal processing (MSSP) for neural network workloads to dynamic system simulation using differential equations.

Back to basics with physical simulation

To illustrate the most basic advantages of analog computing, consider processing analog signals described by a set of differential equations. Because continuous time doesn’t exist in the digital computing paradigm, a digital computer must sample the input every clock cycle to produce a sampled version of the signal. This can require many, many computations, which has the cascading effect of higher latency, increased power consumption, and so on.

Compare this to the massive parallelism of an analog computer. Rather than deconstructing inputs into sequential tasks, analog computing circuitry can be configured as basic units (adders/subtractors, multipliers, integrators, fanouts, nonlinear functions, etc.) to solve the differential equations in question, and then process the entire input signal continuously (Figure 1).
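To make the sequential cost concrete, here is a minimal sketch (plain Python, purely illustrative) of a digital processor approximating even the simplest first-order equation, dx/dt = -x, one sample at a time:

```python
# Minimal sketch: digitally approximating dx/dt = -x with fixed-step
# (forward Euler) integration. Every output sample costs a multiply,
# an add, and a store -- work that an analog integrator performs
# continuously, with no clock and no per-sample arithmetic.

dt = 0.001          # sample period (s); finer steps -> more accuracy, more work
x = 1.0             # initial condition x(0)
samples = []

for _ in range(10_000):        # 10 seconds of simulated time
    x += dt * (-x)             # one Euler step: x[n+1] = x[n] + dt * f(x[n])
    samples.append(x)

print(f"x(10 s) ~ {samples[-1]:.6f}")   # analytic answer is e^-10, about 0.000045
```

An analog computer, by contrast, wires a single integrator with its inverted output fed back to its input, and the solution simply evolves in continuous time – no clock, no per-sample arithmetic.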

Figure 1. Analog computers contain integrators, multipliers, function generators, and other circuit blocks. Continuous-time circuitry forms blocks capable of creating arbitrary functions. (Source: IEEE Spectrum)

Analog computing chips can solve differential equations significantly faster and at much lower power than digital alternatives. And while one drawback of analog computers is that you have to scale the system linearly with the size of the equation you’re trying to solve, their massive parallelism means that the power and performance benefits scale as well.

Limited benchmarks support these claims. Figure 2 compares the power consumption and time taken to solve the Van der Pol equation on Sendyne’s Apollo IC versus a 25 MHz Texas Instruments MSP430 MCU.

Figure 2. Comparison of power consumption and latency when executing the Van der Pol equation on the Apollo analog computing IC and a Texas Instruments MSP430 MCU.

The Sendyne Apollo IC is a 4 mm x 4 mm general-purpose analog computer fabricated in CMOS. The chip, originally developed by a team of researchers at Columbia University, contains 16 analog integrators and uses 1 V circuitry to generate outputs while consuming only microjoules of energy. It also contains specialized ADCs that minimize conversion costs.
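For reference, the Van der Pol system in the benchmark above is the second-order equation x'' − μ(1 − x²)x' + x = 0. The sketch below shows, in illustrative Python, how such an equation decomposes into the kinds of blocks the Apollo provides – two integrators plus multipliers and an adder. The "wiring" is simulated digitally here purely to show the structure; none of this reflects the chip's actual programming interface:

```python
# Illustrative decomposition of the Van der Pol equation
#   x'' = mu * (1 - x^2) * x' - x
# into analog-computer building blocks: two integrators in series
# recover x' and x from x'', while multipliers and an adder form
# the right-hand side. The blocks are stepped digitally here just
# to show the wiring; on a chip they run in continuous time.

mu, dt = 1.0, 0.001
x, x_dot = 2.0, 0.0          # initial conditions

for _ in range(20_000):
    # multiplier/adder blocks: compute x'' from the current state
    x_ddot = mu * (1.0 - x * x) * x_dot - x
    # integrator block 1: x'' -> x'
    x_dot += x_ddot * dt
    # integrator block 2: x' -> x
    x += x_dot * dt

print(f"x ~ {x:.3f}, x' ~ {x_dot:.3f}")  # state on the Van der Pol limit cycle
```

On hardware, the two integrator assignments would occupy just 2 of the Apollo’s 16 integrators, and the loop disappears entirely: the blocks evolve the solution in continuous time.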

“If you’re dealing with analog signals and can skip the step of converting from analog to digital and then back to analog, that is obviously the advantage,” said John Milios, CEO of Sendyne. “There are special ADCs that basically don’t do any conversion or consume any power unless there is a change in the input signal, so you don’t have any significant power loss.

“If you think about the complicated problems that require a lot of parallel operations and have to execute millions of times, then you can see a very significant benefit,” he added.

(Editor's Note: For more on the evolution of analog computing read "Not Your Father's Analog Computer" by Columbia University professor Yannis Tsividis.)

Neural networks mix it up with analog signal processing

Beyond the realm of differential equations, analog-based arithmetic logic units (ALUs) are also gaining traction in the world of MSSP.

“Each time we reduce the size of the multiplier in half – say from a 32-bit multiply to a 16-bit multiply – the performance increases roughly by an order of magnitude while the power dissipation is reduced for each multiply by the same ratio,” Frantz explained. “So going from a digital 32-bit multiplier to a 1-bit analog multiplier improves the performance by several orders of magnitude while reducing the power dissipation by the same several orders of magnitude.

“At the same time, the number of transistors necessary to do the multiply is reduced from tens of thousands to tens,” he said.
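Taking Frantz’s rule of thumb at face value, the compounding is easy to see. The arithmetic below simply iterates his estimate (roughly 10x per halving of multiplier width – his figure, not a measured result):

```python
# Back-of-the-envelope illustration of the scaling Frantz describes:
# each halving of multiplier width is credited with roughly 10x more
# performance and roughly 10x less power per multiply.

width = 32
relative_speed, relative_power = 1.0, 1.0
while width > 1:
    width //= 2                 # 32 -> 16 -> 8 -> 4 -> 2 -> 1
    relative_speed *= 10        # ~an order of magnitude faster...
    relative_power /= 10        # ...and an order of magnitude less power

print(f"1-bit vs 32-bit: ~{relative_speed:.0e}x speed, ~{relative_power:.0e}x power")
# -> ~1e+05x speed, ~1e-05x power: "several orders of magnitude" each way
```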

One application area that is starting to capitalize on analog signal processing is the nascent field of neural networking. Here, there are use cases like keyword recognition and certain types of image processing that can afford to trade the lower accuracy of analog for the power and performance improvements it provides.

“There is a growing recognition that machine learning workloads present a different type of workload than the applications that previous processors have been designed for,” said Jeremy Holleman, Ph.D., Chief Scientist at Syntiant. “It is computationally demanding, memory-centric, mostly deterministic, and can tolerate modest precision. All of those factors play to the strengths of analog computation.”

Syntiant is a Bosch-backed AI semiconductor startup out of Irvine, CA that focuses on processing deep learning algorithms in resource-constrained systems like wearables, earbuds, remote controls, and sensors.

“The whole idea is to stay in the analog domain from the sensor front end all the way until after the neural network processing,” said Marcellino Gemelli, Director of Business Development at Bosch Sensortec. “Another way to see it is to imagine neural network processing before the signal conditioning that occurs in an ADC like a sigma-delta.”

The goal of the architecture Gemelli describes is simple: keep digital cores asleep at all costs. Aspinity, another AI startup based in Pittsburgh, PA, has developed a reconfigurable chip called RAMP that replicates digital signal processing tasks in analog for precisely this purpose (Figure 3).

Figure 3. Aspinity’s Reconfigurable Analog Modular Processor (RAMP) resides between a sensor and digital system components to conserve considerable amounts of energy.

“All sensed data is naturally analog, yet we take all of that data and we automatically digitize it, and process it downstream in a digital core,” said Tom Doyle, CEO of Aspinity. “But if you implement that in analog transistors, you can do it efficiently and accurately as well.

“What we’re able to do is be precise enough early in the signal chain to monitor very specific changes in frequency,” he continued. “So right after the sensor, you have Aspinity’s RAMP core that’s looking at all of the raw analog sensor data. When we detect something like voice or a glass break, we wake up the ADC and DSP to run an FFT to get all the gobs of data that one would need to determine what they want to do next.”
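In software terms, the architecture Doyle describes behaves like an event-gated pipeline. The control-flow sketch below is hypothetical – the function names are placeholders, not Aspinity’s API – but it shows the division of labor:

```python
# Hypothetical control flow of an analog-first wake pipeline.
# Function names are illustrative placeholders, not a real API:
# an always-on analog core screens raw sensor data, and the
# power-hungry digital stages run only after it flags an event.

import random

def analog_event_detected() -> bool:
    """Stand-in for the always-on analog classifier (microwatt budget)."""
    return random.random() < 0.001     # rare event, e.g., a glass break

def wake_and_digitize():
    """Stand-in for powering up the ADC and capturing a sample window."""
    return [random.gauss(0, 1) for _ in range(512)]

def run_dsp(window):
    """Stand-in for the DSP stage (e.g., an FFT and classification)."""
    return max(window)

while True:
    if analog_event_detected():        # analog domain: always on, near-zero power
        window = wake_and_digitize()   # digital domain: woken only on demand
        result = run_dsp(window)
        print("event analyzed:", result)
        break                          # end the demo after one event
```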

According to Doyle, applications like glass-break detection and voice activation have experienced power savings of 10x or more using RAMP technology.

Are we back to the future? Not quite.

While the potential of analog as an alternative or complementary processing technology is clear, it suffers from an extended absence in the commercial market. For one, there is limited information on how analog circuitry responds to the effects of temperature and aging. Another consideration is simply the ubiquity of digital interfaces today.

“In order to take full advantage of the low power and small die size, the signals will need to be tapped at the analog level more upstream, which in turn requires the sensor vendors to introduce architecture changes,” Gemelli explained. “Currently it’s a hard sell because tapping in the analog domain requires a significant redesign of the sensors’ front ends.”

What could ultimately drive those redesigns is more widespread use of analog computing technology. Development tools that provide access to analog circuitry from digital environments would help in that regard, and progress is being made there now that analog hardware targets are becoming available (See Sidebar 1).

Of course, I use “becoming available” in the most literal sense. Sendyne’s chip is the product of academic research, and first-generation products from Aspinity, Syntiant, and others are just barely reaching the market.

However, our rapidly diminishing ability to advance speeds and densities in the digital domain is undeniable. And, at the same time, our demand for computing power is increasing exponentially.

What will it take to go back to the future with analog computing?

“It needs someone to invest in this high-risk opportunity and create the first solution,” Frantz said. “My estimate is that a new computer architecture costs in the range of $100 million up to $1 billion.

“The risk is great. The people who can do it are few. But the reward is great.”

Sidebar 1 | What’s Programmable Analog without Programming Tools?

Analog computing systems are exceptionally efficient at calculating the differential equations associated with, for example, dynamical systems. However, mapping those equations onto analog circuitry brings its own set of complications.

The biggest of these may be that a lack of programmable analog targets has prevented the emergence of analog computing tools that can automate or at least simplify the reconfiguration of analog ICs for new or different differential equations.

But that is beginning to change thanks to a collaboration between a Columbia University team led by Yannis Tsividis, the school’s Edwin Howard Armstrong Professor of Electrical Engineering, and Sara Achour, a PhD candidate at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

After Tsividis’ team completed its general-purpose analog computer, Achour developed Legno, a compiler that targets the device through an on-chip USB interface. In fact, Legno is believed to be the first compiler to successfully target programmable analog hardware. Period.

Legno accepts first-order differential equations written in Python, allowing users to define state variables and expressions and to specify which variables they would like to observe. The compiler uses these inputs to synthesize configurations for devices like the Sendyne Apollo chip, ensuring that all physical restrictions of the target are honored (Figure 1).

Sidebar Figure 1. The Legno compiler is thought to be the first analog compiler to successfully target physical analog computing devices.
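As an illustration only, a specification for the dampened oscillator of Sidebar Figure 2 might be structured along these lines; the class and method names below are invented for this sketch and do not reproduce Legno’s actual input syntax:

```python
# Hypothetical sketch of a dynamical system specification (DSS) for a
# dampened oscillator, x'' = -k*x - c*x'. The structure mirrors what
# the article describes -- state variables, expressions, observed
# variables -- but the class and method names here are invented for
# illustration and are NOT Legno's real API.

class DynamicalSystem:
    def __init__(self):
        self.states, self.observed = {}, []

    def state(self, name, deriv, init):
        self.states[name] = (deriv, init)   # d(name)/dt = deriv expression

    def observe(self, name):
        self.observed.append(name)

osc = DynamicalSystem()
osc.state("pos", deriv="vel", init=1.0)                     # x' = v
osc.state("vel", deriv="-0.6*vel - 4.0*pos", init=0.0)      # v' = -c*v - k*x
osc.observe("pos")          # ask the compiler to route x to an output pin
```

From a specification like this, the compiler’s job is to choose physical blocks, wiring, and scaling factors that realize the same dynamics on silicon.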

“If I’m dropping a ball from 50 meters, if I can’t drive 50 microamps on a wire, I can’t run that simulation,” Achour said. “[The Legno] compiler automatically rescales the system. For example, say we’re going to scale it down by a factor of five; now you’ve mapped the position of the ball between zero and two microamps, which is something the device can represent.

“Maybe I define a velocity and a position [in the high-level language], and I want to observe the position,” she explained. “On output, the compiler will generate a set of configurations for each of the blocks on the analog device, and it will also determine which wires to enable (Figure 2). It will make sure all of the operating ranges are respected on the device, account for manufacturing variations, respect any frequency limitations, and try to scale up all of your signals so that you have a good signal-to-noise ratio (SNR).

Sidebar Figure 2. The Legno compiler uses differential equations expressed in Python to appropriately configure analog circuitry. Shown here is a dynamical system specification (DSS) for a dampened oscillator.

“All of these observable variables are accessible through pins that you can tap into on the chip, so [Legno] will make sure to route those connections outside of the device so that you can measure them,” Achour continued. “Basically, it solves this automatic scaling problem. So you can just give it your differential equations and boundary variables, and it rescales everything.”

“It basically does the work of the electrical engineer for the high-level ‘rich person’,” she added.
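The rescaling step Achour describes is, at its core, a linear change of units. A minimal sketch of the idea follows (illustrative only – Legno’s real algorithm also balances block operating ranges, frequency limits, and noise):

```python
# Minimal sketch of the rescaling idea: map a problem variable whose
# values span [0, 50] (meters of ball height) onto a hardware signal
# that can only represent [0, 2] (microamps of wire current).
# Legno's actual algorithm is far richer; this shows only the core idea.

problem_min, problem_max = 0.0, 50.0    # ball height in meters
hw_min, hw_max = 0.0, 2.0               # representable current in microamps

scale = (hw_max - hw_min) / (problem_max - problem_min)   # 0.04 uA per meter

def to_hardware(x_meters: float) -> float:
    return hw_min + (x_meters - problem_min) * scale

def from_hardware(i_microamps: float) -> float:
    return problem_min + (i_microamps - hw_min) / scale

print(to_hardware(50.0))    # 2.0 uA -> full scale on the wire
print(from_hardware(1.0))   # 25.0 m -> recovered in problem units
```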

Legno is, of course, not the only tool available to embedded engineers that outputs differential equations – and that turns out to be very good news for prospective analog computing developers. Because so many other tools output differential equations, it is easy to imagine a pipeline in which those equations are mapped directly onto an analog computing target through a specialized compiler like Legno.

Fast-forwarding programmable analog

Currently, Legno is open source and has been evaluated by members of the engineering community, who have replicated Achour’s results. Obviously, most of those users did not have access to programmable analog chips, so Achour provided 10 gigabytes of oscilloscope data that could be used for simulation.

While this first iteration of the Legno compiler is in the process of being finalized, the MIT student would like to continue refining the tool.

“It’s something I’m interested in developing in the near future because one of the things I do with this compiler is actually profile all of the analog blocks on the chip,” she said. “So, I have these math models that describe how each multiplier instance behaves.

“Something that would be interesting is creating random simulators so that people who do not have physical access to a programmable analog device can still play around with running their experiments on it.”

Achour and her team will present their work on Legno at the 2020 ASPLOS interdisciplinary systems research conference in Lausanne, Switzerland. To learn more about the Legno analog computing compiler, visit Sara’s academic profile page at http://people.csail.mit.edu/sachour, read the research paper “Noise-Aware Dynamical System Compilation for Analog Devices with Legno,” or download the compiler from GitHub.

Brandon is responsible for guiding content strategy, editorial direction, and community engagement across the Embedded Computing Design ecosystem. A 10-year veteran of the electronics media industry, he enjoys covering topics ranging from development kits to cybersecurity and tech business models. Brandon received a BA in English Literature from Arizona State University, where he graduated cum laude. He can be reached at [email protected].
