Making sense of sensors

September 14, 2015

Sensor fusion is blossoming as an application space. In almost any given week, there are at least a handful of stories about new sensor fusion innovations, whether it’s the military considering multispectral sensor fusion to improve pilot safety when flying in poor weather or a smarter sensor hub that cuts power use.

The more sensors we embed, the more data we generate. This data is useful and drives applications such as fitness and health tracking as well as the recent advances in virtual reality headsets. But as we generate more data, we also consume more energy making "sense of sensors" in increasingly sophisticated applications.

Sensor fusion is the intelligent combining and interpretation of disparate sensor feeds, giving the application greater insight into a user's behavior or movement. However, the quality of the user experience is largely driven not by the sensors themselves but by the sensor fusion algorithms, which turn raw sensor data into useful, application-ready information.

Sensors are small and noisy, and their signals are easily distorted and susceptible to interference; sensor fusion and processing software adds calibration, fusion, and more to make the data accurate, reliable, and ready to be exposed to real-world applications. Sensor fusion itself isn't simple, and can be compared to an iceberg: the visible part is a fairly small, relatively simple set of algorithms. Those algorithms, however, rely on a hidden and complicated world of larger system challenges, which must be addressed to provide high-quality data to the fusion system. When good sensor fusion is correctly integrated into the sensor system, it can have a dramatic impact on the user experience.
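To make the iceberg's visible tip concrete, here is a minimal sketch of one common fusion technique, a complementary filter that blends a gyroscope's smooth-but-drifting integration with an accelerometer's noisy-but-absolute tilt reading. The sample rate, blend factor, and sample values are illustrative assumptions, not figures from any particular device.

```c
/* Minimal complementary-filter sketch: blend a drifting-but-smooth gyro
 * integration with a noisy-but-absolute accelerometer tilt reading.
 * DT, ALPHA, and the sample values below are illustrative assumptions. */
#include <math.h>
#include <stdio.h>

#define DT      0.01f               /* sample period in seconds (100 Hz)  */
#define ALPHA   0.98f               /* weight given to the gyro estimate  */
#define PI_F    3.14159265f

/* One fusion step: previous pitch (deg), gyro rate (deg/s), accel (g).   */
static float fuse_pitch(float pitch_prev, float gyro_rate_dps,
                        float accel_x_g, float accel_z_g)
{
    /* Short term: integrate the gyro; smooth, but drifts over time.      */
    float pitch_gyro  = pitch_prev + gyro_rate_dps * DT;

    /* Long term: derive pitch from gravity; noisy, but drift-free.       */
    float pitch_accel = atan2f(accel_x_g, accel_z_g) * 180.0f / PI_F;

    /* Blend the two so drift is corrected without amplifying noise.      */
    return ALPHA * pitch_gyro + (1.0f - ALPHA) * pitch_accel;
}

int main(void)
{
    float pitch = 0.0f;
    /* One fabricated sample: rotating at 5 deg/s, tilted slightly.       */
    pitch = fuse_pitch(pitch, 5.0f, 0.09f, 0.99f);
    printf("estimated pitch: %.2f deg\n", pitch);
    return 0;
}
```

A production filter would of course handle all three axes, gyro bias, and calibration, but the blend of short-term and long-term information is the essence of the technique.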

Context is king

There's another element to sensor fusion: it can help conserve power based on device context. For example, if the phone is sitting on a desk in your office and hasn't moved in several hours, it doesn't have to sample the GPS or otherwise calculate location. Similar techniques can automatically manage phone functions while you're in a car or on public transport. While these may seem like small steps, the associated power savings add up.
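As a rough sketch of how that gating might work, the snippet below suppresses GPS sampling once the accelerometer has reported stillness for a sustained period. The threshold, tick count, and function names are assumptions made for illustration.

```c
/* Sketch of context-driven power gating: skip GPS sampling while the
 * device appears stationary. Threshold and tick counts are illustrative. */
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

#define STILL_THRESHOLD_G  0.02f   /* accel deviation treated as "still"   */
#define STILL_TICKS_GATE   600     /* ticks of stillness before gating GPS */

/* Decide, once per tick, whether a GPS fix is worth the power cost.
 * accel_mag_g is the magnitude of the accelerometer vector in g.          */
static bool should_request_gps_fix(float accel_mag_g)
{
    static unsigned still_ticks = 0;

    /* A device at rest measures ~1 g of gravity and little else.          */
    if (fabsf(accel_mag_g - 1.0f) < STILL_THRESHOLD_G) {
        if (still_ticks < STILL_TICKS_GATE)
            still_ticks++;
    } else {
        still_ticks = 0;           /* movement detected: reset the timer    */
    }

    /* Only spend power on location once the device may have moved.        */
    return still_ticks < STILL_TICKS_GATE;
}

int main(void)
{
    /* Two fabricated ticks: one still, one with a small shake.            */
    printf("still reading  -> fix? %d\n", should_request_gps_fix(1.005f));
    printf("moving reading -> fix? %d\n", should_request_gps_fix(1.200f));
    return 0;
}
```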

To enable these user-experience benefits, the sensors need to be always on, gathering data whether or not the device is actively being used. Hence, we need a way to gather, filter, and analyze sensor data without consuming significant amounts of the phone's battery or processing resources. This has led to the rise of the "sensor hub": a dedicated processor, typically based on the ARM Cortex-M series, that handles sensor processing. By optimizing the processor, the sensor fusion, and the processing software, we can deliver the benefits of always-on processing with minimal impact on device battery life.
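The sketch below illustrates the batching idea behind such a hub: a small always-on core buffers samples locally and wakes the application processor only when a batch fills or significant motion appears. The buffer size, threshold, and function names are illustrative, not taken from any specific sensor hub implementation.

```c
/* Minimal sketch of sensor-hub batching: buffer samples on the low-power
 * core and wake the application processor only when a batch fills or a
 * significant motion event is seen. Names and sizes are illustrative.     */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define BATCH_LEN        16
#define MOTION_LIMIT_MG  250    /* per-axis jump treated as "significant"   */

typedef struct { int16_t ax_mg, ay_mg, az_mg; } accel_sample_t;

static accel_sample_t batch[BATCH_LEN];
static uint32_t       batch_count = 0;

/* Stand-in for waking the big core; a real hub would raise an interrupt.  */
static void wake_application_processor(uint32_t n)
{
    printf("wake AP with %u buffered samples\n", (unsigned)n);
    batch_count = 0;
}

/* Called for every new sample on the always-on core.                      */
static void hub_on_sample(accel_sample_t s)
{
    bool big_jump = batch_count > 0 &&
        abs(s.az_mg - batch[batch_count - 1].az_mg) > MOTION_LIMIT_MG;

    batch[batch_count++] = s;

    /* Hand data upward only when necessary, so the big core can sleep.    */
    if (batch_count == BATCH_LEN || big_jump)
        wake_application_processor(batch_count);
}

int main(void)
{
    /* Fabricated stream: mostly flat, with one sudden movement.           */
    for (int i = 0; i < 40; i++) {
        accel_sample_t s = { 0, 0, (int16_t)(1000 + (i == 25 ? 400 : 0)) };
        hub_on_sample(s);
    }
    return 0;
}
```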

Let's look at sensor fusion approaches that leverage that optimization to good effect. Virtual reality (VR) systems trick the brain into believing the virtual world is real, which means the system must translate real-world actions into the virtual world with the greatest precision and the lowest latency possible. A common architecture for a modern wearable device uses a Cortex-A processor to run a rich operating system and sophisticated user interface, while offloading the sensor fusion function, which requires a deterministic real-time response, to a Cortex-M processor. As sensor data processing requirements grow, the Cortex-M7 CPU is well suited to this role.

Take latency, widely considered a primary cause of "simulator sickness," as an example. Latency is the time between a head movement and the corresponding adjustment of the image. Many system factors contribute to latency, but gathering, processing, and delivering sensor data to the system is a notable one.

The Cortex-M7 enables high-resolution sensor sampling and sensor fusion, including dynamic sensor calibration. Typical sensor fusion output data rates (ODRs) in mainstream head trackers today are on the order of a few hundred hertz, but the extra processing capability of the M7 core allows that to scale to 1 kHz and beyond. Primarily, this increased ODR means there is minimal delay when gathering data packets at an appropriate time for graphics rendering, since the video frame rate differs from the sensor fusion processing rate. In addition, it enables denser sampling for more accurate predictive head tracking. By analyzing patterns and predicting future movement, latency can be reduced, but the density of data available over the course of a few milliseconds is vital to head-tracking performance: the further we have to look into the past to obtain enough data for the prediction, the less reliable the estimate becomes. A higher ODR (1 kHz or more) increases the prediction's accuracy and therefore the quality of the user experience.
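As a simple illustration of why dense, recent samples matter, the sketch below estimates angular velocity from the two most recent fusion outputs and extrapolates the head's yaw forward by the expected display latency. The 1 kHz sample period, 20 ms latency, and sample values are assumptions chosen for the example.

```c
/* Illustrative prediction step for head tracking: extrapolate yaw by the
 * expected display latency using an angular velocity estimated from the
 * most recent samples. Rates, latency, and values are assumptions.        */
#include <stdio.h>

/* Estimate yaw rate (deg/s) from the last two fusion outputs; at a 1 kHz
 * ODR the samples are only 1 ms apart, so the estimate stays fresh.       */
static float estimate_yaw_rate(float yaw_prev_deg, float yaw_now_deg,
                               float sample_period_s)
{
    return (yaw_now_deg - yaw_prev_deg) / sample_period_s;
}

/* Render for where the head will be, not where it was, to hide latency.   */
static float predict_yaw(float yaw_now_deg, float yaw_rate_dps,
                         float latency_s)
{
    return yaw_now_deg + yaw_rate_dps * latency_s;
}

int main(void)
{
    /* Head turning steadily: 35.00 deg, then 35.12 deg one ms later.      */
    float rate = estimate_yaw_rate(35.00f, 35.12f, 0.001f);   /* ~120 dps  */
    float yaw  = predict_yaw(35.12f, rate, 0.020f);           /* 20 ms on  */
    printf("predicted yaw at display time: %.2f deg\n", yaw);
    return 0;
}
```

The shorter the gap between the last sample and "now," the less the constant-velocity assumption has to stretch, which is exactly the benefit of a higher ODR.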

Wearables challenges

Nowhere is the growth of sensor adoption more robust than in wrist-worn wearables. These devices feature a growing set of sensors, often incorporating pressure, heart rate, gyroscopes, and more, to provide richer data to the user and to enable better user interfaces. Additional motion sensors, including gyroscopes and magnetometers, will help add richness and accuracy to personal context tracking. Environmental sensors, such as UV light, humidity, and temperature, will enable better user context and enhanced personal comfort. Biological sensors will measure hydration, blood oxygen saturation, glucose, skin temperature, sweat, and more to provide unique insights into the user's body and health.

Combining data from this expanding array of sensors requires a powerful yet power-efficient processor. This will be particularly important for low-power context classification: advanced context detection requires complex algorithms, and those algorithms can take advantage of the M7's advanced features to provide accurate yet low-power context detection for rich user applications.
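To give a flavor of context classification at its very simplest, the toy example below buckets the variance of accelerometer magnitude into coarse activity classes. Real classifiers use far richer features and trained models; the thresholds and class names here are purely illustrative.

```c
/* Toy context classifier: bucket the variance of accelerometer magnitude
 * into coarse activity classes. Thresholds and class names are made up;
 * a production classifier would use richer features and a trained model.  */
#include <stddef.h>
#include <stdio.h>

typedef enum { CONTEXT_STILL, CONTEXT_WALKING, CONTEXT_RUNNING } context_t;

static context_t classify_context(const float *accel_mag_g, size_t n)
{
    float mean = 0.0f, var = 0.0f;

    for (size_t i = 0; i < n; i++)
        mean += accel_mag_g[i];
    mean /= (float)n;

    for (size_t i = 0; i < n; i++) {
        float d = accel_mag_g[i] - mean;
        var += d * d;
    }
    var /= (float)n;

    /* Coarse, illustrative variance thresholds (in g^2).                  */
    if (var < 0.001f) return CONTEXT_STILL;
    if (var < 0.050f) return CONTEXT_WALKING;
    return CONTEXT_RUNNING;
}

int main(void)
{
    /* A short fabricated window with a mild, walking-like oscillation.    */
    const float window[] = { 1.00f, 1.08f, 0.95f, 1.10f, 0.92f, 1.05f };
    printf("context class: %d\n", classify_context(window, 6));
    return 0;
}
```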

To learn more about trends in sensor fusion and hear what industry experts from ARM, Atmel, Bosch Sensortec, and Hillcrest Labs have to say, check out the Google+ Hangout discussion.

David Maidment, a mobile segment marketing manager at ARM, brings over 18 years of experience in the mobile industry. As our mobile world rapidly evolves to support an always-on, always-connected lifestyle, David is excited to be looking at new and innovative use cases for connected devices such as wearables, smartwatches, lifestyle and fitness, health, and the quantified self.

David Maidment, ARM