Automotive Echo for voice-control infotainment

By Majeed Ahmad

Editor-in-Chief

AspenCore

November 30, 2018

A look at a recently announced IP core for audio and voice processors targeted at voice-controlled infotainment, noise cancellation, and personalized audio listening via sound bubbles.

What’s the design magic behind digital assistants like Amazon’s Echo? Advanced artificial intelligence (AI) algorithms that eliminate background noise and isolate the primary voice for a better listening experience.
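To make that concrete, one common far-field technique is delay-and-sum beamforming: signals from a microphone array are time-aligned toward the talker so that the primary voice adds coherently while off-axis noise partially cancels. The C sketch below is a minimal illustration under assumed conditions; the two-microphone setup and the delay_and_sum function are hypothetical, not the Echo’s actual pipeline.

```c
/* Minimal delay-and-sum beamformer sketch (illustrative, not the
 * Echo's pipeline): delay one mic's signal so the talker's voice
 * arrives aligned at both inputs, then average the two streams. */
#include <stddef.h>

/* `delay` (in samples) is derived from the talker's direction of
 * arrival and the microphone spacing. */
void delay_and_sum(const float *mic1, const float *mic2,
                   float *out, size_t n, size_t delay)
{
    for (size_t i = 0; i < n; i++) {
        float m2 = (i >= delay) ? mic2[i - delay] : 0.0f;
        out[i] = 0.5f * (mic1[i] + m2);  /* voice adds coherently,
                                            off-axis noise does not */
    }
}
```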

And while products like the Echo are becoming popular in smart home applications, if there is a place that needs a voice-controlled interface even more, it’s behind the wheel, where voice commands can minimize driver distraction. So, following the popularity of digital home assistants, carmakers are contemplating AI-based speech recognition for voice-controlled infotainment.

Take the example of Mercedes-Benz A-Class vehicles, which now offer an AI voice agent that carries out control operations such as lighting at the driver’s verbal request. The Mercedes-Benz User Experience (MBUX) system, which provides these voice-control capabilities, is based on the AI platform from SoundHound, a Santa Clara, California-based supplier of voice-enabled AI and conversational intelligence technologies.

Here, it’s worth mentioning that Mercedes already uses various features of Amazon Alexa and Google Home to incorporate voice control into its in-car infotainment systems. Hyundai is also working closely with SoundHound to introduce AI voice features in its vehicles.

Figure 1. Automotive voice assistants require local speech recognition facilitated by compute-intensive audio processors like the Tensilica HiFi 5 DSP.

This column looks at one such recently announced IP core: an audio and voice processor targeted at voice-controlled infotainment, noise cancellation, and personalized audio listening via sound bubbles. The ‘sound bubble’ feature enables distinct audio zones for each passenger without requiring headsets.

The Tensilica HiFi 5 DSP core from Cadence Design Systems is optimized for neural network-based speech recognition as well as for far-field audio processing algorithms. Cadence claims to have improved these two compute areas with significant enhancements in floating-point and fixed-point DSP capabilities and native support for new data types.
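To see why native support for narrow data types matters, note that halving the weight width lets twice as many MACs move through a fixed-width datapath per cycle, provided a wide accumulator preserves precision. The C sketch below of an 8-bit dot product is purely illustrative; dot_q7 is a hypothetical helper, not a HiFi 5 instruction or a Cadence library call.

```c
/* Sketch of small-weight neural-network arithmetic: 8-bit weights
 * and activations multiplied pairwise, accumulated in 32 bits so
 * long dot products don't overflow. Illustrative only. */
#include <stdint.h>
#include <stddef.h>

int32_t dot_q7(const int8_t *weights, const int8_t *activations, size_t n)
{
    int32_t acc = 0;  /* wide accumulator preserves precision */
    for (size_t i = 0; i < n; i++)
        acc += (int32_t)weights[i] * (int32_t)activations[i];
    return acc;
}
```

A hardware MAC engine performs many such multiply-accumulates in parallel each cycle; narrower weights mean more of them fit in the same datapath, which is the arithmetic behind the 32-MAC figure cited below.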

Compared with its predecessor, the HiFi 4 DSP, the new audio processor offers 2X the multiplier-accumulator (MAC) capability for pre- and post-processing audio operations such as echo cancellation and noise reduction. For neural-network processing, the HiFi 5 DSP provides 4X the MAC performance of the HiFi 4. Its 32-MACs-per-cycle engine supports smaller neural-network weights for efficiently running complex speech-recognition algorithms.
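Echo cancellation, one of the pre-processing tasks those extra MACs accelerate, is classically built around a least-mean-squares (LMS) adaptive filter that estimates the speaker-to-microphone echo path and subtracts the predicted echo from the mic signal. The C sketch below is a minimal single-sample version under assumed parameters; the lms_step function, the 64-tap length, and the step size mu are illustrative, not HiFi 5 library code.

```c
/* Minimal LMS adaptive echo canceller sketch (illustrative). Each
 * call predicts the echo from recent far-end samples, subtracts it
 * from the mic sample, and nudges the filter toward the residual. */
#include <stddef.h>

#define TAPS 64  /* assumed echo-path length, in samples */

float lms_step(const float ref[TAPS],  /* recent far-end (speaker) samples */
               float mic,              /* current microphone sample */
               float w[TAPS],          /* adaptive echo-path estimate */
               float mu)               /* adaptation step size */
{
    float echo = 0.0f;
    for (size_t i = 0; i < TAPS; i++)  /* one MAC per tap per sample */
        echo += w[i] * ref[i];

    float err = mic - echo;            /* echo-cancelled output */
    for (size_t i = 0; i < TAPS; i++)  /* steepest-descent weight update */
        w[i] += mu * err * ref[i];

    return err;
}
```

Since each sample costs one MAC per tap, doubling MAC throughput directly scales how many taps, audio channels, or concurrent filters a core can sustain in real time.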

Figure 2. A view of the Tensilica HiFi 5 DSP audio processor architecture.

The HiFi 5 DSP audio processor is optimized for the HiFi Nature DSP library, which Cadence provides to all partners and licensees to simplify the programming and porting of DSP algorithms. Austin, Texas-based Ambiq Micro is the first licensee of Cadence’s HiFi 5 DSP processor core.

The fact that audio processors like the Tensilica HiFi 5 allow neural network-based speech-recognition algorithms to run locally is crucial in the automotive environment, given the latency, privacy, and network-availability issues of cloud communications. More importantly, such AI speech and audio processors facilitate a critical shift in automotive infotainment toward innovative voice-control features.

I am a journalist with an engineering background and two decades of experience writing and editing technical content. Formerly editor-in-chief of EE Times Asia, I have taken part in creating a range of industry-wide print and digital products for the semiconductor industry’s content value chain.
