The dev kit tour with Cirrus Logic

December 29, 2017 Rich Nass

I took the latest tour of dev kits with a sidekick—my trusty friend Alexa. The kit du jour was the Cirrus Logic CRD1569-1 Voice Capture Development Kit for Amazon AVS, which can easily be purchased through Digi-Key (part number 598-2471-KIT-ND).

The kit arrived in a pretty snazzy case. Inside, I found a Raspberry Pi 3 board, a two-microphone voice-capture board that plugs directly on top of the Pi (it can also connect through a provided ribbon cable), a passive speaker, an 8-Gbyte SD card that’s preloaded with all the software you need, and a power supply. I needed to provide a display, a keyboard and mouse, and WiFi/Internet access.

As dev-kit difficulty goes, I’d put this one somewhere in the middle. A stumbling block could have been the interaction with the AVS developer interface, but thankfully (or not), I’d already stumbled upon (and eventually cleared) that hurdle with a previous dev kit, so I was familiar with the process.

According to Cirrus Logic, its Voice Capture Development Kit for Amazon AVS is optimized for developing Alexa-enabled smart speakers, portable speakers, and compact audio devices. The voice-capture board features a dual-core DSP that runs algorithms for voice control, noise suppression, and echo cancellation. The result is high-accuracy wake-word triggering, which I can confirm. I had multiple people try it out from different places in a pretty large setting, and we had a hit just about every time.

The brains within the voice-capture board are provided by the dual-core DSP, dubbed the CS47L24. The board also houses three digital-to-analog converters and a line out, so you can use powered speakers. I used only the passive speaker that was included with the kit. The kit’s specs say that the board consumes 24 mW in voice-capture mode and 30 mW in playback mode. I didn’t connect a meter, so I’ll take their word for it.

The documentation for the kit is well written, which is not always the case with development kits in general. While I sometimes need a call to tech support to get a kit running, that wasn’t necessary here. Again, prior AVS experience was a big help.

In my case, the result was a very simple one, basically just a confirmation that everything was working properly. Obviously, the intent is to voice-enable any type of appliance, which would have been the next step for me. Those of you who know how exciting it is to get an LED to blink under the right conditions will understand my elation at having Alexa beep at me. It’s the little things in life, right?
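
If you want to see what that next step might look like, here’s a minimal sketch of the idea: watch the Alexa sample app’s output for a wake-word event and blink an LED on the Pi in response. Treat it as an illustration only; the log path, the “Listening” marker, and the GPIO pin are placeholders I chose for the example, not values from the Cirrus Logic documentation.

```python
# Minimal sketch: blink an LED on the Raspberry Pi when the Alexa sample app
# signals that it heard the wake word. The log path and the "Listening" marker
# are assumptions -- check what your build of the sample app actually prints.

import subprocess
import time

import RPi.GPIO as GPIO  # standard Raspberry Pi GPIO library

LED_PIN = 18                                # BCM numbering; any free GPIO works
WAKE_MARKER = "Listening"                   # hypothetical line logged on wake-word detect
SAMPLE_APP_LOG = "/tmp/avs_sample_app.log"  # hypothetical path; pipe the app's stdout here

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT, initial=GPIO.LOW)

def blink(times=3, period=0.2):
    """Pulse the LED a few times as a visible 'Alexa heard you' cue."""
    for _ in range(times):
        GPIO.output(LED_PIN, GPIO.HIGH)
        time.sleep(period)
        GPIO.output(LED_PIN, GPIO.LOW)
        time.sleep(period)

try:
    # Follow the log as it grows (like `tail -f`) and react to the wake marker.
    tail = subprocess.Popen(["tail", "-F", SAMPLE_APP_LOG],
                            stdout=subprocess.PIPE, text=True)
    for line in tail.stdout:
        if WAKE_MARKER in line:
            blink()
finally:
    GPIO.cleanup()
```

Tailing the log keeps the sketch decoupled from the sample app itself; a real appliance would hook the AVS client’s state callbacks directly rather than scraping console output.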

Bottom line? If you’re looking at voice-enabling your appliance, I recommend this kit as a good starting point. I also came across a video that does a good job of explaining the kit’s features.

Good luck in your design.

About the Author

Rich Nass

Richard Nass is the Executive Vice-President of OpenSystems Media. His key responsibilities include setting the direction for all aspects of OpenSystems Media’s Embedded and IoT product portfolios, including web sites, e-newsletters, print and digital magazines, and various other digital and print activities. He was instrumental in developing the company’s online educational portal, Embedded University. Previously, Nass was the Brand Director for UBM’s award-winning Design News property. Prior to that, he led the content team for UBM Canon’s Medical Devices Group, as well as all custom properties and events in the U.S., Europe, and Asia. Nass has been in the engineering OEM industry for more than 25 years. In prior stints, he led the Content Team at EE Times, handling the Embedded and Custom groups and the TechOnline DesignLine network of design engineering web sites. Nass holds a BSEE degree from the New Jersey Institute of Technology.
