This blog post was originally published at Syntiant’s website. It is reprinted here with the permission of Syntiant.
Syntiant’s ultra-low-power AI processing enables always-on voice control for almost any battery-powered device, from earbuds to laptops
Around the time Syntiant was founded in 2017, our co-founder and CEO Kurt Busch had a lightbulb moment. As he watched his youngest child talk with his iPad, he saw an enormous opportunity to drive artificial intelligence (AI) and machine learning (ML) innovation into voice-activated systems and transform how they work.
Flash forward to January 2021, and Syntiant has delivered two AI chips to market, shipped more than 10 million units, had its technology certified for Amazon Alexa and been honored at CES. The company has raised more than $65 million in venture capital from some of the largest investors and companies in the world, including Amazon, Applied Materials, Bosch, Intel Capital, Microsoft and Motorola. All in less than four years.
What happened in between is a testament to teamwork, vision and our team’s laser focus on delivering stand-out products quickly, efficiently and cost-effectively.
To understand how we got here, let’s jump back to 2017. At the time, everybody was keen to develop silicon for enterprise and server-side machine learning; few were doing edge and endpoint AI. But expanding on Kurt’s epiphany, we saw an enormous open market that no one had addressed yet: improved voice at the endpoint was going to drive the human-machine interface (HMI) to new levels of utility. We wanted to pair our expertise in extreme low-power design with our vision of making ML programming easy, and develop technology for edge and other battery-operated endpoint devices.
Achieving escape velocity
But the key question for any startup is how to get to your first prototype and your first volume product quickly and efficiently. That speed and productivity not only gets your technology out into the world ahead of competitors, it helps you build a story you can keep telling investors as the company evolves. This was all the more important to us because, from the very first, our goal was to make our technology manufacturable in the millions of units.
We made two key decisions early on to ensure our nimbleness. First, the founders connected with EvoNexus, the startup incubator based in both Irvine and San Diego, California. There we could use office space, leverage the network and camaraderie of innovators and investors, put together a patent portfolio and do the early blocking and tackling of a startup, all with relatively low overhead and plenty of shared resources. For example, if you have to test something, there’s no need to go out and buy a $100,000 oscilloscope or purchase new MATLAB licenses; you can leverage the technology and collaboration already in place in EvoNexus’s ecosystem.
Second, we got involved with another great ecosystem: Arm. We used Arm DesignStart to identify our processor core. We chose the Arm Cortex-M0 and used it to control everything, while we threw all our effort and expertise into our value-add: our neural network.
In the summer of 2017, our four founders were poring over ML papers, evaluating architectures and settling on audio wake words and keywords as the target application. By the time I joined in November of that year, we knew the use cases and the rough size of the networks.
Then, from November to our March 2018 tapeout, we focused relentlessly on the design. We used Arm standard cell and memory IP to first estimate what the device was going to look like and to set our speed, area and power objectives. Pretty quickly we knew we had something really special.
We used cloud-supported EDA tools, so we didn’t have to stand up a data center, as we would have if we’d started the company 20 years ago. Just a few laptops and an internet connection. We didn’t know it at the time, but this experience prepared us for how we would need to pivot as a larger company in early 2020 when the pandemic shut down the office and forced us to work from home. We didn’t miss a beat.
Accurate wake word, command word and event detection with near-zero power consumption
With our first product, the NDP100, based on Syntiant Core 1 (our neural network core), we set realistic precision goals and used near-memory compute in the design. We further optimized performance by meticulously analyzing where and how data moves, for both low latency and silicon real estate. We crafted every bit to make sure the device attacked the problem properly with machine learning. The result: accurate wake word, command word and event detection in a tiny package with near-zero power consumption, enabling always-on voice control for almost any battery-powered device, from earbuds to laptops.
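To see why near-memory compute matters at these power budgets, it helps to look at some widely cited, approximate per-operation energy figures (from Mark Horowitz’s ISSCC 2014 keynote, at roughly 45 nm). These are industry rules of thumb rather than Syntiant measurements, but the lesson holds: moving data costs far more energy than computing on it.

```python
# Back-of-the-envelope energy comparison. Figures are approximate,
# widely cited values (Horowitz, ISSCC 2014, ~45 nm), not Syntiant data.
energy_pj = {
    "8-bit integer multiply":     0.2,    # the MAC itself is cheap
    "32-bit read, small SRAM":    5.0,    # on-chip, near the compute
    "32-bit read, off-chip DRAM": 640.0,  # thousands of times a multiply
}

mac = energy_pj["8-bit integer multiply"]
for op, pj in energy_pj.items():
    print(f"{op:<28} ~{pj:>6.1f} pJ  ({pj / mac:,.0f}x a multiply)")
```

Keeping weights in memory right next to the multiply-accumulate units, instead of shuttling them across a bus or off chip, is what makes near-zero-power, always-on inference plausible.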
In addition to our application goals, we wanted to make deploying ML easy, enabling ML algorithm scientists to move directly from TensorFlow onto our device; in other words, to make it very easy for people to target and train on our device. We achieved this because our hardware and machine learning software teams were co-located from inception. We didn’t silo those functions, because each depends on the other. By co-locating our teams, we made it easier to see the gaps between domains, where you can actually make the technological breakthroughs that differentiate the end product.
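To give a flavor of what “moving directly from TensorFlow” can look like, here’s a minimal sketch of the kind of small keyword-spotting network an ML scientist might train before targeting a low-power device. The input shape (40 audio frames of 40 filterbank features), layer sizes and class count are illustrative assumptions, not a description of Syntiant’s actual tooling or of what the NDP100 runs.

```python
# A minimal keyword-spotting sketch in TensorFlow. All shapes and sizes
# are illustrative assumptions, not Syntiant's production pipeline.
import tensorflow as tf

def build_wake_word_model(num_keywords: int = 3) -> tf.keras.Model:
    """Tiny dense network over a window of precomputed audio features."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(40, 40)),  # 40 frames x 40 features
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(256, activation="relu"),
        # +1 output class for "no keyword heard"
        tf.keras.layers.Dense(num_keywords + 1, activation="softmax"),
    ])

model = build_wake_word_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

In a flow like this, the trained network still has to be quantized and mapped onto the silicon by vendor tooling; the payoff of co-designing the hardware and software is that this last step stops being a separate porting project.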
In the three years since, we’ve learned a lot, gathered considerable data, talked with hundreds of customers and gotten feedback. The NDP100 continues to be the core low-power technology for wake word use cases. With the NDP120, which is based on Syntiant Core 2, we get roughly 25X the computation of Syntiant Core 1. We augmented our initial effort with a more capable machine learning solution for some use cases, diving into more powerful building blocks such as convolutional networks and recurrence. With the NDP120, you can implement much more complicated structures that are common in ML today, but now on a low-power endpoint device.
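To make “convolutional networks and recurrence” concrete, here’s a hedged sketch of that kind of structure: a small convolutional front end extracting local patterns from audio features, feeding a recurrent (GRU) layer that summarizes them over time. Again, the shapes and layer choices are illustrative assumptions, not the NDP120’s supported topology list.

```python
# A conv-plus-recurrence sketch in TensorFlow, of the general shape
# common in modern speech models. Illustrative assumptions throughout.
import tensorflow as tf

def build_conv_recurrent_model(num_classes: int = 10) -> tf.keras.Model:
    inputs = tf.keras.Input(shape=(100, 40, 1))  # 100 frames x 40 features
    # Convolution finds local time-frequency patterns
    x = tf.keras.layers.Conv2D(16, (3, 3), padding="same",
                               activation="relu")(inputs)
    x = tf.keras.layers.MaxPooling2D((1, 2))(x)   # pool along features only
    # Fold channels back into a (time, features) sequence for recurrence
    x = tf.keras.layers.Reshape((100, 20 * 16))(x)
    x = tf.keras.layers.GRU(64)(x)                # summarize over time
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_conv_recurrent_model()
model.summary()
```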
Even before the silicon was back, we were running thousands of simulations in our cloud infrastructure with full hardware and software integration. Our development boards are built around the Raspberry Pi platform, so we can enable remote access for our engineers anywhere in the world. An hour after the NDP120 prototypes hit our lab, they were recognizing our Amazon Alexa wake-word package. I don’t think I’ve seen a bring-up that fast in my 25 years in semiconductors. We say we train hard and fight easy, right? Well, we trained really hard for the NDP120.
Expanding our engagement with Arm Flexible Access
Our effort and focus have paid off. In the summer of 2019, the NDP100 was certified by Amazon for use in Alexa wake-word designs, and in January 2021 the NDP120 was honored as a CES Innovation Award winner.
Going forward, our team will capitalize on this momentum. We’re planning to expand our engagement with Arm by tapping into the Arm Flexible Access program. When we want to build a more powerful apps processor, we can grab IP from Flexible Access and start doing the same thing we did in the beginning: building the components and making sure we can assemble the system we want. There are no lengthy contract negotiations or access challenges; we can just go in and work with the IP to find the right fit for our application.
These tools are exciting for us as innovators as we build a stronger bridge between humans and machines. We’ve moved from a 12-person team tackling wake words with the NDP100 to a global team of more than 70 delivering the NDP120 to handle much more powerful, far-field voice and multi-sensor use cases. We’re expanding the machine’s ability to hear more and to interact much more robustly with users. You’ll be conversing with your device, and you’ll do it at the endpoint, so it doesn’t have to race up to the cloud for computation. That lowers latency, speeds results and maintains your data privacy.
All that in just over three years, thanks to the simple inspiration found in a young child talking to a machine. Going forward, the possibilities really are endless.
Dave Garrett, Ph.D.
Vice President of Hardware, Syntiant