Gabriel Costache, Senior R&D Director at Xperi, presents the “New AI Platform Architecture for the Smart Toys of the Future” tutorial at the May 2022 Embedded Vision Summit.
From a parent’s perspective, toys should be safe, private, entertaining and educational, with the ability to adapt and grow with the child. For natural interaction, a toy must see, hear, feel and speak in a human-like manner. Thanks to AI, we can now achieve near-human accuracy in computer vision, speech recognition, speech synthesis and other human-interaction tasks. However, these technologies demand substantial computational performance, making them difficult to implement at the edge with today’s typical hardware.
Cloud computing is unattractive for toys due to privacy risks and the low latency required for human-like interaction. Xperi has developed a dedicated platform capable of executing multiple AI-based tasks in parallel at the edge, with very low power and size requirements, enabling toys to incorporate sophisticated AI-based perception and communication. In this talk, Costache introduces this platform, which includes all of the hardware components required for next-generation toys.
See here for a PDF of the slides.