Edge AI and Vision Insights: April 19, 2023 Edition

LETTER FROM THE EDITOR
Dear Colleague,

Women in Vision Reception

Back by popular demand—and thanks to our sponsor, Perceive—we’re excited to host the third annual Women in Vision networking reception at the 2023 Embedded Vision Summit. We invite women working in computer vision and edge AI to join us for this special in-person gathering to meet, network and share knowledge and ideas. This year’s Women in Vision Reception will be held on Tuesday, May 23 from 6:30-7:30 pm. Appetizers and beverages will be served. We look forward to seeing you there!

The Summit, returning this year to the Santa Clara (California) Convention Center, is the key event for system and application developers who are incorporating computer vision and perceptual AI into products. It attracts a unique audience of over 1,400 product creators, entrepreneurs and business decision-makers who are creating and using computer vision and edge AI technologies, and it’s an unrivaled venue for learning, sharing insights and getting the word out about interesting new technologies, techniques, applications, products and practical breakthroughs in computer vision and edge AI.

Once again we’ll be offering a packed program with more than 90 sessions, more than 70 exhibitors, and hundreds of demos, all covering the technical and business aspects of practical computer vision, deep learning, edge AI and related technologies. Also returning this year is the Edge AI Deep Dive Day, a series of in-depth sessions focused on specific topics in perceptual AI at the edge. Registration is now open, and if you register by next Friday, April 28, you can save 15% by using the code SUMMIT23-NL. Register now and tell a friend! You won’t want to miss what is shaping up to be our best Summit yet.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

CONSUMER OPPORTUNITIES FOR COMPUTER VISION

The Market for Cameras and Video Doorbells in the Smart Home (Strategy Analytics)
Hardware-focused camera brands are increasingly reckoning with the unsustainable business model of selling at the lowest price. The global camera installed base—including indoor, outdoor, and video doorbell cameras—has surpassed 300 million units, and the camera market is entering a cycle of iterative, not innovative, updates. Enhanced object detection, trainable facial recognition, improved motion tracking and the edge and cloud AI powering these services will fuel the next stage of smart home camera market growth. In this talk, Jack Narcotta, former Principal Analyst at Strategy Analytics, examines the camera market dynamics that have prevailed over the last few years, and then identifies the key opportunities and obstacles that lie ahead for market participants. He also assesses how camera component manufacturers, video analytics providers and applications software and service developers can benefit as the smart home camera market evolves.

A New AI Platform Architecture for the Smart Toys of the Future (Xperi)
From a parent’s perspective, toys should be safe, private, entertaining and educational, with the ability to adapt and grow with the child. For natural interaction, a toy must see, hear, feel and speak in a human-like manner. Thanks to AI, we can now deliver near-human accuracy on computer vision, speech recognition, speech synthesis and other human interaction tasks. However, these technologies require very high computation performance, making them difficult to implement at the edge with today’s typical hardware. Cloud computing is not attractive for toys, due to privacy risks and the importance of low latency for human-like interaction. Xperi has developed a dedicated platform capable of executing multiple AI-based tasks in parallel at the edge with very low power and size requirements, enabling toys to incorporate sophisticated AI-based perception and communication. In this talk, Gabriel Costache, Senior R&D Director at Xperi, introduces this platform, which includes all of the hardware components required for next-generation toys.
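To illustrate the general pattern (though not, to be clear, Xperi’s proprietary platform, whose internals are not public), running several perception tasks concurrently on-device can be sketched as independent worker pipelines feeding a shared results queue. Everything below, including the stub models, is hypothetical:

```python
import queue
import threading

def run_pipeline(name, infer, inputs, results):
    """Generic worker: pull inputs, run one model, post labeled results."""
    while True:
        item = inputs.get()
        if item is None:  # sentinel: shut this pipeline down
            break
        results.put((name, infer(item)))

# Hypothetical stand-ins for on-device vision and speech models
detect_face = lambda frame: {"face_present": True}
transcribe = lambda audio: {"text": "hello"}

frames, audio, results = queue.Queue(), queue.Queue(), queue.Queue()
workers = [
    threading.Thread(target=run_pipeline, args=("vision", detect_face, frames, results)),
    threading.Thread(target=run_pipeline, args=("speech", transcribe, audio, results)),
]
for w in workers:
    w.start()

# Feed one synthetic input to each pipeline, then shut both down
frames.put(b"camera-frame-bytes")
audio.put(b"audio-chunk-bytes")
frames.put(None)
audio.put(None)
for w in workers:
    w.join()

while not results.empty():
    print(results.get())  # e.g. ('vision', {'face_present': True})
```

On real edge silicon the pipelines would map to dedicated accelerator blocks rather than Python threads, but the keep-everything-on-device structure, which avoids the privacy and latency costs of the cloud, is the same.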

ENHANCING DEEP LEARNING MODEL TRAINING DATASETS

Human-centric Computer Vision with Synthetic Data (Unity Technologies)
Companies are continuing to accelerate the adoption of computer vision to detect, identify and understand humans from camera imagery. These human-centric use cases appear in a growing range of applications including augmented reality, self-checkout in retail, automated surveillance and security, player tracking in sports, and consumer electronics. Creating robust solutions for human-centric computer vision applications requires large, balanced, carefully curated labeled data sets. But acquiring real-world image and video data of humans is challenging due to concerns around bias, privacy and safety. And labeling and curating real-world data is expensive and error-prone. Synthetic data provides an elegant and cost-effective alternative to these challenges. In this presentation, Alex Thaman, former Chief Software Architect at Unity Technologies, shows how tools and services can be used to quickly generate perfectly labeled, privacy-compliant, unbiased datasets for human-centric computer vision.
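To make the “perfectly labeled” point concrete, here is a toy sketch (not Unity’s actual tooling, which uses a full 3D renderer) of why synthetic annotations are exact by construction: the generator places each object itself, so it knows every bounding box without any human labeling. All file names and label fields below are illustrative:

```python
import json
import random

from PIL import Image, ImageDraw  # pip install Pillow

def generate_sample(idx, width=640, height=480):
    """Render one synthetic frame and return its exact labels."""
    background = tuple(random.randint(0, 255) for _ in range(3))
    img = Image.new("RGB", (width, height), color=background)
    draw = ImageDraw.Draw(img)

    labels = []
    for _ in range(random.randint(1, 4)):  # 1-4 synthetic "people" per frame
        w, h = random.randint(40, 120), random.randint(100, 240)
        x, y = random.randint(0, width - w), random.randint(0, height - h)
        draw.rectangle([x, y, x + w, y + h], fill=(200, 150, 120))
        # The generator placed this object, so the label is exact by construction
        labels.append({"class": "person", "bbox": [x, y, w, h]})

    filename = f"synthetic_{idx:05d}.png"
    img.save(filename)
    return {"image": filename, "objects": labels}

# Generate a small dataset plus a single annotations file
annotations = [generate_sample(i) for i in range(100)]
with open("annotations.json", "w") as f:
    json.dump(annotations, f, indent=2)
```

Randomizing scene parameters (here, background color and object placement) is also how synthetic pipelines control dataset balance, sidestepping the bias and privacy issues of collecting real human imagery.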

Creating Better Datasets for Training More Robust Models (Voxel51)
Nothing hinders the success of computer vision and machine learning systems more than poor-quality data. Gone are the days of focusing only on the model while assuming the data is given and is static; now, machine learning scientists, engineers and data architects spend significant time working with data. And without effective tools for improving datasets, training robust, deployable models can be time-consuming and inefficient. In this talk, Jason Corso, CEO of Voxel51, presents FiftyOne, an open-source and enterprise software platform that supercharges machine learning workflows by enabling valuable interactive data visualization along with fast model performance analysis. FiftyOne’s flexible data model and integration-minded architecture provide building blocks to enrich most environments and support use cases like improving data quality, managing annotation workflows, building datasets and analyzing model performance. Corso explores these workflows and illustrates the value of FiftyOne’s innovative data-centric capabilities for improving the model development life cycle, enabling more performant vision systems.
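For readers who want to try this kind of workflow, a minimal sketch based on FiftyOne’s public quickstart example looks roughly like the following; consult the project’s documentation for current APIs:

```python
import fiftyone as fo
import fiftyone.zoo as foz

# Small sample dataset that ships with ground truth and model predictions
dataset = foz.load_zoo_dataset("quickstart")

# Compare predictions against ground truth; eval_key stores per-sample
# true/false positive counts on each sample (eval_tp, eval_fp, eval_fn)
results = dataset.evaluate_detections(
    "predictions",
    gt_field="ground_truth",
    eval_key="eval",
)
results.print_report()

# Launch the interactive app, showing the worst false-positive offenders first
session = fo.launch_app(dataset.sort_by("eval_fp", reverse=True))
session.wait()
```

Sorting the view by per-sample false positives surfaces the most problematic images first in the app, a small instance of the data-centric loop Corso describes: evaluate, inspect the failures, fix the data, retrain.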

UPCOMING INDUSTRY EVENTS

Embedded Vision Summit: May 22-24, 2023, Santa Clara, California

More Events

FEATURED NEWS

AMD Unveils Powerful Radeon PRO Graphics Cards Offering Unique Features and High Performance to Tackle Heavy to Extreme Professional Workloads

Intel Foundry and Arm Announce Multigeneration Collaboration on Leading-edge SoC Design

Edge Impulse Adds Bring Your Own Model (BYOM) Capabilities and a Python SDK to Its Toolset

Vision Components’ Versatile MIPI Camera Modules Enable Rapid Embedded Vision Integration

e-con Systems’ 360° Bird’s Eye View Camera Solution Harnesses the NVIDIA Jetson AGX Orin SoC

More News


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411