Embedded Vision Summit 2023

“Selecting Image Sensors for Embedded Vision Applications: Three Case Studies,” a Presentation from Avnet

Monica Houston, Technical Solutions Manager at Avnet, presents the “Selecting Image Sensors for Embedded Vision Applications: Three Case Studies” tutorial at the May 2023 Embedded Vision Summit. Selecting the appropriate type of image sensor is essential for reliable and accurate performance of vision applications. In this talk, Houston explores some of the critical factors to […]

“Can AI Solve the Low Light and HDR Challenge?,” a Presentation from Visionary.ai

Oren Debbi, CEO and Co-founder of Visionary.ai, presents the “Can AI Solve the Low Light and HDR Challenge?” tutorial at the May 2023 Embedded Vision Summit. The phrase “garbage in, garbage out” is applicable to machine and human vision. If we can improve the quality of image data at the source by removing noise, this […]
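The teaser's point is that cleaning up image data at the source benefits everything downstream. As a toy illustration only (Visionary.ai's approach is AI-based and not described here), even a plain 3×3 mean filter measurably reduces the error on a synthetic noisy patch:

```python
import numpy as np

def denoise_mean3(img):
    """3x3 mean filter via edge-padded neighborhood averaging --
    a minimal stand-in for a learned denoiser."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

# Synthetic example: a flat grey patch corrupted by Gaussian sensor noise.
rng = np.random.default_rng(0)
clean = np.full((32, 32), 0.5)
noisy = clean + rng.normal(0.0, 0.1, clean.shape)
```

Averaging nine independent noisy samples cuts the noise variance roughly ninefold, which is why even this crude filter helps; learned denoisers do far better by adapting to image content.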

“Modernizing the Development of AI-based IoT Devices with Wedge,” a Presentation from Midokura, a Sony Group Company

Dan Mihai Dumitriu, Chief Technology Officer of Midokura, a Sony Group Company, presents the “Modernizing the Development of AI-based IoT Devices with Wedge” tutorial at the May 2023 Embedded Vision Summit. IoT device development has traditionally relied on a monolithic approach, with all firmware developed by a single vendor using a rigid waterfall model, typically […]

“Using a Neural Processor for Always-sensing Cameras,” a Presentation from Expedera

Sharad Chole, Chief Scientist and Co-founder of Expedera, presents the “Using a Neural Processor for Always-sensing Cameras” tutorial at the May 2023 Embedded Vision Summit. Always-sensing cameras are becoming a common AI-enabled feature of consumer devices, much like the always-listening Siri or Google assistants. They can enable a more natural and seamless user experience, such […]

“A New, Open-standards-based, Open-source Programming Model for All Accelerators,” a Presentation from Codeplay Software

Charles Macfarlane, Chief Business Officer at Codeplay Software, presents the “New, Open-standards-based, Open-source Programming Model for All Accelerators” tutorial at the May 2023 Embedded Vision Summit. As demand for AI grows, developers are attempting to squeeze more and more performance from accelerators. Ideally, developers would choose the accelerators best suited to their applications. Unfortunately, today […]

“Efficiently Map AI and Vision Applications onto Multi-core AI Processors Using CEVA’s Parallel Processing Framework,” a Presentation from CEVA

Rami Drucker, Machine Learning Software Architect at CEVA, presents the “Efficiently Map AI and Vision Applications onto Multi-core AI Processors Using CEVA’s Parallel Processing Framework” tutorial at the May 2023 Embedded Vision Summit. Next-generation AI and computer vision applications for autonomous vehicles, cameras, drones and robots require higher-than-ever computing power. Often, the most efficient way […]

“Streamlining Embedded Vision Development with Smart Vision Components,” a Presentation from Basler

Selena Schwarm, Team Lead for Global Partner Management at Basler, presents the “Streamlining Embedded Vision Development with Smart Vision Components” tutorial at the May 2023 Embedded Vision Summit. The evolution of embedded vision and imaging technologies is enabling the development of powerful applications that would not have been practical previously. The possibilities seem to be […]

“A Very Low-power Human-machine Interface Using ToF Sensors and Embedded AI,” a Presentation from 7 Sensing Software

Di Ai, Machine Learning Engineer at 7 Sensing Software, presents the “Very Low-power Human-machine Interface Using ToF Sensors and Embedded AI” tutorial at the May 2023 Embedded Vision Summit. Human-machine interaction is essential for smart devices. But growing needs for low power consumption and privacy pose challenges to developers of human-machine interfaces (HMIs). Time-of-flight (ToF) […]

“AI-ISP: Adding Real-time AI Functionality to Image Signal Processing with Reduced Memory Footprint and Processing Latency,” a Presentation from VeriSilicon

Mankit Lo, Chief Architect for NPU IP Development at VeriSilicon, presents the “AI-ISP: Adding Real-time AI Functionality to Image Signal Processing with Reduced Memory Footprint and Processing Latency” tutorial at the May 2023 Embedded Vision Summit. The AI-ISP IP product from VeriSilicon is a revolutionary solution that adds AI functionality to image signal processing (ISP) […]

“Developing an Efficient Automotive Augmented Reality Solution Using Teacher-student Learning and Sprints,” a Presentation from STRADVISION

Jack Sim, CTO of STRADVISION, presents the “Developing an Efficient Automotive Augmented Reality Solution Using Teacher-student Learning and Sprints” tutorial at the May 2023 Embedded Vision Summit. ImmersiView is a deep learning–based augmented reality solution for automotive safety. It uses a head-up display to draw a driver’s attention to important objects. The development of such […]
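Teacher-student learning (knowledge distillation) trains a compact "student" network to match the softened output distribution of a larger "teacher," which is how such solutions typically fit a deployable model into an embedded budget. A minimal NumPy sketch of the core loss term, assuming the standard temperature-scaled formulation (the teaser does not spell out STRADVISION's exact recipe):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis; higher T softens
    the distribution, exposing the teacher's relative class preferences."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Cross-entropy between the teacher's softened distribution and the
    student's -- the term a student minimizes during distillation.
    The T**2 factor is the usual gradient-rescaling convention."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T))
    return -(p_teacher * log_p_student).sum(axis=-1).mean() * T**2
```

In practice this term is blended with the ordinary hard-label loss; the loss is minimized exactly when the student reproduces the teacher's distribution.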

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411