
“Efficiently Map AI and Vision Applications onto Multi-core AI Processors Using CEVA’s Parallel Processing Framework,” a Presentation from CEVA

Rami Drucker, Machine Learning Software Architect at CEVA, presents the “Efficiently Map AI and Vision Applications onto Multi-core AI Processors Using CEVA’s Parallel Processing Framework” tutorial at the May 2023 Embedded Vision Summit. Next-generation AI and computer vision applications for autonomous vehicles, cameras, drones and robots require higher-than-ever computing power. Often, the most efficient way […]
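
"Mapping AI and vision applications onto multi-core processors" refers to splitting a workload so that independent pieces run on separate cores. As rough background only, here is a minimal data-parallel sketch in Python that tiles a frame across a pool of worker processes; it is a generic illustration and does not use CEVA's Parallel Processing Framework or any of its APIs.

# Generic data-parallel sketch (not CEVA's framework): split a frame into row
# tiles, hand one tile to each worker, then gather the per-tile results.
from multiprocessing import Pool

import numpy as np

def process_tile(tile):
    # Stand-in per-tile kernel; a real pipeline would run a filter or NN layer here.
    return float(tile.mean())

def process_frame(frame, num_workers=4):
    tiles = np.array_split(frame, num_workers, axis=0)  # row-wise tiling
    with Pool(num_workers) as pool:
        return pool.map(process_tile, tiles)            # one tile per worker core

if __name__ == "__main__":
    frame = np.random.rand(1080, 1920).astype(np.float32)
    print(process_frame(frame))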

“Streamlining Embedded Vision Development with Smart Vision Components,” a Presentation from Basler

Selena Schwarm, Team Lead for Global Partner Management at Basler, presents the “Streamlining Embedded Vision Development with Smart Vision Components” tutorial at the May 2023 Embedded Vision Summit. The evolution of embedded vision and imaging technologies is enabling the development of powerful applications that would not have been practical previously. The possibilities seem to be […]

“A Very Low-power Human-machine Interface Using ToF Sensors and Embedded AI,” a Presentation from 7 Sensing Software

Di Ai, Machine Learning Engineer at 7 Sensing Software, presents the “Very Low-power Human-machine Interface Using ToF Sensors and Embedded AI” tutorial at the May 2023 Embedded Vision Summit. Human-machine interaction is essential for smart devices. But growing needs for low power consumption and privacy pose challenges to developers of human-machine interfaces (HMIs). Time-of-flight (ToF) […]
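
As context for why low-resolution ToF depth data suits low-power interfaces, the sketch below shows how small a depth-based gesture classifier can be. The 64×64 input resolution, layer sizes and four-class output are illustrative assumptions, not 7 Sensing Software's actual network.

# Hypothetical sketch of a tiny CNN gesture classifier over one low-resolution
# ToF depth frame; sizes and class count are assumptions for illustration.
import torch
import torch.nn as nn

class TinyToFGestureNet(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, depth):  # depth: (N, 1, 64, 64)
        return self.classifier(self.features(depth).flatten(1))

model = TinyToFGestureNet()
print(sum(p.numel() for p in model.parameters()), "parameters")  # ~1.3k weights
logits = model(torch.zeros(1, 1, 64, 64))  # one depth frame in, gesture scores out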

“AI-ISP: Adding Real-time AI Functionality to Image Signal Processing with Reduced Memory Footprint and Processing Latency,” a Presentation from VeriSilicon

Mankit Lo, Chief Architect for NPU IP Development at VeriSilicon, presents the “AI-ISP: Adding Real-time AI Functionality to Image Signal Processing with Reduced Memory Footprint and Processing Latency” tutorial at the May 2023 Embedded Vision Summit. The AI-ISP IP product from VeriSilicon is a revolutionary solution that adds AI functionality to image signal processing (ISP) […]
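
Without implying anything about how VeriSilicon's AI-ISP is built internally, the sketch below illustrates the general principle behind keeping memory footprint and latency low in streaming image pipelines: process the frame in small horizontal strips so that only one strip's working buffer is live at a time.

# Generic strip-based processing sketch (not VeriSilicon's implementation).
# In a real line-buffered ISP, rows would stream in from the sensor rather
# than being read from a full frame already resident in DRAM.
import numpy as np

def enhance_strip(strip):
    # Stand-in for an AI enhancement/denoising step applied to one strip of rows.
    return np.clip(strip, 0.0, 1.0)

def process_in_strips(frame, strip_rows=64):
    out = np.empty_like(frame)
    for y0 in range(0, frame.shape[0], strip_rows):
        strip = frame[y0:y0 + strip_rows]            # only this strip is worked on
        out[y0:y0 + strip_rows] = enhance_strip(strip)
    return out

frame = np.random.rand(1080, 1920).astype(np.float32)
result = process_in_strips(frame)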

“Developing an Efficient Automotive Augmented Reality Solution Using Teacher-student Learning and Sprints,” a Presentation from STRADVISION

Jack Sim, CTO of STRADVISION, presents the “Developing an Efficient Automotive Augmented Reality Solution Using Teacher-student Learning and Sprints” tutorial at the May 2023 Embedded Vision Summit. ImmersiView is a deep learning–based augmented reality solution for automotive safety. It uses a head-up display to draw a driver’s attention to important objects. The development of such […]
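
“Teacher-student learning” generally refers to knowledge distillation, in which a compact student network is trained to match a larger teacher's softened outputs as well as the ground-truth labels. The sketch below shows a standard distillation loss for illustration only; it is not STRADVISION's training code, and the temperature and weighting values are assumptions.

# Standard knowledge-distillation loss (illustrative; not STRADVISION's code).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft-target term: the student mimics the temperature-softened teacher outputs.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary supervised loss on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random logits for a 10-class problem:
s = torch.randn(8, 10, requires_grad=True)
t = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
print(distillation_loss(s, t, y).item())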

“Introducing the i.MX 93: Your ‘Go-to’ Processor for Embedded Vision,” a Presentation from NXP Semiconductors

Srikanth Jagannathan, Product Manager at NXP Semiconductors, presents the “Introducing the i.MX 93: Your ‘Go-to’ Processor for Embedded Vision” tutorial at the May 2023 Embedded Vision Summit. In this presentation, you’ll learn all about NXP’s just-launched i.MX 93 applications processor family. The i.MX 93 is built with NXP’s innovative Energy Flex architecture, which delivers high […]

“How to Select, Train, Optimize and Deploy Edge Vision AI Models in Three Days,” a Presentation from Nota AI

Steven Kim, Co-CEO of Nota America, presents the “How to Select, Train, Optimize and Deploy Edge Vision AI Models in Three Days” tutorial at the May 2023 Embedded Vision Summit. NetsPresso, as explained by Kim in this presentation, is a development pipeline that enables developers to build, optimize and deploy vision AI models faster and […]
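
The NetsPresso API itself is not reproduced here. As a stand-in, the sketch below applies PyTorch post-training dynamic quantization to a toy model, the kind of “optimize” step that such a pipeline automates before deployment to edge targets.

# Not the NetsPresso API: an example of one common optimization step, dynamic
# post-training quantization of a model's linear layers to int8 with PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 128), nn.ReLU(),
    nn.Linear(128, 10),
)
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 3, 32, 32)
print(quantized(x).shape)  # same interface as the original model, smaller int8 weights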

“Optimized Image Processing for Automotive Image Sensors with Novel Color Filter Arrays,” a Presentation from Nextchip

Young-Jun Yoo, Vice President of the Automotive Business and Operations Unit at Nextchip, presents the “Optimized Image Processing for Automotive Image Sensors with Novel Color Filter Arrays” tutorial at the May 2023 Embedded Vision Summit. Traditionally, image sensors have been optimized to produce images that look natural to humans. For images consumed by algorithms, what […]

“Building Large-scale Distributed Computer Vision Solutions Without Starting from Scratch,” a Presentation from Network Optix

Darren Odom, Director of Platform Business Development at Network Optix, presents the “Building Large-scale Distributed Computer Vision Solutions Without Starting from Scratch” tutorial at the May 2023 Embedded Vision Summit. Video is hard. Network Optix makes it really easy. Video has the potential to become a valuable source of operational data for business, especially with […]

“Fast-track Design Cycles Using Lattice’s FPGAs,” a Presentation from Lattice Semiconductor

Hussein Osman, Segment Marketing Director at Lattice Semiconductor, presents the “Fast-track Design Cycles Using Lattice’s FPGAs” tutorial at the May 2023 Embedded Vision Summit. Being first to market can mean the difference between success and failure of a new product. But rapid product development brings challenges. With the growing use of AI in embedded vision […]
