Technical Insights

“MIPI CSI-2 Image Sensor Interface Standard Features Enable Efficient Embedded Vision Systems,” a Presentation from the MIPI Alliance

Haran Thanigasalam, Camera and Imaging Consultant to the MIPI Alliance, presents the “MIPI CSI-2 Image Sensor Interface Standard Features Enable Efficient Embedded Vision Systems” tutorial at the May 2023 Embedded Vision Summit. As computer vision applications continue to evolve rapidly, there’s a growing need for a smarter standardized interface connecting multiple image sensors to processors […]


“Practical Approaches to DNN Quantization,” a Presentation from Magic Leap

Dwith Chenna, Senior Embedded DSP Engineer for Computer Vision at Magic Leap, presents the “Practical Approaches to DNN Quantization” tutorial at the May 2023 Embedded Vision Summit. Convolutional neural networks, widely used in computer vision tasks, require substantial computation and memory resources, making it challenging to run these models on resource-constrained devices. Quantization involves modifying […]


“Optimizing Image Quality and Stereo Depth at the Edge,” a Presentation from John Deere

Travis Davis, Delivery Manager in the Automation and Autonomy Core, and Tarik Loukili, Technical Lead for Automation and Autonomy Applications, both of John Deere, present the “Optimizing Image Quality and Stereo Depth at the Edge” tutorial at the May 2023 Embedded Vision Summit. John Deere uses machine learning and computer vision (including stereo vision) for challenging outdoor applications […]


“Using a Collaborative Network of Distributed Cameras for Object Tracking,” a Presentation from Invision AI

Samuel Örn, Team Lead and Senior Machine Learning and Computer Vision Engineer at Invision AI, presents the “Using a Collaborative Network of Distributed Cameras for Object Tracking” tutorial at the May 2023 Embedded Vision Summit. Using multiple fixed cameras to track objects requires a careful solution design. To enable scaling the number of cameras, the […]


“A Survey of Model Compression Methods,” a Presentation from Instrumental

Rustem Feyzkhanov, Staff Machine Learning Engineer at Instrumental, presents the “A Survey of Model Compression Methods” tutorial at the May 2023 Embedded Vision Summit. One of the main challenges when deploying computer vision models to the edge is optimizing the model for speed, memory and energy consumption. In this talk, Feyzkhanov provides a comprehensive survey of […]


“Learning for 360° Vision,” a Presentation from Google

Yu-Chuan Su, Research Scientist at Google, presents the “Learning for 360° Vision” tutorial at the May 2023 Embedded Vision Summit. As a core building block of virtual reality (VR) and augmented reality (AR) technology, and with the rapid growth of VR and AR, 360° cameras are becoming more available and more popular. People now create, […]


“Efficient Many-function Video ML at the Edge,” a Presentation from Cisco Systems

Chris Rowen, Vice President of AI Engineering for Webex Collaboration at Cisco Systems, presents the “Efficient Many-function Video ML at the Edge” tutorial at the May 2023 Embedded Vision Summit. Video streams are so rich, and video workloads are so sophisticated, that we may now expect video ML to supply many simultaneous insights and transformations.


“Efficient Neuromorphic Computing with Dynamic Vision Sensor, Spiking Neural Network Accelerator and Hardware-aware Algorithms,” a Presentation from Arizona State University

Jae-sun Seo, Associate Professor at Arizona State University, presents the “Efficient Neuromorphic Computing with Dynamic Vision Sensor, Spiking Neural Network Accelerator and Hardware-aware Algorithms” tutorial at the May 2023 Embedded Vision Summit. Spiking neural networks (SNNs) mimic biological nervous systems. Using event-driven computation and communication, SNNs achieve very low power consumption. However, two important issues […]


“Item Recognition in Retail,” a Presentation from 7-Eleven

Sumedh Datar, Senior Machine Learning Engineer at 7-Eleven, presents the “Item Recognition in Retail” tutorial at the May 2023 Embedded Vision Summit. Computer vision has vast potential in the retail space. 7-Eleven is working on fast frictionless checkout applications to better serve customers. These solutions range from faster checkout systems to fully automated cashierless stores.


“Embedded Vision in Robotics, Biotech and Education,” an Interview with Dean Kamen

Dean Kamen, Founder of DEKA Research and Development, talks with Jeff Bier, Founder of the Edge AI and Vision Alliance, for the “Embedded Vision in Robotics, Biotech and Education” interview at the May 2023 Embedded Vision Summit. In his 2018 keynote presentation at the Embedded Vision Summit, legendary inventor and technology visionary Dean Kamen memorably […]


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.
