Edge AI and Vision Alliance

“MIPI CSI-2 Image Sensor Interface Standard Features Enable Efficient Embedded Vision Systems,” a Presentation from the MIPI Alliance

Haran Thanigasalam, Camera and Imaging Consultant to the MIPI Alliance, presents the “MIPI CSI-2 Image Sensor Interface Standard Features Enable Efficient Embedded Vision Systems” tutorial at the May 2023 Embedded Vision Summit. As computer vision applications continue to evolve rapidly, there’s a growing need for a smarter standardized interface connecting multiple image sensors to processors […]

“Introduction to the MIPI CSI-2 Image Sensor Interface Standard,” a Presentation from the MIPI Alliance

Haran Thanigasalam, Camera and Imaging Consultant to the MIPI Alliance, presents the “Introduction to the MIPI CSI-2 Image Sensor Interface Standard” tutorial at the May 2023 Embedded Vision Summit. By taking advantage of select features in standardized interfaces, vision system architects can help reduce processor load, cost and power consumption while gaining flexibility to source […]

“Practical Approaches to DNN Quantization,” a Presentation from Magic Leap

Dwith Chenna, Senior Embedded DSP Engineer for Computer Vision at Magic Leap, presents the “Practical Approaches to DNN Quantization” tutorial at the May 2023 Embedded Vision Summit. Convolutional neural networks, widely used in computer vision tasks, require substantial computation and memory resources, making it challenging to run these models on resource-constrained devices. Quantization involves modifying […]
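As background for the topic of this talk: one widely used quantization scheme is uniform symmetric int8 quantization, where float32 weights are mapped to 8-bit integers via a single per-tensor scale. The sketch below is a minimal illustration of that general idea, not the specific method from the presentation; all names are illustrative.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 with a per-tensor symmetric scale."""
    scale = np.max(np.abs(weights)) / 127.0  # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

# Example: quantize a random weight matrix and measure round-trip error.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = np.max(np.abs(w - w_hat))  # round-to-nearest error is bounded by scale/2
```

The appeal of this scheme is that storage drops 4x (int8 vs. float32) and integer arithmetic is cheaper on edge hardware, at the cost of a bounded per-weight error.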

“Optimizing Image Quality and Stereo Depth at the Edge,” a Presentation from John Deere

Travis Davis, Delivery Manager in the Automation and Autonomy Core, and Tarik Loukili, Technical Lead for Automation and Autonomy Applications, both of John Deere, present the “Optimizing Image Quality and Stereo Depth at the Edge” tutorial at the May 2023 Embedded Vision Summit. John Deere uses machine learning and computer vision (including stereo vision) for challenging outdoor applications […]

“Using a Collaborative Network of Distributed Cameras for Object Tracking,” a Presentation from Invision AI

Samuel Örn, Team Lead and Senior Machine Learning and Computer Vision Engineer at Invision AI, presents the “Using a Collaborative Network of Distributed Cameras for Object Tracking” tutorial at the May 2023 Embedded Vision Summit. Using multiple fixed cameras to track objects requires a careful solution design. To enable scaling the number of cameras, the […]

“A Survey of Model Compression Methods,” a Presentation from Instrumental

Rustem Feyzkhanov, Staff Machine Learning Engineer at Instrumental, presents the “Survey of Model Compression Methods” tutorial at the May 2023 Embedded Vision Summit. One of the main challenges when deploying computer vision models to the edge is optimizing the model for speed, memory and energy consumption. In this talk, Feyzkhanov provides a comprehensive survey of […]
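One family of compression methods a survey like this typically covers is unstructured magnitude pruning: zeroing out the smallest-magnitude weights so the tensor becomes sparse. The snippet below is a hedged sketch of that technique only, not of the speaker's material; the function name and parameters are illustrative.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a copy of `weights` with the smallest-magnitude fraction zeroed."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to zero out
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(weights) <= threshold] = 0.0
    return pruned

# Example: prune 90% of a random weight matrix.
rng = np.random.default_rng(1)
w = rng.normal(size=(128, 128))
pruned = magnitude_prune(w, 0.9)
achieved = float(np.mean(pruned == 0.0))  # close to the requested 0.9
```

In practice pruning is usually followed by fine-tuning to recover accuracy, and the resulting sparsity only yields speedups on hardware or kernels that exploit it.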

“Reinventing Smart Cities with Computer Vision,” a Presentation from Hayden AI

Vaibhav Ghadiok, Co-founder and CTO of Hayden AI, presents the “Reinventing Smart Cities with Computer Vision” tutorial at the May 2023 Embedded Vision Summit. Hayden AI has developed the first AI-powered data platform for smart and safe city applications such as traffic enforcement, parking and asset management. In this talk, Ghadiok presents his company’s privacy-preserving […]

“Learning for 360° Vision,” a Presentation from Google

Yu-Chuan Su, Research Scientist at Google, presents the “Learning for 360° Vision” tutorial at the May 2023 Embedded Vision Summit. As a core building block of virtual reality (VR) and augmented reality (AR) technology, and with the rapid growth of VR and AR, 360° cameras are becoming more available and more popular. People now create, […]

Edge AI and Vision Insights: September 27, 2023 Edition

MULTIMODAL PERCEPTION

“Frontiers in Perceptual AI: First-person Video and Multimodal Perception”

First-person or “egocentric” perception requires understanding the video and multimodal data that streams from wearable cameras and other sensors. The egocentric view offers a special window into the camera wearer’s attention, goals, and interactions with people and objects in the environment, making it an […]

“90% of Tech Start-Ups Fail. What Do the Other 10% Know?” a Presentation from Connected Vision Advisors

Simon Morris, Executive Advisor at Connected Vision Advisors, presents the “90% of Tech Start-Ups Fail. What Do the Other 10% Know?” tutorial at the May 2023 Embedded Vision Summit. Morris is fortunate to have led three tech start-ups with three successful exits. He received a lot of advice along the way from venture investors, co-founders, […]

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411