Technical Insights

“Open Standards Unleash Hardware Acceleration for Embedded Vision,” a Presentation from the Khronos Group

Neil Trevett, President of the Khronos Group and Vice President of Developer Ecosystems at NVIDIA, presents the “Open Standards Unleash Hardware Acceleration for Embedded Vision” tutorial at the May 2023 Embedded Vision Summit. Offloading visual processing to a hardware accelerator has many advantages for embedded vision systems. Decoupling hardware and software removes barriers to innovation […]
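
As a hedged illustration of the offloading idea (not a method from the presentation), the sketch below pushes a trivial per-pixel threshold onto whatever device the OpenCL runtime exposes, using the pyopencl bindings; OpenCL is one of the Khronos open standards, and the kernel, sizes and names here are arbitrary choices for the example.

```python
# Hedged sketch: offload a per-pixel threshold to an OpenCL device via pyopencl.
# OpenCL is one of the Khronos open standards; pyopencl and the kernel below are
# illustrative choices, not anything prescribed by the presentation.
import numpy as np
import pyopencl as cl

KERNEL_SRC = """
__kernel void threshold_u8(__global const uchar *src,
                           __global uchar *dst,
                           const uchar level)
{
    size_t i = get_global_id(0);
    dst[i] = (src[i] > level) ? 255 : 0;
}
"""

def threshold_on_accelerator(image: np.ndarray, level: int = 128) -> np.ndarray:
    """Run a binary threshold on whichever OpenCL device the runtime picks."""
    flat = np.ascontiguousarray(image, dtype=np.uint8).ravel()
    ctx = cl.create_some_context(interactive=False)   # CPU, GPU, or other accelerator
    queue = cl.CommandQueue(ctx)

    mf = cl.mem_flags
    src_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=flat)
    dst_buf = cl.Buffer(ctx, mf.WRITE_ONLY, flat.nbytes)

    program = cl.Program(ctx, KERNEL_SRC).build()
    program.threshold_u8(queue, (flat.size,), None, src_buf, dst_buf, np.uint8(level))

    result = np.empty_like(flat)
    cl.enqueue_copy(queue, result, dst_buf)
    return result.reshape(image.shape)

if __name__ == "__main__":
    frame = (np.random.rand(480, 640) * 255).astype(np.uint8)  # stand-in camera frame
    binary = threshold_on_accelerator(frame)
    print(binary.shape, binary.dtype)
```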

“Next-generation Computer Vision Methods for Automated Navigation of Unmanned Aircraft,” a Presentation from Immervision

Julie Buquet, Applied Researcher for Imaging and AI at Immervision, presents the “Next-generation Computer Vision Methods for Automated Navigation of Unmanned Aircraft” tutorial at the May 2023 Embedded Vision Summit. Unmanned aircraft systems (UASs) need to perform accurate autonomous navigation using sense-and-avoid algorithms under varying illumination conditions. This requires robust algorithms able to perform consistently […]
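
The abstract is truncated above; purely as a hedged illustration of coping with varying illumination (not a technique attributed to Buquet's talk), the sketch below normalizes local contrast with OpenCV's CLAHE before any downstream sense-and-avoid processing.

```python
# Hedged illustration only: one common way to reduce sensitivity to changing
# illumination is local contrast normalization (CLAHE) before detection.
# This is not taken from the Immervision presentation itself.
import cv2
import numpy as np

def normalize_illumination(frame_bgr: np.ndarray) -> np.ndarray:
    """Equalize local contrast on the luminance channel of a BGR frame."""
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l_eq = clahe.apply(l)
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

if __name__ == "__main__":
    frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)  # stand-in frame
    print(normalize_illumination(frame).shape)
```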

“Making Sense of Sensors: Combining Visual, Laser and Wireless Sensors to Power Occupancy Insights for Smart Workplaces,” a Presentation from Camio

Rakshit Agrawal, Vice President of Research and Development at Camio, presents the “Making Sense of Sensors: Combining Visual, Laser and Wireless Sensors to Power Occupancy Insights for Smart Workplaces” tutorial at the May 2023 Embedded Vision Summit. Just as humans rely on multiple senses to understand our environment, electronic systems are increasingly equipped with multiple […]
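
The abstract is truncated above; as a hedged sketch of the general idea rather than Camio's actual pipeline, the snippet below merges independent occupancy estimates from a camera, a laser sensor and a wireless probe into one probability using naive log-odds fusion.

```python
# Hedged sketch, not Camio's method: fuse per-sensor occupancy probabilities
# with independent log-odds (naive Bayes) combination.
import math

def fuse_occupancy(probabilities: dict[str, float], prior: float = 0.5) -> float:
    """Combine independent P(occupied) estimates from several sensors."""
    logit = lambda p: math.log(p / (1.0 - p))
    total = logit(prior)
    for p in probabilities.values():
        total += logit(p) - logit(prior)   # each sensor contributes its evidence
    return 1.0 / (1.0 + math.exp(-total))

if __name__ == "__main__":
    readings = {"camera": 0.80, "lidar": 0.65, "wifi_probe": 0.55}  # illustrative values
    print(f"P(occupied) = {fuse_occupancy(readings):.2f}")
```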

“The OpenVX Standard API: Computer Vision for the Masses,” a Presentation from the Khronos Group

Kiriti Nagesh Gowda, Senior Member of the Technical Staff at AMD and Chair of the OpenVX Working Group at the Khronos Group, presents the “The OpenVX Standard API: Computer Vision for the Masses” tutorial at the May 2023 Embedded Vision Summit. Today, a great deal of effort is wasted in optimizing and re-optimizing computer vision and […]

“Making GANs Much Better, or If at First You Don’t Succeed, Try, Try a GAN,” a Presentation from Perceive

Steve Teig, CEO of Perceive, presents the “Making GANs Much Better, or If at First You Don’t Succeed, Try, Try a GAN” tutorial at the May 2023 Embedded Vision Summit. Generative adversarial networks, or GANs, are widely used to create amazing “fake” images and realistic, synthetic training data. And yet, despite their name, mainstream GANs […]
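
For readers who want a concrete reference point before watching, the sketch below is a deliberately minimal vanilla GAN training loop in PyTorch on toy 1-D data; it shows only the standard generator-versus-discriminator game the talk starts from, not the improvements Teig proposes.

```python
# Hedged, minimal vanilla GAN on toy 1-D data (PyTorch). It illustrates the
# standard generator/discriminator game the talk takes as its starting point;
# it does not implement any of the improvements proposed in the presentation.
import torch
import torch.nn as nn

LATENT_DIM, BATCH, STEPS = 8, 64, 2000

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 32), nn.ReLU(),
    nn.Linear(32, 1),                      # outputs a fake 1-D sample
)
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),        # P(sample is real)
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def real_batch() -> torch.Tensor:
    # Toy "real" distribution: Gaussian with mean 3, std 0.5.
    return torch.randn(BATCH, 1) * 0.5 + 3.0

for step in range(STEPS):
    # Discriminator step: push real samples toward 1, generated samples toward 0.
    real = real_batch()
    fake = generator(torch.randn(BATCH, LATENT_DIM)).detach()
    loss_d = bce(discriminator(real), torch.ones(BATCH, 1)) + \
             bce(discriminator(fake), torch.zeros(BATCH, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make the discriminator output 1 on generated samples.
    fake = generator(torch.randn(BATCH, LATENT_DIM))
    loss_g = bce(discriminator(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

with torch.no_grad():
    samples = generator(torch.randn(5, LATENT_DIM))
print(samples.squeeze().tolist())   # should drift toward the mean-3 distribution
```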

“Selecting Image Sensors for Embedded Vision Applications: Three Case Studies,” a Presentation from Avnet

Monica Houston, Technical Solutions Manager at Avnet, presents the “Selecting Image Sensors for Embedded Vision Applications: Three Case Studies” tutorial at the May 2023 Embedded Vision Summit. Selecting the appropriate type of image sensor is essential for reliable and accurate performance of vision applications. In this talk, Houston explores some of the critical factors to […]

“Unifying Computer Vision and Natural Language Understanding for Autonomous Systems,” a Presentation from Verizon

Mumtaz Vauhkonen, Lead Distinguished Scientist and Head of Computer Vision for Cognitive AI in AI&D at Verizon, presents the “Unifying Computer Vision and Natural Language Understanding for Autonomous Systems” tutorial at the May 2022 Embedded Vision Summit. As the applications of autonomous systems expand, many such systems need the ability to perceive using both vision […]

“Compound CNNs for Improved Classification Accuracy,” a Presentation from Southern Illinois University Carbondale

Spyros Tragoudas, Professor and School Director at Southern Illinois University Carbondale, presents the “Compound CNNs for Improved Classification Accuracy” tutorial at the May 2022 Embedded Vision Summit. In this talk, Tragoudas presents a novel approach to improving the accuracy of convolutional neural networks (CNNs) used for classification. The approach utilizes the confusion matrix of the […]
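
The abstract is truncated above; the compound-CNN construction itself is left to the presentation, but as a hedged sketch of the ingredient it builds on, the snippet below computes a classifier's confusion matrix and lists its most-confused class pairs.

```python
# Hedged sketch: compute a confusion matrix and rank the most-confused class
# pairs. This is only the raw ingredient the talk's approach starts from; the
# compound-CNN construction itself is described in the presentation.
import numpy as np

def confusion_matrix(y_true: np.ndarray, y_pred: np.ndarray, n_classes: int) -> np.ndarray:
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                      # rows: true class, columns: predicted class
    return cm

def most_confused_pairs(cm: np.ndarray, top_k: int = 3):
    off_diag = cm.copy()
    np.fill_diagonal(off_diag, 0)          # ignore correct predictions
    flat = np.argsort(off_diag, axis=None)[::-1][:top_k]
    n = cm.shape[0]
    return [(int(i // n), int(i % n), int(off_diag.flat[i])) for i in flat]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 4, size=500)
    y_pred = np.where(rng.random(500) < 0.8, y_true, rng.integers(0, 4, size=500))
    cm = confusion_matrix(y_true, y_pred, n_classes=4)
    print(cm)
    print("most confused (true, predicted, count):", most_confused_pairs(cm))
```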

“Strategies and Methods for Sensor Fusion,” a Presentation from Sensor Cortek

Robert Laganiere, CEO of Sensor Cortek, presents the “Strategies and Methods for Sensor Fusion” tutorial at the May 2022 Embedded Vision Summit. Highly autonomous machines require advanced perception capabilities. Autonomous machines are generally equipped with three main sensor types: cameras, lidar and radar. The intrinsic limitations of each sensor affect the performance of the perception […]
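
The abstract is truncated above; as a hedged example of one classical fusion strategy (chosen here for illustration, not taken from the talk), the snippet below combines a lidar range and a radar range for the same target by inverse-variance weighting, so the less noisy sensor dominates the fused estimate.

```python
# Hedged example of one classical fusion strategy: inverse-variance weighting
# of two range measurements of the same target. Illustrative only; the talk
# surveys fusion strategies more broadly.
def fuse_measurements(z_lidar: float, var_lidar: float,
                      z_radar: float, var_radar: float) -> tuple[float, float]:
    """Return the fused range estimate and its variance."""
    w_lidar = 1.0 / var_lidar
    w_radar = 1.0 / var_radar
    fused = (w_lidar * z_lidar + w_radar * z_radar) / (w_lidar + w_radar)
    fused_var = 1.0 / (w_lidar + w_radar)   # fused estimate is less uncertain than either input
    return fused, fused_var

if __name__ == "__main__":
    # Lidar: precise but degraded in rain; radar: coarser but weather-robust (illustrative numbers).
    estimate, variance = fuse_measurements(z_lidar=42.3, var_lidar=0.04,
                                           z_radar=41.8, var_radar=0.25)
    print(f"fused range = {estimate:.2f} m (variance {variance:.3f})")
```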

“Incorporating Continuous User Feedback to Achieve Product Longevity in Chaotic Environments,” a Presentation from Observa

Erik Chelstad, CTO and Co-founder of Observa, presents the “Incorporating Continuous User Feedback to Achieve Product Longevity in Chaotic Environments” tutorial at the May 2022 Embedded Vision Summit. In many computer vision applications, a key challenge is maintaining accuracy when the real world is changing. In this presentation, Chelstad explores techniques for designing hardware and […]

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.
