Multiclass Confusion Matrix for Object Detection

This blog post was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. We introduce the Multiclass Confusion Matrix for Object Detection, a table that can help you perform failure analysis by identifying otherwise-unnoticeable errors, such as edge cases or non-representative data. In this article we introduce […]

May 2023 Embedded Vision Summit Vision Tank Competition Finalist Presentations

Swathi A N Kumar, Founder and CEO of BetterMeal AI (substituting for David Hojah, CEO and Founder of Parrots), Amit Mate, Founder and CEO of GMAC Intelligence, Slava Chesnokov, CTO of Lemur Imaging, Tsvi Achler, Founder of Optimizing Mind, and Robert Brown, CEO of ProHawk Technology Group, deliver their Vision Tank finalist presentations at the

Introducing Temporian: Tryolabs and Google Venture in Temporal Data Processing

This blog post was originally published at Tryolabs’ website. It is reprinted here with the permission of Tryolabs. Today marks a significant milestone for us at Tryolabs as we introduce Temporian. In collaboration with Google, we’ve designed this tool to address the multifaceted challenges of temporal data processing head-on. Let’s explore the inspiration, functionality, and

“Generative AI: How Will It Impact Edge Applications and Machine Perception?,” An Embedded Vision Summit Expert Panel Discussion

Sally Ward-Foxton, Senior Reporter at EE Times, moderates the “Generative AI: How Will It Impact Edge Applications and Machine Perception?” Expert Panel at the May 2023 Embedded Vision Summit. Other panelists include Greg Kostello, CTO and Co-Founder of Huma.AI, Vivek Pradeep, Partner Research Manager at Microsoft, Steve Teig, CEO of Perceive, and Roland Memisevic, Senior

“Frontiers in Perceptual AI: First-person Video and Multimodal Perception,” a Keynote Presentation from Kristen Grauman

Kristen Grauman, Professor at the University of Texas at Austin and Research Director at Facebook AI Research, presents the “Frontiers in Perceptual AI: First-person Video and Multimodal Perception” keynote at the May 2023 Embedded Vision Summit. First-person or “egocentric” perception requires understanding the video and multimodal data that streams from wearable cameras and other sensors.

“3D Sensing: Market and Industry Update,” a Presentation from the Yole Group

Florian Domengie, Senior Technology and Market Analyst at Yole Intelligence (part of the Yole Group), presents the “3D Sensing: Market and Industry Update” tutorial at the May 2023 Embedded Vision Summit. While the adoption of mobile 3D sensing has slowed in Android phones, the market has still been growing fast, thanks to Apple. Apple is

“Multiple Object Tracking Systems,” a Presentation from Tryolabs

Javier Berneche, Senior Machine Learning Engineer at Tryolabs, presents the “Multiple Object Tracking Systems” tutorial at the May 2023 Embedded Vision Summit. Multiple object tracking (MOT) is an essential capability in many computer vision systems, including applications in fields such as traffic control, self-driving vehicles, sports and more. In this session, Berneche walks through the

“Open Standards Unleash Hardware Acceleration for Embedded Vision,” a Presentation from the Khronos Group

Neil Trevett, President of the Khronos Group and Vice President of Developer Ecosystems at NVIDIA, presents the “Open Standards Unleash Hardware Acceleration for Embedded Vision” tutorial at the May 2023 Embedded Vision Summit. Offloading visual processing to a hardware accelerator has many advantages for embedded vision systems. Decoupling hardware and software removes barriers to innovation

Top Factors to Consider When Integrating Multiple Cameras into Embedded Vision Applications

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Integrating multiple cameras into a camera-enabled system offers significant advantages, making it a key requirement in applications such as industrial automation, surveillance, virtual reality, and more. Discover the factors that play a crucial role

“Responsible AI: Tools and Frameworks for Developing AI Solutions,” a Presentation from Intel

Mrinal Karvir, Senior Cloud Software Engineering Manager at Intel, presents the “Responsible AI: Tools and Frameworks for Developing AI Solutions” tutorial at the May 2023 Embedded Vision Summit. Over 90% of businesses using AI say trustworthy and explainable AI is critical to business, according to Morning Consult’s IBM Global AI Adoption Index 2021. If not

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411