Tools

Edge Impulse Demonstration of Face Detection with FOMO and Alif Semiconductor’s Ensemble MCU

Shawn Hymel, Developer Relations Engineer at Edge Impulse, demonstrates the company’s latest edge AI and vision technologies and products at the 2022 Embedded Vision Summit. Specifically, Hymel demonstrates how to use Edge Impulse’s ground-breaking FOMO algorithm for real-time face detection. The demo runs live inference on an Alif Semiconductor Ensemble board, which combines an Arm […]

“Unifying Computer Vision and Natural Language Understanding for Autonomous Systems,” a Presentation from Verizon

Mumtaz Vauhkonen, Lead Distinguished Scientist and Head of Computer Vision for Cognitive AI in AI&D at Verizon, presents the “Unifying Computer Vision and Natural Language Understanding for Autonomous Systems” tutorial at the May 2022 Embedded Vision Summit. As the applications of autonomous systems expand, many such systems need the ability to perceive using both vision […]

“Compound CNNs for Improved Classification Accuracy,” a Presentation from Southern Illinois University Carbondale

Spyros Tragoudas, Professor and School Director at Southern Illinois University Carbondale, presents the “Compound CNNs for Improved Classification Accuracy” tutorial at the May 2022 Embedded Vision Summit. In this talk, Tragoudas presents a novel approach to improving the accuracy of convolutional neural networks (CNNs) used for classification. The approach utilizes the confusion matrix of the […]
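The teaser does not describe the compound-CNN method itself, but the object it builds on, the confusion matrix, is straightforward to illustrate. The sketch below is a generic construction (the function and toy labels are our own, not from the presentation): entry (i, j) counts samples of true class i predicted as class j, so large off-diagonal entries flag class pairs the network frequently confuses.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """cm[i, j] = number of samples with true class i predicted as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Toy 3-class example: one sample of true class 1 is misclassified as class 2.
cm = confusion_matrix([0, 0, 1, 1, 2], [0, 0, 1, 2, 2], n_classes=3)
print(cm)
```

Presumably, class pairs with heavy off-diagonal mass are the candidates a compound approach would target with an additional, specialized classifier, though the talk's specific strategy is beyond this teaser.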

“Strategies and Methods for Sensor Fusion,” a Presentation from Sensor Cortek

Robert Laganiere, CEO of Sensor Cortek, presents the “Strategies and Methods for Sensor Fusion” tutorial at the May 2022 Embedded Vision Summit. Highly autonomous machines require advanced perception capabilities. Autonomous machines are generally equipped with three main sensor types: cameras, lidar and radar. The intrinsic limitations of each sensor affect the performance of the perception […]

NVIDIA Launches IGX Edge AI Computing Platform for Safe, Secure Autonomous Systems

Platform Advances Human-Machine Collaboration Across Manufacturing, Logistics and Healthcare. Tuesday, September 20, 2022 – GTC – NVIDIA today introduced the NVIDIA IGX platform for high-precision edge AI, bringing advanced security and proactive safety to sensitive industries such as manufacturing, logistics and healthcare. In the past, such industries required costly solutions custom built for specific use cases, […]

“Incorporating Continuous User Feedback to Achieve Product Longevity in Chaotic Environments,” a Presentation from Observa

Erik Chelstad, CTO and Co-founder of Observa, presents the “Incorporating Continuous User Feedback to Achieve Product Longevity in Chaotic Environments” tutorial at the May 2022 Embedded Vision Summit. In many computer vision applications, a key challenge is maintaining accuracy when the real world is changing. In this presentation, Chelstad explores techniques for designing hardware and […]

“A Cost-Effective Approach to Modeling Object Interactions on the Edge,” a Presentation from Nemo @ Ridecell

Arun Kumar, Perception Engineer at Nemo @ Ridecell, presents the “A Cost-Effective Approach to Modeling Object Interactions on the Edge” tutorial at the May 2022 Embedded Vision Summit. Determining bird’s eye view (BEV) object positions and tracks, and modeling the interactions among objects, is vital for many applications, including understanding human interactions for security and road […]

“COVID-19 Safe Distancing Measures in Public Spaces with Edge AI,” a Presentation from the Government Technology Agency of Singapore

Ebi Jose, Senior Systems Engineer at GovTech, the Government Technology Agency of Singapore, presents the “COVID-19 Safe Distancing Measures in Public Spaces with Edge AI” tutorial at the May 2022 Embedded Vision Summit. Whether in indoor environments, such as supermarkets, museums and offices, or outdoor environments, such as parks, maintaining safe social distancing has been […]

Arm Supports FP8: A New 8-bit Floating-point Interchange Format for Neural Network Processing

This blog post was originally published at Arm’s website. It is reprinted here with the permission of Arm. Enabling secure and ubiquitous Artificial Intelligence (AI) is a key priority for the Arm architecture. The potential for AI and machine learning (ML) is clear, with new use cases and benefits emerging almost daily – but alongside […]
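The FP8 interchange proposal covers two layouts, E4M3 and E5M2. As a rough illustration of how little dynamic range and precision 8 bits provide, here is a minimal, unofficial decoder for the E4M3 layout (1 sign bit, 4 exponent bits with bias 7, 3 mantissa bits, no infinities, a single NaN pattern). The function name and exact special-value handling are this sketch’s assumptions, not Arm’s specification text.

```python
def decode_fp8_e4m3(byte: int) -> float:
    """Decode one FP8 E4M3 byte: 1 sign, 4 exponent (bias 7), 3 mantissa bits."""
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> 3) & 0xF
    mant = byte & 0x7
    if exp == 0xF and mant == 0x7:
        # E4M3 trades infinities away for range; only all-ones encodes NaN.
        return float("nan")
    if exp == 0:
        # Subnormal: (mant / 8) * 2 ** (1 - 7)
        return sign * mant * 2.0 ** -9
    return sign * (1.0 + mant / 8.0) * 2.0 ** (exp - 7)

print(decode_fp8_e4m3(0b0_0111_000))  # 1.0
print(decode_fp8_e4m3(0b0_1111_110))  # 448.0, the largest finite E4M3 value
```

Note how coarse the grid is near its maximum: with only 3 mantissa bits, the step between neighboring values at the top binade is 32, which is why FP8 training schemes typically pair the format with per-tensor scaling.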

“Nested Hierarchical Transformer: Towards Accurate, Data-Efficient and Interpretable Visual Understanding,” a Presentation from Google

Zizhao Zhang, Staff Research Software Engineer and Tech Lead for Cloud AI Research at Google, presents the “Nested Hierarchical Transformer: Towards Accurate, Data-Efficient and Interpretable Visual Understanding” tutorial at the May 2022 Embedded Vision Summit. In computer vision, hierarchical structures are popular in vision transformers (ViT). In this talk, Zhang presents a novel idea of […]

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411