Software

Floating-point Arithmetic for AI Inference: Hit or Miss?

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Our latest whitepaper shows that a new floating-point format doesn’t measure up to integer when you’re quantizing AI models to run on edge devices. Artificial intelligence (AI) has become pervasive in our lives, improving our phones, cars, […]
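For readers new to the topic, integer quantization maps a model’s floating-point values onto a small integer grid via a scale factor. Below is a minimal sketch of per-tensor symmetric int8 quantization in Python; it is purely illustrative and is not the format comparison or code from the whitepaper.

```python
# Illustrative sketch of per-tensor symmetric int8 quantization.
# Not taken from the whitepaper; real toolchains also handle
# per-channel scales, zero points, and calibration data.
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map float values onto [-128, 127] using one per-tensor scale."""
    scale = np.abs(x).max() / 127.0              # largest magnitude -> 127
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(weights)
print("max round-trip error:", np.abs(weights - dequantize(q, scale)).max())
```

The whitepaper asks whether the new floating-point format can outperform this kind of integer mapping for edge inference; per the summary above, it concludes that it does not.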

Machine Learning 101: Build, Train, Test, Rinse and Repeat

This blog post was originally published at Tryolabs’ website. It is reprinted here with the permission of Tryolabs. In the previous episode of our Road to Data Science series, we went through all the general skills needed to become a proficient data scientist. In this post, we start to dig deeper into the specifics of […]

Edge Impulse Demonstration of Predictive Maintenance Using BrickML

Shawn Hymel, Senior Developer Relations Engineer at Edge Impulse, demonstrates the company’s latest edge AI and vision technologies and products at the 2023 Embedded Vision Summit. Specifically, Hymel demonstrates BrickML, a reference design for a tool that can monitor and measure machine health, using machine learning to detect anomalies and predict when maintenance is needed.
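As a rough illustration of the kind of anomaly detection such a tool performs (a hypothetical sketch, not BrickML’s actual algorithm), the snippet below scores new sensor readings by their deviation from a baseline captured during healthy operation:

```python
# Hypothetical anomaly scoring for machine-health monitoring.
# Not BrickML's algorithm; just the general idea of flagging
# readings that deviate from a healthy baseline.
import numpy as np

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=(500, 3))   # features from healthy runs
mean, std = baseline.mean(axis=0), baseline.std(axis=0)

def anomaly_score(sample: np.ndarray) -> float:
    """Mean absolute z-score relative to the healthy baseline."""
    return float(np.abs((sample - mean) / std).mean())

healthy = rng.normal(0.0, 1.0, size=3)
faulty = rng.normal(4.0, 1.0, size=3)            # e.g., abnormal vibration
print(anomaly_score(healthy), anomaly_score(faulty))
```

A score well above those seen during normal operation would trigger a maintenance alert.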

Supercharge Edge AI with NVIDIA TAO on Edge Impulse

We are excited to announce a significant leap forward for the edge AI ecosystem. NVIDIA’s TAO Toolkit is now fully integrated into Edge Impulse, enhancing our platform’s capabilities and creating new possibilities for developers, engineers, and businesses alike. Check out the Edge Impulse documentation for more information on how to get started with NVIDIA TAO.

Maximizing Attention, Minimizing Costs: Embracing Intelligent Digital Assistants with Vision and Speech Processing in the Cloud and Edge

This blog post was originally published by GMAC Intelligence. It is reprinted here with the permission of GMAC Intelligence. Humans rely mainly on speech, vision and touch to operate efficiently and effectively in the physical world. We also rely on smell and taste for our activities and our survival, but for most of […]

Transformer Models and NPU IP Co-optimized for the Edge

Transformers are taking the AI world by storm, as evidenced by super-intelligent chatbots and search tools, as well as image and art generators. These are also based on neural network technologies, but are programmed quite differently from the more commonly understood convolutional methods. Now transformers are starting to make their way into edge applications.
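To make the contrast with convolution concrete, here is a minimal sketch of scaled dot-product attention, the core operation of a transformer: where a convolution mixes only a fixed local window, attention lets every token weight every other token. This is an illustrative simplification; real transformer layers add learned projections, multiple heads, residual connections, and normalization.

```python
# Minimal scaled dot-product attention (illustrative simplification).
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Each of the n tokens attends to all n tokens, regardless of distance."""
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))      # (n, n) mixing matrix
    return weights @ V                           # weighted sum of values

n_tokens, dim = 6, 8
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(n_tokens, dim)) for _ in range(3))
print(attention(Q, K, V).shape)                  # (6, 8)
```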

DEEPX Demonstration of Simplifying Software Development with DEEPX’s Two-step SDK

Jay Kim, EVP of Technology for DEEPX, demonstrates the company’s latest edge AI and vision technologies and products at the 2023 Embedded Vision Summit. Specifically, Kim demonstrates the simplicity of using DEEPX’s software development kit (SDK). Kim shows how to choose a target application and select an AI software framework in just two easy steps.

Reflections from RSS: Three Reasons DL Fails at Autonomy

This blog post was originally published by Opteran Technologies. It is reprinted here with the permission of Opteran Technologies. Last week I had the pleasure of attending, and presenting at, the annual Robotics: Science and Systems (RSS) conference in Daegu, South Korea. RSS ranks amongst the most prestigious of the international robotics conferences, and brings together […]

“How Transformers Are Changing the Nature of Deep Learning Models,” a Presentation from Synopsys

Tom Michiels, System Architect for ARC Processors at Synopsys, presents the “How Transformers Are Changing the Nature of Deep Learning Models” tutorial at the May 2023 Embedded Vision Summit. The neural network models used in embedded real-time applications are evolving quickly. Transformer networks are a deep learning approach that has become dominant for natural language […]

Get a Clearer Picture of Vision Transformers’ Potential at the Edge

This blog post was originally published at BrainChip’s website. It is reprinted here with the permission of BrainChip. Scenario: Corporate security staff get an alert that a video camera has detected a former employee entering an off-limits building. Scenario: A radiologist receives a flag that an MRI contains early markers for potentially abnormal tissue growth.

