Tools

Transformer Models and NPU IP Co-optimized for the Edge

Transformers are taking the AI world by storm, powering remarkably capable chatbots, search experiences, and image and art generators. Like convolutional networks, transformers are built on neural network technology, but they are structured quite differently from the more widely understood convolutional methods. Now transformers are starting to make their way into edge applications. […]
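
The contrast the teaser draws between transformers and convolutional methods can be sketched in a few lines: in self-attention, every output position can draw on every input position, while a convolution sees only a local window. The NumPy sketch below is illustrative only (a single head, no learned query/key/value projections) and is not tied to any vendor's implementation:

```python
import numpy as np

def self_attention(x):
    """Single-head scaled dot-product self-attention (no learned projections)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                      # (seq, seq) similarities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                 # softmax over all positions
    return w @ x                                       # global receptive field

def conv1d(x, kernel):
    """Depthwise 1-D convolution: each output sees only a local window."""
    k = len(kernel)
    pad = np.pad(x, ((k // 2, k // 2), (0, 0)))
    return np.stack([sum(kernel[j] * pad[i + j] for j in range(k))
                     for i in range(x.shape[0])])

x = np.random.default_rng(0).standard_normal((6, 4))   # 6 "tokens", 4 dims
att = self_attention(x)                                # mixes all 6 positions
loc = conv1d(x, [0.25, 0.5, 0.25])                     # mixes 3 neighbors only
```

The different mixing patterns are one reason transformers stress edge accelerators differently than CNNs do: attention's all-to-all access is memory-bandwidth-heavy, while convolutions reuse small local windows.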

DEEPX Demonstration of Simplifying Software Development with DEEPX’s Two-step SDK

Jay Kim, EVP of Technology for DEEPX, demonstrates the company’s latest edge AI and vision technologies and products at the 2023 Embedded Vision Summit. Specifically, Kim demonstrates the simplicity of using DEEPX’s software development kit (SDK). Kim shows how to choose a target application and select an AI software framework in just two easy steps.

Reflections from RSS: Three Reasons DL Fails at Autonomy

This blog post was originally published by Opteran Technologies. It is reprinted here with the permission of Opteran Technologies. Last week I had the pleasure of attending, and presenting at, the annual Robotics: Science and Systems (RSS) conference in Daegu, South Korea. RSS ranks amongst the most prestigious of the international robotics conferences, and brings together

“How Transformers Are Changing the Nature of Deep Learning Models,” a Presentation from Synopsys

Tom Michiels, System Architect for ARC Processors at Synopsys, presents the “How Transformers Are Changing the Nature of Deep Learning Models” tutorial at the May 2023 Embedded Vision Summit. The neural network models used in embedded real-time applications are evolving quickly. Transformer networks are a deep learning approach that has become dominant for natural language

Get a Clearer Picture of Vision Transformers’ Potential at the Edge

This blog post was originally published at BrainChip’s website. It is reprinted here with the permission of BrainChip. Scenario: Corporate security staff get an alert that a video camera has detected a former employee entering an off-limits building. Scenario: A radiologist receives a flag that an MRI contains early markers for potentially abnormal tissue growth.

“Making GANs Much Better, or If at First You Don’t Succeed, Try, Try a GAN,” a Presentation from Perceive

Steve Teig, CEO of Perceive, presents the “Making GANs Much Better, or If at First You Don’t Succeed, Try, Try a GAN” tutorial at the May 2023 Embedded Vision Summit. Generative adversarial networks, or GANs, are widely used to create amazing “fake” images and realistic, synthetic training data. And yet, despite their name, mainstream GANs
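
As background to the talk, the adversarial setup behind a GAN can be shown with a toy 1-D example: a logistic discriminator d is nudged by one exact gradient step to better separate "real" samples from "fake" ones produced by a simple generator g(z) = a·z + b. This is a generic textbook sketch, not Perceive's method; the distributions and constants are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Toy 1-D data: "real" samples from N(3, 0.5); generator g(z) = a*z + b.
real = rng.normal(3.0, 0.5, 256)
z = rng.standard_normal(256)
a, b = 1.0, 0.0                     # untrained generator parameters
fake = a * z + b

# Discriminator d(x) = sigmoid(w*x + c). The GAN value it ascends is
# E[log d(real)] + E[log(1 - d(fake))].
def d_objective(w, c):
    return (np.mean(np.log(sigmoid(w * real + c)))
            + np.mean(np.log(1.0 - sigmoid(w * fake + c))))

# One exact-gradient ascent step for the discriminator.
w, c, lr = 0.0, 0.0, 0.1
gr = 1.0 - sigmoid(w * real + c)    # d/dlogit of log d(real)
gf = -sigmoid(w * fake + c)         # d/dlogit of log(1 - d(fake))
w2 = w + lr * np.mean(gr * real + gf * fake)
c2 = c + lr * np.mean(gr + gf)

# After the update, the discriminator separates real from fake better.
before, after = d_objective(w, c), d_objective(w2, c2)
```

In full training, the generator takes an opposing step after each discriminator update; the instabilities of that alternating game are part of what the presentation critiques.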

A Buyer's Guide to an NPU

This blog post was originally published at Expedera’s website. It is reprinted here with the permission of Expedera. Choosing the right inference NPU (Neural Processing Unit) is a critical decision for a chip architect. There’s a lot at stake because the AI landscape constantly changes, and the choices will impact overall product cost, performance, and
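
One generic way to structure such a buying decision is a weighted scoring matrix over the criteria the post mentions (cost, performance, and so on). The criteria names, weights, and candidate scores below are entirely hypothetical, shown only to illustrate the mechanics:

```python
# Hypothetical decision matrix: criterion weights (summing to 1) and 1-10 scores.
weights = {"perf_per_watt": 0.4, "unit_cost": 0.3, "toolchain": 0.2, "scalability": 0.1}

candidates = {
    "NPU A": {"perf_per_watt": 8, "unit_cost": 6, "toolchain": 9, "scalability": 7},
    "NPU B": {"perf_per_watt": 9, "unit_cost": 4, "toolchain": 6, "scalability": 9},
}

def score(c):
    """Weighted sum of a candidate's per-criterion scores."""
    return sum(weights[k] * c[k] for k in weights)

best = max(candidates, key=lambda name: score(candidates[name]))
```

The real work, of course, lies in choosing the criteria and weights to match the product's workloads, which is the subject of the post.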

Expedera Announces LittleNPU AI Processors for Always-sensing Camera Applications

Highlights: Specialized NPU IP makes it easier for device makers to implement feature-rich, always-sensing camera systems. A dedicated processing solution addresses the power and privacy concerns of always-sensing camera deployments. Santa Clara, California, July 18, 2023 – Expedera Inc., a leading provider of scalable Neural Processing Unit (NPU) semiconductor intellectual property (IP), today announced the availability of

Qualcomm Works with Meta to Enable On-device AI Applications Using Llama 2

Highlights: Qualcomm is scheduled to make Llama 2-based AI implementations available on flagship smartphones and PCs starting in 2024, enabling developers to usher in new generative AI applications using the AI capabilities of Snapdragon platforms. On-device AI implementation helps increase user privacy, address security preferences, enhance application reliability and enable personalization

“Can AI Solve the Low Light and HDR Challenge?,” a Presentation from Visionary.ai

Oren Debbi, CEO and Co-founder of Visionary.ai, presents the “Can AI Solve the Low Light and HDR Challenge?” tutorial at the May 2023 Embedded Vision Summit. The phrase “garbage in, garbage out” is applicable to machine and human vision. If we can improve the quality of image data at the source by removing noise, this
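
The "garbage in, garbage out" point can be made concrete with the simplest possible denoiser, temporal frame averaging, which cuts Gaussian sensor noise by roughly a factor of √N when N frames are averaged. This classical sketch is purely illustrative and is not Visionary.ai's AI-based approach; the scene and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(42)

# Ground-truth "scene": a smooth horizontal gradient with values in [0, 1].
scene = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))

def noisy_capture(sigma=0.2):
    """One simulated low-light exposure: scene plus Gaussian sensor noise."""
    return scene + rng.normal(0.0, sigma, scene.shape)

def rmse(img):
    """Root-mean-square error against the clean scene."""
    return float(np.sqrt(np.mean((img - scene) ** 2)))

single = noisy_capture()                                  # one noisy frame
stacked = np.mean([noisy_capture() for _ in range(16)], axis=0)
# Averaging 16 frames cuts the noise standard deviation by about a factor of 4.
```

Averaging assumes a static scene; learned video denoisers aim for similar noise reduction without that assumption, which is why cleaning the data at the source helps every downstream vision task.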

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411