
New Intel RealSense D435i Stereo Depth Camera Adds 6 Degrees of Freedom Tracking

What’s New: Intel today introduced a new addition to the Intel® RealSense™ D400 series: the Intel RealSense Depth Camera D435i. This latest Intel RealSense camera includes a new inertial measurement unit (IMU) that enables developers to create solutions with more advanced depth-sensing and tracking capabilities for applications including drones, robotics and gaming. “Intel RealSense technology […]
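
The IMU's value for tracking comes from fusing its two sensors: a gyroscope integrates smoothly but drifts over time, while an accelerometer gives a drift-free but noisy gravity reference. A minimal complementary-filter sketch of that kind of fusion is shown below; the function names are illustrative only and are not part of the RealSense SDK.

```python
import math

def accel_to_pitch(ax, ay, az):
    # Pitch angle (radians) inferred from the gravity vector
    # measured by the accelerometer: noisy, but it does not drift.
    return math.atan2(-ax, math.sqrt(ay * ay + az * az))

def complementary_filter(pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    # Blend the integrated gyro rate (smooth, but drifts) with the
    # accelerometer-derived angle (noisy, but drift-free).
    return alpha * (pitch + gyro_rate * dt) + (1 - alpha) * accel_pitch
```

In practice the same idea extends to all three axes, and production systems typically use a full Kalman or Madgwick filter instead of this two-term blend.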

Synopsys Demonstration of Deep Learning Inference and Sparse Optical Flow

Gordon Cooper, product marketing manager at Synopsys, delivers a product demonstration at the May 2018 Embedded Vision Summit. Specifically, Cooper demonstrates combining deep learning with traditional computer vision by using the DesignWare EV6x Embedded Vision Processor's vector DSP and CNN engine. The tightly integrated CNN engine executes deep learning inference (using TinyYOLO, but any graph

Synopsys Demonstration of Android Neural Network Acceleration with EV6x

Gordon Cooper, product marketing manager, and Mischa Jonker, software engineer, both of Synopsys, deliver a product demonstration at the May 2018 Embedded Vision Summit. Specifically, Cooper and Jonker demonstrate how the DesignWare EV6x Embedded Vision Processor with deep learning can offload application processor tasks to increase performance and reduce power consumption, using an Android Neural

Embedded Vision Insights: November 13, 2018 Edition

LETTER FROM THE EDITOR Dear Colleague, The Embedded Vision Summit is the preeminent conference on practical computer vision, covering applications at the edge and in the cloud. It attracts a global audience of over one thousand product creators, entrepreneurs and business decision-makers who are creating and using computer vision technology. The Embedded Vision Summit has

Cadence Demonstration of On-Device AI for Image Classification

Megha Daga, senior technical marketing manager at Cadence, delivers a product demonstration at the May 2018 Embedded Vision Summit. Specifically, Daga demonstrates the highly capable Tensilica Vision P6 DSP, which does both computer vision and AI processing. The demo showcases Cadence’s automatic code generation tool for neural networks, the Xtensa Neural Network Compiler, which accepts

Cadence Demonstration of On-Device AI for Object Detection

Megha Daga, senior technical marketing manager at Cadence, delivers a product demonstration at the May 2018 Embedded Vision Summit. Specifically, Daga demonstrates the power of the Tensilica Vision P6 DSP to perform both computer vision and AI processing. The demo detects all the faces in the camera view using the Tiny Yolo V2 network, and
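
Detectors like Tiny YOLO V2 emit many overlapping candidate boxes per face, so a standard final step is non-maximum suppression (NMS): keep the highest-scoring box and discard near-duplicates by intersection-over-union (IoU). A minimal sketch of that step, not Cadence's implementation:

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    # Visit boxes in descending score order; keep a box only if it
    # does not overlap an already-kept box beyond the IoU threshold.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep
```

This greedy O(n²) form is the textbook version; hardware pipelines often run it on fixed-point scores with a capped candidate count.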

Arm Demonstration of Image Classification Using Arm NN and the Compute Library

Gian Marco Iodice, Senior Software Engineer at Arm, delivers a product demonstration at the May 2018 Embedded Vision Summit. Specifically, Iodice demonstrates image classification using Arm NN and the Compute Library, showing how they provide Arm-based platforms with the flexibility to switch between the CPU and GPU for easy and performant image classification.

Arm Demonstration of the Company’s Object Detection Processor

Alexey Lopich, Principal Hardware Engineer and Team Lead at Arm, delivers a product demonstration at the May 2018 Embedded Vision Summit. Specifically, Lopich demonstrates Arm’s Object Detection processor, showing how it detects objects – from 50×60 pixels to full screen – in real time, at high speed (60fps) and in high resolution (Full HD). He

Using MATLAB and TensorRT on NVIDIA GPUs

This article was originally published at NVIDIA's website. It is reprinted here with the permission of NVIDIA. As we design deep learning networks, how can we quickly prototype the complete algorithm—including pre- and postprocessing logic around deep neural networks (DNNs) —to get a sense of timing and performance on standalone GPUs? This question comes up
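
One tool-agnostic way to get the per-stage timing the article asks about is to instrument the pre-processing, inference, and post-processing stages separately. The sketch below uses plain Python timers with stand-in stage functions; it is a generic harness, not MATLAB's or TensorRT's profiling API.

```python
import time

def profile_pipeline(frame, preprocess, infer, postprocess):
    # Run one frame through a three-stage DNN pipeline, timing each
    # stage so pre/post-processing costs are visible next to inference.
    timings = {}

    t0 = time.perf_counter()
    x = preprocess(frame)
    timings["preprocess"] = time.perf_counter() - t0

    t0 = time.perf_counter()
    y = infer(x)
    timings["inference"] = time.perf_counter() - t0

    t0 = time.perf_counter()
    out = postprocess(y)
    timings["postprocess"] = time.perf_counter() - t0

    return out, timings
```

Averaging the timings over many frames (after a few warm-up iterations, since GPU runtimes typically JIT-optimize on first use) gives a fairer picture of steady-state throughput.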

Horizon Robotics Demonstration of Its Autonomous Driving Platform Powered by Its Embedded AI Chip

Su Li, Senior Technical Account Manager at Horizon Robotics, delivers a product demonstration at the May 2018 Embedded Vision Summit. Specifically, Li demonstrates the company’s autonomous driving computing platform, Matrix, based on Horizon Robotics’ self-developed embedded AI processor architecture, BPU2.0. Matrix has powerful perceptual computing capability and can provide a high-performance sensing system for L4 autonomous

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411