Processors

“Edge/Cloud Tradeoffs and Scaling a Consumer Computer Vision Product,” a Presentation from Cocoon Health

Pavan Kumar, Co-founder and CTO of Cocoon Health (formerly Cocoon Cam), delivers the presentation “Edge/Cloud Tradeoffs and Scaling a Consumer Computer Vision Product” at the Embedded Vision Alliance’s September 2019 Vision Industry and Technology Forum. Kumar explains how his company is evolving its use of edge and cloud vision computing in continuing to bring new […]


“Quantizing Deep Networks for Efficient Inference at the Edge,” a Presentation from Facebook

Raghuraman Krishnamoorthi, Software Engineer at Facebook, delivers the presentation “Quantizing Deep Networks for Efficient Inference at the Edge” at the Embedded Vision Alliance’s September 2019 Vision Industry and Technology Forum. Krishnamoorthi gives an overview of practical deep neural network quantization techniques and tools.
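The core idea behind the techniques surveyed in this talk can be illustrated with a minimal sketch (an illustrative example, not Facebook's actual tooling): symmetric per-tensor post-training quantization maps float32 weights to int8 with a single scale factor, cutting storage and bandwidth by 4x at a small accuracy cost.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization: map float weights to int8."""
    scale = np.abs(w).max() / 127.0  # one scale factor per tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover a float approximation of the original weights."""
    return q.astype(np.float32) * scale

w = np.array([-0.52, 0.13, 0.98, -1.27], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# per-element quantization error is bounded by half the scale
assert np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-6
```

Production toolchains add refinements on top of this (per-channel scales, calibration over activation statistics, quantization-aware training), but the round-and-clip mapping above is the common core.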


“Embedded Vision Applications Lead Way for Processors in AI: A Market Analysis of Vision Processors,” a Presentation from IHS Markit

Tom Hackenberg, Principal Analyst at IHS Markit, presents the “Embedded Vision Applications Lead Way for Processors in AI: A Market Analysis of Vision Processors” tutorial at the May 2019 Embedded Vision Summit. Artificial intelligence is not a new concept. Machine learning has been used for decades in large server and high performance computing environments. Why […]


“How to Choose a 3D Vision Sensor,” a Presentation from Capable Robot Components

Chris Osterwood, Founder and CEO of Capable Robot Components, presents the “How to Choose a 3D Vision Sensor” tutorial at the May 2019 Embedded Vision Summit. Designers of autonomous vehicles, robots and many other systems are faced with a critical challenge: Which 3D vision sensor technology to use? There are a wide variety of sensors […]


“Five+ Techniques for Efficient Implementation of Neural Networks,” a Presentation from Synopsys

Bert Moons, Hardware Design Architect at Synopsys, presents the “Five+ Techniques for Efficient Implementation of Neural Networks” tutorial at the May 2019 Embedded Vision Summit. Embedding real-time, large-scale deep learning vision applications at the edge is challenging due to their huge computational, memory and bandwidth requirements. System architects can mitigate these demands by modifying deep […]
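One widely used demand-reduction technique in this family is magnitude pruning: zeroing the smallest-magnitude weights so the network becomes sparse and cheaper to store and compute. The sketch below is an illustrative example (not Synopsys material), assuming only NumPy.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights in a tensor."""
    k = int(w.size * sparsity)  # number of weights to drop
    if k == 0:
        return w.copy()
    # threshold = magnitude of the k-th smallest weight
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w).astype(w.dtype)

w = np.array([[0.9, -0.05, 0.4],
              [-0.01, 0.7, 0.02]], dtype=np.float32)
pruned = magnitude_prune(w, sparsity=0.5)
# the three smallest of the six weights are now zero
```

In practice pruning is interleaved with fine-tuning to recover accuracy, and structured variants (dropping whole channels or filters) are preferred when the target hardware cannot exploit unstructured sparsity.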


“Building Complete Embedded Vision Systems on Linux — From Camera to Display,” a Presentation from Montgomery One

Clay D. Montgomery, Freelance Embedded Multimedia Developer at Montgomery One, presents the “Building Complete Embedded Vision Systems on Linux—From Camera to Display” tutorial at the May 2019 Embedded Vision Summit. There’s a huge wealth of open-source software components available today for embedding vision on the latest SoCs from suppliers such as NXP, Broadcom, TI and […]


Rapid Prototyping on NVIDIA Jetson Platforms with MATLAB

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. This article discusses how an application developer can prototype and deploy deep learning algorithms on hardware like the NVIDIA Jetson Nano Developer Kit with MATLAB. In previous posts, we explored how you can design and train deep learning […]


“Selecting the Right Imager for Your Embedded Vision Application,” a Presentation from Capable Robot Components

Chris Osterwood, Founder and CEO of Capable Robot Components, presents the “Selecting the Right Imager for Your Embedded Vision Application” tutorial at the May 2019 Embedded Vision Summit. The performance of your embedded vision product is inextricably linked to the imager and lens it uses. Selecting these critical components is sometimes overwhelming due to the […]


“Game Changing Depth Sensing Technique Enables Simpler, More Flexible 3D Solutions,” a Presentation from Magik Eye

Takeo Miyazawa, Founder and CEO of Magik Eye, presents the “Game Changing Depth Sensing Technique Enables Simpler, More Flexible 3D Solutions” tutorial at the May 2019 Embedded Vision Summit. Magik Eye is a global team of computer vision veterans who have developed a new method to determine depth from light directly without the need to […]


“Machine Learning at the Edge in Smart Factories Using TI Sitara Processors,” a Presentation from Texas Instruments

Manisha Agrawal, Software Applications Engineer at Texas Instruments, presents the “Machine Learning at the Edge in Smart Factories Using TI Sitara Processors” tutorial at the May 2019 Embedded Vision Summit. Whether it’s called “Industry 4.0,” “industrial internet of things” (IIoT) or “smart factories,” a fundamental shift is underway in manufacturing: factories are becoming smarter. This […]



Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411