Software


Combining an ISP and Vision Processor to Implement Computer Vision

An ISP (image signal processor) working in combination with one or more vision processors can deliver more robust computer vision capabilities than a vision processor can provide on its own. However, an ISP configured for computer vision may differ from one designed under the historical assumption that its outputs are destined for […]



Multi-sensor Fusion for Robust Device Autonomy

While visible-light image sensors may be the baseline “one sensor to rule them all” included in virtually all autonomous system designs, they’re not a panacea on their own. They can be combined with other sensor technologies: “situational awareness” sensors (standard and high-resolution radar, LiDAR, infrared and UV, ultrasound and sonar, etc.) and “positional awareness” sensors such as



“Key Trends Driving the Proliferation of Visual Perception,” a Presentation from the Embedded Vision Alliance

On December 4, 2018, Embedded Vision Alliance founder Jeff Bier delivered the presentation “The Four Key Trends Driving the Proliferation of Visual Perception” to the Bay Area Computer Vision and Deep Learning Meetup Group. From the event description: recent updates in computer vision markets and technology. Computer vision has gained…


Using Calibration to Translate Video Data to the Real World

This article was originally published at NVIDIA's website. It is reprinted here with the permission of NVIDIA. DeepStream SDK 3.0 is about seeing beyond pixels. DeepStream exists to make it easier for you to go from raw video data to metadata that can be analyzed for actionable insights. Calibration is a key step in this


Using MATLAB and TensorRT on NVIDIA GPUs

This article was originally published at NVIDIA's website. It is reprinted here with the permission of NVIDIA. As we design deep learning networks, how can we quickly prototype the complete algorithm—including pre- and post-processing logic around deep neural networks (DNNs)—to get a sense of timing and performance on standalone GPUs? This question comes up


“Harnessing Cloud Computer Vision In a Real-time Consumer Product,” a Presentation from Cocoon Cam

Pavan Kumar, Co-founder and CTO at Cocoon Cam, delivers the presentation "Harnessing Cloud Computer Vision In a Real-time Consumer Product" at the Embedded Vision Alliance's September 2018 Vision Industry and Technology Forum. Kumar explains how his successful start-up is using edge and cloud vision computing to bring amazing new capabilities to the previously stagnant product


“Outside-In Autonomous Systems,” a Presentation from Microsoft

Jie Liu, Visual Intelligence Architect in the Cloud and AI Platforms Group at Microsoft, delivers the presentation "Outside-In Autonomous Systems" at the Embedded Vision Alliance's September 2018 Vision Industry and Technology Forum. Liu shares his company's vision for smart environments that observe and understand space, people and things.


“Update on Khronos Standards for Vision and Machine Learning,” a Presentation from the Khronos Group

Neil Trevett, President of the Khronos Group, delivers the presentation "Update on Khronos Standards for Vision and Machine Learning" at the Embedded Vision Alliance's September 2018 Vision Industry and Technology Forum. Trevett shares updates on recent, current and planned Khronos standardization activities aimed at streamlining the deployment of embedded vision and AI.



“Rethinking Deep Learning: Neural Compute Stick,” a Presentation from Intel

Ashish Pai, Senior Director in the Neural Compute Program at Intel, presents the “Rethinking Deep Learning: Neural Compute Stick” tutorial at the May 2018 Embedded Vision Summit. In July 2017, Intel released the Movidius Neural Compute Stick (NCS)–a first-of-its-kind USB-based device for rapid prototyping and development of inference applications at the edge. NCS is powered


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411