Processors

Camera Interfaces Evolve to Address Growing Vision Processing Needs

Before a still image or video stream can be analyzed, it must first be captured and transferred to the processing subsystem. Cameras, along with the interfaces that connect them to the remainder of the system, are therefore critical aspects of any computer vision design. This article provides an overview of camera interfaces, and discusses their […]

Deep Learning with INT8 Optimization on Xilinx Devices

This is a reprint of a Xilinx-published white paper, which is also available here (1 MB PDF). Xilinx INT8 optimization provides the best-performing and most power-efficient computational techniques for deep learning inference. Xilinx's integrated DSP architecture can achieve 1.75X solution-level performance on INT8 deep learning operations compared to other FPGA DSP architectures. […]

“Intelligent Video Surveillance: Are We There Yet?,” a Presentation from CheckVideo

Nik Gagvani, President and General Manager of CheckVideo, delivers the presentation "Intelligent Video Surveillance: Are We There Yet?" at the September 2016 Embedded Vision Alliance Member Meeting. Gagvani provides an insider's perspective on vision-enabled video surveillance applications.

“Energy-efficient Hardware for Embedded Vision and Deep Convolutional Neural Networks,” a Presentation from MIT

Vivienne Sze, Assistant Professor at MIT, delivers the presentation "Energy-efficient Hardware for Embedded Vision and Deep Convolutional Neural Networks" at the September 2016 Embedded Vision Alliance Member Meeting. Sze describes the results of her team's recent research on optimized hardware for deep learning. Followup: per Professor Sze, "Slides available at: http://www.rle.mit.edu/eems/wp-content/uploads/2016/07/Sze-Energy-Efficient-Hardware-for-Embedded-Vision-and-Deep-Learning-CVPR-2016-EVW.pdf".

“Enabling Efficient Heterogeneous Processing Through Coherency,” a Presentation from the HSA Foundation

Dr. John Glossner, President of the HSA Foundation and CEO of GPT-US, delivers the presentation "Enabling Efficient Heterogeneous Processing Through Coherency" at the September 2016 Embedded Vision Alliance Member Meeting. Glossner describes the organization's goals and deliverables for enabling heterogeneous programming.

“What’s Augmented? What’s Reality?,” a Presentation from PTC

Jay Wright, President and General Manager of Vuforia at PTC, delivers the presentation "What's Augmented? What's Reality?" at the September 2016 Embedded Vision Alliance Member Meeting. Wright provides insights on augmented reality markets, applications and technology.

“Using Vision to Enable Autonomous Land, Sea and Air Vehicles,” a Keynote Presentation from NASA JPL

Larry Matthies, Senior Research Scientist at the NASA Jet Propulsion Laboratory, presents the "Using Vision to Enable Autonomous Land, Sea and Air Vehicles" keynote at the May 2016 Embedded Vision Summit. Say you’re an autonomous rover and you’ve just landed on Mars. Vexing questions now confront you: “Where am I and how am I moving?”

Vision Processing Opportunities in Virtual Reality

VR (virtual reality) systems are beginning to incorporate practical computer vision techniques, dramatically improving the user experience as well as reducing system cost. This article provides an overview of embedded vision opportunities in virtual reality systems, such as environmental mapping, gesture interface, and eye tracking, along with implementation details. It also introduces an industry alliance […]

Vision Processing Opportunities in Drones

UAVs (unmanned aerial vehicles), commonly known as drones, represent a rapidly growing market and increasingly leverage embedded vision technology for digital video stabilization, autonomous navigation, and terrain analysis, among other functions. This article reviews drone market sizes and trends, and then discusses embedded vision technology applications in drones, such as image quality optimization and autonomous navigation. […]
