Intel

“Vision for All?,” a Presentation from Intel

Jeff McVeigh, Vice President in the Software and Services Group and General Manager of Visual Computing Products at Intel, presents the "Vision for All?" tutorial at the May 2017 Embedded Vision Summit. So, you’ve decided to incorporate visual intelligence into your device or application. Will you need a team of computer vision PhDs working for […]


“Making OpenCV Code Run Fast,” a Presentation from Intel

Vadim Pisarevsky, Software Engineering Manager at Intel, presents the "Making OpenCV Code Run Fast" tutorial at the May 2017 Embedded Vision Summit. OpenCV is the de facto standard framework for computer vision developers, with a 16+ year history, approximately one million lines of code, thousands of algorithms and tens of thousands of unit tests. While […]


“The Battle Between Traditional Algorithms and Deep Learning: The 3 Year Horizon,” a Presentation from Intel’s Movidius Group

Cormac Brick, Director of Machine Intelligence for Intel's Movidius Group, presents the "The Battle Between Traditional Algorithms and Deep Learning: The 3 Year Horizon" tutorial at the May 2017 Embedded Vision Summit. Deep learning techniques are gaining in popularity for many vision tasks. Will they soon dominate every facet of embedded vision? Cormac Brick from […]



Helping Out Reality

This article was originally published at Intel's website. It is reprinted here with the permission of Intel. One day was a summer day like any other summer day. The next, everything had changed. Like swarms, like zombies they came, walking randomly in public places, staring lost into their smart phones, growing increasingly agitated. And then, […]


Vision Processing Opportunities in Virtual Reality

VR (virtual reality) systems are beginning to incorporate practical computer vision techniques, dramatically improving the user experience as well as reducing system cost. This article provides an overview of embedded vision opportunities in virtual reality systems, such as environmental mapping, gesture interface, and eye tracking, along with implementation details. It also introduces an industry alliance […]


Vision Processing Opportunities in Drones

UAVs (unmanned aerial vehicles), commonly known as drones, are a rapidly growing market and increasingly leverage embedded vision technology for digital video stabilization, autonomous navigation, and terrain analysis, among other functions. This article reviews drone market sizes and trends, and then discusses embedded vision technology applications in drones, such as image quality optimization, autonomous navigation, […]


“Dataflow: Where Power Budgets Are Won and Lost,” a Presentation from Movidius

Sofiane Yous, Principal Scientist in the machine intelligence group at Movidius, presents the "Dataflow: Where Power Budgets Are Won and Lost" tutorial at the May 2016 Embedded Vision Summit. This presentation showcases stories from the front lines in the battle between power and performance in embedded vision, deep learning and computational imaging applications. First, Yous […]


“Getting from Idea to Product with 3D Vision,” a Presentation from Intel and MathWorks

Anavai Ramesh, Senior Software Engineer at Intel, and Avinash Nehemiah, Product Marketing Manager for Computer Vision at MathWorks, present the "Getting from Idea to Product with 3D Vision" tutorial at the May 2016 Embedded Vision Summit. To safely navigate autonomously, cars, drones and robots need to understand their surroundings in three dimensions. While 3D vision […]


FPGAs for Deep Learning-based Vision Processing

FPGAs have proven to be a compelling solution for solving deep learning problems, particularly when applied to image recognition. The advantage of using FPGAs for deep learning is primarily derived from several factors: their massively parallel architectures, efficient DSP resources, and large amounts of on-chip memory and bandwidth. An illustration of a typical FPGA architecture […]



Deep Learning for Object Recognition: DSP and Specialized Processor Optimizations

Neural networks enable the identification of objects in still and video images with impressive speed and accuracy after an initial training phase. This so-called "deep learning" has been enabled by the combination of the evolution of traditional neural network techniques, with one latest-incarnation example known as a CNN (convolutional neural network), by the steadily increasing […]
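The CNN building block mentioned above, the convolution, can be sketched in a few lines of NumPy (an illustrative toy, not taken from the article; real implementations use the DSP and specialized-processor optimizations the article discusses):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D sliding-window correlation, the core op of a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A 3x3 vertical-edge kernel applied to a 4x4 image containing a vertical edge.
edge_kernel = np.array([[1, 0, -1]] * 3, dtype=float)
img = np.tile([0.0, 0.0, 1.0, 1.0], (4, 1))
resp = conv2d(img, edge_kernel)  # strong (negative) response along the edge
```

In a trained CNN the kernel weights are learned rather than hand-designed, and many such filters run per layer, which is why parallel hardware pays off so directly.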


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411