Embedded Vision Insights: August 16, 2016 Edition

FEATURED VIDEOS

"3D from 2D: Theory, Implementation, and Applications of Structure from Motion," a Presentation from videantisvideantis
Structure from motion uses a combination of algorithms to extract depth information from a single moving 2D camera. Starting from a calibrated camera, feature detection, and feature tracking, the algorithms calculate an accurate camera pose and a 3D point cloud representing the surrounding scene. This 3D scene information can be used in many ways, such as for automated car parking, augmented reality, and positioning. Marco Jacobs, Vice President of Marketing at videantis, introduces the theory behind structure from motion, describes some representative applications, and explores an efficient implementation for embedded applications.
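
To make the pipeline concrete, below is a minimal two-view sketch built on OpenCV rather than the videantis implementation described in the talk; the input frames and the intrinsic matrix K are placeholder assumptions, and the recovered translation (and therefore the point cloud) is defined only up to scale.

```python
# Minimal two-view structure-from-motion sketch (OpenCV-based, illustrative only).
import cv2
import numpy as np

def two_view_sfm(frame1, frame2, K):
    """Recover the relative camera pose and a sparse 3D point cloud from two frames
    taken by a single moving camera with known intrinsic matrix K."""
    # 1. Feature detection and matching (ORB keypoints, Hamming-distance matcher).
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 2. Estimate the essential matrix and recover the relative camera pose (R, t).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # 3. Triangulate the inlier matches into a 3D point cloud (up to scale).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inliers = mask.ravel().astype(bool)
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
    cloud = (pts4d[:3] / pts4d[3]).T  # homogeneous -> Euclidean, shape (N, 3)
    return R, t, cloud
```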

"Leveraging Computer Vision and Machine Learning to Power the Visual Commerce Revolution," a Presentation from Sight CommerceSight Commerce
Satya Mallick, Co-Founder of Sight Commerce, explains how his company is using vision to enable retailers like Bloomingdale's to create more engaging, personalized shopping experiences.

More Videos

FEATURED ARTICLES

FPGAs for Deep Learning-based Vision Processing
FPGAs have proven to be a compelling platform for deep learning, particularly when applied to image recognition, several engineers from Intel's Programmable Solutions Group (formerly Altera) state in this technical article. According to Intel, the advantage of FPGAs for deep learning derives primarily from their massively parallel architectures, efficient DSP resources, and large amounts of on-chip memory and bandwidth. More
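
As a rough illustration of where that parallelism comes from (a sketch, not code from the Intel article), the loop nest below writes out a direct convolution layer: every iteration of the inner three loops is an independent multiply-accumulate that an FPGA can unroll onto separate DSP blocks while keeping weights and activations in on-chip memory. The layer dimensions in the MAC count are illustrative placeholders.

```python
# Direct convolution written as an explicit loop nest (illustrative sketch).
import numpy as np

def conv2d_direct(x, w):
    """x: input activations (C_in, H, W); w: weights (C_out, C_in, K, K)."""
    c_out, c_in, k, _ = w.shape
    _, h, wd = x.shape
    out = np.zeros((c_out, h - k + 1, wd - k + 1))
    for oc in range(c_out):                  # output channels
        for oy in range(h - k + 1):          # output rows
            for ox in range(wd - k + 1):     # output columns
                acc = 0.0
                for ic in range(c_in):       # these inner loops are the
                    for ky in range(k):      # multiply-accumulates an FPGA
                        for kx in range(k):  # maps onto parallel DSP blocks
                            acc += x[ic, oy + ky, ox + kx] * w[oc, ic, ky, kx]
                out[oc, oy, ox] = acc
    return out

# One hypothetical layer: 64 input channels, 64 output channels, 3x3 kernels,
# 56x56 output -> roughly 116 million multiply-accumulates per frame.
macs = 64 * 64 * 3 * 3 * 56 * 56
print(f"{macs:,} MACs")  # 115,605,504
```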

Will iPhone 7 Finally Capture 3D Vision?
The trade media is abuzz with speculation that the iPhone 7 will incorporate 3D vision using dual rear-facing cameras, adding depth-sensing capability for mapping 3D environments and tracking body movements and facial expressions. The basis of this speculation, says CEVA's Yair Siegel in this blog post, is the multiple 3D technology acquisitions Apple has made over the past couple of years. More
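
As background on how a dual-camera setup produces depth (a generic illustration, not material from the CEVA post): with a calibrated, rectified stereo pair, per-pixel depth follows from disparity as Z = f × B / d. The block-matching settings, focal length, baseline, and image files below are placeholder assumptions.

```python
# Depth from a rectified stereo pair via block matching (illustrative sketch).
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified left image (placeholder)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified right image (placeholder)

# Compute per-pixel disparity; OpenCV returns fixed-point values scaled by 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

f_px, baseline_m = 1000.0, 0.012   # placeholder focal length (pixels) and 12 mm baseline
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = f_px * baseline_m / disparity[valid]   # Z = f * B / d
```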

More Articles

FEATURED NEWS

Next-generation Multi-platform Video Analytics SoC Results From Collaboration Between Imagination and ELVEES

Rockchip and CEVA Extend Partnership to Include CEVA-XM4 Intelligent Vision DSP

PLDA Group and Auviz Systems Team Up to Deliver FPGA-based Computer Vision Accelerators with QuickPlay

Movidius Announces Deep Learning Accelerator and Fathom Software Framework

Almalence SuperSensor Utilizes Qualcomm Snapdragon 820 with Qualcomm Hexagon DSP for Improved Mobile Video Quality

More News

UPCOMING INDUSTRY EVENTS

A Brief Introduction to Deep Learning for Vision and the Caffe Framework: A Free Webinar from the Alliance: August 24, 2016

ARC Processor Summit: September 13, 2016, Santa Clara, California

Deep Learning for Vision Using CNNs and Caffe: A Hands-on Tutorial: September 22, 2016, Cambridge, Massachusetts

IEEE International Conference on Image Processing (ICIP): September 25-28, 2016, Phoenix, Arizona

SoftKinetic DepthSense Workshop: September 26-27, 2016, San Jose, California

Sensors Midwest (use code EVA for a free Expo pass): September 27-28, 2016, Rosemont, Illinois

Embedded Vision Summit: May 1-3, 2017, Santa Clara, California

More Events

 
