Embedded Vision Insights: March 15, 2016 Edition


In this edition of Embedded Vision Insights:

LETTER FROM THE EDITOR

Dear Colleague,

The Embedded Vision Summit will be here in just a month and a half, and the published conference program is rapidly nearing completion. If you haven't visited the Summit area of the Alliance website lately, I've no doubt you'll be impressed with the breadth of presentations and workshops already listed there, with the remainder to follow shortly. Highlights include a Deep Learning Day on May 2, keynotes from Google and NASA respectively on May 2 and 3, and multiple workshops on May 4. And while you're on the website, I encourage you to register now for the Summit, as space is limited and seats are filling up!

Also, while you're on the Alliance website, make sure to check out all the other great new content there. It includes the latest in a series of technical articles from Imagination Technologies on heterogeneous computing for computer vision, along with a video tutorial from Basler on image quality, and a Mobile World Congress show report from videantis. Equally notable are the ten new demonstration videos of various embedded vision technologies, products and applications from Alliance members.

Finally, I encourage you to check out a new section of this newsletter, listing upcoming vision-related industry events from the Alliance and other organizations. If you know of an event that should be added to the list, please email me the details. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Please don't hesitate to let me know how the Alliance can better serve your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

FEATURED VIDEOS

"Programming Novel Recognition Algorithms on Heterogeneous Architectures," a Presentation from XilinxXilinx
Kees Vissers, Distinguished Engineer at Xilinx, presents the "Programming Novel Recognition Algorithms on Heterogeneous Architectures" tutorial at the May 2014 Embedded Vision Summit. Heterogeneous systems that combine processors with FPGA fabric are a high-performance implementation platform for image and vision processing. One of the significant hurdles in leveraging their compute potential has been the inherently low-level nature of programming the FPGA portion in RTL and connecting RTL blocks to processors. Complete software environments are now available that support algorithm development and programming exclusively in C/C++ and OpenCL. Vissers shows examples of relevant novel vision and recognition algorithms for Zynq-based devices, with a complete platform abstraction of any RTL design, High-Level Synthesis interconnect, or processor low-level drivers. He also shows the outstanding system-level performance and power consumption of a number of applications programmed on these devices.
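
As a rough illustration of the style of C/C++-level FPGA programming Vissers describes, the sketch below shows a simple pixel-threshold kernel written the way a high-level synthesis tool such as Xilinx's Vivado HLS expects; the function name, frame dimensions and pragma settings are illustrative assumptions, not code from the presentation.

    #include <cstdint>

    // Hypothetical HLS-friendly kernel: binarize a grayscale frame.
    // The loop is written so an HLS tool can pipeline it to roughly
    // one pixel per clock; the dimensions and pragma are assumptions.
    constexpr int WIDTH  = 640;
    constexpr int HEIGHT = 480;

    void threshold_frame(const uint8_t in[WIDTH * HEIGHT],
                         uint8_t out[WIDTH * HEIGHT],
                         uint8_t thresh) {
        for (int i = 0; i < WIDTH * HEIGHT; ++i) {
    #pragma HLS PIPELINE II=1
            out[i] = (in[i] > thresh) ? 255 : 0;
        }
    }

The same function compiles and runs as ordinary software, which is part of the appeal: the algorithm can be developed and debugged in C++ before being synthesized to FPGA logic.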

BDTI Demonstrations of OpenCV- and OpenCL-based Vision Algorithms
Jeremy Giddings, Business Development Director at Berkeley Design Technology, Inc. (BDTI), demonstrates the company's latest embedded vision technologies and products at the May 2015 Embedded Vision Summit. Specifically, Giddings demonstrates OpenCV- and OpenCL-based real-time augmented reality, along with GPU-accelerated background subtraction.
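
For readers curious what GPU-accelerated background subtraction looks like in code, here is a minimal OpenCV 3.x sketch (not BDTI's implementation); using cv::UMat lets OpenCV's transparent OpenCL path offload the work to a GPU when one is available, falling back to the CPU otherwise.

    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture cap(0);    // default camera; the index is an assumption
        cv::Ptr<cv::BackgroundSubtractor> subtractor =
            cv::createBackgroundSubtractorMOG2();

        cv::UMat frame, mask;       // UMat enables OpenCL offload where supported
        while (cap.read(frame)) {
            subtractor->apply(frame, mask);   // moving objects appear white in the mask
            cv::imshow("foreground", mask);
            if (cv::waitKey(1) == 27) break;  // Esc to quit
        }
        return 0;
    }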

More Videos

FEATURED ARTICLES

OpenVX Enables Portable, Efficient Vision Software
Key to the widespread adoption of embedded vision is the ease of developing software that runs efficiently on a diversity of hardware platforms, delivering high performance and low power consumption with cost-effective use of system resources. In the past, this has been a tall order, since achieving high performance at low power has historically required significant code optimization for a particular device architecture, thereby hampering portability to other architectures. Fortunately, this situation is beginning to change with the emergence of OpenVX, an open standard created and maintained by the not-for-profit Khronos Group industry consortium. More
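
As a hedged sketch of what the OpenVX programming model looks like in practice, the C-API snippet below builds a graph containing a single standard kernel, lets the implementation verify and optimize it for the target hardware, and then executes it; the image dimensions and the choice of a Gaussian blur node are placeholders rather than anything prescribed by the article.

    #include <VX/vx.h>

    int main() {
        // Create an OpenVX context and a graph of standard vision kernels.
        vx_context context = vxCreateContext();
        vx_graph graph = vxCreateGraph(context);

        // 640x480 8-bit grayscale input and output images (sizes are placeholders).
        vx_image input  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
        vx_image output = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);

        // A single Gaussian blur node; real pipelines chain many such nodes.
        vxGaussian3x3Node(graph, input, output);

        // The implementation optimizes the verified graph for its hardware
        // before execution, which is where the portability-with-efficiency
        // promise comes from.
        if (vxVerifyGraph(graph) == VX_SUCCESS)
            vxProcessGraph(graph);

        vxReleaseImage(&input);
        vxReleaseImage(&output);
        vxReleaseGraph(&graph);
        vxReleaseContext(&context);
        return 0;
    }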

Computer Vision Empowers Autonomous Drones
Computer vision technology complements GPS sensors in supporting autonomous flying features such as object tracking, environment sensing, and collision avoidance. However, 4K video and other functions such as 3D depth map creation pose significant computational challenges for the high-quality image- and video-processing pipelines in visually intelligent drones' electronics. And while computer vision is clearly a dominant aspect of the design evolution in such flying robots, it remains algorithmically demanding, requiring significant intelligence and analytical capabilities in order for drones to understand what they are filming. More
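
As a concrete, much-simplified example of one such workload, the OpenCV sketch below computes a disparity map from a rectified stereo image pair using block matching; depth can then be derived from disparity given the camera baseline and focal length. The file names and matcher parameters are illustrative assumptions, not taken from the article.

    #include <opencv2/opencv.hpp>

    int main() {
        // Rectified left/right views from a stereo camera (file names are placeholders).
        cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
        cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);

        // Block-matching stereo: 64 disparity levels, 15x15 matching window.
        cv::Ptr<cv::StereoBM> matcher = cv::StereoBM::create(64, 15);

        cv::Mat disparity;
        matcher->compute(left, right, disparity);   // 16x fixed-point disparity values

        // Rescale for display; larger disparity means a closer object.
        cv::Mat display;
        disparity.convertTo(display, CV_8U, 255.0 / (64 * 16));
        cv::imwrite("disparity.png", display);
        return 0;
    }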

More Articles

FEATURED DOWNLOADS

Deep Learning from a Mobile Perspective
Yangqing Jia created the Caffe framework while a graduate student researcher at UC Berkeley. He was later a member of the Google Brain project and recently joined Facebook, working on various aspects of deep learning research and engineering. At the Alliance's February 2016 tutorial on deep learning for computer vision using convolutional neural networks and Caffe, Jia gave a presentation about implementing deep learning in resource-constrained systems such as mobile phones and embedded devices.
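
To give a flavor of what such a deployment involves, here is a minimal C++ sketch that loads a trained Caffe network and runs a single CPU forward pass, the typical pattern on mobile and embedded targets; the file names and the stand-in input buffer are assumptions for illustration, not material from Jia's presentation.

    #include <caffe/caffe.hpp>

    #include <algorithm>
    #include <vector>

    int main() {
        // CPU-only mode is the common choice on resource-constrained devices.
        caffe::Caffe::set_mode(caffe::Caffe::CPU);

        // Network definition and trained weights (file names are placeholders).
        caffe::Net<float> net("deploy.prototxt", caffe::TEST);
        net.CopyTrainedLayersFrom("model.caffemodel");

        // Copy a preprocessed image into the input blob, then run inference.
        caffe::Blob<float>* input = net.input_blobs()[0];
        std::vector<float> image(input->count(), 0.0f);   // stand-in pixel data
        std::copy(image.begin(), image.end(), input->mutable_cpu_data());

        net.Forward();

        // For a classification network the output blob holds per-class scores.
        const float* scores = net.output_blobs()[0]->cpu_data();
        (void)scores;   // in a real application, pick the highest-scoring class
        return 0;
    }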

More Downloads

FEATURED COMMUNITY DISCUSSIONS

FPGA Design Engineer (Job Posting)

More Community Discussions

FEATURED NEWS

iCatch Technology Selects CEVA Imaging and Vision DSP for Digital Video and Image Product Line

FotoNation to Deliver Next Generation Multimedia Experiences on Smartphones

ON Semiconductor Expands Optical Image Stabilization Portfolio, Bringing Superior Picture Quality to Built-In Camera Applications

Texas Instruments "Jacinto" Processors Power Volkswagen's MIB II Infotainment Systems

More News

UPCOMING INDUSTRY EVENTS

NVIDIA GPU Technology Conference (GTC): April 4-7, 2016, San Jose, California

Embedded Vision Summit: May 2-4, 2016, Santa Clara, California

NXP FTF Technology Forum: May 16-19, 2016, Austin, Texas

Augmented World Expo: June 1-2, 2016, Santa Clara, California

Low-Power Image Recognition Challenge (LPIRC): June 5, 2016, Austin, Texas

Sensors Expo: June 21-23, 2016, San Jose, California

IEEE Computer Vision and Pattern Recognition (CVPR) Conference: June 26-July 1, 2016, Las Vegas, Nevada

IEEE International Conference on Image Processing (ICIP): September 25-28, 2016, Phoenix, Arizona
