Embedded Vision Insights: December 13, 2016 Edition



LETTER FROM THE EDITOR

Dear Colleague,

If you're creating systems that see, the Embedded Vision Summit is the place to be! Plan now to join us May 1-3, 2017 in Santa Clara as stakeholders from every corner of the world's embedded vision ecosystem gather to examine the latest advances in computer vision, machine learning, and embedded intelligence.

Super Earlybird registration rates are available for a limited time only using discount code nlevi1213, so register now to secure your place.

Attend and take a deep dive into the latest innovations in computer vision enablement — from processors and algorithms to software, sensors, and development tools and services. Meet the top technologists in this fast-growing field and network with the product and system design engineers, business leaders, suppliers, market analysts, entrepreneurs, and investors at the forefront of vision-based intelligence.

Key themes at this year's Summit include:

  • Embedded vision applications, including autonomous vehicles, VR/AR, robotics, drones, security and surveillance, medical/healthcare, consumer electronics, manufacturing and control automation, and more.
  • Embedded vision design and development techniques, including algorithms (e.g., deep learning/CNNs), 3D perception, object/gesture recognition, low-power design options, hardware and component design/selection, deployment and scalability considerations, etc.
  • Embedded vision business opportunities, including market research, business models, investment trends, and other profit-building insights.

Immerse yourself in the Summit's multi-track conference program, which explores the technical and engineering underpinnings of embedded vision and examines the exciting new business opportunities it presents. Participate in a variety of technical tutorials and hands-on sessions, and explore the Technology Showcase to see what's new and what's next in embedded vision.

Register now using discount code nlevi1213 and save! We look forward to seeing you there.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

OPENCL AND OPENCV ON FPGAS AND GPUS

"Efficient Implementation of Convolutional Neural Networks using OpenCL on FPGAs," a Presentation From IntelIntel
Convolutional neural networks (CNNs) are becoming increasingly popular in a variety of embedded applications that incorporate vision processing. The structure of CNN systems is characterized by cascades of FIR filters and transcendental functions. FPGA technology offers a very efficient way of implementing these structures, allowing designers to build custom hardware datapaths that realize the CNN directly. One challenge of using FPGAs is the design flow, which has traditionally centered on tedious hardware description languages. In this talk, Deshanand Singh, Director of Software Engineering at Altera (now Intel), gives a detailed explanation of how CNN algorithms can be expressed in OpenCL and compiled directly to FPGA hardware. He details code optimizations and compares their efficiency with that of hand-coded implementations.
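
To make the "cascade of FIR filters" idea concrete, here is a minimal sketch (not from the presentation) of a single convolution layer written as an OpenCL C kernel, held in a C++ raw-string literal as it might be handed to an FPGA OpenCL compiler. The kernel name, data layout, and ReLU activation are illustrative assumptions.

    // Hypothetical sketch: a 2D convolution layer as an OpenCL C kernel, of the
    // kind an FPGA OpenCL flow can compile into a custom hardware datapath.
    static const char *kConvKernelSrc = R"CLC(
    __kernel void conv2d(__global const float *input,    // H x W input feature map
                         __global const float *weights,  // K x K filter taps
                         __global float       *output,   // (H-K+1) x (W-K+1) output
                         const int width,
                         const int K)
    {
        const int x = get_global_id(0);   // output column
        const int y = get_global_id(1);   // output row

        // Each output pixel is an FIR filter: a dot product of the K x K
        // input window with the filter taps.
        float acc = 0.0f;
        for (int ky = 0; ky < K; ++ky)
            for (int kx = 0; kx < K; ++kx)
                acc += input[(y + ky) * width + (x + kx)] * weights[ky * K + kx];

        // A nonlinearity (here ReLU, for simplicity) follows the filter.
        output[y * (width - K + 1) + x] = fmax(acc, 0.0f);
    }
    )CLC";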

"Understanding Adaptive Machine Learning Vision Algorithms and Implementing Them on GPUs and Heterogeneous Platforms," a Presentation From AMDAMD
Machine learning algorithms are pervasive in computer vision: from object detection to object tracking to full scene recognition, generative or discriminative learning dominates the space, as it is much easier (and closer to how biological systems work) to program learning algorithms and learn by example than to directly create a program that would perform the same tasks from (yet unknown) first principles. However, unlike biological systems, machine learning algorithms tend to treat learning as a pre-processing step (training) rather than as an online process. In this presentation, Harris Gasparakis, OpenCV Manager at AMD, examines archetypes of algorithms that contain magic numbers and/or fixed logic, and investigates adaptive generalizations and their implementation. Algorithms discussed include constrained energy minimization, adaptive basis function models, mixture models and graph models, with applications in object detection and tracking. Gasparakis shows that OpenCL 2.0/HSA (Heterogeneous System Architecture) and integrated GPUs enable new design patterns and algorithms, enhancing the arsenal of tools available to high-performance vision scientists. Where possible, Gasparakis uses OpenCV 3.0 as the basis of his implementations.
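
For readers unfamiliar with how OpenCV 3.0 exposes GPU offload, here is a minimal sketch (not taken from the talk) of its "transparent API": using cv::UMat lets the same calls run as OpenCL kernels on an integrated GPU when one is available, falling back to the CPU otherwise. The file name and filter parameters are illustrative assumptions.

    // Minimal sketch of OpenCV 3.0's transparent API (T-API) with cv::UMat.
    #include <opencv2/core.hpp>
    #include <opencv2/core/ocl.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/imgproc.hpp>
    #include <iostream>

    int main() {
        std::cout << "OpenCL available: "
                  << (cv::ocl::haveOpenCL() ? "yes" : "no") << std::endl;

        // Upload the image into a UMat (device-backed when OpenCL is enabled).
        cv::UMat frame;
        cv::imread("frame.png", cv::IMREAD_COLOR).copyTo(frame);
        if (frame.empty()) {
            std::cerr << "Could not read frame.png" << std::endl;
            return 1;
        }

        // These calls dispatch to OpenCL kernels on the GPU when possible.
        cv::UMat gray, blurred, edges;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 1.5);
        cv::Canny(blurred, edges, 50, 150);

        // Map the result back to host memory for further processing.
        cv::Mat result = edges.getMat(cv::ACCESS_READ);
        std::cout << "Edge map: " << result.cols << " x " << result.rows << std::endl;
        return 0;
    }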

CONNECTING CAMERAS TO SYSTEMS

Camera Interfaces Evolve to Address Growing Vision Processing Needs
Before a still image or video stream can be analyzed, it must first be captured and transferred to the processing subsystem. Cameras, along with the interfaces that connect them to the rest of the system, are therefore critical aspects of any computer vision design. This collaborative technical article from Alliance member companies Allied Vision, Basler, Intel, videantis and Xilinx, along with partner organization the MIPI Alliance, provides an overview of camera interfaces and discusses their evolution and associated standards, both in general and with respect to particular applications. More

Building an Angstrom Linux Distribution with OpenCV and Camera Driver Support
If your real-time image processing application, such as a driver monitoring system, depends on OpenCV, you need an OpenCV build environment for the target board. This article from Alliance member company PathPartner guides you through the steps necessary to create a Linux OS build with OpenCV and camera driver support for Intel (Altera) SoC FPGAs. More
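
Once a build like the one the article describes is in place, a small capture program is a handy way to confirm that both the OpenCV libraries and the camera driver work on the target. The sketch below is a hypothetical smoke test (the device index, frame count, and cross-compilation details are assumptions, not from the article) that would be cross-compiled with the board's toolchain and run on the target.

    // Hypothetical smoke test: open /dev/video0 through OpenCV's V4L2 backend,
    // grab a few frames, and report their dimensions.
    #include <opencv2/core.hpp>
    #include <opencv2/videoio.hpp>
    #include <iostream>

    int main() {
        cv::VideoCapture cap(0);   // default camera, /dev/video0
        if (!cap.isOpened()) {
            std::cerr << "Camera driver or OpenCV videoio backend not available" << std::endl;
            return 1;
        }
        for (int i = 0; i < 10; ++i) {
            cv::Mat frame;
            if (!cap.read(frame) || frame.empty()) {
                std::cerr << "Failed to capture frame " << i << std::endl;
                return 1;
            }
            std::cout << "Frame " << i << ": " << frame.cols << " x " << frame.rows << std::endl;
        }
        return 0;
    }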

UPCOMING INDUSTRY EVENTS

Cadence Embedded Neural Network Summit – Deep Learning: The New Moore’s Law: February 1, 2017, San Jose, California

Embedded World Conference: March 14-16, 2017, Messezentrum Nuremberg, Germany

Embedded Vision Summit: May 1-3, 2017, Santa Clara, California

More Events

FEATURED NEWS

Xilinx FPGAs to be Deployed in New Amazon EC2 F1 Instances, Accelerating Genomics, Financial Analytics, Video Processing, Big Data, Security, and Machine Learning Inference

Khronos Announces VR Standards Initiative

DJI Brings Two New Flagship Drones to Lineup Featuring Myriad 2 VPUs

Basler's First 3D Camera Enters Series Production

itSeez3D Announces the Avatar SDK for Creating Photo Realistic 3D Avatars for Games, VR and AR with a Smartphone

More News


Contact

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596
Phone: +1 (925) 954-1411