LETTER FROM THE EDITOR
Dear Colleague,
The Embedded Vision Summit is pleased to announce our keynote speaker for May 2nd: Jitendra Malik, Arthur J. Chick Professor and Chair, Dept. of Electrical Engineering & Computer Science, U.C. Berkeley. We also want to remind you that Early Bird pricing ends in two days. Use promotional code nlevi0313 when registering and save 15%!
Deep learning and neural networks, coupled with high-performance computing, have led to remarkable advances in computer vision. But while we can now detect people and objects in a scene, we’re still quite short of "visual understanding": the ability, for example, to predict what might happen next in a video sequence. Professor Malik’s keynote talk, "Deep Visual Understanding from Deep Learning," will review progress in visual understanding, give an overview of the state of the art, and offer a tantalizing glimpse of what the future holds. This is your opportunity to learn from a pioneer whose cutting-edge research is breaking new ground.
Join us for three days of computer vision inspiration, including multiple expert presentations on deep learning for vision, 3D perception, and low-power vision implementation, plus much more.
Early Bird pricing ends in two days! Register now using promotional code nlevi0313 and save 15%. Also, we still have a few discounted hotel rooms available at the Santa Clara Hyatt Regency. Book your room now before we sell out!
Brian Dipert
Editor-In-Chief, Embedded Vision Alliance
EMBEDDED VISION SOFTWARE DEVELOPMENT
Using the OpenCL C Kernel Language for Embedded Vision Processors
OpenCL C is a programming language that is used to write computation kernels. It is based on C99 and extended to support features such as multiple levels of memory hierarchy, parallelism and synchronization. This talk from Seema Mirchandaney, Engineering Manager for Software Tools at Synopsys, focuses on the benefits and ease of programming vision-based kernels using the key features of OpenCL C. In addition, Mirchandaney describes language extensions that allow programmers to take advantage of hardware features typical of embedded vision processors, such as wider vector widths, sophisticated accumulator forms of instructions, and scatter/gather capabilities. The talk also addresses advanced topics, such as whole function vectorization support available in the compiler and the benefits of hardware support for predication in the context of lane-based control flow and OpenCL C.
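For a flavor of the language, here is a minimal sketch of an OpenCL C pixel kernel in scalar and 16-wide vector forms (the kernel names, the gain operation, and the vector width are illustrative assumptions, not examples taken from the talk):

    // Scalar version: one work-item per output pixel.
    __kernel void brighten(__global const uchar *src,
                           __global uchar *dst,
                           float gain)
    {
        size_t i = get_global_id(0);
        dst[i] = convert_uchar_sat((float)src[i] * gain);  // saturating store
    }

    // 16-wide variant: each work-item loads, scales, and stores 16 pixels,
    // the kind of wide-vector pattern embedded vision processors accelerate.
    __kernel void brighten16(__global const uchar *src,
                             __global uchar *dst,
                             float gain)
    {
        size_t i = get_global_id(0);
        uchar16 px = vload16(i, src);  // load 16 consecutive pixels
        vstore16(convert_uchar16_sat(convert_float16(px) * gain), i, dst);
    }

The whole-function vectorization support Mirchandaney describes aims to derive the second form from the first automatically.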
NVIDIA VisionWorks, a Toolkit for Computer Vision
In this talk, Elif Albuz, Technical Lead for the VisionWorks Toolkit at NVIDIA, introduces the NVIDIA VisionWorks toolkit, a software development package for computer vision and image processing. VisionWorks implements and extends the Khronos OpenVX standard, and is optimized for CUDA-enabled GPUs and SOCs, supporting computer vision applications on a scalable and flexible platform. VisionWorks implements a thread-safe API and framework for seamlessly adding user-defined primitives. The talk gives an overview of the VisionWorks toolkit, OpenVX API and framework, and computer vision pipeline examples exercising the OpenVX API. The session then presents an example showing integration of the library API into a computer vision pipeline, and describes CUDA interoperability and ways to transition existing computer vision pipelines into the OpenVX API.
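For context, VisionWorks programs are built on the OpenVX graph model: processing nodes are connected into a graph, verified once, then executed as a pipeline. A minimal sketch using standard Khronos OpenVX C calls follows (the image sizes and the two filter nodes are illustrative choices; VisionWorks supplies CUDA-optimized implementations and extensions behind this same API):

    #include <VX/vx.h>

    int main(void)
    {
        vx_context ctx   = vxCreateContext();
        vx_graph   graph = vxCreateGraph(ctx);

        /* 640x480 8-bit grayscale images; the intermediate is "virtual",
           so the runtime is free to keep it in device memory. */
        vx_image in  = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_U8);
        vx_image tmp = vxCreateVirtualImage(graph, 640, 480, VX_DF_IMAGE_U8);
        vx_image out = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_U8);

        /* Two-node pipeline: Gaussian blur feeding a box filter. */
        vxGaussian3x3Node(graph, in, tmp);
        vxBox3x3Node(graph, tmp, out);

        /* Verification validates the graph and lets the runtime optimize it
           before repeated execution. */
        if (vxVerifyGraph(graph) == VX_SUCCESS)
            vxProcessGraph(graph);

        vxReleaseGraph(&graph);
        vxReleaseContext(&ctx);
        return 0;
    }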
CLOUD VS. EDGE PROCESSING OPTIONS
Should Visual Intelligence Reside in the Cloud or at the Edge? Trade-offs in Privacy, Security and Performance
The Internet of Things continues to expand and evolve, encompassing ever more intelligent connected devices that respond to people’s needs and alert them to important events. As these devices become more aware, privacy and security are becoming increasingly critical concerns in homes and workplaces alike. Start-ups and large corporations face the question of whether intelligence should reside in edge devices, in the cloud, or in a combination of the two. Finding the right answer isn’t simple. In this talk, Andreas Gal, CEO of Silk Labs, explores the trade-offs involved in deciding between edge-based and cloud-based processing for devices incorporating computer vision and machine learning.
Edge Intelligence in the Computer Vision Market
Smart cameras have been around for some time, notes Tractica Principal Analyst Anand Joshi, and their level of intelligence has increased significantly over time. The number of them installed and active around the world, as well as the amount of data each generates, is growing dramatically year over year. These factors are among the primary drivers of their increasing intelligence, enabled by computer vision technology, which among other things allows them to decide in real time which data within the captured video feed should be discarded and which should be sent to a server for further processing. More
UPCOMING INDUSTRY EVENTS
Embedded World Conference: March 14-16, 2017, Messezentrum Nuremberg, Germany
Silicon Catalyst: Jeff Bier, "When Every Device Can See: AI & Embedded Vision in Products," March 22, 2017, Mountain View, California
Machine Learning Developers Conference: April 26-27, 2017, Santa Clara, California
Embedded Vision Summit: May 1-3, 2017, Santa Clara, California
Sensors Expo & Conference: June 27-29, 2017, San Jose, California
More Events
FEATURED NEWS
Imagination’s New PowerVR Furian GPU Architecture Will Deliver Captivating and Engaging Visual and Vision Experiences
NVIDIA Jetson TX2 Enables AI at the Edge
In Series Production: Basler dart Camera Modules with BCON Interface and Development Kit for Embedded Vision Applications
ON Semiconductor Establishes Advanced Sensor Design Center in Europe
Luxoft Launches Symtavision 4.0 Timing Design, Analysis and Verification Tool Suite
More News