LETTER FROM THE EDITOR
Dear Colleague,
The Embedded Vision Summit, the only event dedicated entirely to the creation of products and services that see, is right around the corner, and the Super Early Bird Discount ends today. Join industry leaders, engineers, marketers, business and technology executives and analysts for three robust days of learning, exploration and networking. This is the preeminent event for anyone involved with the computer vision industry, and we hope that you'll join us May 1-3 in Santa Clara, California!
Today, Wednesday, February 1, is the last day for the Super Early Bird Discount rate! Register now using discount code nlevi0201 and take advantage of our lowest offer, 25% off. Prices will increase beginning tomorrow. We look forward to seeing you there!
Brian Dipert
Editor-In-Chief, Embedded Vision Alliance
OBJECT RECOGNITION FOR DEVICE AUTONOMY AND OTHER APPLICATIONS
The Evolution of Object Recognition in Embedded Systems
Camera-enabled devices have made great strides in performance and quality in recent years, but they still fall far short of human visual perception. To reach their potential, vision-enabled systems must perform more intelligent scene analysis, including more robust object recognition. In this presentation, Moshe Shahar, Director of System Architecture at CEVA, explores how techniques for object recognition have evolved, with particular emphasis on algorithms and implementation techniques that enable robust recognition in power- and cost-constrained devices.
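As a concrete reference point for the classical end of that evolution, the sketch below runs OpenCV's stock HOG-plus-linear-SVM pedestrian detector on a single frame. This is our illustrative example, not code from the presentation; the input filename is hypothetical, and the stride, padding and scale parameters are shown only because they are the knobs that trade detection robustness against the compute budget of a constrained device.

```cpp
// Illustrative sketch (not from the presentation): classical object
// recognition with OpenCV's HOG descriptor + linear SVM people detector.
#include <opencv2/imgcodecs.hpp>
#include <opencv2/objdetect.hpp>
#include <cstdio>
#include <vector>

int main() {
    // Load one camera frame; "frame.png" is a hypothetical input.
    cv::Mat frame = cv::imread("frame.png");
    if (frame.empty()) return 1;

    // HOG descriptor preloaded with OpenCV's default pedestrian SVM.
    cv::HOGDescriptor hog;
    hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

    // Sliding-window, multi-scale detection: a finer stride and smaller
    // scale step improve robustness but raise the compute cost -- the
    // central tension on power- and cost-constrained devices.
    std::vector<cv::Rect> detections;
    hog.detectMultiScale(frame, detections,
                         /*hitThreshold=*/0.0,
                         /*winStride=*/cv::Size(8, 8),
                         /*padding=*/cv::Size(16, 16),
                         /*scale=*/1.05);

    for (const cv::Rect& r : detections)
        std::printf("person at (%d, %d) %dx%d\n", r.x, r.y, r.width, r.height);
    return 0;
}
```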
Trends, Challenges and Opportunities in Vision-Based Automotive Safety and Autonomous Driving Systems
The automotive industry has embraced embedded vision as a key safety technology. Many car models today ship with vision-based safety features such as forward collision avoidance and lane departure warning. New Euro NCAP rating requirements are accelerating the adoption of vision-based safety features. And going forward, it’s clear that vision will play a critical role in enabling vehicles to become more autonomous. The increased safety and autonomy enabled by these systems brings obvious value to consumers, and with roughly 90 million light vehicles manufactured per year, the opportunity is compelling for technology suppliers. But developers of automotive vision systems (and their suppliers) face tough challenges. For example, to enable multiple safety features, vision systems are increasingly expected to run multiple sophisticated algorithms in parallel on high-resolution, high-frame-rate video streams. This requires an enormous amount of processing power, which must be delivered with extremely low power consumption. At the same time, new functional safety requirements intended to ensure reliability and security place additional burdens on system and component suppliers. These requirements will continue to become more challenging as safety features expand and as vehicles become increasingly autonomous. This presentation from Simon Morris, CEO of CogniVue (now part of NXP Semiconductors), provides an overview of embedded vision for automotive safety, focusing on key requirements, trends and challenges, and ways that these challenges can be met.
GPU-ACCELERATED VISION PROCESSING
Understanding the Role of Integrated GPUs in Vision Applications
Today, systems-on-chip (SoCs) used in applications such as smart TVs, smartphones and tablets typically include a multi-core CPU with single-instruction, multiple-data (SIMD) capabilities, as well as a GPU that can be harnessed for parallel computation (in addition to its traditional role handling 3D graphics). This presentation from Roberto Mijat, Visual Computing Marketing Manager at ARM, explores when it makes sense to utilize the GPU as a coprocessor for computer vision algorithms, what to expect from the GPU, and other key considerations. Mijat illustrates these concepts using real-life use cases from applications such as real-time face beautification, gesture-based user interfaces, and vision-based automotive safety.
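As a minimal illustration of the coprocessor model Mijat describes (our sketch, not material from the talk), the following host program offloads a per-pixel RGBA-to-grayscale conversion to the integrated GPU via OpenCL 1.2, a compute API widely supported on such SoCs. The synthetic 1080p buffer is an assumption for illustration, and error handling is omitted for brevity.

```cpp
// Sketch only: dispatch a per-pixel vision kernel to an integrated GPU
// through OpenCL 1.2. Assumes an OpenCL runtime and headers are installed.
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <cstdio>
#include <vector>

// Device kernel: integer approximation of BT.601 luma (77+150+29 = 256).
static const char* kSource = R"CLC(
__kernel void rgba_to_gray(__global const uchar4* src, __global uchar* dst) {
    size_t i = get_global_id(0);
    uchar4 p = src[i];
    dst[i] = (uchar)((77 * p.x + 150 * p.y + 29 * p.z) >> 8);
}
)CLC";

int main() {
    // Synthetic 1920x1080 RGBA frame (an assumption for this example).
    const size_t kPixels = 1920 * 1080;
    std::vector<cl_uchar> rgba(kPixels * 4, 128), gray(kPixels);

    // Pick the first GPU on the first platform.
    cl_platform_id platform; cl_device_id device;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, nullptr);

    // Build the kernel from source at runtime.
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, nullptr);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(prog, "rgba_to_gray", nullptr);

    // Move the frame to device memory, run one work-item per pixel,
    // and read the grayscale result back to the host.
    cl_mem src = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                rgba.size(), rgba.data(), nullptr);
    cl_mem dst = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, gray.size(), nullptr, nullptr);
    clSetKernelArg(kernel, 0, sizeof(src), &src);
    clSetKernelArg(kernel, 1, sizeof(dst), &dst);
    clEnqueueNDRangeKernel(q, kernel, 1, nullptr, &kPixels, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(q, dst, CL_TRUE, 0, gray.size(), gray.data(), 0, nullptr, nullptr);
    std::printf("first gray pixel: %u\n", gray[0]);

    // Release device resources.
    clReleaseMemObject(src); clReleaseMemObject(dst);
    clReleaseKernel(kernel); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}
```

Whether an offload like this wins depends on exactly the considerations the talk covers: data-transfer overhead between CPU and GPU memory, kernel launch latency, and whether the algorithm exposes enough per-pixel parallelism to keep the GPU busy.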
A Smart, Efficient Approach to Mobile Compute
Imagination Technologies designed its PowerVR Tile-Based Deferred Rendering (TBDR) graphics architecture more than 20 years ago with a focus on efficiency across performance, power consumption and system-level integration. The same approach has guided the company's integration of compute functionality into its GPU architecture; PowerVR Rogue, the most recent generation, fully supports mobile compute for a variety of use cases. This focus on optimizing the most common use cases is critical for success and avoids making niche (but costly) features mandatory. Such niche features impact power consumption not only in compute use cases but in all other usage scenarios as well, including traditional graphics.
This is the final entry in a series of technical articles from the company on GPU-based heterogeneous computing for vision processing, all published on the Embedded Vision Alliance website. More
UPCOMING INDUSTRY EVENTS
Cadence Embedded Neural Network Summit – Deep Learning: The New Moore’s Law: February 1, 2017, San Jose, California
Embedded World Conference: March 14-16, 2017, Messezentrum Nuremberg, Germany
Embedded Vision Summit: May 1-3, 2017, Santa Clara, California
More Events
FEATURED NEWS
Upcoming Silicon Valley Meetup Presentations Discuss Image Pre-Processing and Smart Optimization for Vision
"Approaches to Driver Monitoring Systems," an Upcoming Free Webinar from PathPartner Technology
New: Allied Vision Manta G-1236 with 12 Megapixel Sony CMOS Sensor
Series Production Start: Twelve New Ace Models with IMX Sensors from Sony
L’Oréal and Founders Factory Announce Five Startups Selected for Their Accelerator Program
More News