Embedded Vision Insights: November 21, 2017 Edition



LETTER FROM THE EDITOR

Dear Colleague,

Google will deliver the free webinar "An Introduction to Developing Vision Applications Using Deep Learning and Google's TensorFlow Framework" on January 17, 2018 at 9 am Pacific Time, in partnership with the Embedded Vision Alliance. The webinar will be presented by Pete Warden, research engineer and technology lead on the mobile and embedded TensorFlow team. It will begin with an overview of deep learning and its use for computer vision tasks. Warden will then introduce Google's TensorFlow as a popular open source framework for deep learning development, training, and deployment, and provide an overview of the resources Google offers to enable you to kick-start your own deep learning project. He'll conclude with several case study design examples that showcase TensorFlow use and optimization on resource-constrained mobile and embedded devices. For more information and to register, see the event page.
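
To give you a feel for the framework ahead of the webinar, here is a minimal sketch of the kind of model the talk will cover: a small convolutional image classifier built with TensorFlow's Keras API. This is our own illustration, not material from the webinar; the layer sizes, input shape and placeholder training data are assumptions chosen purely for demonstration.

import numpy as np
import tensorflow as tf

# Small convolutional classifier for 32x32 RGB images, 10 classes.
# Layer sizes are illustrative, not tuned for any particular task.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train for one epoch on random placeholder data; a real project
# would substitute a labeled image dataset here.
x = np.random.rand(64, 32, 32, 3).astype("float32")
y = np.random.randint(0, 10, size=(64,))
model.fit(x, y, epochs=1, batch_size=16)

Deploying a trained model of this sort to a phone or embedded board (for example via TensorFlow Lite) is exactly the kind of optimization topic the webinar's case studies will address.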

The Embedded Vision Summit is the preeminent conference on practical computer vision, covering applications at the edge and in the cloud. It attracts a global audience of over one thousand product creators, entrepreneurs and business decision-makers who are creating and using computer vision technology. The Embedded Vision Summit has experienced exciting growth over the last few years, with 98% of 2017 Summit attendees reporting that they’d recommend the event to a colleague. The next Summit will take place May 22-24, 2018 in Santa Clara, California. My colleagues and I encourage you to register for next year's Embedded Vision Summit while Super Early Bird discount rates are still available, using discount code NLEVI1121!

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

THE VISION SENSING REVOLUTION

Video Cameras Without Video: Opportunities For Sensing With Embedded Vision
Within the next few years, network cameras will cease to be regarded primarily as image capture devices. They will instead transform into intelligent data capture nodes whose functionality will in many, but not all, cases include video capture. This evolution is driven not just by technological development but by the impossibility of transmitting, storing and remotely analyzing video from these devices in a manner commensurate with the cost and ease of deploying them. This transformation presents a host of opportunities for embedded vision in existing and yet-to-be-created markets. Using examples in biometric identification and smart city public safety, this talk from Michael Tusch, founder and CEO of Apical Limited (now part of Arm), provides insights into how these opportunities emerge and how they match up with the current state of the art in embedded vision and camera technology.

The Coming Shift from Image Sensors to Image Sensing
The image sensor space is entering the fourth disruption in its evolution. The first three disruptions primarily focused on taking “pretty pictures” for human consumption, evaluation, and storage. The coming disruption will be driven by machine vision moving into the mainstream. Smart homes, offices, cars, and devices – as well as AR/MR, biometrics and crowd monitoring – all need to run image data through a processor to activate responses without human viewing. The opportunity this presents is massive, but as growth efficiencies come into play, the solutions will become specialized. This talk from Paul Gallagher, Senior Director of Technology and Product Planning for LG, highlights the opportunities that the emerging shift to image-based sensing will bring throughout the imaging and vision industry. It explores the ingredients that industry participants will need in order to capitalize on these opportunities, and why the entrenched players may not be at as great an advantage as might be expected.

ADAS AND AUTONOMOUS VEHICLES

Automakers at a Crossroads: How Embedded Vision and Autonomy Will Reshape the Industry
The auto and telecom industries have been dreaming of connected cars for twenty years, explains Mark Bünger, VP of Research at Lux Research, in this presentation, but their results to date have been mixed at best. Now, just as a potentially costly standards battle looms between DSRC and 5G wireless communications technologies, those technologies may be leapfrogged by embedded vision – enabled by the combination of rapidly advancing image sensors, machine vision algorithms, and embedded AI chips. These technologies are not just changing the car itself; they enable new driving patterns and business models that are fueling new competitors and transforming the industry.


Choosing the Optimum Mix of Sensors for Driver Assistance and Autonomous Vehicles
A diverse set of sensor technologies is available and emerging to provide vehicle autonomy or driver assistance. These sensor technologies often have overlapping capabilities, but each has its own strengths and limitations. Drawing on design experience from deployed real-world applications, Ali Osman Ors, Director of Automotive Microcontrollers and Processors at NXP Semiconductors, explores trade-offs among sensor technologies such as radar, lidar, 2D and 3D camera sensors and others that are making inroads into vehicles for safety applications. He also examines how these sensor technologies are progressing to meet the needs of autonomous vehicles.

UPCOMING INDUSTRY EVENTS

Consumer Electronics Show: January 9-12, 2018, Las Vegas, Nevada

Embedded Vision Alliance Webinar – An Introduction to Developing Vision Applications Using Deep Learning and Google's TensorFlow Framework: January 17, 2018, 9:00 am PT

Embedded World Conference: February 27-March 1, 2018, Nuremberg, Germany

Embedded Vision Summit: May 22-24, 2018, Santa Clara, California

More Events

FEATURED NEWS

Baumer SmartApplets: The Easy and Cost-efficient Way to Master Complex Tasks in Image Processing

Intel Movidius Neural Compute Stick Honored with CES Best of Innovation Award 2018

Ground-breaking Image Sensor from ON Semiconductor Enables Next Generation ADAS Solution

HiSilicon Selects Cadence Tensilica Vision P6 DSP for its Latest Kirin 970 Mobile Application Processor

Allied Vision 1 Product Line Awarded by French Metrology Magazine

More News

 


Contact

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596
Phone: +1 (925) 954-1411