Dear Colleague,
The next session of the Embedded Vision Alliance's in-person, hands-on technical training class series, Deep Learning for Computer Vision with TensorFlow, takes place in less than a month in San Jose, California. These classes give you the critical knowledge you need to develop deep learning computer vision applications with TensorFlow. The one-day class will be held on October 4, 2018. Details, including online registration, can be found here.
Brian Dipert
Editor-In-Chief, Embedded Vision Alliance
Deep Understanding of Shopper Behaviors and Interactions Using Computer Vision
In retail environments, there’s great value in understanding how shoppers move through the space and interact with products. While the retail environment has some characteristics favorable to computer vision (such as reasonable lighting), the large number and diversity of products sold, along with the potential ambiguity of shopper movements, make it challenging to measure shopper behavior accurately. In this talk, Emanuele Frontoni, Professor, and Rocco Pietrini, Ph.D. student, both of the Università Politecnica delle Marche, explore some of these challenges and present a set of deep learning algorithms that they’ve utilized to address them. These algorithms have been deployed in the cloud and have been used to measure the activity of more than one million shoppers worldwide.
Using Vision to Transform Retail
This talk from Sumit Gupta, Vice President of AI, Machine Learning and HPC at IBM, explores how recent advances in deep learning-based computer vision have fueled new opportunities in retail. Using case studies based on deployed systems, Gupta explores how deep learning-based computer vision is enabling new ways to improve in-store consumer experiences, to gather more customer insights, to understand which types of in-store displays work better and to optimize workforce scheduling and management. He also touches upon how a combination of robotics and vision is improving inventory management and worker safety by automating aspects of warehouse operations.
A New Generation of Camera Modules: A Novel Approach and Its Benefits for Embedded Systems
Embedded vision systems have typically relied on low-cost image sensor modules with a MIPI CSI-2 interface. Now, machine vision camera vendors are entering the market with so-called camera modules, some of which also rely on the MIPI CSI-2 interface standard. What is the difference between a camera module and a sensor module? This presentation from Paul Maria Zalewski, Product Line Manager at Allied Vision Technologies, examines the different architectures of camera and sensor modules, in particular the built-in image-processing hardware and software capabilities of the brand new Allied Vision 1 Product Line camera modules. By performing advanced image processing inside the camera, the 1 Product Line cameras minimize CPU requirements on the host side. Another major difference is flexibility of system integration: these modules will be available with a variety of image sensors, and a single camera driver installed on the host supports all sensor variants. This opens new possibilities for system developers to test various sensors, offer different versions of their systems or upgrade to new sensors with minimal programming effort.
High-end Multi-camera Technology, Applications and Examples
For OEMs and system integrators, many of today's applications in VR/AR/MR, ADAS, measurement and automation require multiple coordinated high-performance cameras. Current generic components fall short of requirements for resolution, frame rate, latency, imaging quality, reliability, scalability and time to market. In this presentation, Max Larin, CEO of XIMEA, explains how XIMEA’s xPlatform provides unique solutions to address all of these requirements, utilizing PCI Express and various image sensor technologies.