Retail Applications for Embedded Vision
“Sensory Fusion for Scalable Indoor Navigation,” a Presentation from Brain Corp
Oleg Sinyavskiy, Director of Research and Development at Brain Corp, presents the “Sensory Fusion for Scalable Indoor Navigation” tutorial at the May 2019 Embedded Vision Summit. Indoor autonomous navigation requires using a variety of sensors in different modalities. Merging together RGB, depth, lidar and odometry data streams to achieve autonomous operation requires a fusion of
May 2019 Embedded Vision Summit Slides
The Embedded Vision Summit was held on May 20-23, 2019 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in… May 2019 Embedded Vision Summit
“Embedded AI for Smart Cities and Retail in China,” a Presentation from Horizon Robotics
Yufeng Zhang, VP of Global Business at Horizon Robotics, presents the “Embedded AI for Smart Cities and Retail in China” tutorial at the May 2018 Embedded Vision Summit. Over the past ten years, online shopping has changed the way we do business. Now, with the development of AI technology, we are seeing the beginning of
“Using Vision to Transform Retail,” a Presentation from IBM
Sumit Gupta, Vice President of AI, Machine Learning and HPC at IBM, presents the “Using Vision to Transform Retail” tutorial at the May 2018 Embedded Vision Summit. This talk explores how recent advances in deep learning-based computer vision have fueled new opportunities in retail. Using case studies based on deployed systems, Gupta explores how deep
“Deep Understanding of Shopper Behaviors and Interactions Using Computer Vision,” a Presentation from the Università Politecnica delle Marche
Emanuele Frontoni, Professor, and Rocco Pietrini, Ph.D. student, both of the Università Politecnica delle Marche, present the “Deep Understanding of Shopper Behaviors and Interactions Using Computer Vision” tutorial at the May 2018 Embedded Vision Summit. In retail environments, there’s great value in understanding how shoppers move in the space and interact with products. And, while
May 2018 Embedded Vision Summit Slides
The Embedded Vision Summit was held on May 21-24, 2018 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in… May 2018 Embedded Vision Summit
“Using Vision to Collect Rich Data in the Moment of Truth for Retail Analytics and Market Research,” a Presentation from GfK
Dr. Anja Dieckmann of GfK Verein and Markus Iwanczok of GfK SE deliver the presentation "Using Vision to Collect Rich Data in the Moment of Truth for Retail Analytics and Market Research" at the Embedded Vision Alliance's September 2017 Vision Industry and Technology Forum. In their presentation, Dieckmann and Iwanczok cover the following topics: Market
May 2017 Embedded Vision Summit Slides
The Embedded Vision Summit was held on May 1-3, 2017 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in… May 2017 Embedded Vision Summit
“Deploying Embedded Vision for Retail Analytics,” a Presentation from RetailNext
RetailNext's Mark Jamtgaard, Director of Technology, and Bill Adamec, Research and Development Manager, deliver the presentation "Deploying Embedded Vision for Retail Analytics" at the December 2016 Embedded Vision Alliance Member Meeting. Jamtgaard and Adamec explain how retailers are using embedded vision to optimize store layout and staffing based on measured customer behavior at scale.
Facial Analysis Delivers Diverse Vision Processing Capabilities
Computers can learn a lot about a person from their face – even if they don’t uniquely identify that person. Assessments of age range, gender, ethnicity, gaze direction, attention span, emotional state and other attributes are all now possible at real-time speeds, via advanced algorithms running on cost-effective hardware. This article provides an overview of
“What’s Hot in Embedded Vision for Investors?,” an Embedded Vision Summit Panel Discussion
Jeff Bier of the Embedded Vision Alliance (moderator), Don Faria of Intel Capital, Jeff Hennig of Bank of America Merrill Lynch, Gabriele Jansen of Vision Ventures, Helge Seetzen of TandemLaunch, and Peter Shannon of Firelake Capital Management participate in the Investor Panel at the May 2016 Embedded Vision Summit. This moderated panel discussion addresses emerging
May 2016 Embedded Vision Summit Proceedings
The Embedded Vision Summit was held on May 2-4, 2016 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in… May 2016 Embedded Vision Summit
Deep Learning Use Cases for Computer Vision (Download)
Six Deep Learning-Enabled Vision Applications in Digital Media, Healthcare, Agriculture, Retail, Manufacturing, and Other Industries Enterprise applications of deep learning have only scratched the surface of its potential use cases. Because it is data agnostic, deep learning is poised to be used in almost every enterprise vertical… Deep Learning Use Cases for
“Combining Vision, Machine Learning and Natural Language Processing to Answer Everyday Questions,” a Presentation from QM Scientific
Faris Alqadah, CEO and Co-Founder of QM Scientific, delivers the presentation "Combining Vision, Machine Learning and Natural Language Processing to Answer Everyday Questions" at the May 2015 Embedded Vision Alliance Member Meeting. Faris explains how his company's GPU-accelerated Quazi platform combines proprietary natural language processing, computer vision and machine learning technologies to extract, connect and
“Leveraging Computer Vision and Machine Learning to Power the Visual Commerce Revolution,” a Presentation from Sight Commerce
Satya Mallick, Co-Founder of Sight Commerce, delivers the presentation "Leveraging Computer Vision and Machine Learning to Power the Visual Commerce Revolution" at the May 2015 Embedded Vision Alliance Member Meeting. Satya explains how his company is using vision to enable retailers like Bloomingdale’s to create more engaging, personalized shopping experiences.
Gaze Tracking Using CogniMem Technologies’ CM1K and a Freescale i.MX53
This demonstration, which pairs a Freescale i.MX Quick Start board and CogniMem Technologies CM1K evaluation module, showcases how to use your eyes (specifically where you are looking at any particular point in time) as a mouse. Translating where a customer is looking to actions on a screen, and using gaze tracking to electronically control objects