Edge AI and Vision Alliance

“Think Like an Amateur, Do As an Expert: Lessons from a Career in Computer Vision,” a Keynote Presentation from Dr. Takeo Kanade

Dr. Takeo Kanade, U.A. and Helen Whitaker University Professor at Carnegie Mellon University, delivers the “Think Like an Amateur, Do As an Expert: Lessons from a Career in Computer Vision” keynote at the May 2018 Embedded Vision Summit. In this keynote presentation, Dr. Kanade shares his experiences and lessons learned in developing a vast range of […]

Embedded Vision Insights: June 26, 2018 Edition

DEEP LEARNING FOR VISION PROCESSING

The Caffe2 Framework for Mobile and Embedded Deep Learning
Fei Sun, software engineer at Facebook, introduces Caffe2, a new open-source machine learning framework, in this presentation. Sun also explains how Facebook is using Caffe2 to enable computer vision in mobile and embedded devices.

Methods for Understanding How Deep Neural Networks […]

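The Caffe2 item above centers on deploying trained networks on mobile and embedded targets. As a rough sketch (not taken from Sun's talk), a model exported as Caffe2 init_net.pb and predict_net.pb protobufs could be loaded and run as follows; the file names, input size, and output shape are assumptions for illustration.

```python
import numpy as np
from caffe2.python import workspace

# Load the serialized network protobufs (file names are assumptions for illustration).
with open("init_net.pb", "rb") as f:
    init_net = f.read()
with open("predict_net.pb", "rb") as f:
    predict_net = f.read()

# Build a predictor and run it on a dummy 227x227 RGB image in NCHW layout.
predictor = workspace.Predictor(init_net, predict_net)
img = np.random.rand(1, 3, 227, 227).astype(np.float32)
results = predictor.run([img])
print(results[0].shape)  # e.g., (1, 1000) class scores for an ImageNet-style classifier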

“A Physics-based Approach to Removing Shadows and Shading in Real Time,” a Presentation from Tandent Vision Science

Bruce Maxwell, Director of Research at Tandent Vision Science, presents the “A Physics-based Approach to Removing Shadows and Shading in Real Time” tutorial at the May 2018 Embedded Vision Summit. Shadows cast on ground surfaces can create false features and modify the color and appearance of real features, masking important information used by autonomous vehicles, […]

“Generative Sensing: Reliable Recognition from Unreliable Sensor Data,” a Presentation from Arizona State University

Lina Karam, Professor and Computer Engineering Director at Arizona State University, presents the “Generative Sensing: Reliable Recognition from Unreliable Sensor Data” tutorial at the May 2018 Embedded Vision Summit. While deep neural networks (DNNs) perform on par with – or better than – humans on pristine high-resolution images, DNN performance is significantly worse than human […]

“Infusing Visual Understanding in Cloud and Edge Solutions Using State-of-the-Art Microsoft Algorithms,” a Presentation from Microsoft

Anirudh Koul, Senior Data Scientist, and Jin Yamamoto, Principal Program Manager, both from Microsoft, present the “Infusing Visual Understanding in Cloud and Edge Solutions Using State-of-the-Art Microsoft Algorithms” tutorial at the May 2018 Embedded Vision Summit. Microsoft offers its state-of-the-art computer vision algorithms, used internally in several products, through the Cognitive Services cloud APIs.

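For readers curious what calling those Cognitive Services cloud APIs looks like in practice, here is a minimal sketch (not taken from the presentation) that posts a local image to the Computer Vision analyze endpoint; the region, API version, subscription key, and file name are placeholders.

```python
import requests

# Placeholders: substitute your own Cognitive Services region and subscription key.
ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v2.0/analyze"
SUBSCRIPTION_KEY = "<your-subscription-key>"

headers = {
    "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
    "Content-Type": "application/octet-stream",  # raw image bytes in the request body
}
params = {"visualFeatures": "Description,Tags"}

with open("test-image.jpg", "rb") as f:  # any local test image
    response = requests.post(ENDPOINT, headers=headers, params=params, data=f.read())

response.raise_for_status()
# The JSON response holds the requested features, e.g. auto-generated captions and tags.
print(response.json()["description"]["captions"])
```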

May 2018 Embedded Vision Summit Introductory Presentation (Day 1)

Jeff Bier, Founder of the Embedded Vision Alliance, welcomes attendees to the May 2018 Embedded Vision Summit on May 22, 2018 (Day 1). Bier provides an overview of the embedded vision market opportunity, challenges, solutions and trends. He also introduces the Embedded Vision Alliance and the resources it offers for both product creators and potential […]

May 2018 Embedded Vision Summit Introductory Presentation (Day 2)

Jeff Bier, Founder of the Embedded Vision Alliance, welcomes attendees to the May 2018 Embedded Vision Summit on May 23, 2018 (Day 2). Bier provides an overview of the embedded vision market opportunity, challenges, solutions and trends, in the context of reviewing the presentation highlights and take-aways from the previous day. He also introduces the […]

Embedded Vision Insights: June 12, 2018 Edition

LETTER FROM THE EDITOR

Dear Colleague,

Newly published on the Alliance website are the first 7 of what will eventually be nearly 100 presentation recordings from last month’s Embedded Vision Summit, as well as the downloadable set of presentation slides that I mentioned last time. Additional presentation recordings, along with nearly 50 demonstration videos, will […]

“Deep Understanding of Shopper Behaviors and Interactions Using Computer Vision,” a Presentation from the Università Politecnica delle Marche

Emanuele Frontoni, Professor, and Rocco Pietrini, Ph.D. student, both of the Università Politecnica delle Marche, present the “Deep Understanding of Shopper Behaviors and Interactions Using Computer Vision” tutorial at the May 2018 Embedded Vision Summit. In retail environments, there’s great value in understanding how shoppers move in the space and interact with products. And, while […]

“Introduction to Creating a Vision Solution in the Cloud,” a Presentation from GumGum

Nishita Sant, Computer Vision Scientist at GumGum, presents the “Introduction to Creating a Vision Solution in the Cloud” tutorial at the May 2018 Embedded Vision Summit. A growing number of applications utilize cloud computing for execution of computer vision algorithms. In this presentation, Sant introduces the basics of creating a cloud-based vision service, based on […]

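As a generic illustration of the kind of cloud-based vision service the talk introduces (a minimal sketch, not GumGum's implementation), a small HTTP endpoint might accept an uploaded image and return predictions; the classify_image helper below is a hypothetical stand-in for whatever model the service actually runs.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def classify_image(image_bytes):
    # Hypothetical stand-in for a real model: decode the image bytes and return
    # label/score pairs. A real service would call into OpenCV, TensorFlow, Caffe2, etc.
    return [{"label": "placeholder", "score": 0.0}]

@app.route("/v1/analyze", methods=["POST"])
def analyze():
    # Expect the image as a multipart file upload named "image".
    upload = request.files.get("image")
    if upload is None:
        return jsonify({"error": "no image provided"}), 400
    predictions = classify_image(upload.read())
    return jsonify({"predictions": predictions})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

A client could exercise such a service with, for example, curl -F "image=@photo.jpg" http://localhost:8080/v1/analyze; hosting the same endpoint behind a cloud load balancer or serverless front end is what turns it into a scalable vision solution.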
