Edge AI and Vision Alliance

May 2014 Embedded Vision Summit Technical Presentation: “Evolving Algorithmic Requirements for Recognition and Classification in Augmented Reality,” Simon Morris, CogniVue

Simon Morris, CEO of CogniVue, presents the "Evolving Algorithmic Requirements for Recognition and Classification in Augmented Reality" tutorial at the May 2014 Embedded Vision Summit. Augmented reality (AR) applications are based on accurately computing a camera's six-degrees-of-freedom (6DOF) position and orientation in 3-dimensional space, also known as its "pose". In vision-based approaches to AR, […]
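As a rough illustration of what a 6DOF pose is (a minimal Python sketch, not code from the presentation; the intrinsics and point values are assumptions), the snippet below represents a camera pose as a rotation plus a translation and uses it to project a 3D world point into the image:

```python
# Sketch only: a 6DOF camera pose as a 4x4 rigid transform (rotation + translation),
# used with assumed pinhole intrinsics K to project a world point to pixel coordinates.
import numpy as np

def pose_matrix(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def project(point_world, pose, K):
    """Map a 3D world point into pixel coordinates through the camera pose."""
    p_cam = pose @ np.append(point_world, 1.0)   # world frame -> camera frame
    u, v, w = K @ p_cam[:3]                      # pinhole projection
    return np.array([u / w, v / w])

K = np.array([[800.0, 0.0, 320.0],               # assumed focal lengths and principal point
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
pose = pose_matrix(np.eye(3), np.array([0.0, 0.0, 2.0]))  # camera 2 m from the origin
print(project(np.array([0.1, 0.0, 0.0]), pose, K))        # -> [360. 240.]
```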

Embedded Vision Insights: June 17, 2014 Edition

In this edition of Embedded Vision Insights: Embedded Vision Summit West Content, 3D Stereo Vision, Training Resources, and Embedded Vision in the News. LETTER FROM THE EDITOR: Dear Colleague, Videos of presentations from the recent Embedded Vision Summit West have begun to appear on the Alliance website. We’ve just published the two outstanding keynotes delivered that

“Convolutional Neural Networks,” an Embedded Vision Summit Keynote Presentation from Facebook

Yann LeCun, Director of AI Research at Facebook and Silver Professor of Data Science, Computer Science, Neural Science, and Electrical Engineering at New York University, presents the "Convolutional Networks: Unleashing the Potential of Machine Learning for Robust Perception Systems" keynote at the May 2014 Embedded Vision Summit. Convolutional Networks (ConvNets) have become the dominant method
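For context, the sketch below (not drawn from the keynote) shows the basic ConvNet building blocks, a convolution, a ReLU nonlinearity, and max pooling, applied to a toy single-channel image with NumPy; real ConvNets stack many such layers with filters learned from data:

```python
# Sketch only: one convolution + ReLU + max-pool stage on a toy image.
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of a 2D image with a 2D kernel."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling with a size x size window."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))

image = np.random.rand(8, 8)        # toy 8x8 "image"
kernel = np.random.randn(3, 3)      # one filter (random here; learned in a real ConvNet)
feature_map = max_pool(relu(conv2d(image, kernel)))
print(feature_map.shape)            # (3, 3)
```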

May 2014 Embedded Vision Summit Technical Presentation: “Fast 3D Object Recognition in Real-World Environments,” Ken Lee, VanGogh Imaging

Ken Lee, Founder of VanGogh Imaging, presents the "Fast 3D Object Recognition in Real-World Environments" tutorial at the May 2014 Embedded Vision Summit. Real-time 3D object recognition can be computationally intensive and difficult to implement when there are a lot of other objects (i.e. clutter) around the target. There are several approaches to deal with
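One common way to keep 3D recognition tractable in cluttered scenes is to thin the scene point cloud before any matching is attempted. The sketch below is a generic illustration of voxel-grid downsampling in NumPy, not VanGogh Imaging's method; the point cloud and voxel size are made up:

```python
# Sketch only: voxel-grid downsampling of an (N, 3) point cloud, keeping one
# averaged point per occupied voxel to reduce the cost of later matching.
import numpy as np

def voxel_downsample(points, voxel_size):
    keys = np.floor(points / voxel_size).astype(np.int64)       # voxel index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)   # group points by voxel
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1.0)
    return sums / counts[:, None]                                # centroid per voxel

scene = np.random.rand(100_000, 3)           # toy cluttered scene inside a 1 m cube
print(voxel_downsample(scene, 0.05).shape)   # far fewer points than the input
```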

“Self-Driving Cars,” an Embedded Vision Summit Keynote Presentation from Google

Nathaniel Fairfield, Technical Lead at Google, presents the "Self-Driving Cars" keynote at the May 2014 Embedded Vision Summit. Self-driving cars have the potential to transform how we move: they promise to make us safer, give freedom to millions of people who can't drive, and give people back their time. The Google Self-Driving Car project was

Embedded Vision Insights: June 3, 2014 Edition

In this edition of Embedded Vision Insights: Embedded Vision Summit West Success, Integrating Vision and Motion, Heterogeneous Processing How-Tos, and Embedded Vision in the News. LETTER FROM THE EDITOR: Dear Colleague, Last Thursday's Embedded Vision Summit West was an absolutely amazing day. The keynotes from Yann LeCun of Facebook and Nathaniel Fairfield of Google provided compelling

May 2014 Embedded Vision Summit Proceedings

The Embedded Vision Summit was held on May 29, 2014 in Santa Clara, California, as a technical educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The program for the event included the following presentations, whose PDF-formatted foilsets are available for download as a…

Embedded Vision Insights: May 15, 2014 Edition

In this edition of Embedded Vision Insights: In Two Weeks: The Embedded Vision Summit, From Planetary Exploration to Consumer Devices, 360-Degree Panorama Photography, and Embedded Vision in the News. LETTER FROM THE EDITOR: Dear Colleague, Two weeks from today, my colleagues at the Embedded Vision Alliance and I will kick off the next, and biggest and

CENTR: An Embedded Vision Case Study Winner

Computational photography is, as a recently published contributed article points out, one of the most visible current examples of embedded vision processing for the masses. And panorama "stitching", the ability to combine multiple horizontally- and vertically-captured frames into one higher-resolution image, is one of the most common computational photography features. Most of today's panorama stitching
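For readers who want to experiment, OpenCV ships a high-level Stitcher class that handles the feature matching, warping, and blending behind basic panorama stitching. The sketch below is generic, not CENTR's pipeline, and the file names are placeholders:

```python
# Sketch only: stitch overlapping frames into a panorama with OpenCV.
import cv2

frames = [cv2.imread(name) for name in ("left.jpg", "center.jpg", "right.jpg")]

stitcher = cv2.Stitcher_create()          # feature matching + warping + blending
status, panorama = stitcher.stitch(frames)

if status == 0:                           # 0 == Stitcher::OK
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("Stitching failed with status", status)
```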

March 2014 Embedded Vision Alliance Member Meeting Presentation: “Vision-Based Navigation Applications: From Planetary Exploration to Consumer Devices,” Larry Matthies, NASA

Larry Matthies, Supervisor of the Computer Vision Group at NASA's Jet Propulsion Laboratory, delivers the technology presentation, "Vision-Based Navigation Applications: From Planetary Exploration to Consumer Devices," at the March 2014 Embedded Vision Alliance Member Meeting. Dr. Matthies is a Senior Research Scientist at JPL and is the Supervisor of the Computer Vision Group in the

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411