
“Hybrid Semi-Parallel Deep Neural Networks (SPDNN) – Example Methodologies & Use Cases,” a Presentation from Xperi

Peter Corcoran, co-founder of FotoNation (now a core business unit of Xperi) and lead principal investigator and director of C3Imaging (a research partnership between Xperi and the National University of Ireland, Galway), presents the “Hybrid Semi-Parallel Deep Neural Networks (SPDNN) – Example Methodologies & Use Cases” tutorial at the May 2018 Embedded Vision Summit. Deep […]



BrainChip Announces the Akida Architecture, a Neuromorphic System-on-Chip

Company introduces the architecture of the first in a new breed of neural network acceleration SoCs that puts artificial intelligence at the edge and in the enterprise. San Francisco – 10 September 2018: BrainChip Holdings Ltd. (“BrainChip” or the “Company”) (ASX: BRN), the leading neuromorphic computing company, today establishes itself as the first company to bring a


“Building Efficient CNN Models for Mobile and Embedded Applications,” a Presentation from Facebook

Peter Vajda, Research Scientist at Facebook, presents the “Building Efficient CNN Models for Mobile and Embedded Applications” tutorial at the May 2018 Embedded Vision Summit. Recent advances in efficient deep learning models have led to many potential applications in mobile and embedded devices. In this talk, Vajda discusses state-of-the-art model architectures, and introduces Facebook’s work

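The talk's specifics are not reproduced in this teaser, but the central trick behind most efficient mobile CNN architectures is replacing a standard convolution with a depthwise separable one (a per-channel spatial convolution followed by a 1x1 pointwise convolution, as popularized by MobileNet-style models). A quick parameter count illustrates the savings; the layer sizes below are arbitrary examples, not figures from the presentation.

```python
def conv_params(k, c_in, c_out):
    """Parameter count of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k convolution (one spatial filter per input channel)
    followed by a 1x1 pointwise convolution that mixes channels."""
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 64, 128
std = conv_params(k, c_in, c_out)                 # 73,728 parameters
sep = depthwise_separable_params(k, c_in, c_out)  # 576 + 8,192 = 8,768
print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For a 3x3 layer with 64 input and 128 output channels, the separable form needs roughly 8x fewer parameters (and proportionally fewer multiply-accumulates), which is why it dominates embedded and mobile model designs.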

“Harnessing the Edge and the Cloud Together for Visual AI,” a Presentation from Au-Zone Technologies

Sébastien Taylor, Vision Technology Architect at Au-Zone Technologies, presents the “Harnessing the Edge and the Cloud Together for Visual AI” tutorial at the May 2018 Embedded Vision Summit. Embedded developers are increasingly comfortable deploying trained neural networks as static elements in edge devices, as well as using cloud-based vision services to implement visual intelligence remotely.



Embedded Vision Insights: September 6, 2018 Edition

LETTER FROM THE EDITOR Dear Colleague, The next session of the Embedded Vision Alliance's in-person, hands-on technical training class series, Deep Learning for Computer Vision with TensorFlow, takes place in less than a month in San Jose, California. These classes give you the critical knowledge you need to develop deep learning computer vision applications with


“Improving and Implementing Traditional Computer Vision Algorithms Using DNN Techniques,” a Presentation from Imagination Technologies

Paul Brasnett, Senior Research Manager for Vision and AI in the PowerVR Division at Imagination Technologies, presents the “Improving and Implementing Traditional Computer Vision Algorithms Using DNN Techniques” tutorial at the May 2018 Embedded Vision Summit. There has been a very significant shift in the computer vision industry over the past few years, from traditional


“Architecting a Smart Home Monitoring System with Millions of Cameras,” a Presentation from Comcast

Hongcheng Wang, Senior Manager of Technical R&D at Comcast, presents the “Architecting a Smart Home Monitoring System with Millions of Cameras” tutorial at the May 2018 Embedded Vision Summit. Video monitoring is a critical capability for the smart home. With millions of cameras streaming to the cloud, efficient and scalable video analytics becomes essential. To


“The Perspective Transform in Embedded Vision,” a Presentation from Cadence

Shrinivas Gadkari, Design Engineering Director, and Aditya Joshi, Lead Design Engineer, both of Cadence, present the “Perspective Transform in Embedded Vision” tutorial at the May 2018 Embedded Vision Summit. This presentation focuses on the perspective transform and its role in many state-of-the-art embedded vision applications like video stabilization, high dynamic range (HDR) imaging and super

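As background to the teaser above: a perspective transform maps image points through a 3x3 homography matrix in homogeneous coordinates, with a final divide by the third coordinate. The minimal NumPy sketch below (not code from the presentation; the function name and example matrix are illustrative) shows that divide step, which is what distinguishes a perspective transform from an affine one.

```python
import numpy as np

def apply_perspective(points, H):
    """Apply a 3x3 perspective (homography) matrix to (N, 2) points.

    Lifts each (x, y) to homogeneous (x, y, 1), multiplies by H,
    then performs the perspective divide by the resulting w.
    """
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous
    mapped = pts_h @ H.T                                    # row i = H @ point i
    return mapped[:, :2] / mapped[:, 2:3]                   # perspective divide

# Example: a homography whose bottom row is not (0, 0, 1), so w varies
# with x and the unit square's corners are foreshortened.
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.1, 0.0, 1.0]])
corners = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
print(apply_perspective(corners, H))
```

In applications like video stabilization, HDR alignment, and super-resolution, such a homography is typically estimated between frames and then applied per pixel (with interpolation) rather than to a handful of corner points.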

“Utilizing Neural Networks to Validate Display Content in Mission Critical Systems,” a Presentation from VeriSilicon

Shang-Hung Lin, Vice President of Vision and Imaging Products at VeriSilicon, presents the “Utilizing Neural Networks to Validate Display Content in Mission Critical Systems” tutorial at the May 2018 Embedded Vision Summit. Mission critical display systems in aerospace, automotive and industrial markets require validation of the content presented to the user, in order to enable


“The Role of the Cloud in Autonomous Vehicle Vision Processing: A View from the Edge,” a Presentation from NXP Semiconductors

Ali Osman Ors, Director of Automotive Microcontrollers and Processors at NXP Semiconductors, presents the “Role of the Cloud in Autonomous Vehicle Vision Processing: A View from the Edge” tutorial at the May 2018 Embedded Vision Summit. Regardless of the processing topology (distributed, centralized or hybrid), sensor processing in automotive is an edge compute problem. However, with



Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone: +1 (925) 954-1411