Summit 2018

“Understanding Automotive Radar: Present and Future,” a Presentation from NXP Semiconductors

Arunesh Roy, Radar Algorithms Architect at NXP Semiconductors, presents the “Understanding Automotive Radar: Present and Future” tutorial at the May 2018 Embedded Vision Summit. Thanks to its proven, all-weather range detection capability, radar is increasingly used for driver assistance functions such as automatic emergency braking and adaptive cruise control. Radar is considered a crucial sensing […]


“Hybrid Semi-Parallel Deep Neural Networks (SPDNN) – Example Methodologies & Use Cases,” a Presentation from Xperi

Peter Corcoran, co-founder of FotoNation (now a core business unit of Xperi) and lead principal investigator and director of C3Imaging (a research partnership between Xperi and the National University of Ireland, Galway), presents the “Hybrid Semi-Parallel Deep Neural Networks (SPDNN) – Example Methodologies & Use Cases” tutorial at the May 2018 Embedded Vision Summit. Deep […]


“Building Efficient CNN Models for Mobile and Embedded Applications,” a Presentation from Facebook

Peter Vajda, Research Scientist at Facebook, presents the “Building Efficient CNN Models for Mobile and Embedded Applications” tutorial at the May 2018 Embedded Vision Summit. Recent advances in efficient deep learning models have led to many potential applications in mobile and embedded devices. In this talk, Vajda discusses state-of-the-art model architectures, and introduces Facebook’s work […]


“Harnessing the Edge and the Cloud Together for Visual AI,” a Presentation from Au-Zone Technologies

Sébastien Taylor, Vision Technology Architect at Au-Zone Technologies, presents the “Harnessing the Edge and the Cloud Together for Visual AI” tutorial at the May 2018 Embedded Vision Summit. Embedded developers are increasingly comfortable deploying trained neural networks as static elements in edge devices, as well as using cloud-based vision services to implement visual intelligence remotely.


“Improving and Implementing Traditional Computer Vision Algorithms Using DNN Techniques,” a Presentation from Imagination Technologies

Paul Brasnett, Senior Research Manager for Vision and AI in the PowerVR Division at Imagination Technologies, presents the “Improving and Implementing Traditional Computer Vision Algorithms Using DNN Techniques” tutorial at the May 2018 Embedded Vision Summit. There has been a very significant shift in the computer vision industry over the past few years, from traditional […]


“Architecting a Smart Home Monitoring System with Millions of Cameras,” a Presentation from Comcast

Hongcheng Wang, Senior Manager of Technical R&D at Comcast, presents the “Architecting a Smart Home Monitoring System with Millions of Cameras” tutorial at the May 2018 Embedded Vision Summit. Video monitoring is a critical capability for the smart home. With millions of cameras streaming to the cloud, efficient and scalable video analytics becomes essential. To […]


“The Perspective Transform in Embedded Vision,” a Presentation from Cadence

Shrinivas Gadkari, Design Engineering Director, and Aditya Joshi, Lead Design Engineer, both of Cadence, present the “Perspective Transform in Embedded Vision” tutorial at the May 2018 Embedded Vision Summit. This presentation focuses on the perspective transform and its role in many state-of-the-art embedded vision applications like video stabilization, high dynamic range (HDR) imaging and super […]


“Utilizing Neural Networks to Validate Display Content in Mission Critical Systems,” a Presentation from VeriSilicon

Shang-Hung Lin, Vice President of Vision and Imaging Products at VeriSilicon, presents the “Utilizing Neural Networks to Validate Display Content in Mission Critical Systems” tutorial at the May 2018 Embedded Vision Summit. Mission critical display systems in aerospace, automotive and industrial markets require validation of the content presented to the user, in order to enable […]


“The Role of the Cloud in Autonomous Vehicle Vision Processing: A View from the Edge,” a Presentation from NXP Semiconductors

Ali Osman Ors, Director of Automotive Microcontrollers and Processors at NXP Semiconductors, presents the “Role of the Cloud in Autonomous Vehicle Vision Processing: A View from the Edge” tutorial at the May 2018 Embedded Vision Summit. Regardless of the processing topology—distributed, centralized or hybrid—sensor processing in automotive is an edge compute problem. However, with […]


“Understanding Real-World Imaging Challenges for ADAS and Autonomous Vision Systems – IEEE P2020,” a Presentation from Algolux

Felix Heide, CTO and Co-founder of Algolux, presents the “Understanding Real-World Imaging Challenges for ADAS and Autonomous Vision Systems – IEEE P2020” tutorial at the May 2018 Embedded Vision Summit. ADAS and autonomous driving systems rely on sophisticated sensor, image processing and neural-network-based perception technologies. This has resulted in effective driver assistance capabilities and […]


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411