Technical Insights

“The Role of the Cloud in Autonomous Vehicle Vision Processing: A View from the Edge,” a Presentation from NXP Semiconductors

Ali Osman Ors, Director of Automotive Microcontrollers and Processors at NXP Semiconductors, presents the “Role of the Cloud in Autonomous Vehicle Vision Processing: A View from the Edge” tutorial at the May 2018 Embedded Vision Summit. Regardless of the processing topology—distributed, centralized or hybrid—sensor processing in automotive is an edge compute problem. However, with […]
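
To see why this is an edge compute problem, a quick back-of-the-envelope calculation helps: even a modest camera set produces far more raw data than any realistic uplink can carry. The sensor counts, formats and uplink rate below are illustrative assumptions, not figures from the presentation.

```python
# Back-of-the-envelope estimate of raw sensor bandwidth for one vehicle.
# All sensor counts and formats below are illustrative assumptions, not
# figures from the presentation.

CAMERAS = 6            # surround-view plus front cameras (assumed)
WIDTH, HEIGHT = 1920, 1080
FPS = 30
BITS_PER_PIXEL = 12    # raw Bayer data (assumed)

camera_bps = CAMERAS * WIDTH * HEIGHT * FPS * BITS_PER_PIXEL
print(f"Raw camera data: {camera_bps / 1e9:.1f} Gbit/s")

# Even a generous cellular uplink (assumed 50 Mbit/s) falls orders of
# magnitude short of this, before adding radar and lidar streams, which is
# why perception has to run at the edge, with the cloud reserved for map
# updates, fleet learning and other non-real-time tasks.
uplink_bps = 50e6
print(f"Edge data rate vs. uplink: {camera_bps / uplink_bps:,.0f}x")
```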

“Understanding Real-World Imaging Challenges for ADAS and Autonomous Vision Systems – IEEE P2020,” a Presentation from Algolux

Felix Heide, CTO and Co-founder of Algolux, presents the “Understanding Real-World Imaging Challenges for ADAS and Autonomous Vision Systems – IEEE P2020” tutorial at the May 2018 Embedded Vision Summit. ADAS and autonomous driving systems rely on sophisticated sensor, image processing and neural-network-based perception technologies. This has resulted in effective driver assistance capabilities and

“Computer Vision Hardware Acceleration for Driver Assistance,” a Presentation from Bosch

Markus Tremmel, Chief Expert for ADAS at Bosch, presents the “Computer Vision Hardware Acceleration for Driver Assistance” tutorial at the May 2018 Embedded Vision Summit. With highly automated and fully automated driver assistance systems just around the corner, next-generation ADAS sensors and central ECUs will have much higher safety and functional requirements to cope
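
As a generic illustration of offloading computer vision kernels to accelerator hardware (not the specific Bosch architecture covered in the talk), the sketch below uses OpenCV's transparent API, which dispatches to an OpenCL-capable device when one is present and falls back to the CPU otherwise.

```python
import cv2

# Sketch only: OpenCV's "transparent API" (T-API) runs these calls on an
# OpenCL device (GPU or vision accelerator) when one is available, and
# silently falls back to the CPU otherwise. It illustrates the general idea
# of offloading CV kernels to dedicated hardware; it is not the acceleration
# architecture discussed in the talk. The input file name is a placeholder.

cv2.ocl.setUseOpenCL(True)

img = cv2.imread("frame.png")          # placeholder input frame
u_img = cv2.UMat(img)                  # upload to the accelerator

u_gray = cv2.cvtColor(u_img, cv2.COLOR_BGR2GRAY)
u_blur = cv2.GaussianBlur(u_gray, (5, 5), 1.5)
u_edges = cv2.Canny(u_blur, 50, 150)   # classic lane/obstacle pre-processing

edges = u_edges.get()                  # download result back to host memory
print("OpenCL active:", cv2.ocl.useOpenCL(), "edges shape:", edges.shape)
```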

“The Roomba 980: Computer Vision Meets Consumer Robotics,” a Presentation from iRobot

Mario Munich, Senior Vice President of Technology at iRobot, presents the “Roomba 980: Computer Vision Meets Consumer Robotics” tutorial at the May 2018 Embedded Vision Summit. In 2015, iRobot launched the Roomba 980, introducing intelligent visual navigation to its successful line of vacuum cleaning robots. The availability of affordable electro-mechanical components, powerful embedded microprocessors and

“How Simulation Accelerates Development of Self-Driving Technology,” a Presentation from AImotive

László Kishonti, founder and CEO of AImotive, presents the “How Simulation Accelerates Development of Self-Driving Technology” tutorial at the May 2018 Embedded Vision Summit. Virtual testing, as discussed by Kishonti in this presentation, is the only solution that scales to address the billions of miles of testing required to make autonomous vehicles robust. However, integrating
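
Rough arithmetic makes the scale argument concrete; the fleet size, average speed and simulation speed-up below are assumptions chosen only to illustrate the gap between physical and virtual testing.

```python
# Rough scale estimate. The fleet size, average speed, duty cycle and
# simulation speed-up are assumptions for illustration, not numbers from
# the presentation; the billion-mile target comes from the excerpt above.

target_miles = 1e9
avg_speed_mph = 40        # assumed mixed urban/highway average
fleet_size = 100          # assumed physical test fleet
hours_per_day = 8

road_days = target_miles / (avg_speed_mph * fleet_size * hours_per_day)
print(f"Physical fleet: ~{road_days / 365:.0f} years of driving")

# Simulation: assume 1,000 parallel instances, each running 5x real time,
# around the clock.
sim_days = target_miles / (avg_speed_mph * 1000 * 5 * 24)
print(f"Simulated fleet: ~{sim_days:.0f} days")
```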

“Recognizing Novel Objects in Novel Surroundings with Single-shot Detectors,” a Presentation from the University of North Carolina at Chapel Hill

Alexander C. Berg, Associate Professor at the University of North Carolina at Chapel Hill and CTO of Shopagon, presents the “Recognizing Novel Objects in Novel Surroundings with Single-shot Detectors” tutorial at the May 2018 Embedded Vision Summit. Berg’s group’s 2016 work on single-shot object detection (SSD) reduced the computation cost for accurate detection of object
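
For readers who want to try the detector itself, the following minimal sketch runs an off-the-shelf SSD300-VGG16 re-implementation from torchvision on a single image; it assumes a recent torchvision with pretrained weights and is not the authors' original code.

```python
import torch
from torchvision import transforms
from torchvision.models.detection import ssd300_vgg16
from PIL import Image

# Minimal sketch: run a torchvision re-implementation of the 2016 SSD
# detector on one image and keep confident detections. The image path is
# a placeholder; a recent torchvision (0.13+) is assumed.
model = ssd300_vgg16(weights="DEFAULT").eval()

img = Image.open("street.jpg").convert("RGB")
tensor = transforms.ToTensor()(img)          # SSD expects [0, 1] CHW tensors

with torch.no_grad():
    (pred,) = model([tensor])                # one dict per input image

keep = pred["scores"] > 0.5
for box, label, score in zip(pred["boxes"][keep], pred["labels"][keep],
                             pred["scores"][keep]):
    print(f"COCO class {int(label)}  score {score:.2f}  box {box.tolist()}")
```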

“Building a Typical Visual SLAM Pipeline,” a Presentation from Virgin Hyperloop One

YoungWoo Seo, Senior Director at Virgin Hyperloop One, presents the “Building a Typical Visual SLAM Pipeline” tutorial at the May 2018 Embedded Vision Summit. Maps are important for both human and robot navigation. SLAM (simultaneous localization and mapping) is one of the core techniques for map-based navigation. As SLAM algorithms have matured and hardware has
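
A typical feature-based pipeline chains feature extraction, matching, relative pose estimation and map optimization. The sketch below shows only the two-view front-end of such a pipeline using OpenCV (ORB features, essential-matrix estimation with RANSAC, pose recovery); the back-end stages are omitted, and the camera intrinsics and file names are stand-ins.

```python
import cv2
import numpy as np

# Minimal two-view front-end of a feature-based visual SLAM pipeline:
# detect features, match them, and recover relative camera motion.
# Map management, bundle adjustment and loop closure (the back-end stages
# of a full pipeline) are omitted. Intrinsics and file names are placeholders.

K = np.array([[700.0,   0.0, 640.0],   # assumed pinhole intrinsics
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate relative pose with RANSAC to reject outlier matches.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

print("Relative rotation:\n", R)
print("Translation direction (up to scale):", t.ravel())
```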

“Developing Computer Vision Algorithms for Networked Cameras,” a Presentation from Intel

Dukhwan Kim, computer vision software architect at Intel, presents the “Developing Computer Vision Algorithms for Networked Cameras” tutorial at the May 2018 Embedded Vision Summit. Video analytics is one of the key elements in networked cameras. Computer vision capabilities such as pedestrian detection, face detection and recognition, and object detection and tracking are necessary for
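
As one concrete example of such a capability, the sketch below runs OpenCV's classic HOG pedestrian detector on frames pulled from a networked (RTSP) camera; the stream URL is a placeholder and the detector choice is a generic illustration rather than the toolkit discussed in the talk.

```python
import cv2

# Sketch: pull frames from a networked (RTSP) camera and run OpenCV's
# classic HOG + linear-SVM pedestrian detector on each one. The stream URL
# is a placeholder; this is a generic illustration of video analytics on a
# camera feed, not the specific pipeline covered in the talk.

cap = cv2.VideoCapture("rtsp://camera.local/stream1")   # placeholder URL

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 360))               # trade accuracy for speed
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h), score in zip(boxes, weights):
        if float(score) > 0.5:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("pedestrians", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```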

“What is Neuromorphic Event-based Computer Vision? Sensors, Theory and Applications,” a Presentation from Ryad B. Benosman

Ryad B. Benosman, Professor at the University of Pittsburgh Medical Center, Carnegie Mellon University and Sorbonne Université, presents the “What is Neuromorphic Event-based Computer Vision? Sensors, Theory and Applications” tutorial at the May 2018 Embedded Vision Summit. In this presentation, Benosman introduces neuromorphic, event-based approaches for image sensing and processing. State-of-the-art image sensors suffer from
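
Event cameras report asynchronous per-pixel events (timestamp, position, polarity) rather than frames. The sketch below accumulates a synthetic event stream into an exponentially decaying time surface, one common frame-like representation in the event-based vision literature; the events and parameters are placeholders, not data from the presentation.

```python
import numpy as np

# Event cameras emit asynchronous events (t, x, y, polarity) whenever a
# pixel's log-intensity changes, instead of full frames. This sketch builds
# an exponentially decaying "time surface" from a synthetic event stream,
# one common frame-like representation in the event-based vision literature.
# The events below are random placeholders.

H, W = 180, 240                         # DVS-like resolution (assumed)
rng = np.random.default_rng(0)
n = 10_000
events = np.stack([
    np.sort(rng.uniform(0.0, 1.0, n)),  # timestamps in seconds
    rng.integers(0, W, n),              # x
    rng.integers(0, H, n),              # y
    rng.choice([-1, 1], n),             # polarity
], axis=1)

tau = 0.05                              # decay constant (seconds, assumed)
t_now = events[-1, 0]
last_t = np.full((H, W), -np.inf)       # time of most recent event per pixel
polarity = np.zeros((H, W))

for t, x, y, p in events:
    last_t[int(y), int(x)] = t
    polarity[int(y), int(x)] = p

surface = polarity * np.exp((last_t - t_now) / tau)   # decays toward 0
print("time surface range:", surface.min(), surface.max())
```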

“Words, Pictures, and Common Sense: Visual Question Answering,” a Presentation from Facebook and Georgia Tech

Devi Parikh, Research Scientist at Facebook AI Research (FAIR) and Assistant Professor at Georgia Tech, presents the “Words, Pictures, and Common Sense: Visual Question Answering” tutorial at the May 2018 Embedded Vision Summit. Wouldn’t it be nice if machines could understand content in images and communicate this understanding as effectively as humans? Such technology would
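
As a sketch of what a basic VQA system looks like, the model below fuses CNN image features with an LSTM question encoding by element-wise product and classifies over a fixed answer set; the sizes and fusion scheme are generic baseline choices (assuming a recent torchvision), not the specific models discussed in the talk.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Minimal sketch of a classic VQA baseline: encode the image with a CNN,
# encode the question with an LSTM, fuse by element-wise product, and
# classify over a fixed set of candidate answers. Vocabulary size, answer
# set and dimensions are generic choices for illustration.

class SimpleVQA(nn.Module):
    def __init__(self, vocab_size=10_000, num_answers=3_000, dim=512):
        super().__init__()
        cnn = resnet18(weights=None)
        cnn.fc = nn.Linear(cnn.fc.in_features, dim)    # image -> dim-d feature
        self.cnn = cnn
        self.embed = nn.Embedding(vocab_size, 300)
        self.lstm = nn.LSTM(300, dim, batch_first=True)
        self.classifier = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                        nn.Linear(dim, num_answers))

    def forward(self, image, question_tokens):
        img_feat = self.cnn(image)                          # (B, dim)
        _, (h, _) = self.lstm(self.embed(question_tokens))  # h: (1, B, dim)
        fused = img_feat * h.squeeze(0)                     # element-wise fusion
        return self.classifier(fused)                       # answer logits

model = SimpleVQA()
logits = model(torch.randn(2, 3, 224, 224),                 # two dummy images
               torch.randint(0, 10_000, (2, 12)))           # two 12-token questions
print(logits.shape)                                         # torch.Size([2, 3000])
```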

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.
