Embedded Vision Summit 2018

“Rapid Development of Efficient Vision Applications Using the Halide Language and CEVA Processors,” a Presentation from CEVA and mPerpetuo

Yair Siegel, Director of Business Development at CEVA, and Gary Gitelson, VP of Engineering at mPerpetuo, present the “Rapid Development of Efficient Vision Applications Using the Halide Language and CEVA Processors” tutorial at the May 2018 Embedded Vision Summit. Halide is a domain-specific programming language for imaging and vision applications that has been adopted by […]
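
To give a flavor of the language, here is a minimal, hypothetical Halide pipeline (C++ front end) that brightens an 8-bit grayscale image. The image size and scheduling choices are illustrative assumptions, not taken from the presentation.

```cpp
#include "Halide.h"
#include <cstdint>

using namespace Halide;

int main() {
    // Hypothetical input: an 8-bit grayscale image (in a real application this
    // would hold camera or file data).
    Buffer<uint8_t> input(640, 480);

    Var x("x"), y("y");
    Func brighter("brighter");

    // Algorithm: widen to 16 bits, scale by 1.2x, clamp, and narrow back to 8 bits.
    Expr value = cast<uint16_t>(input(x, y)) * 6 / 5;
    brighter(x, y) = cast<uint8_t>(min(value, 255));

    // Schedule, kept separate from the algorithm (Halide's central idea):
    // vectorize along rows and process rows in parallel.
    brighter.vectorize(x, 16).parallel(y);

    Buffer<uint8_t> output = brighter.realize({640, 480});
    return 0;
}
```

Because the schedule is separate from the algorithm, the same kernel can, in principle, be retargeted to a different processor (for example, a vision DSP) by changing only the schedule and compilation target rather than rewriting the algorithm.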

“Mythic’s Analog Deep Learning Accelerator Chip: High Performance Inference,” a Presentation from Mythic

Frederick Soo, Head of Product Development at Mythic, presents the “Mythic’s Analog Deep Learning Accelerator Chip: High Performance Inference” tutorial at the May 2018 Embedded Vision Summit. This presentation explains how Mythic’s deep learning accelerator chip uses a unique analog circuit approach to deliver massive power, speed and scalability advantages over current generation deep learning

“From 2D to 3D: How Depth Sensing Will Shape the Future of Vision,” a Presentation from Yole Développement

Guillaume Girardin, Division Director for photonics, sensing and displays at Yole Développement, presents the “From 2D to 3D: How Depth Sensing Will Shape the Future of Vision” tutorial at the May 2018 Embedded Vision Summit. For several decades, 3D imaging and sensing technologies have matured, thanks to extensive, successful deployments in high-end applications, mainly in

“Enabling Cross-platform Deep Learning Applications with the Intel CV SDK,” a Presentation from Intel

Yury Gorbachev, Principal Engineer and the Lead Architect for the Computer Vision SDK at Intel, presents the “Enabling Cross-platform Deep Learning Applications with the Intel CV SDK” tutorial at the May 2018 Embedded Vision Summit. Intel offers a wide array of processors for computer vision and deep learning at the edge, including CPUs, GPUs, VPUs
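
The Intel CV SDK has since evolved into the OpenVINO toolkit; as a rough sketch of the cross-platform idea, the hypothetical helper below uses the current OpenVINO C++ API (a stand-in, not the 2018-era SDK calls) to run the same model on different Intel devices by changing only the device string. The model filename and device names are placeholders.

```cpp
#include <openvino/openvino.hpp>
#include <algorithm>
#include <memory>
#include <string>

// Hypothetical helper: load an IR model, compile it for the requested device
// ("CPU", "GPU", ...), and run a single inference with dummy input data.
float run_once(const std::string &model_xml, const std::string &device) {
    ov::Core core;
    std::shared_ptr<ov::Model> model = core.read_model(model_xml);
    ov::CompiledModel compiled = core.compile_model(model, device);
    ov::InferRequest request = compiled.create_infer_request();

    // Zero-filled input as a stand-in for a real preprocessed image.
    ov::Tensor input = request.get_input_tensor();
    std::fill_n(input.data<float>(), input.get_size(), 0.0f);

    request.infer();
    return request.get_output_tensor().data<float>()[0];  // e.g. first score
}

int main() {
    run_once("model.xml", "CPU");  // identical code path...
    run_once("model.xml", "GPU");  // ...retargeted by the device string alone
    return 0;
}
```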

“Computer Vision Hardware Acceleration for Driver Assistance,” a Presentation from Bosch

Markus Tremmel, Chief Expert for ADAS at Bosch, presents the “Computer Vision Hardware Acceleration for Driver Assistance” tutorial at the May 2018 Embedded Vision Summit. With highly automated and fully automated driver assistance systems just around the corner, next-generation ADAS sensors and central ECUs will have much higher safety and functional requirements to cope

“The Roomba 980: Computer Vision Meets Consumer Robotics,” a Presentation from iRobot

Mario Munich, Senior Vice President of Technology at iRobot, presents the “Roomba 980: Computer Vision Meets Consumer Robotics” tutorial at the May 2018 Embedded Vision Summit. In 2015, iRobot launched the Roomba 980, introducing intelligent visual navigation to its successful line of vacuum cleaning robots. The availability of affordable electro-mechanical components, powerful embedded microprocessors and

“How Simulation Accelerates Development of Self-Driving Technology,” a Presentation from AImotive

László Kishonti, founder and CEO of AImotive, presents the “How Simulation Accelerates Development of Self-Driving Technology” tutorial at the May 2018 Embedded Vision Summit. Virtual testing, as discussed by Kishonti in this presentation, is the only solution that scales to address the billions of miles of testing required to make autonomous vehicles robust. However, integrating

“Recognizing Novel Objects in Novel Surroundings with Single-shot Detectors,” a Presentation from the University of North Carolina at Chapel Hill

Alexander C. Berg, Associate Professor at the University of North Carolina at Chapel Hill and CTO of Shopagon, presents the “Recognizing Novel Objects in Novel Surroundings with Single-shot Detectors” tutorial at the May 2018 Embedded Vision Summit. Berg’s group’s 2016 work on single-shot object detection (SSD) reduced the computation cost for accurate detection of object
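
For readers who have not worked with single-shot detectors, the sketch below shows one common way to run a pre-trained SSD-style model (a hypothetical MobileNet-SSD in Caffe format) through OpenCV's dnn module. The model files, input size, scale, and mean values are assumptions for illustration, not taken from Berg's talk.

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>

int main() {
    // Hypothetical pre-trained MobileNet-SSD model files and test image.
    cv::dnn::Net net = cv::dnn::readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                                                 "MobileNetSSD_deploy.caffemodel");
    cv::Mat img = cv::imread("street.jpg");

    // A single-shot detector predicts all boxes and classes in one forward pass
    // over a fixed-size input.
    cv::Mat blob = cv::dnn::blobFromImage(img, 1.0 / 127.5, cv::Size(300, 300),
                                          cv::Scalar(127.5, 127.5, 127.5));
    net.setInput(blob);
    cv::Mat out = net.forward();  // 1x1xNx7: [id, class, conf, x1, y1, x2, y2]

    cv::Mat dets(out.size[2], out.size[3], CV_32F, out.ptr<float>());
    for (int i = 0; i < dets.rows; ++i) {
        if (dets.at<float>(i, 2) < 0.5f) continue;  // confidence threshold
        cv::Rect box(cv::Point(int(dets.at<float>(i, 3) * img.cols),
                               int(dets.at<float>(i, 4) * img.rows)),
                     cv::Point(int(dets.at<float>(i, 5) * img.cols),
                               int(dets.at<float>(i, 6) * img.rows)));
        cv::rectangle(img, box, cv::Scalar(0, 255, 0), 2);
    }
    cv::imwrite("detections.jpg", img);
    return 0;
}
```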

“Building a Typical Visual SLAM Pipeline,” a Presentation from Virgin Hyperloop One

YoungWoo Seo, Senior Director at Virgin Hyperloop One, presents the “Building a Typical Visual SLAM Pipeline” tutorial at the May 2018 Embedded Vision Summit. Maps are important for both human and robot navigation. SLAM (simultaneous localization and mapping) is one of the core techniques for map-based navigation. As SLAM algorithms have matured and hardware has
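
As context for the stages such a pipeline typically includes, here is a minimal two-frame visual-odometry front end sketched with OpenCV: feature extraction, matching, and relative-pose estimation. The image files and camera intrinsics are made up, and a full SLAM system would add triangulation, local bundle adjustment, and loop closure on top of this.

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    // Hypothetical consecutive frames from a calibrated monocular camera.
    cv::Mat img1 = cv::imread("frame_000.png", cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("frame_001.png", cv::IMREAD_GRAYSCALE);

    // 1. Detect and describe features in both frames.
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    // 2. Match descriptors between frames.
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);

    std::vector<cv::Point2f> pts1, pts2;
    for (const cv::DMatch &m : matches) {
        pts1.push_back(kp1[m.queryIdx].pt);
        pts2.push_back(kp2[m.trainIdx].pt);
    }

    // 3. Estimate the relative camera pose from the essential matrix
    //    (placeholder pinhole intrinsics).
    cv::Mat K = (cv::Mat_<double>(3, 3) << 700, 0, 320, 0, 700, 240, 0, 0, 1);
    cv::Mat mask, R, t;
    cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0, mask);
    cv::recoverPose(E, pts1, pts2, K, R, t, mask);

    // 4. A full pipeline would triangulate inlier matches into map points and
    //    refine poses and points with bundle adjustment and loop closure.
    std::cout << "R = " << R << "\nt = " << t << std::endl;
    return 0;
}
```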

“Developing Computer Vision Algorithms for Networked Cameras,” a Presentation from Intel

Dukhwan Kim, Computer Vision Software Architect at Intel, presents the “Developing Computer Vision Algorithms for Networked Cameras” tutorial at the May 2018 Embedded Vision Summit. Video analytics is one of the key elements in network cameras. Computer vision capabilities such as pedestrian detection, face detection and recognition, and object detection and tracking are necessary for
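
As a simple illustration of this kind of analytics, the sketch below pulls frames from a hypothetical network-camera RTSP stream with OpenCV and runs the classical HOG-based pedestrian detector on each frame. The stream URL is a placeholder, and a production system would more likely use a deep-learning detector, but the loop structure is representative.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Hypothetical RTSP stream from a network camera.
    cv::VideoCapture cap("rtsp://192.168.1.10/stream1");

    // Classical HOG + linear-SVM pedestrian detector that ships with OpenCV.
    cv::HOGDescriptor hog;
    hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

    cv::Mat frame;
    while (cap.read(frame)) {
        std::vector<cv::Rect> people;
        hog.detectMultiScale(frame, people);

        for (const cv::Rect &r : people)
            cv::rectangle(frame, r, cv::Scalar(0, 255, 0), 2);

        cv::imshow("pedestrians", frame);
        if (cv::waitKey(1) == 27) break;  // Esc to quit
    }
    return 0;
}
```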
