Processors

“Understanding Real-World Imaging Challenges for ADAS and Autonomous Vision Systems – IEEE P2020,” a Presentation from Algolux

Felix Heide, CTO and Co-founder of Algolux, presents the “Understanding Real-World Imaging Challenges for ADAS and Autonomous Vision Systems – IEEE P2020” tutorial at the May 2018 Embedded Vision Summit. ADAS and autonomous driving systems rely on sophisticated sensor, image processing and neural-network-based perception technologies. This has resulted in effective driver assistance capabilities and […]

“Rapid Development of Efficient Vision Applications Using the Halide Language and CEVA Processors,” a Presentation from CEVA and mPerpetuo

Yair Siegel, Director of Business Development at CEVA, and Gary Gitelson, VP of Engineering at mPerpetuo, present the “Rapid Development of Efficient Vision Applications Using the Halide Language and CEVA Processors” tutorial at the May 2018 Embedded Vision Summit. Halide is a domain-specific programming language for imaging and vision applications that has been adopted by […]
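
Halide’s central idea is separating the *algorithm* (what each pixel is) from the *schedule* (loop order, tiling, parallelism). Real Halide is a C++-embedded DSL; the pure-Python sketch below only illustrates that separation with a 3-tap box blur, and all names in it are illustrative rather than Halide API.

```python
# Illustrative sketch of Halide's algorithm/schedule split (not Halide API).

def blur_x(img, x, y):
    """Algorithm: horizontal 3-tap box blur at (x, y), clamped at edges."""
    w = len(img[0])
    return (img[y][max(x - 1, 0)] + img[y][x] + img[y][min(x + 1, w - 1)]) // 3

def blur_y(img, x, y):
    """Algorithm: vertical 3-tap blur composed over the horizontal pass."""
    h = len(img)
    return (blur_x(img, x, max(y - 1, 0)) + blur_x(img, x, y)
            + blur_x(img, x, min(y + 1, h - 1))) // 3

def realize(img):
    """Schedule: a plain row-major traversal. In Halide, this loop nest
    could be tiled, vectorized, or parallelized without touching the
    algorithm definitions above."""
    h, w = len(img), len(img[0])
    return [[blur_y(img, x, y) for x in range(w)] for y in range(h)]

flat = [[9] * 4 for _ in range(4)]
assert realize(flat) == flat  # blurring a constant image leaves it unchanged
```

Keeping the schedule separate is what lets the same algorithm be retargeted to different processors, such as CEVA vision DSPs, without rewriting the image-processing logic.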

“Mythic’s Analog Deep Learning Accelerator Chip: High Performance Inference,” a Presentation from Mythic

Frederick Soo, Head of Product Development at Mythic, presents the “Mythic’s Analog Deep Learning Accelerator Chip: High Performance Inference” tutorial at the May 2018 Embedded Vision Summit. This presentation explains how Mythic’s deep learning accelerator chip uses a unique analog circuit approach to deliver massive power, speed and scalability advantages over current generation deep learning […]

“Enabling Cross-platform Deep Learning Applications with the Intel CV SDK,” a Presentation from Intel

Yury Gorbachev, Principal Engineer and the Lead Architect for the Computer Vision SDK at Intel, presents the “Enabling Cross-platform Deep Learning Applications with the Intel CV SDK” tutorial at the May 2018 Embedded Vision Summit. Intel offers a wide array of processors for computer vision and deep learning at the edge, including CPUs, GPUs, VPUs […]

“Computer Vision Hardware Acceleration for Driver Assistance,” a Presentation from Bosch

Markus Tremmel, Chief Expert for ADAS at Bosch, presents the “Computer Vision Hardware Acceleration for Driver Assistance” tutorial at the May 2018 Embedded Vision Summit. With highly automated and fully automated driver assistance systems just around the corner, next-generation ADAS sensors and central ECUs will have much higher safety and functional requirements to cope […]

Computer Vision for Augmented Reality in Embedded Designs

Augmented reality (AR) and related technologies and products are becoming increasingly popular and prevalent, led by their adoption in smartphones, tablets and other mobile computing and communications devices. While developers of more deeply embedded platforms are also motivated to incorporate AR capabilities in their products, the comparative scarcity of processing, memory, storage, and networking resources […]

“The Roomba 980: Computer Vision Meets Consumer Robotics,” a Presentation from iRobot

Mario Munich, Senior Vice President of Technology at iRobot, presents the “Roomba 980: Computer Vision Meets Consumer Robotics” tutorial at the May 2018 Embedded Vision Summit. In 2015, iRobot launched the Roomba 980, introducing intelligent visual navigation to its successful line of vacuum cleaning robots. The availability of affordable electro-mechanical components, powerful embedded microprocessors and […]

“Recognizing Novel Objects in Novel Surroundings with Single-shot Detectors,” a Presentation from the University of North Carolina at Chapel Hill

Alexander C. Berg, Associate Professor at the University of North Carolina at Chapel Hill and CTO of Shopagon, presents the “Recognizing Novel Objects in Novel Surroundings with Single-shot Detectors” tutorial at the May 2018 Embedded Vision Summit. The 2016 work by Berg’s group on single-shot object detection (SSD) reduced the computation cost for accurate detection of object […]
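
Single-shot detectors work by tiling the image with a fixed grid of “default boxes” and classifying and regressing all of them in one forward pass, which is where the computational savings come from. The hypothetical sketch below (not the SSD authors’ code) shows the matching step that assigns default boxes to a ground-truth box by intersection-over-union; the function names and the 0.5 threshold are illustrative assumptions.

```python
# Hypothetical sketch of default-box matching in a single-shot detector.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def match_default_boxes(defaults, truth, threshold=0.5):
    """Indices of default boxes that overlap the ground-truth box enough
    to be trained as positives for it."""
    return [i for i, d in enumerate(defaults) if iou(d, truth) >= threshold]

defaults = [(0, 0, 2, 2), (1, 1, 3, 3), (4, 4, 6, 6)]
assert match_default_boxes(defaults, (0, 0, 2, 2)) == [0]
```

Because every default box is scored simultaneously, no separate region-proposal stage is needed, unlike two-stage detectors.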

“Building a Typical Visual SLAM Pipeline,” a Presentation from Virgin Hyperloop One

YoungWoo Seo, Senior Director at Virgin Hyperloop One, presents the “Building a Typical Visual SLAM Pipeline” tutorial at the May 2018 Embedded Vision Summit. Maps are important for both human and robot navigation. SLAM (simultaneous localization and mapping) is one of the core techniques for map-based navigation. As SLAM algorithms have matured and hardware has […]
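
A typical visual SLAM pipeline chains feature extraction, feature matching, pose estimation, and map building. The toy skeleton below only sketches that data flow in one dimension; it is not a working SLAM system, and every function name in it is illustrative (real pipelines use ORB/FAST features, descriptor matching, and geometric pose solvers).

```python
# Toy one-dimensional sketch of a visual SLAM pipeline's data flow.

def extract_features(frame):
    # Real systems detect ORB/FAST corners; here a frame is already a point list.
    return list(frame)

def match_features(prev_feats, curr_feats):
    # Toy index-wise matching; real systems match feature descriptors.
    return list(zip(prev_feats, curr_feats))

def estimate_pose(matches):
    # Toy "pose": mean 1-D displacement between matched features.
    if not matches:
        return 0.0
    return sum(c - p for p, c in matches) / len(matches)

def run_slam(frames):
    """Accumulate the pose along the trajectory while growing the landmark map."""
    trajectory, world_map = [0.0], set()
    prev = extract_features(frames[0])
    world_map.update(prev)
    for frame in frames[1:]:
        curr = extract_features(frame)
        motion = estimate_pose(match_features(prev, curr))
        trajectory.append(trajectory[-1] + motion)  # localization
        world_map.update(curr)                      # mapping
        prev = curr
    return trajectory, world_map

traj, _ = run_slam([[0, 10], [1, 11], [2, 12]])
assert traj == [0.0, 1.0, 2.0]
```

The loop makes the “simultaneous” part concrete: each new frame both refines the robot’s pose and extends the map, and production systems add loop closure to correct accumulated drift.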

“Developing Computer Vision Algorithms for Networked Cameras,” a Presentation from Intel

Dukhwan Kim, Computer Vision Software Architect at Intel, presents the “Developing Computer Vision Algorithms for Networked Cameras” tutorial at the May 2018 Embedded Vision Summit. Video analytics is one of the key elements of networked cameras. Computer vision capabilities such as pedestrian detection, face detection and recognition, and object detection and tracking are necessary for […]

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411