Sensors and Cameras

Multi-sensor Fusion for Robust Device Autonomy

While visible light image sensors may be the baseline “one sensor to rule them all” included in all autonomous system designs, they’re not a panacea on their own. They are best combined with other sensor technologies: “situational awareness” sensors such as standard and high-resolution radar, LiDAR, infrared and UV, ultrasound and sonar, etc.; and “positional awareness” sensors such as […]
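
As a flavor of how measurements from complementary sensors get combined, here is a minimal sketch (not from the article) of inverse-variance fusion of two noisy range estimates, say one from a depth camera and one from radar; all readings and noise figures below are hypothetical illustration values.

    # Minimal sketch of inverse-variance sensor fusion: combine two noisy
    # estimates of the same range, weighting each by its (hypothetical) noise.

    def fuse_measurements(z1, var1, z2, var2):
        """Fuse two noisy estimates of one quantity via inverse-variance weighting."""
        w1, w2 = 1.0 / var1, 1.0 / var2
        fused = (w1 * z1 + w2 * z2) / (w1 + w2)
        fused_var = 1.0 / (w1 + w2)   # fused estimate is less noisy than either input
        return fused, fused_var

    # Hypothetical readings: camera depth 10.4 m (noisy), radar 10.1 m (cleaner).
    distance, variance = fuse_measurements(10.4, 0.50, 10.1, 0.05)
    print(f"fused range: {distance:.2f} m (variance {variance:.3f})")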

Intel Announces New Class of RealSense Stand-Alone Inside-Out Tracking Camera

January 23, 2019 – What’s New: Intel today introduced the Intel® RealSense™ Tracking Camera T265, a new class of stand-alone inside-out tracking device that will provide developers with a powerful building block for autonomous devices, delivering high-performance guidance and navigation. The T265 uses proprietary visual inertial odometry simultaneous localization and mapping (V-SLAM) technology with computing at the edge
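
For developers who want to experiment with the device, the short sketch below reads the T265’s 6-DoF pose stream using Intel’s librealsense Python bindings (pyrealsense2). It assumes a T265 is connected over USB; the 50-frame loop length is an arbitrary choice for illustration.

    # Minimal sketch: stream 6-DoF pose data from a RealSense T265 via pyrealsense2.
    import pyrealsense2 as rs

    pipe = rs.pipeline()
    cfg = rs.config()
    cfg.enable_stream(rs.stream.pose)        # the T265 exposes a dedicated pose stream
    pipe.start(cfg)
    try:
        for _ in range(50):                  # arbitrary demo length
            frames = pipe.wait_for_frames()
            pose = frames.get_pose_frame()
            if pose:
                data = pose.get_pose_data()  # translation, velocity, rotation, confidence
                print("position:", data.translation, "| velocity:", data.velocity)
    finally:
        pipe.stop()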

“Key Trends Driving the Proliferation of Visual Perception,” a Presentation from the Embedded Vision Alliance

On December 4, 2018, Embedded Vision Alliance founder Jeff Bier delivered the presentation “The Four Key Trends Driving the Proliferation of Visual Perception” to the Bay Area Computer Vision and Deep Learning Meetup Group. From the event description: recent updates in computer vision markets and technology. Computer vision has gained…

Using Calibration to Translate Video Data to the Real World

This article was originally published at NVIDIA's website. It is reprinted here with the permission of NVIDIA. DeepStream SDK 3.0 is about seeing beyond pixels. DeepStream exists to make it easier for you to go from raw video data to metadata that can be analyzed for actionable insights. Calibration is a key step in this
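
As a taste of what such calibration enables (independently of DeepStream itself), the sketch below uses an OpenCV homography to map image pixels onto flat ground-plane coordinates in meters. The four pixel-to-world correspondences are hypothetical; in a real deployment they would come from surveyed reference points in the camera’s view.

    # Minimal sketch: pixel-to-world mapping on a flat ground plane via homography.
    import numpy as np
    import cv2

    # Four hypothetical correspondences: image pixels -> ground-plane meters.
    pixel_pts = np.float32([[320, 720], [960, 720], [900, 400], [380, 400]])
    world_pts = np.float32([[0.0, 0.0], [3.5, 0.0], [3.5, 20.0], [0.0, 20.0]])
    H = cv2.getPerspectiveTransform(pixel_pts, world_pts)

    # Project a detected object's foot point from pixels into world coordinates.
    foot = np.float32([[[640, 600]]])        # shape (1, 1, 2), as OpenCV expects
    wx, wy = cv2.perspectiveTransform(foot, H)[0, 0]
    print(f"object at roughly ({wx:.2f} m, {wy:.2f} m) on the ground plane")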

“Harnessing Cloud Computer Vision In a Real-time Consumer Product,” a Presentation from Cocoon Cam

Pavan Kumar, Co-founder and CTO at Cocoon Cam, delivers the presentation "Harnessing Cloud Computer Vision In a Real-time Consumer Product," at the Embedded Vision Alliance's September 2018 Vision Industry and Technology Forum. Kumar explains how his successful start-up is using edge and cloud vision computing to bring amazing new capabilities to the previously stagnant product

“Outside-In Autonomous Systems,” a Presentation from Microsoft

Jie Liu, Visual Intelligence Architect in the Cloud and AI Platforms Group at Microsoft, delivers the presentation "Outside-In Autonomous Systems" at the Embedded Vision Alliance's September 2018 Vision Industry and Technology Forum. Liu shares his company's vision for smart environments that observe and understand space, people and things.

“Embedded AI for Smart Cities and Retail in China,” a Presentation from Horizon Robotics

Yufeng Zhang, VP of Global Business at Horizon Robotics, presents the “Embedded AI for Smart Cities and Retail in China” tutorial at the May 2018 Embedded Vision Summit. Over the past ten years, online shopping has changed the way we do business. Now, with the development of AI technology, we are seeing the beginning of

May 2018 Embedded Vision Summit Vision Tank Competition Finalist Presentations

Anil Cheriyadat, CEO of Sturfee, João Diogo Falcão, Director of Product at AiFi, Dan Connors, CTO of Boulder AI, Carlo Dal Mutto, CTO of Aquifi, and Deepak Gaddipati, CTO of VirtuSense Technologies, deliver their Vision Tank finalist presentations at the May 2018 Embedded Vision Summit. The Vision Tank, a unique spin on the Shark Tank

“Introduction to Optics for Embedded Vision,” a Presentation from Edmund Optics

Jessica Gehlhar, Vision Solutions Engineer at Edmund Optics, presents the "Introduction to Optics for Embedded Vision" tutorial at the May 2018 Embedded Vision Summit. This talk provides an introduction to optics for embedded vision system and algorithm developers. Gehlhar begins by presenting fundamental imaging lens specifications and quality metrics. She explains key parameters and concepts
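
As a worked example of one fundamental lens specification the talk covers, the sketch below computes horizontal angular field of view from focal length and sensor width using the standard thin-lens relation AFOV = 2·atan(h / 2f). The sensor and lens values are hypothetical.

    # Minimal sketch: angular field of view from sensor width and focal length.
    import math

    def angular_fov_deg(sensor_width_mm, focal_length_mm):
        """Horizontal AFOV = 2 * atan(sensor_width / (2 * focal_length))."""
        return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

    # A 1/2.3-inch sensor (about 6.17 mm wide) behind a 4 mm lens:
    print(f"{angular_fov_deg(6.17, 4.0):.1f} degrees horizontal FOV")  # ~75.3 degrees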

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.
