
Introducing Temporian: Tryolabs and Google Venture in Temporal Data Processing

This blog post was originally published at Tryolabs’ website. It is reprinted here with the permission of Tryolabs. Today marks a significant milestone for us at Tryolabs as we introduce Temporian. In collaboration with Google, we’ve designed this tool to address the multifaceted challenges of temporal data processing head-on. Let’s explore the inspiration, functionality, and […]


“Generative AI: How Will It Impact Edge Applications and Machine Perception?,” An Embedded Vision Summit Expert Panel Discussion

Sally Ward-Foxton, Senior Reporter at EE Times, moderates the “Generative AI: How Will It Impact Edge Applications and Machine Perception?” Expert Panel at the May 2023 Embedded Vision Summit. Other panelists include Greg Kostello, CTO and Co-Founder of Huma.AI, Vivek Pradeep, Partner Research Manager at Microsoft, Steve Teig, CEO of Perceive, and Roland Memisevic, Senior […]


“Frontiers in Perceptual AI: First-person Video and Multimodal Perception,” a Keynote Presentation from Kristen Grauman

Kristen Grauman, Professor at the University of Texas at Austin and Research Director at Facebook AI Research, presents the “Frontiers in Perceptual AI: First-person Video and Multimodal Perception” tutorial at the May 2023 Embedded Vision Summit. First-person or “egocentric” perception requires understanding the video and multimodal data that streams from wearable cameras and other sensors.


“Multiple Object Tracking Systems,” a Presentation from Tryolabs

Javier Berneche, Senior Machine Learning Engineer at Tryolabs, presents the “Multiple Object Tracking Systems” tutorial at the May 2023 Embedded Vision Summit. Multiple object tracking (MOT) is an essential capability in many computer vision systems, including applications in fields such as traffic control, self-driving vehicles, sports and more. In this session, Berneche walks through the […]
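To make the MOT idea concrete: a common baseline (not necessarily the approach covered in the session) associates each new detection with an existing track by greedy intersection-over-union (IoU) matching. The sketch below is a minimal, generic illustration; all names and the `iou_threshold` value are illustrative assumptions, not taken from the talk.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def update_tracks(tracks, detections, iou_threshold=0.3):
    """Greedily assign each detection to the free track with the highest
    IoU above the threshold; unmatched detections start new tracks.
    `tracks` maps track id -> last known box. Returns the updated mapping."""
    next_id = max(tracks, default=-1) + 1
    assigned = {}
    free = set(tracks)
    for det in detections:
        best_id, best_iou = None, iou_threshold
        for tid in free:
            score = iou(tracks[tid], det)
            if score > best_iou:
                best_id, best_iou = tid, score
        if best_id is None:       # no sufficiently overlapping track: new id
            best_id = next_id
            next_id += 1
        else:                     # claim the matched track
            free.discard(best_id)
        assigned[best_id] = det
    return assigned
```

Production trackers refine this baseline with motion models (e.g., Kalman filters), appearance embeddings, and optimal assignment (Hungarian algorithm) instead of greedy matching.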


“Open Standards Unleash Hardware Acceleration for Embedded Vision,” a Presentation from the Khronos Group

Neil Trevett, President of the Khronos Group and Vice President of Developer Ecosystems at NVIDIA, presents the “Open Standards Unleash Hardware Acceleration for Embedded Vision” tutorial at the May 2023 Embedded Vision Summit. Offloading visual processing to a hardware accelerator has many advantages for embedded vision systems. Decoupling hardware and software removes barriers to innovation […]


Top Factors to Consider When Integrating Multiple Cameras into Embedded Vision Applications

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Integrating multiple cameras into a camera-enabled system offers significant advantages, making it a key requirement in various applications, including industrial automation, surveillance, virtual reality, and more. Discover the factors that play a crucial role […]


“Responsible AI: Tools and Frameworks for Developing AI Solutions,” a Presentation from Intel

Mrinal Karvir, Senior Cloud Software Engineering Manager at Intel, presents the “Responsible AI: Tools and Frameworks for Developing AI Solutions” tutorial at the May 2023 Embedded Vision Summit. Over 90% of businesses using AI say trustworthy and explainable AI is critical to business, according to Morning Consult’s IBM Global AI Adoption Index 2021. If not […]


Immervision Awarded $5.7 Million Contract from DRDC to Develop Panoramic Imaging Systems

MONTREAL, September 5, 2023 – Immervision, the world’s leading developer of advanced vision systems combining optics, image processing, and sensor fusion technology, is pleased to announce the award of a $5.7 million contract from Defence Research and Development Canada (DRDC) for the design and development of panoramic imaging components and systems. This contract underscores Immervision’s industry-leading expertise […]


Reimagining Indoor Localization with Dragonfly: A Glimpse into Uncharted Precision

This blog post was originally published by Onit. It is reprinted here with the permission of Onit. Hello tech enthusiasts! Today, we’re diving into the dynamic world of indoor localization once again, this time with a closer look at the ingenious technology driving Dragonfly. As many of you are already aware, Dragonfly stands as a […]


“Next-generation Computer Vision Methods for Automated Navigation of Unmanned Aircraft,” a Presentation from Immervision

Julie Buquet, Applied Researcher for Imaging and AI at Immervision, presents the “Next-generation Computer Vision Methods for Automated Navigation of Unmanned Aircraft” tutorial at the May 2023 Embedded Vision Summit. Unmanned aircraft systems (UASs) need to perform accurate autonomous navigation using sense-and-avoid algorithms under varying illumination conditions. This requires robust algorithms able to perform consistently […]


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone: +1 (925) 954-1411