
Robotaxis on the Rise: Exploring Autonomous Vehicles

Robotaxis are proving they can offer driverless services in certain cities as a form of accessible, modern public transport. IDTechEx states in its latest report, “Autonomous Vehicles Market 2025-2045: Robotaxis, Autonomous Cars, Sensors”, that testing is taking place worldwide, with the most commercial deployment currently happening in China. The report explores the commercial readiness […]


Federated Learning: Risks and Challenges

This blog post was originally published at Digica’s website. It is reprinted here with the permission of Digica. In the first article of our mini-series on Federated Learning (FL), Privacy-First AI: Exploring Federated Learning, we introduced the basic concepts behind this decentralized training approach and presented potential applications in several domains. Undoubtedly, FL […]
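
To make the decentralized training idea concrete, here is a minimal sketch of federated averaging (FedAvg), the canonical FL aggregation scheme: each client trains on its own private data, and only model weights travel to the server. The toy least-squares model, client data, and hyperparameters below are illustrative assumptions, not code from the Digica article.

import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    # One client's local training: a few gradient steps on its private data.
    w = weights.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

def fed_avg_round(global_w, client_data):
    # Server round: broadcast the global weights, then average the returned
    # local weights, weighted by each client's dataset size.
    local_ws = [local_update(global_w, X, y) for X, y in client_data]
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    return np.average(local_ws, axis=0, weights=sizes)

# Toy run with three simulated clients; raw data never leaves a client here.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
w = np.zeros(3)
for _ in range(10):
    w = fed_avg_round(w, clients)
print("global weights after 10 rounds:", w)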


Vision Components at SPIE Photonics West: MIPI Vision Components for RPi5 with Hailo TPU

Ettlingen, January 16, 2025. Vision Components presents an expanded portfolio of its MIPI vision components at SPIE Photonics West, which takes place from January 27 to February 1, 2025, in San Francisco. For the first time, a demo will show the full support of the VC MIPI Cameras with the new Raspberry Pi 5, including […]


Edge Intelligence and Interoperability are the Key Components Driving the Next Chapter of the Smart Home

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. The smart home industry is on the brink of a significant leap forward, fueled by generative AI and edge capabilities. The smart home is evolving to include advanced capabilities, such as digital assistants that interact like friends […]


NAMUGA Signs Strategic Partnership MOU with Singapore’s METAOPTICS, Pioneers of Metalens Technology

January 16, 2025 – NAMUGA CEO Don Lee and METAOPTICS Technologies CEO Mark Thng signed the MOU, committing to future collaboration in applying metalens technology to AI vision solution modules. This partnership will focus on applications such as smartphone cameras, AR/VR devices, autonomous vehicles, and facial recognition systems using 3D-sensing LiDAR. The companies […]


Key Takeaways from CES 2025: Innovations in Semiconductors, IoT, and Autonomous Driving

The Consumer Electronics Show (CES) 2025 has once again proven to be a hotbed of innovation and technological advancement. Attendance was back to pre-pandemic levels, with an estimated 150,000 attendees. There was no single central theme apart from AI being embedded everywhere; AI was not the be-all and end-all, however […]


Edge AI and Vision Insights: January 15, 2025

LETTER FROM THE EDITOR Dear Colleague, This is your last chance to submit a product for consideration in the 2025 Product of the Year Awards from the Edge AI and Vision Alliance. The deadline is this Friday, January 17, so act now to ensure you don’t miss this once-a-year opportunity. Award winners receive: Year-round promotion […]


Single- vs. Multi-camera Systems: The Ultimate Guide

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Single- and multiple-camera setups play a vital role in today’s vision systems, enhancing applications such as automated sports broadcasting, industrial robots, traffic monitoring, and more. Explore the key differences, advantages, and real-world applications of […]
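
As a rough illustration of the difference in plumbing between the two setups, the sketch below grabs frames from one camera versus two using OpenCV. The device indices are hypothetical, and real multi-camera systems also need synchronization and calibration, which this sketch omits.

import cv2

def grab_frames(device_indices):
    # Open each camera index, grab a single frame, and release the device.
    frames = {}
    for idx in device_indices:
        cap = cv2.VideoCapture(idx)
        ok, frame = cap.read()
        if ok:
            frames[idx] = frame
        cap.release()
    return frames

single = grab_frames([0])      # single-camera setup: one viewpoint
multi = grab_frames([0, 1])    # multi-camera setup: more coverage, more data to fuse
print({idx: frame.shape for idx, frame in multi.items()})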


Accelerate Custom Video Foundation Model Pipelines with New NVIDIA NeMo Framework Capabilities

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Generative AI has evolved from text-based models to multimodal models, with a recent expansion into video, opening up new potential uses across various industries. Video models can create new experiences for users or simulate scenarios for training […]

