Edge AI and Vision Insights: November 20, 2024

LETTER FROM THE EDITOR

Dear Colleague, 

If your company introduced a building-block product enabling computer vision or edge AI in 2024, we hope you’ll consider entering it in the 2025 Edge AI and Vision Product of the Year Awards competition! These Awards, open only to Alliance Member companies, give the winners industry-wide recognition as top providers of building-block technologies that form the foundation of edge AI- and vision-based products.

Why enter? Past winners tell us that winning this award is a fantastic way to:

  1. Boost your product’s reputation: Stand out as the best in a dynamic, crowded market.
  2. Earn industry credibility: Be recognized by a panel of independent expert judges.
  3. Increase visibility: Showcase your product throughout the year with promotion by the Edge AI and Vision Alliance.

Learn more and enter today!

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

COMBATING AI BIAS

Identifying and Mitigating Bias in AI

From autonomous driving to immersive shopping, and from enhanced video collaboration to graphic design, AI is placing a wealth of possibilities at our fingertips. However, AI comes with vulnerabilities, which can result in costly mishaps. In this talk, Nikita Tiwari, AI Enabling Engineer for OEM PC Experiences in the Client Computing Group at Intel, explores risks related to bias in AI models. She examines the different types of biases that can arise in defining, training, evaluating and deploying AI models, and illustrates them with examples. She then introduces practical techniques and tools for detecting and mitigating bias, outlining their capabilities and limitations. She also touches on fairness metrics that can be useful when developing models.

Harm and Bias Evaluation and Solution for Adobe Firefly

In this presentation, Rebecca Li, Machine Learning Engineering Manager at Adobe, explores the comprehensive approach Adobe has taken to mitigate harm and bias for Firefly, Adobe’s groundbreaking AI art generation tool, integrating AI ethics across all development stages. From design to deployment, Firefly prioritizes ethical considerations, embedding AI ethics into every process. Li showcases strategies for identifying and mitigating potential harm and bias, utilizing advanced AI techniques to ensure content safety. Additionally, she emphasizes her company’s active involvement in shaping the global discourse on technology, highlighting the importance of collective responsibility in fostering ethical AI practices. You’ll learn how to navigate the complex landscape of AI ethics, with insights and strategies to promote responsible innovation and accountability in the digital era.

TECHNOLOGY FUNDAMENTALS

Introduction to Computer Vision with Convolutional Neural Networks

This presentation covers the basics of computer vision using convolutional neural networks. Mohammad Haghighat, Senior Manager for CoreAI at eBay, begins by introducing some important conventional computer vision techniques and then transitions to explaining the basics of machine learning and convolutional neural networks (CNNs) and showing how CNNs are used in visual perception. Haghighat illustrates the building blocks and computational elements of neural networks through examples. You’ll gain a solid overview of how modern computer vision algorithms are designed, trained and used in real-world applications.

Introduction to Depth Sensing

We live in a three-dimensional world, and the ability to perceive in three dimensions is essential for many systems. In this talk, Harish Venkataraman, Depth Cameras Architecture and Tech Lead at Meta, introduces the main types of depth cameras, including passive and active stereo; structured light; and direct and indirect time-of-flight cameras. He explains how each of these types of cameras works and highlights their key strengths and weaknesses. He also touches on applications enabled by depth cameras.

UPCOMING INDUSTRY EVENTS

The CMOS Image Sensor Industry: Technology Trends and Emerging Applications – Yole Group Webinar: December 3, 2024, 9:00 am PT

Sensing In ADAS and Autonomous Vehicles: What’s Winning, and Why? – TechInsights Webinar: January 28, 2025, 9:00 am PT

Embedded Vision Summit: May 20-22, 2025, Santa Clara, California

More Events

FEATURED NEWS

Microchip Technology’s PolarFire FPGA Ethernet Sensor Bridge Accelerates Real-time Edge AI with NVIDIA’s Holoscan

Renesas Electronics Brings the High Performance Arm Cortex-M85 Processor to Cost-sensitive Applications with Its Latest RA8 Entry-line MCUs

STMicroelectronics’ New Web-based Development Tool Accelerates AIoT Projects with Smart Sensors

Qualcomm Accelerates the Evolution of Software-defined Vehicles with Its New Snapdragon Cockpit Elite and Snapdragon Ride Elite Platforms

MIPS’ P8700 AI-enabled RISC-V Processor for ADAS and Autonomous Vehicles Has Reached General Release Status

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Tenyks Data-Centric CoPilot for Vision (Best Edge AI Developer Tool)

Tenyks’ Data-Centric CoPilot for Vision is the 2024 Edge AI and Vision Product of the Year Award Winner in the Edge AI Developer Tools category. The Data-Centric CoPilot for Vision platform helps computer vision teams develop production-ready models 8x faster. The platform enables machine learning (ML) teams to mine edge cases, failure patterns and annotation quality issues for more accurate, capable and robust models. In addition, it helps ML teams intelligently sub-sample datasets to increase model quality and cost efficiency. The platform supports the use of multimodal prompts to quickly compare model performance on customized training scenarios, such as pedestrians jaywalking at dusk, in order to discover blind spots and enhance reliability. ML teams can also leverage powerful search functionality to conduct data curation in hours vs. weeks. One notable feature of the platform is its multimodal Embeddings-as-a-Service (EaaS) to expertly organize, curate, and manage datasets. Another key platform feature is the streamlined cloud integration, supporting a multitude of cloud storage services and facilitating effortless access and management of large-scale datasets.

Please see here for more information on Tenyks’ Data-Centric CoPilot for Vision. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
Phone: +1 (925) 954-1411