Edge AI and Vision Insights: March 23, 2022 Edition

LETTER FROM THE EDITOR
Dear Colleague,

We’re excited to announce the keynote speaker for the May 16-19 Embedded Vision Summit: Dr. Ryad Benosman, a leader in neuromorphic sensing and computing. Dr. Benosman is a Professor at the University of Pittsburgh and an Adjunct Professor at the CMU Robotics Institute. He is widely recognized as a pioneer and visionary in neuromorphic sensing and processing. In his keynote talk, “Event-based Neuromorphic Perception and Computation: The Future of Sensing and AI,” Benosman will introduce the fundamentals of bio-inspired, event-based image sensing and processing approaches, explore their strengths and weaknesses, and show that bio-inspired vision systems have the potential to dramatically outperform conventional visual AI approaches.

The Embedded Vision Summit, returning to an in-person format this year in Santa Clara, California, is the key event for system and application developers who are incorporating computer vision and visual AI into products. It attracts a unique audience of over 1,000 product creators, entrepreneurs and business decision-makers who create and use computer vision and visual AI technologies, and it’s the premier venue for learning, sharing insights and getting the word out about interesting new technologies, techniques, applications, products and practical breakthroughs in the field.

Once again we’ll be offering a packed program with 100+ sessions, 50+ technology exhibits and 100+ demos, all covering the technical and business aspects of practical computer vision, deep learning, visual AI and related technologies. New for 2022 are the Edge AI Deep Dive Days, a series of in-depth sessions focused on specific topics in visual AI at the edge. Registration is now open, and you can save 15% by using the code SUMMIT22-NL. Register now and tell a friend! You won’t want to miss what is shaping up to be our best Summit yet.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

VISION SYSTEM FUNDAMENTALS

Building the Eyes of a Vision System: From Photons to Bits
In this tutorial from last year’s Embedded Vision Summit, Jon Stern, Director of Optical Systems at GoPro, presents a guide to the multidisciplinary science of building the eyes of a vision system. CMOS image sensors have been instrumental in lowering the barrier for embedding vision into systems. Their high degree of integration allows photons to be converted into bits with minimal support circuitry. Simple protocols and interfaces mean that companies can design camera-based systems with comparatively little specialist expertise. To produce high-quality output, the image sensor and optics must be carefully co-optimized to fit the application. To assist with component selection and help avoid common pitfalls, Stern describes the key parameters and provides a practical guide to selecting both sensor and optics for a camera. He also provides an introduction to other hardware considerations and to correcting optical aberrations in the image processing pipeline.
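To give a flavor of the kind of sensor/lens co-optimization Stern discusses, the short Python sketch below computes a camera’s field of view and checks whether the lens or the sensor limits resolution. It is a generic illustration, not material from the talk, and every number in it is an assumption chosen for demonstration.

```python
# A back-of-the-envelope look at sensor/lens matching. All numbers are
# illustrative assumptions, not values from Stern's talk.
import math

pixel_pitch_um = 2.0      # assumed sensor pixel size
sensor_width_mm = 6.4     # assumed active sensor width
focal_length_mm = 8.0     # assumed lens focal length
f_number = 2.8            # assumed lens aperture (f/#)
wavelength_um = 0.55      # green light, near the peak of human sensitivity

# Horizontal field of view from the pinhole-camera model.
hfov_deg = math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Diffraction-limited Airy-disk diameter: ~2.44 * wavelength * f/#.
airy_um = 2.44 * wavelength_um * f_number

print(f"Horizontal FOV: {hfov_deg:.1f} degrees")
print(f"Airy disk: {airy_um:.2f} um across pixels of {pixel_pitch_um:.1f} um")
if airy_um > 2 * pixel_pitch_um:
    print("The lens, not the sensor, limits resolution at this aperture.")
```

Sketches like this make it easy to see, for example, why small-pixel sensors demand fast optics: stopping the illustrative lens above down to f/8 would grow the Airy disk to about 10.7 µm, blurring across more than five of its 2 µm pixels.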

A Survey of CMOS Imagers and Lenses—and the Trade-offs You Should Consider
Selecting the right imager and lens for your vision application is often a daunting challenge due to the vast number of products on the market and the large technical and performance differences between different product lines and technologies. In this talk, Chris Osterwood, Founder and CEO of Capable Robot Components, presents overviews of the imager and lens market trade-spaces as well as an analysis showing some unexpected correlations and clusters in product offerings. This analysis will help you understand performance, size, weight and cost trade-offs and will help guide your component selections and integration.

COMPREHENDING AND QUANTIFYING DEEP LEARNING MODEL CAPABILITIES

Explainability in Computer Vision: A Machine Learning Engineer’s Overview
With the increasing use of deep neural networks in computer vision applications, it has become harder for developers to explain how their algorithms work. This makes it difficult to establish trust and confidence among customers and other stakeholders, such as regulators, and it also hampers developers’ efforts to improve their own solutions. In this talk, Navaneeth Kamballur Kottayil, Lead Machine Learning Developer at AltaML, introduces methods for enabling explainability in deep-learning-based computer vision solutions. He also illustrates some of these techniques via real-world examples, showing how they can be used to improve customer trust in computer vision models, to debug models, to obtain additional insights about data and to detect bias.
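To make this concrete, here is a minimal saliency-map sketch in the Grad-CAM style, one widely used family of explainability techniques. It is a generic illustration rather than code from the talk; the model (torchvision’s ResNet-18) and the hooked layer are assumptions chosen for demonstration.

```python
# A minimal Grad-CAM-style saliency sketch in PyTorch. The model and layer
# choices are illustrative assumptions, not taken from the presentation.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
activations, gradients = {}, {}

# Capture the feature maps and their gradients at the last conv block.
model.layer4.register_forward_hook(
    lambda m, i, o: activations.update(feat=o.detach()))
model.layer4.register_full_backward_hook(
    lambda m, gi, go: gradients.update(feat=go[0].detach()))

x = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed image
scores = model(x)
scores[0, scores.argmax()].backward()  # gradient of the top-class score

# Weight each feature map by its mean gradient, sum, and rectify.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
# `cam` is a [0, 1] heatmap of the regions that most drove the prediction;
# overlaying it on the input image is the usual way to inspect the model.
```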

Benchmarking Value in the Age of AI: Rational Decisions About Black Box Technology
The combination of AI and embedded vision promises compelling solutions to an almost infinite range of practical problems, often vastly outperforming historical techniques, while stimulating a pace of development unsurpassed since the early days of the internet. As innovative companies progress through each stage of the life cycle, from launch through growth and towards successful exits, assessments of a company’s commercial value become critical. With many AI vision companies relying on technical excellence as the foundation of competitive advantage, it has become increasingly difficult for external investors and stakeholders to make informed judgments about value. In this presentation, Chris Yates, Director at Vision Ventures, shares his company’s practical approach to assessing the value of AI companies, with the aim of helping investors make more informed decisions, helping companies effectively communicate their value and assisting acquirers in identifying and valuing acquisition targets.

UPCOMING INDUSTRY EVENTS

Embedded Vision Summit: May 16-19, 2022, Santa Clara, California

More Events

FEATURED NEWS

MathWorks’ MATLAB and Simulink Release 2022a Support Simulating and Testing Automated Driving Systems

Lattice Semiconductor Expands Its mVision Solution Stack with New Image Processing and Bridging Capabilities

e-con Systems Launches an 8 MPixel UVC USB Camera with High Dynamic Range and Dual Stream Support

STMicroelectronics Transforms Digital Vision With Its First 0.5 Mpixel Depth Image Sensor

Alliance Member companies EdgeCortix, Quadric and Retrocausal have all made recent investment announcements.

More News

EMBEDDED VISION SUMMIT SPONSOR SHOWCASE

Attend the Embedded Vision Summit to meet these and other leading computer vision and edge AI technology suppliers!

Edge Impulse
Edge Impulse is a leading development platform for machine learning on edge devices. The company’s mission is to provide every developer and device maker with the best development and deployment experience for machine learning on the edge, focusing on sensor, audio and computer vision applications.

Qualcomm
For more than 30 years, Qualcomm has served as the essential accelerator of wireless technologies and the ever-growing mobile ecosystem. Now our inventions are set to transform other industries by bringing connectivity, machine vision and intelligence to billions of machines and objects, catalyzing the IoT.


Contact

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596
Phone: +1 (925) 954-1411