Edge AI and Vision Insights: April 6, 2022 Edition

LETTER FROM THE EDITOR
Dear Colleague,

Need to see the latest edge AI and vision techniques, processors, tools and algorithms? Looking for live technology demos, not just PowerPoint presentations? Want to meet the experts behind the latest innovations? If you answered “yes” to any or all of these questions, then you should be at the Embedded Vision Summit, coming up in person May 16-19 in Santa Clara, California. It’s the key event for system and application developers who are incorporating practical computer vision and visual AI into products, and you don’t want to miss it.

Why? Here’s what makes the Summit different from other tech conferences:

  • The Summit is created by innovators, for innovators – the conference organizers work in the industry
  • We have a relentless focus on practical information for people incorporating vision and AI into products to solve real-world problems
  • We’ve been doing this for 11 years
  • A whopping 98% of our attendees would recommend the Summit to a colleague

The Summit attracts a unique audience of over 1,000 product creators, entrepreneurs and business decision-makers who are creating and using computer vision and visual AI technologies. It’s an ideal venue for learning, sharing insights and getting the word out about interesting new technologies, techniques, applications, products and practical breakthroughs in computer vision and visual AI.

Once again we’ll be offering a packed program with 80+ sessions, 50+ technology exhibits, and 100+ demos, all covering the technical and business aspects of practical computer vision, deep learning, visual AI and related technologies. And new for 2022 are the Edge AI Deep Dive Days, a series of in-depth sessions focused on specific topics in visual AI at the edge. Registration is now open, and you can save 15% by using the code SUMMIT22-NL. Register today and tell a friend! You won’t want to miss what is shaping up to be our best Summit yet.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

VISION FOR VEHICULAR AUTONOMY

Automotive Vision – What’s Growing, What’s Not, and Why? (Strategy Analytics)
In this presentation from last year’s Embedded Vision Summit, Ian Riches, Vice President of the Automotive Practice and Director at Strategy Analytics, looks at the key applications and use cases that are driving rapid adoption of vision systems in automotive applications, and at where other sensor types, such as radar and lidar, fit into this picture. He also highlights the automotive applications he thinks are unlikely to see rapid lift-off in the next five to seven years, and explains why. He concludes with a look at how the structure of the global automotive industry is changing and what the implications are for existing players and new entrants alike.

Deploying an Autonomous Valet Parking Solution with Cameras Using AI (Nextchip and VADAS)
In this talk, James Kim, Marketing Team Leader at Nextchip, explains why his company targeted autonomous valet parking (AVP) with its newest APACHE6 SoC, and how Nextchip tailored the chip design for this application. Co-presenter Peter Lee, CEO of VADAS, shares insights from his company’s development of camera-based AVP software, and provides an overview of the AVP software stack.

COMBINING DIFFERENT TYPES OF ALGORITHMS AND SENSOR DATA

Flexible Machine Learning Solutions with FPGAs (Lattice Semiconductor)
The ability to perform neural network inference in resource-constrained devices is fueling the growth of machine learning at the edge. But application solutions require more than just inference—they also incorporate aggregation and pre-processing of input data, and post-processing of inference results. In addition, new neural network topologies are emerging rapidly. This diversity of functionality and quick evolution of topologies means that processing engines must have the flexibility to execute different types of workloads. I/O flexibility is also key, to enable system developers to choose the best sensor and connectivity options for their applications. In this talk, Sreepada Hegade, Senior Manager for ML Software and Solutions at Lattice Semiconductor, explores how the configurable nature of Lattice FPGAs and the soft cores implemented on them allow for quick adoption of emerging neural network topologies, efficient execution of pre- and post-processing functions, and flexible I/O interfacing. He also shows how his company optimizes network topologies and its compiler to get the best out of FPGAs.
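
As a concrete (and purely illustrative) picture of the pipeline view described above, the sketch below shows the three stages an edge ML solution must budget for: pre-processing input data, running inference, and post-processing the results. All names and the toy “network” are hypothetical assumptions for illustration; this is plain Python, not Lattice’s tooling.

    import numpy as np

    def preprocess(frame):
        # Scale 8-bit pixels to [0, 1] and add a batch axis -- a stand-in
        # for the aggregation/pre-processing stage described above.
        return (frame.astype(np.float32) / 255.0)[np.newaxis, ...]

    def infer(tensor):
        # Stand-in for the inference engine (e.g., a soft core on an FPGA):
        # here, just a fixed random projection to 10 class scores.
        flat = tensor.reshape(1, -1)
        weights = np.random.default_rng(0).standard_normal((flat.shape[1], 10))
        return flat @ weights

    def postprocess(logits):
        # Turn raw scores into an application-level result (a class index).
        return int(np.argmax(logits))

    frame = np.zeros((32, 32, 3), dtype=np.uint8)  # placeholder camera frame
    print(postprocess(infer(preprocess(frame))))

The point is not the toy math but the structure: a deployable solution has to execute all three stages efficiently, which is where the processing and I/O flexibility discussed in the talk comes in.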

Highly Scalable Sensor Hub DSP for Computer Vision, AI and Multi-sensor Fusion for Contextually Aware Devices (CEVA)
Contextual awareness is the ability of a system to gather information about its environment and adapt its behavior accordingly. It is enabling enriched user experiences in applications like XR, robotics and automotive, and it requires processing data from multiple sensors—such as cameras, radar, lidar and motion sensors—using various computer vision and neural network workloads. In this presentation, Gil Abraham, Director of Business Development in the Vision and AI Business Unit at CEVA, covers CEVA’s latest SensPro2 sensor hub DSP family for contextual awareness processing, including its scalable architecture (combining parallel signal processing and AI inferencing) as well as the unique CEVA neural network optimizing compiler that supports it.
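
For readers who want a feel for what a multi-sensor fusion workload looks like, here is a minimal, self-contained sketch (hypothetical code, not the SensPro2 API): a classic complementary filter that fuses gyroscope and accelerometer readings into a single pitch-angle estimate.

    import math

    def fuse_pitch(pitch_prev, gyro_rate, accel_y, accel_z, dt, alpha=0.98):
        # Complementary filter: trust the gyro short-term (smooth but it
        # drifts) and the accelerometer long-term (noisy but drift-free).
        gyro_pitch = pitch_prev + gyro_rate * dt       # integrate angular rate
        accel_pitch = math.atan2(accel_y, accel_z)     # gravity-based estimate
        return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch

    # Toy usage: a stationary device (gravity along z) with slight gyro drift.
    pitch = 0.0
    for _ in range(100):
        pitch = fuse_pitch(pitch, gyro_rate=0.001, accel_y=0.0, accel_z=9.81, dt=0.01)
    print(f"estimated pitch: {pitch:.4f} rad")

On a real sensor hub, this kind of per-sample filtering would run alongside much heavier computer vision and neural network workloads, which is why scalable parallel architectures matter here.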

UPCOMING INDUSTRY EVENTS

Embedded Vision Summit: May 16-19, 2022, Santa Clara, California

More Events

FEATURED NEWS

Flex Logix Announces Production Availability of InferX X1M Boards for Edge AI Vision Systems

FRAMOS Introduces the Intel RealSense Depth Camera D405

Immervision Launches a Low-light Navigation Camera Module for Unmanned Aerial Vehicles

Edge Impulse Releases the FOMO (Faster Objects, More Objects) Algorithm

NVIDIA Announces Availability of the Jetson AGX Orin Developer Kit to Advance Robotics and Edge AI

More News

WHO’S HIRING

Perceive (Machine Learning, Software, Embedded Software)
Do you love finding the most elegant solution to a problem? Perceive is making neural networks and machine learning work better at the edge, so that everyday devices work better in our homes, businesses, and the world around us. Join us!

EMBEDDED VISION SUMMIT PARTNER SHOWCASE

Hackster.io
Hackster, an Avnet community, is the world’s largest developer community for learning, programming, and building hardware, with 1.9M+ members and 30K+ open source projects.

 

Vision Systems Design
Vision Systems Design is the machine vision and imaging resource for engineers and integrators worldwide. Receive unique, unbiased and in-depth technical information about the design of machine vision and imaging systems for demanding applications in your inbox today.

EMBEDDED VISION SUMMIT SPONSOR SHOWCASE

Attend the Embedded Vision Summit to meet these and other leading computer vision and edge AI technology suppliers!

Flex Logix Technologies
Flex Logix Technologies offers the InferX family of inference accelerators for use in edge AI and vision applications. Based on dynamically reconfigurable tensor processor technology, InferX-based products offer high performance, low power and efficient execution of complex deep learning models in real time at the edge.

 

Intel
Intel is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore’s Law, Intel continuously works to advance the design and manufacturing of semiconductors to help address its customers’ greatest challenges.

 

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411