Edge AI and Vision Insights: July 20, 2022 Edition

LETTER FROM THE EDITOR
Dear Colleague,

On Tuesday, August 30 at 9 am PT, Edge Impulse will deliver the free webinar “Edge Impulse’s FOMO Technology and Sony’s Computer Vision Platform: A Compelling Combination” in partnership with the Edge AI and Vision Alliance. Edge Impulse’s FOMO (Faster Objects, More Objects), introduced earlier this year, is a brand-new approach to running object detection models on resource-constrained devices. This ground-breaking algorithm brings real-time object detection, tracking and counting to microcontrollers, such as Sony’s Spresense product line, for the first time. Sony’s latest multicore Spresense microcontrollers, combined with the company’s high-resolution image sensor and camera portfolios and its global LTE connectivity capabilities, create robust computer vision hardware platforms.
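As a rough taste of the idea (not Edge Impulse’s actual SDK, which the presenters will demonstrate), here is a minimal sketch of counting objects from a FOMO-style centroid grid using the standard TensorFlow Lite interpreter; the model file name, input handling, output layout and threshold are illustrative assumptions.

```python
# Minimal, illustrative sketch of counting objects from a FOMO-style centroid
# grid with the standard TensorFlow Lite interpreter. The model file name,
# input shape and output layout are assumptions for illustration only.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="fomo_model.tflite")  # hypothetical file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def count_objects(frame, threshold=0.5):
    """Count activated centroid cells per class for one preprocessed frame.

    `frame` is assumed to already match the model's expected HxWxC input.
    """
    x = np.expand_dims(frame, 0).astype(inp["dtype"])
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    # Assumed output: (1, gridH, gridW, num_classes) per-cell probabilities,
    # where each cell above threshold marks one detected object centroid.
    grid = interpreter.get_tensor(out["index"])[0]
    return {c: int(np.sum(grid[..., c] > threshold)) for c in range(grid.shape[-1])}
```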

In this webinar, you’ll learn how Edge Impulse’s software expertise and products unlock this hardware potential to deliver an optimized total solution for agriculture technology, industrial IoT, smart cities, remote monitoring and other application opportunities. The webinar will be presented by Jenny Plunkett, Senior Developer Relations Engineer at Edge Impulse, and Armaghan Ebrahimi, Partner Solutions Engineer at Sony Electronics Professional Solutions Americas. Plunkett and Ebrahimi will introduce their respective companies’ technologies and products and explain how they complement each other in delivering enhanced edge machine learning, computer vision and IoT capabilities. The webinar will include demonstrations of the concepts discussed, showing how to bring to life applications that require sensor analysis, machine learning, image processing and data filtering. For more information and to register, please see the event page.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

FUNDAMENTALS TUTORIALS

Modern Machine Vision from Basics to Advanced Deep Learning – Deep Netts
In this 2021 Embedded Vision Summit talk, Zoran Sevarac, Associate Professor at the University of Belgrade and Co-founder and CEO of Deep Netts, introduces the fundamentals of deep learning for image understanding. He begins by explaining the basics of convolutional neural networks (CNNs) and showing how CNNs are used to perform image classification and object detection. He then provides an overview of the recent evolution of CNN topologies for object detection, illustrates typical use cases for CNN-based image classification and object detection, and offers a roadmap for getting started with deep learning for image understanding.
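For readers who want to experiment with the ideas Sevarac covers, here is a minimal sketch of a small CNN image classifier in Keras; the input size, layer widths and ten-class output are illustrative assumptions, not a model from the talk.

```python
# Minimal sketch of a small CNN image classifier (illustrative only; the input
# size, layer widths and 10-class output are assumptions, not from the talk).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 3)),            # small RGB input
    tf.keras.layers.Conv2D(16, 3, activation="relu"),    # learn local features
    tf.keras.layers.MaxPooling2D(),                       # downsample
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),             # pool to a feature vector
    tf.keras.layers.Dense(10, activation="softmax"),      # class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```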

Introduction to Simultaneous Localization and Mapping (SLAM) – Gareth Cross
This presentation from the 2021 Embedded Vision Summit provides an introduction to the fundamentals of SLAM. Independent game developer (and former technical lead of state estimation at Skydio) Gareth Cross aims to provide foundational knowledge, and viewers are not expected to have any prerequisite experience in the field. The talk consists of an introduction to the concept of SLAM, as well as practical design considerations in formulating SLAM problems. Visual inertial odometry is introduced as a motivating example of SLAM, and Cross explains how this problem is structured and solved.
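To illustrate one building block Cross describes, the motion-model ("prediction") side of SLAM, here is a minimal 2-D dead-reckoning sketch; the pose representation and odometry increments are assumptions for illustration, and a real SLAM system would also fuse landmark or IMU measurements and track uncertainty.

```python
# Minimal sketch of 2-D dead reckoning, the prediction half of a SLAM or
# visual-inertial odometry pipeline (illustrative; a real system also applies
# corrections from landmark/IMU measurements and estimates uncertainty).
import math

def integrate_odometry(pose, delta):
    """Compose a pose (x, y, heading) with a body-frame increment (dx, dy, dtheta)."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

pose = (0.0, 0.0, 0.0)
for step in [(1.0, 0.0, 0.0), (1.0, 0.0, math.pi / 2), (1.0, 0.0, 0.0)]:
    pose = integrate_odometry(pose, step)
print(pose)  # drifts over time without the correction step SLAM provides
```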

IDENTIFYING AND EVALUATING BUSINESS OPPORTUNITIES

The Five Rights of an Edge AI Computer Vision System: Right Data, Right Time, Right Place, Right Decision, Right Action – ADLINK Technology
Solutions builders and business decision-makers designing edge AI computer vision systems should focus on five key factors to ensure outcomes that deliver ROI. The Five Rights of an edge AI computer vision system are streaming the right data, at the right time, to the right place, for the right decision, to drive the right action. The best place to start is the fifth and final right, the right action: defining exactly what outcome you want your system to achieve. What business problem does it solve? Once you have identified this, work backward from there, embracing both the benefits and the challenges of AI at the edge. In this 2021 Embedded Vision Summit talk, Toby McClean, Vice President of AIoT Technology and Innovation at ADLINK Technology, explains these key concepts and illustrates them via real-world use cases.

Computer Vision for the Built Environment – Nomad Go
Facilities and operations managers of buildings, college campuses, retail and foodservice establishments all struggle to answer one fundamental question: “What are people doing in our spaces?” Computer vision – specifically edge computer vision – not only captures what is happening in a physical space but also unlocks actionable data that allows building managers and owners to greatly improve energy efficiency, sustainability and overall operations. In this presentation from the 2021 Embedded Vision Summit, you’ll learn about the challenges in understanding the built environment, the methods currently being used to address them, and how edge computer vision is being applied to solve these challenges by providing real-time, actionable data at scale. David Greschler, CEO and co-founder of Nomad Go, presents real-world case studies of deployed edge computer vision solutions, showing how they are helping save energy costs, reduce greenhouse gases and improve operations such as cleaning and space planning.

UPCOMING INDUSTRY EVENTS

Accelerating TensorFlow Models on Intel Compute Devices Using Only 2 Lines of Code – Intel Webinar: August 25, 2022, 9:00 am PT

Edge Impulse’s FOMO Technology and Sony’s Computer Vision Platform: A Compelling Combination – Edge Impulse Webinar: August 30, 2022, 9:00 am PT

More Events

FEATURED NEWS

EdgeCortix Collaborates with Renesas to Deliver Enhanced Feature-Rich Compiler for the Renesas DRP-AI AI-Accelerator

STMicroelectronics Reveals FlightSense Multi-zone ToF Sensor for Gesture Recognition, Intruder Alert, and Human Presence Detection In Front of PC

Alliance Member Companies Deci and Opteran Technologies Have Both Made Recent Funding Announcements

Teledyne FLIR Object Detection and Tracking Software Accelerates Thermal Camera Integration for ADAS and Autonomous Vehicles

Flex Logix and CEVA Announce First Working Silicon of a DSP with Embedded FPGA to Allow a Flexible/Changeable ISA

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Edge Impulse EON Tuner (Best Edge AI Developer Tool)
Edge Impulse’s EON Tuner is the 2022 Edge AI and Vision Product of the Year Award winner in the Edge AI Developer Tools category. The EON Tuner helps you find and select the best edge machine learning model for your application within the constraints of your target device. While existing “AutoML” tools focus only on machine learning, the EON Tuner performs end-to-end optimization, from the digital signal processing (DSP) algorithm to the machine learning model, helping developers find the ideal tradeoff between these two types of processing blocks to achieve optimal performance for their computer vision application within the latency and memory constraints of their target edge device. The EON Tuner is designed to quickly help developers discover preprocessing algorithms and neural network model architectures tailored to their use case and dataset. It eliminates the need for manual selection of processing blocks and parameters to obtain the best model accuracy, reducing the technical knowledge users need and decreasing the total time to get from data collection to a model that runs optimally on an edge device in the field.
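As a rough illustration of the kind of constraint-aware search such a tool automates, the following sketch picks the most accurate DSP-plus-model combination that fits a latency and RAM budget; the candidate configurations and figures are made up for illustration and are not EON Tuner output.

```python
# Minimal sketch of constraint-aware model selection, the kind of search the
# EON Tuner automates end to end (the candidate configurations, accuracy,
# latency and RAM figures below are invented purely for illustration).
candidates = [
    # (DSP block, model architecture, accuracy, latency_ms, ram_kb)
    ("MFCC",        "1D-CNN small", 0.89,  45, 110),
    ("MFE",         "1D-CNN large", 0.93, 120, 260),
    ("spectrogram", "2D-CNN small", 0.91,  80, 180),
]

def best_within_budget(candidates, max_latency_ms, max_ram_kb):
    """Pick the most accurate DSP + model combination that fits the device budget."""
    feasible = [c for c in candidates
                if c[3] <= max_latency_ms and c[4] <= max_ram_kb]
    return max(feasible, key=lambda c: c[2]) if feasible else None

print(best_within_budget(candidates, max_latency_ms=100, max_ram_kb=200))
# -> ('spectrogram', '2D-CNN small', 0.91, 80, 180)
```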

Please see here for more information on Edge Impulse’s EON Tuner. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.



Contact

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596
Phone: +1 (925) 954-1411