Edge AI and Vision Insights: November 18, 2020 Edition

LETTER FROM THE EDITOR
Dear Colleague,

On Tuesday, February 16, 2021 at 9 am PT, Vision Components will deliver the free webinar “Adding Embedded Cameras to Your Next Industrial Product Design,” in partnership with the Edge AI and Vision Alliance. Powerful embedded vision creates new possibilities and added value for many industrial products. In this presentation, Vision Components will demonstrate multiple ways to integrate camera technology in hardware designs. The company will also share its know-how and experience, from the invention of the first industrial-grade intelligent camera 25 years ago to state-of-the-art MIPI modules, board-level cameras and ready-to-use solutions. For more information and to register, please see the event page.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

DEEP LEARNING INFERENCE AT THE EDGE

Optimize, Deploy and Scale Edge AI and Video Analytics Applications
In this tutorial, you will learn how to use Amazon Web Services (AWS) and NVIDIA technologies to develop solutions for edge computing and video analytics. Edwin Weill, Ph.D., Enterprise Data Scientist at NVIDIA, and Ryan Vanderwerf, Partner Solutions Architect at Amazon, introduce the use of key NVIDIA SDKs, including TensorRT and DeepStream, for optimizing your deep learning models and deploying them in a video analytics stack. And they show you how to use AWS cloud services like SageMaker Neo and Greengrass to deploy and scale your solution.
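
To give a concrete flavor of the model-optimization step the tutorial covers, here is a minimal sketch of building a TensorRT engine from an ONNX model with the TensorRT Python API. The model path, FP16 choice and workspace size are illustrative assumptions, not details from the session, and the exact calls vary across TensorRT releases (this sketch follows the 2020-era TensorRT 7 interface).

```python
# Minimal sketch: build a TensorRT engine from an ONNX model.
# API details vary by TensorRT version; shown here in the TensorRT 7 style.
import tensorrt as trt

LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path="model.onnx", fp16=True):  # hypothetical model path
    builder = trt.Builder(LOGGER)
    # Explicit-batch network definition, as required by the ONNX parser
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("ONNX parse failed")
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30  # 1 GiB scratch space for tactic selection
    if fp16 and builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)  # enable mixed-precision kernels
    return builder.build_engine(network, config)

if __name__ == "__main__":
    engine = build_engine()
    with open("model.plan", "wb") as f:
        f.write(engine.serialize())  # serialized engine for deployment
```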

Cadence Tensilica Edge AI Processor IP Solutions for Broad Market Use Cases
Pulin Desai, Vision and AI Product Marketing Group Director at Cadence, presents the full range of Cadence Tensilica edge AI processing solutions in this talk. Desai shows how the Cadence Tensilica edge AI processors meet the needs of specific markets, including distributing processing across low-power, programmable DSP and AI accelerators. He also highlights key trends in DNN topologies, and makes the case that, due to rapid progress in neural network research and varying processing requirements, programmable solutions are essential. Finally, he outlines the other elements of the Cadence Tensilica edge AI processing solutions, including software tools and support for a wide range of software frameworks.

HARDWARE AND SOFTWARE DEVELOPMENT

Accelerating Time to an Image Processing Prototype
In this tutorial, you will learn how you can go from an initial concept/architecture of an embedded vision design to demonstrating application and algorithm feasibility using off-the-shelf prototyping systems from Avnet and Xilinx. The session, presented by Adam Taylor, Founder and Principal Consultant at Adiuvo Engineering and Training, along with Dan Rozwood, Engineer, and Kevin Keryk, Software Engineering Manager and Machine Learning Specialist, both of Avnet, demonstrates key steps used to create a prototype using the Ultra96-V2 and Xilinx solutions:

  • Creation of an image processing pipeline using Vivado
  • Testing the image processing pipeline on the Ultra96 with the Dual MIPI Board and MIPI cameras, running algorithms on the processor cores using an open-source Python-based framework and OpenCV (a minimal capture-and-process sketch follows this list)
  • Acceleration of the vision algorithm using Vitis and OpenCL to create a final application
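
To illustrate what the second step might look like in practice, here is a minimal sketch of grabbing frames from a camera and running a simple OpenCV pipeline on the processor cores. The V4L2 device index, resolution and Canny edge detector are illustrative assumptions standing in for the session's actual framework and algorithms.

```python
# Minimal sketch: capture frames and run a simple pipeline on the APU with OpenCV.
# Assumes the MIPI camera is exposed as a V4L2 device (e.g. /dev/video0).
import cv2

cap = cv2.VideoCapture(0, cv2.CAP_V4L2)  # hypothetical device index
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # stand-in for the vision algorithm under test
    cv2.imshow("edges", edges)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```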

Acceleration of Deep Learning Using OpenVINO: 3D Seismic Case Study
The use of deep learning for automatic seismic data interpretation is gaining the attention of many researchers across the oil and gas industry. The integration of high-performance computing (HPC) AI workflows in seismic data interpretation brings the challenge of moving and processing large amounts of data from HPC to AI computing solutions and vice-versa. In this presentation, Manas Pathak, Global AI Lead for Oil and Gas at Intel, illustrates this challenge via a case study using a public deep learning model for salt identification applied on a 3D seismic survey from the F3 Dutch block in the North Sea. He presents a workflow, based on the Intel Distribution of the OpenVINO toolkit, to address this challenge and perform accelerated AI on seismic data.
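
For readers new to the toolkit, the snippet below is a minimal sketch of running inference with the OpenVINO Inference Engine Python API (the 2020-era IECore interface). The IR file names and the random input patch are placeholders for illustration, not artifacts from the seismic case study.

```python
# Minimal sketch: OpenVINO Inference Engine (2020-era IECore API).
# The IR file names and input data are placeholders, not case-study artifacts.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="salt_model.xml", weights="salt_model.bin")  # hypothetical IR files
input_blob = next(iter(net.input_info))
output_blob = next(iter(net.outputs))

exec_net = ie.load_network(network=net, device_name="CPU")

# Placeholder input shaped like one patch expected by the network
n, c, h, w = net.input_info[input_blob].input_data.shape
patch = np.random.rand(n, c, h, w).astype(np.float32)

result = exec_net.infer(inputs={input_blob: patch})
print(result[output_blob].shape)  # e.g. per-pixel salt probability map
```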

UPCOMING INDUSTRY EVENTS

Yole Développement Webinar – Sensor Fusion for Autonomous Vehicles: December 15, 2020, 9:00 am PT

BrainChip Webinar – Power-efficient Edge AI Applications through Neuromorphic Processing: December 17, 2020, 9:00 am PT

Vision Components Webinar – Adding Embedded Cameras to Your Next Industrial Product Design: February 16, 2021, 9:00 am PT

More Events

FEATURED NEWS

Imagination Launches the Multi-core IMG Series4 Neural Network Accelerator for ADAS and Autonomous Driving

Intel Executes Toward its XPU Vision with oneAPI and Iris Xe MAX Graphics

AMD Unveils AMD Ryzen Embedded V2000 Processors with Enhanced Performance and Power Efficiency

MediaTek Introduces the i350 Edge AI Platform Designed for Voice and Vision Processing Applications

Ambarella’s CV28M SoC With CVflow Enables New Categories of Intelligent Sensing Devices

More News

VISION PRODUCT OF THE YEAR WINNER SHOWCASE

NVIDIA Jetson Nano (Best AI Processor)
NVIDIA’s Jetson Nano is the 2020 Vision Product of the Year Award Winner in the AI Processors category. Jetson Nano delivers the power of modern AI in the smallest supercomputer for embedded and IoT. Jetson Nano is a small form factor, power-efficient, low-cost and production-ready System on Module (SOM) and Developer Kit that opens up AI to educators, makers and embedded developers who previously lacked access to it. Jetson Nano delivers up to 472 GFLOPS of accelerated computing, can run many modern neural networks in parallel, and delivers the performance to process data from multiple high-resolution sensors, including cameras, LIDAR, IMU, ToF and more, to sense, process and act in an AI system, while consuming as little as 5 W.

Please see here for more information on NVIDIA and its Jetson Nano. The Vision Product of the Year Awards are open to Member companies of the Edge AI and Vision Alliance and celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of computer vision products. Winning a Vision Product of the Year award recognizes leadership in computer vision as evaluated by independent industry experts.

EMBEDDED VISION SUMMIT MEDIA PARTNER SHOWCASE

EE Times
EE Times—part of the AspenCore collection—is a respected news website that cuts through the industry noise by delivering original reporting, trusted analysis, and a diversity of industry voices to design engineers and management executives. With an expanding base of expert contributors, guided by award-winning editors, community leaders and journalists, EE Times’ singular mission is to serve as your guide to what’s really important in the global electronics industry. Register here to get your free eNewsletters.

Vision Spectra (Photonics)
Vision Spectra magazine covers the latest innovations that are transforming today’s manufacturing landscape: neural networking, 3D sensing, embedded vision and more. Each issue includes rich content on a range of topics, with an in-depth look at how vision technologies are transforming industries from food and beverage to automotive and beyond. Information is presented with integrators, designers, and end-users in mind. Subscribe for free today!



Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411