LETTER FROM THE EDITOR
Dear Colleague,
You spoke and we listened: the 2021 Embedded Vision Summit, the premier conference for innovators adding computer vision and visual AI to products, is now a four-day event, taking place online May 25-28! Expanding the program enables us to offer 80+ highly relevant, top-quality sessions, the type of content that’s been earning the Summit 96%+ approval ratings from attendees for 10 years. Other recent program enhancements include:
- Offering both live online and on-demand-only sessions during the event, opening up lots of flexibility to fit your schedule, and
- Scaling up demos and including them as part of the main agenda, so you don’t have to choose between seeing live sessions and demos
Learn more about the keynote from UC Berkeley Professor Pieter Abbeel, the general session presentations from Edge Impulse’s Zach Shelby and Qualcomm’s Ziad Asghar, and all of the other exciting presentations and activities at the Summit, and then register today with promo code EARLYBIRDNL21 to receive your 15%-off Early Bird Discount!
Brian Dipert
Editor-in-Chief, Edge AI and Vision Alliance
DEEP LEARNING MEDICAL OPPORTUNITIES
AI-based Face Mask Detection and Analytics
In this video, BDTI and its partners, Tryolabs S.A. and Jabil Optics, demonstrate MaskCam, an open-source smart camera prototype reference design based on the NVIDIA Jetson Nano, capable of estimating the number and percentage of people wearing face masks in its field of view. MaskCam was developed as part of an independent, hands-on evaluation of the Jetson Nano for building real-world edge AI/vision applications. You can read the detailed report at https://bdti.com/maskcam, and MaskCam’s source code is available under the MIT License at https://github.com/bdtinc/maskcam. If you have a Jetson Nano Developer Kit and a USB web camera, you can get the MaskCam software running on your system with two simple commands described in the README.
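For a feel for the kind of statistic MaskCam produces, here is a minimal Python sketch — our own illustration, not MaskCam’s actual code — that tallies per-frame “mask”/“no_mask” detections into the counts and percentage the device reports:

```python
# Hypothetical sketch (not MaskCam's implementation): accumulate
# per-frame detector labels into mask-wearing statistics.
from collections import Counter

def mask_statistics(frame_detections):
    """frame_detections: iterable of lists like ["mask", "no_mask", ...],
    one list per processed video frame."""
    totals = Counter()
    for labels in frame_detections:
        totals.update(labels)
    people = totals["mask"] + totals["no_mask"]
    pct = 100.0 * totals["mask"] / people if people else 0.0
    return totals["mask"], totals["no_mask"], pct

# Example: three frames' worth of detections
frames = [["mask", "no_mask"], ["mask"], ["mask", "mask", "no_mask"]]
masked, unmasked, pct = mask_statistics(frames)
print(f"{masked} masked, {unmasked} unmasked ({pct:.1f}% masked)")
```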
Enabling Embedded AI for Healthcare
Wearable electronics have started to become part of our daily lives, in the form of watches, wristbands, fitness trackers and the like. Advances in sensor design and in AI processing have made these little helpers more capable. The next level could elevate these devices from casual consumer conveniences to true medical monitoring and support, and could include devices that we not only wear, but also implant in our bodies. This presentation from Shang-Hung Lin, Vice President of Machine Learning and Neural Processor Product Development at VeriSilicon, discusses the challenges that must be overcome to reach this next stage and considers approaches and technical solutions, drawing on VeriSilicon’s experience designing chips for a wide range of cost- and power-constrained applications.
DEVELOPMENT AND DEPLOYMENT TOOLSETS
Deploying Deep Learning Applications on FPGAs with MATLAB
Designing deep learning networks for embedded devices is challenging because of processing and memory resource constraints. FPGAs present an even greater challenge due to the complexity of programming in Verilog or VHDL and the hardware expertise needed for prototyping on an FPGA. This talk from Jack Erickson, Principal Product Marketing Manager at MathWorks, illustrates a workflow that facilitates the design and deployment of these applications to FPGAs using pre-built bitstreams, without requiring much hardware expertise. Starting with a model trained either in MATLAB or in a framework of your choice, Erickson demonstrates the workflow to prototype and deploy the trained network from MATLAB to an FPGA. He illustrates this flow using a deep learning network for image recognition, deploying it to a Xilinx MPSoC board for inference using APIs from MATLAB. This demonstrates how deep learning algorithm engineers can quickly explore different networks and their performance on an FPGA, all from MATLAB.
Parallelizing Machine Learning Applications with Kubernetes
In this talk, Rajy Rawther, PMTS Software Architect in the Machine Learning Software Engineering group at AMD, presents techniques for obtaining the best inference performance when deploying machine learning applications. With the increasing use of AI in applications ranging from image classification/object detection to natural language processing, it is vital to deploy AI applications in ways that are scalable and efficient. Much work has focused on how to distribute DNN training for parallel execution using machine learning frameworks (TensorFlow, MXNet, PyTorch and others). There has been less work on scaling and deploying trained models on multi-processor systems. Rawther presents a case study analysis of scaling an image classification application using multiple Kubernetes pods. She explores the factors and bottlenecks affecting performance and examines techniques for building a scalable application pipeline.
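As a rough illustration of the scaling step — our sketch, not code from the talk — the official Kubernetes Python client can adjust a Deployment’s replica count so that more pods serve the inference workload. The Deployment name “classifier” and namespace “inference” below are hypothetical:

```python
# Hypothetical sketch: scale an inference Deployment to N replicas
# using the official Kubernetes Python client. The Deployment name
# "classifier" and namespace "inference" are illustrative only.
from kubernetes import client, config

def scale_inference_deployment(replicas: int,
                               name: str = "classifier",
                               namespace: str = "inference"):
    config.load_kube_config()  # reads credentials from ~/.kube/config
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

# e.g., fan the classifier out across four pods:
# scale_inference_deployment(4)
```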
UPCOMING INDUSTRY EVENTS
Deep Learning for Embedded Computer Vision: An Introduction – Edge AI and Vision Alliance Webinar: March 24, 2021, 9:00 am PT
Enabling Small Form Factor, Anti-tamper, High-reliability, Fanless Artificial Intelligence and Machine Learning – Microchip Technology Webinar: March 25, 2021, 9:00 am PT
Optimizing a Camera ISP to Automatically Improve Computer Vision Accuracy – Algolux Webinar: March 30, 2021, 9:00 am PT
More Events
FEATURED NEWS
Xilinx’s Cost-Optimized UltraScale+ Portfolio Expands into New Applications for Ultra-Compact, High-Performance Edge Compute
Eta Compute’s Low Power AI Vision Board Accelerates the Design, Test, and Deployment of Embedded Vision Solutions
MediaTek’s MT9638 4K Smart TV Chip Ushers in a New Era of AI-Enabled Interactive Multimedia Experiences
Hailo Collaborates with Leopard Imaging, Socionext, and AWS to Launch the EdgeTuring Video Analytics Platform
Qualcomm’s Snapdragon XR1 Smart Viewer Reference Design Advances the AR Industry
More News
VISION PRODUCT OF THE YEAR WINNER SHOWCASE
Morpho Semantic Filtering (Best AI Software or Algorithm)
Morpho’s Semantic Filtering is the 2020 Vision Product of the Year Award winner in the AI Software and Algorithms category. Semantic Filtering improves camera image quality by combining the best of AI-based segmentation and pixel-processing filters. In conventional imaging, computational photography algorithms are typically applied to the entire image, which can cause unwanted side effects such as loss of detail and texture, as well as the appearance of noise in certain areas. Morpho’s Semantic Filtering is trained to identify the semantic category of each pixel, allowing the most effective algorithm, at the most effective strength, to be applied to each category to achieve the best image quality for still-image capture.
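To make the idea concrete, here is a minimal Python sketch of per-class filtering in the same spirit — our illustration, not Morpho’s algorithm — in which the class IDs and filter choices are assumed for the example:

```python
# Illustrative per-class filtering (not Morpho's actual algorithm):
# run each filter over the image, then keep its output only where
# the segmentation map assigns the matching class.
import numpy as np
from scipy.ndimage import uniform_filter

SKY, SKIN, TEXT = 0, 1, 2  # hypothetical class IDs

def denoise(img):
    # smooth flat regions such as sky, where noise is most visible
    return uniform_filter(img, size=(5, 5, 1))

def identity(img):
    # leave detail-critical regions (e.g., text) untouched
    return img

FILTERS = {SKY: denoise, SKIN: denoise, TEXT: identity}

def semantic_filter(image, labels):
    """image: HxWx3 float array; labels: HxW int array of class IDs."""
    out = image.copy()
    for cls, f in FILTERS.items():
        mask = labels == cls
        out[mask] = f(image)[mask]
    return out

# e.g.: img = np.random.rand(64, 64, 3); seg = np.zeros((64, 64), int)
# filtered = semantic_filter(img, seg)
```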
Please see here for more information on Morpho and its Semantic Filtering. The Edge AI and Vision Product of the Year Awards (an expansion of previous years’ Vision Product of the Year Awards) celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes your leadership in edge AI and computer vision as evaluated by independent industry experts. The Edge AI and Vision Alliance is now accepting applications for the 2021 Awards competition. The submission deadline has been extended to this Friday, March 26; for more information and to enter, please see the program page.
EMBEDDED VISION SUMMIT MEDIA PARTNER SHOWCASE
AspenCore
Everyone wants safety on the road. Can advancements in sensing and decision-making technologies help drivers, passengers and vulnerable road users? Advanced driver-assistance systems (ADAS) and autonomous vehicles (AVs) are still works in progress that rely on constantly evolving technologies. The newly published 152-page book “Sensors in Automotive”, with contributions from leading thinkers in the automotive industry, charts the industry’s progress, identifies the remaining challenges, and examines with an unbiased eye what it will take to overcome them.