LETTER FROM THE EDITOR
Dear Colleague,
On Thursday, August 25 at 9 am PT, Intel will deliver the free webinar “Accelerating TensorFlow Models on Intel Compute Devices Using Only 2 Lines of Code” in partnership with the Edge AI and Vision Alliance. Are you using Google’s TensorFlow framework to develop your deep learning models? And are you doing inference processing on those models using Intel compute devices: CPUs, GPUs, VPUs and/or HDDL (High Density Deep Learning) processing solutions? If the answer to both questions is “yes,” then this hands-on tutorial is for you: it shows how to integrate TensorFlow with the Intel Distribution of OpenVINO toolkit for rapid development while achieving accurate, high-performance inference results!
TensorFlow developers can now take advantage of OpenVINO toolkit optimizations in TensorFlow inference applications across a wide range of Intel compute devices by adding just two lines of code. In addition to introducing OpenVINO and its capabilities, the webinar will include demonstrations of the concepts discussed via a code walk-through of a sample application. It will be presented by Kumar Vishwesh, Technical Product Manager, and Ragesh Hajela, Senior AI Engineer, both of Intel. A question-and-answer session will follow the presentation. For more information and to register, please see the event page.
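For readers curious what those two lines look like in practice, Intel’s published openvino_tensorflow add-on package follows the pattern sketched below. This is a minimal, illustrative example only (the MobileNetV2 model and random input are placeholders, not material from the webinar):

```python
import numpy as np
import tensorflow as tf

# The two added lines: import the add-on and select an Intel backend.
import openvino_tensorflow
openvino_tensorflow.set_backend("CPU")  # "GPU", "MYRIAD", or "VAD-M" (HDDL) are also supported

# Everything below is ordinary TensorFlow inference code, unchanged.
model = tf.keras.applications.MobileNetV2(weights="imagenet")
batch = np.random.rand(1, 224, 224, 3).astype(np.float32)  # stand-in for a real image
print(model.predict(batch).shape)  # (1, 1000) class scores
```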
Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance
COMPENSATING FOR REAL-WORLD CONSTRAINTS AND CHANGES
Maintaining DNN Accuracy When the Real World is Changing
We commonly train deep neural networks (DNNs) on existing data and then use the trained model to make predictions on new data. Once trained, these predictive models approximate a static mapping from inputs to predicted outputs. However, in many applications the data changes over time, and the predictive performance of the deployed model degrades as a result. In this 2021 Embedded Vision Summit talk, Erik Chelstad, CTO and co-founder of Observa, introduces the problem of concept drift in deployed DNNs. He discusses the types of concept drift that occur in the real world, from small variances within the predicted classes all the way to the introduction of a new, previously unseen class. He also discusses approaches to recognizing these changes and identifying the point at which it becomes necessary to update the training dataset and retrain the model. The talk concludes with a real-world case study.
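As a concrete taste of one common drift-detection heuristic (a generic illustration, not necessarily the approach covered in the talk): monitor the distribution of the model’s prediction confidences in production and raise an alarm when it diverges from a reference window captured at deployment time, for example with a two-sample Kolmogorov-Smirnov test.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_alarm(reference_conf, live_conf, alpha=0.01):
    """Flag likely concept drift when live prediction confidences
    diverge from a reference window captured at deployment time."""
    _, p_value = ks_2samp(reference_conf, live_conf)
    return p_value < alpha  # True -> distributions differ; consider retraining

# Toy usage: confidences at deployment vs. after the world has shifted.
rng = np.random.default_rng(0)
reference = rng.beta(8, 2, size=5000)   # mostly high-confidence predictions
drifted = rng.beta(4, 3, size=5000)     # confidence sags on changed data
print(drift_alarm(reference, drifted))  # prints True
```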
Developing Edge Computer Vision Solutions for Applications with Extreme Limitations on Real-World Testing
Deploying computer vision-based solutions in very remote locations (such as those often found in mining and oil drilling) introduces unique challenges. For example, it is typically impractical to test solutions in the real operating environment, or to replicate that environment for testing during development. Further complicating matters, these remote locations typically lack network connectivity, making it impossible to monitor deployed systems. In this 2021 Embedded Vision Summit talk, Alexander Belugin, Computer Vision Product Manager at Nedra, presents specific practical techniques for overcoming these challenges. He covers 3D modeling, the use of GANs for data generation, test setups and specialized software techniques. He also discusses methods for rapidly adapting software to resolve issues after systems are deployed.
PEOPLE DETECTION AND TRACKING
Case Study: Facial Detection and Recognition for Always-On Applications
Although there are many applications for low-power facial recognition in edge devices, perhaps the most challenging to design are always-on, battery-powered systems that use facial recognition for access control. Laptop, tablet and cellphone users expect hands-free, instantaneous facial recognition. This means the electronics must be always on, constantly watching to detect a face and then ready to match it against an enrolled data set to recognize it. This 2021 Embedded Vision Summit presentation from Jamie Campbell, Product Marketing Manager for Embedded Vision IP at Synopsys, describes the challenges of moving traditional facial detection neural networks to the edge. It explores a case study of a face recognition access control application requiring continuous operation and extreme energy efficiency. Finally, it describes how the combination of Synopsys DesignWare ARC EM and EV processors provides low-power, efficient DSP and CNN acceleration for this application.
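For readers unfamiliar with this class of design, the usual structure is a two-stage cascade: a tiny, always-on detector gates a much heavier recognition network, so the latter draws power only when a face is actually present. The sketch below is purely illustrative (the stub detector, recognizer and enrollment database are hypothetical placeholders, not Synopsys APIs):

```python
import numpy as np

rng = np.random.default_rng(2)

def cheap_detector(frame):
    """Tiny always-on stage: returns a face box or None (stub logic)."""
    return (8, 8, 40, 40) if frame.mean() > 0.5 else None

def heavy_recognizer(frame, box):
    """Costly stage, run only when the detector fires (stub embedding)."""
    return rng.normal(size=64)

enrolled = {"alice": rng.normal(size=64)}  # hypothetical enrollment database

for _ in range(10):                  # stand-in for the continuous capture loop
    frame = rng.random((64, 64))     # stand-in for a low-resolution camera frame
    box = cheap_detector(frame)
    if box is None:
        continue                     # recognizer stays powered down
    emb = heavy_recognizer(frame, box)
    name = min(enrolled, key=lambda k: np.linalg.norm(emb - enrolled[k]))
    print("candidate match:", name)
```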
Person Re-Identification and Tracking at the Edge: Challenges and Techniques
Numerous video analytics applications require understanding how people move through a space, including recognizing when the same person has left and re-entered a camera’s view, or has passed from the view of one camera to the view of another. This capability is referred to as person re-identification and tracking. It is an essential technique for applications such as security surveillance, health and safety monitoring in healthcare and industrial facilities, intelligent transportation systems and smart cities. It can also assist in gathering business intelligence, such as monitoring customer behavior in shopping environments. Person re-identification is challenging, however. In this 2021 Embedded Vision Summit talk, Morteza Biglari-Abhari, Senior Lecturer at the University of Auckland, discusses the key challenges and current approaches for person re-identification and tracking, as well as his initial work on multi-camera systems and techniques to improve accuracy, especially by fusing appearance and spatio-temporal models. He also briefly discusses privacy-preserving techniques, which are critical for some applications, and the challenges of real-time processing at the edge.
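As background for readers new to re-identification (a generic illustration, not the speaker’s method): appearance-based approaches typically embed each detected person into a feature vector and match detections across views by embedding similarity, as in this minimal sketch where the embeddings stand in for a re-ID network’s output:

```python
import numpy as np

def match_identity(query_emb, gallery_embs, gallery_ids, threshold=0.7):
    """Match a new detection's embedding against a gallery of known
    identities using cosine similarity; return an ID or None."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity to each gallery entry
    best = int(np.argmax(sims))
    return gallery_ids[best] if sims[best] >= threshold else None

# Toy usage with random 128-D embeddings.
rng = np.random.default_rng(1)
gallery = rng.normal(size=(10, 128))
ids = [f"person_{i}" for i in range(10)]
query = gallery[3] + 0.05 * rng.normal(size=128)  # noisy re-observation of person_3
print(match_identity(query, gallery, ids))        # -> "person_3"
```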
FEATURED NEWS
Intel to Acquire Fellow Alliance Member Company Codeplay Software
STMicroelectronics Releases Its First Automotive IMU with Embedded Machine Learning
Vision Components’ VC Stereo Camera Targets 3D and Dual-camera Embedded Vision Applications
e-con Systems Launches a Time of Flight Camera for Accurate 3D Depth Measurement
Microchip Technology’s 1 GHz SAMA7G54 Single-Core MPU Includes a MIPI CSI-2 Camera Interface and Advanced Audio Features
More News
EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE
Blaize Pathfinder P1600 Embedded System on Module (Best Edge AI Processor)
Blaize’s Pathfinder P1600 Embedded System on Module (SoM) is the 2022 Edge AI and Vision Product of the Year Award winner in the Edge AI Processors category. Based on the Blaize Graph Streaming Processor (GSP) architecture, the Pathfinder P1600 delivers high processing performance at low power with high system utilization, making it ideal for AI inference workloads in edge applications. Smaller than a credit card, the P1600 delivers 16 TOPS at 7 W and, compared with legacy GPUs, offers 50x lower memory bandwidth requirements, 10x lower latency and 30x better efficiency, opening doors to previously unfeasible AI inference solutions for edge vision use cases, including in-camera and in-machine processing at the sensor edge as well as network edge equipment. The Pathfinder platform is 100% programmable via the Blaize Picasso SDK, a comprehensive software environment that accelerates AI development cycles, is based on open standards (OpenCL and OpenVX), and supports ML frameworks such as TensorFlow, PyTorch, Caffe2 and ONNX. The Picasso SDK permits building complete end-to-end applications with high levels of transparency, flexibility and portability.
Please see here for more information on Blaize’s Pathfinder P1600 Embedded SoM. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.