Software for Embedded Vision
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/04/Hou_2024_GeneralSessionSpeakerCard_Hou-300x158.jpg)
2024 Embedded Vision Summit Showcase: Qualcomm General Session Presentation
Check out the general session presentation “What’s Next in On-Device Generative AI” at the upcoming 2024 Embedded Vision Summit, taking place May 21-23 in Santa Clara, California! The generative AI era has begun! Large multimodal models are bringing the power of language understanding to machine perception, and transformer models are expanding to allow machines to
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/04/KapteinM_GeneralSession_1-1-300x158.jpg)
2024 Embedded Vision Summit Showcase: Network Optix General Session Presentation
Check out the general session presentation “Scaling Vision-Based Edge AI Solutions: From Prototype to Global Deployment” at the upcoming 2024 Embedded Vision Summit, taking place May 21-23 in Santa Clara, California! The Embedded Vision Summit brings together innovators in silicon, devices, software and applications and empowers them to bring computer vision and perceptual AI into
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/04/adressing-challenges-in-amr-design-768x432-1-300x169.jpg)
Navigating the Future: How Avnet is Addressing Challenges in AMR Design
This blog post was originally published at Avnet’s website. It is reprinted here with the permission of Avnet. Autonomous mobile robots (AMRs) are revolutionizing industries such as manufacturing, logistics, agriculture, and healthcare by performing tasks that are too dangerous, tedious, or costly for humans. AMRs can navigate complex and dynamic environments, communicate with other devices
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/04/2024_Summit_Logo_banner-date_Final-1_1200-600-300x150.jpg)
Embedded Vision Summit® Announces Full Conference Program for Edge AI and Computer Vision Innovators, May 21-23 in Santa Clara, California
The premier event for product creators incorporating computer vision and edge AI in products and applications SANTA CLARA, Calif., April 29, 2024 /PRNewswire/ — The Edge AI and Vision Alliance, a worldwide industry partnership, today announced the full program for the 2024 Embedded Vision Summit, taking place May 21-23 at the Santa Clara Convention
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/04/MicrosoftTeams-image-46--300x214.jpg)
On Founding CLIKA: The Founders’ Journey
This blog post was originally published at CLIKA’s website. It is reprinted here with the permission of CLIKA. CLIKA, a tinyAI startup, was founded based on the realization that the future of artificial intelligence (AI) would depend on how well and quickly businesses would be able to scale and productionize their AI. Ben Asaf was
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/04/potential-for-adas-scalability-300x169.jpg)
Unleashing the Potential for Assisted and Automated Driving Experiences Through Scalability
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Working within an ecosystem of innovators and suppliers is paramount to addressing the challenge of building a scalable ADAS solution. While the recent sentiment around fully autonomous vehicles is not overly positive, more and more vehicles on
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/04/foundation-models-nv-blog-1280x680-1-300x159.jpg)
The Building Blocks of AI: Decoding the Role and Significance of Foundation Models
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. These neural networks, trained on large volumes of data, power the applications driving the generative AI revolution. Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible,
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/04/Ceva_Blog_Post_Image_240228_Drive-300x190.jpg)
Oriented FAST and Rotated BRIEF (ORB) Feature Detection Speeds Up Visual SLAM
This blog post was originally published at Ceva’s website. It is reprinted here with the permission of Ceva. In the realm of smart edge devices, signal processing and AI inferencing are intertwined. Sensing can require intense computation to filter out the most significant data for inferencing. Algorithms for simultaneous localization and mapping (SLAM), a type
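To make the detector side of ORB concrete, here is a simplified NumPy sketch of the FAST-9 segment test that ORB builds on (an illustrative sketch, not Ceva’s or OpenCV’s implementation): a pixel counts as a corner when at least 9 contiguous pixels on a 16-pixel Bresenham circle are all brighter, or all darker, than the center by a threshold.

```python
import numpy as np

# Offsets of the 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def fast_corners(img, threshold=20, n_contig=9):
    """Return (x, y) pixels passing a simplified FAST-9 segment test."""
    h, w = img.shape
    corners = []
    for y in range(3, h - 3):
        for x in range(3, w - 3):
            center = int(img[y, x])
            ring = [int(img[y + dy, x + dx]) for dx, dy in CIRCLE]
            for sign in (1, -1):  # brighter arc, then darker arc
                hits = [sign * (p - center) > threshold for p in ring]
                # Longest contiguous run of hits on the wrapped ring.
                run = best = 0
                for hit in hits + hits:
                    run = run + 1 if hit else 0
                    best = max(best, run)
                if best >= n_contig:
                    corners.append((x, y))
                    break
    return corners
```

A full ORB pipeline (for example, `cv2.ORB_create` in OpenCV) adds an orientation estimate from an intensity centroid and rotation-aware BRIEF descriptors on top of a detector like this, which is what makes the features usable for SLAM matching.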
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/04/Dragonfly-warehouses-1-300x172.jpg)
Achieving a Zero-Incident Vision in Your Warehouse with Dragonfly
This blog post was originally published by Onit. It is reprinted here with the permission of Onit. At Onit, we’re revolutionizing the efficiency and safety standards in warehouse environments through edge AI and computer vision. Leveraging our state-of-the-art Dragonfly and RTLS (real-time locating system) applications, we address the complex challenges inherent in chaotic and labor-intensive
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/04/Snapdragon-Summit-Democratization-of-AI-Panel-300x169.jpg)
Democratizing AI: Top 5 Insights from Axios, Meta, Blackmagic Design, and Our Panel of Industry Titans
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. In a panel discussion at our annual Snapdragon Summit in the breathtaking setting of Maui, Hawaii, we had the privilege of engaging in a dynamic conversation with four esteemed experts about the democratization of artificial intelligence (AI).
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/04/nvidia-ai-decoded-week-2-nv-blog-1280x680-1-300x159.jpg)
AI Decoded: Demystifying Large Language Models, the Brains Behind Chatbots
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Explore what LLMs are, why they matter and how to use them. Editor’s note: This post is part of our AI Decoded series, which aims to demystify AI by making the technology more accessible, while showcasing new
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/04/A4-300x150.jpg)
Here’s Why the SDV Market Will Be Worth $700 Billion by 2034
The SDV and AI Cars market is set to be worth over US$700 billion by 2034, representing around 20% of the global car market, according to IDTechEx’s “Software-Defined Vehicles, Connected Cars, and AI in Cars 2024-2034” report. That sum can be sourced from several areas, such as monthly connectivity subscriptions, commission from in-vehicle payments, and
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/04/NE_fz3fXJr0-300x169.jpg)
Edge AI and Vision Alliance Conversation with GenAI Nerds on Generative AI At the Edge
Kerry Shih of GenAI Nerds interviews Jeff Bier, Founder of the Edge AI and Vision Alliance, and Phil Lapsley, the Alliance’s Vice President of Business Development, about the opportunities and trends for generative AI at the edge. Shih, Bier and Lapsley discuss topics such as: Where we are in the generative AI hype cycle What
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/04/53625438151_2382de2631_o-300x167.jpg)
Microchip Technology Acquires Neuronix AI Labs
Innovative technology enhances AI-enabled intelligent edge solutions and increases neural networking capabilities CHANDLER, Ariz., April 15, 2024 — Microchip Technology (Nasdaq: MCHP) has acquired Neuronix AI Labs to expand its capabilities for power-efficient, AI-enabled edge solutions deployed on field programmable gate arrays (FPGAs). Neuronix AI Labs provides neural network sparsity optimization technology that enables a
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/04/IMG-20240411-WA0003-768x781_cropped-300x220.jpg)
Visidon Wins SIA NPS Video Surveillance Advanced Imaging Technologies Award
April 11, 2024 – Las Vegas, Nev. – Visidon was recognized by the Security Industry Association (SIA) as an awardee at the 2024 SIA New Products and Solutions (NPS) Awards, the flagship awards program presented in partnership with ISC West recognizing innovative security products, services and solutions. In video surveillance advanced imaging technologies, Visidon was selected
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/04/Ceva_Blog_Post_Image_240125_Optimize_AI-600x454-1-300x227.jpg)
Partitioning Strategies to Optimize AI Inference for Multi-core Platforms
This blog post was originally published at Ceva’s website. It is reprinted here with the permission of Ceva. Not so long ago, AI inference at the edge was a novelty easily supported by a single NPU IP accelerator embedded in the edge device. Expectations have accelerated rapidly since then. Now we want embedded AI inference
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/04/0B-300x150.jpg)
AI Chips and ChatGPT: Exploring AI and Robotics
AI chips can empower the intelligence of robotics, with future potential for smarter and more independent cars and robots. Alongside the uses of ChatGPT and chatting with robots at home, the potential for this technology to enhance working environments and reinvent socializing is promising. Cars that can judge the difference between people and signposts
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/04/Chenna_Fig1-300x160.png)
Quantization of Convolutional Neural Networks: Quantization Analysis
See “Quantization of Convolutional Neural Networks: Model Quantization” for the previous article in this series. In the previous articles in this series, we discussed quantization schemes and the effect of different choices on model accuracy. The ultimate choice of quantization scheme depends on the available tools. TFLite and PyTorch are the most popular tools used
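As a concrete illustration of the asymmetric (affine) scheme that tools like TFLite and PyTorch implement, the following NumPy sketch (illustrative only, not either tool’s actual code) maps a float tensor onto an 8-bit integer grid via a scale and zero-point, and back.

```python
import numpy as np

def quantize_affine(x, num_bits=8):
    """Asymmetric quantization: map [x.min(), x.max()] onto [0, 2^b - 1]."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (float(x.max()) - float(x.min())) / (qmax - qmin)
    zero_point = int(round(qmin - float(x.min()) / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats: x ~ scale * (q - zero_point)."""
    return scale * (q.astype(np.float32) - zero_point)
```

The round-trip error is bounded by roughly half the scale, which is why per-channel scales (a narrower float range per channel) typically recover more accuracy than a single per-tensor scale.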