LETTER FROM THE EDITOR
Dear Colleague,
On Tuesday, February 6, 2024 at 9 am PT, Immervision will deliver the free webinar “Optimizing Camera Design for Machine Perception Via End-to-end Camera Simulation” in partnership with the Edge AI and Vision Alliance. Camera design is an iterative process that minimizes complex cost functions while optimizing the optics, sensor and image signal processor (ISP) for multiple interconnected parameters. New types of lenses (such as metalenses and free-form lenses), along with new types of image sensors and improved image processing techniques, offer many ways to optimize a camera for an application. But what criteria are we optimizing for? Camera designers today still typically focus on key performance indicators (KPIs) derived from human perception (e.g., removing aberrations and increasing sharpness). But in a growing range of applications, images are used for computer vision rather than human viewing. Camera design KPIs need to be redefined for best results in machine perception applications.
In this webinar, Julie Buquet, AI Scientist for Imaging, and Ludimila Centeno, Associate Vice President of Technology Offer and Support, both of Immervision, will explain how to optimize camera performance for machine perception applications through simulation and end-to-end design. Computer vision requirements are often initially based on algorithm performance targets. Upfront translation of these objectives into camera KPIs is necessary to support the design process. Then, by simulating the entire camera—optics, sensor, ISP and other factors—it is possible to estimate the impact of each component on machine perception performance even prior to building a physical prototype. This approach also allows for automated optimization of optical parameters. Buquet and Centeno will step through the imaging pipeline, describing various stages including commonly undocumented ones such as image quality tuning done by the ISP. They will highlight how Immervision has built complex cameras more efficiently while meeting machine perception performance requirements at reasonable cost. Buquet and Centeno will use a wide-angle in-cabin camera case study to show how these techniques play out in applications. A question-and-answer session will follow the presentation. For more information and to register, please see the event page.
The 2024 Embedded Vision Summit Call for Presentation Proposals is still open, but only through the end of this week! I invite you to share your expertise. Our team is curating what will be more than 100 expert sessions and we’d love to see your proposal. From case studies on integrating perceptual AI into products to tutorials on the latest tools and algorithms, send in your session idea today. And if you’re not sure about your topic, check out the topics list to see what’s trending for 2024. The deadline for submissions is this Friday, December 8.
Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance
ENSURING ACCOUNTABLE, INCLUSIVE AI
Responsible AI: Tools and Frameworks for Developing AI Solutions
Over 90% of businesses using AI say trustworthy and explainable AI is critical to business, according to Morning Consult’s IBM Global AI Adoption Index 2021. If not designed with responsible consideration of fairness, transparency, privacy, safety and security, AI systems can cause significant harm to people and society and result in financial and reputational damage for companies. How can we take a human-centric approach to designing AI solutions? How can we identify different types of bias, and what tools can we use to mitigate them? What are model cards, and how can we use them to improve transparency? What tools can we use to preserve privacy and improve security? In this talk, Mrinal Karvir, Senior Cloud Software Engineering Manager at Intel, discusses practical approaches to adopting responsible AI principles. She highlights relevant tools and frameworks and explores industry case studies. She also discusses building a well-defined response plan to help address an AI incident efficiently.
Bias in Computer Vision—It’s Bigger Than Facial Recognition!
As AI is increasingly integrated into various industries, concerns about its potential to reproduce or exacerbate bias have become widespread. While the use of AI holds the promise of reducing bias, it can also have unintended consequences, particularly in high-stakes computer vision applications such as facial recognition. However, even seemingly low-stakes computer vision applications, such as identifying potholes and damaged roads, can present ethical challenges related to bias. This talk explores how bias in computer vision often poses an ethical challenge, regardless of the stakes involved. Susan Kennedy, Assistant Professor of Philosophy at Santa Clara University, discusses the limitations of technical solutions aimed at mitigating bias, and why “bias-free” AI may not be achievable. Instead, she focuses on the importance of adopting a “bias-aware” approach to responsible AI design and explores strategies that can be employed to achieve this.
STARTUP INSIGHTS
2023 Vision Tank Finalist Competition
Parrots, GMAC Intelligence, Lemur Imaging, Optimizing Mind and ProHawk Technology Group deliver their Vision Tank finalist presentations at the May 2023 Embedded Vision Summit. The Vision Tank highlights the most promising early-stage start-ups that incorporate visual intelligence in their products. In a lively, engaging, and interactive format, these companies compete for awards and prizes and benefit from the feedback of an expert panel of judges: Vin Ratford, CEO of Piera Systems and Executive Director of the Edge AI and Vision Alliance, Shweta Shrivastava, Senior Product Leader at Waymo, Forrest Iandola, AI Research Scientist at Meta, and John Feland, Master of Ceremonies, Data Whisperer and Design Thinker.
90% of Tech Start-Ups Fail. What the Other 10% Know
Simon Morris, Executive Advisor at Connected Vision Advisors, is fortunate to have led three tech start-ups with three successful exits. He received a lot of advice along the way from venture investors, co-founders, colleagues, competitors, customers and other tech entrepreneurs. He has always been fascinated by the success and failure stories of businesses in general, and tech start-ups in particular. What are the common factors that lead some to succeed while most fail? In this talk, Morris explores the most important success factors, with a focus on the last decade of computer vision and edge AI start-ups. His own experiences, as well as what he has learned from others, suggest that the most important success factors include a very clear understanding of the target customer and market need; a clear-eyed quantification of both the value that the solution brings to customers and its differentiation vs. competitors; a robust go-to-market strategy that can achieve repeatable and scalable growth; and a strong, diverse leadership team.
UPCOMING INDUSTRY EVENTS
Mastering Image Quality: The Power of Imaging Signal Processors in Embedded Vision – e-con Systems Webinar: January 24, 2024, 9:00 am PT
Optimizing Camera Design for Machine Perception Via End-to-end Camera Simulation – Immervision Webinar: February 6, 2024, 9:00 am PT
Embedded Vision Summit: May 21-23, 2024, Santa Clara, California
More Events
FEATURED NEWS
Axelera AI, DEEPX Named as CES 2024 Innovation Awards Honorees
Arm Extends Its Cortex-M Portfolio to Bring AI to the Smallest Endpoint Devices
Latest Qualcomm Snapdragon 7-series Mobile Platform Provides Leading Performance and Power Efficiency with First-in-tier AI, Other Features
Silo AI-led SiloGen Consortium, Developing the Poro Open Large Language Model Family with European Language Support, Adds Computer Vision Capabilities via LAION Partnership in Latest Release
New AMD Radeon PRO Workstation Graphics Card Powers Next-generation AI Applications
More News
EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE
Outsight SHIFT LiDAR Software (Best Edge AI Software or Algorithm)
Outsight’s SHIFT LiDAR Software is the 2023 Edge AI and Vision Product of the Year Award winner in the Edge AI Software and Algorithms category. The SHIFT LiDAR Software is a real-time 3D LiDAR pre-processor that enables application developers and integrators to easily utilize LiDAR data from any supplier and for any use case outside of automotive (e.g., smart infrastructure, robotics, and industrial). Outsight’s SHIFT LiDAR Software is the industry’s first 3D data pre-processor, providing all the essential functions required to integrate any LiDAR into any project (SLAM, object detection and tracking, segmentation and classification, etc.). One of the software’s greatest advantages is that it produces an “explainable” real-time stream of data low-level enough to either directly feed ML algorithms or be fused with other sensors, and smart enough to decrease network and central processing requirements, thereby enabling a new range of LiDAR applications. Outsight believes that accelerating the adoption of LiDAR technology with easy-to-use and scalable software will meaningfully contribute to creating transformative solutions and products that make for a smarter and safer world.
Please see here for more information on Outsight’s SHIFT LiDAR Software. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.