Shifts in the Automated Driving Industry
Recent transitions will have a lasting effect on the self-driving industry, says László Kishonti, CEO of AImotive, as key stakeholders have turned from the unattainable goal of full autonomy by 2021 to more realistic development and productization roadmaps. This shift will in turn drive consolidation in an automated driving industry that is currently too fragmented. To realize self-driving and gain regulatory, consumer and (in the case of startups) investor trust, Kishonti suggests in this presentation, the industry's walled-garden approaches must give way to greater collaboration. To support collaboration, cooperation and standardization, companies must adapt the way they build their software and move to more modular designs. The companies with the most effective and meaningful collaborations will be the ones that survive the inevitable consolidation of the automated driving industry.
Automotive Vision Systems — Seeing the Way Forward
It was not long ago, according to Ian Riches, Executive Director for Global Automotive Practice at Strategy Analytics, that cameras were a rarity on all but luxury cars. In 2018, as many automotive cameras were shipped as vehicles! Riches' presentation quantifies the likely future growth and explores the applications and industry forces that are driving camera fitment. The automotive industry is also undergoing unprecedented change, with longstanding vehicle architectures and business models under threat. Riches' presentation therefore also looks at the wider automotive landscape as it impacts the embedded vision community, examining topics such as centralized vs. decentralized architectures and the impact of automated driving on the value chain.
Selecting and Exploiting Sensors for Sensor Fusion in Consumer Robots
How do you design robots that are aware of their unstructured environments at a consumer price point? Excellent sensing is required, but so is the use of low-cost sensors. By fusing data from multiple sensors and ensuring that every sensor does multiple jobs (like the hidden sensors behind your camera's ISP), we can achieve rolling-shutter correction, drivable-space understanding and other applications. Careful sensor selection with a focus on ultimate customer utility, a data bus architecture that plans for fusion, and a full understanding of the inner workings of each sensor can make all of these fusion tasks easier and reduce computational complexity. This talk from Daniel Casner, formerly a systems engineer at Anki, covers the pragmatic approach used in hardware design and some of the design choices made across multiple generations of products to enable sensor fusion. Casner also gives an overview of how information is fused from multiple sensors to make robots that feel alive and aware of their surroundings.
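To make the low-cost fusion idea concrete, here is a minimal sketch (not taken from Casner's talk) of one of the simplest such techniques: a complementary filter that blends a gyroscope, which is accurate over short intervals but drifts, with wheel encoders, which are noisy per step but free of long-term bias, to estimate a robot's heading. The class, parameter values and signal names are hypothetical, chosen only for illustration.

```python
# Illustrative sketch of low-cost sensor fusion: complementary filter
# combining gyro yaw rate with wheel-encoder odometry for heading.
# All names and constants here are hypothetical.

class HeadingFuser:
    """Trust the gyro at short timescales, the encoders at long timescales."""

    def __init__(self, wheel_base_m: float, alpha: float = 0.98):
        self.wheel_base = wheel_base_m   # distance between drive wheels (meters)
        self.alpha = alpha               # gyro weight per update (0..1)
        self.heading = 0.0               # fused heading estimate (radians)

    def update(self, gyro_yaw_rate: float, d_left_m: float,
               d_right_m: float, dt: float) -> float:
        # Gyro path: integrate angular rate (precise short-term, drifts over time).
        gyro_heading = self.heading + gyro_yaw_rate * dt
        # Encoder path: differential-drive kinematics (noisy per step, no drift bias).
        encoder_heading = self.heading + (d_right_m - d_left_m) / self.wheel_base
        # Blend: effectively high-pass the gyro and low-pass the encoders.
        self.heading = (self.alpha * gyro_heading
                        + (1.0 - self.alpha) * encoder_heading)
        return self.heading


# Example: one 10 ms control tick during a gentle right turn.
fuser = HeadingFuser(wheel_base_m=0.12)
heading = fuser.update(gyro_yaw_rate=-0.05, d_left_m=0.0021,
                       d_right_m=0.0019, dt=0.01)
print(f"fused heading: {heading:.5f} rad")
```

The same pattern of letting each sensor cover the other's weakness extends to the richer fusion tasks mentioned above, where, for example, IMU data can correct rolling-shutter distortion in camera frames.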
Visual AI Enables Autonomous Security
Knightscope, based in Silicon Valley, develops and sells a line of autonomous robots that deter, detect and report anomalies to private customers and public law enforcement agencies. In this on-stage interview, William "Bill" Santana Li, Co-founder, Chairman and CEO of Knightscope and a former Ford Motor Company executive, discusses the market needs that Knightscope is addressing and the role computer vision plays in Knightscope's solution. Li and Vin Ratford, Executive Director of the Embedded Vision Alliance, explore obstacles that Knightscope has faced in developing and deploying its current product, and challenges the company will have to overcome to achieve its vision for widespread deployment.