Dear Colleague,
Are you an early-stage start-up developing a new product or service that incorporates or enables computer vision or visual AI? Do you want to raise awareness of your company and its products with industry experts, investors and entrepreneurs? The 4th annual Vision Tank competition offers start-up companies the opportunity to present their new products and product ideas to more than 1,000 influencers and product creators at the 2019 Embedded Vision Summit, the premier event for innovators developing products with visual intelligence, at the edge and in the cloud. The Vision Tank is a unique spin on the popular "Shark Tank" reality show, judged by well-known vision industry leaders, venture capitalists, and successful product creators.
The deadline to enter is January 31, 2019. All applicants will receive feedback from the expert panel of judges, and all finalists will additionally receive a free two-day Embedded Vision Summit registration package. The finalist competition takes place during the Embedded Vision Summit, May 20-23, 2019 in Santa Clara, California. The Judges' Choice and Audience Choice winners will each receive a free one-year membership in the Embedded Vision Alliance, providing unique access to the embedded vision industry ecosystem. For more information, including detailed instructions and an online submission form, please see the event page on the Alliance website. Good luck!
Brian Dipert
Editor-In-Chief, Embedded Vision Alliance
Building Up a Start-up in Embedded Vision: Lessons from Machine Vision
As embedded vision becomes more capable, it is proliferating into many new markets and attracting many new start-ups. How can you gauge when the time is right to launch an embedded vision start-up? Should you focus your business on one application, or try to address many? How should you decide when to expand your start-up into additional markets and regions? One way to inform such decisions is to examine the experience of past start-ups in related fields. In this presentation, Arndt Bake, Chief Marketing Officer at Basler, briefly examines the development of the machine vision market (vision mainly for factory automation), including the main phases of its growth over the past 30 years. He looks at examples of successful start-up companies in machine vision, distills key lessons from these examples and applies the resulting recipes for success to the current situation of embedded vision start-ups. Finally, he identifies which market verticals are ripe for embedded vision start-ups and illustrates one representative opportunity.
2018 Embedded Vision Summit Vision Entrepreneurs' Panel
What can we learn from leaders of successful vision-based start-ups? The expanding applications of embedded vision are opening up exciting business opportunities, and countless entrepreneurs are developing diverse vision-based end-products and enabling technologies. But building a vision-based company brings unique risks and challenges. This panel brings together an amazing group of visionary leaders—Nik Gagvani, President of CheckVideo; László Kishonti, CEO of AImotive; Radha Basu, CEO of iMerit; and Gary Bradski, CTO and Co-founder of Arraiy.com, and CEO and Founder of OpenCV.org—who have conceived and scaled vision-based businesses to multi-hundred-million-dollar valuations. Sharing their failures as well as their successes, along with key lessons learned, these successful entrepreneurs "pay it forward" – helping to enable the next generation of vision-based start-up leaders.
Balancing Safety, Convenience and Privacy in the Era of Ubiquitous Cameras
Computer vision-enabled cameras are proliferating rapidly and will soon be ubiquitous – in, on and around vehicles, homes, toys, stores, public transit, schools, restaurants and more. Clearly, this offers tremendous benefits in terms of safety, security, convenience and efficiency. But what about privacy? Are we doomed to give up our privacy as cameras proliferate? Possibly, but not necessarily. Many of the same technologies that are fueling the proliferation of visual intelligence can also be used to enhance privacy, if product developers choose to do so, and if consumers, enterprises and governments prioritize privacy. For example, accelerating innovation in sensors means that system designers have many choices of sensor types beyond the typical CMOS image sensor, enabling engineers to choose sensors that capture only the information required for the application. And rapid progress in embedded processors – combined with efficient, accurate algorithms – makes it increasingly feasible to process images at the edge or in the fog, and then discard them, retaining only the required metadata. In this talk, Charlotte Dryden, Director of the Visual Computing Developer Solutions team at Intel, explores trade-offs related to privacy in a world filled with connected cameras.
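The "analyze at the edge, keep only metadata" pattern described above can be sketched in a few lines. This is an illustrative outline only, not from the talk: the detector is a stand-in stub, and all names here are hypothetical.

```python
import time

def process_and_discard(frame, detect):
    """Privacy-preserving edge pipeline sketch: run inference on a captured
    frame, keep only the derived metadata, and discard the raw pixels.
    `detect` is a hypothetical detector callable returning bounding boxes."""
    boxes = detect(frame)
    metadata = {
        "timestamp": time.time(),
        "object_count": len(boxes),
        "boxes": boxes,  # coordinates only -- no imagery is retained
    }
    del frame  # the raw image never leaves the device
    return metadata

# Stand-in detector for illustration; a real system would run a vision model.
def dummy_detector(frame):
    return [(10, 10, 50, 80)]  # one fabricated bounding box

meta = process_and_discard([[0] * 640 for _ in range(480)], dummy_detector)
print(meta["object_count"])  # 1
```

The key design choice is that only `metadata` crosses the network boundary; whether that is sufficient for a given application (e.g., counting shoppers vs. identifying them) is exactly the trade-off the talk examines.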
Creating a Computationally Efficient Embedded CNN Face Recognizer
Face recognition systems have made great progress thanks to the availability of data, deep learning algorithms and better image sensors. Face recognition systems should be tolerant of variations in illumination, pose and occlusions, and should be scalable to large numbers of users with minimal need for capturing images during registration. Traditional machine learning approaches are limited in their scalability. Existing deep learning approaches make use of either "too-deep" networks with increased computational complexity or customized layers that require large model files. In this talk, Praveen G.B., Technical Lead at PathPartner Technology, explores low-to-high complexity CNN architectures for face recognition and shows that with the right combination of training data and cost functions, you can indeed train a low-complexity CNN architecture (an AlexNet-like model, for example) that achieves reasonably good accuracy compared with more-complex networks. He then explores system-level algorithmic customizations that will enable you to create a robust real-time embedded face recognition system using low-complexity CNN architectures.
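At runtime, systems like the one described typically compare an embedding vector produced by the CNN for a probe face against embeddings enrolled at registration. The network itself is beyond a snippet, but the matching stage can be sketched as below. This is a generic illustration, not PathPartner's method; the embeddings, gallery contents and threshold are all made up.

```python
import math

def cosine_similarity(a, b):
    # Compare two face embeddings (e.g. produced by a compact CNN).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, gallery, threshold=0.7):
    """Return the best-matching enrolled identity, or None if no enrolled
    embedding exceeds the threshold. `gallery` maps user id -> embedding
    captured once at registration (illustrative values below)."""
    best_id, best_score = None, threshold
    for user_id, enrolled in gallery.items():
        score = cosine_similarity(probe, enrolled)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id

gallery = {"alice": [0.9, 0.1, 0.0], "bob": [0.0, 0.95, 0.2]}
print(identify([0.88, 0.12, 0.05], gallery))  # alice
```

Because matching is a linear scan over normalized vectors, gallery size scales cheaply on an embedded target; the accuracy burden falls on the embedding network, which is why the talk focuses on training low-complexity CNNs well.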