Dear Colleague,
Next Thursday, September 22 from 9 am to 5 pm, the primary Caffe developers from U.C. Berkeley's Vision and Learning Center will present "Deep Learning for Vision Using CNNs and Caffe," a detailed, full-day technical tutorial focused on convolutional neural networks (CNNs) for vision and the Caffe framework for deep learning. Organized by the Embedded Vision Alliance and BDTI, the tutorial will take place at the Hyatt Regency in Cambridge, Massachusetts. It takes participants from an introduction to convolutional neural networks, through the theory behind them, to their practical implementation, and includes multiple hands-on labs using Caffe. For more information, including online registration, please see the event page.
Brian Dipert
Editor-In-Chief, Embedded Vision Alliance
"Bringing Computer Vision to the Consumer," a Keynote Presentation from Dyson
While vision has been a research priority for decades, the results have often remained out of reach of the consumer. Huge strides have been made, but the final, and perhaps toughest, hurdle is how to integrate vision into real-world products. It’s a long road from concept to finished machine, and to succeed, companies need clear objectives, a robust test plan, and the ability to adapt when those plans fail. The Dyson 360 Eye robot vacuum cleaner uses computer vision as its primary localization technology. Ten years in the making, it was taken from bleeding-edge academic research to a robust, reliable, and manufacturable solution by Mike Aldred, Electronics Lead at Dyson, and his team. Aldred’s Embedded Vision Summit keynote talk charts some of the highs and lows of the project, the challenges of bridging between academia and business, and how to use a diverse team to take an idea from the lab into real homes.
"Vision-as-a-Service: Democratization of Vision for Consumers and Businesses," a Presentation from Tend
Hundreds of millions of video cameras are installed around the world, in businesses, homes, and public spaces, but most of them provide limited insights. Installing new, more intelligent cameras requires massive deployments with long time-to-market cycles. Computer vision enables us to extract meaning from video streams generated by existing cameras, creating value for consumers, businesses, and communities in the form of improved safety, quality, security, and health. But how can we bring computer vision to millions of deployed cameras? The answer is through "Vision-as-a-Service" (VaaS), a new business model that leverages the cloud to apply state-of-the-art computer vision techniques to video streams captured by inexpensive cameras. Centralizing vision processing in the cloud offers some compelling advantages, such as the ability to quickly deploy sophisticated new features without requiring upgrades of installed camera hardware. It also brings some tough challenges, such as scaling to bring intelligence to millions of cameras. In this Embedded Vision Summit talk, Herman Yau, Co-Founder and CEO of Tend, explains the architecture and business model behind VaaS, shows how it is being deployed in a wide range of real-world use cases, and highlights some of the key challenges and how they can be overcome.
More Videos
Speeding Up the Fast Fourier Transform Mixed-Radix on Mobile ARM Mali GPUs By Means of OpenCL
In this three-part technical article series (part 1, part 2 and part 3), Gian Marco Iodice, GPU Compute Software Engineer at ARM, covers the following topics:
- Background information on the one-dimensional complex FFT algorithm, pointing out the limits of direct computation of the DFT (discrete Fourier transform)
- An analysis of the three main computation blocks of the FFT (fast Fourier transform) mixed-radix in a step-by-step approach, in both theory and implementation, and
- Extension of the mixed-radix FFT OpenCL implementation to two dimensions, along with explanations of optimizations for mobile ARM Mali GPUs.
Also see Iodice's technical presentation "Using SGEMM and FFTs to Accelerate Deep Learning" from this year's Embedded Vision Summit.
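To make the motivation concrete, here is a minimal illustrative sketch (not taken from the article series): it contrasts the O(N²) direct DFT with a recursive radix-2 Cooley-Tukey FFT, the simplest special case of the mixed-radix factorization the series covers. Function names and the test signal are hypothetical.

```python
import cmath

def dft_direct(x):
    """Direct DFT: O(N^2) complex multiply-adds -- the baseline whose
    cost motivates the FFT."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

def fft_radix2(x):
    """Recursive radix-2 FFT: O(N log N); requires N to be a power of two.
    Mixed-radix FFTs generalize this even/odd split to factors such as
    3, 5, and 7, removing the power-of-two restriction."""
    N = len(x)
    if N == 1:
        return list(x)
    even = fft_radix2(x[0::2])          # DFT of even-indexed samples
    odd = fft_radix2(x[1::2])           # DFT of odd-indexed samples
    # Combine the half-size DFTs using the twiddle factors e^{-2*pi*i*k/N}.
    tw = [cmath.exp(-2j * cmath.pi * k / N) * odd[k] for k in range(N // 2)]
    return ([even[k] + tw[k] for k in range(N // 2)] +
            [even[k] - tw[k] for k in range(N // 2)])

# Both implementations should agree to within floating-point error.
signal = [complex(n % 3, 0) for n in range(8)]
ref = dft_direct(signal)
fast = fft_radix2(signal)
assert all(abs(a - b) < 1e-9 for a, b in zip(ref, fast))
```

The same decomposition idea, applied with radices matched to the GPU's vector width and expressed as OpenCL kernels, is what the article series develops for ARM Mali GPUs.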
Is the Future of Machine Vision Already Here? and Computer Vision as the New Industry Growth Driver?
In this two-article series from Dave Tokic, consultant to the Embedded Vision Alliance, the author shares a range of insights and perspectives gathered at this year's Embedded Vision Summit.
More Articles
Deep Learning for Vision Using CNNs and Caffe: A Hands-on Tutorial: September 22, 2016, Cambridge, Massachusetts
IEEE International Conference on Image Processing (ICIP): September 25-28, 2016, Phoenix, Arizona
SoftKinetic DepthSense Workshop: September 26-27, 2016, San Jose, California
Sensors Midwest (use code EVA for a free Expo pass): September 27-28, 2016, Rosemont, Illinois
Embedded Vision Summit: May 1-3, 2017, Santa Clara, California
More Events