Embedded Vision Insights: March 20, 2019 Edition

LETTER FROM THE EDITOR

Dear Colleague,

Vision Systems Design Webinar

On March 27, 2019 at 11 am ET (8 am PT), Jeff Bier, founder of the Embedded Vision Alliance, will deliver a free hour-long webinar, "Embedded Vision: The Four Key Trends Driving the Proliferation of Visual Perception," in partnership with Vision Systems Design. Bier will examine the four most important trends that are fueling the proliferation of vision applications and influencing the future of the visual AI industry. For more information, including online registration, please visit the event page. For more than 20 years, Vision Systems Design has provided in-depth technical and integration insights focused exclusively on the information needs of machine vision and imaging professionals. Sign up today for a free subscription to stay up to date.

The Embedded Vision Summit attracts a global audience of over one thousand product creators, entrepreneurs and business decision-makers who are developing and using computer vision technology. The Embedded Vision Summit has experienced exciting growth over the last few years, with 97% of 2018 Summit attendees reporting that they'd recommend the event to a colleague. The next Summit will take place May 20-23, 2019 in Santa Clara, California, and online registration is now available. The Summit is the place to learn about the latest applications, techniques, technologies, and opportunities in visual AI and deep learning. In 2019, the event will feature new, deeper and more technical sessions, with more than 90 expert presenters in 4 conference tracks (we're delighted to announce the first round of accepted speakers and talks, with more to follow soon) and 100+ demonstrations in the Technology Showcase. Two hands-on trainings, the updated Deep Learning for Computer Vision with TensorFlow 2.0 and brand new Computer Vision Applications in OpenCV, will run concurrently on the first day of the Summit, May 20. And the last day of the Summit, May 23, will again feature the popular vision technology workshops from Intel (both introductory and advanced), Khronos and Synopsys. Register today using promotion code EARLYBIRDNL19 to save 15% with our limited-time Early Bird Discount rate.

Even as Moore's Law winds down, the processor industry continues to thrive. In the data center, next-generation server processors are now supplemented with a variety of accelerators to offload networking, storage, and deep-learning tasks. Deep learning is also a key technology in the development of ADAS and autonomous-driving systems and is moving into IoT and embedded systems. Networking equipment is evolving to SDN, NFV, 5G, and other new technologies. And the Internet of Things is quietly gaining traction in both industrial and consumer products. New processor chips and IP cores, along with memory and networking chips, are emerging to support these new applications. The Linley Spring Processor Conference will explore these and other hot topics; attendees will also have the opportunity to meet with industry leaders and The Linley Group analysts, network with peers, attend an evening reception with sponsor exhibits, demos, and more. Admission is free to qualified registrants who sign up online by April 4. Join fellow engineering professionals already registered for an inspiring day of education! For more information and to register, see the event page.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

COMPUTER VISION FOR AUTONOMOUS ENVIRONMENTS

Intelligent Consumer Robots Powering the Smart Home (iRobot)
The Internet of Things (IoT) has developed rapidly in the past few years, enabled by affordable electronic components, powerful embedded microprocessors, and ubiquitous internet access and WiFi in the home. Connected devices such as Nest and Canary cameras, the Ring doorbell, Philips Hue lights, and smart TVs are now commonplace. However, controlling these devices in a concerted manner to improve the comfort of the user has proven challenging. Multiple frameworks have been proposed to tackle this problem, but most require the user to manually place the devices on a map of the house, rendering the user interface cumbersome for regular consumers. iRobot is focused on developing mapping and navigation technology to make its robots smarter and simpler to use, and to provide valuable spatial information to the broader ecosystem of connected devices in the home. Robot-built and -maintained home maps provide important spatial context by capturing the physical space of the home. In this talk, Mario Munich, Senior Vice President of Technology at iRobot, describes iRobot's vision of the Smart Home: a home that maintains itself and magically just does the right thing in anticipation of occupant needs. This home will be built on an ecosystem of connected and coordinated robots, sensors, and devices that provides occupants with a high quality of life by seamlessly responding to the needs of daily living, from comfort to convenience to security to efficiency.

Outside-In Autonomous Systems (Microsoft)
In this presentation, Jie Liu, Visual Intelligence Architect in the Cloud and AI Platforms Group at Microsoft, shares his company's vision for smart environments that observe and understand space, people and things.

CLOUD AND EDGE VISION PROCESSING

Cloud Computer Vision for a Real-time Consumer Product (Cocoon Cam)
The capabilities of cloud computing are expanding rapidly. At the same time, cloud computing costs are falling. This makes it increasingly attractive to implement computer vision in the cloud, even for cost-sensitive applications requiring real-time response. In this presentation, Pavan Kumar, Co-founder and CTO at Cocoon Cam, explores the benefits and limitations of computer vision in the cloud today – both for initial prototyping and for product deployment – based on Cocoon Cam’s experience creating the first vision-enabled baby monitor.
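As a concrete illustration of the real-time constraint, below is a minimal client-side sketch, not Cocoon Cam's actual pipeline: it JPEG-encodes camera frames, posts them to a hypothetical cloud vision endpoint, and measures the round-trip latency a cloud-based product must keep within budget. The endpoint URL, response format, and the analyze_frame helper are assumptions for illustration only.

# Hypothetical client-side sketch: send camera frames to a cloud vision API
# and measure round-trip latency. The endpoint and response schema are
# illustrative assumptions, not Cocoon Cam's actual service.
import time
import cv2        # pip install opencv-python
import requests   # pip install requests

ENDPOINT = "https://example.com/v1/analyze-frame"  # hypothetical cloud endpoint

def analyze_frame(frame) -> dict:
    """JPEG-encode one frame, POST it to the cloud, and return the JSON result."""
    ok, jpeg = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), 80])
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    resp = requests.post(
        ENDPOINT,
        data=jpeg.tobytes(),
        headers={"Content-Type": "image/jpeg"},
        timeout=2.0,  # a real-time product needs a hard upper bound on latency
    )
    resp.raise_for_status()
    return resp.json()

def main():
    cap = cv2.VideoCapture(0)  # default camera
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            start = time.perf_counter()
            result = analyze_frame(frame)
            latency_ms = (time.perf_counter() - start) * 1000
            print(f"cloud round trip: {latency_ms:.0f} ms, result: {result}")
    finally:
        cap.release()

if __name__ == "__main__":
    main()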

Leveraging Edge and Cloud for Visual Intelligence Solutions (Xilinx)
For many computer vision systems, a critical decision is whether to implement vision processing at the edge or in the cloud. In a growing number of cases, designers are choosing to use both edge and cloud processing, which opens up the possibility of leveraging the strengths of both approaches. But determining the best mix of edge and cloud processing for an application can be challenging because the trade-offs are often complex and subtle, involving numerous factors such as latency, cost, bandwidth and power consumption. In this talk, Salil Raje, Senior Vice President in the Software and IP Products Group at Xilinx, explores the advantages of combining edge and cloud processing for visual intelligence, and outlines ways that solution developers can optimize their applications for the right blend.
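One common way to blend the two approaches is sketched below under assumed names (CLOUD_ENDPOINT, frame_has_motion and the motion threshold are illustrative, not Xilinx's method): a cheap frame-difference check runs at the edge, and only frames that pass it are escalated to a cloud service for heavier analysis, trading a little edge compute for large savings in bandwidth and cloud cost.

# Hypothetical edge/cloud split: a cheap edge-side motion check decides which
# frames are worth the bandwidth and cost of full cloud analysis.
import cv2
import numpy as np
import requests

CLOUD_ENDPOINT = "https://example.com/v1/deep-analysis"  # hypothetical service
MOTION_THRESHOLD = 8.0  # mean absolute pixel difference; tune per camera

def frame_has_motion(prev_gray, gray) -> bool:
    """Edge-side filter: flag a frame only if it differs enough from the last one."""
    diff = cv2.absdiff(prev_gray, gray)
    return float(np.mean(diff)) > MOTION_THRESHOLD

def send_to_cloud(frame) -> dict:
    """Escalate a flagged frame to the (assumed) cloud service for deep analysis."""
    ok, jpeg = cv2.imencode(".jpg", frame)
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    resp = requests.post(CLOUD_ENDPOINT, data=jpeg.tobytes(),
                         headers={"Content-Type": "image/jpeg"}, timeout=5.0)
    resp.raise_for_status()
    return resp.json()

def main():
    cap = cv2.VideoCapture(0)
    prev_gray = None
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev_gray is not None and frame_has_motion(prev_gray, gray):
                print("motion detected at the edge; cloud result:", send_to_cloud(frame))
            prev_gray = gray
    finally:
        cap.release()

if __name__ == "__main__":
    main()

The same structure generalizes: the edge stage can be any lightweight detector the local processor can sustain at frame rate, while the cloud stage carries the models that are too large, or too frequently updated, to deploy at the edge.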

UPCOMING INDUSTRY EVENTS

Vision Systems Design Webinar – Embedded Vision: The Four Key Trends Driving the Proliferation of Visual Perception: March 27, 2019, 11 am ET

Embedded Vision Summit: May 20-23, 2019, Santa Clara, California

More Events


FEATURED NEWS

Algolux Announces Ion – The Industry’s First Platform for Autonomous Vision System Design

Out Now: Allied Vision's Mako Camera with Polarization Sensor Technology

CEVA Computer Vision, Deep Learning and Long Range Communication Technologies Power DJI Drones

In Series Production: Basler MED ace Camera Compliant with DIN EN ISO 13485:2016

Xnor Appoints Mobile Computing Expert to Principal Engineer

More News


Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411