Dear Colleague,

The Embedded Vision Summit is the preeminent conference on practical computer vision, covering applications at the edge and in the cloud. It attracts a global audience of over one thousand product creators, entrepreneurs and business decision-makers who are creating and using computer vision technology. The Embedded Vision Summit has experienced exciting growth over the last few years, with 98% of 2017 Summit attendees reporting that they’d recommend the event to a colleague. The next Summit will take place May 22-24, 2018 in Santa Clara, California. My colleagues and I invite you to share your expertise by proposing a presentation.

The deadline to submit presentation proposals is this Friday, November 10, 2017. For detailed proposal requirements and to submit proposals, please visit https://www.embedded-vision.com/summit/2018/call-proposals. For questions or more information, please email [email protected].
The Embedded Vision Alliance is performing research to better understand what types of technologies are needed by product developers who are incorporating computer vision in new systems and applications. To help guide suppliers in creating the technologies that will be most useful to you, please take a few minutes to fill out this brief survey. As our way of saying thanks for completing it, you’ll receive $50 off an Embedded Vision Summit 2018 2-Day Pass. Plus, you’ll be entered into a drawing for one of several cool prizes. The deadline for entries is November 20, 2017. Please fill out the survey here.
Brian Dipert
Editor-in-Chief, Embedded Vision Alliance
Blending Cloud and Edge Machine Learning to Deliver Real-time Video Monitoring
Network cameras and other edge devices are collecting ever more video – far more than can be economically transported to the cloud. This argues for putting intelligence in edge devices. But the cloud offers unique, valuable capabilities, such as aggregating information from multiple cameras, applying state-of-the-art algorithms, and providing users with access to their data anywhere, any time. Camio uses a combination of machine learning at the edge (in network cameras and network video recorders) and in the cloud to generate alerts, highlight the most significant events captured by a camera, and let users search for events of interest. In this talk, Carter Maslan, Camio’s CEO, explores the trade-offs between edge and cloud processing for systems that extract meaning from video, and explains how the two approaches can be combined to create big opportunities.
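To make the edge/cloud split concrete, here is a minimal sketch of one way such a hybrid pipeline can be organized, with a lightweight on-camera model deciding which events are worth forwarding and the cloud handling aggregation, alerts and search. The function names, data fields and threshold are hypothetical illustrations, not Camio’s actual API or architecture.

# Minimal sketch of a hybrid edge/cloud video pipeline (illustrative only;
# the model, threshold and upload interface are hypothetical, not Camio's API).

EDGE_INTEREST_THRESHOLD = 0.6  # assumed cut-off for "significant" activity

def score_frame_on_edge(frame):
    """Return an interest score in [0, 1] from a lightweight on-camera model.
    Placeholder: a real device would run a compact detector here."""
    return frame.get("motion_score", 0.0)

def process_camera_stream(frames, cloud_uploader):
    """Keep most video local; forward only significant events to the cloud."""
    for frame in frames:
        score = score_frame_on_edge(frame)
        if score >= EDGE_INTEREST_THRESHOLD:
            # The cloud side can aggregate events across cameras, apply heavier
            # models, generate alerts and index events for later search.
            cloud_uploader({"camera_id": frame["camera_id"],
                            "timestamp": frame["timestamp"],
                            "score": score})

# Example usage with stand-in data and a stub uploader:
if __name__ == "__main__":
    sample_frames = [
        {"camera_id": "cam-1", "timestamp": 0.0, "motion_score": 0.1},
        {"camera_id": "cam-1", "timestamp": 1.0, "motion_score": 0.8},
    ]
    process_camera_stream(sample_frames, cloud_uploader=print)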
Intelligent Video Surveillance: Are We There Yet?
The video surveillance market has been an early adopter of computer vision technology. After more than a decade of experience with deployed systems, what have we learned? This talk from Nik Gagvani, Founder and President of CheckVideo, covers the state of the market and current applications of vision technology in surveillance. Gagvani examines the economic considerations of taking vision from the lab to mass-market adoption, weighs the pros and cons of edge vs. cloud computing for surveillance, and previews what’s coming and what that means for investors and companies looking to get into video surveillance.
Introduction to Optics for Embedded Vision
This talk by Jessica Gehlhar, Vision Solutions Engineer at Edmund Optics, provides an introduction to optics for embedded vision system and algorithm developers. Gehlhar begins by presenting fundamental imaging lens specifications and quality metrics. She explains key parameters and concepts such as field of view, f-number, working f-number, NA (numerical aperture), focal length, working distance, depth of field, depth of focus, resolution, MTF (modulation transfer function), distortion, keystoning and telecentricity, and how these parameters relate to one another. She then introduces optical design basics and trade-offs, including design types, aberrations, aspheres, pointing accuracy, sensor matching, color and protective coatings, filters, temperature and environmental considerations, and their relation to sensor artifacts. She also explores manufacturing considerations, including testing the optical components and imaging lenses in a product, and the industrial optics used for a wide range of manufacturing tests. Depending on requirements, a wide variety of tests and calibrations may be performed; these become especially important for designs that incorporate technologies such as multi-camera, 3D, color and NIR (near-infrared) imaging.
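As a concrete illustration of how a few of these parameters relate, the short sketch below computes f-number, working f-number, paraxial numerical aperture and horizontal field of view from focal length, aperture diameter, sensor width and magnification. The specific lens and sensor values are assumptions chosen for illustration, not figures from the talk.

import math

# Hypothetical lens/sensor values chosen for illustration only.
focal_length_mm = 16.0      # lens focal length
aperture_diameter_mm = 8.0  # entrance pupil (aperture) diameter
sensor_width_mm = 7.2       # active sensor width
magnification = 0.1         # image size / object size at the working distance

# f-number: focal length divided by aperture diameter.
f_number = focal_length_mm / aperture_diameter_mm

# Working f-number accounts for close-focus (finite-conjugate) imaging.
working_f_number = (1 + magnification) * f_number

# Paraxial approximation of image-space numerical aperture.
numerical_aperture = 1.0 / (2.0 * working_f_number)

# Horizontal field of view from sensor width and focal length (thin-lens approximation).
hfov_deg = 2.0 * math.degrees(math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

print(f"f-number:           f/{f_number:.1f}")
print(f"working f-number:   f/{working_f_number:.2f}")
print(f"numerical aperture: {numerical_aperture:.3f}")
print(f"horizontal FOV:     {hfov_deg:.1f} degrees")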
Computer Vision Applications Beyond Visible Light
Computer vision systems aren’t restricted to analyzing the portion of the electromagnetic spectrum that is visible to humans, as this technical article from the Embedded Vision Alliance and member companies Allied Vision and XIMEA explains. Expanding the input to encompass the infrared and/or ultraviolet spectrum, either broadly or selectively, can be of great benefit in a range of visual intelligence applications.