Imagination at the Embedded Vision Summit 2017


This blog post was originally published at Imagination Technologies' website. It is reprinted here with the permission of Imagination Technologies.

The Imagination PowerVR team are busily preparing for the Embedded Vision Summit 2017 (EVS), taking place from 1-3 May in Santa Clara. EVS is a great industry event for all those involved with vision and surrounding technologies. From IP creators like Imagination, through semiconductor companies, to algorithm developers and equipment OEMs, everyone with an interest in computer vision is likely to be in attendance.

PowerVR continues to develop its vision processing capabilities, both in terms of GPU-based vision processing and dedicated hardware, including our ISP family. Many of our customers see a real benefit in being able to perform vision processing tasks on the GPU, avoiding the need for separate, expensive, dedicated hardware. In many applications the GPU is typically underutilised by graphics work; in computational photography, for example, it may only be required to draw a simple UI. In these cases, the spare GPU capacity can be put to work to provide the main vision processing capability of the SoC. This year's launch of the PowerVR Series8XE Plus GPUs gave a significant boost to this vision processing capability in the entry- to mid-range segment by increasing the FLOPS on offer, while still offering a significant area saving versus competing solutions.
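As a rough illustration of what "vision on the GPU" looks like in practice, the sketch below is a minimal OpenCL C kernel that converts an RGBA frame to greyscale and applies a fixed threshold. It is not taken from any PowerVR SDK sample; the kernel name, buffer layout and threshold value are assumptions chosen purely for the example.

```c
/* Minimal OpenCL C kernel sketch (not a PowerVR SDK sample): convert an
 * RGBA8 frame to greyscale and binarise it with a fixed threshold.
 * Buffer layout and the kernel/parameter names are assumptions for
 * illustration; a real pipeline would be tuned for the target GPU. */
__kernel void rgba_to_binary(__global const uchar4 *src,   /* input RGBA8 pixels  */
                             __global uchar *dst,          /* output binary mask  */
                             const uint width,
                             const uint height,
                             const uchar threshold)
{
    const uint x = get_global_id(0);
    const uint y = get_global_id(1);
    if (x >= width || y >= height)
        return;

    const uint idx = y * width + x;
    const uchar4 px = src[idx];

    /* Integer approximation of BT.601 luma: Y = 0.299 R + 0.587 G + 0.114 B */
    const uint luma = (299u * px.x + 587u * px.y + 114u * px.z) / 1000u;

    dst[idx] = (luma > threshold) ? 255 : 0;
}
```

On an SoC where the GPU sits largely idle between UI redraws, dispatching a kernel like this over each camera frame costs little beyond the memory traffic, which is exactly the spare-capacity argument made above.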

We have also recently announced our new PowerVR Furian architecture, which includes many enhancements for better vision processing, such as multi-threaded (scatter/gather) access to local GPU memory, DMA access to the GPU, and lower-overhead access to GPU compute that does not require kernel-mode transitions or a separate compute data path, to list but a few. Of course, some operations, such as processing the raw data from a CMOS sensor, are better performed on dedicated hardware, and this is where we rely on our ISP family.

Imagination is again an event sponsor of the Embedded Vision Summit this year, and as such, we will be exhibiting in the Technology Showcase. Our engineers are creating some exciting new demonstrations of Imagination's leading vision processing technologies. As ever with technology demos, these are likely to be fully ready only hours before the event kicks off, and for that reason I don't want to give too much away in terms of spoilers. We are also working on having a third-party demo from one of our licensees, who has some exciting new silicon utilising multiple IP cores from Imagination. If we get confirmation in time, we'll be sure to update this blog post with more details.

I can say that there will be several demos based around CNNs running efficiently on PowerVR GPUs. Imagination continues to be a great supporter of Khronos and its open standards and will be demonstrating OpenVX.  Imagination offered the first OpenVX 1.1 conformant solution late last year when the PowerVR GPU passed the conformance test. We are building on this implementation with the adoption of the OpenVX CNN extensions and will be showing this in action at Embedded Vision Summit. OpenVX is gathering momentum as the preferred framework for the real-world deployment of vision applications, and the adoption of CNN extensions will further this.  There is a full training day on 3 May during which I will give a short implementer presentation.
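For readers less familiar with OpenVX, the fragment below is a minimal sketch of how a vision pipeline is expressed as an OpenVX graph, which a conformant implementation such as the PowerVR GPU driver can then schedule and optimise as a whole. It uses only standard OpenVX 1.1 kernels; the image dimensions are arbitrary assumptions for illustration, and the CNN extension nodes mentioned above are not shown.

```c
/* Minimal OpenVX 1.1 sketch: build and run a small edge-detection graph.
 * Only standard kernels are used; the 640x480 size is an arbitrary assumption. */
#include <stdio.h>
#include <VX/vx.h>

int main(void)
{
    vx_context context = vxCreateContext();
    vx_graph   graph   = vxCreateGraph(context);

    /* Concrete input/output images plus virtual intermediates; virtual
     * images let the implementation choose storage and fuse work. */
    vx_image input  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_image output = vxCreateImage(context, 640, 480, VX_DF_IMAGE_S16);
    vx_image blur   = vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_VIRT);
    vx_image grad_x = vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_VIRT);
    vx_image grad_y = vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_VIRT);

    /* Nodes: Gaussian smoothing, Sobel gradients, gradient magnitude. */
    vxGaussian3x3Node(graph, input, blur);
    vxSobel3x3Node(graph, blur, grad_x, grad_y);
    vxMagnitudeNode(graph, grad_x, grad_y, output);

    /* Verification lets the implementation validate and optimise the
     * whole graph before any frame is processed. */
    if (vxVerifyGraph(graph) == VX_SUCCESS)
        vxProcessGraph(graph);
    else
        printf("graph verification failed\n");

    vxReleaseGraph(&graph);
    vxReleaseContext(&context);
    return 0;
}
```

The graph-level view is what makes OpenVX attractive for real-world deployment: the implementation, rather than the application, decides how the nodes are mapped and fused onto the GPU or other accelerators.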

The presentations are always highlights of the Embedded Vision Summit. This year's program looks to be very interesting, with a wide variety of topics covered. It's difficult to pick the most interesting from the comprehensive list, but here are a couple that I am really looking forward to:

The first presentation I've highlighted is by Jeff McVeigh of Intel. He will address an important question: will vision algorithms and applications be bought off the shelf, or built in house? I'm interested to see what Jeff's perspective is, given the massive recent investments into vision (Altera, Movidius, Mobileye) that Intel has made and is continuing to make.

I've highlighted this second talk for a number of reasons, but primarily because it manages to combine computer vision with a traditional British pub activity: no, not beer drinking, but playing darts! This looks to be a really interesting talk, in particular for its use of vision-only cameras. It also highlights how much more accessible computer vision has become compared to only a few years ago. If my few words have not sold this talk to you, then surely the YouTube video clip will.

And of course, I couldn't leave the discussion of presentations without highlighting the one due to be given by Imagination's Paul Brasnett. Paul's talk will look at the choices we can make when training neural networks, and how those choices impact the performance, bandwidth and power of the network at inference time.

We would love to meet you and talk about your vision processing needs, chew the fat over OpenVX, or just generally debate whether pool or darts is the greatest pub game in the world! You can contact us and we will be pleased to set up a meeting, or just drop by our stand in the exhibition hall and say "hi".

By Chris Longstaff
Senior Director of Product & Technology Marketing for PowerVR, Imagination Technologies
