Fewer Than 10,000 Pixels Is Enough for Autonomy

This blog post was originally published at Opteran Technologies’ website. It is reprinted here with the permission of Opteran Technologies.

We’ve read with interest the posts Geoffrey Barrows at Centeye has been publishing on the challenges of developing autonomous drones. His most recent one underlines how many factors influence a drone’s ability to successfully avoid obstacles. Clearly, how a drone views the world is integral to its success, and, as Geoffrey rightly highlighted back in March, there are real challenges in producing the right technology to enable intelligence and vision, especially in nano drones.

This is, in part, because human engineers are trying to solve the challenge with traditional technology approaches. To give a drone enough processing power to see and sense its environment, the standard approach is to adopt expensive GPUs; to help it make decisions, deep learning has become the de facto standard for most autonomous robotics and vehicle projects. However, as the salutary lesson of Starsky Robotics shows, supervised learning is not all it’s cracked up to be. Training a drone on a sufficiently representative sample of data is inefficient and costly, given the data collection and processing power required – never mind the potential environmental impact.

That is why Geoffrey is right to point to nature as a far more effective source of solutions for the intelligence and vision challenges posed by nano drones. And it is why Opteran has taken inspiration from our research into insects and their use of optic flow to develop our Natural Intelligence technology. Insects have developed a variety of flight strategies to understand the world around them, using the equivalent of only a couple of low-resolution cameras. Optic flow – the apparent motion of the visual scene across the eye as the viewer moves – is key to their approach, but, as we highlighted above, today’s standard AI approaches struggle to replicate nature’s robustness and efficiency.
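Since optic flow anchors everything that follows, here is a minimal illustrative sketch in Python of the classic gradient-based (Lucas-Kanade) estimator, recovering a single global flow vector from two small grayscale frames. To be clear, this is the textbook technique, not Opteran’s reverse-engineered honeybee algorithm; the function name global_flow and the synthetic test pattern are our own.

```python
import numpy as np

def global_flow(prev, curr):
    """Estimate a single translational flow vector (vx, vy) between two
    small grayscale frames via the Lucas-Kanade least-squares step.
    Illustrative textbook optic flow, not Opteran's algorithm."""
    prev = prev.astype(np.float64)
    curr = curr.astype(np.float64)
    Ix = np.gradient(prev, axis=1)   # horizontal intensity gradient
    Iy = np.gradient(prev, axis=0)   # vertical intensity gradient
    It = curr - prev                 # temporal intensity change
    # Brightness constancy, Ix*vx + Iy*vy + It = 0, solved in least squares.
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    v, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return v  # (vx, vy) in pixels per frame

# Synthetic check: a smooth pattern translated one pixel to the right.
ys, xs = np.mgrid[0:64, 0:64]
f0 = np.sin(xs / 4.0) + np.cos(ys / 5.0)
f1 = np.sin((xs - 1) / 4.0) + np.cos(ys / 5.0)
vx, vy = global_flow(f0, f1)
print(f"vx={vx:.2f}, vy={vy:.2f}")  # expect vx near +1.0, vy near 0.0
```

Real systems apply this per patch to recover a full flow field, but even the global version shows how little pixel data the underlying computation actually needs.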

We’re delighted to say we’ve patented our first product, Flow, based on reverse-engineering the honeybee visual system and its use of optic flow. It is far more effective and efficient than existing approaches in performance, energy consumption and accuracy. Our algorithms can be deployed on an FPGA or custom silicon and run at up to 10,000 frames per second, all for less than half a watt of power draw. As a result, we can control a sub-250g drone, with complete onboard autonomy, using fewer than 10,000 pixels from a single low-resolution panoramic camera. By comparison, even an old-technology VGA camera captures 307,200 pixels – roughly 30 times as many – over a narrower field of view.
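For concreteness, the pixel-budget arithmetic behind that comparison (assuming the standard 640 × 480 VGA resolution) works out as follows:

```python
# Pixel-budget comparison (illustrative arithmetic only).
opteran_budget = 10_000             # fewer than 10,000 pixels for autonomy
vga_pixels = 640 * 480              # 307,200 pixels in one VGA frame
print(vga_pixels - opteran_budget)  # 297,200 extra pixels in a VGA frame
print(vga_pixels / opteran_budget)  # ~30x the pixel count
```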

What does this mean for autonomous drones moving forward? We believe the priority today is understanding and mimicking brain functions to empower machines to solve practical tasks, such as observing, avoiding and fetching. Our aim is to deliver real value with genuine commercial use. So rather than debating abstract theoretical ideas like Artificial General Intelligence, we believe the more immediate and exciting opportunity is to embed natural intelligence into robotics, enabling machines to see, sense, navigate and even make decisions with greater confidence. We are confident this is possible because our demo flights are showing it can be done now. Of course, the “brainpower” we are bringing to nano drones needs to be combined with improvements in other fields, such as camera hardware, but we are delighted to have broken through the limitations of today’s deep learning technology, as doing so will greatly expand the potential for commercial autonomous systems.

James Marshall
CSO and Co-founder, Opteran Technologies
