This blog post was originally published by Opteran Technologies. It is reprinted here with the permission of Opteran Technologies.
We have just published more of the fundamental research into honeybee brains that led to the formation of Opteran, the Natural Intelligence company that reverse-engineers algorithms for autonomy from the brains of insects. In this work we showed that the decisions honeybees make during foraging, choosing whether or not to land on a flower according to whether it is likely to be rewarding, have much of the richness we see in other animals, ourselves included. Yet they do this, as well as solving complex problems such as visually navigating up to 10km out and back to a flower patch, with a brain containing fewer than 1 million neurons (our own brain, in comparison, has around 86 billion). You’ll see some of this basic research applied in the upcoming Opteran Mind Decision Engine.
In the study, we tested bees presented with different coloured artificial flowers that were rewarded with sugar solution or punished with a bitter quinine solution. In some experiments we changed the probabilities of the outcomes, and in others we presented ambiguous colours in between the rewarded and punished colours. By videotaping and analysing their behaviour, we found that bees were risk-averse, preferring to incorrectly reject a rewarding flower rather than incorrectly land on a bitter one. We also found that when bees correctly identified a sweet flower they were faster than when they incorrectly landed on a bitter flower. This is surprising, since it appears to contradict the fundamental speed-accuracy trade-off of decision theory: that slower decisions are more accurate. We were then able to build a computational model of the honeybee brain regions involved in decision-making that reproduced the patterns of behaviour we observed in the real bees.
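For readers who want a feel for how such a model behaves, here is a minimal sketch in Python of the kind of evidence-accumulation process decision theorists use: a drift-diffusion model with an asymmetric rejection threshold. To be clear, this is an illustration of the general technique, not our published model of the bee brain, and all the parameter values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

def trial(mean_drift, accept_bound=1.0, reject_bound=-0.6,
          drift_sd=0.5, noise=1.0, dt=0.005, max_t=5.0):
    """Simulate one flower encounter as a drift-diffusion process.

    Evidence about flower colour accumulates until it crosses the
    'accept' (land) or 'reject' (fly past) bound. The reject bound is
    closer to the starting point, biasing choices towards rejection:
    a simple stand-in for the bees' risk aversion. Trial-to-trial
    variability in drift (drift_sd) is a standard way such models
    make errors slower, on average, than correct choices.
    """
    drift = rng.normal(mean_drift, drift_sd)  # perceived colour evidence
    x, t = 0.0, 0.0
    while t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= accept_bound:
            return "accept", t
        if x <= reject_bound:
            return "reject", t
    return "timeout", t

def summarise(mean_drift, label, n=500):
    outcomes = [trial(mean_drift) for _ in range(n)]
    accept_times = [t for choice, t in outcomes if choice == "accept"]
    rate = len(accept_times) / n
    mean_rt = np.mean(accept_times) if accept_times else float("nan")
    print(f"{label:18s} accept rate {rate:.2f}, mean time to accept {mean_rt:.2f}s")

summarise(+1.0, "rewarding colour:")   # frequent, fast landings
summarise(-1.0, "punished colour:")    # rare, slower (incorrect) landings
summarise( 0.0, "ambiguous colour:")   # mostly rejected: risk aversion
```

Run it and rewarding colours are accepted often and quickly, punished colours rarely and more slowly, and ambiguous colours are mostly rejected: qualitatively, the risk-averse, fast-correct, slow-error pattern described above.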
While the science of how bees, and other insects such as ants, do what they do is what first drew me to study them, it’s the technological potential they hold that now fascinates me. This is what led me to co-found Opteran in 2020, based on over 9 years of basic research I led into how insect autonomy works. The prevailing paradigm for machine autonomy is based on a combination of technologies, including deep learning, deep reinforcement learning, and photogrammetry (which I will be writing about in due course). The breakthrough performances of deep learning on the ImageNet classification challenge occurred just as we were beginning our research journey, and still represent the state of the art in rivalling the performance of animal perception, even if they remain surprisingly fragile. Shortly after, deep reinforcement learning was applied to Atari video games, and once DeepMind were acquired they announced that Google had tasked them with ‘solving autonomy’. But deep reinforcement learning’s successes for autonomy are mainly in so-called ‘perfect information’ games such as chess and Go. The real world, however, definitely does not present agents with perfect information. And while DeepMind have pushed their approach out to video games with partial information, the admittedly impressive results usually simplify the problem of perceiving the world, and can still be shown to be vulnerable to very simple tactics; in fact, even in Go, unorthodox play that would be punished even by a human amateur can win against one of the leading deep learning Go programs.
All of these results highlight how bad deep nets are at something biological systems solved long ago: generalising to novel scenarios. This is because of structural inadequacies in how they represent the world and draw inferences from it – fundamentally, they are generalised learning machines that work from a blank slate. But brains evolved to work with data generated by the real world, so they come with a sophisticated set of priors that make them work much more consistently and reliably. These are not just data priors, but ‘computation priors’, reflected in the complex architecture of real brains, which far exceeds that of deep nets. It is these ‘computation priors’ that, at the same time as allowing much greater robustness, also allow much greater compute and data efficiency. To a large extent brains do not need to be trained to function in the real world; they are designed to, rapidly learning only minimal information along the way. This is in stark contrast to deep nets, whose training requires tens of thousands of images and teraflops of compute, generates massive CO2 emissions, and delivers only linear gains in performance for exponentially increasing input costs.
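To make the idea of a ‘computation prior’ concrete, consider the elementary motion detectors of insect vision. The Hassenstein-Reichardt correlator extracts motion direction from two neighbouring photoreceptors using nothing but a delay, a multiplication, and a subtraction; no training and no data set. The sketch below is purely illustrative (it is not code from any Opteran product):

```python
import numpy as np

def reichardt(left, right, delay=3):
    """Hassenstein-Reichardt elementary motion detector.

    Each photoreceptor's signal is correlated with a delayed copy of
    its neighbour's; subtracting the two mirror-image correlations
    gives a signed motion signal. Nothing here is learned: the
    computation is fixed by the wiring, which is what is meant by a
    'computation prior'.
    """
    return np.roll(left, delay) * right - np.roll(right, delay) * left

# A grating drifting rightwards: the right receptor sees the same
# signal slightly later than the left one.
t = np.linspace(0, 4 * np.pi, 400)
left, right = np.sin(t), np.sin(t - 0.3)

print(np.mean(reichardt(left, right)))  # positive: rightward motion
print(np.mean(reichardt(right, left)))  # negative: leftward motion
```

The ‘knowledge’ here lives entirely in the architecture (two delay lines and a subtraction), and the detector works from the moment it is switched on.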
No brain works the way that deep (reinforcement) learning and photogrammetry do – if they did, brains would not have come to dominate the planet the way they have. Yet technologists broadly remain wedded to these approaches, despite their obvious limitations. That has been reasonable, because they represented the best solutions available. Until now, that is: Opteran is bringing its Natural Intelligence algorithms, based on a genuine understanding of how brains really solve the autonomy problem, to market. Our first product, visual navigation using algorithms reverse-engineered from insect brains, is already in the hands of robotics companies, and will be in consumer products in 2024. And we’re just getting started; there’s a wealth of algorithms we can mine from brains to develop novel, robust, efficient solutions to the autonomy problem. Watch this space.
James Marshall
Chief Science Officer and Co-Founder, Opteran Technologies