
Ant navigation: how complex behaviours emerge from mini-brains in interaction with their natural habitats

Periodic Reporting for period 3 - EMERG-ANT (Ant navigation: how complex behaviours emerge from mini-brains in interaction with their natural habitats)

Reporting period: 2021-01-01 to 2022-06-30

A fundamental endeavour of science is to understand how brains produce complex behaviours in the wild. Behaviour is the product not only of the brain, but also of the natural environment in which the animal has evolved. Thus, we have much to gain by seeking to understand brain processes in the light of natural behaviours. Insects provide a uniquely powerful system for investigating this question because: 1- they display exquisitely sophisticated behaviours, particularly when it comes to navigation; 2- they do so with a nervous system numerically much simpler than that of vertebrates; and 3- their behaviour can be studied directly in their natural environment, rather than in small artificial setups where animals are usually forced to perform specific, ecologically irrelevant behaviours. However, despite considerable advances in our understanding of both the natural behaviour (studied in the field) and the neurobiology of insects (studied in the lab), a huge gap still exists between these approaches, and we are still far from understanding how neural circuits actually underlie real, sophisticated behaviours. This problem is equally true for vertebrates, including humans, but it can nowadays be tackled with insects.

The idea of the project is to bring the field and the lab together by using a new experimental tool that enables full control of the sensory-motor experience of ants as they navigate in virtual-reality reconstructions of their natural environments (WP1). This tool allows us to manipulate the virtual world in any possible way, opening the door to a vast range of new experiments addressing questions that cannot possibly be tackled in the real world. With this tool, we seek to characterise 1- how insects encode the complex scenes of their natural world (WP3), 2- how they integrate multiple sources of information (WP3), 3- how they store and combine visuo-motor memories (WP4), and 4- what rules underlie their motor control (WP4).

Our experimental results are systematically interpreted in the light of insect brain circuits. To do so, all our hypotheses are implemented as neural models embedded in a simulated agent navigating in the same reconstructed virtual environment as the ants. Our agents are subjected to the same manipulations as the ants, and the resulting behaviour can be directly compared to the ant data. This modelling effort enables us to pinpoint the gaps in our understanding of the mechanisms, as well as to make specific predictions, and thus to drive our experimental questions. Together, experimentation and modelling enable us to actually understand how the insect brain's neural processes underlie navigational behaviour in the wild.
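To make this closed-loop pipeline concrete, here is a minimal Python sketch of the logic: a neural model is embedded in an agent, rendered views drive steering, and the resulting trajectory can be compared to the ant's. The names used (`world.render`, `model.step`, the comparison metric) are illustrative placeholders, not our platform's actual API.

```python
import numpy as np

def simulate_agent(model, world, start_pose, n_steps=1000, dt=0.05):
    """Run a neural model in closed loop inside a reconstructed world.

    `world.render(pose)` returns the panoramic view at a pose and
    `model.step(view, heading)` returns (turn_rate, speed); both are
    hypothetical interfaces used here for illustration only.
    """
    pose = np.array(start_pose, dtype=float)     # x, y, heading (radians)
    trajectory = [pose.copy()]
    for _ in range(n_steps):
        view = world.render(pose)                # same input the ant would get
        turn, speed = model.step(view, pose[2])
        pose[2] += turn * dt                     # update heading
        pose[0] += speed * dt * np.cos(pose[2])  # move forward
        pose[1] += speed * dt * np.sin(pose[2])
        trajectory.append(pose.copy())
    return np.array(trajectory)

def mean_path_deviation(agent_xy, ant_xy):
    """Crude agreement score: mean pointwise distance between two paths."""
    n = min(len(agent_xy), len(ant_xy))
    return float(np.mean(np.linalg.norm(agent_xy[:n] - ant_xy[:n], axis=1)))
```

Because the agent and the ant receive the same views and the same manipulations, any mismatch in their trajectories points directly at a gap in the hypothesised mechanism.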

The brains of insects may look very different in scale and shape from the brains of vertebrates, but the actual neural circuitry can be bafflingly similar. This suggests that similar computations are at play across animals' brains, and thus that understanding one can help us understand another. Studying navigation has a further advantage: going from A to B without getting lost is a task shared by most animals, including humans. This project may therefore help us identify universal neural rules that also underlie our own behaviours.
We first developed a handy system for mounting ants on top of an air-suspended ball, in other words, a treadmill for ants. After a few improvements, we were very pleased to see that the ants, whether in the field or in the lab, could be easily mounted and were remarkably at ease on the setup: they keep their motivation to navigate and display their usual motor behaviours, as if unperturbed. This is key, as it means we can now control the visual input of the ants and record their motor output in detail while they perform their ecological tasks! Field experiments using these trackball systems enabled us to ask novel questions, such as: can an experienced ant visually recognise its familiar route when its gaze is not aligned with the goal direction? The results showed that not only could the ants recognise their route no matter which direction they faced, but they also knew immediately which direction was correct. We further revealed that navigating insects actually use a two-step strategy: they use visual recognition not to drive their behaviour directly, but to update a neural representation of the goal direction, based on celestial cues, which they then use to guide their movements (see the sketch below). These results refute previous insect navigation models and pushed us to understand how the insects' neural circuits could underlie these feats.
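As an illustration of this two-step logic, here is a hedged Python sketch: visual familiarity is used only to update a goal-direction estimate anchored to celestial cues, and steering then reduces the mismatch between the current heading and that estimate. The `memory` callable and the gain values are hypothetical placeholders, not our fitted model.

```python
import numpy as np

def familiarity_gradient(memory, view, heading, probe=np.deg2rad(10)):
    """Compare route familiarity slightly left vs. right of the current
    heading. `memory(view, heading)` is a hypothetical familiarity score."""
    return memory(view, heading + probe) - memory(view, heading - probe)

def update_goal_direction(goal_dir, memory, view, heading, gain=0.5):
    """Step 1: visual recognition does not steer the body; it only nudges
    the goal-direction estimate, which is anchored to celestial cues."""
    return goal_dir + gain * familiarity_gradient(memory, view, heading)

def steering_command(goal_dir, heading):
    """Step 2: turn so as to reduce the angular mismatch between the
    current (celestially read) heading and the stored goal direction."""
    return np.arctan2(np.sin(goal_dir - heading), np.cos(goal_dir - heading))
```

The key property this captures is that the ant can face any direction and still point its goal estimate correctly, since recognition and steering are decoupled.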

Our subsequent modelling effort revealed how the insect brain's circuitry could naturally achieve this. What is more, when embodied in a simulated agent navigating reconstructed natural worlds, our novel neural model now achieves remarkably robust navigation! In parallel, our modelling also revealed how a very simple neural process in the insect's early visual system could strongly improve the recognition of complex scenes, which happens deeper in the brain. Further field experiments also provided the behavioural insight needed to understand how ants learn aversive memories so as to avoid regions associated with danger, and our neural models show how such aversive memories can be combined with appetitive memories during navigation (a sketch follows). Again, adding these principles to our navigating agents strongly improved their navigational efficiency!
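A minimal sketch of how such an opponent combination could work is given below: views resembling the rewarded route attract, views resembling danger zones repel. The `app_memory` and `ave_memory` scoring functions and the heading scan are illustrative assumptions, not our exact model.

```python
import numpy as np

def net_valence(app_memory, ave_memory, view, heading):
    """Opponent combination: appetitive familiarity attracts, aversive
    familiarity repels. Both scoring functions are hypothetical."""
    return app_memory(view, heading) - ave_memory(view, heading)

def preferred_heading(app_memory, ave_memory, view, n_dirs=36):
    """Scan candidate headings and pick the one with the best net valence,
    i.e. most like the rewarded route and least like known danger zones."""
    headings = np.linspace(-np.pi, np.pi, n_dirs, endpoint=False)
    scores = [net_valence(app_memory, ave_memory, view, h) for h in headings]
    return float(headings[int(np.argmax(scores))])
```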

Regarding virtual reality (VR), we used in Canberra a prototype of the LED virtual-reality system designed by our Australian collaborators for testing ants in the field. This prototype enabled us to perform pilot experiments with ants trained in their natural environment and tested in the virtual world. The results showed that ants trained in the wild could orient in the VR when presented with a reconstruction of their visual scene. In other words, the ants can recognise their familiar, real environment in the VR! This breakthrough is remarkably promising for our ability to answer fundamental questions with this method. Unfortunately, the VR worked only for one nocturnal species of ant, and not for our desired diurnal species. This is likely because the LED wavelengths used in this prototype do not match the diurnal ants' visual system very well. We therefore designed a second, improved version of the LED arena that should, we hope, also work with diurnal species. Construction of this second version is currently delayed by administrative considerations.

In parallel, we developed in Toulouse a second VR system using three video projectors. These project reconstructed natural worlds onto a cylindrical screen, in the middle of which the ant navigates on its trackball. The projected world must make geometric sense from the navigating ant's perspective. The geometrical transformations of the required images are complex, but we managed to build software that combines previously available freeware (freemoVR) with the video-game engine Unity, providing us with an intuitive software suite for designing and running VR experiments (the core geometry is sketched below). All the code necessary to build such a VR system will be released soon. This VR setup is intended for use with lab-reared ants, and the first tests conducted in Toulouse showed that ants can be trained to learn and find their nest within a complex virtual world. Unfortunately, the Covid-19 pandemic prevented us from collecting new ant colonies during the time window that is optimal for our species. We must therefore wait for the next field season to collect ants and run the first complete experiments with this system.
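For readers curious about the geometry, here is a simplified Python sketch of the central transformation: mapping a viewing direction, as seen by an ant at the centre of the cylinder, to a point on the screen. The dimensions are illustrative, and the real pipeline built on freemoVR and Unity presumably handles much more, including the full image warping for the three projectors.

```python
import numpy as np

def view_dir_to_screen(azimuth, elevation, radius=0.5, screen_height=0.6):
    """Map a viewing direction (radians) from an ant at the centre of a
    cylindrical screen to a point on that screen.

    Returns (arc, height) in metres on the unrolled screen, or None if
    the ray leaves through the top or bottom. Dimensions are illustrative.
    """
    arc = radius * azimuth               # horizontal position along the arc
    height = radius * np.tan(elevation)  # where the ray meets the wall
    if abs(height) > screen_height / 2:
        return None                      # ray misses the screen vertically
    return arc, height

# Example: a tree 30 degrees to the left and 10 degrees up in the virtual
# world must be drawn at this screen position to appear correct to the ant.
print(view_dir_to_screen(np.deg2rad(-30), np.deg2rad(10)))
```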
This first part of the project led us to a breakthrough in the way we understand – and implement – insect navigation mechanisms. We now understand how cross-talk between two brain regions – the Mushroom Bodies and the Central Complex, which had mostly been studied independently – enables ants to combine visual memories with representations of direction in a highly efficient manner. In addition, we understood how aversive and appetitive memories are combined during navigation. Our advance in understanding is further evidenced by our modelling efforts: agents equipped with these novel neural principles achieved remarkably robust and rapid navigation in complex visual worlds. The models also revealed something unexpected: the neural architecture produces an insect-like signature when using modalities other than vision, such as olfaction. This shows that we have stumbled on general principles that are efficient for navigation, and it provides insights into the evolution of insect navigation.

This breakthrough results from combining field experimentation with neural modelling, and notably from two new tools: on the one hand, an efficient way of mounting ants on a trackball directly in the field; on the other, a convenient Python platform for customising our neural models and running visual-navigation simulations at high throughput.
In parallel, we managed to have ants navigating in virtual reality (VR) within natural-like environments. Our preliminary results show that not only can 'wild' ants recognise in the VR their route learnt in the real world, but we can also train lab-reared ants to navigate over tens of metres within the virtual worlds of our VR system. This is the first time an insect has been shown to perform such an ecologically relevant task, which moreover involves complex visual learning, in a VR environment, that is, in an environment where we can control the visual scene at will.

These methods still require fine-tuning before we can run experiments at high throughput, but the desired technical breakthrough has been achieved.

For the remainder of the project, we should thus be able to run our experiments not only with the trackball in the field, but also with these novel VR methods, which offer far greater freedom in the manipulations we can perform. Our methods will be the same as in the first part of the project. Questions will build on our novel neural architecture (see above), with the intention of shedding light on remaining unknowns, such as: what additional types of visual processing enable ants to improve the robustness of their visual memories (to be tackled with the 'field ant' VR system in Canberra); or how motor learning is combined with visual learning for guidance (to be tackled with the 'lab ant' VR system in Toulouse). All experimental results should help us refine our models towards an increasingly complete and efficient neural architecture.
(Images: Ant VR set-up; Ant on trackball; Ant navigation model)