Periodic Reporting for period 3 - EMERG-ANT (Ant navigation: how complex behaviours emerge from mini-brains in interaction with their natural habitats)
Reporting period: 2021-01-01 to 2022-06-30
The idea of the project is to bring the field and the lab together by using a new experimental tool enabling full control of the sensory-motor experience of ants as they navigate in virtual-reality reconstructions of their natural environments (WP1). This tool enables us to manipulate the virtual world in any possible way, opening the door to a vast range of new experiments addressing questions that cannot possibly be tackled in the real world. With this tool, we seek to characterise (1) how insects encode the complex scenes of their natural world (WP3), (2) how they integrate multiple sources of information (WP3), (3) how they store and combine visuo-motor memories (WP4), and (4) what rules underlie their motor control (WP4).
Our experimental results are systematically interpreted in the light of insect brain circuits. To do so, all our hypotheses are implemented as neural models embedded in a simulated agent navigating the same reconstructed virtual environment as the ants. Our agents are subjected to the same manipulations as the ants, and the resulting behaviour can be directly compared to the ant data. This modelling effort enables us to pinpoint gaps in our understanding of the mechanisms and to make specific predictions, and thus drives our experimental questions. Together, experimentation and modelling enable us to understand how neural processes in the insect brain underlie navigational behaviour in the wild.
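The closed loop described above — a memory model driving steering inside a simulated world, so that model and ant face the same sensory-motor situation — can be sketched minimally as follows. All names, the toy familiarity measure, and the toy world renderer are illustrative assumptions, not the project's actual model:

```python
import math

def familiarity(view, memory_bank):
    """Toy familiarity: negative mean squared difference to the most
    similar stored view (0 = perfect match). A hypothetical stand-in
    for a memory-circuit model."""
    return max(-sum((v - m) ** 2 for v, m in zip(view, mem)) / len(view)
               for mem in memory_bank)

def render_view(world, x, y, heading, n_pixels=8):
    """Placeholder renderer: sample a (hypothetical) world function
    along n_pixels azimuthal directions around the agent's heading."""
    return [world(x, y, heading + 2 * math.pi * i / n_pixels)
            for i in range(n_pixels)]

def step_agent(world, memory_bank, x, y, heading, speed=0.05):
    """One closed-loop step: scan candidate headings, turn towards the
    most familiar view, then move forward -- the same sensory-motor loop
    the real ant experiences."""
    candidates = [heading + d for d in (-0.3, 0.0, 0.3)]
    heading = max(candidates,
                  key=lambda h: familiarity(render_view(world, x, y, h),
                                            memory_bank))
    return (x + speed * math.cos(heading),
            y + speed * math.sin(heading),
            heading)
```

Because the agent only ever sees what the renderer gives it, any manipulation applied to the virtual world applies identically to model and ant, which is what makes the behavioural comparison direct.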
The brains of insects may look very different in scale and shape from the brains of vertebrates, but the underlying neural circuitry can be bafflingly similar. This suggests that similar computations are at play across animals’ brains, and thus that understanding one can help us understand another. Studying navigation has another advantage: going from A to B without getting lost is a task shared by most animals, including humans. This project may therefore help us identify universal neural rules that also underlie our own behaviours.
Our subsequent modelling effort revealed how the insect brain’s circuitry could naturally achieve this. What is more, when embodied in a simulated agent navigating reconstructed natural worlds, our novel neural model now achieves remarkably robust navigation. In parallel, our modelling effort also revealed how very simple neural processes in the insect’s early visual system can strongly improve the recognition of these complex scenes, which happens deeper in the brain. Further experiments in the field also provided the behavioural insight necessary to understand how ants learn aversive memories so as to avoid regions associated with danger, and our neural models show how such aversive memories can be combined with appetitive memories during navigation. Again, adding these principles to our navigating agents strongly improved their navigational efficiency.
Regarding virtual reality (VR), in Canberra we used a prototype of the LED virtual-reality system designed by our Australian collaborators for testing ants in the field. This prototype enabled us to perform pilot experiments with ants trained in their natural environment and tested in the virtual world. The results showed that ants trained in the wild could orient in the VR when presented with a reconstruction of their visual scene. This shows that the ants can recognise their familiar, real environment in the VR! This breakthrough is remarkably promising for our ability to answer some fundamental questions with this method. Unfortunately, the VR worked only for one nocturnal species of ant, and not for our desired diurnal species, most likely because the LED wavelengths used in this prototype do not fit the diurnal ant’s visual system well. We have therefore designed a second, improved version of the LED arena that should, we hope, work with diurnal species too. Construction of this second version is currently delayed by administrative considerations.
In parallel, in Toulouse we developed a second VR system using three video projectors. These project reconstructed natural worlds onto a cylindrical screen, at the centre of which the ant navigates on its trackball. The projected world must appear geometrically correct from the navigating ant’s perspective, and the required image transformations are complex; we nonetheless built software that combines previously available freeware (freemoVR) with the video-game engine Unity, giving us an intuitive software suite to design and run VR experiments. All the code necessary to build such a VR system will be released soon. This VR setup is intended for lab-reared ants, and the first tests conducted in Toulouse showed that ants can be trained to learn and find their nest within a complex virtual world. Unfortunately, the Covid-19 pandemic prevented us from collecting new ant colonies during the time window optimal for our species. We must therefore wait for the next field season to collect ants and run the first complete experiments with this system.
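The core of the geometrical transformation mentioned above can be illustrated with a minimal sketch. Assuming the ant sits at the centre of the cylinder, a virtual point's azimuth maps to arc length around the screen and its elevation to height on the cylinder wall; the radius, screen height, and resolution below are hypothetical values, not the actual setup's parameters:

```python
import math

SCREEN_RADIUS = 0.5       # metres -- hypothetical cylinder radius
SCREEN_HEIGHT = 0.6       # metres -- hypothetical visible screen height
RES_H, RES_V = 3072, 768  # hypothetical combined resolution of 3 projectors

def world_to_screen(px, py, pz, ant_x, ant_y, ant_heading):
    """Map a virtual world point to (column, row) pixel coordinates on a
    cylindrical screen, as seen from the ant at the cylinder's centre."""
    dx, dy = px - ant_x, py - ant_y
    azimuth = math.atan2(dy, dx) - ant_heading               # direction relative to ant
    azimuth = (azimuth + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    dist = math.hypot(dx, dy)
    elevation = math.atan2(pz, dist)                         # height angle from the ant's eye
    # Intersect the viewing ray with the cylinder wall:
    z_on_screen = SCREEN_RADIUS * math.tan(elevation)
    col = (azimuth + math.pi) / (2 * math.pi) * RES_H
    row = (0.5 - z_on_screen / SCREEN_HEIGHT) * RES_V
    return col, row
```

In practice a further per-projector correction (lens distortion, overlap blending between the three projectors) is layered on top, which is where tools like freemoVR come in.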
This breakthrough results from a combination of field experimentation and neural models, and notably from the use of two new tools: on the one hand, an efficient way of mounting ants on a trackball directly in the field; on the other, a convenient Python platform to customise our neural models and run visual-navigation simulations at high throughput.
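The trackball readout reduces to a standard dead-reckoning computation: ball rotations are converted to the ant's fictive displacement. The sketch below assumes each sample gives forward, sideways, and yaw rotations in radians since the last reading; the sampling scheme and names are illustrative, not the actual acquisition code:

```python
import math

def integrate_trackball(rotations, ball_radius=0.025):
    """Reconstruct the ant's fictive 2D path from trackball readings.
    Each reading is (forward, sideways, yaw) rotation in radians since
    the last sample; forward/sideways arc lengths are expressed in the
    ant's current heading frame."""
    x, y, heading = 0.0, 0.0, 0.0
    path = [(x, y)]
    for fwd, side, yaw in rotations:
        heading += yaw
        dx = ball_radius * fwd   # arc length along the heading
        dy = ball_radius * side  # arc length perpendicular to it
        x += dx * math.cos(heading) - dy * math.sin(heading)
        y += dx * math.sin(heading) + dy * math.cos(heading)
        path.append((x, y))
    return path
```

The same reconstructed path can then be fed to the simulated agents, so that ant and model trajectories are compared in identical coordinates.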
In parallel, we managed to have ants navigate in virtual reality (VR) within natural-like environments. Our preliminary results show that not only can ‘wild’ ants recognise in the VR the route they learnt in the real world, but ants reared in the lab can also be trained to navigate over tens of metres within virtual worlds in our VR system. This is the first time an insect has been shown to perform such an ecologically relevant task, one which moreover involves complex visual learning in a VR environment, that is, in an environment where we can control the visual scene at will.
These methods still require fine-tuning before we can run experiments at high throughput, but the desired technical breakthrough has been achieved.
For the remainder of the project, we should thus be able to run our experiments not only using the trackball in the field but also using these novel VR methods, which allow a far wider range of manipulations. Our methods will be the same as in the first part of the project. Questions will build on our novel neural architecture (see above), with the aim of shedding light on remaining open areas, such as: what additional types of visual processing enable ants to improve the robustness of their visual memories (to be tackled with the ‘field ant’ VR system in Canberra); and how motor learning is combined with visual learning for guidance (to be tackled with the ‘lab ant’ VR system in Toulouse). Any experimental results should help us improve our models towards an increasingly complete and efficient neural architecture.