CORDIS - EU research results

The role of depth perception during prey capture in the mouse

Periodic Reporting for period 1 - MouseDepthPrey (The role of depth perception during prey capture in the mouse)

Reporting period: 2018-03-01 to 2020-02-29

The mechanisms that generate depth perception at the neural level are largely unknown. Advances have been made in many model organisms, from the praying mantis to humans, but a large body of knowledge is still missing. In particular, it remains unknown how the many sources of depth information present in our visual world lead to a unified percept of depth, where in the brain this happens, and what time constants and stages are involved. In this Action, our aim was to shed light on these unknowns by using the mouse as a model organism and prey capture as a behavior that engages this capacity of the brain.

Depth perception is a salient and relevant part of our vision: it turns the 2-dimensional images reaching our eyes into a 3-dimensional percept, which is essential for navigating a 3-dimensional world. Therefore, when neurological disorders prevent the perception of 3 dimensions, or when attempting to recreate 3-dimensionality artificially in displays, for remote work for example, it is essential to understand how depth perception works and how to provide the brain with enough cues to generate it. As home-office and remote-working platforms become more widespread, this will become an essential problem to solve.

The study of depth perception requires a model system. We chose the house mouse, as it offers the best compromise between being a visual organism and having a large array of methodologies available to study it at the behavioral and neural level. To extract depth-perception information from the animal, we decided to use an innate behavior that requires the mouse to see in depth, as this circumvents training paradigms that would further complicate the experimental design. Our behavior of choice is prey capture: mice have been shown to readily hunt crickets even without training, relying mostly on vision to do so. Given that the animal needs to estimate prey position in real time, we hypothesized that depth perception would be essential for effective prey capture.

Therefore, the overall objectives of this Action are, first, to develop an assay that allows mice to perform prey capture in a virtual setting. Once this is established, we will use the assay to perturb the different sources of depth information during prey capture and measure their effects. Finally, we will use this knowledge to inform the interpretation of the neural correlates of these sources, and thereby identify where, when and how the depth percept is generated in the mouse brain.
The work performed within the project period can be subdivided into two main components:

1) Development of a freely behaving virtual reality assay to present arbitrary visual realities to the animal

Based on published material, we built an instrument capable of recording the position and orientation of a mouse in a 100x50 cm arena as it moves through it. This information is used in real time to generate a virtual environment around the animal, which can contain arbitrary elements and landscapes. The virtual environment was built using the game engine Unity, which lets us benefit from all the progress made in game development, including, for example, artificial intelligence for prey evasion. The system performs a full loop from motion to rendering in approximately 15 ms, with tracking resolution under 200 µm at 360 Hz. Additionally, the setup records video of the experiment, which, in combination with tracking software, can reliably report the position of prey in the arena.
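The closed loop described above, from tracked pose to rendered scene, can be sketched as follows. This is a minimal illustration, not the actual Unity implementation; the function names, the camera representation, and the clamping to arena bounds are assumptions for the sketch, with only the 100x50 cm arena size and the ~15 ms latency budget taken from the text.

```python
import math
import time

def update_virtual_camera(pose):
    """Map a tracked mouse pose (x_cm, y_cm, heading_deg) to a
    virtual-camera transform (hypothetical representation)."""
    x, y, heading = pose
    # Clamp to the physical arena bounds (100 x 50 cm).
    x = min(max(x, 0.0), 100.0)
    y = min(max(y, 0.0), 50.0)
    return {
        "position": (x, y),
        "forward": (math.cos(math.radians(heading)),
                    math.sin(math.radians(heading))),
    }

def run_loop_once(read_pose, render, budget_ms=15.0):
    """One tracking-to-rendering iteration; report whether it met
    the latency budget."""
    t0 = time.perf_counter()
    camera = update_virtual_camera(read_pose())
    render(camera)
    elapsed_ms = (time.perf_counter() - t0) * 1000.0
    return camera, elapsed_ms, elapsed_ms <= budget_ms
```

In the real system the `read_pose` step is the 360 Hz tracker and `render` is Unity's frame update; the sketch only makes the data flow of the loop explicit.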

We have recorded around 300 prey capture sequences with real crickets, which will allow us to describe the behavior in unprecedented detail. As for the virtual reality component, we have used the data generated with real prey to inform the development of a virtual cricket that behaves like a real one and evades the mouse.
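The kind of evasion rule such a virtual cricket can follow is sketched below. This is illustrative only, not the project's Unity AI: the 10 cm threat radius and the two speeds are hypothetical placeholders of the sort that would be fitted to the recorded real-cricket data.

```python
import math

THREAT_RADIUS_CM = 10.0   # mouse distance that triggers escape (assumed)
ESCAPE_SPEED_CM_S = 40.0  # burst speed away from the mouse (assumed)
WALK_SPEED_CM_S = 5.0     # baseline walking speed (assumed)

def cricket_velocity(cricket_xy, mouse_xy):
    """Return the virtual cricket's (vx, vy) in cm/s given both positions."""
    dx = cricket_xy[0] - mouse_xy[0]
    dy = cricket_xy[1] - mouse_xy[1]
    dist = math.hypot(dx, dy)
    if 0.0 < dist < THREAT_RADIUS_CM:
        # Escape: run directly away from the approaching mouse.
        return (ESCAPE_SPEED_CM_S * dx / dist,
                ESCAPE_SPEED_CM_S * dy / dist)
    # Otherwise: slow baseline walk (direction fixed here for simplicity).
    return (WALK_SPEED_CM_S, 0.0)
```

A fitted version would replace the fixed thresholds with parameters estimated from the ~300 recorded capture sequences.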

2) Measurement of the neural correlates of prey capture in the mouse visual cortex
In parallel to the aforementioned measurements, we built a smaller arena to be able to perform neural recordings during prey capture. Although the arena is modelled after published work, there are currently no published results of cortical activity during prey capture. The only existing information pertains to subcortical structures, some related to sensory perception, such as superior colliculus, and some more downstream related to action selection and execution, such as the amygdala and the periaqueductal gray. For the execution of this Action, it is essential that we understand the neural correlates of prey capture in our target region.

We have recorded approximately 200 trials of prey capture paired with neural recordings of calcium activity in the primary visual cortex of the mouse. As expected, these cells increase their activity when the cricket is in the field of view of our region of study; additionally, we see a large variety of responses, some with putative distance selectivity, which is directly relevant to the objectives of this project. We are still analyzing these responses, but they will prove essential in designing the next round of virtual experiments mentioned in section 1.
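One simple way to screen cells for putative distance selectivity is sketched below. This is an assumption about the analysis, not the project's finalized pipeline: the bin edges and the "preferred bin" criterion are illustrative. Per frame, we take the mouse-cricket distance and a cell's calcium signal (dF/F), average the signal within distance bins, and call the best bin the cell's preferred distance.

```python
def distance_tuning(distances_cm, dff, bin_edges=(0, 5, 10, 20, 40)):
    """Return mean dF/F per distance bin and the preferred bin's
    (low, high) range in cm. Bin edges are hypothetical."""
    sums = [0.0] * (len(bin_edges) - 1)
    counts = [0] * (len(bin_edges) - 1)
    for d, f in zip(distances_cm, dff):
        for i in range(len(bin_edges) - 1):
            if bin_edges[i] <= d < bin_edges[i + 1]:
                sums[i] += f
                counts[i] += 1
                break
    # Mean response per bin; empty bins default to 0.
    means = [s / c if c else 0.0 for s, c in zip(sums, counts)]
    best = max(range(len(means)), key=lambda i: means[i])
    return means, (bin_edges[best], bin_edges[best + 1])
```

A statistically careful version would additionally control for confounds such as running speed and cricket visibility before attributing the tuning to distance.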
There are two main points that set this project beyond the state of the art. The first is the implementation of the freely behaving VR setup on a well-known and widely used game engine; benefitting from such consumer technologies has proven to be a powerful approach in science. The rapid development of these technologies enables fast scientific progress and saves the scientist time and effort. In our case, the current assay has significant flexibility, something not easily attainable by a single researcher. Additionally, our prey escape algorithm benefits greatly from the gaming AIs developed for that purpose.

The other large step beyond current state of the art is the measurement of single cell brain activity during prey capture. These measurements have not been published in the visual cortex, and the changes we observe in neural activity during behavior should be informative in understanding not only how the brain operates during prey capture, but also how cortex works under naturalistic behaviors.

We expect to combine the freely behaving VR and the neural imaging during prey capture to deliver a holistic observation of the concomitant brain activity, including during perturbations of the sensory input. We are currently implementing wireless brain imaging for this methodology, which will allow us to deliver the perturbations outlined in the project description. Moreover, we plan to evaluate other visual aspects of prey capture, such as prey selection and prediction of prey trajectory, both of which benefit from the VR capabilities of our assay and cannot be studied in other preparations.

These developments should pave the way not only for wider usage of freely behaving VR assays in the neurosciences, but also for advancing the quality and accessibility of VR technologies for use by humans, whether in the gaming industry or in the professional environment, where VR-based visualization and interaction technologies are making a strong entry into the market.
Figure: assay allowing perturbations of depth cues during the performance of prey capture behaviour.