Final Report Summary - COLUMNARCODECRACKING (Cracking the columnar-level code in the visual hierarchy: Ultra high-field functional MRI, neuro-cognitive modelling and high-resolution brain-computer interfaces)
Besides technological and methodological advancements, our work has led to a deeper understanding of how the brain and mind work by zooming into the fine-grained functional organisation within specialised brain areas. For example, we have revealed for the first time columnar-level direction-of-motion and disparity maps in human visual cortical area hMT, which help explain how we perceive objects moving in specific directions and how we see the world around us in 3D. In collaborative studies, we have also demonstrated for the first time columnar maps of sound frequencies in the human auditory cortex, and shown that top-down expectations about the visual world can be read out from upper, but not lower, layers of primary visual cortex.
One of the most exciting results of the project shows that observed activity changes in columnar-level features can be used to predict the content of conscious experiences. In this study we used ambiguous motion stimuli that spontaneously switch between two clearly distinct perceptual states, reported as motion in either the horizontal or the vertical direction. We observed that the amplitude of direction-of-motion selective clusters in area hMT dynamically reflects the perceived direction of motion as indicated by button presses: when a subject perceived the ambiguous stimulus as moving in the horizontal direction, the identified "horizontal" columnar-like features showed increased activity, whereas the "vertical" columnar clusters showed increased activity when the participant perceived the stimulus as moving in the vertical direction.
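The decoding logic described above can be sketched in a few lines: at each time point, the predicted percept is simply whichever direction-selective cluster shows the higher response, which is then compared against the participant's button presses. All amplitude values and reports below are hypothetical illustrations, not project data; the actual analysis operates on high-field fMRI time courses.

```python
import numpy as np

def decode_percept(horiz_amp, vert_amp):
    """Label each time point by whichever direction-selective
    cluster shows the higher response amplitude."""
    return np.where(horiz_amp > vert_amp, "horizontal", "vertical")

# Hypothetical amplitude time courses for the two cluster types
horiz = np.array([1.2, 1.1, 0.4, 0.3, 1.0])
vert = np.array([0.5, 0.6, 1.3, 1.2, 0.4])

# Hypothetical perceptual reports (button presses) at the same time points
reports = np.array(["horizontal", "horizontal", "vertical",
                    "vertical", "horizontal"])

predicted = decode_percept(horiz, vert)
accuracy = np.mean(predicted == reports)
print(predicted.tolist(), accuracy)
```

In this toy case the cluster amplitudes track the reports perfectly; with real fMRI data, decoding accuracy is estimated across many perceptual switches and subjects.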
Another result may pave the way for new brain-computer interfaces (BCIs) that could in the future help selected patients whose motor impairments are so severe that they are unable to speak. Our 7 Tesla study demonstrated that imagined letter shapes can literally be read out as 2D images from self-generated brain activity in early visual cortex, also providing new insights into the operation of the "mind's eye". This work may lead to BCIs that enable locked-in patients to communicate with their relatives by simply imagining letters of the alphabet.
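The letter read-out can be illustrated with a toy inverse-retinotopy sketch: assuming each early-visual-cortex voxel maps to one location in the visual field, projecting voxel amplitudes back into visual-field coordinates recovers the imagined shape as a 2D image. The one-to-one voxel-to-pixel mapping, the 5x5 grid, and the noisy activity pattern below are all hypothetical simplifications of the actual retinotopic analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical retinotopic look-up: each voxel is assigned one pixel
# of a coarse 5x5 visual-field grid (idealised one-to-one mapping).
grid = 5
n_voxels = grid * grid
voxel_to_pixel = np.arange(n_voxels)

# Hypothetical "imagined letter" pattern: voxels whose visual-field
# positions lie on the letter L respond more strongly than the rest.
letter_L = np.zeros((grid, grid))
letter_L[:, 0] = 1.0   # vertical stroke
letter_L[-1, :] = 1.0  # horizontal stroke
activity = letter_L.flatten()[voxel_to_pixel] \
    + 0.1 * rng.standard_normal(n_voxels)

# Read-out: project voxel amplitudes back into visual-field coordinates.
image = np.zeros(n_voxels)
image[voxel_to_pixel] = activity
image = image.reshape(grid, grid)

# Threshold to visualise the reconstructed letter shape.
print((image > 0.5).astype(int))
```

The key design point is that no classifier is trained on letter categories: the spatial layout of early visual cortex itself carries the image, so the reconstruction is a direct back-projection.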