
Boosting Brain-Computer Communication with high Quality User Training

Periodic Reporting for period 4 - BrainConquest (Boosting Brain-Computer Communication with high Quality User Training)

Reporting period: 2022-01-01 to 2022-12-31

Brain-Computer Interfaces (BCIs) are communication systems that enable users to send commands to computers through brain signals alone: the system measures the user's brain activity and processes it into commands. By making computer control possible without any physical activity, BCIs have promised to revolutionize many application areas, notably assistive technologies, e.g. wheelchair control, and man-machine interaction. For instance, using a BCI, a tetraplegic user can move a cursor on a computer screen to the left or right simply by imagining left- or right-hand movements, respectively. Despite this promising potential, BCIs are still barely used outside laboratories, owing to their poor reliability. For instance, BCIs using only two imagined hand movements as mental commands recognize, on average, less than 80% of these commands correctly, while 10 to 30% of users cannot control a BCI at all.
A BCI should be considered a co-adaptive communication system: users learn to produce mental commands through mental tasks (e.g. imagining movements), which the machine learns to recognize by processing the brain signals it measures from the user. Most research efforts so far have been dedicated to improving how these brain signals are processed. However, BCI control is also a skill that users must learn. Unfortunately, how BCI users learn to produce reliable mental commands, essential as it is, remains barely studied: fundamental knowledge about how users learn BCI control is lacking. Moreover, standard BCI user training approaches follow neither human learning principles nor guidelines from educational psychology. Poor BCI reliability is thus probably due in large part to highly suboptimal user training.
To obtain a truly reliable BCI, user training approaches need to be completely redefined. To do so, this project proposes to study, understand and model how users learn to control BCIs. Based on human learning principles and such models, the project then aims to create a new generation of BCIs that ensure users learn to control them successfully, hence making BCIs dramatically more reliable. Such a reliable BCI could transform man-machine interaction in the way BCIs have promised, but so far failed, to do.
We first worked on identifying factors (e.g. users’ personality, cognitive abilities or neurophysiological patterns) related to BCI user performance and learning. We proposed new ways to measure users’ BCI skills independently of how good the machine is. We identified different types of BCI user learning, associated with different changes in users’ brain activity. We also showed that BCI experimenters, who train BCI users, influence how users learn and perform. Finally, using Artificial Intelligence (AI) techniques, we revealed how some of users’ personality traits, notably how anxious they are, or their brain activity patterns at rest can predict how well they will control a BCI. We identified new such patterns and proposed AI models that use them to accurately predict users’ future BCI control performance.
To refine our models, we also worked on estimating users’ mental states (e.g. their mental effort) during training. We designed new AI algorithms to estimate users’ cognitive, affective and motivational states from their brain (electroencephalography, EEG) and physiological (e.g. heart rate) signals. These algorithms could recognize low versus high mental effort, and negative versus positive emotions, from EEG more reliably than existing methods. We also conducted experiments inducing various types of attention, e.g. sustained attention or split attention, which our algorithms could recognize from EEG signals. Finally, we studied curiosity, a mental state that is key to learning. With a new experiment inducing various curiosity states in users (e.g. bored versus curious) and the AI algorithms above, we could discriminate low from high curiosity using both EEG and physiological signals.
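The summary does not detail these estimation algorithms themselves. As a rough, hedged illustration only: EEG-based mental-state estimators are commonly built on band-power features (e.g. theta and alpha power, classically linked to mental workload), which a classifier is then trained on. The sketch below computes such features; all function names and parameter choices are ours, not the project's.

```python
import numpy as np

def band_power(signal, fs, band):
    """Average power of `signal` within the frequency `band` (Hz), via FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

def workload_features(eeg, fs=250):
    """Per-channel theta (4-8 Hz) and alpha (8-12 Hz) band powers,
    a classical feature set for mental-workload estimation.
    `eeg` is a (channels, samples) array; fs is the sampling rate in Hz.
    (Illustrative sketch: not the project's actual feature set.)"""
    feats = []
    for channel in eeg:
        feats.append(band_power(channel, fs, (4.0, 8.0)))
        feats.append(band_power(channel, fs, (8.0, 12.0)))
    return np.array(feats)
```

In a full pipeline, such feature vectors would feed a classifier trained on examples recorded under known low- and high-workload conditions.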
We also worked on optimizing BCI user training. We worked on user feedback, i.e. the information given to users about what the BCI has recognized, so that they can learn better. We proposed new types of BCI feedback, including a combined vibrotactile and realistic visual feedback, and a social feedback. For the latter, we designed the first artificial learning companion for BCI, which provides users with support or advice depending on their performance and learning; it can improve performance for users who prefer working in groups. Finally, we explored biased feedback (making users believe they performed better or worse than they really did), and showed that it can improve BCI performance and learning if the bias is personalized to each user’s traits, states and skills. We proposed an algorithm to perform this personalization automatically.
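The personalization algorithm itself is not described in this summary, but the core idea of biased feedback can be sketched in a few lines. We assume, for illustration only, that performance scores live in [0, 1] and that the per-user bias is a simple additive offset; both assumptions and the function name are ours.

```python
def biased_feedback(true_score: float, user_bias: float) -> float:
    """Return the performance score to display to the user.

    true_score: actual classifier output in [0, 1].
    user_bias:  illustrative per-user offset; positive values make the
                task feel easier, negative ones harder. In the project,
                such a bias was personalized to each user's traits,
                states and skills (method not shown here).
    """
    # Clip so the displayed score stays a valid value in [0, 1].
    return min(1.0, max(0.0, true_score + user_bias))
```

For example, a slightly positive bias can keep a struggling user motivated without fully decoupling the displayed feedback from their real performance.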
We also designed new robust and adaptive AI tools to deal with users’ changing EEG signals. These tools are robust to noise, can identify the EEG sensors providing the most stable signals, and can update their parameters as new data become available. Our studies showed the gains each of them offered, notably when we trained a tetraplegic user to control a BCI over three months: they enabled him to increase his BCI control performance dramatically.
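The summary does not specify these adaptive algorithms. One standard building block of adaptive BCI pipelines in general, shown here purely as an illustrative sketch, is online renormalization of EEG features using a running mean and variance (Welford's algorithm), so that the classifier's input distribution stays calibrated as signals drift across sessions. Class and variable names are ours, not the project's.

```python
import numpy as np

class AdaptiveNormalizer:
    """Online feature normalizer: incrementally updates mean/variance
    estimates (Welford's algorithm) as new EEG feature vectors arrive,
    instead of freezing them after an initial calibration session."""

    def __init__(self, n_features: int):
        self.n = 0
        self.mean = np.zeros(n_features)
        self.m2 = np.zeros(n_features)  # sum of squared deviations

    def update(self, x: np.ndarray) -> None:
        """Fold one new feature vector into the running statistics."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def transform(self, x: np.ndarray) -> np.ndarray:
        """Z-score `x` with the current (adapted) statistics."""
        std = np.sqrt(self.m2 / max(self.n - 1, 1)) + 1e-12
        return (x - self.mean) / std
```

The design choice here is incrementality: each `update` costs O(features), so the normalizer can track slow signal drift in real time without storing past data.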
Overall, this work led to the first theory and principles of BCI user training, able to explain who can use such BCIs, what kind of learning is involved, and how to optimize this training.
This work was disseminated through scientific publications (20+ journal and 25+ conference papers) and talks (40+). Moreover, many of the designed AI tools and BCI feedback types were shared open source, as part of the OpenViBE and BioPyC software. EEG data collected during the project were also shared as open data.
Our work enabled progress beyond the state of the art, at several levels, in understanding, modeling and optimizing BCI user training. At the level of understanding, we identified new factors related to BCI performance and learning, including the experimenters themselves, users’ brain activity patterns at rest, and the characteristics of the trained machine-learning algorithms used. We also identified different types of BCI user learning.
At the modeling level, we proposed new computational tools to estimate, from EEG, user mental states related to learning, e.g. mental workload or emotional valence, and showed that, together with new protocols, they can estimate mental states that could not be estimated from EEG before: attention types and curiosity. Finally, we proposed computational models that can predict users’ future BCI control performance from the factors above.
At the level of optimizing BCI user training, we proposed new methods at both the machine and user levels. At the machine level, we proposed new algorithms that classify EEG robustly, making them resilient to outliers and to the variability of EEG signals. At the user level, we proposed new feedback types, including multimodal vibrotactile and visual feedback, social feedback from an artificial learning companion, and biased feedback personalized to each user. All these methods could improve BCI performance and/or learning, and enabled a tetraplegic BCI user, initially unable to use the BCI, to reach high BCI control accuracy.
Our BCI with multimodal (vibrotactile and visual) feedback, for post-stroke rehabilitation
Our open source BCI software OpenViBE used during real-time BCI experiments
Our artificial learning companion, PEANUT, for providing social feedback during BCI user training