Periodic Reporting for period 4 - BrainConquest (Boosting Brain-Computer Communication with high Quality User Training)
Reporting period: 2022-01-01 to 2022-12-31
A BCI should be considered a co-adaptive communication system: its users learn to produce mental commands by performing mental tasks (e.g. imagining movements), which the machine learns to recognize by processing the brain signals measured from the user. Most research efforts so far have been dedicated to improving how the brain signals are processed. However, BCI control is also a skill that users have to learn. Unfortunately, how BCI users learn to produce reliable mental commands is essential but barely studied, i.e. fundamental knowledge about how users learn BCI control is lacking. Moreover, standard BCI user training approaches do not follow human learning principles or guidelines from educational psychology. Thus, poor BCI reliability is probably largely due to highly suboptimal user training.
In order to obtain a truly reliable BCI, we need to completely redefine user training approaches. To do so, this project proposes to study, understand and model how users learn to control BCIs. Then, based on human learning principles and such models, this project aims to create a new generation of BCIs that ensure users learn how to control them successfully, hence making BCIs dramatically more reliable. Such a reliable BCI could positively change human-machine interaction, as BCIs have promised but failed to do so far.
To refine our models, we also worked on estimating users’ mental states (e.g. their mental effort) during training. We thus designed new AI algorithms to estimate users’ cognitive, affective and motivational states from their brain (electroencephalography, EEG) and physiological (e.g. heart rate) signals. These algorithms could recognize low versus high mental effort, and negative versus positive emotions, from EEG with better reliability than existing methods. We also conducted experiments to induce various types of attention, e.g. sustained attention or split attention, which our algorithms could recognize from EEG signals. Finally, we also studied curiosity, a mental state that is key to learning. With a new experiment inducing users into various curiosity states (e.g. bored versus curious) and the AI algorithms above, we could discriminate low versus high curiosity from both EEG and physiological signals.
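As a minimal illustrative sketch (not the project’s actual algorithms, which are distributed with BioPyC, and using synthetic data and assumed parameters), the general idea of estimating a mental state such as workload from EEG can be implemented as band-power feature extraction followed by a linear classifier:

```python
# Illustrative sketch only: classifying low vs. high mental workload
# from EEG band-power features with a linear classifier.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 250                                          # assumed sampling rate (Hz)
n_trials, n_channels, n_samples = 80, 8, fs * 4   # 4 s epochs, synthetic data
X = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)                  # 0 = low, 1 = high workload (labels from the protocol)

def band_power(epoch, fs, band):
    """Average power of each channel in a frequency band, via Welch's method."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs)
    idx = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, idx].mean(axis=1)

bands = {"theta": (4, 8), "alpha": (8, 12), "beta": (13, 30)}
features = np.array([
    np.concatenate([band_power(epoch, fs, b) for b in bands.values()])
    for epoch in X
])  # shape: (n_trials, n_channels * n_bands)

clf = LinearDiscriminantAnalysis()
print("CV accuracy:", cross_val_score(clf, features, y, cv=5).mean())
```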
We also worked on optimizing BCI user training. We worked on user feedback, i.e. the information provided about what the BCI has recognized, so that users can learn better. We proposed new types of BCI feedback, including a multimodal feedback combining vibrotactile and realistic visual feedback, and a social feedback. For the latter, we designed the first artificial learning companion for BCI, which provides users with support or advice depending on their performance and learning; it can improve performance for users who prefer to work in groups. Finally, we explored biased feedback (making users believe they performed better or worse than they really did), and showed that it could improve BCI performance and learning if the bias is personalized to each user’s traits, states and skills. We thus proposed an algorithm to set this bias automatically. The core idea of biased feedback is sketched below.
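The snippet below is purely illustrative, with a hypothetical fixed per-user bias value, whereas the project’s algorithm selects and adapts this bias automatically from the user’s traits, states and skills:

```python
# Illustrative sketch only: applying a per-user bias to the classifier output
# before it is displayed as feedback. A positive bias makes the feedback look
# better than the raw BCI output, a negative bias makes it look worse.
import numpy as np

def biased_feedback(raw_confidence: float, bias: float) -> float:
    """Shift the classifier confidence (in [0, 1]) by a user-specific bias,
    then clip so the displayed feedback stays a valid confidence value."""
    return float(np.clip(raw_confidence + bias, 0.0, 1.0))

# Example: the classifier is 55% confident that the imagined movement was
# recognized, and this (hypothetical) user benefits from a small positive bias.
print(biased_feedback(0.55, bias=+0.10))   # displayed as 0.65
print(biased_feedback(0.55, bias=-0.10))   # displayed as 0.45
```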
We also designed new robust and adaptive AI tools to deal with users’ changing EEG signals. Such tools are robust to noise, can identify the EEG sensors providing the most stable signals, or update their parameters as new data become available. Our studies showed the gains they all offered, notably when we trained a tetraplegic user to control a BCI over 3 months: they enabled him to increase his BCI control performance dramatically.
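As a hedged sketch of what “adaptive” means here (not the project’s actual tools), a simple nearest-class-mean classifier can track slowly changing EEG features by updating its class prototypes after every trial; all data and parameters below are synthetic and assumed:

```python
# Illustrative sketch only: an adaptive nearest-class-mean classifier whose
# class prototypes are updated as new labelled EEG feature vectors arrive,
# so it can track slow changes in the user's signals.
import numpy as np

class AdaptivePrototypeClassifier:
    def __init__(self, n_classes: int, n_features: int, learning_rate: float = 0.05):
        self.prototypes = np.zeros((n_classes, n_features))
        self.counts = np.zeros(n_classes)
        self.lr = learning_rate

    def predict(self, x: np.ndarray) -> int:
        """Assign x to the class whose prototype is closest (Euclidean distance)."""
        return int(np.argmin(np.linalg.norm(self.prototypes - x, axis=1)))

    def update(self, x: np.ndarray, label: int) -> None:
        """Move the prototype of the observed class a small step towards x."""
        if self.counts[label] == 0:
            self.prototypes[label] = x          # initialize with the first example
        else:
            self.prototypes[label] += self.lr * (x - self.prototypes[label])
        self.counts[label] += 1

# Usage on synthetic, slowly drifting features (stand-in for real EEG features)
rng = np.random.default_rng(1)
clf = AdaptivePrototypeClassifier(n_classes=2, n_features=6)
correct = 0
for t in range(200):
    label = t % 2
    drift = 0.01 * t                            # simulated non-stationarity
    x = rng.normal(loc=label + drift, scale=0.5, size=6)
    correct += int(clf.predict(x) == label)
    clf.update(x, label)                        # adapt after each trial
print("online accuracy:", correct / 200)
```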
Overall, this work led to the first theory and principles of BCI user training, which can explain who can use such BCIs, what kind of learning is involved, and how to optimize this training.
This work was disseminated through scientific publications (20+ journal papers and 25+ conference papers) and talks (40+). Moreover, many of the designed AI tools and BCI feedback methods were shared as open source, as part of the OpenViBE and BioPyC software. EEG data collected during the project were also shared as open data.
At the modeling level, we proposed new computational tools to estimate, from EEG, user mental states related to learning, e.g. mental workload or emotional valence, and showed that, together with new experimental protocols, they can estimate mental states that could not be estimated from EEG before: attention types and curiosity. Finally, we proposed computational models that can predict users’ future BCI control performance from the factors above.
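To make the idea of performance prediction concrete, here is a minimal, purely illustrative sketch in which a linear model relates hypothetical user factors to BCI control accuracy on synthetic data; the project’s actual models and predictors are not reproduced here:

```python
# Illustrative sketch only: a linear model relating user factors measured
# before or during training to BCI control accuracy.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_users = 60
# Hypothetical predictors, e.g. a spatial-ability score, an anxiety score and a
# resting-state EEG marker; synthetic values stand in for real measurements.
factors = rng.standard_normal((n_users, 3))
accuracy = (0.7 + 0.05 * factors[:, 0] - 0.03 * factors[:, 1]
            + 0.02 * rng.standard_normal(n_users))

model = LinearRegression()
print("cross-validated R^2:", cross_val_score(model, factors, accuracy, cv=5).mean())
```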
At the level of optimizing BCI user training, we proposed new methods at both the machine and the user levels. At the machine level, we proposed new algorithms to classify EEG in a robust way, by making them robust to outliers and to the variability of EEG signals. At the user level, we proposed new types of feedback, including multimodal vibrotactile and visual feedback, social feedback with an artificial learning companion, and biased feedback personalized to each user. All these new methods could improve BCI performance and/or learning, and could enable a tetraplegic BCI user, initially unable to use the BCI, to reach high BCI control accuracy.