Periodic Reporting for period 3 - EAR (Audio-based Mobile Health Diagnostics)
Reporting period: 2022-10-01 to 2024-03-31
Improving care, scaling it to populations that cannot afford today's expensive standards, and diagnosing disease early matter to society for reasons that can be summarised as "better population health": scalable and early diagnosis are key to this.
The overall objectives of the project have been to:
-collect audio data for health diagnostics through mobile devices, so that models can be trained accurately.
-develop machine learning models for this type of data, including uncertainty estimation, which improves interaction with clinical practice by avoiding reliance on accuracy alone.
-advance on-device machine learning and audio sensing, making it possible to keep data close to the individual and to maintain the required privacy standards.
-integrate multimodal machine learning, combining audio with other sensing modalities.
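The uncertainty-estimation objective can be illustrated with a minimal sketch. This is a toy example using Monte Carlo dropout over a hypothetical linear audio classifier with synthetic features, not the project's actual models or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "trained" linear classifier over 8 audio features, 2 classes.
W = rng.normal(size=(8, 2))
x = rng.normal(size=8)  # one illustrative audio feature vector

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mc_dropout_predict(x, W, n_samples=200, p_drop=0.3):
    """Monte Carlo dropout: average predictions over random feature masks."""
    probs = []
    for _ in range(n_samples):
        mask = rng.random(x.shape) > p_drop         # drop features at random
        probs.append(softmax((x * mask / (1 - p_drop)) @ W))
    probs = np.array(probs)
    mean = probs.mean(axis=0)                       # predictive distribution
    entropy = -np.sum(mean * np.log(mean + 1e-12))  # uncertainty measure
    return mean, entropy

mean, entropy = mc_dropout_predict(x, W)
print("predicted class:", mean.argmax(), "entropy:", round(float(entropy), 3))
```

In a clinical workflow, a high-entropy prediction could be flagged for clinician review instead of being reported as an automatic label, which is the kind of interaction the objective refers to.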
In this reporting period the project produced research both on analytics methods for audio-based diagnosis of respiratory and cardiac pathologies and on the use of audio sensing in ear-worn and other body-worn devices to monitor behaviour and activities.
In terms of WP1, we have continued the COVID-19 Sounds data collection and published a paper describing the dataset (at NeurIPS 2021). As part of this, we have continued to share the data with academic institutions requesting it: we have shared it more than 500 times. We have also collected data from studies in which participants wore a digestive sound collection belt, and from studies in which participants wore earables with in-ear microphones, for the analysis of various vital signs as well as activities. We have also started to work with the digestive sound belt on pregnant mothers to assess foetal health.
In terms of WP2, we have explored the analysis of the COVID-19 Sounds data further and published several works: notably, we have highlighted the realistic performance achievable on such data and have started exploring longitudinal disease progression as well as uncertainty estimation performance. We also compared model performance with clinicians' diagnostic abilities. In addition, we have analysed digestive sound data for stress detection.
In WP3, we have deepened our knowledge of the use of in-ear audio for activity recognition, gesture recognition and user identification. We have worked on the detection of physiological signals such as heart and respiration signals, as well as gait. We have worked further on on-device machine learning, especially continual learning, on-device training and on-device uncertainty estimation.
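One way heart rate can be recovered from in-ear body sounds is by autocorrelating the sound envelope and reading off the beat period. The sketch below uses an entirely synthetic pulse train and illustrative parameters, not the project's actual signal-processing pipeline:

```python
import numpy as np

fs = 500          # Hz, assumed sampling rate of the audio-derived envelope
duration = 10     # seconds
true_bpm = 72     # simulated heart rate

# Synthetic heart-sound envelope: one sharp pulse per beat, plus noise.
t = np.arange(0, duration, 1 / fs)
beat_period = 60 / true_bpm
phase = (t % beat_period) / beat_period
signal = np.exp(-((phase - 0.1) ** 2) / 0.001)
signal += 0.05 * np.random.default_rng(1).normal(size=t.size)

# Autocorrelation: the dominant non-zero-lag peak gives the beat period.
sig = signal - signal.mean()
ac = np.correlate(sig, sig, mode="full")[sig.size - 1:]

# Search only physiologically plausible lags (40-180 bpm).
lo, hi = int(fs * 60 / 180), int(fs * 60 / 40)
lag = lo + np.argmax(ac[lo:hi])
bpm = 60 * fs / lag
print(f"estimated heart rate: {bpm:.1f} bpm")
```

Real in-ear recordings would additionally need band-pass filtering and motion-artefact handling, which this sketch omits.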
In WP4 we have advanced work on sensor fusion, augmenting the knowledge acquired with audio to improve on-device performance. In terms of hearables, we have combined accelerometers with in-ear and outer-ear microphones.
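A common form of such multi-sensor fusion is late fusion, where each modality's classifier produces class probabilities that are then averaged. The sketch below uses hypothetical scores and class names, and is not the project's actual fusion architecture:

```python
import numpy as np

# Hypothetical per-modality class probabilities for one time window,
# over three illustrative activity classes (chewing, speaking, walking).
p_mic_in  = np.array([0.70, 0.20, 0.10])  # in-ear microphone model
p_mic_out = np.array([0.50, 0.30, 0.20])  # outer-ear microphone model
p_accel   = np.array([0.30, 0.10, 0.60])  # accelerometer model

def late_fusion(prob_list, weights=None):
    """Weighted average of per-modality probability vectors."""
    probs = np.stack(prob_list)
    if weights is None:
        weights = np.ones(len(prob_list)) / len(prob_list)
    fused = weights @ probs
    return fused / fused.sum()   # renormalise for non-uniform weights

fused = late_fusion([p_mic_in, p_mic_out, p_accel])
print("fused probabilities:", fused, "-> class", fused.argmax())
```

Per-modality weights could be tuned on validation data so that, for instance, the accelerometer dominates during motion-heavy activities.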
In particular, the COVID-19 Sounds work was one of the earliest attempts at providing contactless, automatic and effortless COVID-19 diagnostics through machine learning. Our data collection is possibly the largest crowdsourced dataset of its kind.
We have been one of the first groups to analyse this type of data, with many outputs on techniques for realistically tackling this problem. Our additional longitudinal data collection is unique, albeit limited in size.
Our work on forecasting disease progression is one of a kind: building on a unique dataset, we have developed techniques that, based on a single user's data, track how a respiratory disease is progressing, for example whether the person is deteriorating or improving.
The work exploring sounds from ear-worn wearables has generated interest in the mobile systems community, with two very high-impact publications. We have explored how to go beyond activity recognition and look at physiological signals such as heart rate and heart rate variability. We have started working on gait and other activities such as tooth brushing.
We have also started working with the abdominal sound belt, analysing how these sounds can be related to stress, and we have started to work on foetal heart sound detection in expectant mothers.
Our work on on-device machine learning for embedded systems is also highly novel: we have worked on continual learning, on-device training and on-device uncertainty estimation.
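A typical building block for on-device continual learning is a small rehearsal memory maintained by reservoir sampling, so that a device with limited storage can replay a uniform sample of past examples during on-device training. The sketch below is illustrative, not the project's implementation:

```python
import random

class ReservoirBuffer:
    """Fixed-size memory holding a uniform sample of the stream seen so far."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.n_seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            # Replace a stored item with probability capacity / n_seen.
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.items[j] = item

    def sample(self, k):
        """Draw a replay minibatch from the stored examples."""
        return self.rng.sample(self.items, min(k, len(self.items)))

buf = ReservoirBuffer(capacity=50)
for example in range(1000):   # in practice, a stream of (features, label) pairs
    buf.add(example)
print(len(buf.items), "stored out of", buf.n_seen, "seen")
```

Mixing each new sensor example with a minibatch drawn from such a buffer is one standard way to mitigate catastrophic forgetting under the memory budgets of wearable hardware.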
In terms of expected results for the rest of the project:
-we are further refining the COVID-19 Sounds work to monitor disease progression. This approach is unique, as we have collected a rare progression dataset.
-in terms of abdominal sound devices and analysis: we plan to extract physiological signals from these sounds. We are in the process of using abdominal sound belts for foetal heart rate monitoring.
-we are advancing the analysis of in-ear microphone sounds for vital signs as well as activity.
-we are improving on-device training as well as on-device uncertainty estimation.