Periodic Reporting for period 2 - SPRING (Socially Pertinent Robots in Gerontological Healthcare)
Reporting period: 2021-12-01 to 2023-05-31
What if robots could take on the repetitive tasks involved in receiving the public? Forms of artificial intelligence capable of interacting with humans already exist. However, these mediation tools offer only rudimentary capabilities, or require remote control by an engineer. While there are “butler” robots that can provide the weather forecast or give directions, they cannot execute complex social tasks autonomously, such as escorting users around a building. To carry out such tasks, a social robot must be capable of perceiving and distinguishing signals emitted by different speakers, understanding these signals, identifying that they are addressed to the robot, and then reacting accordingly. This is a daunting challenge, because it requires numerous perceptual abilities, together with machine learning, to support autonomous decision-making. SPRING's overall objective is to meet this challenge.
But how do we enable a robot to identify, among several simultaneous conversations, the request addressed to it; to understand that it is being asked where a person may sit; to look around and find a vacant seat; to plan a path that accompanies the speaker to that seat while avoiding other patients and staff on the premises; and then to judge whether offering some conversational distraction would be welcome? Numerous technological hurdles must be overcome to accomplish this type of complex task. For movement, SPRING opted for reinforcement learning. To determine its speed, approach angle and other movement parameters, the robot is trained by an artificial intelligence system that compares the action actually taken with the optimal action, and attributes “rewards” for successful outcomes. This training phase lets the robot encounter a wide variety of possible situations fully autonomously, without human intervention to correct its trajectories. Once placed in real conditions, the robot keeps learning and identifying the optimal action for each situation, which opens up the possibility of its use in a hospital setting. That is the aim of the second phase of SPRING, started in 2022: to validate the use of the robot in a hospital and to assess its impact on users and their habits, as well as its acceptability. Entrusting even simple social tasks to a robot is nevertheless far from innocuous and raises numerous ethical and organisational issues, which are also addressed within the project.
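To make the reward-driven training loop above concrete, here is a minimal, self-contained sketch of tabular Q-learning on a toy corridor. It illustrates the principle only (a reward for reaching a goal, small costs per move), with invented parameters; it is not SPRING's actual navigation learner.

```python
import random

# Toy corridor: the robot starts at cell 0 and must reach the vacant
# seat at cell 4. Actions: -1 (step back) or +1 (step forward).
N_CELLS, GOAL = 5, 4
ACTIONS = [-1, +1]

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_CELLS - 1)
    if nxt == GOAL:
        return nxt, 10.0, True   # "reward" for a successful outcome
    return nxt, -1.0, False      # small cost for every move

# Q-table: estimated return of each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):       # autonomous training, no human corrections
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly take the action currently judged optimal.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Move the estimate towards reward + discounted future value.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy heads straight for the vacant seat.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_CELLS)])
```

In the real system the state would include speed, approach angle and the positions of nearby people, and the policy would be a learned function rather than a table, but the reward-update cycle is the same idea.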
To develop Socially Assistive Robots capable of performing multi-person interaction and open-domain dialogue.
-->Following Covid-related delays in RP1, the project returned to schedule and achieved MS4 (early 2022), MS5 (November 2022, three months late), and MS6 (May 2023, on time).
To develop a novel paradigm and novel concept of socially-aware robots, and to conceive innovative methods and algorithms for computer vision, audio processing, sensor-based control, and spoken dialogue systems.
-->Progress currently centres on integrated testing of the advanced features developed by each partner (navigation, audio-signal tracking and cleaning, visual environmental awareness and audio-visual signal fusion, human-behaviour understanding, and dialogue), all working together, through a continuous cycle of experiments and integration that has been running since October 2022.
To create and launch a brand-new generation of robots that are flexible enough to adapt to the needs of the users, and not the other way around.
-->The robotic platform produced and delivered by PAL Robotics to all partners in 2021 is an ideal low-cost, highly flexible (ROS-based) platform for this project, and lays the basis for a new generation of social robots. Since RP1 it has been updated (hardware and software) using SPRING's contributions, to better adapt to market needs.
-->On the software side, SPRING's contribution is two-fold. First, it fosters the development of the ROS4HRI standard, specifically designed for social robotic platforms. Second, it provides the various skills a social robotic platform needs, in the form of dedicated software modules made publicly available on our platform (a minimal sketch of how such modules expose their output follows below).
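As a concrete illustration, here is a minimal sketch of a node consuming ROS4HRI-style perception output, assuming a ROS 1 environment with the hri_msgs package installed; the topic name follows the ROS4HRI convention (REP-155), and the node name is invented for the example.

```python
#!/usr/bin/env python
import rospy
from hri_msgs.msg import IdsList  # ROS4HRI message listing tracked ids

def on_tracked_persons(msg):
    # Each id identifies a person currently tracked by the perception
    # stack; per-person details live under /humans/persons/<id>/... topics.
    rospy.loginfo("Tracked persons: %s", ", ".join(msg.ids) or "none")

rospy.init_node("spring_hri_listener")  # hypothetical node name
rospy.Subscriber("/humans/persons/tracked", IdsList, on_tracked_persons)
rospy.spin()
```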
To validate the technology based on HRI experiments in a gerontology hospital, and to assess its acceptability by patients and medical staff.
-->Validation of the current modules has been running at the Broca Hospital since October 2022 through continuous experiment cycles. Acceptability studies run in parallel with the experiments; first results on the preliminary cycles, published in a deliverable, show good acceptability on average.
- To perform self-localisation and tracking in cluttered and populated spaces
-->Self-localisation achieved in relevant environments (Broca hospital). Optimisation efforts are ongoing to increase robustness and computational efficiency.
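Since the report does not detail the localisation algorithm, the following is only a toy illustration of one standard approach to self-localisation, Monte Carlo localisation (a particle filter), on a 1-D corridor with invented landmark positions; it is not a description of SPRING's implementation.

```python
import random

LANDMARKS = {2, 5, 8}   # hypothetical doorway positions along a corridor
CORRIDOR = 10           # corridor length in cells

def sense(pos):
    return pos in LANDMARKS  # noiseless toy sensor: "am I at a doorway?"

true_pos = 0
particles = [random.randrange(CORRIDOR) for _ in range(200)]

for _ in range(6):      # a few motion/measurement cycles
    true_pos = (true_pos + 1) % CORRIDOR
    # Motion update: move every particle, with occasional odometry slip.
    particles = [(p + random.choice([1, 1, 1, 0])) % CORRIDOR for p in particles]
    # Measurement update: weight particles by agreement with the sensor.
    z = sense(true_pos)
    weights = [0.9 if sense(p) == z else 0.1 for p in particles]
    # Resample in proportion to the weights.
    particles = random.choices(particles, weights=weights, k=len(particles))

# The particle cloud concentrates around plausible positions.
best = max(set(particles), key=particles.count)
print(f"true={true_pos} estimated={best}")
```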
- To build single- and multiple-person descriptions as well as representations of their interaction
-->All items achieved.
- To augment the 3D geometric maps with semantic information
-->Added the functionality to detect unknown objects from an arbitrary view, and leveraged large language models to better capture the relationships between observed objects. Full evaluation and integration with the rest of the pipeline remain to be done.
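As a sketch of what augmenting a geometric map with semantic information can mean in practice, the snippet below anchors detected object labels to grid cells so that places can be queried by nearby semantics; the class, object labels, and coordinates are illustrative assumptions, not the project's data structures.

```python
from collections import defaultdict

class SemanticMap:
    """Toy semantic layer over a 2-D occupancy grid."""
    def __init__(self, resolution=0.5):
        self.resolution = resolution   # metres per grid cell
        self.cells = defaultdict(set)  # (i, j) -> set of object labels

    def _cell(self, x, y):
        return (int(x / self.resolution), int(y / self.resolution))

    def add_detection(self, label, x, y):
        """Anchor a detected object label at metric position (x, y)."""
        self.cells[self._cell(x, y)].add(label)

    def labels_near(self, x, y, radius=1):
        """Collect labels in the cells around (x, y)."""
        ci, cj = self._cell(x, y)
        found = set()
        for di in range(-radius, radius + 1):
            for dj in range(-radius, radius + 1):
                found |= self.cells[(ci + di, cj + dj)]
        return found

smap = SemanticMap()
smap.add_detection("chair", 3.2, 1.1)
smap.add_detection("reception_desk", 3.6, 1.4)
print(smap.labels_near(3.4, 1.2))  # {'chair', 'reception_desk'}
```

A language model could then be asked, for instance, whether a spot with a chair near a reception desk is a plausible waiting area, which is the kind of relationship reasoning mentioned above.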
- To quantify the users’ levels of acceptance of social robots
-->We have decided to measure engagement with the interaction as a proxy for robot acceptance, and are currently devising multi-modal strategies to this end.
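Since those strategies are still being devised, the following is only a hedged sketch of what a multi-modal engagement proxy could look like: a handful of per-frame cues fused into a single score. The cue names, weights, and example frames are invented for illustration.

```python
def engagement_score(gaze_on_robot: bool, speaking_to_robot: bool,
                     distance_m: float) -> float:
    """Heuristic engagement score in [0, 1] from three multi-modal cues."""
    score = 0.0
    score += 0.4 if gaze_on_robot else 0.0           # visual attention cue
    score += 0.4 if speaking_to_robot else 0.0       # verbal addressing cue
    score += 0.2 * max(0.0, 1.0 - distance_m / 3.0)  # proxemics cue
    return min(score, 1.0)

# Averaging over an interaction window gives a coarse engagement estimate.
frames = [(True, False, 1.2), (True, True, 1.0), (False, False, 2.5)]
print(sum(engagement_score(*f) for f in frames) / len(frames))
```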
- To endow robots with the necessary skills to engage/disengage and participate in conversations
-->Integrated modules for this feature have been tested in a relevant environment (Broca hospital).
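As an illustration of the engage/disengage skill, here is a minimal finite-state-machine sketch; the states, cues, and transition rules are assumptions made for the example, not the integrated dialogue module that was tested.

```python
IDLE, ENGAGED = "idle", "engaged"

def next_state(state, person_close, addressed_robot, said_goodbye):
    """Toy engage/disengage policy driven by perception cues."""
    if state == IDLE and person_close and addressed_robot:
        return ENGAGED   # engage only when clearly addressed
    if state == ENGAGED and (said_goodbye or not person_close):
        return IDLE      # disengage on farewell or departure
    return state

state = IDLE
for cues in [(True, True, False), (True, False, False), (True, False, True)]:
    state = next_state(state, *cues)
    print(state)  # engaged, engaged, idle
```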
- Empower robots with skills needed for situated interactions
-->Situated interaction on the basis of geometric and social representations has been achieved; semantic and behavioural representations are foreseen for the next RP.
- Online learning of active perception strategies
-->We continued investigating active strategies to improve ASR performance and/or sound-localisation performance. We have also investigated meta-learning strategies that allow the robot to perceive, and quickly adapt to, the various kinds of environments it may face. Similarly, we are investigating how to take human feedback into account when selecting the best navigation strategy.
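To make the idea of human feedback selecting the best strategy concrete, here is a toy epsilon-greedy bandit sketch; the strategy names and simulated approval rates are invented, and the project's actual approach may differ.

```python
import random

STRATEGIES = ["conservative", "direct", "socially_aware"]
TRUE_APPROVAL = {"conservative": 0.5, "direct": 0.3, "socially_aware": 0.8}

counts = {s: 0 for s in STRATEGIES}
values = {s: 0.0 for s in STRATEGIES}  # running mean feedback per strategy

for trial in range(1000):
    # Explore occasionally; otherwise exploit the best-rated strategy so far.
    s = random.choice(STRATEGIES) if random.random() < 0.1 \
        else max(STRATEGIES, key=values.get)
    # Simulated binary human feedback (approve / disapprove).
    feedback = 1.0 if random.random() < TRUE_APPROVAL[s] else 0.0
    counts[s] += 1
    values[s] += (feedback - values[s]) / counts[s]  # incremental mean

print(max(STRATEGIES, key=values.get))  # converges to "socially_aware"
```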
- Demonstrate the pertinence of the project’s scientific and technological developments
-->Demonstration efforts began in October 2022 in relevant environments (Broca hospital) and have led to preliminary results.