Periodic Reporting for period 3 - MuMMER (MultiModal Mall Entertainment Robot)
Reporting period: 2018-09-01 to 2020-02-29
The objectives of the project included:
1. Developing an interactive robot for entertainment applications.
2. Involving stakeholders throughout the project in a co-design process.
3. Allowing the robot to perceive the world through its own built-in sensors.
4. Automatically learning strategies for the robot to interact with humans.
5. Moving and navigating safely and naturally in a crowded public space.
6. Developing new business models and opportunities for socially interactive robots in public spaces.
The results of MuMMER include:
- A co-designed interactive mobile robot with entertainment features and behaviours that is able to interact naturally with humans in a public space.
- A set of concrete, detailed, tested use and business scenarios for a mobile entertainment robot in a shopping mall.
- A set of success criteria and evaluation strategies for assessing how well the robot performs its designated tasks.
- A set of publicly available, reusable, state-of-the-art components for audiovisual scene processing, social signal processing, high-level action selection, and human-aware robot navigation.
All partners received the Pepper robot in June 2016. All technical partners then developed initial versions of the components that would combine to create the MuMMER system, and these were integrated into an initial interactive system that supports the target scenario identified by the co-design process. The Pepper robot hardware was evaluated in the context of the project needs, and a concrete plan was developed for hardware and software updates to Pepper to allow it to fully support the project's research goals.
During Period 2, we identified and refined a concrete scenario relevant to the mall setting which supports the integration of state-of-the-art research from all partners. The scenario is based around guidance, i.e. helping users to find locations in the mall, but also includes aspects of interactive chat and entertainment. We carried out a human-human study to assess how the mall's existing human guides perform guidance tasks, and implemented a version of the MuMMER system that supports this guidance scenario, deploying it in the mall. Significant effort was made to develop a modified version of Pepper with hardware suitable for the mall environment.
We carried out regular studies in the mall measuring user acceptance of the robot, involving several hundred participants. Work in the technical work packages progressed as follows:
- WP2: the primary focus was on updating and extending the perception components to make them more robust and informative.
- WP3: we developed and evaluated components for generating non-verbal robot behaviour designed to produce particular social effects on the user.
- WP4: we developed a new dialogue system called Alana, a scalable and highly customizable open-domain dialogue system composed of several interchangeable components, combined into four basic modules (see the sketch after this list).
- WP5: work concentrated on enhancing and integrating all the building blocks involved in the navigation and localisation tasks.
- WP8: we outlined four possible use scenarios for a MuMMER-like robot system and discussed the possible business advantages of each.
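To make the interchangeable-component idea concrete, the following minimal Python sketch shows one plausible arrangement of such a system: each response bot proposes a scored candidate reply, and a simple selection policy picks the best one. The class names, scoring scheme, and selection rule here are illustrative assumptions rather than the actual Alana design.

```python
# A minimal sketch of an interchangeable-component dialogue manager in the
# spirit of Alana; the bot names, scoring scheme, and selection rule are
# illustrative assumptions, not the actual Alana implementation.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Candidate:
    bot_name: str
    text: str
    score: float  # confidence assigned by the proposing bot (hypothetical)

class ChitChatBot:
    """Hypothetical fallback component for open-domain small talk."""
    name = "chitchat"

    def respond(self, utterance: str) -> Optional[Candidate]:
        # Always offers a low-confidence reply so the system never stalls.
        return Candidate(self.name, "Happy to chat! What brings you to the mall?", 0.3)

class GuidanceBot:
    """Hypothetical task component handling 'where is ...' guidance queries."""
    name = "guidance"

    def respond(self, utterance: str) -> Optional[Candidate]:
        if "where" in utterance.lower():
            return Candidate(self.name, "The pharmacy is on the ground floor, to your left.", 0.9)
        return None  # nothing to contribute for other inputs

def select_response(utterance: str, bots: List) -> Candidate:
    """Collect a candidate from every bot and return the highest-scoring one."""
    candidates = [c for bot in bots if (c := bot.respond(utterance)) is not None]
    return max(candidates, key=lambda c: c.score)  # chitchat guarantees non-empty

if __name__ == "__main__":
    bots = [ChitChatBot(), GuidanceBot()]  # components can be swapped freely
    print(select_response("Where is the pharmacy?", bots).text)  # guidance wins
    print(select_response("Hello there!", bots).text)            # fallback answers
```

Because each bot exposes only a respond() method, components can be added, removed, or replaced without touching the selection logic, which is the property an interchangeable-component architecture of this kind aims for.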
During the final reporting period, development continued on all technical components, resulting in a final set of state-of-the-art components in all areas. Beyond this, the project also achieved its final goal during this period: a long-term deployment in the mall. Specific achievements in this area include the following:
1. A modified version of the Pepper robot, with hardware suitable for the mall environment, was delivered to all partners.
2. A final scenario integrating guidance with social chat was developed, and software supporting this scenario was developed by the partners and integrated onto the target robot platform.
3. The final, integrated robot system was deployed in the target environment over a 14-week period from September 2019 through January 2020.
At the end of the project, the following represent the progress beyond the state of the art:
1. A prototype modified version of the Pepper robot, developed by SoftBank for the purposes of the MuMMER project.
2. Software for audiovisual sensing and person tracking.
3. Software for social signal recognition, and social signal generation involving novel non-verbal behaviours of the robot.
4. A conversational system that supports multithreaded dialogue on a range of topics, and novel neural techniques for Natural Language Understanding suitable for spoken human-robot interaction (HRI).
5. A suite of components suitable for human-aware navigation and guidance in the shopping mall context.
6. Recordings and user evaluations from a long-term, autonomous robot deployment in the shopping mall for a total of 49 days over 14 weeks.