Periodic Reporting for period 2 - VRTogether (An end-to-end system for the production and delivery of photorealistic social immersive virtual reality experiences)
Reporting period: 2018-11-01 to 2020-12-31
The grand promise of Virtual Reality is that of a medium that makes you feel present in a place where you are not: a place with an unfolding plot in which you can take part, navigate freely, and interact openly with any element, including virtual characters. In this project, the aim is to radically improve the experience by innovating in how media formats are used (i.e. how audio, video and graphics are captured, delivered and rendered at users’ homes), demonstrating a significant improvement both in the feeling of being there together and in the photorealistic quality of the content.
VR-Together has produced two platforms and a set of tools to offer photorealistic immersive Virtual Reality (VR) content that can be experienced together with others while apart. The main objective of the project has been to research and develop advanced social VR experiences through the orchestration of innovative media formats. The production and delivery of such experiences, and the underlying technology that enables them, have been demonstrated throughout the three years and three months during which the project has been active, and five specific objectives have been addressed:
OBJ1. Develop and integrate new media formats that deliver high quality photo-realistic content and create a strong feeling of co-presence in coherently integrated experiences.
OBJ2. Adapt the existing production pipeline to capture and encode multiple media formats and integrate them with state-of-the-art post-production tools.
OBJ3. Re-design the distribution chain so that such innovative content formats can be orchestrated and delivered in a scalable manner.
OBJ4. Develop appropriate Quality of Experience (QoE) metrics and evaluation methods to quantify the quality of these new social VR experiences.
OBJ5. Maximize the impact VR-Together can have on content creators, producers, distributors, tooling companies, service providers and the general audience.
Thanks to the state-of-the-art technology developed within VR-Together for the real-time capture, compression and transmission of volumetric video, participants on our platforms are able to feel as if remotely located friends and family were actually sharing the same physical environment.
Over three years, we have explored and developed real-time volumetric capturing systems; real-time, low-latency transmission pipelines for such data volumes; two platforms, for Unity and web environments, able to orchestrate multiple users in multiple interactive sessions under different conditions and with heterogeneous user representation formats; and open datasets that include valuable capture data and 3D environments, among other things.
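The capture-compress-transmit flow for volumetric video described above can be illustrated with a minimal sketch. This is an assumption-laden toy model, not the project's actual codecs or wire format: the `PointCloudFrame` layout and the use of zlib as a stand-in codec are purely illustrative.

```python
# Illustrative capture -> compress -> transmit flow for volumetric frames.
# All names and formats here are assumptions, not VR-Together project code;
# zlib stands in for a real point-cloud codec (e.g. an MPEG codec).
import zlib
from dataclasses import dataclass

@dataclass
class PointCloudFrame:
    timestamp_ms: int
    points: bytes  # packed per-point payload (e.g. XYZ + RGB) from capture

def compress_frame(frame: PointCloudFrame) -> bytes:
    """Serialize a frame for transmission: 8-byte timestamp header
    followed by the compressed point payload."""
    header = frame.timestamp_ms.to_bytes(8, "big")
    return header + zlib.compress(frame.points)

def decompress_frame(blob: bytes) -> PointCloudFrame:
    """Inverse of compress_frame, as a receiver would run it."""
    ts = int.from_bytes(blob[:8], "big")
    return PointCloudFrame(ts, zlib.decompress(blob[8:]))
```

A real low-latency pipeline would replace the codec with a hardware-friendly point-cloud compressor and push each compressed frame onto a network transport, but the per-frame serialize/deserialize shape is the same.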
Hundreds of end users have been involved in the project through three pilots, in which specific use-case scenarios have been evaluated. Professionals and stakeholders have also contributed to the project through specific industry events and workshops, in which requirements, results and future steps have been presented and gathered.
The VR-Together value proposition lies in its real-time, realistic interaction capabilities in multi-user environments, offering all the necessary tools to make a deeper immersion and togetherness experience possible:
· Volumetric capturing systems, based on single and multiple RGB-D sensors.
· End-to-end low-latency pipelines for the integration of live 2D and volumetric streams, including encoding and distribution solutions.
· Orchestration components for session management.
· Multi-Point Control Unit (MCU) components for optimized in-cloud processing of point clouds and RGB-D streams.
· Web and native media clients supporting the integration of heterogeneous media formats for the end-users’ representation and the virtual scenario, as well as a set of interactivity features (basic voice control, teleporting, interaction with objects, event handling…).
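The orchestration component listed above manages sessions in which each user both uplinks their own representation stream and receives the streams of the other participants. The following sketch models that pairing logic under simplified assumptions; the class and method names are hypothetical, not the project's API.

```python
# Toy session orchestrator: pairs each user's uplink stream with the
# downlink streams of every other participant in the same session.
# Names (Orchestrator, MediaStream, join, downlinks_for) are illustrative.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MediaStream:
    user_id: str
    fmt: str           # e.g. "pointcloud", "rgbd", "2d-video"
    bitrate_kbps: int

@dataclass
class Session:
    session_id: str
    streams: Dict[str, MediaStream] = field(default_factory=dict)

class Orchestrator:
    def __init__(self) -> None:
        self.sessions: Dict[str, Session] = {}

    def join(self, session_id: str, stream: MediaStream) -> None:
        """Register a user's uplink stream, creating the session if needed."""
        session = self.sessions.setdefault(session_id, Session(session_id))
        session.streams[stream.user_id] = stream

    def downlinks_for(self, session_id: str, user_id: str) -> List[MediaStream]:
        """Each user receives every stream in the session except their own."""
        session = self.sessions[session_id]
        return [s for uid, s in session.streams.items() if uid != user_id]
```

Note that the `fmt` field is what allows heterogeneous representation formats to coexist in one session: clients can negotiate per-stream rendering paths, while an MCU could use the same session state to decide which streams to process in-cloud.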
Beyond the technological components, the project has contributed professionally produced content scenarios and assets, as well as new evaluation metrics. This has resulted in open-source and licensed software, open-science datasets, evaluation methodologies and resources, and recommendations on how to provide social VR experiences. These outputs have been reflected in numerous publications (up to 40) in high-impact conferences (e.g. ACM CHI, ACM MM, IEEE Virtual Reality) and journals (e.g. IEEE Access, Virtual Reality…), and in standardization contributions (e.g. MPEG, ITU, W3C…).
Overall, unlike most existing social VR platforms, which rely on synthetic avatars alone or require more expensive and complex setups to enable realistic representations (like Microsoft Holoportation), VR-Together has pioneered enabling and demonstrating distributed multi-party social VR experiences with realistic end-user representations, including self-representations, in a cost-effective and modular manner. The project has thus significantly paved the way towards the adoption of this promising medium in a set of verticals (entertainment, education, corporate meetings, health…) that will propel distant human communication and interaction to the next level.