Periodic Reporting for period 2 - ImmersiaTV (Immersive Experiences around TV, an integrated toolset for the production and distribution of immersive and interactive content across devices.)
Reporting period: 2017-04-01 to 2018-06-30
In this context, the arrival of immersive head-mounted displays on the consumer market introduced new possibilities, but also new challenges. Immersive displays impose radically different audience requirements compared to traditional broadcast TV and social media. They require a constant, frequently refreshed, omnidirectional audiovisual stream that integrates sensorimotor information. This means that, at a minimum, the rendered visual perspective changes consistently with changes in head position and rotation. In addition, immersive displays challenge the conventions of traditional audiovisual language. For example, cuts between shots, which constitute the very fabric of traditional cinematic language, do not work well in immersive displays. From a user perspective, omnidirectional TV offers a new user experience and a different way of engaging with audiovisual content.
To address this new context, ImmersiaTV has explored new forms of digital storytelling and broadcast production by putting omnidirectional video at the center of the creation, production and distribution of content, delivering an all-encompassing experience that integrates the specificities of immersive displays within the contemporary living room. We have proposed a form of broadcast omnidirectional video that offers end users a coherent audiovisual experience across head-mounted displays, second screens and the traditional TV set, instead of having their attention divided across them. This new experience seamlessly integrates with, and further augments, traditional TV and second-screen consumer habits. In other words: the audience is still able to watch TV sitting on the couch, or tweet comments about it, but it can also use immersive displays to feel immersed inside the audiovisual stream.
The primary goal of ImmersiaTV has been to create an end-to-end toolset for the creation of multiscreen immersive experiences, addressing the different phases of content creation: ideation, production, distribution and consumption. This resulted in the five main project objectives described below:
Objective 1, to create a new immersive cinematographic language.
Objective 2, to adapt the production pipeline.
Objective 3, to redesign the distribution chain.
Objective 4, to maximize the quality of the end-user and professional-user experience.
Objective 5, to maximize the market impact of the ImmersiaTV solutions and to ensure ImmersiaTV has a determining impact on the European and global audiovisual market.
-Capture: The partner VideoStitch delivered one of the first 360 cameras on the market with an embedded stitching process and a binaural audio microphone, providing standard H.264 and H.265 output streams (in contrast to other solutions at the time, such as Nokia's Ozo, which required proprietary software to manage and edit the output streams). In addition, the other capture partners (IMEC-Uni Hasselt and Azilpix) improved the usability, capacity and transcoding features of the Studio.One system. Studio.One is a black-box video capture solution that can manipulate multiple video streams (conventional, panoramic and 360), providing a set of functionalities (image manipulation, transcoding, camera sync, etc.) that facilitates, among other things, the use of multiple 360 cameras, not only to create 360 content, but also to derive directive (conventionally framed) content from 360 capture devices, as sketched below.
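As a purely illustrative sketch (not project code), the following Python fragment shows one standard way to derive a directive, fixed-frame view from an equirectangular 360 frame. The function name, field of view and output resolution are assumptions chosen for the example; a production tool would add proper interpolation and lens handling.

import numpy as np

def equirect_to_perspective(frame, yaw_deg, pitch_deg, fov_deg=90.0,
                            out_w=1280, out_h=720):
    """Sample a perspective ("directive") view from an equirectangular
    frame of shape (H, W, 3); nearest-neighbor sampling for brevity."""
    h, w = frame.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2.0)  # pinhole focal length

    # Pixel grid of the output view, centered on the optical axis.
    xs = np.arange(out_w) - out_w / 2.0
    ys = np.arange(out_h) - out_h / 2.0
    x, y = np.meshgrid(xs, ys)
    z = np.full_like(x, f)

    # Rotate the viewing rays by pitch (around x), then yaw (around y).
    pitch, yaw = np.radians(pitch_deg), np.radians(yaw_deg)
    rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch), np.cos(pitch)]])
    ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    rays = np.stack([x, y, z], axis=-1) @ (ry @ rx).T
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Convert ray directions to equirectangular texture coordinates.
    lon = np.arctan2(rays[..., 0], rays[..., 2])        # [-pi, pi]
    lat = np.arcsin(np.clip(rays[..., 1], -1.0, 1.0))   # [-pi/2, pi/2]
    u = ((lon / np.pi + 1.0) * 0.5 * (w - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1.0) * 0.5 * (h - 1)).astype(int)
    return frame[v, u]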
-Production: i2CAT, PSNC and CGY have researched and developed new production tools for multiscreen environments that allow the seamless integration of directive and omnidirectional content in editing or live production scenarios. To the best of our knowledge, no other tools yet achieve such results. Both tools allow content creators to design multiscreen interactive and immersive experiences; a hypothetical example of the kind of scene description a multiscreen player could consume is given below.
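To make the idea concrete, the following is a hypothetical multiscreen scene description; the field names and format are illustrative assumptions, not the project's actual metadata schema. A player could use such a description to keep the TV, tablet and HMD renditions of the same scene in sync.

import json

scene = {
    "scene_id": "intro-01",
    "start_ms": 0,
    "duration_ms": 42000,
    "streams": [
        # Directive (framed) cut for the TV set.
        {"device": "tv", "type": "directive", "url": "intro_tv.mp4"},
        # Omnidirectional rendition for the head-mounted display.
        {"device": "hmd", "type": "omnidirectional",
         "projection": "equirectangular", "url": "intro_360.mp4"},
        # Tablet shows the directive cut plus an interactive portal into 360.
        {"device": "tablet", "type": "directive", "url": "intro_tv.mp4",
         "portal": {"target": "intro_360.mp4", "yaw_deg": 30, "pitch_deg": 0}},
    ],
}

print(json.dumps(scene, indent=2))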
-Encoding: EPFL developed a new methodology for saliency estimation in 360 content that can improve compression efficiency for this type of content by taking into account the relative importance of different areas of a video frame when observed by human subjects. The results were presented at ICME 2017 (https://ieeexplore.ieee.org/document/8026231/). The general idea is sketched below.
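As a minimal sketch of how a saliency map can steer an encoder (not EPFL's published method), the fragment below maps per-block saliency in [0, 1], assumed to come from any 360-aware saliency estimator, to per-block quantization parameters: salient regions get finer quantization and hence more bits.

import numpy as np

def saliency_to_qp(saliency, qp_base=32, qp_range=8):
    """Map block saliency in [0, 1] to per-block QP values.

    High saliency -> lower QP (finer quantization, more bits);
    low saliency  -> higher QP (coarser quantization, fewer bits)."""
    return np.round(qp_base + qp_range * (0.5 - saliency)).astype(int)

# Example: a 4x4 grid of blocks with one salient region in the middle.
sal = np.zeros((4, 4))
sal[1:3, 1:3] = 0.9
print(saliency_to_qp(sal))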
-Distribution: i2CAT has analyzed different omnidirectional projections and tested them in different content scenarios. A novel 360 streaming method has been implemented and evaluated: it divides the cube (in a cubic projection of the 360 video) into two (H.264) tiles, streams them adaptively based on the user's viewpoint, and plays them out in a synchronized manner in web-based players (see the sketch below). The results were presented at the NOSSDAV workshop in 2018 (https://dl.acm.org/citation.cfm?id=3210456&dl=ACM&coll=DL).
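The sketch below illustrates viewport-adaptive tile selection for cubemap streaming in simplified form: the six cube faces are partitioned into a viewport tile and an out-of-view tile based on the viewing direction. How faces are actually grouped and labeled in the published method is not reproduced here; this split is an assumption for the example.

import numpy as np

# Outward unit normals of the six cube-map faces.
FACES = {
    "front": ( 0,  0,  1), "back":   ( 0,  0, -1),
    "left":  (-1,  0,  0), "right":  ( 1,  0,  0),
    "top":   ( 0,  1,  0), "bottom": ( 0, -1,  0),
}

def view_direction(yaw_deg, pitch_deg):
    """Unit vector of the user's viewing direction."""
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    return np.array([np.sin(yaw) * np.cos(pitch),
                     np.sin(pitch),
                     np.cos(yaw) * np.cos(pitch)])

def split_into_tiles(yaw_deg, pitch_deg):
    """Partition the six faces into a viewport tile (requested at high
    quality) and an out-of-view tile (requested at low quality)."""
    d = view_direction(yaw_deg, pitch_deg)
    ranked = sorted(FACES, key=lambda f: -np.dot(FACES[f], d))
    return {"high": ranked[:3], "low": ranked[3:]}

# Example: the user looks 45 degrees to the right, slightly upward.
print(split_into_tiles(yaw_deg=45, pitch_deg=10))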