Repurposing and enriching images for immersive storytelling through smart digital tools

Periodic Reporting for period 2 - FotoInMotion (Repurposing and enriching images for immersive storytelling through smart digital tools)

Reporting period: 2019-07-01 to 2020-12-31

The amount of digital content available to the creative industries is growing exponentially, driven by the ubiquitous use of smartphones and the proliferation of social media: (1) photographic content is increasing enormously (more than 1.8 billion photos are uploaded to social media platforms each day); (2) factual, entertainment and social media publishers and platforms are continuing to shift from text- and photo-centric formats to video-driven formats (more than 400 hours of video are uploaded to YouTube each minute); (3) 3D and virtual reality are having a growing impact on immersive storytelling, offering content creators new forms of audience engagement and monetization in the coming years.
Against this background, critical questions arise in both content production and dissemination contexts: how can this massive amount of content be repurposed; which innovative tools are best suited to this process; and, finally, how can these tools offer new monetization possibilities for creative industry professionals?
FotoInMotion sets out to answer these questions and provide an innovative solution for repurposing content, offering automated tools for contextual data extraction, object recognition, creative transformation, editing and text animation, as well as state-of-the-art 3D conversion options that allow content creators to transform their photos into highly engaging spatial and three-dimensional video experiences.
FotoInMotion focuses on three major creative industry sectors: photojournalism, developing interactive photo-driven stories; fashion, opening up new forms of marketing, product placement and event coverage; and festivals, enabling PR and publicity managers to communicate the festival experience, engage audiences through immersive communication and repurpose festival archives. Professionals and experts from these three creative industries continuously explore and test the FotoInMotion technological outcomes in order to achieve the highest possible quality, performance and innovation.
The FotoInMotion consortium launched its activities with an extensive analysis of the innovation tools and trends in the video and image processing marketplace, together with an analysis of features, pricing and platforms, enabling the team to reach a common understanding of the current status and to establish a shared vocabulary between end users and technical partners. That analysis allowed the end users to develop a concrete and complete set of user requirements, which were then “translated” into technical requirements by the technical partners.
Based on the user requirements and the project’s goals, the components of the system and the interactions between them were defined, constituting the overall system architecture. This architecture follows a distributed pattern in which the components communicate through a set of synchronous and asynchronous secured web services.
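The public summary does not detail these service interfaces, but the following minimal Python sketch, using the requests library, illustrates the distinction between a synchronous call and an asynchronous, polled job in such a distributed setup; the base URL, endpoint paths, payloads and token handling are illustrative assumptions, not the project's actual API.

```python
import time
import requests

# Hypothetical base URL and bearer token; the real FotoInMotion endpoints are not public.
API_BASE = "https://api.example.org/fotoinmotion/v1"
HEADERS = {"Authorization": "Bearer <access-token>"}  # secured web services

def analyse_sync(image_path: str) -> dict:
    """Synchronous service call: send an image and wait for the analysis result."""
    with open(image_path, "rb") as f:
        resp = requests.post(f"{API_BASE}/analyse", headers=HEADERS,
                             files={"image": f}, timeout=30)
    resp.raise_for_status()
    return resp.json()

def render_async(story_id: str) -> dict:
    """Asynchronous service call: submit a render job, then poll until it finishes."""
    job = requests.post(f"{API_BASE}/render", headers=HEADERS,
                        json={"story": story_id}, timeout=30)
    job.raise_for_status()
    job_url = f"{API_BASE}/jobs/{job.json()['id']}"
    while True:
        status = requests.get(job_url, headers=HEADERS, timeout=30).json()
        if status["state"] in ("done", "failed"):
            return status
        time.sleep(5)  # long-running 2D/3D rendering is handled out of band
```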
In parallel with the end users' work, and based on the user requirements, the technical partners began experimenting with state-of-the-art machine-learning platforms and algorithms for the extraction and identification of visual features. Based on these results, the functional specifications of the visual analysis and classification component (iCAT) were defined, and the APIs for its integration into the complete FotoInMotion system architecture were implemented.
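The internals of iCAT are not described in this summary; as a rough, assumed illustration of ML-based visual feature extraction of the kind mentioned above, the sketch below applies an off-the-shelf pretrained classifier (torchvision's ResNet-50) to produce candidate tags for a photo. The project's actual models, label sets and APIs may differ.

```python
import torch
from PIL import Image
from torchvision import models
from torchvision.models import ResNet50_Weights

# Off-the-shelf pretrained classifier as a stand-in for the project's own models.
weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()        # resize, crop, normalise
labels = weights.meta["categories"]      # ImageNet class names

def extract_tags(image_path: str, top_k: int = 5) -> list[tuple[str, float]]:
    """Return the top-k predicted labels for an image, with confidence scores."""
    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        probs = model(preprocess(img).unsqueeze(0)).softmax(dim=1)[0]
    conf, idx = probs.topk(top_k)
    return [(labels[i], float(c)) for i, c in zip(idx, conf)]

print(extract_tags("festival_photo.jpg"))
```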
In parallel, the technical team selected a set of neural networks and began to configure, parametrise, train and fine-tune them to analyse photographs and identify features relevant to the FotoInMotion use cases. This led to the development of the FotoInMotion image annotation tool (AAT). The tool receives the output of the image analysis and feature extraction tools and enables the user to enhance and/or augment the automatically obtained tags. In turn, it provides new training material and data to assess, validate, re-train and fine-tune those image analysis tools.
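The AAT is described here only at a functional level; the sketch below shows one possible way to merge automatically obtained tags with user corrections and export the result as new training data. The record fields and JSON layout are assumptions made for illustration.

```python
import json
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """One image record as an annotation workflow might represent it (hypothetical schema)."""
    image_id: str
    auto_tags: list[str]                                 # tags produced by the analysis tools
    user_tags: list[str] = field(default_factory=list)   # tags added by the user
    rejected: set[str] = field(default_factory=set)      # automatic tags the user removed

    def final_tags(self) -> list[str]:
        """Confirmed tags: automatic tags minus rejections, plus user additions."""
        kept = [t for t in self.auto_tags if t not in self.rejected]
        return kept + [t for t in self.user_tags if t not in kept]

    def to_training_record(self) -> str:
        """Serialise as a JSON line that could feed re-training of the analysis models."""
        return json.dumps({"image": self.image_id, "labels": self.final_tags()})

ann = Annotation("img_0042", auto_tags=["stage", "crowd", "dog"], user_tags=["festival"])
ann.rejected.add("dog")            # the user corrects a wrong automatic tag
print(ann.to_training_record())    # {"image": "img_0042", "labels": ["stage", "crowd", "festival"]}
```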
The technical partners have also been working on developing and testing the 2D image editing and quality assistance tools that end users employ to prepare images for processing by the FotoInMotion application. This covers actions such as cropping, adjusting colour balance and applying transformations to images. The team has also developed and tested a set of video and audio effects, such as pan, zoom and pitch, to assist narration and, consequently, produce 2D videos in various qualities as well as 3D videos.
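The editing tools themselves are not published in this summary; the sketch below reproduces the same kinds of preparation steps (cropping, colour-balance adjustment, a simple geometric transformation) with the Pillow library, purely as an assumed illustration of the workflow.

```python
from PIL import Image, ImageEnhance

def prepare_image(src: str, dst: str) -> None:
    """Crop, adjust colour balance and apply a simple transformation before video generation."""
    img = Image.open(src).convert("RGB")

    # Crop to a centred 16:9 region, a common aspect ratio for video output.
    w, h = img.size
    target_h = int(w * 9 / 16)
    if target_h <= h:
        top = (h - target_h) // 2
        img = img.crop((0, top, w, top + target_h))

    # Colour-balance-style adjustments: saturation, contrast, brightness.
    img = ImageEnhance.Color(img).enhance(1.15)
    img = ImageEnhance.Contrast(img).enhance(1.05)
    img = ImageEnhance.Brightness(img).enhance(1.02)

    # A simple geometric transformation, e.g. mirroring for composition.
    img = img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)
    img.save(dst, quality=92)

prepare_image("runway_shot.jpg", "runway_shot_prepared.jpg")
```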
All technical work has been consolidated in the development of the web and mobile applications, which were pilot tested by the end users. Through continuous iterations based on user feedback, the web and mobile applications were updated and new functionalities were implemented in every iteration to address all user needs and specifications. Through the two applications, a user can perform various actions such as uploading photos and audio, viewing their media library, extracting templates, applying image, audio and video filters, and eventually creating their own story by generating a 2D or 3D video, which can be stored and shared to social media according to the selected settings.
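As an assumed illustration of the final story-generation step, the sketch below turns a single photo into a short 2D pan-and-zoom video using Pillow and imageio (with the imageio-ffmpeg backend); the actual applications additionally handle templates, text animation, audio, 3D output and social sharing.

```python
import numpy as np
import imageio.v2 as imageio   # .mp4 output requires the imageio-ffmpeg backend
from PIL import Image

def photo_to_panzoom_video(photo: str, out: str, seconds: int = 5, fps: int = 25,
                           size: tuple[int, int] = (1280, 720)) -> None:
    """Turn one photo into a short 2D video with a slow zoom-in effect."""
    img = Image.open(photo).convert("RGB")
    w, h = img.size
    writer = imageio.get_writer(out, fps=fps)
    frames = seconds * fps
    for i in range(frames):
        zoom = 1.0 + 0.15 * i / frames               # zoom gradually from 100% to 115%
        cw, ch = int(w / zoom), int(h / zoom)
        left, top = (w - cw) // 2, (h - ch) // 2
        frame = img.crop((left, top, left + cw, top + ch)).resize(size)
        writer.append_data(np.asarray(frame))
    writer.close()

photo_to_panzoom_video("press_photo.jpg", "press_photo_story.mp4")
```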
The FotoInMotion project outcomes have capitalised on and advanced cutting-edge technologies based on research and development in image analytics and image recognition by INESC and in 3D technologies by QdepQ. In the creative industries, the FotoInMotion technological outcomes will enhance the way people use photography to tell stories, and, more broadly, all photography-related fields. For the art industries, in terms of exhibitions, it will offer many options for creating multimedia projects from traditional photography: it will enhance the way people interact with art and give creatives the chance to revisit and enrich old photography projects. Machine learning and deep learning techniques for image recognition and analytics elaborated by INESC contribute to the advancement of the FotoInMotion services. FotoInMotion uses contextual information to suggest a story, making the workflow easier and more specific to the content, while the user remains in full control. In addition, QdepQ’s algorithm is currently the commercial state of the art in 2D-to-3D conversion, and academic research interest in this field has recently been growing.