Periodic Reporting for period 2 - MeMAD (Methods for Managing Audiovisual Data: Combining Automatic Efficiency with Human Accuracy)
Reporting period: 2019-07-01 to 2021-03-31
The MeMAD consortium focused in particular on TV broadcasting and on-demand media services. The four main project objectives were:
Objective O1: Develop novel methods and tools for digital storytelling
Objective O2: Deliver methods and tools to expand the size of media audiences
Objective O3: Develop an improved scientific understanding of multimodal and multilingual media content analysis, linking and consumption
Objective O4: Deliver object models and formal languages, distribution protocols and display tools for enriched audiovisual data
The results of MeMAD were well aligned with the action ICT-20-2017, developing tools for smart digital content for the creative industries in the European broadcasting domain. The research results were world class, as demonstrated first by our success in various scientific benchmarking challenges and then by promising results on novel real-world tasks. By publishing scientific articles and sharing the software and results openly, we moved the research field forward. By disseminating the results directly to various European broadcasters and their service suppliers, we worked to maximize our impact on the production and distribution of audiovisual content.
We pushed the state of the art in generating descriptions of audiovisual data jointly from automatic visual analysis, speech recognition, audio event detection, speaker diarization and named entity recognition, and introduced an innovative way of ingesting legacy metadata in the form of a knowledge graph. We contributed to the state of the art in multimodal machine translation, where the output description of multimodal events can be provided in multiple languages to improve cross-lingual search. We studied human annotation of video data and audio description, and created a human-annotated video database for the comparative analysis of human and machine descriptions. We contributed to existing semantic metadata standards and applied Linked Data best practices to publish a MeMAD knowledge graph that provides semantic descriptions of broadcast video data. Finally, we facilitated joint work between the media industry and researchers, increasing mutual understanding of typical professional workflows, priorities and user needs in both domains. This paves the way for deeper future collaboration and ensures the relevance of the project work.
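To make the joint use of these components concrete, the following minimal Python sketch shows how time-stamped results from separate analysers (speech recognition, visual captioning, audio event detection, speaker diarization) can be merged into one chronological description of a programme. The field names and analysis outputs below are invented for illustration and are not the project's actual schema.

# A minimal sketch, with made-up analysis outputs; field names are illustrative only.
from dataclasses import dataclass

@dataclass
class Annotation:
    start: float      # seconds from the start of the programme
    end: float
    source: str       # which automatic component produced the annotation
    text: str

annotations = [
    Annotation(12.0, 15.5, "asr", "good evening and welcome"),
    Annotation(12.0, 15.5, "diarization", "speaker: anchor_1"),
    Annotation(12.0, 19.5, "captioning", "a woman sits at a news desk"),
    Annotation(16.0, 18.0, "audio-events", "jingle"),
]

# Merge all annotations into a single time-aligned, chronological description.
for ann in sorted(annotations, key=lambda a: (a.start, a.end)):
    print(f"[{ann.start:6.1f}-{ann.end:6.1f}] {ann.source:12s} {ann.text}")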
Public project deliverables and a summary of project publications can be found at https://zenodo.org/communities/memad. Software and results created in the project are available at https://github.com/memad-project. We published a semantic data platform at http://data.memad.eu/.
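As an illustration of how such a semantic data platform can be consumed, the sketch below issues a SPARQL query with Python's SPARQLWrapper. The endpoint path and the EBUCore-style terms in the query are assumptions made for illustration; consult http://data.memad.eu/ for the actual endpoint and vocabulary.

# A minimal sketch, assuming a SPARQL endpoint at http://data.memad.eu/sparql
# (hypothetical path) and EBUCore-style terms; adjust to the platform's actual setup.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://data.memad.eu/sparql")  # hypothetical endpoint path
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    PREFIX ebucore: <http://www.ebu.ch/metadata/ontologies/ebucore/ebucore#>
    SELECT ?programme ?title WHERE {
        ?programme a ebucore:TVProgramme ;
                   ebucore:title ?title .
    } LIMIT 10
""")

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["programme"]["value"], "-", binding["title"]["value"])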
After the project, the results will be utilized in the businesses and services provided by the partner companies Limecraft, Lingsoft and LLS and by the data providers YLE and INA. They will also be utilized in planned standardization activities: they were submitted to the EBU, private broadcasters and other agents in the media sector as recommended practices for the use cases we implemented. A significant part of the results was published open access so that anyone interested can exploit it. In addition to software and scripts, MeMAD released open benchmark and evaluation datasets for automatic speech recognition, multimodal content analysis and machine translation in the media context.
MeMAD advanced the state of the art in automatic content description by proposing a semi-automatic approach to video content description that can be applied in different contexts of use. Providing video descriptions creates new audiences by offering a verbal surrogate that anyone, not only people with disabilities, can benefit from. The automatic analysis techniques detect visual and auditory elements in multimedia and label them with pre-defined concepts, generate textual descriptions of the content and provide speech recognition. Our work on multimodal machine translation resulted in a new state of the art in image caption translation. Furthermore, our approach to document-level translation has become the de facto standard for discourse-level machine translation, and we have released pre-trained models for subtitle translation for the project's focus languages. We demonstrated the added value of developing a knowledge graph for integrating heterogeneous legacy metadata with automatic analysis results. We extended the existing EBU standard used in the media industry and proposed an extensible set of interchange formats that re-use well-known vocabularies.
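A minimal sketch of this kind of subtitle translation is shown below, using a publicly available OPUS-MT model through the Hugging Face transformers library. The model name is a public general-purpose English-Finnish model chosen for illustration, not necessarily one of the models released by the project, and the example subtitle lines are invented.

# A minimal sketch of subtitle translation with a public OPUS-MT model;
# this illustrates the approach, not the project's exact pipeline or models.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-fi"   # public English-to-Finnish model, for illustration
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

subtitles = [
    "Good evening, and welcome to the news.",
    "Heavy rain is expected along the coast tomorrow.",
]

# Translate the subtitle lines as a batch and print the output.
batch = tokenizer(subtitles, return_tensors="pt", padding=True)
generated = model.generate(**batch)
for line in tokenizer.batch_decode(generated, skip_special_tokens=True):
    print(line)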
We developed multimodal methods for the topical segmentation of multimedia content and for aligning an existing content description with the resulting segments, thus enabling access to audiovisual content at the fragment level. We developed new methods to predict the memorability of those fragments as a surrogate for assessing their importance, although a significant gap remains in generalizing them to any type and genre of audiovisual content. Finally, we proposed new methods and systems that perform named entity recognition and disambiguation on noisy transcripts or directly from speech. We developed innovative, explainable methods for extracting topics from audiovisual segments and for categorizing and enriching those segments using external information and background knowledge.
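The alignment step can be illustrated with a small sketch: given topical segments and time-coded description cues, each cue is attached to the segment(s) it overlaps in time. All segment boundaries, topics and cues below are invented for illustration and do not come from the project's segmentation models.

# A minimal sketch of aligning time-coded description cues (e.g., subtitles)
# with topical segments by temporal overlap; all data below is invented.
def overlap(a_start, a_end, b_start, b_end):
    """Length in seconds of the temporal intersection of two intervals."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

segments = [(0.0, 120.0, "opening headlines"), (120.0, 420.0, "weather report")]
cues = [(90.0, 95.0, "And now, the weather."),
        (130.0, 136.0, "Rain is moving in from the west.")]

for seg_start, seg_end, topic in segments:
    # Attach every cue whose time span overlaps this segment.
    attached = [text for cue_start, cue_end, text in cues
                if overlap(cue_start, cue_end, seg_start, seg_end) > 0.0]
    print(f"{topic}: {attached}")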