Management and Authenticity Verification of multimedia contENts

Final Report Summary - MAVEN (Management and Authenticity Verification of multimedia contENts)

Executive Summary:
Due to the ever-increasing amount of information that is continuously generated, acquired, and shared, the efficient management of multimedia files is a challenging task. The first direct demand concerns how to easily search for contents within highly populated galleries. The second relevant issue concerns how to verify that contents (and the information they convey) are genuine and thus credible. Since digital media assets are extremely volatile and can be easily altered, verifying the integrity of a document is fundamental for assuring the authenticity of the associated information.

Despite the magnitude of the problem, no technological solutions are yet available on the market for reliably performing integrity and authenticity verification of multimedia contents whose source is unknown or potentially untrusted. This gave rise to the MAVEN project, focused on the development of a suite of tools for multimedia data management and security. The project objectives were thus centered on two key concepts, "Search and Verify": MAVEN automatically searches for digital contents containing objects of interest and applies advanced forensic analysis tools to verify their integrity and authenticity. These capabilities have been developed as a unified software framework, and the project also involved the implementation of a prototype demonstrator application. More specifically, MAVEN comprised the development of a set of tools, which can be divided into forensic tools (Image Source Identification, Image Integrity Verification, and Video Integrity Verification) and search tools (Text Localization and Recognition, Spoken Keyword Detection, Face Detection and Recognition, and Object and Scene Recognition).

The full set of tools was developed and integrated into the MAVEN framework, composed of a set of C++ libraries that contain all the developed functionalities. The validation of the different tools was also performed during the project, using both compiled datasets and real data to evaluate the fulfilment of the requirements and assess the tools' suitability for real-world applications. In addition, a demonstrator application was developed to showcase the features of the MAVEN suite and to demonstrate its integrability and modularity.

The MAVEN consortium was formed by a group of four SMEs involved in business areas directly related to the search and verification of multimedia contents: AMPED (Italy), ARTHAUS (Macedonia), PLAYENCE (currently TAIGER, Austria) and its third party (Playence Spain, currently Taiger Spain, Spain), and XTREAM (Spain). The consortium also comprised three RTD performers with complementary expertise and a strong background in the technological areas related to MAVEN: CNIT (Universities of Siena and Florence, Italy), the Pattern Recognition and Applications group of the University of Cagliari (Italy), and GRADIANT (R&D centre, Spain).

MAVEN mainly targets two domains: "Security", concerning the analysis of multimedia content for law enforcement and legal purposes, and "Media", where the priority is to develop automatic search and categorization functionalities for information retrieval.

As a consequence of the technology provided by the RTDs, MAVEN will significantly increase the competitiveness of the SMEs in the project, both in their respective markets and in secondary sectors where the developed technologies can be applied. The project results will allow SMEs, law enforcement bodies, press agencies, insurance companies, and broadcasting companies, among others, to manage their multimedia assets and verify their integrity and authenticity in an efficient and scalable manner.

Project Context and Objectives:
The 21st century society is universally recognized as the information and communication society. Information is continuously generated, acquired, and shared, and a large part of it is stored within multimedia documents produced in a number of different scenarios. It must also be considered that the availability of low-cost, high-capacity storage devices makes it easy to quickly accumulate thousands of multimedia files. The efficient management of large amounts of multimedia files is therefore a challenging task: the first direct demand regards how to easily search for contents within highly populated galleries. In addition, it is well known that digital assets are extremely volatile, in the sense that digital documents can be easily edited, intentionally or unintentionally, so that their content can be modified and the conveyed information can significantly change. Digital documents are natively more prone than others to modifications and tampering; in order to make this information valuable, it is therefore fundamental to verify the integrity of the document so as to assure the authenticity of the associated information. This is the second relevant issue: how to verify that contents (and the information they convey) are genuine and thus credible. It is especially important in situations where legal decisions might be made based on the content (e.g. in forensic analysis or for the protection of intellectual property rights), but it matters in general whenever a digital document is to be recognized as having some value. Governments and national and international associations are aware that these phenomena may also have ethical, social, and cultural implications.

Despite the magnitude of the problem, no technological solutions are yet available on the market for reliably performing integrity and authenticity verification of multimedia contents whose source is unknown or potentially untrusted. Cost-effectiveness is a further constraint that must be considered. Given the large amount of multimedia contents circulating daily in personal and industrial environments, it is highly important that integrity and authenticity verification can be performed in an efficient and scalable manner. In this sense, it must be taken into account that not all multimedia contents have the same degree of importance. In fact, when checking the authenticity of multimedia documents, one is mostly interested in focusing the analysis on "interesting" patterns such as people, texts, voice, etc. This gives rise to the concept of "search and verify" that is central to the MAVEN (Management and Authenticity Verification of multimedia contENts) project: search for relevant patterns (e.g. a face appearing on CCTV), and then perform a detailed authenticity analysis with advanced forensic tools. This concept is not supported in an integrated manner by any tool available on the market.

MAVEN addressed these issues by using some of the latest technologies, combining integrity and authenticity verification tools with multimedia analysis algorithms that search for specific contents. The project objectives were thus centered on two key concepts, search and verify, integrated in a coherent manner.
MAVEN’s main goal could therefore be defined as searching and verifying multimedia contents. Within such a broad goal, and taking into account the needs and business cases of the SMEs involved, MAVEN focused on specific technical objectives, aiming at supplying the SMEs with a set of tools that improve their products in terms of forensic analysis and media content-based analysis. These objectives can be grouped into three categories:

> Verify tools objectives

>> Technical Objective 1: acquisition device identification

The objective consists of identifying the acquisition device of a given image, in order to link the image to a given camera as a demonstration of origin authenticity. Result #1 is forensic tool #1: Image Source Identification.

>> Technical Objective 2: forgery detection

The objective consists of the trustworthiness verification of image and video documents, in particular through the detection of doubly encoded contents and the exploitation of a decision fusion framework for summarizing the outputs of different integrity verification algorithms. Results #2 and #3 are forensic tool #2 (Image Integrity Verification) and forensic tool #3 (Video Integrity Verification). Forensic tool #2 is split into forensic tool #2a (informed image integrity verification) and forensic tool #2b (blind image integrity verification).

> Search tools objectives

>> Technical Objective 3: text detection and recognition

The objective consists of the detection and recognition of text within a scene, in order to automatically search and annotate text within image and video galleries. Result #4 is search tool #1: Text Localization and Extraction.

>> Technical Objective 4: human trait analysis

The objective consists of analysing human traits for the automatic detection and recognition of faces in image and video galleries and for the detection of spoken keywords in audio tracks. Results #5 and #6 are search tool #2 (Spoken Keyword Detection) and search tool #3 (Face Detection and Recognition).

>> Technical Objective 5: object and scene recognition

The objective consists of detecting particular content within image and video galleries, in particular recognizing company logos and automatically categorizing scenes according to a given content. Result #7 is search tool #4: Object and Scene Recognition.

> MAVEN Suite objective

This objective mainly consists of the integration of the individual MAVEN tools into a unique suite for the management and authentication of multimedia content. As a result, an SDK and a MAVEN demonstrator have been produced.

Taking into account the aforementioned technical and global objectives of the project, a set of milestones for assessing project progress were established:

> Milestone 1: perform a complete market analysis and background review.
> Milestone 2: perform a complete specification of requirements and overall architecture.
> Milestone 3: perform a viability assessment of the objectives established for the MAVEN tools.
> Milestone 4: completion of the different MAVEN tools and integration into the MAVEN demonstrator.
> Milestone 5: final validation of project results.

The first three milestones were planned and achieved within the first reporting period, while the final two milestones were planned and completed during the second reporting period of the project.

In order to achieve the technical objectives and milestones, and to ensure a correct alignment of the project and its results with the market and the needs of the companies, the MAVEN consortium comprised four SMEs involved in business areas directly related to the search and verification of multimedia contents: AMPED (Security market), ARTHAUS (Media market), PLAYENCE (now TAIGER, Media market) with its third party Playence Spain (now Taiger Spain), and XTREAM (Security and Media markets). The consortium also comprised three RTD performers with complementary expertise and a strong background in the technological areas related to MAVEN: CNIT (forensic tools), the Pattern Recognition and Applications group of the University of Cagliari (search tools), and GRADIANT (search tools).

In addition to the technical objectives defined for the project, it should be remarked that two side objectives, dissemination and exploitation of MAVEN progress and results, have been pursued throughout the project by both SME and RTD partners. Besides the project website, the main dissemination activities considered within MAVEN include the recording of videos describing MAVEN and its results, the presentation of the "Search and Verify" concept at international events, publications at major conferences, and attendance at industrial events.

Project Results:
The S&T results obtained in MAVEN fulfil the objectives established at the beginning of the project (see previous section). The main results are described below.

> Result #1: Image source identification: A software tool for identifying the acquisition device of a given digital image, in order to link it to a given camera model as a demonstration of origin authenticity. The tool improves the accuracy and reliability of the identification by including an invisible watermark in the image.

> Result #2: Image integrity verification: This result is composed of two sub-results:

>> Result #2a: Informed image integrity verification: A software tool aiming at localizing retouched parts of an image with respect to the unmodified one. This tool combines a keypoint detection module for the alignment of the original and retouched image and a change detection module to detect those regions that might have been altered.
>> Result #2b: Blind image integrity verification: A software tool for assessing integrity of digital images through detection of double encoding and use of a decision fusion framework combining outputs from different integrity verification algorithms. The fusion of different methods in one single tool increases the detection capabilities of the developed framework.

> Result #3: Video integrity verification: A software tool for assessing integrity of digital videos through detection of double encoding and use of a decision fusion framework combining outputs from different integrity verification algorithms. As for the Image Integrity Verification tool, the developed fusion framework increases the detection performance compared to the independent application of the used algorithms.

> Result #4: Text localization and extraction: A software tool for detection and recognition of text within a scene, in order to automatically search and annotate text in image and video galleries. The combination of the developed text detection and the improved text recognition tools increases the performance of the system in unconstrained scenarios.

> Result #5: Spoken keyword detection: A software tool for the spotting of keywords in audio tracks. This tool allows a fast and accurate detection of spoken keywords in English and Spanish audio, allowing the addition of other languages through the model training utility.

> Result #6: Face Detection and Recognition: A software tool for the automatic detection and recognition of faces in image and video galleries. The developed tools ensure an improved face detection and recognition performance, even in unconstrained settings, while meeting real-time processing requirements.

> Result #7: Object, logo, and scene recognition: A software tool for the detection of particular content within image and video galleries, in particular for the recognition of company logos and the automatic categorization of scenes. This tool allows object detection and scene recognition in unconstrained settings and supports the addition of new object types and scene categories.

In order to achieve the MAVEN objectives and obtain the expected results, the project was organized in a detailed work plan comprising 8 work packages. Project activities started with the definition of specifications by the SMEs (WP2), followed by the technological developments and integration tasks needed to achieve the MAVEN objectives (WPs 3-6). The RTD activities in these WPs were carried out by the RTD performers, although the SMEs contributed to ensure that the progress and results were in agreement with the initial requirements. Finally, validation of the MAVEN technology in realistic scenarios took place in WP7, with major involvement of the SMEs. Besides WPs 2-7, a specific work package (WP1) was allocated to project management, and another (WP8) to exploitation and dissemination activities. Each of the technical WPs (2-7) and the results obtained are described in more detail below.

**WP2

The overall objective of WP2 was to define in detail the requirements that would drive the future research work, establishing qualitative and quantitative criteria for verifying the fulfilment of project objectives. Thus, the main goals of this WP can be summarized as:

> Exhaustive review of scientific, IPR and market background of the project (T2.1)

> Formal specification of functional and technological requirements (T2.2)

> Formal definition of system architecture (T2.3)

T2.1 Background review and analysis

The background review and analysis was mainly an update of the work done in the preparation of the DoW. The preparation of D2.1 helped all the partners to lay out the overall situation of the scientific field and the current market. All partners contributed to this task by providing their knowledge in their respective fields, both for the market analysis and for the background review. More specifically, the SMEs mainly worked on the market analysis, further confirming the initial assessment of a real need for the MAVEN tools, while the RTDs updated the scientific literature and patent search, confirming their in-depth knowledge of the involved technologies.

The result of this task was deliverable D2.1, which includes a comprehensive review of scientific results, intellectual property and commercial products in the area of MAVEN, a SWOT analysis for each tool, and the public databases available for development and testing.

T2.2 Requirements specification and analysis

The preparation of the requirements specification was the main activity of the WP. The partners put considerable effort into clearly defining use cases and requirements, as well as their priority and acceptance tests, for all the tools to be developed in the project, using a common methodology that involved requirements specification templates and dedicated meetings between SMEs and RTD performers. Given the SMEs' very good knowledge of the market, the specification of requirements was done mainly by the partners themselves, without involving third parties and end-users as much as originally planned.

T2.3 Architecture definition

An initial architecture was proposed and refined in several iterations, each involving discussion among all the partners and re-assessment of the requirements. The architecture definition needed a vision shared by the partners, because the development environment, programming language and operating system (OS) were very important for a correct technical development of the project, and initially the partners were interested in different technologies, standards and environments. Nevertheless, after several technical discussions the partners agreed on a common architecture. AMPED also provided a starting point for the cross-build system (adapted from the one used internally), which served as a reference for preparing the MAVEN development environment. The final architecture specification served as the main reference for the implementation of the MAVEN tools and their integration into the MAVEN demonstrator during WP6.

The main result of tasks T2.2 and T2.3 was deliverable D2.2 which comprises the following information:

- A definition of the use cases, functionalities, interfaces, and technical specifications of the different MAVEN modules. To achieve this goal, the MAVEN team has followed the recommendations of the ISO/IEC/IEEE standard for software requirements engineering. As a result, a list of prioritized requirements (both technical and functional) has been gathered, along with acceptance tests for validating project results.

- A definition of the MAVEN overall architecture and corresponding API specifications, based on the requirements provided in T2.2 as well as the individual components, their relations, and input-output formats. The API has been designed following well-known software principles such as modularity and decoupling, abstraction, maintainability, understandability, and user friendliness, in order to ensure a correct development of the MAVEN results and ease their maintenance. The API has also been designed following a multi-layer scheme comprising 3 layers: a bottom layer formed by the different processing modules, a mid-layer encapsulating such modules into libraries, and a top-level API which provides an additional level of abstraction towards the demonstrator and future applications to be developed (an illustrative sketch of such a layered interface is given below).
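To illustrate the layering described above, the following minimal C++ sketch shows how a top-level facade can hide concrete processing modules behind a common interface. All names (maven::ITool, Result, ToolRegistry) are hypothetical and introduced only for illustration; they are not the actual MAVEN API.

```cpp
// Minimal sketch of a layered C++ API in the spirit of the MAVEN framework.
// All identifiers are hypothetical; they only illustrate the modularity and
// decoupling principles described above.
#include <map>
#include <memory>
#include <string>

namespace maven {

// Generic result returned to client applications (top layer).
struct Result {
    std::string toolName;
    double score = 0.0;      // e.g. detection confidence
    std::string details;     // tool-specific textual report
};

// Bottom/mid layer: every processing module implements the same interface,
// so client code never depends on a concrete implementation.
class ITool {
public:
    virtual ~ITool() = default;
    virtual std::string name() const = 0;
    virtual Result process(const std::string& mediaPath) = 0;
};

// Top layer: a thin facade used by the demonstrator or other applications,
// hiding which concrete libraries are loaded underneath.
class ToolRegistry {
public:
    void add(std::unique_ptr<ITool> tool) {
        std::string key = tool->name();
        tools_[key] = std::move(tool);
    }
    Result run(const std::string& toolName, const std::string& mediaPath) {
        return tools_.at(toolName)->process(mediaPath);
    }
private:
    std::map<std::string, std::unique_ptr<ITool>> tools_;
};

}  // namespace maven
```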

**WP3

In WP3, a fully functional set of software modules for the forensic analysis of image and video objects was designed and implemented, with the particular goal of addressing data authenticity in the sense of both origin and trustworthiness. In particular, WP3 comprised the development of modules for the identification of the acquisition device of a given image (T3.1: Image Source Identification), the verification of image trustworthiness, including the detection and localization of modified images or image parts (T3.2: Image Integrity Verification), and the verification of video trustworthiness, in particular the detection of doubly encoded video sequences and the identification of GOP parameters (T3.3: Video Integrity Verification).

Task T3.1 Image Source Identification

The first module was addressed with a watermarking-based approach (even though this was not originally foreseen in the work plan), due to the specific requirements of the scenario in which the forensic software module for image source identification would have to operate, as emerged in the discussions between CNIT and ARTHAUS during the scenario definition phase.

The task of Image Source Identification (T3.1) included the design and development of: 1) a device registration module, based on the extraction of the Photo Response Non-Uniformity (PRNU) pattern of a specific camera device and the calculation of its associated hash string; 2) a watermark embedding module, which inserts the PRNU hash as an imperceptible binary watermark into the photos taken by the photographer with his/her registered device; and 3) a watermark detection module, which analyses photos to determine whether they contain a specific previously embedded watermark.

During the first Reporting Period, a preliminary implementation of the image source identification module was developed (including PRNU fingerprint construction, and watermark embedding and detection for images of about 1000x1000 pixels) and tested. During the Second Reporting Period, several improvements were carried out: a solution to increase the robustness of the watermarking to image cropping was devised and implemented; the watermarking module (embedding and detection) was reworked to deal with images of any size (both large and small); preliminary tests were carried out on images provided by ArtHaus (to verify whether the new version of watermark embedding and detection works as intended); a control on file size was implemented (so that the original and watermarked images have approximately the same file size); and other enhancements were added to the module (PRNU fingerprint hashing, robustness to JPEG compression, resampling and cropping, watermarked image quality, improved processing times, etc.).
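The following hedged C++/OpenCV sketch illustrates the basic idea behind PRNU fingerprint estimation for device registration: averaging the noise residuals (image minus a denoised version) of several photos taken with the same camera. It is a simplified illustration under stated assumptions (Gaussian smoothing as a stand-in denoiser, plain averaging); the actual MAVEN module, its hashing step and the watermarking components are more elaborate and are not shown.

```cpp
// Simplified PRNU fingerprint estimation (illustrative only).
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

cv::Mat estimatePrnuFingerprint(const std::vector<std::string>& imagePaths) {
    cv::Mat accumulator;
    int used = 0;
    for (const auto& path : imagePaths) {
        cv::Mat img = cv::imread(path, cv::IMREAD_GRAYSCALE);
        if (img.empty()) continue;

        cv::Mat imgF, denoised;
        img.convertTo(imgF, CV_32F);
        cv::GaussianBlur(imgF, denoised, cv::Size(5, 5), 1.0);  // stand-in denoiser
        cv::Mat residual = imgF - denoised;                      // noise residual

        if (accumulator.empty())
            accumulator = cv::Mat::zeros(residual.size(), CV_32F);
        if (residual.size() != accumulator.size()) continue;     // skip mismatched sizes

        accumulator += residual;
        ++used;
    }
    if (used > 0) accumulator /= static_cast<float>(used);
    return accumulator;  // estimated PRNU pattern (same size as the input images)
}
```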

Task T3.2 Image Integrity Verification

The task of Image Integrity Verification included two variants: Informed Image Integrity Verification (T3.2a) and Blind Image Integrity Verification (T3.2b).

Informed Image Integrity Verification (T3.2a) comprised the development of two different modules, aiming at localizing the retouched parts of an image with respect to the unmodified one: 1) a registration module, which aligns the retouched image and its original version by means of keypoint detection; and 2) a change detection module, which identifies the regions of the to-be-analysed image that differ significantly from the corresponding original image, producing a change map. Regarding the second module, different change detection approaches were implemented, including image difference and colour difference. During the first Reporting Period, a preliminary implementation of the informed image integrity verification module was produced, including a basic image registration algorithm and comparison measures for computing the difference between images. During the second Reporting Period, many improvements were integrated: the accuracy of the module was improved to reduce false positives and false negatives in the identification of retouched regions; the module was tested on the ground truth provided by ArtHaus to assess whether the adopted change detection algorithms provide satisfactory results; new change detection algorithms performing retouch detection were developed; preliminary testing of localisation accuracy was carried out; and new maps were implemented as outputs of the module.
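As an illustration of the two-stage pipeline described above (keypoint-based registration followed by change detection), the sketch below aligns the suspect image onto the original with ORB keypoints and a RANSAC homography, then thresholds the absolute difference into a change map. The detector, matcher and thresholds are illustrative assumptions, not the MAVEN settings; 3-channel BGR inputs depicting the same scene are assumed.

```cpp
// Informed change-map sketch: keypoint registration + image difference.
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat informedChangeMap(const cv::Mat& original, const cv::Mat& suspect) {
    // 1) Keypoint-based registration (ORB + brute-force Hamming matching).
    auto orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kpO, kpS;
    cv::Mat descO, descS;
    orb->detectAndCompute(original, cv::noArray(), kpO, descO);
    orb->detectAndCompute(suspect, cv::noArray(), kpS, descS);

    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(descS, descO, matches);

    std::vector<cv::Point2f> ptsS, ptsO;
    for (const auto& m : matches) {
        ptsS.push_back(kpS[m.queryIdx].pt);
        ptsO.push_back(kpO[m.trainIdx].pt);
    }
    if (ptsS.size() < 4) return cv::Mat();  // not enough matches to register

    cv::Mat H = cv::findHomography(ptsS, ptsO, cv::RANSAC, 3.0);
    cv::Mat aligned;
    cv::warpPerspective(suspect, aligned, H, original.size());

    // 2) Change detection: absolute difference thresholded into a binary map.
    cv::Mat diff, gray, changeMap;
    cv::absdiff(original, aligned, diff);
    cv::cvtColor(diff, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, changeMap, 30, 255, cv::THRESH_BINARY);
    return changeMap;  // white pixels mark potentially retouched regions
}
```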

Blind Image Integrity Verification (T3.2b) addresses image trustworthiness verification in a blind fashion, when only the to-be-analysed image is available for the forensic analysis. Since a tampered image can be subject to a large number of modifications, it is essential to rely on a number of different tools to perform the integrity verification. Furthermore, the vast majority of digital images available nowadays are stored in JPEG format. Hence, this task comprises the development of the following tools:

- Cut-and-paste forgery localization based on traces of double-aligned JPEG compression
- Cut-and-paste forgery localization based on JPEG ghosts
- Copy-move detection based on patch-match algorithm

During the first Reporting Period, a preliminary implementation of the blind image integrity verification module was developed. During the Second Reporting Period, several improvements were carried out, including improved versions of the existing modules (e.g. the clone-detection algorithm) and the implementation of new ones (e.g. the decision fusion engine). The analysis and development of forgery localization modules based on JPEG features was also addressed during this period, along with the preliminary testing of the new modules.
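For the JPEG-ghost tool listed above, the following hedged sketch conveys the core idea: re-compress the suspect image at a range of quality factors and compute a local difference map for each; a region originally saved at a given quality shows an unusually low difference (a "ghost") at that quality. The window size and quality range are illustrative choices, not the MAVEN configuration.

```cpp
// JPEG-ghost map sketch: recompress at several qualities, compare locally.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Mat> jpegGhostMaps(const cv::Mat& image) {
    std::vector<cv::Mat> maps;
    for (int q = 50; q <= 95; q += 5) {
        std::vector<uchar> buffer;
        cv::imencode(".jpg", image, buffer, {cv::IMWRITE_JPEG_QUALITY, q});
        cv::Mat recompressed = cv::imdecode(buffer, cv::IMREAD_COLOR);

        cv::Mat diff, diffGray, diffF, localMean;
        cv::absdiff(image, recompressed, diff);
        cv::cvtColor(diff, diffGray, cv::COLOR_BGR2GRAY);
        diffGray.convertTo(diffF, CV_32F);
        diffF = diffF.mul(diffF);                                   // squared error
        cv::boxFilter(diffF, localMean, CV_32F, cv::Size(16, 16));  // local average

        maps.push_back(localMean);  // one ghost map per tested quality factor
    }
    return maps;  // low values in a region hint at a matching prior compression
}
```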

Task T3.3 Video Integrity Verification

The task of Video Integrity Verification (T3.3) focuses on the detection of inter-frame forgeries, relying on a double compression detection algorithm. The implemented functionalities include the detection of double encoding using a method called Variation of Prediction Footprint. Preliminary tests on synthetically generated double-compressed videos were carried out during the First Reporting Period, measuring AUC values as the metric for double compression detection and the accuracy of GOP estimation. During the Second Reporting Period, the implementation of the video forgery localization algorithm was completed and tested.

Task T3.4 Ethical Issues Monitoring

The SME with the most expertise in forensic technologies (AMPED) has been in charge of monitoring the work performed and the preliminary results obtained in this WP in order to spot potential dual-use issues. AMPED monitored possible ethical issues throughout WP3, the main point investigated being the potential dual use of the technologies. No new potential dual-use conflicts have been identified beyond those already spotted in the DoW.

In practice, the only potential issue we can imagine with our technology, albeit a very remote one, relates to the use of the results in an anti-forensic key. Suppose, for example, that some government intelligence agency releases fake images on the web as testimony of an event. With our tools available, they could post-process the images until all traces detected by our tools are covered, making the forgery more difficult to discover.

**WP4

In WP4, a series of modules for the automatic analysis of a scene (taken from an image or video sequence) have been designed and implemented. In particular, two tasks have been carried out: Text Detection and Recognition (T4.1) and Object & Scene Recognition (T4.2). The resulting implementation distinguishes four components: Text Detection, Text Recognition, Scene Categorization, and Object and Logo Recognition.

Task T4.1 Text detection and recognition

In Text Detection and Recognition (T4.1), preliminary versions of the text detection and recognition modules were implemented and validated during the first reporting period. A preliminary, fully functional version of the Text Detection module was tested (according to the requirements specification) on the ICDAR 2013 database. This preliminary version of the algorithm achieved an F1-score of 53%, whereas the target was 70%. During the Second Reporting Period, the final version of the text detection was implemented: a significant number of modifications and improvements of the original methodology were carried out, and the validation scheme was reviewed to evaluate the performance of the implementation more accurately. As stated above, the target was an F1-score of 70% for the Text Localization Task on the "ICDAR 2013 Robust Reading Competition Challenge 2 (Reading Text in Scene Images)" database, and the final average detection success rate obtained was 78.53%.

Regarding the text recognition module, a preliminary version was tested (according to the requirements specification) on the ICDAR 2013 database, achieving a Word Recognition Rate (WRR) of 50.6%. It is important to remark that the target figure (45%) for this module was already achieved during the first reporting period. The final version of the Text Recognition module achieved a WRR of 56.9% with the standard configuration; by using the slower combined mode of the Tesseract OCR engine, a WRR of 66% can be reached.
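As a minimal illustration of using the Tesseract OCR engine mentioned above on a localized text region, the sketch below crops a detected box and runs recognition on it. The language, page segmentation mode and pre-processing are illustrative assumptions; the MAVEN module combines its own detection stage with an improved recognition step that is not shown here.

```cpp
// Recognize one detected text box with Tesseract (illustrative usage only).
#include <opencv2/opencv.hpp>
#include <tesseract/baseapi.h>
#include <memory>
#include <string>

std::string recognizeRegion(const cv::Mat& bgrImage, const cv::Rect& textBox) {
    cv::Mat crop = bgrImage(textBox).clone();
    cv::cvtColor(crop, crop, cv::COLOR_BGR2RGB);     // Tesseract expects RGB ordering

    tesseract::TessBaseAPI ocr;
    if (ocr.Init(nullptr, "eng") != 0) {              // default tessdata path, English
        return {};
    }
    ocr.SetPageSegMode(tesseract::PSM_SINGLE_LINE);   // a detected box is usually one line
    ocr.SetImage(crop.data, crop.cols, crop.rows,
                 /*bytes_per_pixel=*/3,
                 /*bytes_per_line=*/static_cast<int>(crop.step));

    std::unique_ptr<char[]> text(ocr.GetUTF8Text());  // caller must delete[] the result
    ocr.End();
    return text ? std::string(text.get()) : std::string();
}
```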

Task T4.2 Object and Scene Recognition

Object and Scene Recognition (T4.2) comprises a series of features that include: recognition of a company logo within image and video galleries, recognition of a certain object within an image or video gallery, and automatic categorization of the scene shown in an image or video gallery. Preliminary modules for Object and Scene Recognition were developed during the First Reporting Period. The Object and Logo Recognition module was initially evaluated, according to the requirements specification, on the Flickr32 database; the preliminary version of the module achieved a recall of 59.37% (not far from state-of-the-art algorithms, which are around 61%). The Scene Recognition module was evaluated, according to the requirements specification, on the SUN database, achieving an accuracy of around 40% on the worst category (the target figure was 50%); the average accuracy of this preliminary version was also measured and is around 68.6%. During the second Reporting Period, both modules were reviewed and completed, including the following achievements:

> The final version of the Scene Recognition algorithm was implemented. The module is fully functional and the implementation has been tested using the 15 classes of the SUN dataset, where the target performance was achieved. The algorithm was also tested on two separate datasets provided respectively by ARTHAUS and PLAYENCE, both including indoor as well as outdoor scenes. After the tests, several improvements were made to the algorithm: in particular, alternative configurations of the classification algorithm were evaluated (to improve the classification results) and optimizations were introduced in the code to allow a faster execution of the scene recognition algorithm.

> The final version of the Object and Logo Recognition module was implemented. The module is fully functional and is actually composed of two sub-modules: one for logo detection and one for object classification. The logo detection sub-module can also be used for the detection of specific objects (e.g. a particular model of chair or sofa). The logo sub-module has been tested on the Flickr32 database, whereas the object classification sub-module has been tested on the PASCAL VOC database.

> Regarding the Object Classification module, a manual pruning of the PASCAL dataset was performed due to the existence of images in which the target objects are smaller than the minimum object size specified in the requirements for this module. This also prevented a direct comparison with state-of-the-art performance, because performance on the official test dataset can only be obtained through the PASCAL VOC Evaluation Server. After the test, a mean average performance of 50% across all classes, and 81.83% on the best class, was reported. Additional tests were carried out by splitting the images into blocks and using the blocks containing the target object as positive samples to train the module (the blocks not containing the object are used as negative samples). The positive outcomes of the evaluation confirmed that this approach, which only relies on the way the operator uses the module and does not affect the implemented algorithm, makes the module more effective in recognising objects that are not in the foreground and are generally small with respect to the overall size of the picture.

> In the final version of the Logo Detection algorithm, a rejection mechanism was added. During evaluations on the Flickr32 database, the implemented algorithm exhibited a rejection rate of around 42.8% (versus 37.5% for state-of-the-art algorithms), which leads to a final error rate of 2.7%; the error rate of state-of-the-art algorithms is around 1.5%. The algorithm has also been evaluated on a dataset provided by PLAYENCE. A manual inspection of this dataset revealed an imbalance between the very high resolution of the logos provided as models and the much lower resolution of the test images. In order to cope with this problem, an alternative configuration (relying on a different keypoint extractor) has been added to the module to meet these new demands; a minimal sketch of this keypoint-matching approach is given after this list.

> The interfaces of the modules fully abide by the specifications provided in D2.2 (Requirements and Architecture Specification).
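As referenced in the logo detection item above, the following hedged sketch shows a generic keypoint-matching approach with a simple rejection step: match descriptors between a logo model and a test image and reject the image when too few matches survive the ratio test. The AKAZE detector and the thresholds are illustrative assumptions, not the configuration used in MAVEN.

```cpp
// Keypoint-based logo presence check with a simple rejection criterion.
#include <opencv2/opencv.hpp>
#include <vector>

bool logoPresent(const cv::Mat& logoModel, const cv::Mat& testImage,
                 int minGoodMatches = 12) {
    auto detector = cv::AKAZE::create();               // binary (MLDB) descriptors
    std::vector<cv::KeyPoint> kpL, kpT;
    cv::Mat descL, descT;
    detector->detectAndCompute(logoModel, cv::noArray(), kpL, descL);
    detector->detectAndCompute(testImage, cv::noArray(), kpT, descT);
    if (descL.empty() || descT.empty()) return false;  // reject: nothing to match

    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(descL, descT, knn, 2);

    int good = 0;
    for (const auto& pair : knn) {
        if (pair.size() == 2 && pair[0].distance < 0.75f * pair[1].distance)
            ++good;                                     // Lowe's ratio test
    }
    return good >= minGoodMatches;  // otherwise the test image is rejected
}
```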

**WP5

In WP5, a set of tools and algorithms for the automatic analysis of human traits in images and videos has been implemented. In particular, this work package includes the development of a face detection and recognition module (T5.1 and T5.2) and a Spoken Keyword Detection module (T5.3).

Tasks T5.1-T5.2 Face Detection and Recognition

These tasks address the analysis of images and video frames using computer vision techniques to detect the presence of faces and assess their similarity with respect to a set of face models. During the first Reporting Period, the databases needed to train the Face Detection module were analysed and acquired, and a testing environment and the preliminary module were developed. The module was designed to use a cascade of classifiers: starting from the first classifier, if the input image is identified as a face by a classifier of the cascade, the image proceeds to the next one; otherwise it is classified as a "non-face" and the evaluation of that image ends. When the input image reaches the last classifier of the cascade and it gives a positive result, the image is classified as a face. Each classifier in the cascade is a boosted classifier, i.e. an ensemble of "weak" classifiers that, combined, form a robust detector. This cascade of classifiers has been trained using the FDDB database. Testing experiments were conducted to estimate the performance of this preliminary module by obtaining the ROC curve on the FDDB database.
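A minimal sketch of running a cascade of boosted classifiers, in the spirit of the detection scheme described above, is given below using OpenCV's CascadeClassifier. The cascade file is an assumption (a stock OpenCV model), not the MAVEN detector trained on FDDB, and the detection parameters are illustrative.

```cpp
// Face detection with a pre-trained cascade of boosted classifiers.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Rect> detectFaces(const cv::Mat& bgrImage) {
    cv::CascadeClassifier cascade;
    if (!cascade.load("haarcascade_frontalface_default.xml")) {
        return {};  // model file not found
    }
    cv::Mat gray;
    cv::cvtColor(bgrImage, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);

    std::vector<cv::Rect> faces;
    // Each window is passed through the cascade: early stages discard obvious
    // non-faces, and only windows accepted by every stage are returned.
    cascade.detectMultiScale(gray, faces, /*scaleFactor=*/1.1,
                             /*minNeighbors=*/4, /*flags=*/0, cv::Size(40, 40));
    return faces;
}
```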

Also during the First Reporting Period, different types of features, such as LBP and BSIF, were tested for the Face Recognition module, and the preliminary module for face recognition was developed. The module estimates the similarity between the input face and a set of stored templates. Testing experiments were conducted to estimate the performance of this preliminary module by calculating the Rank-N rate on the NIST Special Database 32. In addition, during this first period, evaluation experiments were conducted: the Face Detection module obtained a True Positive Rate of 0.53 at 500 False Positives on the target datasets, almost reaching the target figures.

During the Second Reporting Period, a series of improvements were carried out in both Face Detection and Face Recognition. In the first case, the same scheme of a cascade of "weak" classifiers was used, but with added complexity in order to achieve a better performance: in particular, a more complex structure was obtained by increasing the number of classification layers in the cascade. To be able to do so, and to ensure a richer training, a varied set of databases (as specified in deliverable D5.2) was selected. The evaluation of the final version of the detector was carried out following the FDDB procedure, achieving a figure of merit of 0.69@400FP, which is beyond the objective of 0.65@500FP. In addition, the final implementation of the tool was optimized to increase its processing speed and was completely integrated into the MAVEN framework and the Demonstrator Application. Regarding Face Recognition, during the Second Reporting Period the same recognition scheme (BSIF-based) was used; in order to increase the recognition capabilities of the final version of the module, a better tuning of the parameters was performed. A normalization size of 128x128 pixels was maintained, as was the pre-processing stage consisting of gamma correction followed by Difference of Gaussians (DoG) filtering. The evaluation of the final version of the recognizer was carried out following the FRGC protocol, achieving a Face Verification Rate of more than 80% at a 0.001 False Alarm Rate, which exceeds the target figures set for the module.
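The pre-processing chain reported for the recognition module (128x128 normalization, gamma correction, Difference of Gaussians filtering) can be sketched as follows. The gamma value and Gaussian sigmas are illustrative assumptions, and the BSIF feature extraction that follows in MAVEN is not shown.

```cpp
// Face pre-processing sketch: resize, gamma correction, DoG filtering.
#include <opencv2/opencv.hpp>

cv::Mat preprocessFace(const cv::Mat& faceBgr, double gamma = 0.4) {
    cv::Mat gray, norm, gammaCorrected, g1, g2, dog;
    cv::cvtColor(faceBgr, gray, cv::COLOR_BGR2GRAY);
    cv::resize(gray, norm, cv::Size(128, 128));         // normalization size

    norm.convertTo(norm, CV_32F, 1.0 / 255.0);          // scale to [0, 1]
    cv::pow(norm, gamma, gammaCorrected);               // gamma correction

    cv::GaussianBlur(gammaCorrected, g1, cv::Size(0, 0), 1.0);
    cv::GaussianBlur(gammaCorrected, g2, cv::Size(0, 0), 2.0);
    dog = g1 - g2;                                       // Difference of Gaussians
    return dog;  // input to the feature extraction stage (e.g. BSIF)
}
```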

Task T5.3 Spoken Keyword Detection

Spoken Keyword Detection (T5.3) comprises the search and detection of selected keywords in audio sequences, using low level features extracted from audio segments as well as complex phonetic models.

During the first Reporting Period, the databases needed to train the module were analysed and acquired (in particular, TIMIT for English and Albayzin for Spanish). A preliminary module for spoken keyword detection was developed during this period, able to train acoustic models from a phonetically annotated audio database and to look for single keywords in an audio file. In order to add support for audio streams in successive versions of the module, the HMM (Hidden Markov Model) handling software was adapted to handle data streams instead of finite inputs of known size. Evaluation experiments were also conducted to test the performance of the module: a set of acoustic models was trained using the training subset of TIMIT, and the Figure of Merit (FOM) for all the words in the testing subset of TIMIT was calculated using different configurations of the audio parameterization. The average FOM for all the words in the testing subset of TIMIT is 0.4682 for the best audio configuration, which is comparable to the results of other spoken keyword detection systems reported in the literature for this database. In addition, the preliminary module meets the processing time requirement, with an RTF (Real Time Factor) of 7.4694 for the best configuration.

During the Second Reporting Period, the detection scheme was refined, taking into account the discrimination capability of the phonetic models in the final decision. Support for Spanish was added as well, and the performance of the module for this language was tested using the Albayzin database. In addition, support for MP3 files and audio streams was included in the final version of the module. Finally, an optimized method to model words with multiple phonetic transcriptions was implemented, significantly decreasing the search time for such words. At the end of WP5, a complete spoken keyword detection module was available, with support for both files (WAV and MP3) and audio streams. Searches for Spanish or English words can be conducted several times faster than real time. The average Figures of Merit on the tested databases (TIMIT for English and Albayzin for Spanish) are 0.5754 and 0.6438 respectively for all the words in the corpora (0.7082 and 0.7235 respectively for words longer than 4 phonemes).

T5.4 - Ethical issues monitoring

XTREAM and PLAYENCE, the SMEs which will exploit the tools produced in this work package, have been reviewing the work performed by GRADIANT and UNICA in order to ensure compliance with ethical regulations. During the First Reporting Period, deliverable D5.3 "Ethical clearance" was completed. It comprises the authorizations provided by the national data protection authorities to the MAVEN partners involved in spoken keyword detection and in face detection and recognition.

The possible collaboration with the FP7 project PRIPARE (PReparing Industry to Privacy-by-design by supporting its Application in REsearch), identified during RP1, ultimately did not take place: after receiving the cooperation proposal submitted by MAVEN's Project Coordinator at the end of the first reporting period, the PRIPARE coordinators did not encourage the selection of MAVEN for collaboration.

**WP6

The overall objective of WP6 was to build the MAVEN framework integrating all the different modules, and to design and implement the MAVEN prototype, with the purpose of demonstrating the different capabilities developed in WP3, WP4 and WP5. The particular objectives included developing the necessary interfaces to use and validate the system, integrating all the modules developed in the different work packages, and performing a preliminary validation of the MAVEN tools (from the point of view of integration) prior to the exhaustive testing in WP7.

During the first reporting period, the activity in WP6 was planned to deal only with the first objective (development of interfaces, T6.1). Although it was planned to begin in M9, the activities in WP6 started earlier (in M6) in order to coordinate the development of the different modules in WPs 3, 4 and 5 and allow a proper and easier integration. More specifically, during the First Reporting Period an API was designed following the specifications provided in "D2.2 – Requirements and architecture specification", to be used as a basis for all modules from the beginning of the development, and early support was provided for the future integration of the forensic tools. The API implementation was shared with all the partners as a repository on a server managed by GRADIANT, so that all partners could both download and update its contents to accommodate the needs of all the modules. Repositories for the different modules were created and the development framework was established.

During the Second Reporting Period, the design proposed during T2.3 was tested, and the interfaces included in the API of the MAVEN framework were modified and enhanced according to the needs of both the integration of the different tools and the potential client applications. With the prototype presented in D6.1 as a starting point, the final version of the MAVEN demonstrator application was designed and developed during this reporting period. This final version meets the requirements elicited during T2.3 and has been validated accordingly. During the design and implementation of the demonstrator application, the different tools implemented by the RTDs were integrated into a single library; this integration process involved some discussion and analysis, and small modifications of the different APIs were carried out. In addition, using the successive versions of the MAVEN demonstrator application as a validation platform, the overall capabilities of the different tools (integrated in a single library) were tested. The validation also detected some unforeseen issues with some of the operations and helped the integration process (T6.2-4).

At the conclusion of this work package, a complete, fully functional version of the MAVEN SDK has been released, starting from the early versions produced during RP1 and following the specification presented as part of WP2, and the final version of the MAVEN demonstrator application has been successfully implemented, meeting the requirements specified during T2.3. The demonstrator application includes most of the MAVEN tools' functionalities and operations, allowing image and video contents to be processed.

**WP7

The objective of WP7 was to conduct exhaustive testing of the MAVEN tools once the prototype was finalized, and to evaluate the test results in order to ensure that the final system was satisfactory in terms of functionality and usability, according to the requirements defined and agreed during WP2.

Task T7.1 Definition of the test plan

In this task a methodology for the realization of the tests and the evaluation of the results was defined. The requirements produced in WP2 were the starting point for the test plan, which had to evaluate the performance and the usability of the MAVEN system.

In addition, a detailed test schedule was assembled and agreed upon to guide the SMEs' test execution and the related RTD support, with a clear identification of the consortium members responsible for the execution of every defined task.

Detailed test plans were created for each tool. Such test plans included validation protocols, specific datasets for the evaluation, definition of training and testing data (whenever necessary), and key performance measures.

T7.2 Test performance

In this task the SMEs tested the different MAVEN modules, according to the Test Plan defined in T7.1:

> AMPED
* Forensic tool #2b: Blind Image Integrity Verification module
* Forensic tool #3: Video Integrity Verification module

> ARTHAUS
* Forensic tool #1: Image Source Identification
* Forensic tool #2a: Informed Image Integrity Verification module
* Search tool #4: Object and scene recognition

> PLAYENCE
* Search tool #1: Text Localization and Extraction module
* Search tool #2: Spoken Keyword Detection module
* Search tool #3: Face Detection and Recognition module
* Search tool #4: Object and Scene Recognition module

> XTREAM
* Search tool #1: Text Localization and Extraction module
* Search tool #2: Spoken Keyword Detection module
* Search tool #3: Face Detection and Recognition module
* Search tool #4: Object and Scene Recognition module

The RTD partners (GRADIANT, CNIT and UNICA) focused their efforts on supporting the SMEs in the deployment of the test-bed environments.

T7.3 Test evaluation

The results of the performed tests were analysed internally by each SME, assessed against the requirements defined in D2.2, and reported in deliverable D7.1. This task was crucial for the SMEs' evaluation of the commercial potential of the MAVEN suite, modifying or confirming the strategies to be followed in the final Exploitation Plan.
As a result of this validation stage, the developed modules were updated to incorporate the necessary improvements and corrections, resulting in the final set of MAVEN tools that meets the project requirements and the SMEs' needs.

Therefore, the main conclusions of D7.1 support, in general, the fulfilment of the project requirements defined in WP2: there is room for improvement in future releases, but the minimum functional and performance thresholds established at the time were met.

**Milestones

The progress of the different Work Packages and the achievement of results within MAVEN were evaluated against a set of milestones established in the DoW. The achievement of these milestones enabled a continuous validation of the project's objectives, as explained below for each of the two reporting periods:

RP1 milestones

During the first reporting period (M1-M9), the project had three main objectives which correspond to the first three milestones established in the planning. The first two objectives were addressed in WP2, whereas the third objective was addressed in WPs 3, 4 and 5. These objectives are explained below.

* Milestone 1: perform a complete market analysis and background review. Through the work planned in task 2.1 a thorough review of existing scientific results, intellectual property and commercial products in the area of MAVEN was provided, for completing and updating the analysis provided in sections B1.1 and B1.2 of the DoW. A SWOT for each MAVEN tool was produced. The final result was collected in Deliverable D2.1 “Background review and market analysis”, which served as the basis for the subsequent work.

* Milestone 2: perform a complete specification of requirements and overall architecture. This objective is directly related to the work planned in tasks 2.2 and 2.3. Task 2.2 dealt with the specification and analysis of the practical requirements to be pursued for the MAVEN tools, comprising both functional and non-functional (e.g. technical) requirements. A crucial part of this objective was to clearly prioritize the requirements and specify the target performance values (Key Performance Indicators) and acceptance tests that were used later in WP7 to validate the results of the project. On the other hand, task 2.3 dealt with the definition of the overall architecture of MAVEN based on the requirements provided in T2.2 as well as the individual components, their relations, and input-output formats. This architecture was specified at a high level, as the low-level architecture of each component was addressed in WP3, WP4, and WP5. The final results of tasks 2.2 and 2.3 were collected in deliverable D2.2, which constituted the benchmark document against which project progress and achievements were referenced.

* Milestone 3: perform a viability assessment of the objectives established for the MAVEN tools. The objective was to establish a checkpoint during the project at which the performance of the MAVEN tools could be initially evaluated, verifying the degree of fulfilment of the requirements, or the viability of fulfilling during the project timeframe those requirements not yet met. This checkpoint was planned for M9, at the end of the first reporting period, since by that time preliminary versions of all MAVEN modules would have been produced and tested. Hence, the work packages concerned by this objective were WPs 3, 4 and 5, where the MAVEN tools were developed by the RTD performers. The corresponding deliverables in which the results were checked are D3.1, D4.1 and D5.1.

In order to ensure the overall success of the project, the achievement of milestones 1 and 2 required a strong involvement of the participating SMEs, since the output of the related tasks set the basis for the work performed by the RTD performers during the remainder of the project. The RTD performers were, in fact, mainly responsible for achieving milestone 3.

RP2 milestones

During the second reporting period (M10-M24), the project had two main objectives, which corresponded to the two final milestones (MS4 and MS5) established in the planning. The first objective was addressed in WPs 3-6, whereas the second objective was addressed in WP7. These objectives are explained below:

* Milestone 4: completion of the different MAVEN tools and integration into the MAVEN demonstrator. To this aim, the preliminary developments accomplished during RP1 for forensic analysis (WP3 – deliverable D3.1), object and scene recognition (WP4 – deliverable D4.1) and human trait analysis (WP5 – deliverable D5.1) were used as a starting point, and the issues encountered in these preliminary developments were taken into account to improve the final tools. The final MAVEN tools, completed in tasks T3.1, T3.2, T3.3, T4.1, T4.2, T5.1, T5.2 and T5.3, are described in the corresponding deliverables (D3.2, D4.2 and D5.2 respectively).

Following the recommendations made during the first review meeting, each of these deliverables includes a section describing which modules within the MAVEN tools rely on open-source external libraries and which ones correspond to the RTD partners' own developments.

Besides their completion, Milestone 4 also comprises the integration of the final MAVEN tools into a functional demonstrator. The work carried out towards this objective was framed within WP6 (tasks T6.1, T6.2, T6.3 and T6.4). Throughout RP2, two major goals had to be achieved: 1) the development of the MAVEN demonstrator (T6.1) and 2) the integration of the MAVEN tools into that demonstrator (T6.2, T6.3 and T6.4), with two deliverables describing the preliminary and final versions of the functional demonstrator (D6.1 and D6.2 respectively).

* Milestone 5: final validation of project results. Validation of the MAVEN tools and system prototype was mainly conducted in WP7 (results included in deliverable D7.1), although some preliminary testing was also carried out in the development and integration WPs (WP3, WP4, WP5 and WP6). Within WP7, taking into account the requirements defined in WP2 and the SME use cases, test plans for each MAVEN tool and the system prototype were proposed by the involved SMEs (task T7.1) with collaboration from the RTDs. Given such validation plans, performance and encountered issues were analysed and discussed, and the fulfilment of the technical and functional requirements defined in WP2 was assessed (tasks T7.2 and T7.3).

** Dissemination and exploitation

Besides the S&T results of MAVEN, two important side objectives of the project should also be remarked: dissemination (T8.2) and exploitation and IPR management of project results (T8.1). The outcome of these tasks was presented in deliverables D8.4 (MAVEN videos), and D8.3 and D8.5 (lists of dissemination and exploitation activities). The most important dissemination actions achieved in WP8 are listed below:

* Presentation and demonstration of MAVEN in a major international conference: EU Project Track and demo session at ICME 2015.
* Publication of a co-authored paper at an international conference promoting the concept of "Search and Verify": the MAVEN paper at ICME 2015.
* Preparation of MAVEN videos, featuring an overall description of the project and an overview of the MAVEN suite.
* Participation in industrial events, where MAVEN project and results have been presented (among others: Meeting of ENFSI, LawTech Europe Congress, S-FIVE Workshop, V Argentinean Technology and Justice Congress 2015, South Summit 2015).

Regarding exploitation, it should be highlighted that the image integrity verification tools and the decision fusion framework developed within MAVEN have already been integrated into AMPED’s AUTHENTICATE product.

Potential Impact:
** Summary of MAVEN results

The MAVEN solution is composed of seven different results, which can be divided into forensic and search tools: 1) Forensic tool #1: Image source identification, 2) Forensic tool #2: Image integrity verification, 3) Forensic tool #3: Video integrity verification, 4) Search tool #1: Text localization and extraction, 5) Search tool #2: Spoken keyword detection, 6) Search tool #3: Face detection and recognition, 7) Search tool #4: Object and scene recognition.

The MAVEN results are therefore a combination of tools that, individually or as a whole, help close the gap between finding content and verifying its integrity. On the one hand, the suite provides tools for searching for interesting patterns in multimedia assets; this capability allows: 1) reducing the time spent looking for interesting patterns, 2) performing statistical analysis, and 3) determining trends, among others. On the other hand, the forensic tools contribute to improving the quality of the information considered by detecting manipulated files and determining their source of origin.

The main benefit of the MAVEN suite over existing solutions is its comprehensiveness, since MAVEN integrates seven different technologies into a unique "toolbox", providing both efficient search capabilities and solutions for guaranteeing the authenticity and integrity of digital contents. In addition, the use of robust technologies at the core of both the search and the forensic tools contributes to improving the competitiveness of the SMEs in the consortium.

The output of MAVEN is a set of C++ libraries that can easily be integrated into and used by a final application, as shown with the development of the MAVEN demonstrator application featuring a simplified client-server architecture based on web-service communication. The main features of the MAVEN solution are the following:

> Simplicity of integration and ease of use. The MAVEN solution was specifically designed with a strong focus on ease of integration and flexibility. Thanks to its simple API, the different tools and operations can be used and combined to create a higher-level API that allows more complex processing (a minimal, hypothetical sketch of this kind of composition is given after this list). In addition, it can be integrated as part of a stand-alone application or provide its functionalities as a service by incorporating them in a server-based application (as shown with the demonstrator application). MAVEN tools can be easily integrated into the current systems of an organization; this has been especially true for the forensic tools, which have already been integrated and released with AMPED AUTHENTICATE.

> Applicability to a large amount of multimedia content. Due to their scalability and ease of use, the MAVEN tools can be integrated as part of data processing chains in order to perform fast, batch processing of large amounts of multimedia information.

> Capability for detecting content modifications and alterations affecting specific patterns (e.g. faces) contained in multimedia contents, by combining the different search and forensic tools.

> Applicability to different types of multimedia data. MAVEN can be applied to image, audio and video contents provided in different formats.
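
The following minimal C++ sketch illustrates the kind of composition described in the first feature above: tools exposing a small, uniform interface can be chained into a higher-level “search and verify” operation. All type and function names used here (ToolResult, MediaTool, searchAndVerify, etc.) are hypothetical placeholders introduced for illustration only and do not correspond to the actual MAVEN API.

// Minimal, hypothetical sketch of combining a search tool and a forensic tool
// behind a common interface; names do not correspond to the real MAVEN API.
#include <iostream>
#include <string>
#include <vector>

// Common result type returned by every tool (hypothetical).
struct ToolResult {
    std::string toolName;
    bool positive;      // e.g. "face found" or "tampering detected"
    double confidence;  // score in [0, 1]
};

// Common interface shared by search and forensic tools (hypothetical).
class MediaTool {
public:
    virtual ~MediaTool() = default;
    virtual ToolResult analyze(const std::string& mediaPath) const = 0;
};

// Illustrative stand-in for a search tool.
class FaceSearchTool : public MediaTool {
public:
    ToolResult analyze(const std::string& mediaPath) const override {
        // A real tool would run face detection/recognition on mediaPath.
        return {"face_search", true, 0.91};
    }
};

// Illustrative stand-in for a forensic tool.
class ImageIntegrityTool : public MediaTool {
public:
    ToolResult analyze(const std::string& mediaPath) const override {
        // A real tool would run forgery-detection filters on mediaPath.
        return {"image_integrity", false, 0.12};
    }
};

// Higher-level operation: run the search tool first and, only on a hit,
// verify the same asset with the forensic tool.
std::vector<ToolResult> searchAndVerify(const MediaTool& search,
                                        const MediaTool& forensic,
                                        const std::string& mediaPath) {
    std::vector<ToolResult> results;
    results.push_back(search.analyze(mediaPath));
    if (results.back().positive) {
        results.push_back(forensic.analyze(mediaPath));
    }
    return results;
}

int main() {
    FaceSearchTool search;
    ImageIntegrityTool forensic;
    for (const auto& r : searchAndVerify(search, forensic, "asset_0001.jpg")) {
        std::cout << r.toolName << ": positive=" << r.positive
                  << " confidence=" << r.confidence << '\n';
    }
    return 0;
}

Because every tool in this sketch exposes the same small interface, higher-level operations such as a combined “search and verify” pass, or batch processing over a directory of assets, can be written once without knowledge of the individual tools' internals; the same pattern applies whether the libraries are embedded in a stand-alone application or exposed behind a web service, as in the demonstrator.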

** Potential impact

The main outcome of MAVEN is the strengthening of the competitiveness of the SME consortium through the development of innovative technologies for authenticity verification and search of multimedia contents, overcoming the performance limitations of currently available technologies. The MAVEN tools contribute to improving the competitiveness of the European Media and Security industries, which now have at their disposal a toolset of technologies for optimizing their business processes. Moreover, through the forensics technologies developed in MAVEN for estimating the reliability of multimedia contents, the project has started to contribute to improving the transparency and effectiveness of communications in our society.

The technology already provided by the RTDs will improve the software solutions owned by each SME by integrating features from the MAVEN suite that are in general more advanced and more powerful than the current ones. This will definitely lead to a better positioning of the SMEs in their respective markets. This is actually the case for AMPED’s AUTHENTICATE: image integrity verification and the decision fusion framework have been released in the official version of the product, and are currently available on the market. Furthermore, MAVEN will enable the SME members to expand their market fields and expectations.

MAVEN has therefore the potential to significantly change the way that many companies and organizations manage their multimedia assets. The results of MAVEN will allow SMEs, law enforcement bodies, press agencies, insurance companies, and broadcasting companies, among others, to manage their multimedia contents and verify their integrity and authenticity in an efficient and scalable manner. The SMEs participating in the project will play an important role in maximizing the impact of the developed technologies, as explained below.

* MAVEN targeted primary markets

Results obtained in MAVEN are mainly targeted to two domains:

- “Security” domain, concerning the analysis of multimedia content for law enforcement and legal purposes. One of the main priorities is to prove (or, eventually, disprove) the authenticity of the documents under analysis. Law-enforcement bodies and courtrooms represent the two main end-users.

- “Media” domain. The priority is to develop automatic search and categorization functionalities for fast information retrieval from large digital archives. End-users are represented by companies and bodies that need to manage large digital libraries.

Consequently, the SMEs participating in MAVEN plan to disseminate and exploit the acquired results in their respective primary markets: Security (AMPED and XTREAM), and Media (ARTHAUS, PLAYENCE, and XTREAM):

- AMPED will disseminate the results mainly to law enforcement labs and government agencies, which will benefit from MAVEN’s advanced tools for forensics and intelligence activities. In particular, the verification of video integrity is expected to gain wide acceptance, since video authentication is a very difficult topic for which no tools are currently available on the market. AMPED will integrate forensic tools #2 and #3 into AMPED’s AUTHENTICATE.

- PLAYENCE (now TAIGER) will explore the application of MAVEN technologies in new markets where the technology has the potential to open new opportunities. In a first step, MAVEN search tools will be integrated with products already available at TAIGER (in particular, the product iSearch). In a second step, TAIGER will explore the exploitation of MAVEN under a SaaS model, which has the potential to reach massive audiences, bringing the benefits of MAVEN tools to virtually all organizations, in particular small companies.

- ARTHAUS will disseminate the results in professional communities of web application developers dealing with large image databases. The first use case to be tested will be an application for the capture, processing and delivery of professional, high-quality real-estate images. Such an application faces the problems of resolving image ownership, image cataloguing, and in particular conformance checking of image retouching, all of which the MAVEN tools successfully address. Consequently, ARTHAUS will integrate forensic tools #1 and #2, and search tool #4, into the TopSnap application.

- XTREAM will focus on the dissemination and exploitation of MAVEN technologies in market segments where it already possesses a well-developed distribution and partnership network, in particular Government and Homeland Security, where MAVEN addresses real market demands. Such dissemination and exploitation will be facilitated in a first step by integrating MAVEN modules in products of the XTREAM portfolio: CICERO and BROADVIEW (Security), and MEDIABOX (Media).

GRADIANT is also looking forward to exploiting search tool #2, with an initial focus on the integration into its proprietary mobile speaker recognition technology.

* Secondary markets

Besides these two primary markets, MAVEN results are also of interest in secondary sectors, such as Insurance and Public Administration, Biometrics, HCI, e-Commerce, e-Learning, and Marketing and Retail, among others.

Regarding the Insurance industry, companies in this field would benefit from the exploitation of image/video integrity verification capabilities (forensic tools #2 and #3). These modules are necessary for providing multimedia contents with full legal validity, so that they can be used as evidence in a court of law. Overall, MAVEN results are of clear interest wherever efficient and trustworthy multimedia content management is needed. In this sense, Public Administration services would definitely benefit from the developments undertaken in the project, including for instance integrity verification of aerial imagery in soil management applications, and integrity verification of multimedia materials (e.g. videos, photos) received with complaints made by citizens.

Regarding the search tools (results #4-7), these are of clear interest in the fields of Human-Computer Interaction (HCI), Ambient Assisted Living (AAL), e-Commerce, Biometrics, and Marketing and Retail, among others: voice-based control, face-based personalization and recognition, assistive robotics, and technologies for inclusion (e.g. exploiting text detection/recognition for the visually impaired) are just a few examples where MAVEN could have a positive impact.

* Project impact at European level

The MAVEN tools can have several direct impacts at the European level. Probably the highest impacts will be the creation of a product perfectly fitted to the European judicial context, hence providing increased reliability for trials, and an economic export benefit, since no alternative software is currently available worldwide.

Given the outcomes of the project, it is expected that MAVEN will have significant industrial, commercial and societal impacts, given the importance of multimedia search and retrieval. MAVEN has started to contribute to supporting the growth of the European role in Digital Content search, and will enable European citizens to efficiently access vast amounts of multimedia data and novel entertainment, educational and cultural services supported by MAVEN technologies, thus addressing citizens’ information needs.

Moreover, through the MAVEN forensics technologies for estimating the reliability of digital contents, the project will contribute to improving the transparency and effectiveness of communications. The research by the RTDs and the dissemination activities by SMEs and RTDs have contributed to the wider diffusion of multimedia security, a concern for a growing number of European citizens. MAVEN has actively fostered advances in this area, and it is expected that the project outcomes will constitute a significant milestone.

** Dissemination activities

* Common Activities

- Video of the project

As part of the general dissemination activities, the coordinator and industrial partners of the project have produced and released a video explaining the reasons behind the MAVEN project. Within the dissemination activities of MAVEN, the video clip about the project has been distributed through participants’ social networks and MAVEN website (http://maven-project.eu/videos).

The video was coordinated and produced by PLAYENCE; all participants recorded and edited a segment of the total footage, covering their participation in the project and organizational aspects within MAVEN. The interviewees are: Dr. Luis Pérez (Coordinator, GRADIANT), Mrs. Irena Josevska (Project Manager, ARTHAUS), Dr. Carlos Ruiz (R&D Director, PLAYENCE), Mr. Maximino Álvarez (CEO, XTREAM), and Eng. Martino Jerian (AMPED).

Additionally, other videos have been released:

> Gradiant recorded a video clip featuring a description of MAVEN and details of its participation within the project, which is available on Gradiant’s R&D channel on YouTube: https://youtu.be/1soBu9IfcpE.

> Gradiant recorded a video clip with some technical details about the usage and benefits of the MAVEN suite.

All the footage seen in these videos is completely original and is accessible on the website of the project.

- Public website

The project's website, which can be accessed at http://www.maven-project.eu, was created to publish information about the project itself and to raise awareness about the problems related to data management and security in an ever-increasing amount of multimedia files. The language used on the website is non-technical, since the target audience includes non-specialists, namely people belonging to the Security and Media sectors.

In agreement with the recommendations provided during the 1st review meeting, the MAVEN website was moved to Tumblr in September 2014, in order to ease the inclusion and management of new contents. In September 2015, the website was moved to WordPress to improve its appearance.

* Activities by RTD partners

- ICME 2015: Demo Session and EU Project Papers

MAVEN was presented at the 2015 IEEE International Conference on Multimedia and Expo (ICME; http://www.icme2015.ieee-icme.org/) that took place in Torino (Italy) between June 30th and July 2nd 2015. Two papers were presented during the demo and European Projects sessions:

> The MAVEN project: Management and Authenticity Verification of multimedia contents. Ruiz, C., Arroyo, S., Krsteski, I., Dago, P., Sanchez, J., Perez-Freire, L., & Jerian, M. (2015, June). In Multimedia & Expo Workshops (ICMEW), 2015 IEEE International Conference on (pp. 1-4). IEEE.

> The MAVEN FP7 Project Demonstrator Application. Dago-Casas, P., Sánchez-Rois, J., De Rosa, A., Fontani, M., Costanzo, A., Ariu, D., Piras, L., Krsteski, I., Jerian, M., Ruiz, C., Ahumada, R., Álvarez, M. In Multimedia & Expo Workshops (ICMEW), 2015 IEEE International Conference on (pp. 1-4). IEEE.

These papers are available in the documentation section of the project website (http://maven-project.eu/documentation).

- Workshop on Web Multimedia Verification (#WeMuV2015)

MAVEN contributed to the organization of the Workshop on Web Multimedia Verification (#WeMuV2015; https://sites.google.com/site/wemuv2015/) held in conjunction with the 2015 IEEE International Conference on Multimedia and Expo (ICME), that took place in Torino (Italy) on 29th June 2015.

In addition, Alessandro Piva (CNIT) presented the paper “Unsupervised fusion for forgery localization exploiting background information”, by P. Ferrara, M. Fontani, T. Bianchi, A. De Rosa, A. Piva, M. Barni, with the acknowledgement to the MAVEN project.

- ICPRAM-ICORES-ICAART 2015 – European Project Space

UNICA presented the MAVEN project at the European Project Space (http://www.icpram.org/EuropeanProjectSpace.aspx?y=2015), which was organized in conjunction with the ICPRAM Conference and the two co-located conferences ICORES (International Conference on Operations Research and Enterprise Systems - http://www.icaart.org/?y=2015) and ICAART (International Conference on Agents and Artificial Intelligence - http://www.icaart.org/?y=2015).

The event, which took place in Lisbon, Portugal (January 10-12, 2015), was supported by various FP7 projects, including MAVEN, and featured a demo by UNICA’s staff.

- European Researchers Night 2014

MAVEN was presented during the European Researchers' Night 2014 (September 26, 2014 - Nuoro, Italy). Gian Luca Marcialis and Pierluigi Tuveri explained the goals of the MAVEN project to the visitors.

- Gradiant publications in the Web and Social Media

- Publications and presentations by CNIT

> Pasquale Ferrara, Marco Fontani, Tiziano Bianchi, Alessia De Rosa, Mauro Barni, Alessandro Piva, “Unsupervised Fusion for Forgery Localization Exploiting Background Information”, presented at Workshop on Web Multimedia Verification (#WeMuV2015), held in conjunction with the 2015 IEEE International Conference on Multimedia and Expo (ICME), that took place in Torino (Italy) on 29th June 2015 (https://sites.google.com/site/wemuv2015/).

> Presentations in Scientific Conferences and Events

>> Alessandro Piva, Mauro Barni, Massimo Iuliani, “A Video Forensic Tool for Double Encoding Detection and Forgery Localization”, Demo presented at 5th GTTI (National Telecommunications and Information Technologies Group) Thematic Meeting on Multimedia Signal Processing 2015, Bardonecchia (Torino), Italy, 8-10/03/2015, http://gttimultimedia2015.polito.it/

>> Mauro Barni, “From Single Object to Contextual Authentication - A New Challenge in Multimedia Forensics and Beyond”, Keynote Lecture at VISIGRAPP 2015, 10th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Berlin, Germany, 11-14/03/2015, http://www.visigrapp.org/KeynoteSpeakers.aspx?y=2015#4

>> Massimo Iuliani, “Image Forensics”, poster presented at Second IEEE SPS Italy Chapter Summer School on Signal Processing, Frascati, Roma, Italy, 7-11/07/2014, http://spss.uniroma3.it

> Presentations in Law Enforcement Agency Events

>> Alessia De Rosa, “MAVEN Project: aims and objectives”, oral presentation at the 14th Meeting of the ENFSI (European Network of Forensic Science Institutes) Digital Imaging Working Group, Bucharest, Romania, 2-5/09/2014, http://www.enfsi.eu/aboutenfsi/structure/working-groups/digital-imaging

>> Marco Fontani, “A (wider) methodology for Forensic Image and Video investigation”, oral presentation at the S-FIVE Workshop organized by ENFSI (European Network of Forensic Science Institutes), Brussels, Belgium, 15-18/06/2015, https://www.s-five.eu/mediawiki/index.php/Workshop

> Presentations in Law Enforcement Agency Meetings

>> Alessandro Piva, Alessia De Rosa, Marco Fontani, Massimo Iuliani: Meeting with the Staff of Polizia di Stato, Servizio di Polizia Scientifica, held in Roma, Italy, 12/11/2014

>> Alessia De Rosa, Marco Fontani, Massimo Iuliani: Meeting with the Staff of Polizia di Stato, Questura di Prato, held in Prato, Italy, 18/03/2015

>> Alessandro Piva, Alessia De Rosa, Marco Fontani: Meeting with the Staff of Carabinieri, Reparto Operativo Speciale (ROS) of Roma, held in Firenze, Italy, 08/07/2015

* Activities by SME Partners

- Publication of MAVEN results on the AMPED blog

The goals and results of MAVEN have been published on the AMPED Blog (http://blog.ampedsoftware.com):

> MAVEN Project Researches New Image and Video Authentication Technologies (September 5, 2014).
> Amped Software to Reveal New Plans for Image and Video Authentication at the 2015 Forensics Europe Expo (April 9, 2015).
> Amped Authenticate Update: New Filters, Batch Processing and Automatic Warnings (September 17, 2015).

- Press release by AMPED (April 11, 2015):

Title of the release: Amped Software to Reveal New Plans for Image and Video Authentication at the Forensics Europe Expo and Teach Investigators How to Productively Analyze Digital Multimedia
Link: http://www.pr.com/press-release/614349

- Industry events attended by AMPED

AMPED attended the main events for the digital forensics and security community:

> Digital Experience 2014, 01/10/2014, Ede (NL)
>> Link: http://www.dataexpert.nl/digital-experience-2014-english
>> Presentation and demo: The latest tools for forensic video enhancement and photo authentication

> LawTech Europe Congress, 20-21/10/2014, Prague (CZ)
>> Link: http://www.lawtecheuropecongress.com/

> S-FIVE Workshop, 15-18/06/2015, Brussels (BE)
>> Link: http://www.s-five.eu
>> Presentation and demo

> Forensics Europe Expo, 21-22/04/2015, London (UK)
>> Link: www.forensicseuropeexpo.com
>> Presentation and demo

> Digital Experience 2015, 30/09/2015, Ede (NL)
>> Link: http://www.dataexpert.nl/digital-experience-2015-english
>> New Frontiers in Image and Video Tampering Detection and Camera Ballistics

- Industry events attended by ARTHAUS

> ARTHAUS presented the MAVEN tools and the MAVEN suite to the members of the Macedonian Chamber of Information and Communication Technologies (MASIT), of which ARTHAUS is also a member.

> ARTH will participate in the “ICT Mission to Japan” event, which will be held from 26 to 30 October 2015 in Tokyo. ARTH will present the company, its services and products, and of course the MAVEN tools, as new products that ARTH can offer to the market.

- Publications by Xtream in web and social media:

- Industry events attended by XTREAM

In 2015 the MAVEN outputs were presented to Argentinean end users and specialized IT partners at the JuFeJus Argentinean Justice & Technology Congress, held in mid-September 2015 in Iguazú (Argentina).

XTREAM also participated in the 2015 Business Mission to Portugal with the Spanish Chamber of Commerce, held in Lisbon between the 26th and 28th of May 2015. During the event, one-to-one presentations were held with Portuguese IT partners, Justice customers and potential Security customers.

In parallel, XTREAM has held multiple commercial meetings with key customers (over 50 in 2015) in Latin America (Mexico, Colombia, Peru, Argentina, Uruguay, etc.), Spain and Portugal, where the MAVEN value proposition has been presented ahead of its upcoming incorporation into the XTREAM portfolio.

- Publications by Playence in Web and Social Media (twitter, linkedin, facebook)

- Industry events attended by PLAYENCE

> Entrepreneurial panel at the University of Chicago - Booth School of Business
>> Title: Enterprise Search - The MAVEN case
>> Organizer: Prof. Waverly Deutsch, University of Chicago - Booth School of Business

> 1st International Conference on Predictive APIs and Apps
>> Session: Entrepreneurial panel
>> Organizer: BigML

> IATA People Symposium - Premier Conference for Aviation Executives & Human Resource Professionals
>> Title: Transforming Information into Knowledge
>> Organizer: IATA

> South Summit 2015
>> Title: Booth; panel “IoT and Artificial Intelligence”
>> Organizer: IATA

** Exploitation of the results and project impact on project partners

In the following, the main expectations regarding potential impact and the exploitation actions carried out by the project partners are listed:

- AMPED

The expectations of AMPED regarding the tools developed in MAVEN have been fully satisfied. As described in the DOW, the exploitable results by AMPED are the tools for:

> Result #2b: Blind image integrity verification (new algorithms)
> Result #2b: Blind image integrity verification (decision fusion framework)
> Result #3: Video integrity verification

The main expected exploitation for these tools was their integration in the AMPED AUTHENTICATE product (http://ampedsoftware.com/Authenticate), to improve its features and performance. In fact, the image integrity verification tools and the decision fusion framework have already been integrated in the official release of AMPED AUTHENTICATE. Regarding video integrity verification, the exploitation is planned for the next 6-12 months.

A proof of how well the MAVEN project worked for AMPED is the fact that one of the results of the project (namely the filter for Clone Detection) was officially released during the first reporting period. Since then, thanks to the application of the tool on many real cases, AMPED has managed to release several improvements to that specific filter (see http://blog.ampedsoftware.com/2015/06/16/amped-authenticate-update-improved-clones-blocks-visualization/).

Before the end of the second reporting period, all the other image integrity verification tools and the decision fusion framework were released in the official version of AMPED AUTHENTICATE and are thus available to all new and existing users of the product with an active maintenance plan (see http://blog.ampedsoftware.com/2015/09/17/amped-authenticate-update-new-filters-batch-processing-and-automatic-warnings/).

The most important aspect of the tools developed in MAVEN, and especially of the decision fusion framework, is the fact that they noticeably ease the task of the analyst using the software. Most of the other tools in AMPED AUTHENTICATE provide an output which needs to be interpreted by the user, while the new ones developed in MAVEN give a much more automated output.

In the near future AMPED plans to also release the video integrity verification features developed in MAVEN. The topic of video authentication is very complex, many customers request it, and there is no product on the market which performs video authentication. For AMPED it is therefore of major strategic importance to be the first to release such a tool, thanks to the results of MAVEN. AMPED does not know yet whether it will be able to integrate the video integrity verification features into the framework of AMPED AUTHENTICATE (probably increasing the price of the product by up to 50%) or whether it will release a separate product for them.

- ARTHAUS

The exploitable results by ARTHAUS are the following MAVEN tools:

> Result #1: Image source identification
> Result #2a: Informed image integrity verification
> Result #7: Object, logo, and scene recognition

ARTHAUS has already started the integration of MAVEN modules into the existing software application under its maintenance, TopSnap, a software solution that ARTH has developed and maintained for over 10 years.

ARTHAUS will add the MAVEN modules to its portfolio as completely new, sophisticated, innovative, high-technology products on the market. This will enable ARTHAUS to strengthen its relationship with existing clients as well as to attract new clients looking for software solutions, with the MAVEN modules setting it apart from existing, less sophisticated products on the market.

- XTREAM

The exploitable results for XTREAM will be the following modules developed in MAVEN project: Result #4: Text localization and extraction, Result #5: Spoken keyword detection, Result #6: Face Detection and Recognition, Result #7: Object, logo, and scene recognition.

This ample technology contribution of the MAVEN project will improve the XTREAM portfolio value proposition by complementing our core technology assets with market-appreciated capabilities, increasing the competitiveness of the XTREAM offer and enabling a sustained revenue increase for the company.

At this point of the project, XTREAM’s revenue model for the MAVEN project is to implement these new technology assets as innovative and advanced features of our main portfolio products:

> CICERO. Advanced SW solution for the recording and archiving of judicial oral proceedings. Implemented in more than 10 European and Latin-American countries.
> BROADVIEW. Advanced solution for the global management of security systems. Implemented in the Spanish Airport network and some other transportation infrastructures.
> MEDIABOX. Advanced Digital Library solution for the recording and archiving of digital audiovisual contents. Currently implemented in the national and regional parliaments of more than five European and Latin-American countries.

The participation of XTREAM in the MAVEN project will bring a number of benefits to the company through the exploitation of the results, once the new technologies are integrated with the existing and firmly established XTREAM products. MAVEN technologies will provide XTREAM solutions with additional differentiation, making them more competitive to keep and gain market share; they will increase the perceived added value of XTREAM solutions, enabling a rise in the solution price. The new features will also prevent churn among existing customers.

> Increase Value

MAVEN search tools will increase the added value of XTREAM portfolio solutions through the integration of highly appreciated features.
Considering this point, it is estimated that the solution license price could be increased by between 10% and 20%, depending on product line maturity:

* CICERO: +20%
* BROADVIEW: +15%
* MEDIABOX: +10%

> Increase Market share

It is expected that MAVEN technologies can provide additional differentiation to the XTREAM portfolio and make the company more competitive, thereby increasing market share in those countries where XTREAM’s powerful partner network is already present.
This consideration translates into an expected improvement of up to 3% in market share for those niche markets already opened.

> Increase Customer base loyalty

Once MAVEN technologies are implemented in the XTREAM portfolio, it is expected that the customer base of the improved XTREAM solutions will be better preserved. The new product features are high-end technology that will be well appreciated by XTREAM customers, preventing their migration to other, less sophisticated products.
The current estimate for the impact of this revenue pillar is an expected churn reduction of up to 20%, which may be even higher in those national markets where product competition is scarce and the XTREAM portfolio value proposition is stronger.

- Playence

The exploitable results by PLAYENCE (TAIGER) will be based on the following project results: Result #4: Text localization and extraction, Result #5: Spoken keyword detection, Result #6: Face Detection and Recognition, Result #7: Object, logo, and scene recognition.

Although the level of need from PLAYENCE’s (TAIGER’s) customers may vary across use cases, these new technology assets have improved the company’s level of technological expertise, opened new opportunities, and, in the long run, will increase revenues. The revenue model for these technology assets will have three flavours: as part of the products already available at PLAYENCE (TAIGER) (in particular, the rebranded product iSearch); as an off-the-shelf product available for sale, lease, or license, offering integration or tailoring with specific custom developments; or as SaaS with a monthly/yearly fee depending on the number of users, media assets, and features.

The participation of PLAYENCE in the MAVEN project has brought a number of benefits to the company around two dimensions: on the one hand, in terms of new technology assets to be exploited, which bring new opportunities to our current and future customers; on the other hand, in terms of current and new markets where that technology might be exploited. Both dimensions are related to our current and future exploitation actions.

The MAVEN project has contributed to overcoming the previous limitations of PLAYENCE’s portfolio by extending previous technology, incorporating a fundamental set of features identified in our business plan, and providing a competitive advantage over our competitors. With all these new features, and as part of the rebranding process, the former product Playence Enterprise has been renamed TAIGER iSearch.

It is estimated that these new technology assets will directly and indirectly help PLAYENCE (TAIGER) to sustain its revenue growth in the 2-3 years after the end of the project and to increase the price of the different licenses of our products and technologies by 15%. Most importantly, the project has provided a strategic advantage in some markets, such as Singapore.

- Gradiant

The exploitable result by GRAD is “Result #5: Spoken keyword detection”. After reviewing the possible options for exploitation of this result, it turned out that detecting spoken keywords is of clear interest for improving GRAD’s mobile speaker recognition technology: by incorporating the keyword detector, the biometric system becomes more robust against fake access attempts, thus paving the road towards a real liveness detection feature in a mobility scenario.

For the time being, GRAD has started the integration of the Spoken keyword detection tool in its proprietary speaker recognition technology, specifically designed for mobile scenarios.

The most important and immediate exploitation activities where GRAD plans to showcase this improved biometric technology are two major events in the Biometrics and Security sectors: Biometrics and Identity 2015 (London, October 2015) and Cartes Secure Connexions (Paris, November 2015). GRAD will attend both events with its own stand.

GRAD has traditionally focused on licensing models for exploiting its biometric technology, and this model will be maintained for the speaker recognition technology improved with spoken keyword detection.

* Exploitation of the MAVEN Suite

It is worth noting that the MAVEN tools can be exploited as a complete suite in a “Search and Verify” scenario. This approach has been evaluated during the project, reaching an agreement on a possible proposal for the exploitation of the MAVEN suite under the following terms:
1) The organisation that promotes and closes the opportunity for exploiting the MAVEN suite gets 50% of the revenue.
2) The remaining 50% would be distributed among the SMEs (and GRAD) according to the individual investments, leading to the following shares: AMPED: 22.72%, PLY: 27.35%, ARTH: 15.33%, XTREAM: 27.35%, GRAD: 7.25%.
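
As a purely illustrative example (the revenue figure is hypothetical): on €100,000 of suite revenue, the promoting organisation would retain €50,000, and the remaining €50,000 would be split as AMPED €11,360, PLY €13,675, ARTH €7,665, XTREAM €13,675 and GRAD €3,625, which together account for the full remaining 50%.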

In addition, some of the SMEs involved in the project have already indicated their interest in combining MAVEN search and verify tools for improving and complementing their current solutions.

List of Websites:
The MAVEN project’s website is available at the following address:
www.maven-project.eu

The main contacts of the different partners in the MAVEN project are listed below:

> Luis Pérez Freire, Gradiant, lpfreire@gradiant.org
> Daniel González Jiménez, Gradiant, dgonzalez@gradiant.org
> Martino Jerian, AMPED, martino.jerian@ampedsoftware.com
> Davide Ariu, UNICA, davide.ariu@diee.unica.it
> Fabio Roli, UNICA, roli@diee.unica.it
> Marco Fontani, CNIT, marco.fontani@gmail.com
> Alessia de Rosa, CNIT, alessia.derosa@unifi.it
> Alessandro Piva, CNIT, alessandro.piva@unifi.it
> Mauro Barni, CNIT, barni@dii.unisi.it
> Carlos Ruiz, PLAYENCE (now TAIGER), carlos.ruiz@taiger.com
> Igor Krsteski, ARTHAUS, Igork@arthaus.mk
> Borjanka Nikolova, ARTHAUS, borjanka@arthaus.mk
> Raimundo Ahumada, XTREAM, rahumada@xtreamsig.com
> Maximino Alvarez, XTREAM, malvarez@xtreamsig.com