
Cloud Large Scale Video Analysis

Periodic Reporting for period 2 - Cloud-LSVA (Cloud Large Scale Video Analysis)

Reporting period: 2017-07-01 to 2018-12-31

The Cloud-LSVA project followed a plan to advance technology and performance in the automotive industry by employing Semi-Automated Video Annotation, Scene Recognition, Object Recognition and Deep Learning, in conjunction with vehicle sensor data, at petabyte scale, leveraging the elasticity of computing resources offered by Cloud Computing.
Further advances in the automotive sector require tools that can manage extremely large volumes of data and provide support in the annotation task. Video analysis technology is required that is capable of exploiting the computing resources and adaptability offered by cloud architectures to create uploading and processing policies for the anticipated data volume and growth. Tools are also needed to fuse video data with other data sources and to share, analyse and apply such fused data effectively.
The capability to annotate such data efficiently and effectively enables a number of functionalities derived from two main goals:
● Create large training datasets of visual samples for training models
● Generate ground-truth scene descriptions based on objects (spatio-temporal) and events (temporal-logic actions) to evaluate the performance of algorithms; a minimal data-structure sketch follows this list
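
As a purely illustrative example, a ground-truth record of this kind could be represented with a small data structure such as the Python sketch below; the class names, field names and labels are assumptions for illustration, not the project's actual annotation schema.

    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    @dataclass
    class ObjectTrack:
        """A spatio-temporal object annotation: one 2D box per annotated frame."""
        track_id: int
        label: str                                  # e.g. "pedestrian", "car"
        boxes: Dict[int, Tuple[int, int, int, int]] = field(default_factory=dict)  # frame -> (x, y, w, h)

    @dataclass
    class Event:
        """A temporal event (temporal-logic action) relating one or more tracks."""
        event_type: str                             # e.g. "lane_change"
        start_frame: int
        end_frame: int
        actor_ids: List[int] = field(default_factory=list)

    @dataclass
    class SceneAnnotation:
        """Ground-truth scene description for one recorded video sequence."""
        sequence_id: str
        objects: List[ObjectTrack] = field(default_factory=list)
        events: List[Event] = field(default_factory=list)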
The aim of this project was to develop a software platform for efficient and collaborative semi-automatic labelling and exploitation of large-scale video data that addresses existing needs of the ADAS and Digital Cartography industries. This platform needs to deal with diverse structured and unstructured data sourced from different sensors. The main objectives of the project were to design tools deployed on a Cloud platform that can:
● Effectively handle and exploit large amounts of data to fulfil the ultimate goals of building and validating ADAS systems and creating scene descriptions for system validation and cartography.
● Provide a framework for sharing and combining scene analysis results, including for benchmarking applications, and update capabilities for in-vehicle ADAS systems.
● Fuse video data analysis with data from other sources such that video annotations can integrate with and reference across the entire data corpus.
● Support annotation tools capable of learning from human generated relevance feedback, in the form of corrections, verifications and specializations.
● Automate as far as possible the video annotation process to minimise human workload and improve system scalability and feasibility.
● Balance the computational and network load of the automatic labelling algorithms so that part of the processing or annotation can be done at the remote data sources (i.e. on board vehicle computers).
The main objective, and thus all the activities carried out during the project, focused on integrating big data, video annotation and cloud-based technologies for improved ADAS and Digital Mapping. Cloud-LSVA generated three prototypes during its three development cycles.

Alpha Prototype (M1-M12). The focus was on completing the first iteration of development and integration. A short definition stage produced the specifications, requirements and architecture. The RTD activities then developed the main modules of the overall Cloud-LSVA platform: the in-vehicle architecture for data recording, the cloud architecture, the computer vision modules, the semantic modules and the annotation front-end. Finally, the first testing and validation process was conducted; the activities carried out during this period focused on defining the testing and validation methodology and generating the first prototype.

Beta Prototype (M13-M24). This prototype was made available in M24 of the project and contains the project results developed up to that point. It implements basic connections between the modules concerned, following a specific protocol. The prototype includes video annotation analytics, a search engine, and the first annotation pipeline for learning and refining from user corrections. This platform served as the basis for testing activities with end users. During this second cycle, special emphasis was placed on producing guidelines on data legal requirements.
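
A minimal sketch of how such a correction-driven annotation pipeline can be organised, assuming placeholder components: the detector, the human review step and the retraining step stand in for the actual Cloud-LSVA modules, and only the loop structure is illustrated.

    def refinement_loop(detector, video_frames, reviewer, retrain):
        """Pre-annotate, collect human corrections, and feed them back as training data."""
        corrected = []
        for frame in video_frames:
            proposals = detector(frame)            # automatic pre-annotations
            verified = reviewer(frame, proposals)  # human corrects, verifies or specialises
            corrected.append((frame, verified))
        # The accumulated corrections become training data for the next model iteration.
        detector = retrain(detector, corrected)
        return detector, corrected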

Gamma Prototype (M25-M36). The following work was added to the existing Beta prototype:
- Creation of new GUI for additional annotation capabilities (3D lidar, multi-view for surround cameras)
- Integration of DL techniques for additional annotation capabilities
- Platform scale-up using Kubernetes (see the sketch after this list)
- In-vehicle integration of automated functions including real time vision algorithms (Valeo, TUE, TOMTOM, Vicomtech)
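
As an illustration of the Kubernetes-based scale-up, the sketch below scales a hypothetical annotation-worker Deployment with the official Kubernetes Python client; the deployment name and namespace are assumptions, not the project's actual configuration.

    from kubernetes import client, config

    def scale_workers(replicas: int,
                      deployment: str = "annotation-worker",  # hypothetical name
                      namespace: str = "cloud-lsva"):         # hypothetical namespace
        """Scale an annotation-worker Deployment to the requested replica count."""
        config.load_kube_config()  # or config.load_incluster_config() inside the cluster
        apps = client.AppsV1Api()
        apps.patch_namespaced_deployment_scale(
            name=deployment,
            namespace=namespace,
            body={"spec": {"replicas": replicas}},
        )

    # Example: scale out the annotation workers when a large batch of recordings arrives.
    # scale_workers(20)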

During the third integration period, a final Annotation Workshop was held as the project's Final Event, gathering developments and live demonstrations of all elements of Cloud-LSVA: the platform, analytics, in-vehicle components and simulation.
Cloud-LSVA has contributed to the processing and analysis of the huge quantities of sensor-generated automotive data, encompassing the whole pipeline: from sensor data fusion, efficient cloud storage, and time- and cost-effective generation of accurate annotations for machine learning, to adaptive large-scale online machine learning for automatic detection and recognition of objects and events in the cloud, along with the export of local analysis models deployable directly into on-board ADAS components. The main results and impacts include:
• Ability to manage and process large datasets with short response times, enabling a group of people to work on the data collaboratively.
• Shortening time to market and reducing development costs of advanced computer-vision-based ADAS systems.
• Optimising human and machine coordination in large-scale video annotation. This reduces human- and machine-introduced error in the training process and improves the reliability of the technologies created by the tool, thus lessening safety risk.
• Contributions to the standardisation of data formats for large datasets and video annotations for the automotive industry.
• Interfaces for importing/exporting data, metadata or models from/to third parties. The innovation of the Cloud-LSVA solution lies in its ability to be used as a repository, where other datasets or initiatives can upload their content to add variance and samples for scenarios of interest; a minimal client sketch follows this list.
• Advancing vision-based ADAS and automated driving technology by providing extensive labelled training and testing datasets. The availability of big data for training and testing will allow for the improvement of actively used computer vision technologies.
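
Purely as an illustration of such an import/export interface, a third-party upload could look like the sketch below; the endpoint URL, token and payload layout are hypothetical and not a documented Cloud-LSVA API.

    import json
    import urllib.request

    def upload_annotations(annotation_file: str,
                           api_url: str = "https://example.org/cloud-lsva/api/datasets",  # hypothetical endpoint
                           token: str = "REPLACE_ME"):
        """Push a locally produced annotation file to a shared annotation repository."""
        with open(annotation_file) as f:
            payload = json.dumps({"annotations": json.load(f)}).encode()
        request = urllib.request.Request(
            api_url,
            data=payload,
            headers={"Content-Type": "application/json",
                     "Authorization": f"Bearer {token}"},
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read())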

These innovative results were achieved through specific research and innovation actions:
- Edge/Data Generation: The vehicles were key elements for data recording during the first cycle and for performing pre-analytics in the second phase of the project, producing not only data but also metadata to reduce backend processing (see the sketch after this list).
- Analytics: Responsible for generating metadata from the petabytes of video data to be used for ADAS and Cartography. Includes software developments in computer vision, machine learning and simulation.
- Cloud: The central branch responsible for provisioning infrastructure, platforms and databases, and for the deployment and orchestration of all resources. It deals with Big Data management and Cloud Computing for extracting valuable data.
- Web UI: Interfaces for human annotators, who only supervise the automatic annotations provided by the cloud backend.
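
A rough sketch of the edge-side idea in the Edge/Data Generation item: run a lightweight detector on board and ship only compact metadata, rather than raw video, to the backend. The detector and its output fields are placeholders, not the actual in-vehicle components.

    import json

    def preanalyze_on_vehicle(frames, detector, min_confidence=0.5):
        """Run an on-board detector and keep only compact metadata per frame."""
        metadata = []
        for frame_idx, frame in enumerate(frames):
            detections = [d for d in detector(frame) if d["score"] >= min_confidence]
            metadata.append({
                "frame": frame_idx,
                "detections": [{"label": d["label"], "score": d["score"], "box": d["box"]}
                               for d in detections],
            })
        # Only this small JSON payload (not the raw video) needs to be uploaded.
        return json.dumps(metadata)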