
Planetary Robotics Vision Data Exploitation

Final Report Summary - PROVIDE (Planetary Robotics Vision Data Exploitation)

Executive Summary:
The international community of planetary science and exploration has successfully launched, landed and operated about thirty human and robotic missions to the planets and the Moon. These missions have collected large, but varying, amounts of surface imagery that has only been partially utilized during the missions themselves and thereafter for further scientific purposes. Very few attempts have been made so far to bring these data into a unified spatial context (including the orbiter data available for most sites, in varying quality and resolution), or to exploit spatial relationships that are implicit in these images but were not exploited within the immediate scope of the individual missions.
PRoViDE assembled a major portion of the imaging data gathered so far from vehicles and probes on planetary surfaces into a unique database, bringing it into a spatial context and providing access to a complete set of 3D vision products. The 3D vision processing products are visualized by a multi-resolution visualisation engine that combines various levels of detail to provide seamless and immersive real-time access to dynamically rendered 3D representations of the captured scenes.
PRoViDE set out to:
• Complete relevant 3D vision processing of planetary surface missions, covering all relevant robotic sites of recent and ongoing missions such as Surveyor, Viking, Pathfinder, Mars Exploration Rovers (MER), Mars Science Laboratory (MSL), Phoenix, and Lunar ground-level panoramas & stereoscopic & multiscopic images from Apollo and Russian Lunokhod and selected Luna missions, as far as available.
• Provide highest resolution & accuracy remote sensing (orbital) vision data processing results for these mission sites to embed imagery and products into spatial planetary context.
• Collect 3D Vision processing results and remote sensing products within a single coherent spatial data base.
• Realize the seamless fusion between orbital and ground vision data of recent, ongoing and future missions (e.g. MSL and HiRISE).
• Demonstrate the potential of existing and forthcoming planetary surface vision data by maximising the quality of image visualisation, based on a 3D publishing platform.
• Collect and formulate use cases for novel scientific application scenarios making use of the newly introduced spatial relationships and presentation means.
• Demonstrate the processing & data base concept during independent MSL data evaluation & visualisation campaigns.
• Exploit & bundle recent European (including Russian) and US initiatives in the field of Planetary Robotics Vision.
• Realize on-line dissemination of key data & its presentation by means of a web-based GIS and rendering tool.

The major PRoViDE outputs are higher-level processed results (Digital Elevation Models – DEMs – and related ortho textures) for each of the covered celestial bodies, in a coordinate system unique to the respective planet or moon, with additional textured point clouds from close-up vision data layered on top. These results were embedded in a visual and geometric context derived from orbital imagery and can be visualised by a rendering tool that allows seamless zooming from the whole planet down to the generated high-resolution 3D vision products at sub-mm level.
Further PRoViDE output includes a data catalogue of relevant source images stemming from the mentioned missions used for 3D reconstruction & panorama generation, including a set of relevant Lunar surface images. The technical and scientific outcome of the project is made available to the academic community by means of a web-GIS system and a sophisticated real-time hierarchical rendering tool that can also be used remotely over the network. A Summer School close to the end of the project allowed students and members of the vision and planetary science communities to learn about the PRoViDE project, evaluate its results and, most importantly, gain new insight into past, recent and ongoing robotic missions throughout the whole Solar System.

Project Context and Objectives:
Much image data has already been collected on planetary surfaces and the Moon. It has been used for operational decision making, science and exploration purposes, and has provided mankind with an outstanding visual impression of these ancient landscapes. However, only a small fraction of the data has been fully processed to the highest possible quality, exploiting most of the existing data signatures and making use of available cues (such as views of the same area from different vision sensors, different positions and times of day). The processing of such data has only been performed as and when deemed necessary for the immediate operations and science return of the particular mission. A large number of these images remain unexploited in terms of 3D data extraction, particularly the comprehensive data sets from the MERs, MSL, Apollo and the Lunokhods, especially those with large baselines. Within the EC FP7-SPACE project PRoViDE many of them were collected in a unique spatial and temporal manner and processed into 3D vision products in order to enable a comprehensive overview of the existing data. Orbiter imagery covering these sites exists at a sufficient quality to allow a seamless embedding of the surface data. University College London (UCL)-MSSL processed Digital Terrain Models (DTMs) and orthorectified images for all lander sites (including Gale crater for the NASA MSL Curiosity) at HRSC, HiRISE and CTX scales. Processing frameworks, database and GIS technology and a 3D rendering system were developed to combine the captured imagery into a unique set of 3D products for each site. In that sense, PRoViDE represents a significant step towards a new and sustainable method for better exploiting planetary science & exploration data.
PRoViDE has brought together major groups currently working on planetary robotic vision, leading experts in planetary surface operations, and experienced planetary scientists, consisting of research institutions all over Europe, and Moscow State University of Geodesy and Cartography in Russia (Table 1).

www.provide-space.eu
▪ Joanneum Research (JR), A
▪ University College London (UCL), UK
▪ Czech Technical University (CTU), CZ
▪ University of Nottingham (UNOTT), UK
▪ Technical University Berlin (TUB), D
▪ Moscow State University of Geodesy and Cartography (MII), RU
▪ VRVis Centre for Virtual Reality & Visualisation (VRVis), A
▪ Aberystwyth University (AU), UK
▪ Imperial College London (ICL), UK
▪ German Space Center (DLR), D
Table 1: PRoViDE Beneficiaries.
PRoViDE’s main objectives are summarized below:
• Complete 3D vision processing of planetary surface missions, covering all relevant sites of recent and ongoing missions (Surveyor, Viking, Pathfinder, MER, MSL, Phoenix, Apollo, Russian Lunokhod and selected Luna missions, as far as available).
• Provide highest resolution & accuracy remote sensing (orbital) vision data processing results for these mission sites to embed the robotic imagery and its products into spatial planetary context.
• Collect 3D Vision processing results & remote sensing products in a coherent spatial data base.
• Realize the seamless fusion between orbital and ground vision data of recent, ongoing and future missions (e.g. MSL and HiRISE, and towards ExoMars Rover and ExoMars Trace Gas Orbiter).
• Develop a 3D publishing platform for high-quality data visualization and analysis.
• Collect and formulate use cases for novel scientific application scenarios making use of the newly introduced spatial relationships and presentation means.
• Demonstrate the processing & data base concept during independent MSL data evaluation & visualisation campaigns.
• Exploit & bundle recent European (including Russian) and US initiatives in the field of Planetary Robotics Vision.
• Realize on-line dissemination of key data & its presentation by means of a web-based Geographical Information System (GIS) and rendering tool.
Planetary surface Image data: Hundreds of thousands of images
More than twelve years after landing on Mars, one of the Mars Exploration Rovers (MERs), Opportunity, is still in operation: as of March 24, 2015 Opportunity had traveled the marathon distance of 42.2 km. Before becoming stuck in sand in May 2009 and ceasing operations in March 2010, Spirit had traveled 6.59 km. The Mars Science Laboratory (MSL) Curiosity rover has already traveled more than 10 km. Topographic maps, rover traverse maps, and updated rover locations have been produced with tremendous manual and processing effort and distributed to the MER science and engineering team members. However, only a small fraction of this data has actually been processed towards a more comprehensive 3D view of the traversed landscape. The same applies to past missions to the Lunar surface, be it hand-held imagery by the Apollo astronauts or images stemming from the robotic probes of the Russian Luna missions (including the two highly successful Lunokhod missions).
In 2008 the US successfully completed the Phoenix mission, a stationary lander that analyzed the surface and subsurface soil and ice, searching for volatiles and organic compounds. Phoenix used cameras to view the area underneath the lander, which led to the discovery of water ice just beneath the surface.
In order to ensure that we learn from past NASA, Russian and other international efforts, and to ensure that Europe is able to produce unique new scientific information, we need to understand what can be obtained from such imagery, particularly that obtained from rover missions providing stereo imaging capabilities. We therefore started to exploit the full range of information available from the imaging instruments of the MER Spirit and Opportunity missions as well as MSL. Most of the analysis to date within NASA has been focused on navigation and planning on a short-term, “day-by-day” basis. NASA has no prioritized plans for more strategic or scientific analysis, as the push is to keep the current rovers operational and then move on to the next mission. PRoViDE took a “step back” and placed all of the relevant imaging data collected to date within the same GLOBAL coordinate framework, such that we are no longer concerned with the original imaging geometry or the local rover coordinate system.
Critical to this goal was the use of the highest-resolution imaging systems. Using its 300 mm ground resolution imaging (and stereo) capability, the NASA Mars Reconnaissance Orbiter HiRISE instrument began acquiring data in October 2006. Likewise, the LRO (Lunar Reconnaissance Orbiter) narrow-angle camera (NAC) is routinely taking images at 0.5 m ground resolution. The camera allows us to image all Lunar landing sites, including those where rovers have been operating (Lunokhod 1 and 2, and the Apollo rovers; Figure 1).

Figure 1: The Apollo 15 landing site, including the ALSEP (Apollo Lunar Surface Experiment Package), the LRRR (Lunar Laser Ranging RetroReflector), the LRV (Apollo Lunar Roving Vehicle) and astronaut and rover tracks (Image credit: NASA/Goddard/Arizona State University - ASU)
The role of Vision Based Systems: Science and Operations
The MER mission represents the first implementation of a so-called ‘tele-robotic field geology’ on Mars in which geologists, geochemists and mineralogists on the ground use a remotely operated mobile asset on another planet to systematically study the geology at the rover site in much the same way as they would do when in the field themselves but with a significant (usually a day or more) time difference. This includes:
• The ability to move around (provided by the rover mobility system)
• The ability to observe, survey and map the scene to establish scientific context and for planning of movements, of sampling locations and close-up investigations by cameras
• The ability to approach, ‘touch’ and sample targets of interest identified from stand-off remote observations (provided by robot-mounted in situ analysis and microscopic instruments that determine elemental and mineralogical composition as well as close-up texture).
In a scientific rover mission, imaging therefore has a two-fold function:
• An engineering function: providing a basis for building three dimensional (3D) models of the rover surroundings to support path planning onboard the vehicle
• A scientific function: providing high fidelity imaging in selected wavebands (or other signatures such as 3D surface roughness) and at appropriately high resolution – including a precise fusion of different sensors – to support interpretations of the local geology as well as to support the selection of promising targets for close-up study.
The majority of such processing was dedicated to the (almost) real-time operations of the mission in order to plan short-term actions. In the case of Lunar exploration as conducted by the more interactive missions (e.g. Lunokhod), but also the Apollo missions, decisions during the mission were taken in real time by humans, while scientific imagery was recorded (mainly by analogue means) to be evaluated later. Science was performed on these image data, but a formal, unique embedding into a spatio-temporal context never took place.
Processing frameworks & Visualisation heritage
FP7-SPACE supported a pan-European & US project on Planetary Robotics Vision Ground processing (PRoVisG, GA 218814). Amongst other objectives, PRoVisG aimed to develop a system (PRoViP – Planetary Robotics Vision Processing) ready for large-scale processing of planetary surface vision data into 3D vision products. It is designed as an ideal platform to process the aforementioned many thousands of images in a user-friendly and efficient way. In terms of visualisation & rendering, the Aardvark framework developed by VRVis has already been successfully used for a number of ExoMars visualisations, and is continually being developed to include the latest state-of-the-art visualisation features.
Vision Strategies so far: Single-Site imagery emphasized
For tactical operations, MER & MSL work mainly with a single site at a time, using a coordinate system unique and local to that site. On the rare occasions where the rover has doubled back or revisited old terrain, using the old data in the context of the new coordinate system has been problematic, relying on ad-hoc and largely manual methods. The telemetered rover positions are stored in a combination of NASA Planetary Data System (PDS) labels and ancillary files in PDS, but there is no systematic storage of updates, such as those produced by OSU, either in operations or in PDS.
MSL addresses this deficiency by creating a database (named “PLACES”) of rover positions and orientations ("localization" data), storing both telemetered and updated data. This served both as a source of localization data for PRoViDE, and as an avenue for updates created by PRoViDE. MSL has no current plans to systematically re-create products using updated localization data, leaving this to individual users. Therefore the services of PRoViDE are useful in this regard also.
Local coordinate systems have also been established for the Apollo missions by simple measurement means, and occasionally also post-mission by photogrammetry. It was one of the PRoViDE targets to utilize such data and bring it into the unique context of a Lunar Image Catalogue – this target, however, was not reached. On the other hand, the Lunokhod 1 and 2 station positions could be reconstructed by comparison of the recovered panorama images with LROC images.
Multi-Site imagery: On-Demand only
Ohio State University (OSU) has been mapping Mars for the Mars Exploration Rover (MER) 2003 mission since the initial landing of the two landers/rovers in 2004. During the mission, OSU has utilized ground images taken by the rover at different locations, i.e. multi-site imagery, to map distant targets. Since accurate terrain maps are essential to design and plan a rover’s route and help it access targets efficiently, OSU applied long baseline stereo (LBS) mapping of common features in stereo images captured from different rover positions. Compared to the tens of metres of effective mapping area for single-site mapping, multi-site mapping provides a better solution for mapping hundreds of metres from the rover. Throughout the 12 years of mission operations to date, many wide baseline maps have been generated and provided to NASA scientists for research, rover route planning and winter haven selection. These high-resolution, high-quality products are adopted in PRoViDE and provide useful information for research. In particular, there is a need, articulated in PRoVisG, that in order to develop a comprehensive web-GIS, all the rover positions need to be in a global coordinate system, which can be co-registered with high-resolution orthorectified HiRISE stereo imagery. This requires, in addition, the fusion of HiRISE imagery with these multi-baseline orthorectified stereo images, all in the same coordinate system.
Besides such “controlled” imagery of distant objects and landscapes by LBS, large parts of the Martian and Lunar landscapes (particularly from MER, Apollo and the Lunokhods) have been incidentally viewed from more than one position. Exploiting this “serendipitous” coverage, hidden within the confines of the PDS catalogue, reveals a tremendous amount of additional scientific content due to new knowledge about morphology and formation processes.
Contemporaneous to PRoViDE: MSL as an interactive example
MSL was launched in the fall of 2011 and arrived at Mars in August 2012. MSL is operated much like MER, with similar processes and data products, updated based on 12 years of operational experience. MSL as an active mission during the PRoViDE time frame was used to demonstrate the capabilities of the developed processing and visualization software.
Through the involvement of JPL and Arizona State University (ASU; Prof. Jim Bell) in the Advisory Board (AB) and the participation of MSL science planner Sanjeev Gupta in the project, the PRoViDE results are of direct use to MSL scientists.
ExoMars Rover as the first European opportunity
In Europe, ESA is planning a number of planetary robotic missions, including ExoMars 2018/19 as its first rover on Mars. ESA will execute the mission together with the Russian agency Roscosmos. The PRoViDE partners JR, UCL and AU support ExoMars with related vision components (PanCam, 3D Vision & Robotics). In this way, the connection between past and forthcoming missions can be closed for the robotic vision part, and PRoViDE builds upon up-to-date mission design issues. The ExoMars rover instrument suite includes the scientific camera system PanCam, developed and operated by a European consortium led by UCL-MSSL and DLR. ExoMars is going to be the first European rover mission, to be followed by further unmanned landing missions in the AURORA programme, including the possibility of a Mars sample return to collect samples from the cache onboard ExoMars. ExoMars will strongly profit from the data processing and analysis functionalities developed in PRoViDE.

Project Results:
4.1.3.1 Global PRoViDE System Design
The PRoViDE building blocks are shown in Figure 2. PRoViP (Planetary Robotics Vision Processing; the processing component) harvests image data from the PDS and stores the results. As a central building block, the PRoDB (Planetary Robotics Data Base) holds information on relevant source images stemming from a variety of missions to be used for 3D reconstruction & panorama generation. The PRoViDE data catalogue (DC) is a central component that gathers the PDS data, the PRoViP processing chain and the processed products, together with the relevant metadata. PRoDB provides an overview of available data to be considered for processing within the project, easy access to the data itself, and a link between several other components of the overall PRoViDE system; it supplies the batch processing component with the relevant input information. Furthermore, PRoDB forwards information about finalized products to a separate database that can be queried by the viewing components PRoGIS (Planetary Robotics GIS) and PRo3D (Planetary Robotics 3D Viewer), which are the main user platforms for data access, exploitation and visualization in 3D space; PRo3D runs on a remote rendering machine and can also be used remotely via the web.

Figure 2: Abstract PRoViDE System components and interfaces (processing and data presentation environment)
Following this scheme, the main use pattern of the PRoViDE software environment, as illustrated in Figure 3, relies on the four main components: the PRoViP processing environment, the PRoDB database, the PRoGIS web system and the PRo3D viewer for geological assessment.
PRoViP is the PRoViDE processing core, a framework for planetary robotic vision processing. It integrates a variety of 3D vision algorithms into modular processing chains (workflows / steps), supports parallel processing and PDS data processing (including the usage of SPICE kernels), and realizes scheduler-based processing directly from and to the PRoDB database. PRoViP functions are called in batch by the scheduler, launched according to the priority list of products as given in PRoDB.
PRoGIS is a web-GIS system designed to give access to rover image archives within a geographical context, using projected image view cones, obtained from existing meta-data and updated according to processing results, as a means of interacting with and exploring the archive.
PRo3D gives geologists the ability to directly and interactively explore the co-registered product datasets and perform scientific analysis. The server-side 3D viewer (PRo3D®) applies advanced real-time rendering methods to enable smooth navigation through 3D reconstructions of planetary terrains. A very high degree of realism is important to allow geological assessments. PRo3D is an important tool in PRoViDE for the seamless fusion of data and products assembled in the project and for the scientific analysis of planetary terrains.
The scientific evaluation workflow can be defined as follows.
• The scientist opens up PRoGIS and uses the tool to select data (see Figure 3 top left).
• A preview of the 3D data in scaled-down PLY format is already available through the PRoGIS system, as displayed in Figure 3 bottom left (this functionality was demonstrated using sample data but is not fully implemented for all products, as the main focus was the remote rendering system based on PRo3D).
• The selected data can be loaded into PRo3D running on a remote rendering system and evaluated via a remote desktop session (Figure 3 right).

Figure 3: (top left) PRoGIS data selection, (bottom left) PRoGIS web-based 3D data preview, (right) Geologic analysis using PRo3D remote rendering system invoked via PRoGIS.
The following important design decisions were made:
• PRo3D is kept as a single entity with the remote viewing part realized as remote rendering solution accessible via the web
• Moon & Mars data are kept separate, as the nature of the Lunar products is too different from that of the Mars products
• PRoGIS does not directly call PRoViP-based processing. Instead, the PRoDB acts as the main driver for product generation. All products are defined there and are processed in batch by PRoViP & the Scheduler. This supports automatic processing for ongoing & future missions
4.1.3.2 Data Catalogue / PRoDB
To keep track of the input data and all the harvested metadata, as well as to define products to be processed and keep track of the processing history, a database had to be developed. This database was initially designed and described as a “data catalogue” and was extended into the relational database termed PRoDB over the course of the project.
The data catalogue was designed, technically implemented and filled with data from MER A and B, MSL, Pathfinder and Phoenix. A list of default products was defined and integrated into the DB for automated processing. Interfaces to PRoViP and PRoGIS were defined and verified to run the full processing chain including Data Catalogue readout, scheduling, processing by PRoViP and insertion of products back into the Data Catalogue. Based on this design and these interfaces the batch based processing of all defined products was executed within the project.
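As an illustration of this interface, the sketch below shows how a batch scheduler might query such a data catalogue for pending products and write back processing states. It is a minimal, hypothetical example: the table and column names (products, status, priority) are assumptions and do not reflect the actual PRoDB schema.

```python
# Hypothetical sketch of querying a data catalogue for batch processing;
# table and column names are illustrative only, not the real PRoDB schema.
import sqlite3

def fetch_pending_products(db_path, limit=50):
    """Return the highest-priority products that still need processing."""
    con = sqlite3.connect(db_path)
    try:
        cur = con.execute(
            """
            SELECT product_id, mission, instrument, product_type, priority
            FROM products
            WHERE status = 'pending'
            ORDER BY priority DESC, product_id ASC
            LIMIT ?
            """,
            (limit,),
        )
        return cur.fetchall()
    finally:
        con.close()

def mark_product(db_path, product_id, status):
    """Record the processing outcome ('done' or 'failed') back in the catalogue."""
    con = sqlite3.connect(db_path)
    with con:  # transaction: commit on success, rollback on error
        con.execute(
            "UPDATE products SET status = ? WHERE product_id = ?",
            (status, product_id),
        )
    con.close()
```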

Figure 4: Data catalogue (left) database relations and (right) data example
Data from multiple PDS releases for MSL and the MER missions was incrementally harvested and inserted into the respective tables of the PRoDB, including data, metadata and product definitions. Table 2 shows the final number of images in the PRoDB at the end of the project. The complexity of the structure of the database can be seen in the structure chart in Figure 4 (left). Figure 4 (right) shows an excerpt of a database query of relevant surface images.
Table 2: Number of images in PRoDB
Mission                 Sensor 1          Sensor 2
MER 1                   Pancam   32553    Navcam   16208
MER 2                   Pancam   26510    Navcam   10175
MSL                     Mastcam  16711    Navcam    2368
Mars Pathfinder (MPF)   IMP       5511    –
Phoenix (PHX)           SSI       3251    –

4.1.3.3 Serendipitous Multiview Search
In addition to the fixed stereo products defined by the left and right camera of the individual stereo instruments, there are multiple possibilities to produce wide baseline stereo reconstructions. These consist of images from multiple different rover positions and can be calculated via multi-image matching and additional bundle adjustment. Images that can be used for such products are called “serendipitous” stereo configurations within PRoViDE. The following points state the task and the approach used to solve this problem (see Figure 5).
Goal: Find which images look at a common 3D structure
Approach: Image matching verified by n-view image geometry
Discovery: The standard approach does not work ... a challenge
Developed: SMVS – Serendipitous Multi-View Stereo, a new image-matching & 3D reconstruction pipeline
+ view overlap from prior image orientations
+ image similarity based on image search
+ geometric verification for image matching
Result: Robust image matching for planetary data
Standard methods such as CTU CMPSFM, Bundler (University of Washington), Visual SFM (University of Washington), Capturing Reality (commercial) and AgiSoft (commercial) fail in this environment as they:
• do not take into account the camera priors,
• match all images with all images,
• do verification solely by epipolar geometry,
• need “good” baselines, which is especially a problem in panoramas.
The solutions and main innovations developed in the PRoViDE project are:
1) View overlap from prior image orientations (see Figure 5 right)
a. Reduces the number of mismatched views
b. Reduces computation time
2) Image similarity based on image search
a. Pre-filter – find 10 most similar images, O(N*N) → O(10*N)
b. Makes it feasible to match large (>1000) image sets
3) Geometric verification for image matching
a. Tests all hypotheses, grows, similarity → homography
b. Deals with multiple motions in images (ground & rover)

Figure 5: (left) Serendipitous Multi-View Stereo process, (right) view overlap graph from prior image orientations
The first method applied was the computation of a matrix showing which images see the same area, based on prior information derived from the PDS image orientation and global SPICE-based coordinates. This greatly reduces the number of images that have to be matched (see Figure 5 right). This was followed by an image similarity measure based on image search and a geometric verification for image matching. The approach results in the elimination of false matches and enhanced 3D camera pose and orientation results.
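A minimal sketch of such a view-overlap pre-filter in the spirit of SMVS is given below: cameras whose viewing directions point at a common area (estimated from prior orientations, e.g. PDS/SPICE metadata) are flagged as match candidates. The overlap test, field-of-view and look-range values are illustrative assumptions, not the project's actual criteria.

```python
# Sketch of a view-overlap pre-filter from prior camera orientations.
import numpy as np

def _sees(cam_pos, cam_dir, target, half_fov, max_dist):
    """True if a camera at cam_pos looking along unit vector cam_dir sees 'target'."""
    v = target - cam_pos
    dist = np.linalg.norm(v)
    if dist < 1e-6 or dist > max_dist:
        return False
    angle = np.arccos(np.clip(np.dot(v / dist, cam_dir), -1.0, 1.0))
    return angle < half_fov

def view_overlap_matrix(positions, directions, fov_deg=45.0, look_range=50.0):
    """positions: (N,3) camera centres; directions: (N,3) unit viewing vectors.
    Returns a boolean (N,N) matrix of candidate image pairs."""
    n = len(positions)
    half_fov = np.radians(fov_deg) / 2.0
    candidates = np.zeros((n, n), dtype=bool)
    # Each camera is assumed to look at a point a fixed distance along its axis.
    targets = positions + look_range * directions
    for i in range(n):
        for j in range(i + 1, n):
            ok = (_sees(positions[i], directions[i], targets[j], half_fov, 2 * look_range)
                  and _sees(positions[j], directions[j], targets[i], half_fov, 2 * look_range))
            candidates[i, j] = candidates[j, i] = ok
    return candidates
```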
Example results of SMVS are shown in Figure 6 (left). The resulting camera poses can further be used for reconstructions by loading them into PRoViP processing. This was accomplished particularly for wide-baseline stereo reconstructions for the MER mission and for an image-based reconstruction of the MSL descent trajectory (Figure 6 right).

Figure 6: (left) Results of SMVS, (right) 3D reconstruction of MSL descent trajectory and calculated surface structure, overlaid with texture from MARDI images using SMVS
Lunokhod multiview candidates could not be found despite a search of all processed panoramas. This was mainly caused by the small image overlaps compared to imagery from the Mars rover missions.
4.1.3.4 Processing Framework
The processing framework PRoViP was initially developed within the EU FP7-SPACE Project PRoVisG (GA218814). Various improvements have been implemented in the course of PRoViDE. Both Windows & Linux versions for PRoViP were developed, since Linux was the target platform for mass processing. The processing hardware was set up at MSSL. The interfaces between the individual components are shown in Figure 7 and can be described as follows:
The Job Scheduler runs on the processing job scheduling server: it connects to the PRoDB database on the database server, creates processing jobs and hands them over to the Task Distribution Daemons running on the server blades, monitors the processing status of jobs on the server blades, and updates the database with the processing results.
The Task Distribution Daemons (TDDs) on each server blade compete for processing jobs created by the Job Scheduler, download input data if not already locally available, start processing jobs by invoking instances of the PRoViP processing framework, retrieve results and processing log information from the framework and store them in the RAID storage, create a product description file including the software version and all processing parameters, and notify the Job Scheduler that processing is done or has failed.
After successful processing, results can be displayed through PRoGIS 3.0, which can access the data in the RAID storage system and link to the remote rendering machine that gives direct access to the 3D data in an immersive global context.
The individual software components were developed to be used specifically with the hardware and software scheme at MSSL as shown in Figure 8. However, the individual components, namely the PRoDB database and the Job Scheduler cooperating with the vision processing pipeline controlled and invoked by the Task Distribution Daemon (consisting of the PRoViP processing framework), can also be used outside the MSSL infrastructure and even be built and run on different software platforms with different operating systems. Another instance of the Job Scheduler and multiple Task Distribution Daemons with the PRoViP processing framework were installed on servers running Windows 7 at JR. Every processing job can in theory also be carried out in this environment. However, the sheer volume of data to be processed from the different ongoing and finished rover missions made the use of the MSSL server blade system an absolute necessity.

Figure 7: PRoViP mass processing based on Job Scheduler and TDDs
The processing chain produces DEMs (Digital Elevation Models) and true-orthophotos (incl. Panoramas) in various geometries (Cartesian, spherical, general plane) from PTU-acquired stereo imagery. 3D data export is provided to known standards (GeoTiff, vrml) as well as JR custom formats for visualization by 3D Viewer PRo3D by VRVis. PRo3D is used in conjunction with 3D Scene Reconstruction. It is a real-time rendering solution for scientific exploration and analysis. The visual quality is sufficient for geological assessment. Versatile measurement and annotation tools make it a flexible application for any scientific assessment of 3D surface data. Skylight models, BRDF including physical surface models and AOD (Aerosol Optical Depth) will be available as a separate demonstration. Both SW components together provide valuable information during planetary robotics missions; especially during the process of scientific target determination and rover operations planning.

Figure 8: PRoViDE Software Structure installed at MSSL (Overview of interdependencies)
In the PRoViDE context the processing is carried out at MSSL using the infrastructure illustrated in Figure 7 and Figure 8. The Job Scheduler is executed once on the machine called MSSLU1, whereas the TDD instances are running on each server blade (12 blades named MSSLUE-P). These instances invoke up to N parallel PRoViP processing instances. The number of allowed parallel jobs per server blade depends on the hardware of the blade and can be set individually for each TDD instance. The batch processing version of PRoViP is used in the PRoViDE processing context for the automated processing of jobs defined in the PRoDB database. The individual instances of PRoViP are invoked from the TDDs running on the server blades. The input image lists are assembled by the Job Scheduler, which also supplies the correct workflow configuration files as needed by the PRoViP batch processing instance. The Job Scheduler is used to connect to the PRoDB database and supply the TDDs running on the server blades with processing jobs. These jobs are then monitored and the results are committed into the database.
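The scheduler/daemon interaction described above can be sketched, in a highly simplified and hypothetical form, as a polling loop. The executable name ("provip_batch"), the job dictionary format and the polling interval below are placeholders for illustration, not the actual MSSL implementation.

```python
# Simplified sketch of a task distribution daemon: poll for jobs, run the
# processing executable, report the outcome back (e.g. to the catalogue).
import subprocess
import time

def run_daemon(get_next_job, report_result, poll_seconds=30):
    """get_next_job() -> dict with 'product_id' and 'args', or None.
    report_result(product_id, success) stores the outcome."""
    while True:
        job = get_next_job()
        if job is None:
            time.sleep(poll_seconds)   # nothing to do, wait and poll again
            continue
        try:
            # Invoke one instance of the processing framework for this job.
            subprocess.run(["provip_batch"] + job["args"], check=True)
            report_result(job["product_id"], success=True)
        except subprocess.CalledProcessError:
            report_result(job["product_id"], success=False)
```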
4.1.3.5 Super-resolution restored (SRR) Ortho Images (ORI)
Within the project some components were developed that had not been originally proposed. One of the most important added-value components of the project was the super-resolution ortho image generation method developed by UCL. It uses HiRISE images with a ground resolution of 25 cm per pixel as input. These 25 cm HiRISE images allow surface details to be seen, but 25 cm is not a high enough resolution to view features such as individual rocks with a diameter of less than 75 cm that can be found in rover imagery. There is still a huge resolution gap between HiRISE orthorectified images and rover orthorectified Navcam mosaics.
Nevertheless, for several sites on Mars, i.e. MER-A, MER-B and MSL, there exist repeated HiRISE views. UCL has developed the Gotcha-PDE-TV (GPT) based super-resolution restoration (SRR) technique [Tao & Muller, PSS, in press 2015] to be able to restore higher resolution from the non-redundant sub-pixel information and undistorted multi-angle information contained in multiple raw HiRISE images. Each view is subject to different atmospheric blurring and scattering, but as long as the atmospheric transparency is sufficiently high, Gotcha-PDE-TV SRR can be applied. In the PRoViDE project, brand new SRR datasets have been created for MER-A, MER-B (partially) and MSL, at resolutions of 5 cm, 12.5 cm and 6.25 cm respectively. The SRR products are integrated into PRoGIS and PRo3D and used in ground-to-orbit co-registration for further optimisation of rover positions. The developed GPT algorithm takes multiple aligned low-resolution images as input, generates very accurate tiled motion vectors, resolves the PDE-TV prior in segmented tiles, collects the results for all HR segments and reconstructs the full SR grid (see Figure 9).
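To illustrate only the general multi-frame super-resolution principle (and emphatically not the GPT algorithm itself, which additionally uses Gotcha sub-pixel matching, tiled motion vectors and PDE-TV regularisation), the toy sketch below upsamples several low-resolution frames with known sub-pixel shifts, compensates the shifts and averages them.

```python
# Toy multi-frame super-resolution by shift-and-add; illustrative only.
import numpy as np
from scipy.ndimage import shift, zoom

def shift_and_add_sr(frames, subpixel_shifts, factor=2):
    """frames: list of 2-D arrays; subpixel_shifts: list of (dy, dx) in LR pixels."""
    accum = None
    for frame, (dy, dx) in zip(frames, subpixel_shifts):
        hi = zoom(frame, factor, order=3)              # upsample to the SR grid
        hi = shift(hi, (-dy * factor, -dx * factor))   # undo the sub-pixel motion
        accum = hi if accum is None else accum + hi
    return accum / len(frames)
```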

Figure 9: SRR generating GPT algorithm
The resulting increase in resolution and visible details can be seen in Figure 10.

Figure 10: Example of Results of SRR (top left MER-A original HiRISE, top right MER-A SRR result, lower left MSL original HiRISE, lower right MSL SRR result)
4.1.3.6 Shape from Shading DEM generation
Another important added-value component was the development of Shape from Shading (SfS) methods to generate DEMs and to increase the quality of existing DEMs. Stereo matchers that rely mostly on textural features in the images can fail to find enough matched points in areas lacking contrast or surface texture. This can lead to blank or topographically noisy areas in the resulting DTMs. Fine depth detail may also be lacking due to limited precision. Shape from shading (SFS) uses the properties of light reflecting off surfaces to build up localized slope maps to extract topography. This works very well on homogeneous surfaces and can recover fine-scale detail [O’Harra & Barnes, 2012].
The Large Deformation Optimization Shape From Shading (LDOSFS) algorithm calculates deformations at a succession of scales to the initial surface, gradually refining the level of detail in the resulting DEM. Using an initial coarse DTM generated by stereo matching as a “seed” surface for the SFS algorithm, together with a higher-resolution image, it is possible to refine the original DTM and bring out fine surface detail. These methods were developed by Aberystwyth University. Results of generated DEMs can be seen in Figure 11.
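For context, the sketch below shows the Lambertian forward (reflectance) model that shape-from-shading approaches invert: it predicts image brightness from a DEM and a sun direction. It is only the basic building block of such methods, not the LDOSFS optimisation itself, and the grid spacing and albedo are illustrative assumptions.

```python
# Lambertian shading of a height field: the forward model inverted by SfS.
import numpy as np

def lambertian_shading(dem, sun_dir, grid_spacing=1.0, albedo=1.0):
    """dem: 2-D height array; sun_dir: unit 3-vector towards the sun."""
    dz_dy, dz_dx = np.gradient(dem, grid_spacing)
    # Surface normals of z(x, y): proportional to (-dz/dx, -dz/dy, 1), normalised.
    normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(dem)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    # Brightness = albedo * max(0, n . s) for each pixel.
    shading = albedo * np.clip(normals @ np.asarray(sun_dir, dtype=float), 0.0, None)
    return shading
```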

Figure 11: Shape from Shading DEM models (top left: lunar boulders, top right Apollo 17 landing site, lower left: Laboratory test DEM, lower right Comet 67/P – Agilkia - non science data)
The method was also applied to generate a DEM of the newly discovered Beagle 2 landing site (Figure 12).

Figure 12: Single HiRISE image as input (left), Height map (middle), Slope map (right) generated with SfS
The second application of the developed SfS method was, as already mentioned, the enhancement of existing DEMs. This was demonstrated on a DEM of Victoria crater, as illustrated in Figure 13. The developed software uses DEMs or stereo depth maps as input, supports multiple camera models (perspective and orthorectified are currently implemented) and multiple BRDF models (Lambertian, Oren-Nayar, Simplified Hapke). Multiple input and output formats are also supported, such as TIFF, GEOTIFF, PNG, GIF, JPG, raw float32, PGM, ASCII and VRML.

Figure 13: Victoria crater DEM enhancement using SfS and spatial filtering.
4.1.3.7 Multi-sensor close range data fusion
The close range fusion of different sensors such as MSL Navcam, Mastcam and Handlens imager was requested by the planetary scientists in the project (Sanjeev Gupta and Rob Barnes) and demonstrated at a drill site of MSL in the Shaler region. The selected MAHLI (Mars Hand Lens Imager) target was called Howells (MSL Sol 323).

Figure 14: MSL Navcam panorama of drill site (Sol 317)
Figure 14 shows a panorama from the MSL Navcam. This instrument was used as the lowest-resolution sensor in the sensor data fusion. The whole stack of available data at different resolution levels is illustrated in Figure 15. The field of view of the respective next-higher-resolution instrument is marked in each image, starting with the site in an MSL Mastcam mosaic, which is also shown in 3D as a zoomed-in close-up displayed using PRo3D. Figure 16 illustrates the fusion of Navcam with Mastcam data in 3D, loaded in the PRo3D viewer.

Figure 15: (top left) Overview of drill site in Mastcam image Mosaic and (lower right) zoomed in PRo3D viewer, (right) Different sensors with marked field of view of next higher resolution instrument

Figure 16: 3D Fusion of Navcam and Mastcam data at drill site
Figure 17 shows the process of automated feature matching between Mastcam and MAHLI 25 cm data. The results were then used to project the texture from the MAHLI image onto the 3D structure calculated from Mastcam stereo pairs as can be seen in Figure 17. The resolution difference between MAHLI 25 cm and MAHLI 2 cm data is too large to be able to do a meaningful registration.

Figure 17: (left) Automated fusion of Mastcam and MAHLI 25 cm data, (middle) result 2D view and (right) 3D view
4.1.3.8 Orbital data processing (Leader UCL)
Orbital data (DEMs and orthorectified images in the same co-registered planetary geographical coordinate system) was processed for the planetary and lunar PRoViDE target sites. The main objective is to organize, acquire, and process orbital data (DEM and orthorectified images) to act as context for the following surface missions: Phoenix, MER1, MER2, Viking1, Viking2, Mars Pathfinder (using HiRISE, CTX and HRSC), Apollo 11, 12, 14-17, Luna-9, Luna-13, Luna-17 (incl. trajectory of Lunokhod-1), Luna-20, Luna-21 (incl. trajectory of Lunokhod-2), and Surveyor 1-7 using LROC.
UCL received HiRISE DEMs and orthorectified images for MER1 and MER2 from OSU. UCL produced corresponding CTX DEMs and orthorectified images for Phoenix, MER1, MER2, Viking1, Viking2, Mars Pathfinder and co-registered them with the ESA/DLR HRSC v50+ orthorectified images, which are co-registered with MOLA (Mars Orbit Laser Altimeter). UCL then further co-registered the OSU HiRISE and USGS/UoA HiRISE products for MSL, Phoenix, and Viking1 to CTX orthorectified images, which are co-registered with HRSC. For Viking 2 and Mars Pathfinder where there is no DEM available from UoA/USGS, UCL processed the DEM and orthorectified images. The co-registered multi-resolution datasets (HiRISE, CTX, HRSC) have been used to update rover positions and hence provide more accurate rover traverses (MER1, MER2, and MSL) after localisation.
UCL has also developed a novel GPT super-resolution restoration algorithm, which can restore up to 5 cm resolution from multiple overlapping 25 cm HiRISE images. This has been applied to MER1, MER2, and MSL, covering the entire traverse to date. UCL also received SFS DEMs for the MER2 Homeplate area and MER1 Victoria crater, which increased the HiRISE DEM resolution from 1 m to 25 cm. JR converted the registered orbiter products provided by UCL from the sinusoidal map projection into the IAU Mars 2000 Cartesian Mars-centric coordinate system. These Cartesian models were then converted into the OPC format required for the PRo3D viewer. The registered 3D data could therefore be visualized in the highest quality, facilitating geological scientific analysis.
A large amount of work was done by UCL in the field of Multi-resolution orbital co-registration. When taking a close look at the HiRISE ORI/DTMs (from the UoA/USGS/NASA/JPL HiRISE sites) and comparing these locations with the HRSC ORI/DTM (DLR processed v50+ products), we found that they are not well co-registered to each other. These mis-registrations are about 100m between HiRISE and HRSC for MER-A, 100-150m for MER-B between HiRISE and HRSC, and 100m to 200m for MSL HiRISE and HRSC, according to manually selected control points on obvious landmark features, such as crater edges (see Figure 18).

Figure 18: HiRISE and HRSC mis-registration
After selecting homologous tiepoints and applying a second-order transformation, the mis-registration can be reduced to pixel level. However, this had the unintended consequence that the rover traverses, which were provided by OSU using the IBA method, no longer fitted on the HiRISE map. It became obvious when trying to place rover traverses (MER and MSL) in context that such traverses did not match known landmarks visible in the orbital images (see Figure 18). This mis-registration is negligible from a global perspective but results in huge offsets when trying to locate rover positions in a global context. Even manually registered HiRISE/HRSC datasets show up to a 10-50 m offset of the initial landing site location. The offset accumulates along the rover traverse as it moves further away from the original landing site location. Therefore, we take the CTX ORI as a resolution bridge, applying an automated tie-pointing method to co-register the HiRISE to CTX and CTX to HRSC datasets. The final result of the co-registration of the whole stack, including the registered rover tracks of MER-B Opportunity, can be seen in Figure 19.
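The kind of second-order (quadratic) polynomial transformation mentioned above can be fitted to homologous tiepoints by least squares, as in the sketch below. This is only an illustration of the principle, not the project's actual co-registration code.

```python
# Fit and apply a 2nd-order polynomial mapping between two sets of tiepoints.
import numpy as np

def _design_matrix(pts):
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

def fit_second_order(src, dst):
    """src, dst: (N,2) tiepoint coordinates, N >= 6. Returns (6,2) coefficients."""
    coeffs, *_ = np.linalg.lstsq(_design_matrix(src), dst, rcond=None)
    return coeffs

def apply_second_order(coeffs, pts):
    """Map (M,2) points from the source image into the destination image."""
    return _design_matrix(pts) @ coeffs
```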

Figure 19: Final product: HRSC, CTX, HiRISE registration including corrected rover traverse
4.1.3.9 Apollo landing sites DTMs
Due to the differences in the type of data and metadata between Lunar and Mars missions, Lunar data is imported and processed separately. Lunar data import is again separated into the Russian missions (established by MII) and the US Apollo missions (established by TUB). TUB has created LROC orthorectified images and DTMs using an in-house stereo processing chain for the Apollo landing sites. After the identification of image sets that cover the area of interest, these image pairs are matched and dense depth maps, which are defined for all pixels in the overlapping areas, are produced. The resulting depth maps are later used as input for the object point calculation tool, and 3D object point coordinates are estimated by applying the collinearity equations. The large numbers of 3D object point coordinates are fed into the DTM interpolation tool to produce the map-projected DTMs. The next step is to project the images onto the derived DTMs to produce map-projected or orthorectified images (also known as ortho-images). After this step, the DTMs and ortho-images of the area of interest are obtained. However, due to uncertainties in the spacecraft attitude, position, or instrument mounting, the derived surface models are likely displaced with respect to the global reference shape models. To detect and minimize these misalignments, the resulting DTMs are co-registered to Lunar Orbiter Laser Altimeter (LOLA) tracks that intersect the study area. LOLA provides a precise and accepted global geodetic grid of the Moon and a good reference shape model. The shift values from the co-registration result are applied to the DTMs and ortho-images to achieve accurate global positions for them. Finally, the shifted DTMs and ortho-images are merged and the DTM and ortho-image mosaics are produced. These tools were used to process the LROC images which contain the Apollo landing sites.
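For reference, the object point calculation mentioned above rests on the standard photogrammetric collinearity equations, reproduced here in one common textbook form (sign and rotation-matrix conventions vary between implementations):

```latex
x = x_0 - c\,\frac{r_{11}(X - X_S) + r_{21}(Y - Y_S) + r_{31}(Z - Z_S)}
                  {r_{13}(X - X_S) + r_{23}(Y - Y_S) + r_{33}(Z - Z_S)},\qquad
y = y_0 - c\,\frac{r_{12}(X - X_S) + r_{22}(Y - Y_S) + r_{32}(Z - Z_S)}
                  {r_{13}(X - X_S) + r_{23}(Y - Y_S) + r_{33}(Z - Z_S)}
```

Here (x, y) are image coordinates, (x_0, y_0) the principal point, c the focal length, (X_S, Y_S, Z_S) the projection centre, (X, Y, Z) the object point and r_ij the rotation matrix elements; forward intersection inverts these equations for two or more images to estimate (X, Y, Z).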
4.1.3.10 Lunokhod-1/-2
Due to the heterogeneous structure of the Russian Lunar surface image data, MII had to harvest the respective databases more or less manually. Lunokhod localization has been established by MII using the newest LRO NAC data. The basics of panoramic processing were already known, but large progress was made with the generation of Luna-9, 13 and 20 lander image mosaics, descriptions of photogrammetric processing prospects and the reconstruction of the Lunokhod traverses. The imported and reconstructed panoramas were made accessible online via a Planetary Geoportal (Figure 20).
MII have created a 1m DEM from stereo photogrammetric processing of two LRO NAC stereopairs (M150756018 and M150749234) [Zubarev et al., 2012]. The DTM covers the entire area of the Lunokhod-1 track. MII have orthorectified (using the created DTM) LRO NAC images with the best resolution and illumination conditions for this region.

Figure 20: Web access to reconstructed lunar panoramas via Planetary Geoportal.
These products (as well as the DEM and orthomosaic obtained by DLR) were used to carry out studies of the Lunokhod-1 traverse and morphological assessments of the area [Karachevtseva et al., 2013]. MII have also created a DTM for the Lunokhod-2 area, but its resolution is reduced compared to that for the Lunokhod-1 area. This is because the Lunokhod-2 route is 4 times longer and elongated from West to East. For DTM and ORI production, MII selected 57 LRO NAC images (http://wms.lroc.asu.edu/lroc) with appropriate illumination and resolution, with average resolutions of about 1 m/pixel. The results of photogrammetric image processing provided a mutually consistent coordinate system. MII concluded that the produced DTMs and ORIs have a relatively high internal coordinate accuracy, derived from the absolute accuracy of the orientation parameters of the LRO NAC cameras in the ME coordinate system, and can be used within the GIS as a fundamental basemap for spatial measurements and for the determination of rover positions.
The challenges of Lunar surface Processing can be summarized as follows:
• Unknown camera parameters (interior orientation: principal point, distortion and other parameters are not precisely defined);
• Unknown exterior orientation;
• Unknown dates of surveying;
• Distortions caused by non-uniformity in the rotation of the scanning mirror and in the scanning;
• Distortions caused by the scanning of the original films.
Outcome: a new methodology and special software modules were developed for lunar image processing and for recovering the lost exterior orientation of the Soviet lunar data.
Lunar surface data was imported and processed, leading to new orbital images and archive rover panoramas. Around 300 panoramas were generated, allowing a scientific exploitation of the data. Web access to the Lunar data is given via the MExLab Planetary Geoportal. Adjusted traverses of Lunokhod-1 and 2 were generated. The accuracy of the approach could be shown by rendering synthetic panoramas from an LRO NAC DEM and comparing them to actual panoramas from the rover (see Figure 21).

Figure 21: Adjusted Lunokhod-1 traverse (left) and comparison of actual Panorama (lower right) with rendered Panorama at a position along the adjusted traverse (upper right).
4.1.3.11 Mars Pathfinder Processing
The Mars Pathfinder was the second of NASA's low-cost planetary Discovery missions to be launched. The mission consisted of a stationary lander and a surface rover and arrived at Mars on July 4, 1997. The Imager for Mars Pathfinder is a stereo imaging system with color capability provided by a set of selectable filters for each of the two camera channels. It consists of three physical subassemblies: camera head (with stereo optics, filter wheel, CCD and pre-amp, mechanisms and stepper motors); extendable mast with electronic cabling; and two plug-in electronics cards (CCD data card and power supply/motor drive card) which plug into slots in the Warm Electronics Box within the lander.
The sensor orientation was calculated by TUB making use of the available instrument deployment metadata, using the elevation and azimuth values to calculate the rotation and position matrices for each camera relative to the lander-fixed coordinate frame. The instrument calibration routines developed by TUB were integrated into the mass processing engine PRoViP to allow an operational “mass” processing of Pathfinder data directly through the scheduler and task distribution processing chain, reading the products to be produced from the PRoDB.
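A simplified sketch of how such a camera rotation matrix can be composed from azimuth and elevation deployment values is given below. The axis conventions and rotation order are assumptions for illustration and do not reproduce the mission-specific IMP/lander frame definition.

```python
# Compose a rotation matrix from azimuth (about z) and elevation (about the
# rotated y axis); conventions are illustrative, not the IMP calibration.
import numpy as np

def rotation_from_az_el(azimuth_deg, elevation_deg):
    az, el = np.radians([azimuth_deg, elevation_deg])
    Rz = np.array([[np.cos(az), -np.sin(az), 0.0],
                   [np.sin(az),  np.cos(az), 0.0],
                   [0.0,         0.0,        1.0]])
    Ry = np.array([[ np.cos(el), 0.0, np.sin(el)],
                   [ 0.0,        1.0, 0.0       ],
                   [-np.sin(el), 0.0, np.cos(el)]])
    return Rz @ Ry   # camera orientation relative to the lander-fixed frame
```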
The products consist of Panoramas, Mosaics, Stereo Panoramas and Stereo Mosaics. Figure 22 shows a full 360° monoscopic Panorama of the surroundings of Pathfinder. Stereo Panoramas were also automatically processed. Figure 23 shows the result of Stereo processing of a full 360° Stereo Panorama. The result is displayed as a spherical distance map centered in between the two cameras. The deployed rover can also be seen on the reconstruction. As part of the data processing workflow the 3D data was exported into the OPC format that can be read by the PRo3D viewer. Figure 24 shows the 3D data visualized in the PRo3D.

Figure 22: Full 360° Panorama automatically generated by processing chain connected to PRoDB

Figure 23: 360° Stereo Panorama, top: Ortho image of Spherical Distance Map, bottom: Spherical Distance Map

Figure 24: 360° Stereo Panorama exported in OPC Format viewed in PRo3D
4.1.3.12 Viking 1 & 2 Lander
The Viking Lander camera design was very different from framing or CCD array cameras. The lander carried a facsimile camera with a single, stationary photosensor array (PSA), and azimuth and elevation scanning mechanisms. A lander image was generated by scanning the scene in two directions (elevation and azimuth) to focus light onto the photosensor array.
Viking processing was done in close cooperation between JR and TUB: TUB had methods and software for sensor orientation and forward intersection for DEM generation, and JR did the identification of stereo pairs and the disparity generation. TUB analysed all the available Viking Lander images and created a list of stereo candidates. This search was performed by processing the footprints of the images and detecting the overlapping areas. The list contains all the image pairs which are theoretically suitable for stereo processing. Additional information such as image PDS paths, azimuth and elevation values is also included. In total, 1505 image pairs for the Viking 1 lander and 4426 image pairs for the Viking 2 lander were detected. JR performed feature point matching on the stereo candidates, followed by a homography estimation for outlier detection to reject stereo candidates without visual matches. This was followed by dense matching using JR’s HFVM matcher and a consistency evaluation of the calculated disparities. The final disparities were exported into a TUB format only for those disparities with high consistency values. Once the matching was performed by JR, TUB processed the matching results and calculated the 3D object points. Some of the resulting reconstructions can be seen in Figure 25.
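The screening idea (feature matching followed by a robust geometric check to reject candidate pairs without consistent visual matches) can be sketched with generic tools as below. OpenCV's ORB features and RANSAC homography are used purely as stand-ins for illustration; the project used JR's own matchers (e.g. HFVM) for the dense step.

```python
# Reject stereo candidates that lack geometrically consistent feature matches.
import cv2
import numpy as np

def is_plausible_stereo_pair(img1, img2, min_inliers=30):
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    if d1 is None or d2 is None:
        return False
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    if len(matches) < min_inliers:
        return False
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC homography: keep the pair only if enough matches are inliers.
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return mask is not None and int(mask.sum()) >= min_inliers
```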

Figure 25: (left) example left input IMP image, and (right) the resulting 3D stereo reconstruction
4.1.3.13 MER Processing
PRoViDE has used Pancam and Navcam images for the processing of MER image data. PDS entries of these instruments were inserted into the PRoDB, and standard queries provided by TUB were used to extract all meaningful standard products to be fed into the scheduler. Mosaics, full panoramas and stereo panoramas (for generating OPCs – ordered point clouds as needed for 3D visualization by PRo3D) were targeted and processed. For image orientation, the latest SPICE kernels, as generated by DLR from the MSSL co-registration to HiRISE, were used. Table 3 lists the processing statistics. A specific script was used to count the successful and failed processing threads; it gives information about the percentage of successful results, contains links to the output, temporary and log folders, and (in case of correct processing) reports the percentage of invalid pixels. Figure 26 shows an example of a full 360° Navcam panorama. Figure 27 displays the processing result of a 360° Navcam stereo panorama.
Table 3: MER standard products
Mission  Instrument  Product type                 Products  Produced (by 2015-11-27)  Success (%)
MER 1    navcam      Mosaic                        413       411                       99.52
MER 1    navcam      Panorama                      227       225                       99.12
MER 1    navcam      StereoMosaic                  863       849                       98.38
MER 1    navcam      StereoPanorama                245       243                       99.18
MER 1    pancam      MER_Multispectral_Mosaic      158       158                       100
MER 1    pancam      MER_Multispectral_Panorama      3         3                       100
MER 1    pancam      Mosaic                       2845      2837                       99.72
MER 1    pancam      Panorama                       44        38                       86.36
MER 1    pancam      StereoMosaic                  672       610                       90.77
MER 1    pancam      StereoPanorama                  4         4                       100
MER 2    navcam      Mosaic                        209       204                       97.61
MER 2    navcam      Panorama                      203       196                       96.55
MER 2    navcam      StereoMosaic                  291       285                       97.94
MER 2    navcam      StereoPanorama                124       120                       96.77
MER 2    pancam      MER_Multispectral_Mosaic      185       102                       55.14
MER 2    pancam      MER_Multispectral_Panorama      1         1                       100
MER 2    pancam      Mosaic                        833       833                       100
MER 2    pancam      Panorama                        7         6                       85.71
MER 2    pancam      StereoMosaic                  607       570                       93.9
MER 2    pancam      StereoPanorama                  5         5                       100

Figure 26: Panorama of Product ID 39403

Figure 27: MER-A Navcam DEM and Ortho image, Product ID 38958
Figure 28 shows a chart of all Panoramas produced for MER-A and MER-B along the respective traverses. The size of the shown blobs corresponds to the azimuth extension of the produced products.

Figure 28: Panorama Mosaics displayed on trajectory. Size of blobs corresponds to azimuth extension of panoramas
4.1.3.14 Phoenix Processing
PRoDB queries for stereo pairs found 171 stereo products from the Phoenix SSI (Surface Stereo Imager). Those were processed by standard PRoViP processing which resulted in 108 successfully produced stereo products. Figure 29 shows thumbnails of stereo disparities of 170 pairs. Figure 30 shows two views of one of the OPCs, namely Sol50_pid66750. This demonstrates that PRoViP is also able to process Phoenix SSI stereo pairs without modification of the software core and only minor changes in the parameter configuration.

Figure 29: Thumbnails of Phoenix stereo disparities

Figure 30: Example OPC: Sol50_pid66750 (contrast-adjusted screen shot to mitigate the shadow)
4.1.3.15 MSL Processing
PRoViDE has used Mastcam and Navcam for the processing of MSL data. All available PDS entries of these instruments were inserted into the PRoDB, and standard queries provided by TUB were used to extract all meaningful standard products to be fed into the scheduler. Mosaics, full panoramas and stereo panoramas (for generating OPCs – ordered point clouds as needed for 3D visualization by PRo3D) were targeted and processed. For image orientation, the latest SPICE kernels, as generated by DLR from the MSSL co-registration to HiRISE, were used.
MSL data is in parts very similar to MER data and in parts completely different, depending on the instrument the data comes from. The MSL Navcams are identical to the ones used for MER, therefore the Navcam data can basically be processed using the same parameters as the MER Navcam data. The only difference is the different geology and structure of the encountered rocks, which is taken into account by adjusting the matching configuration files.
Processing Mastcam data is, however, completely different from anything from the MER rovers, especially when trying to obtain stereo products. The MSL Mastcam stereo set-up consists of two cameras, with the right eye having a magnification factor of more than 3 in relation to the left one. This asymmetric set-up makes stereo matching and reconstruction a particular challenge (see Figure 31 for an example of the pre-registration approach used). Another issue, especially for Mastcam processing, is the thermal variation of the camera intrinsics and the relative orientation of the stereo system. This was mitigated by the pre-coregistration, which led to row disparities close to zero. An additional challenge of Mastcam processing is certainly the sheer amount of data needed to cover larger regions of interest. There are multiple Mastcam stereo mosaics within the database that consist of almost or more than 100 stereo pairs. Processing these is on the one hand a much greater challenge, but on the other hand it also leads to 3D products of an accuracy and richness that has never been seen before from Mars data.

Figure 31: Pre-registration by feature matching and up-sampling of Mastcam stereo images for dense matching
Based on the external camera calibration derived from the PDS headers and the disparity maps derived by dense matching, the scene can be reconstructed in 3D. The 3D points are projected onto a sphere to create a spherical Digital Elevation Model (DEM) and Ortho image. The DEM and Ortho image are then exported into the OPC format readable by the PRo3D viewer. This allows the 3D model to be displayed as shown in Figure 32.
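A minimal sketch of the spherical projection step follows, assuming points already expressed relative to the sphere centre; the grid resolution and the simple nearest-range accumulation are illustrative choices, not the PRoViP algorithm.

```python
# Project reconstructed 3D points onto a sphere around the stereo camera centre
# to form a spherical DEM (range raster) and an ortho raster.
import numpy as np

def spherical_dem(points, colors, az_res=0.05, el_res=0.05):
    """points: (N, 3) XYZ relative to the sphere centre; colors: (N,) grey values.
    Returns range (DEM) and ortho rasters indexed by azimuth/elevation bins in degrees."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.sqrt(x**2 + y**2 + z**2)
    az  = np.degrees(np.arctan2(y, x)) % 360.0
    el  = np.degrees(np.arcsin(z / rng))

    n_az, n_el = int(360.0 / az_res), int(180.0 / el_res)
    dem   = np.full((n_el, n_az), np.nan)
    ortho = np.full((n_el, n_az), np.nan)

    col = np.minimum((az / az_res).astype(int), n_az - 1)
    row = np.minimum(((el + 90.0) / el_res).astype(int), n_el - 1)

    # keep the nearest range per cell (simple z-buffer style accumulation)
    for r, c, d, g in zip(row, col, rng, colors):
        if np.isnan(dem[r, c]) or d < dem[r, c]:
            dem[r, c], ortho[r, c] = d, g
    return dem, ortho
```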

Figure 32: Reconstruction (OPC) of Mastcam Stereo panorama patch shown in PRo3D viewer
Table 4 lists the products as found automatically and defined in the PRoDB.
Table 4: Products defined in the PRoDB and processing results
Instrument Product Type Number of Products Successfully produced Success rate (%)
MSL mastcam Mosaic 1158 1144 98.79
MSL mastcam Panorama 31 18 58.06
MSL mastcam StereoMosaic 301 264 87.70
MSL navcam Mosaic 65 32 49.23
MSL navcam Panorama 9 6 66.67
MSL navcam StereoMosaic 576 514 89.24
MSL navcam StereoPanorama 99 99 100

Examples are depicted in Figure 33 to Figure 36:

Figure 33: Thumbnails of some MSL Mastcam panorama patches (left) and DEM ortho images (right)

Figure 34: Panorama of MSL Mastcam ProductID 47909

Figure 35: Example of a DEM (in spherical space around the centre between the two Mastcam stereo cameras) and Ortho image from one MSL Mastcam image pair

Figure 36: MSL Mastcam Panorama Mosaics displayed on trajectory. Size of blobs corresponds to azimuth extension of panoramas
4.1.3.16 Coordinate System Unification
PRoViDE has processed a major portion of image data obtained by landed planetary exploration missions. The final products are visualized in a regional to global context for better orientation and scientific analysis. Processing image data of different planetary missions and various sensors of one mission involves the usage of a variety of defined coordinate frames.
Coordinate transformation is performed in a hierarchical manner – subsequent transformations from one lower-level frame to the next higher-level frame or vice versa. The branch of the frame tree (Figure 37) is followed up to the first common coordinate frame, and the necessary branch is then descended to reach the desired frame.
As the Mars rover missions provide a very good and representative example for coordinate systems and frame definitions, focus is set onto coordinate systems used and applied within PRoViDE for these missions. For global context representation orbital image data is used. These are available as map projected image and Digital Terrain Model (DTM) data which introduces a map reference frame to be handled when geo-referencing rover visual data and bringing them into a regional context.
Coordinate frames in planetary science are right-handed and orthogonal, and coordinates can be expressed either as Cartesian or as spherical coordinates. When dealing with rover data only, the common reference frame can be the planetary body-fixed frame, as the spatial relation between rovers and the surface can be described directly in this frame. However, for context imagery based on orbital observations the highest-level frame, the ICRF (within SPICE also referred to as J2000) frame, is the only common frame between e.g. a spacecraft and the planetary body reference frame.

Figure 37: Schematic frame tree. The body fixed frame represents the common reference frame for products within the PRoViDE project.
Hence, transforming from an orbital camera to the body-fixed frame, or even to a rover observation, involves the transition to the spacecraft (s/c) frame, from s/c to the ICRF frame, from ICRF to the IAU body-fixed frame, to the landing site, and from the landing site to the rover and through the rover structure to the desired camera. The SPICE software and library performs these steps implicitly, based on information for each of the separate transformations stored in kernel files.
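The following sketch illustrates how such a chained transformation can be requested from SPICE (here via the spiceypy wrapper); the meta-kernel file name and the frame and body names are assumptions for illustration, not the actual kernel set used in PRoViDE.

```python
# Hedged illustration of walking the frame tree with SPICE (via spiceypy).
import spiceypy as spice

spice.furnsh("provide_meta_kernel.tm")          # assumed meta-kernel loading LSK, SPK, CK, FK, PCK
et = spice.str2et("2015-05-01T12:00:00")        # UTC epoch of the observation

# Rotation from an assumed rover navigation frame to the IAU Mars body-fixed
# frame; SPICE chains the intermediate transformations (rover -> landing site
# -> body-fixed, or camera -> spacecraft -> J2000 -> body-fixed) implicitly.
rot = spice.pxform("MSL_LOCAL_LEVEL", "IAU_MARS", et)

# Position of the spacecraft relative to Mars, expressed in the body-fixed frame.
state, lt = spice.spkezr("MSL", et, "IAU_MARS", "NONE", "MARS")
print(rot, state[:3])

spice.kclear()
```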
4.1.3.17 Ground / Orbit Fusion
UCL has developed an automated rover localisation tool to improve rover position knowledge. It creates a traverse in Mars global coordinates either from scratch, using “raw” rover and orbital image data, or by updating a pre-existing derived rover traverse (IBA – Incremental Bundle Adjustment). This provides corrections for systematic biases, leading to improved global geo-coordinate accuracy w.r.t. MOLA using HRSC-CTX-HiRISE and a reduced accumulated error for network-based (IBA/BA) methods. See Figure 38 for an example of MER-B localisation.

Figure 38: Example of MER-B localisation results showing co-registered HRSC-CTX (left), co-registered HiRISE-CTX (left middle), a comparison of rover traverses (right middle): IBA (red) and our method (green), and the registered HiRISE 3D model in PRo3D (right).
The HiRISE DEM and Ortho images are registered into the unique Mars Express HRSC geometry, and the MER positions are brought into the same coordinate context (Figure 38). This allows us to fuse ground-based 3D data with orbiter-based 3D products directly in 3D on a coordinate basis. The fusion of data is done in the PRo3D viewer as shown in Figure 39. Low-resolution and high-resolution surfaces are contained in different OPCs which are added to one PRo3D scene. Priority rendering helps to avoid high-resolution data being hidden behind low-resolution surfaces.

Figure 39: MER Pancam Stereo Mosaic fused with HiRISE Orbiter DEM on coordinate basis visualized in PRo3D
Figure 40 shows a final registration and fusion result of orbiter data, rover data and the registered traverse displayed in PRo3D.

Figure 40: Coregistration and fusion result displayed in PRo3D
4.1.3.18 Wide baseline stereo
The distance from the rover to the opposite wall shown in Figure 40 is about 50 metres. Using fixed stereo at such a distance is problematic: the ratio of the stereo baseline (around 0.3 m for MER Pancam) to the distance leads to large errors in the viewing direction of the rover due to glancing angles. This is also visible as 3D noise on the crater wall in Figure 40. To avoid this issue it is possible to generate 3D models from images taken at different rover locations. That way the baseline usable for 3D reconstruction can be increased, leading to a better geometric set-up and less noise in the reconstruction at larger distances.
The disadvantage of this approach is that, in fixed (small-baseline) stereo, the known baseline and calibration between the left and the right camera of the rover provide the scale of the resulting 3D reconstructions: the resulting models are represented in world coordinates and metric distances can be measured. When using images from different rover positions, the stereo baseline is given by the movement of the rover, which is only approximately known. The 3D configuration has to be calculated using a bundle adjustment process based on multi-image matches. The resulting model is in a local Cartesian coordinate system and has to be transformed back into the global Mars Cartesian system to allow measurements of distances and a fusion with orbiter data, as shown for the fixed-stereo case in Figure 40.
To illustrate this process, an example is shown in more detail. The input data used (Figure 41) shows the same wall as Figure 40. A larger baseline will reduce the noise in the reconstruction. The rover moved 2.39 m between the two positions shown in Figure 41; hence the baseline was increased by almost a factor of 8, leading to a much better geometry for the triangulation.
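A back-of-envelope calculation, using nominal Pancam values as assumptions, illustrates why the wider baseline pays off:

```python
# The depth error of a calibrated stereo pair grows roughly as
# dZ = Z^2 * d_disp / (B * f_px); focal length, pixel pitch and the assumed
# disparity uncertainty are nominal illustrative values.
Z      = 50.0            # distance to the crater wall [m]
f_px   = 0.043 / 12e-6   # focal length in pixels (43 mm lens, 12 um pixels), ~3583
d_disp = 0.3             # assumed disparity uncertainty of the matcher [pixels]

for B in (0.30, 2.39):   # fixed Pancam baseline vs. the rover-motion baseline
    dZ = Z**2 * d_disp / (B * f_px)
    print(f"baseline {B:4.2f} m -> depth error ~ {dZ:5.2f} m")
# ~0.70 m for the 0.30 m baseline vs. ~0.09 m for the 2.39 m wide baseline
```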

Figure 41: Monoscopic Mosaic from PanCam Sequence P2350 (left) and P2351 (right), each consisting of 6 images. Images taken at Sol 1060, Site 78, Drive 160 (P2350) and Sol 1061, Site 78, Drive 160 (P2351).
The input images from the two rover locations were matched to generate a set of sparse multi-image correspondences. These were fed into an iterative bundle adjustment process leading to a consistent local 3D model. This was done by CTU using their CMP SfM processing chain with the intrinsic parameters from PDS as an input. As a next step, a best-fitting transformation was calculated between the adjusted camera poses in the local system and the initial camera poses in the global system. This transformation was used to transform the adjusted cameras from the local system into the global one. The original images were then matched and reconstructed in 3D, applying the calculated transformation (see model in Figure 42). The benefit of using wide-baseline reconstructions is visible when directly comparing the wide-baseline result in Figure 42 with the small-baseline reconstruction shown in Figure 40.
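The best-fitting transformation step can be sketched as a standard similarity (Umeyama) alignment between the bundle-adjusted camera centres and their counterparts in the global frame; this is a generic formulation under that assumption, not the exact procedure used in the project.

```python
# Align bundle-adjusted camera centres (local frame) to the initial camera
# centres (global Mars frame) with a similarity transform (scale, rotation,
# translation), then apply it to the reconstruction.
import numpy as np

def similarity_transform(local_pts, global_pts):
    """local_pts, global_pts: (N, 3) corresponding camera centres."""
    mu_l, mu_g = local_pts.mean(0), global_pts.mean(0)
    L, G = local_pts - mu_l, global_pts - mu_g
    U, S, Vt = np.linalg.svd(G.T @ L / len(local_pts))   # cross-covariance SVD
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:                        # guard against a reflection
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / L.var(0).sum()        # isotropic scale
    t = mu_g - s * R @ mu_l
    return s, R, t

# Every reconstructed point is then mapped as X_global = s * R @ X_local + t.
```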

Figure 42: Wide baseline PanCam stereo panorama from Sol 1060 and 1061 fused with HiRISE DTM/ORI
4.1.3.19 PRoGIS 2.0
PRoGIS 2.0 is a new platform, completely rewritten by the University of Nottingham (UNOTT). It was equipped with the latest MER SPICE kernels provided by UCL using their localisation chain based on HRSC unified-coordinate-system orbiter products.
The system was rewritten from scratch with the following main features:
o Open Geospatial Consortium (OGC) compliant, support for IAU planetary datums
o Reverse Proxy to preserve data sources
o WMS, WFS, PostGIS, images (see the WMS request sketch after this list)
o Public/Restricted Access to Missions/Map
o Administration Interface
o 2 different levels of service: DataServer (QGIS-MapServer), WebServer (Apache 2.x)
o Python/Django Framework
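As an illustration of the OGC-compliant access, the following hedged sketch shows how a client could request a map layer from such a WMS endpoint using OWSLib; the endpoint URL, layer name, CRS code and bounding box are placeholders, not the actual PRoGIS 2.0 deployment values.

```python
# Request a context map for a rover region from an OGC WMS endpoint.
from owslib.wms import WebMapService

wms = WebMapService("https://progis.example.org/wms", version="1.1.1")  # placeholder URL

img = wms.getmap(
    layers=["mer_b_hirise_ortho"],       # hypothetical layer name
    srs="EPSG:104905",                   # an example Mars 2000 CRS code
    bbox=(-5.60, -2.10, -5.30, -1.90),   # illustrative lon/lat window around the traverse
    size=(1024, 768),
    format="image/png",
    transparent=True,
)
with open("mer_b_context.png", "wb") as f:
    f.write(img.read())
```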
In terms of integration the following aspects were considered.
o Personalized Maps
o Graphical Annotations
o Annotations and Maps shared
o Measurement Tools
o IAU datums supported through Proj4js
o Data layers also from external sources
o Transparency data layer supported
o Elevation Profile tool
o Point cloud support
o Panorama Support
The following added value components were also integrated.
o Annotation Tools
o Elevation Profile Tool
o Fulcra Viewer
o Panorama Display
o Internal Pointcloud viewer
o PRo3D Viewer Call
Figure 43 shows the main GUI of the new PRoGIS 2.0 system. The integration of the rover traverse, data products and annotations is illustrated as well. It is possible to make annotations to maps and measure profiles on an underlying HiRISE DEM.

Figure 43: Data and products displayed, profile tool to make measurements and plots in PRoGIS
4.1.3.20 PRo3D Visualization and geological analysis
The PRo3D Viewer is operational, and an installation is available for download for PRoViDE participants – it was extensively tested by JR and ICL. An issue tracking system was generated, in synchronization / synergy with ExoMars PanCam 3D Vision development.
The highlights of the developments are
o the Virtual exploration of fused multi-resolution planetary surface reconstructions,
o various measurement tools for comprehensive geologic analysis (see Figure 44) and
o Web access and platform independence by remote rendering.
Figure 44: Geological analysis within PRo3D

The requirements for PRo3D were the interactive rendering of multi-scale, multi-resolution 3D surfaces and the efficient navigation of the 3D scene to study rock outcrops from different perspectives. Geologists also need to accurately measure geological features with various tools and make annotations and localizations of geological phenomena.
Developments were therefore mainly driven by the geological science requirements: to directly measure the size and shape of an outcrop as well as the thickness and dip & strike values of structures, to investigate the geometrical relationships between stratigraphic units, to calculate transport directions responsible for the formation of structures, to estimate wind and water flow directions and locations, to locate the source of sediments, and to measure grain size and grain size variation.
The ability to display multi-resolution surfaces on PRo3D-level was demonstrated by making use of proper priority rendering schemes (Figure 44). The rendering of multiresolution surfaces at largely differing scales (Figure 45) was implemented using Levels of Detail rendering.
The main development work to meet the scientific requirements concerned measurements and annotations. Methods were implemented allowing the measurement of:
o Coordinates of surface points
o Distance from viewpoint to selected surface point
o Distance between two surface points
o Polylines and regions of interest
o Textual annotation
o Dip and strike to estimate paleo-transport directions
The distance can be measured as the direct distance between two points on the surface or as the true path length projected onto the ground (Figure 46). Figure 47, left and middle, shows the important dip and strike measurement method geologists use to estimate paleo-transport directions of flow material. The tools were tested and used by the geologists within the project, resulting in full-scale geological analysis models of different outcrops such as the one shown in Figure 47, right.
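The dip-and-strike computation behind such a measurement tool can be sketched as a plane fit to picked surface points followed by a conversion of the plane normal into dip angle, dip direction and strike; the axis and azimuth conventions below are assumptions for illustration, not the PRo3D implementation.

```python
# Fit a plane to picked bedding points and derive dip, dip direction and strike.
import numpy as np

def dip_and_strike(points):
    """points: (N, 3) picked 3D points along a bedding surface, local frame with z up."""
    centred = points - points.mean(0)
    # plane normal = right singular vector with the smallest singular value
    normal = np.linalg.svd(centred)[2][-1]
    if normal[2] < 0:                       # make the normal point upwards
        normal = -normal

    dip = np.degrees(np.arccos(np.clip(normal[2], -1.0, 1.0)))        # angle from horizontal
    # azimuth measured from the local +x axis; converting to a compass bearing
    # depends on the frame convention of the scene.
    dip_direction = np.degrees(np.arctan2(normal[1], normal[0])) % 360.0
    strike = (dip_direction - 90.0) % 360.0  # right-hand-rule strike
    return dip, dip_direction, strike
```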

Figure 46: Linear distance vs. projected way length

Figure 47: (left) Geological measurements and Annotations in PRo3D, (middle) Dip and strike measurements
4.1.3.21 Bidirectional Reflectance Distribution Function (BRDF)
A demonstrator for Martian skylight simulation was developed including a new illuminant model for Mars sol-light with colourimetric & radiometric processing and natural colour images & chromatic adaptation. An example of this development can be seen in Figure 48.

Figure 48: Martian skylight demo, left: sunset, right: midday
A new illuminant model for Mars was derived from existing Mars mission multispectral data; it defines correlated colour temperature and standard illuminants for Mars for radiometric & colourimetric applications (Figure 49, left). One application of the illuminant model in geology is the measurement of the reflectance spectra of certain rocks, from which the type of the minerals can be derived. Another application is chromatic adaptation, i.e. the question of what a rock imaged on Mars would look like on Earth (Figure 49, upper right). The opposite is natural or “true” colour imaging, which answers the question of what this rock would look like to one's own eyes on Mars (Figure 49, lower right).
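The chromatic adaptation step can be sketched as a von Kries adaptation in the Bradford cone space from a Mars white point to the terrestrial D65 white point; the Mars white point used below is a placeholder, not the illuminant derived in the project.

```python
# Von Kries chromatic adaptation in the Bradford cone space.
import numpy as np

BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def adapt_xyz(xyz, white_src, white_dst):
    """xyz: (..., 3) CIE XYZ colours; white_src/white_dst: XYZ white points."""
    lms_src = BRADFORD @ white_src
    lms_dst = BRADFORD @ white_dst
    M = np.linalg.inv(BRADFORD) @ np.diag(lms_dst / lms_src) @ BRADFORD
    return xyz @ M.T

white_mars = np.array([0.95, 1.00, 0.65])    # placeholder reddish Mars daylight white
white_d65  = np.array([0.9504, 1.0000, 1.0889])
earth_like = adapt_xyz(np.array([0.30, 0.25, 0.10]), white_mars, white_d65)
```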

Figure 49: (left) relative spectral irradiance over wavelength, (upper right) chromatic adaptation, (lower right) natural "true" colour imaging, false colour and natural colour
4.1.3.22 Remote rendering
Another important PRoViDE development was the possibility to call PRo3D via remote rendering. This gives access to PRo3D’s full functionality over the web and supports all platforms, including mobile devices. Figure 50 illustrates the basic principle and shows the remote rendering set-up from a technical perspective.

Figure 50: Remote Rendering concept

Potential Impact:
4.1.4.1 Exploitation policy
The PRoViDE project participants are ready to share the data produced and the tools generated with the general public and particularly the Planetary Science community. In this spirit, the PRoViDE web site (under “Solutions and Results”) contains the following terms:
The following elements are available upon request addressed to provide@joanneum.at:
1) Panoramas from selected MER (Pancam) and MSL (Mastcam) sites (available on a password-protected ftp server hosted at JOANNEUM RESEARCH - JR)
2) OPCs (Ordered Point Clouds) from selected MER (Pancam stereo) and MSL (Mastcam stereo) sites (available on a password-protected ftp server hosted at JR)
3) Non-commercial license for PRo3D viewer to be able to visualize Ordered Point Clouds OPCs and perform geologic annotations (granted by VRVis on bilateral basis) - Non-liability terms will be negotiated for each individual license grant
4) Access to a PRo3D remote rendering server located at UCL/MSSL, being able to load OPCs from representative areas of the two MER missions, the MSL mission and representative orbiter scenes (HiRISE/CTX/HRSC) as resulting from PRoViDE and iMars processing
5) SPICE kernels for MER Pancam and MSL Mastcam positions, co-registered to HRSC / MOLA (available on a password-protected ftp server hosted at JR)
6) HiRISE super resolution data sets from Home Plate (MER-A) and Victoria crater (MER-B) areas (available on a password-protected ftp server hosted at JR)
7) Shape-from shading enhanced DTM/ORI from Victoria Crater area (available on a password-protected ftp server hosted at JR)
8) User credentials for access to PRoGIS2.0
The following products are available from the MExLab Planetary Data Geoportal http://cartsrv.mexlab.ru/geoportal/:
9) Newly processed archive Lunokhod panoramas with corrected metadata and a detailed morphologic description, including the coordinates of the observation points.
10) Newly assembled original Lunokhod panoramas (available upon special request addressed to MIIGAiK MExLab / i_karachevtseva@miigaik.ru)
11) High-resolution LRO NAC DEMs (obtained from stereo processing) and orthomosaics for the Lunokhod-1 and Lunokhod-2 landing sites.
12) Vector files (shapefiles) with reconstructed digital traverses of Lunokhod-1 and -2, identified manually on LRO NAC images.
13) Shapefiles with the coordinates of panorama observation points and points of interest, measured in GIS in the lunar ME coordinate system.
Any publication work resulting from the use of the named data and tools must contain acknowledgement to the PRoViDE project / EU funding / the specific originators of the tools and data. The following terms apply for the individual items as mentioned above (in addition to standard data origin terms, such as Courtesy NASA / JPL / CalTech / Arizona State University):
1), 2) Data courtesy: EU-FP7 Project 312377 PRoViDE Consortium, www.provide-space.eu / JOANNEUM RESEARCH.
3) Credits: PRo3D by VRVis; EU-FP7 Project 312377 PRoViDE Consortium, www.provide-space.eu.
4) Credits: PRo3D by VRVis; EU-FP7 Project 312377 PRoViDE Consortium, www.provide-space.eu; EU-FP7 Project 607379 iMars Consortium, www.i-Mars.eu
5) Data courtesy: EU-FP7 Project 312377 PRoViDE Consortium, www.provide-space.eu / University College London / DLR
6) Data courtesy: EU-FP7 Project 312377 PRoViDE Consortium, www.provide-space.eu / University College London
7) Data courtesy: EU-FP7 Project 312377 PRoViDE Consortium, www.provide-space.eu / Aberystwyth University.
8) Credits: PRoGIS2.0 by the University of Nottingham & University College London; EU-FP7 Project 312377 PRoViDE Consortium, www.provide-space.eu.
9) to 13): Data courtesy: EU-FP7 Project 312377 PRoViDE Consortium, www.provide-space.eu / Moscow State University of Geodesy and Cartography, MIIGAiK Extraterrestrial Laboratory (http://mexlab.miigaik.ru/eng/)

4.1.4.2 Potential Impact / Key Dissemination to Users
Key Dissemination to Users among others took place at the following events (see the filled list in SESAM for a comprehensive table):
o All members of the PRoViDE consortium presented project work at a workshop at Arizona State University in April 2015.
o Rob Barnes, Michele Giordano, Irina Karachevtseva, Jan-Peter Muller and Yu Tao presented PRoViDE work at the ESA Planetary GIS workshop, ESAC, Madrid, May 5 – 7, 2015.
o There were numerous contributions from PRoViDE members Gerhard Paar, Jan-Peter Muller, Yu Tao, Thomas Ortner, Rob Barnes, Konrad Willner, and Michele Giordano, together with the CROSS-DRIVE project at EPSC 2015 in Nantes, France.
o At EGU 2015 in Vienna a poster was presented by Natalia Kozlova, and a presentation was held by Christoph Traxler.
o At the ISPRS WG IV/8 meeting in Berlin in September 2015 Irina Karachevtseva, Andrey Garov and Natalia Kozlova gave presentations.
o A presentation was given at the Astrobiological Society of Britain to show how PRoViDE tools, particularly PRo3D, are intended for use in the future exploration for life: Barnes et al. 2015, “Using rover stereo-imagery to assess the habitability of ancient aqueous environments on Mars.”
o Posters were presented at GSA 2015 (Baltimore, 1-4th November 2015) and AGU 2015 (San Francisco, 12-16th December 2015): Barnes et al. “PRo3D®: A Tool for High Resolution Rendering and Geological Analysis of Martian Rover-Derived Digital Outcrop Models” (see Figure 51). Sanjeev Gupta also used PRo3D data in his talk at the AGU 2015 meeting.
o A poster was presented at the William Smith Bicentennial Meeting, Geological Society of London at Burlington House, London (5th November 2015) - Barnes et al. 2015, “New tools to map the geology of Mars”
o PRoViDE datasets were shown at the Swiss Geoscience Meeting in Sanjeev Gupta’s plenary talk: “Exploring Mars”, with 500 people in the audience (20 November 2015).
o Sanjeev Gupta attended the William Smith celebration meeting at the Royal Astronomical Society on 11 December 2015 to give an invited talk: “The adventures of William Smith and Curiosity on Mars - a re-imagining”. PRoViDE data were presented.
o Sanjeev Gupta presented PRoViDE results at a Royal Society Discussion meeting on “Water in the Solar System” in his talk on “Water on Mars” on 1st February 2016.
o Gupta and Barnes presented PRo3D to 30 members of the UK Govt. Treasury Enterprise and Growth Unit on 9 February 2016.
o Sanjeev Gupta will present PRoViDE results at an invited talk for Heads of UK Geoscience departments (Doing field geology on Mars: Curiosity's exploration of Gale crater) on 24 February 2016.
o Sanjeev Gupta presented PRoViDE results at an invited talk “Geology in the picture: The adventures of the Curiosity rover in Gale crater, Mars' at the University of Ghent, Belgium on 4th March 2016.
o Gerhard Paar from JR is following up a contact forwarded by Sanjeev Gupta concerning a music project worked out by the German rock band ACCEPT, using PRo3D renderings as visual theme in the video.
o Future dissemination will involve Barnes leading a paper on “How to use and exploit PRo3D for geological analysis” with a first draft due by the end of February 2016. Plans are also in place for Barnes to contribute PRo3D analyses to a paper on structural analysis of the “Garden City” veins encountered by the MSL rover around Sol 926 of the mission.
o PRo3D performance was also validated on a multi-screen system hosted at Imperial College London, see https://www.imperial.ac.uk/data-science/about-the-institute/facilities/kpmg-data-observatory-/ – this can be further used for demonstrations for the public, academia and stakeholders
Examples of Future Dissemination Plans
Future dissemination plans include:
o Development of distinct geological analysis exercises like the Victoria crater exercise for future training needs
o Targeting key conferences and workshops for data-rich presentations of PRoViDE tools
o Holding small workshops for hands-on training of tools – e.g., discussions underway with UKSA for training of young scientists
o A hands-on introduction to PRoGIS and PRo3D at MSL team meeting in April 2016 by Gupta and Banham (Gupta’s Postdoc on MSL)
o Contributing to the Europlanets Workshop in June 2016 led by Muller – training in PRoGIS and PRo3D
o Publications on applications of tools
o Publications that utilise tools for science – see some exploitation in Section 4.1.4.3.
o Use in public outreach/engagement activities – couple with Gupta’s MSL and ExoMars activities.
o A possible data exchange with, and PRoViDE support of, the third use-case test of the FP7-SPACE project CROSS-DRIVE, which will take place in July 2016, was discussed.
o VRVis have started to release PRoViDE 3D data (based on VRML) on Sketchfab, see https://sketchfab.com/models/eb400670f8634f94a8ece2b1516cabf4

Figure 51: AGU 2015 Conferences contribution Poster, Page 1 and 2
4.1.4.3 Spin-In & Spin-Off
The following is a brief summary of the spin-in and spin-off activities of PRoViDE:
- The exploitation policy, particularly for access to data and tools, has been discussed. The policy of data and services distribution, such as defining copyright / credits granting (e.g. acknowledgement of data sources, reference to originators or to publications that describe the PRoViDE tools, mutual or bilateral agreements with the groups using the data and tools, DOI policy, etc.), is being progressed after the project.
- A possible list of planetary scientists being candidates for PRo3D exploitation has been assembled.
- Plans for access of the MSL science team to PRo3D have been made – PRoViDE has already received encouraging statements from prominent MSL scientists concerning the impact and possible use of the tools and data.
- Stemming from contacts to JPL (AB member Bob Deen) discussions were launched concerning PDS4 and European mission scientists and instrument teams to exploit the AMMOS PDS pipeline System (APPS) as currently under finalization by JPL.
- The FP7 project iMars, coordinated by Jan-Peter Muller from UCL, plans to make the UCL/MSSL GPU-based PRo3D system available and, in a separate development plan, to develop an AR (Augmented Reality) based system using the new UCL East campus and facilities at the London Olympic Park, as well as the main UCL campus, for future demonstrations of ExoMars results. Muller continues his long-standing collaboration with JPL and is actively seeking future avenues of exploitation with close colleagues at JPL.
- JR, UCL and DLR are embedded into the Mars2020 Mastcam-Z team as Co-investigators. JR & VRVis have already received an ESA/PRODEX contract to follow-up PRoViDE and ExoMars PanCam 3D vision work and support the Mastcam-Z team with (mission-critical) 3D reconstruction software, building upon PRoViP and PRo3D.
- JR and VRVis are contributors to the ESA ExoMars PanCam Team under ESA/PRODEX contract: PRoViP will be the main 3D vision pipeline for PanCam data processing, PRo3D will be used for geologic assessment of stereo products.
- Prof. Gupta from ICL has been involved as a proposer of landing sites for the NASA 2020 rover (landing sites: Hypanis delta, Eberswalde delta, and Jezero crater). This is being supported by the use of PRo3D on HiRISE data of these areas.
- JR intend to use parts of the PRoViDE achievements for involvement in future missions such as Lunar missions or Mars Sample Return.
- MIIGAiK will implement the developed techniques and obtained results in the frame of future Russian and international (involving Russia) missions such as Luna-Glob (2018-2019), Luna-Resource Orbiter and Lander (2020) and ExoMars, including:
o Stereo photogrammetry orbital image processing.
o Photogrammetric surface panoramic image processing.
o Methods and GIS-technique of rover localization.
o Geomorphology analysis of lunar surface for landing site selection and characterization.
o Geodatabase and data import engine.
- Aberystwyth University has ambitions to exploit the work done as part of PRoViDE, especially in the areas of Shape-from-Shading (SfS) and Illumination Models. The development of the SfS algorithms and supporting software in particular was an important result of PRoViDE for AU.
- MSSL in collaboration with ISI Limited have recently completed an InnovateUK sponsored project to develop an automated crack detection system for molten steel. This has led to a subsequent project to apply some of the approaches developed in PRoViDE to process large numbers of images of molten steel containers.
- JR, using some technologies developed in PRoViDE (e.g. the 3D data structure gpcx, panorama mosaicking), have developed together with a UK company (mediated by UCL) a system for stereo reconstruction of hot ladles for frequent inspection under hot conditions.
- The UK national mapping agency, the Ordnance Survey, has expressed strong interest in the application of SRR to aerobot UAV imagery, whilst a UCL spin-off company now based in Silicon Valley, Marvx, is very interested in how this could be employed for repeat imagery from the Landsat-8 or Sentinel-2 satellites; discussions are underway with Google Skybox about how this could be employed to improve resolution from their EO platforms. Human Rights Watch, an NGO based in New York and Switzerland, has provided MSSL with large numbers of test samples from Pleiades repeat images at 70 cm for testing and is setting up a UAV test flight in Spain to assess whether low-resolution imagery from UAVs can be enhanced sufficiently for their purposes.
- UK Space Agency has funded Gupta and Barnes for 3 months (Jan-March 2016) to develop and test a set of three geological training exercises based on Mars Science Laboratory (MSL) rover-derived image data using PRo3D. These exercises will form the basis of future workshops to train UK scientists in the geological analysis of rover-derived image data in preparation for the ExoMars 2018 mission.
- In this frame, Rob Barnes is continuously reporting to JR and VRVis to trigger improvements within PRo3D that are realized as part of the Austrian 3D vision developments for ExoMars PanCam and Mastcam-Z.
- A proposal to the UK Space Agency (UKSA) led by Gupta, Gunn, and Muller with Barnes and Tyler as PDRAs entitled “Quantitative 3-D analysis and validation of terrestrial analogues for Martian habitable environments in preparation for the 2018 ExoMars rover” has been successfully reviewed by UKSA. The project will collect new 3D data using the ExoMars Pancam emulator AUPE2 of actual geological outcrops on Earth and compare geological analysis results between outcrop field measurements and those derived from PRo3D measurements of the point cloud data of the same outcrops.
- Same way, AU and UCL were successful in their UKSA proposal “Scientific Integration and Exploitation of ExoMars PanCam, ISEM, and CLUPI” which will follow up relevant pieces of PRoViDE R&D in the field of data fusion.
- Both above listed activities are highly relevant also for JR and VRVis having their (ESA/PRODEX-) funded counterparts in Austria for ExoMars PanCam and Mars 2020 Mastcam-Z 3D vision processing and visualization, being able to directly collaborate with the UK partners in a similar way as in PRoViDE.
- Following on from the spectacular results of PRoViDE for the rover traverse processed to 5 cm (MER-A), an application to the UK Space Agency Mars exploration call has been submitted by MSSL to port the SRR technique to the GPU array being used for PRo3D® rendering. The target was to be able to process full HiRISE image stacks to the highest possible resolution within a few hours rather than the present several months for a single HiRISE scene. Initially, the HiRISE data stacks for the 3 rover sites employed in PRoViDE would be processed and added to the PRoGIS website. Unfortunately the proposal was not successful; however, ESA have already expressed a very strong interest in applying SRR to the proposed ExoMars 2018 rover landing sites if suitable HiRISE images can be acquired. There are around 400 sites on Mars where 5 or more HiRISE images have been acquired. Such an activity could process as many of these as feasible with the GPU array and make them available to the scientific community through PRoGIS 2.0 at MSSL. The idea is being followed up in future funding possibilities.
- The UKSA application also proposes the development of SRR for CaSSiS onboard the Exomars Trace Gas Orbiter 2016 (EMTGO16) to enable images up to 1m to be produced from repeat pass and multi-look CaSSiS imagery. Similar prospects apply.
- JR have agreed to make PRoViP parts available to UCL that are able to convert HiRISE DTM / ORI to OPC for PRo3D visualization, which is an excellent opportunity for data presentation and validation, specifically in the iMars project coordinated by UCL. It is intended to realize it in the frame of the first two quarters of 2016. In turn, JR is able to use the OPC data sets for further fusion tests in the frame of their ExoMars PanCam and Mars 2020 Mastcam-Z 3D vision involvement.
- The EU-FP7 iMars project led by MSSL includes the construction of a webGIS which re-uses much of the architecture from PRoGIS. The same developer based at Nottingham University is also looking into how the iMars webGIS (developed mainly at the Freie Universitaet Berlin) can be interfaced to the PRoGIS 2.0 system so that planetary scientists can explore the geological context from iMars of the whole region that the rovers are in whilst looking in detail at the rover imagery. Elements of PROGIS are also being re-used at MSSL for a WMS server to deliver “as processed” orthorectified and co-registered (to HRSC DTM and ORI products) from all the 5 NASA missions which have visited Mars. It is planned to further develop PRoGIS for other planetary bodies, particularly the Moon in the future as and when suitable funding opportunities occur.
- JR and VRVis, together with an Austrian industrial partner, submitted in March 2016 a proposal to an Austrian funding platform to further develop PRoGIS as a science-cooperation asset in ExoMars. The idea of PRoGIS-3.0-EM is to make PRoGIS and PRo3D available to the ExoMars science team. PRoGIS & PRo3D will be extended in terms of data management to support ExoMars landing site selection, and during the first months of the ExoMars operational phase they will be introduced as a unique environment where the science teams can maintain and share their science data based on the spatial context of the remote sensing data products available by then (HiRISE, HRSC).
4.1.4.4 Validation by Students
The PRoViDE Summer School, which took place in September 2015 at TUB, was the main asset for validation. A number of issues were considered when planning the Summer School, e.g. the right mix of participants with a planetary science background and participants with an engineering background, the presentation and evaluation of new tools developed within PRoViDE, testing work with new software as well as new data, and making the content of the Summer School exciting.
The final programme contained the following elements (see the cms directory containing all material of the Summer School http://3dvision.joanneum.at/projects/provisxxx/internal/meetings/2015-09-21-to-23-provide-summer-school-berlin).
- An introductory session about the Summer School objectives and logistics
- Presentations about planetary rover missions & data
- An introduction into the PRoViDE tools’ and data framework including demonstrations of PRoGIS and PRo3D
- Basics about geological interpretation of Planetary image data and 3D vision products, as well as Lunar surface interpretation and MSL
- Exercises in PRo3D
- Practical lessons and exercises in geologic data interpretation using PRo3D
- Filling of questionnaires
- Presentations by students & feedback
The Summer School had 12 participants in total, 5 of them directly related to PRoViDE institutions and 7 not related to PRoViDE. The background of the participants was a balanced mix of photogrammetry, engineering, cartography, planetary sciences and planetary geology. The participants were at different stages of their research careers, including BSc and PhD students, engineers, as well as PhD graduates working as researchers.
The Summer School took place on the premises of the TU Berlin, where a room equipped with a beamer, a smartboard and seating for approx. 26 persons was made available to the PRoViDE team. Remote rendering was supported by the installation of virtual machines at UCL that were remotely accessed by the participant groups. For the on-site local installations some PRoViDE partners brought additional laptops to enable a smooth parallel operation of the tools. In the end, 8 workspaces were provided for the participants.
During the PRoViDE Summer school mainly the geologic interpretation of data was driving the project’s validation, therefore the following elements were involved:
- Expertise of PRoViDE partners as information source in a first round of presentations
- Using highly fused data processing results (MRO orbiter HiRISE and Rover Pancam imagery) from a specific region of the MER-B (Opportunity Rover) with the novel processing mode of large base-length stereo 3D reconstruction in the area of Victoria Crater
- Addressing the data selection mechanisms of PRoGIS 2.0
- Using / testing and exploiting the data presentation mechanisms of PRo3D.
The layout of the whole PRoViDE context – processing in PRoViP, data representation in PRoGIS, and data presentation in PRo3D – was explained in the JR presentation about the PRoViDE project as a whole. PRoViP itself was kept out of the loop in this environment, and the participants worked purely with PRoGIS for data selection and PRo3D for 3D data analysis and geologic interpretation.
To allow access to the tools, each participant was given credentials for the MSSL remote rendering system for the duration of the workshop. ICL had prepared a thorough list of geology exercises that were followed in the practical lessons and executed by the participants.
The primary purpose of the exercise prepared for the Summer School was to guide the participants through a typical geological rock outcrop analysis workflow using the tools provided in PRo3D. This involved an initial determination of the scale of geological features in the 2D images they were provided with, by locating the same features in the 3D scenes in PRo3D and measuring them directly using the line measurement (point-to-point) tool. This was followed by a characterization of the principal lithological variations using detailed Microscopic Imager and Pancam images.
The evaluation of PRo3D was done in parallel by several groups of participants and led to several questionnaires as well as a summary final presentation by the participants; a number of comments therefore overlap or are related. A consolidated summary of the issues and suggestions presented during the final presentation and taken from the suggestion field of the questionnaires was compiled and formulated in the D7.41 Validation Report, leading to a series of JIRA issue-tracking entries in the PRo3D development thread.
The questionnaires have been constructed to answer some of the questions appearing during the development of the software tools as well as when the hands-on sessions have been designed.
Based on the questionnaire one can conclude that the overall experience of the participants was very positive (Figure 52), though the “emotions” graphic shows that the first day, which mainly comprised lectures, did not fully meet the expectations of the students, leading to a moderately happy experience.
The evaluation of the PRoViDE tools and approach to data serves as a very useful source of information for improving the tools. Participants with a background in planetary geology concluded that PRoViDE developed unique tools that allow access to existing as well as new products in an unprecedented way, by providing integrated access to 3D structures together with the relevant imagery and by creating and managing metadata produced by the scientific analysis of planetary data.

Figure 52: Left: Participants of the PRoViDE Summer School with some of the lecturers. Right: Evolution of feelings about PRo3D
In addition to the validation by students, the review meeting in July 2014 expressed the need for additional activities in the validation of the PRoViDE tools and results. This resulted in the D7.42 “Evaluation & User Validation” document, which addressed a couple of important aspects in this respect:
• Individual processing elements such as stereo matching and super resolution, their products, and access thereto
• Accuracy aspects for the Lunar data products
• A list of items that PRoViDE claims as being beyond the state of the art
• Statements about the parameters of 3D imagery data for application to geological analyses, and the gain in PRo3D during the project
4.1.4.5 Science Exploitation
In addition to data provision, PRoViDE has also made available tools and data presentation means to browse and exploit the generated data products in their spatial context.
Data and tools were already used within the PRoViDE project to demonstrate their added value in a set of scientific exploitation cases. Although PRoViDE is not per se a planetary science project, the scientific use of the tools and data was thoroughly demonstrated, and the added value of the PRoViDE results could be shown. Various modes of geologic analysis on Mars and the Moon are demonstrated, including rock size-frequency distribution estimation and digital outcrop analysis. Apart from planetary science, the derivation and development of new computer vision techniques for 3D reconstruction from planetary surface imagery are also shown.
Extrapolating from the use cases immediately addressed within PRoViDE, candidate uses of the PRoViDE results and exploitation modes were identified, both for forthcoming planetary missions and for related research projects:
PRoViDE tools have been tested on three case studies; Victoria Crater, Yellowknife Bay and Shaler. Victoria Crater, in the Meridiani Planum region of Mars, was visited by the MER-B Opportunity Rover. Erosional widening of the crater produced <15 m high outcrops which expose ancient Martian aeolian sedimentary strata. Yellowknife Bay and Shaler were visited in the early stages of the MSL mission, and provide excellent opportunities to characterise Martian fluvio-lacustrine (ancient river and lake) sedimentary features.
The PRo3D geological analysis workflow beginning with selection within PRoGIS to final geological assessment within PRo3D has been defined and described (Figure 53).

Figure 53 : Various elements of PRo3D geological analysis. Left: Interpretation workflow for geological analysis of 3D DOMs of Martian rock outcrops in PRo3D. Upper middle: PRoGIS interface illustrating the summoning of a Digital Outcrop Model (DOM) within PRo3D by selecting the footprints of the stereo imagery used to create the OPCs. Upper right: Dip and strike is calculated on the main stratigraphic boundaries as well as the internal laminations and cross-laminations. The coloured planes are the best fit planes representing the dip and strike calculations. The blue lines show the strike direction, and the arrows in the same colour as the plane show the dip direction. Lower middle: Rose diagram of the dip directions of laminations in the Shaler outcrop, measured in PRo3D. Lower right: Detailed interpretation of the stratigraphy at Shaler, showing the main stratigraphic boundaries as red and blue lines, bedset boundaries as thick white lines, and laminations within those bedsets as the thin white lines. The dip and strike values are colour coded by dip value, and generally dip 15° - 20° to the southeast, however, this requires validation. The findings are consistent with those in Grotzinger et al., (2014) and Anderson et al., (2015) in that the outcrop represents a fluvial environment, with recessive, fine grained units interlayered with coarse, pebbly units.
Comparisons of the PRo3D-based assessments with existing planetary geology papers were made (Figure 54 and Figure 55) and showed good agreement, see Table 5 and Table 6. Building upon this experience, further potential PRo3D geologic exploitation use cases were identified for MER-A, MER-B and MSL, such as Endurance Crater, Erebus Crater, Home Plate, Comanche Outcrop, Garden City, and Shaler.

Figure 54. Interpretation of Cape Desire from Hayes et al., (2011). The coloured dots represent locations at which dip and strike values were calculated.

Figure 55. Interpreted 3D scene of Cape Desire in PRo3D. The stratigraphy has been correlated with the Duck Bay reference section. Cross-lamination patterns were mapped (thin white lines) in order to locate the bedset boundaries (thick white lines). These form 3 - 6 m thick preserved bedsets. The dip and strike of the boundaries and laminations have been calculated and show a steepening down the section. At the base of the outcrop, dip values reach up to 43°, exceeding the angle of repose, inferring some kind of steepening as a result of rotation or faulting (Hayes et al., 2011).
Table 5. Comparison of observations made at Cape Desire by Hayes et al., (2011) with those made in PRo3D in this research.
 | Hayes et al., 2011 | This research
Outcrop dimensions (m) | 12 vertically | 13.75 vertically
Stratigraphic interpretation | Three units | Duck Bay stratigraphy identified
Unit thicknesses (m) | Unit I - 1.5; Unit II - < 1.5; Unit III - 4.5 | Steno - 0.79 to 1.05; Smith - 0.37 to 0.71; Lyell - 12.75; Steno and Smith combined (Unit I in Hayes et al., 2011) - 1.43 to 1.6
Sedimentary structure interpretation | Two bedsets underlying upper layer | 5 bedsets identified in Lyell Member
Bedset thicknesses (m) | Unit II - < 1.5; Unit III - 4.5 | Unit I - 1.67 to 1.92; Unit II - 2.25 to 2.99; Unit III - 2.15 to 3.63; Unit IV - 2.98 to 3.03
Surface slopes | N/A | N/A
Dip and strikes of boundaries | Top Unit I (Steno) - < 10° to NNE, Top Unit II/III - 6° to NW | Top Steno - 5° to 264°, Top Smith - 4° to 263°, Top Lyell - 6° to 285°

Table 6. Comparison between observations made at Duck Bay by Edgar et al., (2012), with those made in PRo3D as part of this research.
 | Edgar et al., 2012 | This research
Outcrop dimensions | N/A | N/A
Stratigraphic interpretation | 3 units - Lyell, Smith and Steno | 3 units - Lyell, Smith and Steno
Unit thicknesses (m) | Lyell - 1.8; Smith - 0.8; Steno - 0.7 | Lyell - > 0.92; Smith - 0.48; Steno - 0.57
Sedimentary structure interpretation | N/A | N/A
Bedset thicknesses (m) | N/A | N/A
Surface slopes | 12° - 25° | 12° - 29°
Dip and strikes of boundaries | Lyell-Smith - 2° to ~270°, Smith-Steno - ~10° to ~160° | Lyell - 2° to 246°, Smith-Steno - ~24° to 110°
Geological analysis of Lunar orbital imagery and of the panoramas received from the rovers was performed to identify the structure and processes of the evolution of surface topography elements of the lunar maria and highlands. This study includes a morphological analysis of the micro-relief and of the rocks composing it. The brief geologic description of the panoramas included the following characteristics:
o The general description of surrounding terrain - mare, highland.
o Characteristics of the environment: characteristics of the relief, such as flat smooth plains or rugged terrain.
o Prominent geologic objects - big craters, mountains, large rock fragments.
o The description of neighboring craters, including their size, class, type.
o The description of rock fragments, including their size, class, type.
o The description of the ground, including its structure and the characteristics of rover tracks.
A study conducted by MIIGAiK included a geomorphic classification (craters, rock fragments – Figure 56, and regolith), resulting in a catalogue for the morphologic description of the Lunokhod-1 and Lunokhod-2 panoramas assembled in the course of PRoViDE.
Morphological classes of rock fragments: Class 1 – Angular (smooth surface with primary cleavages); Class 2 – Angular-rounded (microcrater-pitted surface with primary cleavages and secondary, in-situ cracks); Class 3 – Rounded (densely pitted surface with irregular cracks).
Morphological types of rock fragments: Type I – Irregular; Type II – Pyramidal; Type III – Prismatic; Type IV – Flattened.
Figure 56. The morphological types and classes of rock fragments on panoramas
UCL have studied the potential of SRR imagery to improve knowledge of rock size distributions, which is critical for understanding the geological history described in (Golombek et al., 2008) as well as the potential navigability of the surface. Figure 57 shows that in 25cm HiRISE images, rocks less than 150cm diameter are hard to detect, whereas in 5cm SRR images, rocks larger than 50cm diameter are fully resolved.
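A simple sketch, assuming a list of detected rock diameters and a mapped area, shows how a cumulative size-frequency distribution can be derived from such detections; relating this to the cumulative fractional-area model of Golombek et al. (2008) would additionally require the rock areas.

```python
# Cumulative rock size-frequency distribution (number density per unit area)
# from automatically detected rock diameters; all input values are illustrative.
import numpy as np

diameters_m = np.array([0.55, 0.62, 0.71, 0.80, 0.95, 1.10, 1.40, 1.80])  # detected rocks
area_m2 = 400.0                                   # area of the mapped patch

thresholds = np.arange(0.5, 2.1, 0.1)             # diameter thresholds D
cum_density = [(diameters_m >= d).sum() / area_m2 for d in thresholds]

for d, n in zip(thresholds, cum_density):
    print(f"N(>= {d:4.2f} m) = {n:.4f} rocks / m^2")
```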

Figure 57: Left: Automatically detected rocks (labelled green) in the 25cm HiRISE image (PSP_001513_1655) with a 20-pixel grid (5 m) around an impact crater close to the MER-A traverse at ~ (175.51045º, -14.58461º). Right: Automatically detected rocks (labelled green) in the 5cm SRR image with a 20-pixel grid (1 m) around the same impact crater.
As already described in Section 4.1.3.5 the Beagle 2 Lander was found with the help of Shape-from-Shading techniques by AU.
In the frame of the tasks related to the work described in Section 4.1.3.3, the application of computer vision techniques turned out to be a scientific undertaking in itself, due to the unexpected challenges of the serendipitous stereo search.

List of Websites:
http://www.provide-space.eu
Project Coordinator:
Joanneum Research Forschungsgesellschaft mbH.
Steyrergasse 17 A-8010 Graz
DI Gerhard Paar
Tel: + 43 316 876 1716
Fax: + 43 316 876 1720
E-mail: gerhard.paar@joanneum.at

The PRoViDE website http://www.provide-space.eu/ was set up at project start and has been maintained and filled with content during the whole project runtime. JR agreed to further maintain the PRoViDE website for a couple of years after PRoViDE termination.