
Advanced Thermosphere Modelling for Orbit Prediction

Final Report Summary - ATMOP (Advanced Thermosphere Modelling for Orbit Prediction)

Executive Summary:
The “Advanced Thermosphere Modelling for Orbit Prediction” (ATMOP) research project aimed to build a new thermosphere model with the potential to spawn an operational version. This enables precise air drag computation, which is mandatory for improved surveillance and precise tracking of objects in Low Earth Orbit (LEO) and for the initiation of appropriate measures to minimise risks to satellites (tracking loss, collisions) and ground assets (re-entry zone).

The ATMOP project was defined to meet this need through several objectives, which have resulted in the following main outcomes:

• Defining and assessing new proxies to describe the external forcing of the thermosphere, with improved nowcast and forecast capabilities for both solar and geomagnetic proxies. Several proxies were considered during the activity: a Solar Spectral Irradiance model (nowcast and forecast) covering the full EUV spectrum, and new geomagnetic indices with better spatial and temporal resolution. The recommended EUV proxy is the radio flux at 30 cm; the recommended geomagnetic proxies are the am and αm planetary indices (3-hour basis).

• Developing an advanced semi-empirical Drag Temperature Model (DTM) that improves orbital propagation accuracy. The delivered model makes use of the new proxy schemes mentioned above and profits from a large dataset of densities from several in-orbit satellites. Comparison of the developed models with the COSPAR reference models shows a significant improvement in DTM accuracy: DTM2013 is the most accurate model overall in the altitude range 250-1000 km and an overall improvement over DTM2012; JB2008 is better at low altitudes when compared with the EDR densities, but much worse for the high-resolution GOCE densities; at high altitudes NRLMSISE-00 is sometimes the most accurate.

• Improving physical modelling of the thermosphere to assist the development of the advanced DTM and of a global physical model with data assimilation capabilities. Several model aspects were considered, for example proxy variability, high-latitude convection forcing, and density enhancements observed in satellite data related to low-energy precipitation. Additionally, the CMAT2 model was used to examine a specific factor controlling thermospheric temperature and dynamics that is important for accurate prediction of thermospheric conditions: the influence of the previous history on the current state of the thermosphere.

• Developing schemes for near-real-time assimilation of thermospheric and ionospheric data into an advanced predictive DTM and into the physical Coupled Middle Atmosphere-Thermosphere (CMAT2) model. Two independent and complementary tasks were carried out to fulfil this objective. DA-NRTP focused on the assimilation of thermospheric data into DTM for near-real-time prediction: a method to predict temperature corrections up to 3 days ahead was developed using a neural network, and the resulting model is superior to the simple DTM model based only on observed proxies. These improvements are all planned for testing soon with SWARM data, which can be delivered in near-real time. The second task, DA-GA, focused on an assimilation system designed to be used with first-principles models of the coupled thermosphere and ionosphere. Two models were used here: CMAT2 and the Thermosphere Ionosphere Electrodynamics General Circulation Model (TIEGCM); TIEGCM offers advantages in speed, robustness and scientific return compared to CMAT2. The assimilation process shows good behaviour, persistence between assimilation cycles, and good overall performance.

In summary, the ATMOP project has fulfilled the objectives defined at the initial stage of the activity, contributing to improved orbit prediction performance and to our understanding of the processes governing the atmosphere's response to solar and geomagnetic activity.
Project Context and Objectives:
The “Advanced Thermosphere Modelling for Orbit Prediction” (ATMOP) research project aimed at building a new thermosphere model with the potential to spawn an operational version, so as to enable precise air drag computation, which is mandatory for improved surveillance and precise tracking of objects in Low Earth Orbit (LEO) and for the initiation of appropriate measures to minimise risks to satellites (tracking loss, collisions) and ground assets (re-entry zone).

With thousands of objects orbiting the Earth, the majority of them in LEO, surveillance and tracking of the larger specimens among these objects is an indispensable task for space agencies and satellite operators. Orbit determination methods are used to predict the trajectory of an object hours to days ahead, and the estimated orbits are updated each time an object is tracked. Operationally, accurate orbit prediction is clearly necessary to locate an object in time and space, and this requires an accurate thermosphere model.

Despite the presence in Europe of one of the only three groups with the capability to develop and maintain an operational semi-empirical thermosphere model (CNES/CNRS; the other two are in the US), and of one of the world's leading teams in the field of physical modelling of the atmosphere (UCL), Europe currently has neither a near-real-time thermosphere prediction model nor operational services providing regular thermosphere nowcasts and forecasts.

The ATMOP project was designed to fill this gap through:
• Defining and assessing new proxies to describe the external forcing of the thermosphere;
• Developing an advanced semi-empirical Drag Temperature Model (DTM) that meets the requirements for operational orbit computations;
• Improving physical modelling of the thermosphere to assist the development of the advanced DTM and of a global physical model with data assimilation capabilities, which may ultimately become the successor to semi-empirical models;
• Developing schemes for near-real time assimilation of thermospheric and ionospheric data into an advanced predictive DTM and into the physical Coupled Middle Atmosphere-Thermosphere (CMAT2) model.

1. Density database

The updated semi-empirical DTM constructed in the framework of the ATMOP project was based upon the most comprehensive database available to researchers. It includes high-resolution and high-precision accelerometer-inferred densities from the CHAMP and GRACE missions, which supplied high-quality thermospheric density data over almost one solar cycle, including years of high geomagnetic activity (2003) and exceptionally low solar activity (2008-2010), as well as GOCE density data (2009-2012). It also includes daily mean densities inferred from Stella and Starlette precise orbit determination, and mean densities derived from radar tracking data (two-line elements: TLE), which are available for a very large number of objects. This database enables the analysis of phenomena with periods ranging from a few tens of minutes to years, for altitudes typically ranging from 250 to 800 km.
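To make the link between these density data and drag concrete: accelerometer-inferred densities are obtained by inverting the standard drag law. The sketch below is a minimal illustration of that inversion, assuming a simple cannonball spacecraft model; the numerical values (drag coefficient, area, mass) are illustrative only, not mission specifications.

```python
import numpy as np

def density_from_drag(a_drag, v_rel, cd=2.2, area=1.0, mass=500.0):
    """Infer neutral density from a measured drag acceleration by
    inverting the standard drag law
        a = 0.5 * rho * Cd * (A/m) * v^2   =>   rho = 2*m*a / (Cd*A*v^2)

    a_drag : along-track drag acceleration magnitude [m/s^2]
    v_rel  : speed relative to the co-rotating atmosphere [m/s]
    cd     : drag coefficient (dimensionless; value is illustrative)
    area   : cross-sectional area [m^2]
    mass   : spacecraft mass [kg]
    """
    return 2.0 * mass * a_drag / (cd * area * v_rel**2)

# Toy example with CHAMP-like (illustrative, not official) parameters
rho = density_from_drag(a_drag=2.0e-7, v_rel=7.7e3, cd=2.3, area=0.74, mass=520.0)
print(f"inferred density: {rho:.2e} kg/m^3")   # of order 1e-12 kg/m^3
```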

2. EUV and geomagnetic proxies to describe the external forcing of the thermosphere

The thermosphere can vary rapidly and significantly in response to solar and geomagnetic activity (space weather); accurate orbit prediction therefore requires accurate space-time nowcasts and forecasts of the thermosphere. The first and almost immediate external forcing of the thermosphere results from the direct interaction between EUV radiation and the neutral atmosphere: it predominantly drives the medium- and long-term evolution of the thermosphere (on time scales of days to years). The second forcing process results from the solar wind impact on the magnetosphere, its coupling to the ionosphere, and the complex interaction between the neutral and ionized components of the Earth's atmosphere: it mostly drives short-term changes in response to rapid variations in solar wind conditions and the associated geomagnetic activity. The latter forcing process can be described in terms of energy deposition in the auroral zone and subsequent heat transport to mid and low latitudes. Because of the short time lag between geomagnetic forcing and the thermospheric response, only rapid thermosphere modelling can be used efficiently for satellite orbit computation and debris surveillance.

The search for the best possible EUV and geomagnetic proxies led to the derivation and analysis of new EUV and geomagnetic proxies:
- an online Solar Spectral Irradiance (SSI) model that delivers nowcasts and forecasts up to five days ahead. This is the first model to deliver such high-quality spectra in real time. The model covers the full UV spectrum except for the 80-115 nm band, for which there are currently no good observations; to cover the full EUV spectrum, we are currently filling this gap by training the model with much lower quality TIMED data;
- new geomagnetic indices with better spatial and/or temporal resolution: sub-auroral am 3-h range MLT-sector indices that cover more than 4 solar cycles and thus allow robust statistical analysis; and trial planetary αm15 and MLT-sector αm15-MLT 15-min indices, based upon 15-min averages of the intensity of the magnetic variations used for am index derivation. These trial indices were computed for the years 2000 to 2006. We are currently defining a new calculation scheme for these indices within the framework of the International Service of Geomagnetic Indices.

The EUV and geomagnetic proxy assessment resulted in the recommendation of using:
- as EUV proxy: the radio flux at 30 cm, as measured at the Nobeyama Observatory in Japan. A nowcast and forecast prototype model was developed; a 3-day forecast works, and given the interest in such forecasts, development will definitely be pursued after the ATMOP project has terminated;
- as geomagnetic proxy: the am and αm planetary indices calculated on a 3-hour basis, with a 1.5-hour day-night difference in the time delay between density perturbations and geomagnetic indices at mid and high latitudes, and using a linear relationship between density perturbations and the proxy, allowing different slopes for the nightside and dayside (sketched in code below). An am forecasting model was developed; it takes into account the nowcast value of the geomagnetic index and a 12-hour solar wind history. The best performance was obtained for 1-hour interpolated am indices with a 1-hour lead time.
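The following sketch illustrates the recommended proxy-to-density scheme from the bullet above. The lag values reflect the 1.5-hour day-night difference quoted there; the slope values and absolute lags are invented for illustration and are not the fitted DTM coefficients.

```python
def density_perturbation(am, t_hours, is_dayside):
    """Illustrative linear proxy-to-density scheme with different slopes
    and time delays for the day and night sides. The 1.5 h day-night lag
    difference follows the text; the slopes and absolute lags are
    placeholders, not fitted DTM coefficients.

    am : callable returning the 3-h am index [nT] at a given epoch [h]
    """
    lag = 1.5 if is_dayside else 3.0            # hours (illustrative)
    slope = 0.004 if is_dayside else 0.006      # per nT (illustrative)
    return slope * am(t_hours - lag)            # fractional density change

# Toy am history: quiet background with a storm between 10h and 16h
am_series = lambda t: 20.0 + (80.0 if 10.0 <= t <= 16.0 else 0.0)
print(density_perturbation(am_series, t_hours=12.0, is_dayside=True))   # storm already felt
print(density_perturbation(am_series, t_hours=12.0, is_dayside=False))  # longer night-side lag
```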

3. Development of an advanced semi-empirical Drag Temperature Model (DTM)

The ultimate objective of the ATMOP project was to perform precise thermosphere modelling within a time delay that will eventually enable operational thermosphere nowcasting and forecasting and which we call ‘near-real time’ modelling.

A first upgrade of the DTM, named DTM2012, has been produced. The dataset to which the model is fitted is that used for the previous DTM2009 model, supplemented by the high-resolution and high-precision CHAMP and GRACE densities. In fact, two models were constructed using the same data but with different solar activity proxies, namely F10.7 and SOHO/SEM HeII. Comparison of the fits to the data revealed that the model created with the classically used F10.7 proxy is better; the problem with the SEM data appears only on long time scales (months to years), not for periods up to a solar rotation of about 27 days. The gain in accuracy of this first update of the DTM is therefore due to the large database to which it was fitted.

The second upgrade of the DTM model, named DTM2013, is fitted to the dataset used for the DTM2012 model, supplemented by 2.5 years of GOCE data at 270 km. This model uses the 30 cm radio flux as solar proxy and the 3-hour am index as geomagnetic proxy.

The pre-ATMOP model DTM2009, the new DTM2012 and DTM2013, as well as the COSPAR reference models JB2008 and NRLMSISE-00, have been compared with the same observations. As expected, DTM2013 is significantly more accurate than the other models when compared to the internal data, especially for GOCE. Comparison with US Air Force (courtesy of Bruce Bowman) EDR mean densities, which were assimilated in JB2008, showed that DTM2013 is less precise than JB2008 up to 400 km, but more precise above that altitude. The comparison with EDR densities is slightly better with DTM2012 than with DTM2013. The CHAMP and GRACE results are also better with DTM2013, even though the same data were used; this improvement is thus due to the new F30 proxy.

DTM2013 is the most accurate model overall in the altitude range 250-1000 km, and an overall improvement over DTM2012; JB2008 is better for low altitudes when comparing with the EDR densities, but much worse for the high-resolution GOCE densities. At high altitudes NRLMSISE-00 sometimes is the most accurate.

4. Improvement of thermosphere physical modelling

The UCL numerical 3-D GCM (Global Circulation Model) CMAT2 was used to evaluate techniques and proxies considered in the development of the advanced DTM model. CMAT2 is a first-principles model based on the numerical solution of the Navier-Stokes equations for energy, momentum and composition.

A series of simulations at different geomagnetic and solar activity levels was run. Taken together, they show how thermospheric parameters varied as solar insolation in the EUV changed (via the F10.7 index) and as geomagnetic activity was ramped up (via the Kp index). We showed, for example, that there is only a small global average temperature variation for Kp values up to 5 (i.e. am up to ~100 nT); it takes a substantial storm to produce a significant increase globally. The thermospheric temperature is much more responsive to F10.7 changes.

Different high-latitude convection forcings were implemented in the model. For example, using SUPERDARN data in place of the Kp-dependent Foster empirical convection patterns showed how strongly the morphology of the energy inputs (via Joule heating and ion drag heating) depends on the detailed convection patterns used. Likewise, using IMF Bz- and By-dependent field patterns instead of Foster revealed differences in the thermal response of the atmosphere to positive and negative IMF Bz at times when Kp was roughly the same, so that the same Foster convection pattern would have been used. We also studied how density enhancements seen by the CHAMP satellite can be produced by low-energy precipitation. Our results showed how a non-standard modelling simulation can illustrate local phenomena by making local modifications to the usual energy or momentum inputs.

In parallel, the CMAT2 model was used to examine a specific factor controlling thermospheric temperature and dynamics that is important for accurate prediction of thermospheric conditions: the influence of the previous history on the current state of the thermosphere. Solar activity both heats the atmosphere and causes additional ionization; as a result, the ionosphere has a high conductivity when a geomagnetic storm occurs, and the heating caused by the storm increases with recent solar activity. The increase in heating manifests itself as an increase in the intensity of the response to the geomagnetic storm. By contrast, the recent geomagnetic history of the atmosphere has very little effect on the response to further geomagnetic storms. Considering the energy input to be constant per electron, this can be traced to the tendency of the second storm to control the electron content in the auroral region, which fixes the response of the atmosphere to that storm regardless of the previous electron content. The atmosphere, at least as modelled by CMAT2, thus has a geomagnetic response that is insensitive to recent geomagnetic history.

The actual longitudinal dependence and temporal dynamics of geomagnetic activity are not properly described by a 3-hour planetary index. Modelling of two major storms showed that the alpha indices gave significantly different results in CMAT2 than Kp did, but it also highlighted the fact that the temporal evolution of the atmosphere during and after the storms differed significantly from some of the measurements taken at the time.

This latter result illustrates how using CMAT2 for ATMOP purposes pushed the current CMAT2 version to its limits. Here again, ATMOP activities raised very interesting questions that are worth exploring after the end of the ATMOP programme.

5. Development of schemes for near-real time assimilation of thermospheric and ionospheric data into an advanced predictive DTM

In this work package, two distinct yet complementary assimilation activities have been carried out.

[DA-NRTP] addressed the assimilation of thermospheric data into a semi-empirical model (DTM) for near-real-time prediction. The first model developed and released through ATMOP was DTM2012, which was constructed using the full CHAMP and GRACE high-resolution density data, new mean total density data and historical mass spectrometer data. Also used are the 81-day mean and 1-day delayed F10.7 solar flux and the am geomagnetic index. Although the mean accuracy over time frames of weeks to months has improved significantly over the years, errors on time scales of less than a few days can still be of the order of tens of percent, mainly due to the inadequacy of the solar and geomagnetic proxies used and their low temporal resolution.

To improve the density prediction up to 3 days ahead, we developed a method to predict temperature corrections to DTM using a neural network. DTM2009 is used as the background model here, and the results are compared to DTM2012, which is much more accurate at the lowest altitudes. The neural network uses as inputs the exospheric temperatures and the gradients of the F10.7 flux for the past 4 days, the Ap geomagnetic index and its gradient for the past day, and a bias term. Input variables were chosen via an ARIMAX (autoregressive integrated moving average with exogenous inputs) analysis.
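A minimal sketch of this predictor set-up is given below, assuming an illustrative network size (the report does not specify the architecture); the weights shown are random placeholders where a trained network would be loaded.

```python
import numpy as np

def build_inputs(texo_hist, f107_hist, ap_hist):
    """Assemble the predictor vector described above: exospheric
    temperatures and F10.7 gradients over the past 4 days, the Ap index
    and its gradient over the past day, plus a bias term. Histories are
    daily values ordered oldest -> newest."""
    f107_grad = np.diff(f107_hist[-5:])              # 4 daily F10.7 gradients
    ap_grad = ap_hist[-1] - ap_hist[-2]              # 1-day Ap gradient
    return np.concatenate([texo_hist[-4:], f107_grad,
                           [ap_hist[-1], ap_grad, 1.0]])

def mlp_correction(x, W1, b1, W2, b2):
    """One hidden tanh layer; in practice the weights would be trained
    against observed DTM temperature corrections."""
    return W2 @ np.tanh(W1 @ x + b1) + b2            # correction [K]

# Toy usage: 11 inputs -> 8 hidden units -> 1 output, placeholder weights
rng = np.random.default_rng(0)
x = build_inputs(np.array([1000., 1005., 998., 1010.]),
                 np.array([95., 97., 100., 104., 102.]),
                 np.array([7., 15.]))
dT = mlp_correction(x, rng.normal(size=(8, 11)), np.zeros(8),
                    rng.normal(size=(1, 8)), np.zeros(1))
```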

The artificial neural network is superior to a persistence model and to a version of DTM2009 that uses observed proxies (i.e. “perfect predictions”) for 24, 48 and 72 h forecasts. The accuracy is limited by Ap, for which no accurate forecast is available; the neural network therefore cannot predict a geomagnetic storm, but it reacts quickly after onset. The prototype is constructed using daily observations, whereas the geomagnetic index is available every 3 hours. For operational use, the most important task will be to design a robust data quality control procedure. These improvements are all planned for testing soon with SWARM data, which can be delivered in near-real time.

The [DA-GA] assimilation system built in ATMOP is designed to be used with first-principles models of the coupled thermosphere and ionosphere. Two models were used here: CMAT2 and the Thermosphere Ionosphere Electrodynamics General Circulation Model (TIEGCM). TIEGCM offers advantages in speed, robustness and scientific return compared to CMAT2. We assimilated neutral density observations from CHAMP. Similar observations from GOCE and GRACE were not assimilated, but the [DA-GA] code was set up so that this will be easy to do in the future. An ensemble optimal interpolation assimilation scheme was developed, which provides an optimally weighted average of the model background and the observations. The background errors are key and were calculated using an ensemble of independent model runs.
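The analysis step of such an ensemble optimal interpolation scheme can be sketched generically as follows; this is a textbook formulation under simplifying assumptions (linear observation operator, explicit covariance matrices), not the project's actual code.

```python
import numpy as np

def ens_oi_update(Xb_ens, y, H, R):
    """Ensemble optimal interpolation analysis step.

    Xb_ens : (n_state, n_ens) ensemble of background model states
    y      : (n_obs,) observations (e.g. CHAMP neutral densities)
    H      : (n_obs, n_state) linear observation operator
    R      : (n_obs, n_obs) observation-error covariance

    Returns the analysis mean  x_a = x_b + K (y - H x_b), with the gain
    K = B H^T (H B H^T + R)^-1 and B estimated from the ensemble spread.
    """
    xb = Xb_ens.mean(axis=1)
    A = Xb_ens - xb[:, None]                       # ensemble anomalies
    B = A @ A.T / (Xb_ens.shape[1] - 1)            # background covariance
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # Kalman-type gain
    return xb + K @ (y - H @ xb)

# Toy example: 5 state variables, 20 members, 2 observed components
rng = np.random.default_rng(1)
xa = ens_oi_update(rng.normal(size=(5, 20)), y=np.array([0.5, -0.2]),
                   H=np.eye(2, 5), R=0.1 * np.eye(2))
```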

The assimilation cycle was short enough (1 hour) for the assimilation impact to persist between cycles. TIEGCM, CMAT2 and DTM2012 results were compared with CHAMP at solar minimum (March 2009) and solar maximum (March 2003). (NB: CMAT2 ran without assimilation here due to issues with CMAT2's stability.) DTM2012 performs best at solar minimum, but TIEGCM also performs very well, with densities only slightly higher than DTM2012 overall; CMAT2 performs worst here. At solar maximum, TIEGCM generally outperforms CMAT2 and, to an even greater extent, DTM2012. However, DTM2012 tends to mirror sharp changes in the CHAMP densities better than the physical models do. In addition, using the [DA-GA] assimilation has a small positive impact on the TIEGCM results, improving performance by ~2.5% in both periods.

The above comparisons were repeated with forecast F10.7 flux developed in WP2. At solar minimum, there is very little difference between the actual and forecast daily F10.7 flux. For solar maximum, the forecast F10.7 flux shows larger variability. Encouragingly, there is only a small deterioration in the density results when using the forecast F10.7 flux, meaning this forecast proxy is certainly accurate enough for general use.

Project Results:
Forcing the Thermosphere: physical processes and proxies for semi-empirical modelling
Solar proxies:
A nowcast of the solar EUV spectrum for upper-atmosphere specification, and of proxies to be used in DTM, has been developed; the associated model is operational and has been trained using data from the SDO and SORCE satellites. This model covers the full UV spectrum except for the 80-115 nm band, for which there are currently no good observations. To cover the full EUV spectrum, we are currently filling this gap by training the model with (much lower quality) data from the TIMED mission. The development of the nowcast model has led to a number of spin-offs, described below.

For the objectives of ATMOP, the most important result is the achievement of an online solar spectral irradiance model that delivers nowcasts and forecasts up to five days ahead. This is the first model to deliver such high-quality spectra in real time; the other existing (and commercial) model, Solar2000, is based on solar proxies only.
This work also led to two unexpected and relevant spin-offs:
1) The same irradiance model has been extended towards the specification of the irradiance out of the ecliptic. This is important for space exploration, but it also provides new and deep insight into the solar-stellar connection. Indeed, by integrating the solar spectral irradiance over the full Sun, one obtains its luminosity, which is one of the key stellar parameters. For the first time, we have been able to estimate the solar luminosity, and we find that it is not constant. As a consequence, the flux deficit caused by sunspots is unlikely to be compensated by bright structures on the Sun, which in turn has deep consequences for solar variability. The same model shows that past changes in the terrestrial orbital inclination cannot explain the 100 kyr periodicity that is observed in climate records.
2) A second spin-off is the building and delivery of a database of solar proxies that are relevant for the specification of the terrestrial atmosphere. A unique feature of this database is that all data gaps are filled by a new technique published in Dudok de Wit (2011).

We have shown that the spectral contribution of “channel 3” (one of the two PROBA2/LYRA extreme-UV channels) has changed since its pre-flight calibration, and that the contribution at EUV wavelengths (above about 17 nm) has been nearly negligible in this channel since about mid-2011, despite attempts to recover the EUV contribution through various inter-comparisons between the different LYRA channels (Kretzschmar et al. 2012). At the same time, we have studied the unique observations of flare emission in the H I Lyman-alpha irradiance line provided by “channel 1” of LYRA. Owing to degradation, this channel is usable only during the first two months of the mission, but some flares during this period showed flux enhancements at H I Lyman-alpha of the order of 1%.

We showed that the flare energy released in Lyman-alpha is of the order of a few percent of the total radiative energy. Furthermore, we found no evidence that flares that produce an enhancement of the H I Lyman-alpha line induce changes in the terrestrial ionosphere, as would be indicated by signatures in ULF waves. Finally, we have investigated the energy released at EUV wavelengths for flares of various soft X-ray amplitudes, which shows that a significant amount of flare energy goes into the short-wavelength range.

After the first two years of the ATMOP project were spent building and delivering an online nowcast and forecast of the solar spectral irradiance (SSI), which is now fully operational, the focus shifted in the final stage of the activity towards solar proxies, with a major step forward in the description of the solar forcing. Progress has also been made in the physical modelling of the EUV flux, and the contribution of flares to the daily EUV irradiance has been evaluated.

The reason for going back to proxies lies in the drawbacks of the SSI: this quantity is definitely the most appropriate input for physical modelling of the upper atmosphere's response to solar variability, but it still suffers from a number of drawbacks that impede its use in operational models. First, our reconstructed SSI is available only since June 2010, when magnetograms from the SDO spacecraft, which are the prime input of our model, became available; this severely limits the period over which the model outputs can be tested with the DTM2000 model. Secondly, the observational data from SORCE upon which this empirical SSI model is based are still affected by calibration issues, so their long-term trends remain poorly known. Satellite drag models, on the contrary, require long and well-calibrated records. We therefore stopped developing our SSI model, which remains fully operational and will gradually become more relevant as time goes on.

As an alternative to direct SSI data, we moved back to solar proxies, among which centimetric radio observations stand out for their long records (starting in the 1950s), their exceptionally stable calibration in time, and their relatively good performance for upper atmospheric modelling (see Dudok de Wit and Bruinsma, 2011). So far, all attention has focused on the popular 10.7 cm solar radio flux. However, daily observations also exist at other wavelengths, but they are rarely, if ever, used. We collected them from various observatories and developed an automated procedure that merges them into a single homogeneous database, fills gaps self-consistently by expectation-maximization (Dudok de Wit, 2011) and puts them all on the same time grid. Five wavelengths were found to be of particular interest: 3.0, 8.1, 10.7, 15.0 and 30.0 cm. All are observed on a daily basis, are available within 24 hours, and have less than 1% observation outages. All records started in 1957 or before.
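The merging step can be sketched as below. The gap-filling in ATMOP used an expectation-maximization method (Dudok de Wit, 2011); the linear interpolation here is only a stand-in so the sketch runs end-to-end, and the labels and values are made up.

```python
import numpy as np
import pandas as pd

def merge_radio_fluxes(series_by_label):
    """Merge daily radio-flux records onto a single daily time grid.
    series_by_label maps a label (e.g. 'F30') to a pandas Series indexed
    by date. ATMOP filled gaps self-consistently with an
    expectation-maximization method; the linear interpolation below is
    only a placeholder so the sketch runs end-to-end."""
    df = pd.DataFrame(series_by_label)
    grid = pd.date_range(df.index.min(), df.index.max(), freq="D")
    return df.reindex(grid).interpolate(limit_direction="both")

# Toy usage with two short, gappy records (values are made up)
idx = pd.to_datetime(["2005-01-01", "2005-01-02", "2005-01-04"])
merged = merge_radio_fluxes({
    "F10.7": pd.Series([90.0, 92.0, 95.0], index=idx),
    "F30":   pd.Series([55.0, np.nan, 58.0], index=idx),
})
print(merged)   # common daily grid, gaps filled
```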

This unique database has several assets. First, since different wavelengths probe different emission processes in the solar atmosphere, their intercomparison sheds light on the relative contributions of these processes. A longstanding problem here is the relative contribution of gyro-emission versus Bremsstrahlung in the solar rotational modulation: the two originate from different solar features and there is no consensus on which is prevalent. Thanks to a blind source separation approach, we were able to identify and extract these two contributions, and thereby reveal the relatively stronger contribution of gyro-emission.

The result most relevant to ATMOP, however, is the identification of the radio flux at 30 cm as the best solar proxy for describing the thermospheric density. We tested solar radio emissions at various wavelengths, as well as their Bremsstrahlung and gyro-emission components, by using them as inputs to the DTM2000 model, and found that the radio flux at 30 cm, as measured at the Nobeyama Observatory in Japan, provides superior performance over all other radio fluxes. It also outperforms the proxies we had recommended earlier in ATMOP, namely the MgII index and the HeII line intensity from SEM onboard SOHO. The RMS of the observed-to-calculated density ratios, which is a standard metric for testing the model fitting capacity, drops on average by 7% when using the 30 cm flux instead of the 10.7 cm flux.
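The quoted metric can be computed as follows; exact conventions vary (some authors quote the RMS of the ratio about its mean), so this sketch, which measures the spread of the ratio about 1, is one reasonable reading rather than the definitive ATMOP definition.

```python
import numpy as np

def oc_ratio_rms(rho_obs, rho_model):
    """RMS spread of the observed-to-calculated density ratio about 1.
    A perfect, bias-free model gives ratios clustered at 1 with zero
    spread."""
    ratio = np.asarray(rho_obs) / np.asarray(rho_model)
    return np.sqrt(np.mean((ratio - 1.0) ** 2))

# Example: observations scattering ~10% around a model value
rng = np.random.default_rng(2)
rho_obs = 1e-12 * (1.0 + 0.1 * rng.standard_normal(1000))
print(oc_ratio_rms(rho_obs, np.full(1000, 1e-12)))   # ~0.10
```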

This major step forward shows that, as of today, synoptic solar radio observations are superior to all other solar proxies for modelling the upper atmosphere's response. This situation is likely to last for at least a decade, until better SSI observations become available.

The forecast of the EUV spectrum has been operational for more than a year now and has already been described in deliverable D2.3. Our intent was to add to it a prototype forecast of the solar radio fluxes, which we now know to be more relevant for ATMOP. Although a 3-day forecast now works, it has to be run manually because the automated download of the radio data is still under development. As an alternative, D2.7 will propose a linear-regression forecast. In any case, given the interest in such forecasts, the development will definitely be pursued after the ATMOP project has terminated.

EUV forcing
Because the EUV emission occurs high in the highly structured solar atmosphere, the actual configuration of the solar corona is an important factor for precise modelling of the EUV flux. Additionally, the off-limb configuration of the corona should help to improve forecasts about 7 days ahead (a quarter of a solar rotation). These ideas led us to attempt to model the solar EUV flux physically in the following way (a schematic sketch of steps 3-4 follows the list):
1. Compute the configuration of the corona by extrapolation of the photospheric magnetic field.
2. Fill the coronal loops and open field lines with plasma at density n and temperature T.
3. Compute the EUV emission for each model cell (latitude x longitude x altitude) and integrate it along the line of sight.
4. Integrate over the solar disk to compute solar irradiance.
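As a rough illustration of steps 3-4, the sketch below sums precomputed cell emissivities into an irradiance. It is purely schematic: real codes integrate along lines of sight and handle occultation by the solar disk, and all arrays here are assumed inputs.

```python
import numpy as np

def euv_irradiance(emissivity, cell_volume, distance_au=1.0):
    """Schematic of steps 3-4: sum the EUV emission of every model cell
    (latitude x longitude x altitude) and convert the total power to an
    irradiance at the observer's distance. This optically-thin sum only
    shows the bookkeeping, not the line-of-sight integration of real codes.

    emissivity  : 3-D array of volume emission rates [W/m^3]
    cell_volume : 3-D array of matching cell volumes [m^3]
    """
    au = 1.495978707e11                              # astronomical unit [m]
    total_power = np.sum(emissivity * cell_volume)   # [W]
    return total_power / (4.0 * np.pi * (distance_au * au) ** 2)  # [W/m^2]
```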

Step 2 is particularly important. For this step, we use scaling relationships from the literature that require the determination of 8 free parameters. To determine these parameters, we use an iterative procedure to match observations. The first observational constraint we use is the EUV images observed by the EIT telescope onboard the SOHO spacecraft at a particular wavelength. However, we have found that the parameters that best reproduce the solar image at one wavelength, and provide a good agreement with the irradiance at that wavelength, are not able to match the irradiance level observed at a second wavelength, and vice versa.

We then attempted to match not just one wavelength but the full spectrum, or more precisely the differential emission measure, an estimate of the amount of emitting material at a given temperature along the line of sight. This showed that only very high temperature emission can be reproduced by the model; the solar flux at transition-region or chromospheric temperatures, which is still in the EUV spectral range, could not be reproduced. This has been interpreted as a limitation of the validity of the magnetic field extrapolation we used at low altitudes in the solar atmosphere. This research direction has thus been abandoned in the ATMOP context.

Solar flares and EUV irradiance.

Solar flares are well known to produce strong increases in the solar flux at short wavelengths. They are traditionally observed in the 0.1-0.8 nm spectral range by the GOES spacecraft; however, very few EUV observations were available before the launch of the SDO spacecraft carrying the EVE instrument, which monitors the EUV solar irradiance at a cadence of 10 s.
We have performed a statistical analysis of the flares observed by SDO/EVE and have found the following results:
- Most of the energy between 0.1 nm and 45 nm actually goes into the 0.1-7 nm spectral range.
- The solar EUV flare flux at different wavelengths correlates well with the flux in the GOES 0.1-0.8 nm spectral range.
Based on these results, we have estimated the contribution of solar flares to the actual daily EUV irradiance. Current EUV flux models do not take flares into account, and no justification had been provided for this until now. Our preliminary estimates show that, for the ascending phase of solar cycle 24, solar flares contribute most of the time less than 1% of the daily EUV irradiance in the He II line at 30.4 nm, and between about 2% and 4% in the 0.1-7 nm interval, where their contribution is expected to be strongest. Accordingly, the error made in the daily EUV flux by neglecting flares is negligible compared with the typical uncertainty of the measurements.


Geomagnetic proxies

One of the early deliverables of the activity was a database of IAGA geomagnetic indices covering the whole period of the CHAMP and GRACE density database (2000-2010), as well as a database of the 3-hourly longitude-sector indices (Menvielle and Paris, 2001).

While studying the statistical properties of the longitude-sector indices over the period 1959-2010, it rapidly became clear that the important parameter was not the geographic longitude of the sector but its magnetic local time (MLT), since the coupling between the solar wind and the magnetosphere/ionosphere system occurs in a frame fixed with respect to the Sun. We have therefore constructed new indices defined on 4 MLT sectors centred on 06:00 (dawn), 12:00 (noon), 18:00 (dusk) and 24:00 (night). This first study of the statistical properties of the MLT sector indices was presented at the EGU meeting in Vienna (El-Lemdani Mazouz et al. 2012a).

Using the am network of observatories, we also computed planetary and MLT-sector indices with a 15-minute time resolution, on the basis of the same magnetic variations as those used for am derivation, but using the average rms of these magnetic variations over 15 minutes (Menvielle et al. 2003). These indices, named ‘αm15’ and ‘αm15-MLT’ in the following, have been calculated from 2001 to 2006, i.e. they cover the periods of CHAMP and GRACE data with significant magnetic activity, allowing these indices to be tested. After 2006 comes the period of minimum solar activity, with very limited geomagnetic activity. The 15-min time resolution planetary and MLT-sector indices were presented at the European Space Weather Week in Brussels (El-Lemdani Mazouz et al., 2012b). They have been provided to D. Boscher at ONERA for testing against radiation belt data.

Based on the αm15 indices, 1-hour and 3-hour time resolution indices can be computed for the time period immediately preceding any observation, thus allowing the use of these indices for quasi-real-time modelling of the thermosphere. This is not the case for the 3-hour K-derived am and am-MLT indices.

Assessment of geomagnetic indices as proxies for DTM

This assessment was based upon correlation calculations between indices and CHAMP and GRACE observations. Density data pre-processed during the first reporting period (transformed into AACGM geomagnetic coordinates and projected to a constant altitude) were further averaged in 10° geomagnetic latitude bands or low-pass filtered to remove density variations associated with gravity waves. Density perturbations are then defined as the deviation of the observed density from a “density baseline”. This baseline was computed using the pre-ATMOP DTM2009 model by setting the geomagnetic activity level to zero (i.e. magnetic quietness).

Two kinds of analysis were performed: correlations over a long time period (one year), separated into the 4 MLT sectors, and correlations for all geomagnetic storms that occurred during the years 2002-2006. Spearman and Kendall rank correlation coefficients were calculated between the normalized density time series and the geomagnetic indices at different lag times.
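This lagged rank-correlation analysis can be sketched as follows, assuming both series are on a regular common time grid; the toy data simply embed a known two-sample delay to show where the correlation peaks.

```python
import numpy as np
from scipy.stats import spearmanr, kendalltau

def lagged_rank_correlations(density, index, lags):
    """Spearman and Kendall rank correlations between a normalized
    density series and a geomagnetic index for a set of lags, in
    samples (positive lag = index leads density)."""
    out = {}
    for lag in lags:
        d = density[lag:] if lag > 0 else density
        g = index[:-lag] if lag > 0 else index
        out[lag] = (spearmanr(d, g)[0], kendalltau(d, g)[0])
    return out

# Toy series where density responds to the index with a 2-sample delay
rng = np.random.default_rng(3)
idx = rng.random(500)
dens = np.roll(idx, 2) + 0.1 * rng.standard_normal(500)
print(lagged_rank_correlations(dens, idx, lags=[0, 1, 2, 3]))   # peak at lag 2
```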

Dst, am, am-MLT, as well as αm and αm-MLT indices with time resolutions of 15 min, 1 hour and 3 hours have been used in our studies. The best correlations were obtained using the indices calculated on a 3-hour basis. In addition, the 1-year time series analysis shows that while MLT-sector indices lead to high correlations for midnight-sector observations, this is no longer the case for the noon sector. Both the am and αm180 indices seem the most appropriate to describe density.

Neither a better time resolution nor a better spatial resolution improved the density description, either during moderate geomagnetic activity periods or during geomagnetic storms.

Detailed analyses addressed the following:

The K-derived aσ MLT-sector indices:

The new sub-auroral sector indices are based upon the K local geomagnetic activity indices from the stations of the planetary am network. Their derivation scheme draws directly from that of the am indices. Four Magnetic Local Time (MLT) sectors are considered, leading to four different K-derived MLT-sector indices: aσDawn (03-09 MLT), aσNoon (09-15 MLT), aσDusk (15-21 MLT) and aσMidnight (21-03 MLT). They cover more than 4 solar cycles and thus allow robust statistical analysis.

Maxima in the mean UT/DoY behaviour of each index are observed at the equinoxes, and minima around June and December are also observable (although minor) in the dawn and noon MLT sectors.
The variations of the angle between the Earth-Sun line and the Earth's dipole axis, responsible for the equinoctial effect [e.g. Cliver et al., 2000], reproduce the observed UT/DoY pattern very well and confirm that the equinoctial effect is the main cause of the magnetic variability observed in each MLT sector. A variable magnetopause shape and cusp geometry is its most probable cause [Crooker and Siscoe, 1986]. The seasonal behaviour is very similar in the analysed data, and the amplitude level increases from noon through dawn and dusk to midnight.

Case studies of several geomagnetic storms have also been made. In particular, the use of MLT-sector indices has made it possible to dissociate the effect linked to dayside coupling (magnetopause reconnection and pressure pulses) from the one associated with substorm and partial ring-current intensifications in the night and dusk sectors. These analyses clearly show that the four aσ indices have specific behaviours, and that it is possible to gain insight into both the statistical properties of the physical processes responsible for the observed geomagnetic activity and their contribution to the dynamics of a given storm.

Yearly files of the aσ indices (1959-2011) may be downloaded at the following Web address: http://cdg.u-strasbg.fr/PortailEOST/Mag/ObsMag-data.html.


The 15-min resolution αm indices:

The initially delivered indices were provided for the years 2001 to 2006 inclusive, and their calculation scheme is similar to that of the am indices: a standardization in latitude to take into account the fact that the network stations are not located at the same magnetic latitude, and an intermediate estimate of geomagnetic activity in longitude sectors (5 for the northern hemisphere and 4 for the southern hemisphere). Further analysis indicates that the whole procedure may be improved, mainly because the Intermagnet network of magnetic observatories now gathers many more geomagnetic stations than those currently used for the intermediate estimates in longitude sectors in the am computation process [Mayaud, 1980; Menvielle and Berthelier, 1991]. Thus, no preliminary αm indices are provided for the years 2006 to 2012. The main conclusion in this regard concerns the use of geomagnetic indices, for which a better time resolution did not improve the density description. A new calculation scheme for the αm indices is being implemented after the ATMOP activity, using a much larger observatory network: 32 stations, 11 of them in the southern hemisphere. The new procedure for the standardization in latitude will take into account the secular variation of the internal geomagnetic field, and the same interpolation in longitude will be used for the calculation of hemispheric and MLT-sector indices. This will form part of the new International Service of Geomagnetic Indices, which is moving from LATMOS to EOST in Strasbourg under the supervision of Aude Chambodut. The 15-min resolution αm indices will be produced in a planetary version, a hemispheric version and an MLT-sector version, starting from the year 2000. Nowcasts of the αm indices as well as the am indices will be implemented in the frame of this rejuvenation.

Forecast of geomagnetic indices

A tool for forecasting the am indices was developed, based on an Artificial Neural Network with a feed-forward scheme and solar wind parameters as inputs. Such an approach has already been used successfully for geomagnetic index forecasting [see, e.g., Wing et al., 2005; Boberg et al., 2000].

We considered two geomagnetic index time series: the original am index series with a 3-hour time resolution, and a 1-hour time resolution series deduced from the am series through linear interpolation, hereafter named am3h and am1h respectively. For the solar wind parameter values, we used the 5-min dataset of the OMNIWeb database, based on measurements made onboard the ACE, IMP8 or Wind satellites and projected to the Earth's bow shock nose.

We used the 1995-2010 period for training and validation of the neural network, and kept a two-year period (2011-2012) for testing. For a given dataset, am1h for example, we tested two different prediction lead times: 1-hour and 3-hour forecasts. We also tested solar wind input parameter histories of up to 18 hours, and the possible use of the nowcast value of the geomagnetic index. The optimum scheme (number of nodes in the hidden layer and length of the solar wind parameter time history) for a given time resolution and prediction lead time is constructed by minimising the root mean square error between the measured and forecast indices.

To compare the different forecast models and prediction lead times, we calculated classical contingency tables for different thresholds of geomagnetic activity, and deduced from them the Probability of Detection (PoD), the False Alarm Rate (FAR) and the True Skill Score (TSS).
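These scores follow directly from the 2x2 contingency table of threshold exceedances, as in the sketch below; note that “false alarm rate” conventions vary, so the ratio form used here is an assumption rather than the project's exact definition.

```python
import numpy as np

def skill_scores(forecast, observed, threshold):
    """PoD, FAR and TSS from the 2x2 contingency table of threshold
    exceedances. FAR is computed here as the false-alarm ratio (false
    alarms over all alarms); TSS = PoD - probability of false detection
    (Hanssen-Kuipers form)."""
    f = np.asarray(forecast) >= threshold
    o = np.asarray(observed) >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    correct_neg = np.sum(~f & ~o)
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    tss = pod - false_alarms / (false_alarms + correct_neg)
    return pod, far, tss

# Toy example with synthetic am-like values and an imperfect forecast
rng = np.random.default_rng(4)
obs = rng.gamma(2.0, 15.0, size=1000)
fct = obs + 10.0 * rng.standard_normal(1000)
print(skill_scores(fct, obs, threshold=60.0))
```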

The best performances in this process were obtained for am1h index forecasting with a 1-hour lead time, taking into account the nowcast value of the geomagnetic index and the 12 preceding values of the solar wind density (n) and velocity (v), and the IMF magnitude (|B|) and southward component (Bz). The worst performances were obtained for am3h forecasting with a 3-hour lead time (F3), taking only the IMF magnetic field as input parameter. The good results obtained with the second model are encouraging in case of failure of the plasma instruments, which occurs more frequently than for the magnetic ones. This forecasting tool showed promising results, but it is not currently designed for real-time forecasting. Such real-time forecasting could be achieved by using data from the MAG and SWEPAM instruments on the ACE spacecraft, available within a few minutes' delay, and the nowcast value of the am index, available within a delay of about 30 minutes. The forecasting tool can also be adapted to forecasting the alpha indices.

Lag time between geomagnetic activity and density perturbations:

Lag times depend on magnetic latitude and local time, and vary considerably from one storm to another, as pointed out in several previous studies (Guo et al., 2010; Liu et al., 2010). Our study confirms that the lag times are shorter during the day than during the night at mid latitudes, regardless of the geomagnetic index used and the altitude of the observations. In addition, the 1-year time series analysis shows that these lag times increase from year to year, i.e. with decreasing solar activity.

The storm study also shows significantly smaller lag times for the strongest storms, in agreement with Lathuillere and Menvielle (2010).


REFERENCES:

Boberg, F., P. Wintoft, and H. Lundstedt (2000), Real-time Kp prediction from solar wind data using neural networks, Phys. Chem. Earth, 25, 275.
Cliver, E. W., Y. Kamide, and A. G. Ling (2000), Mountains versus valleys: Semiannual variation of geomagnetic activity, J. Geophys. Res., 105 (A2), 2413-2424, doi:10.1029/1999JA900439.
Crooker, N. U., and G. L. Siscoe (1986), The effect of the solar wind on the terrestrial environment, in Physics of the Sun, edited by P. A. Sturrock, p. 193, D. Reidel, Norwell, Mass.
Dudok de Wit, T. (2011), A method for filling gaps in solar irradiance and solar proxy data, Astron. Astrophys., 533, A29.
Dudok de Wit, T., and S. Bruinsma (2011), Determination of the most pertinent EUV proxy for use in thermosphere modelling, Geophys. Res. Lett., doi:10.1029/2011GL049028.
Mayaud, P. N. (1980), Derivation, Meaning, and Use of Geomagnetic Indices, American Geophysical Union, Washington, D.C.
Menvielle, M., and A. Berthelier (1991), The K-derived planetary indices: description and availability, Rev. Geophys. Space Phys., 29, 415-432.
Wing, S., J. R. Johnson, J. Jen, C.-I. Meng, D. G. Sibeck, K. Bechtold, J. Freeman, K. Costello, M. Balikhin, and K. Takahashi (2005), Kp forecast models, J. Geophys. Res., 110 (A4), doi:10.1029/2004JA010500.


Modelling of Thermospheric Drag Processes

After a set of initial model runs, several reports were released analysing the way atmospheric parameters varied as solar insolation in the EUV changed (via the F10.7 index) and as geomagnetic activity was ramped up (via the geomagnetic proxy Kp or the Tiros energy level). This formed a backbone for subsequent work examining different ways of implementing the solar or geomagnetic forcing in the model.
Work focused on using SUPERDARN data in place of the Foster empirical convection patterns; this showed how strongly the morphology of the energy inputs (via Joule heating and ion drag heating) depends on the detailed convection patterns used. This would affect the geographical distribution of satellite drag effects, although no conclusions were drawn on whether there would be a significant effect on the total or average drag over an orbit.

Another relevant piece of work is a similar study using the Weimer field patterns instead of Foster. These have the advantage of using the IMF Bz and By measurements, plus the solar wind velocity, as driving parameters; besides being more flexible than Foster, they allow many more combinations of factors to be considered.

An analysis of the “modelling of cusp density enhancement” was also carried out, examining how density enhancements seen by the CHAMP satellite can be produced by low-energy precipitation. This demonstrates how a non-standard modelling simulation can illustrate local phenomena by making local modifications to the usual energy or momentum inputs. In this case, a heating source was inserted in the cusp at F-region altitudes to show that the density enhancements seen by CHAMP would require the heating source to be well above the usually assumed electrojet altitudes.

Further work addressed how the balance between eddy diffusion and molecular diffusion in the Mesosphere-Lower-Thermosphere (MLT) region affects the thermospheric temperature structure. Eddy diffusion has a strong effect on thermospheric temperatures through an increase in the O:N2 ratio; however, the eddy mass diffusivity which controls this is well constrained by the height of the turbopause. The model also shows a cooling with increased carbon dioxide, due to enhanced non-LTE cooling at the mesopause. Thermospheric temperatures are largely unaffected by the choice between climatological minor species and full chemistry. This analysis is reported in a dedicated deliverable.


Thermal gradients in the Lower thermosphere

Work on the factors controlling the temperature gradient through the MLT (Mesosphere Lower Thermosphere) region was addressed. The DTM has accuracy problems in the MLT due to the small number of measurements available there (satellites do not last long in this region, as their orbits decay fast). The work looked in particular at the influence of the eddy diffusion coefficient, the proportion of carbon dioxide, and the effect of minor chemistry on the MLT temperature gradient. The diffusion of heat is controlled by two diffusion coefficients: molecular diffusion and eddy diffusion. Molecular diffusion is considered to be fairly well determined by theory, as it is based on collisions between molecules of well-known size and properties. The eddy diffusion coefficient, however, is a proxy for the effect of different scales of turbulent and stochastic processes in the gas. It is scale-dependent in that any turbulent motion of a scale size greater than the model grid should be explicitly modellable, but some way must be found to parameterise sub-grid-scale processes, and that is what the eddy diffusion coefficient does.
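The role of the two coefficients can be illustrated with a one-dimensional diffusion step, as below. This is a deliberately stripped-down sketch (uniform grid, explicit time stepping, no stratification or heating terms), not the CMAT2 scheme.

```python
import numpy as np

def diffuse_temperature(T, z, k_mol, k_eddy, dt):
    """One explicit step of 1-D vertical heat diffusion with the combined
    molecular + eddy diffusivity:  dT/dt = d/dz[(K_mol + K_eddy) dT/dz].
    Uniform grid, fixed boundaries, no stratification or heating terms.

    T, z, k_mol, k_eddy : 1-D arrays on the same vertical grid
    """
    K = k_mol + k_eddy                      # total diffusivity [m^2/s]
    dz = z[1] - z[0]                        # uniform spacing [m]
    flux = K[:-1] * np.diff(T) / dz         # heat flux between levels
    dTdt = np.zeros_like(T)
    dTdt[1:-1] = np.diff(flux) / dz         # flux divergence, interior only
    return T + dt * dTdt
```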


The preconditioning and thermospheric history analysis looked at a specific factor controlling thermospheric temperature and dynamics that is important for accurate prediction of thermospheric conditions: the influence of the previous history on the current state of the thermosphere. The model was set up to run a series of simulations of a geomagnetic substorm, in various combinations of precursor and main run. The control run started from a steady state; other runs started after low, medium or high solar activity, or from a previous substorm. Solar activity both heats the atmosphere and causes additional ionization. Since it generally varies only slowly, high solar activity over the preceding period implies continued high solar activity over the period of interest. As a result, the ionosphere will have been charged by the extra short-wave radiation and will have a high conductivity when the geomagnetic storm occurs. The heating caused by the storm then increases with recent solar activity, and the increase in heating manifests itself as an increase in the intensity of the response to the geomagnetic storm. Specifically, the temperature throughout the middle and upper thermosphere increases, and the two-cell neutral wind convection pattern accelerates.
By contrast, as modelled by CMAT2, the recent geomagnetic history of the atmosphere has very little effect on the response to further geomagnetic storms. Considering the energy input to be constant per electron, this can be traced to the tendency of the second storm to control the electron content in the auroral region, which fixes the response of the atmosphere to that storm regardless of the previous electron content. The atmosphere, at least as modelled by CMAT2, has a geomagnetic response that is insensitive to recent geomagnetic history.

The idea of using DTM to generate a start-up dataset for CMAT2 was tried but proved more complex than originally envisaged. Because DTM is useful mainly above 200 km, the MLT region was not well constrained, and trying to make a mix of MSIS and DTM as a start-up gave discontinuities in the thermal and density profiles, which caused instabilities in the model. When this was made to work by smoothing the overlap between the two empirical models, CMAT2 reverted within a few iterations to the state it would have reached had the start-up been from MSIS alone: the CMAT2 model has a default state it iterates to under given input conditions, no matter which start-up dataset it starts from (within acceptable bounds). The only way the DTM input could significantly alter the final simulation would be to assimilate it unchanged into CMAT2.
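The “smoothing of the overlap” mentioned above amounts to tapering between the two profiles; a possible sketch is shown below, where the overlap bounds and the cosine taper are assumptions for illustration rather than the values actually used.

```python
import numpy as np

def blend_profiles(z, rho_msis, rho_dtm, z_low=160e3, z_high=250e3):
    """Blend an MSIS density profile (valid low down) into a DTM profile
    (useful mainly above ~200 km) with a cosine taper in log-density,
    which avoids the discontinuities that destabilized CMAT2. The
    overlap bounds are illustrative assumptions."""
    w = np.clip((z - z_low) / (z_high - z_low), 0.0, 1.0)
    w = 0.5 * (1.0 - np.cos(np.pi * w))     # 0 below overlap, 1 above it
    return np.exp((1.0 - w) * np.log(rho_msis) + w * np.log(rho_dtm))
```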


Runs to evaluate different input proxies.

Former work by Lathuillere et al., which suggested that geomagnetic substorms would produce different levels of response depending on the longitude at which they occurred, was investigated. This earlier study suggested that the proxy for geomagnetic activity should have a longitude dependence, and that at the very least the usual proxy for geomagnetic activity, Kp, has too coarse a time resolution (Kp is produced every 3 hours). The study therefore took the alpha-15 proxy (the “15” being the time resolution, 15 minutes) and simulated two specific substorms, comparing the results with those obtained using the standard Kp parameter. The results did indeed show striking differences from the coarse runs, but were not entirely in accordance with the data, suggesting more work is needed; the difference is very likely due to the way the auroral convection pattern is generated for numerical models. This has led to suggestions for further work outside the scope of the current project.

Comparison with Data
Comparison with data from CHAMP (300 km) and GRACE (400 km) at high resolution (every 80/160 km along the orbit, respectively), plus several other satellites, was envisaged to enable the absolute densities and temperatures in the model to be tuned to the right average values. This would allow a comparison between the variability in the numerical model and in the data, and an investigation of whether the now- and fore-casting capabilities of the semi-empirical DTM could be improved by using numerical routines for some pre-manipulation of parameters or algorithmic dependencies before DTM does its main calculations. Owing to difficulties in accessing the data and the need to address other tasks, this comparison has not been carried out.

Effort has been devoted to modelling the 2009 low solar minimum. The 2008-10 solar minimum has been a challenging period for modellers, and indeed for atmospheric physics generally. This solar minimum was “quieter” in terms of solar output (e.g. sunspot number) and longer than any for at least 100 years, certainly than any in the Space Age and the era of most modern types of data-taking. Global circulation models have uniformly failed to simulate the state of the upper atmosphere during this period, as it was cooler and less active than any of the major models predicted. This actually makes the period more important, as it becomes a worst-case scenario that we must understand in order to be sure our atmospheric physics is complete and fully comprehensive.

Several runs of the CMAT2 model were carried out and compared with other models and data. Emmert (2010) found that the standard MSIS model mass density needed correcting during this period; MSIS is one of the “starting points” for CMAT2 runs done from “spin-up”. When compared with CHAMP and GRACE density data, and CHAMP winds, CMAT2 was found to have the same deficiencies as the other major models. This was also true when comparing with GOCE densities from ESA (November 2009 onwards). However, better agreement with the data was reached when the standard F10.7 proxy was replaced by the S10.7 variant, which was much better at representing the low EUV fluxes seen in this solar minimum.

There were, however, problems with CMAT2 during the lowest part of the minimum. The thermospheric (neutral atmosphere) part of the code coped well with the unusual conditions, but the GIP (Global Ionosphere Plasmasphere) module, which calculates the self-consistent ionospheric physics, crashed at these low ion and neutral densities. The reason for this is not known at present, and work continues to look for it.

An additional study, on incorporating near-real-time plasma convection, was based on the insertion of SUPERDARN convection data into the CMAT2 model. This work shows that the effect of geomagnetic activity is more complex than can be described by one or two simple proxies. In DTM, as indeed in most numerical GCMs, the effect of geomagnetic activity is included via a simple proxy like Kp. Geomagnetic activity can be broken into more detailed components; the alpha activity index used here was generated in work showing that there is a longitudinal dependence of activity not captured by Kp or its like. The work shows that for similar Kp values, the actual convection patterns can be quite different, and this affects the geographic distribution of the atmospheric response. Although we show this in the numerical model, it is of course relevant to the semi-empirical DTM, whose response to geomagnetic activity is controlled by simple proxies only.

Evaluation of data assimilation by testing in numerical models
Incorporating data assimilation techniques into both the ionospheric and thermospheric elements of the models, both DTM and CMAT2, was part of the study, distributed among two main work packages.

One line of work concerned the incorporation of data assimilation techniques into the thermospheric component of the numerical model (WP3), and in particular whether drag estimates from satellite/debris tracking systems or other data sources could be used. One aspect highlighted by studies previous to ATMOP was the need to evaluate how important a component the ionospheric input is to the thermospheric densities and behaviour, and vice versa. This would be done with tests to evaluate the quantitative effect that variations in the ionosphere have on the thermospheric density and velocity structure. Other suggestions included determining how long the model takes to return to "equilibrium" after a range of parameters is "forced" away from the free-running self-consistent state, and testing how easily the model can be "nudged" towards real-world measurements.

A series of integrating and trialling tests were carried out in close association with the Met Office (who also have their own version of the model). While the main goal was to improve the empirical model, a secondary goal was improvement in the physical model itself and evaluation of assimilation of data into this and its potential for the future.

Progress made in the use of data assimilation in the CMAT2 model has been described in a report on testing the incorporation of data assimilation into the CMAT2 model. First, using ERA re-analysis data sets, the effect of assimilating data up to the 80 km height level on the thermosphere above it was demonstrated. The thermospheric temperature and wind structure was shown to be changed, though the largest effects were close to the boundary with the assimilated data set.

A second report, on the effects in the thermosphere of introducing assimilated data into the middle atmosphere of the CMAT2 model, looked at the result of incorporating the US NOGAPS-ALPHA simulations into the lower atmosphere by assimilation. NOGAPS-ALPHA was a US NRL (Naval Research Laboratory) project to show the effect of data assimilation into the lower atmosphere on the thermosphere of a GCM. In particular, some of their work seemed to show that a Sudden Stratospheric Warming (SSW) had a thermospheric signature indicating significant coupling between the stratosphere and the thermosphere. The simulations described in report D3.53 take the lower atmosphere (up to 80 km) from the NOGAPS-ALPHA runs (they look similar to the ERA data used previously, but the aim was to ensure the same conditions as the US SSW simulations). CMAT2 also demonstrates significant effects on the MLT region of the atmosphere, although most of this can be reproduced just by changing the gravity wave scheme in the model.

Semi-empirical modelling of the thermosphere

The main output of this activity is a revision and upgrade of the latest version of the DTM model, DTM2009. This upgrade has been executed in two stages.

The first upgrade of the DTM resulted in DTM2012. First, the DTM Fortran source code was verified and lines that were no longer used were deleted; the model coefficients were re-ordered according to perturbation type, and the interface information in the header was completed. The model is fitted in particular to the new high-resolution and high-precision accelerometer-inferred densities from the CHAMP and GRACE missions, but low-resolution densities derived from orbit perturbation analysis, as well as spectrometer data from Atmosphere Explorer and Dynamics Explorer, have also been used.

In fact, two models have been constructed using the same data but with different solar activity proxies, namely F10.7 and SOHO/SEM HeII. The latter is assumed to be more representative of atmospheric heating due to EUV activity, and it was initially planned to be the new solar activity proxy, as stated in the objectives. However, comparison of the fit to the data revealed that the model created with F10.7 is better. The problem with the SEM data appears to be on long time scales only (months to years), not for periods up to a solar rotation of about 27 days.

There is a stability issue with all solar proxies on time scales of 6 months and beyond. For the shorter range, the stability issue stems from the correction of semi-annual variations that had to be made when fitting the solar proxies. On the long term (11 years and beyond), the main uncertainty comes from the possible existence of a long-term trend. This trend does not show up in all proxies alike (absent in F10.7, weakly present in SEM, strongly present in the TSI) and its relation to a secular change in solar activity is still a matter of debate.

Therefore, this first update of the DTM model does not benefit from a more representative solar proxy, but its gain in accuracy is due to the large database to which it was fitted (including CHAMP, GRACE, Starlette & Stella, Deimos-1, CACTUS, OG06, DE-2, AE-C, AE-E).

This model was extensively evaluated, using both data that were used in its creation and external data from the US Air Force (courtesy of Bruce Bowman). The a-priori model DTM2009 and the COSPAR reference models JB2008 and NRLMSISE-00 have been compared with the same observations. As expected, DTM2012 is significantly more accurate than the other models when compared to the internal data. Comparison to the Air Force data, which were assimilated in JB2008, showed that DTM2012 is less accurate than JB2008 up to 400 km, but more accurate above that altitude.

DTM2012 is the most accurate model overall in the altitude range 250-1000 km; at low altitudes JB2008 is sometimes better, and at high altitudes NRLMSISE-00 sometimes is. All results are presented in a detailed Evaluation Report, which is available on the ATMOP web site. The report displays the model comparisons to all data sets, as well as the models' performance (observed-to-model ratios; relative precision) compared to multiple data sets in a single plot.
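As an illustration of the statistics quoted in these evaluation reports, the following Python sketch (function and variable names are ours, not ATMOP's) computes observed-to-model density ratios and a simple relative-precision measure for co-located observations and model values:

    import numpy as np

    def ratio_statistics(rho_obs, rho_model):
        """Observed-to-model density ratios and summary statistics.

        rho_obs, rho_model : 1-D arrays of observed and modelled
        densities evaluated at the same epochs and positions.
        """
        ratios = rho_obs / rho_model
        mean_ratio = np.mean(ratios)            # 1.0 means an unbiased model
        std_ratio = np.std(ratios)              # scatter about the mean
        rel_precision = std_ratio / mean_ratio  # relative precision
        return mean_ratio, std_ratio, rel_precision

A mean ratio far from 1.0 indicates model bias, while the relative precision captures the scatter that remains after any bias is removed.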

The second stage of the DTM upgrade consisted of a further revision to create the new DTM2013. The DTM2013 model is fitted to the high-resolution and high-precision accelerometer-inferred densities from the CHAMP and GRACE missions, but also to 2.5 years of GOCE data at 270 km. The low-resolution densities derived from orbit perturbation analysis, as well as spectrometer data from Atmosphere Explorer and Dynamics Explorer, have also been used, i.e. the same density data sets that were used in the construction of DTM2009 and DTM2012.

The first upgrade in 2012 should have been elaborated with a new solar proxy, but tests showed that F10.7 still performed best taking everything into account (precision, stability, availability). Since then, a 'new' solar proxy (measured since 1954, but never distributed before) was made available: the 30 cm radio flux measured by the Nobeyama observatory in Japan. Tests demonstrated its performance to be superior to F10.7, and it was therefore used in DTM2013. However, new operational geomagnetic indices were not produced in time within ATMOP, which is why the same index am is used as in DTM2009 and DTM2012.

The DTM2013 model has been extensively evaluated, mainly using data that were used in its creation; the only external data available are the US Air Force EDR mean densities (courtesy of Bruce Bowman). The pre-ATMOP models DTM2009 and DTM2012, as well as the COSPAR reference models JB2008 and NRLMSISE-00, have been compared with the same observations. As expected, DTM2013 is significantly more accurate than the other models when compared to the internal data, especially for GOCE. Comparison to the Air Force data, which were assimilated in JB2008, showed that DTM2013 is less precise than JB2008 up to 400 km, but more precise above that altitude. The comparison with the EDR densities is slightly better with DTM2012 than with DTM2013.
Bruce Bowman also made available densities from the US Air Force near-real-time density assimilation model (HASDM) along the GOCE orbit at 270 km altitude, for 2010-2012. The Dynamic Calibration Atmosphere (DCA) algorithm solves for the phases and amplitudes of the diurnal and semidiurnal variations of upper-atmospheric density in near-real time from the observed drag effects on low-perigee inactive payloads and debris. These pseudo observations are very close to the real GOCE observations, but without any data gaps, so that all years have the same number of observations (i.e. no seasonal bias as in the case of data gaps). The CHAMP and GRACE results are also better with DTM2013, even though the same data were used; the improvement is thus due to the new F30 proxy.

Finally, the model O/N2 ratios were compared to TIMED/GUVI zonal averages. These data have not been used in any of the models tested here. The JB2008 model cannot be tested because it provides total density only. DTM2013 and NRLMSISE-00 were averaged in the same way as the data.

DTM2013 is the most accurate model overall in the altitude range 250-1000 km, and an overall improvement over DTM2012; JB2008 is better for low altitudes when comparing with the EDR densities, but much worse for the high-resolution GOCE densities. At high altitudes NRLMSISE-00 sometimes is the most accurate. All results are presented in the corresponding Evaluation Report, which is also available on the web site. The report displays the model comparisons to all data sets, as well as the models’ performance (observed-to-model ratios; relative precision) compared to multiple data sets in a single plot.

The model source code has been rewritten in Fortran 90, and it is available as a compiled library on the web site after registration. A wrapper subroutine was developed to make the model more user friendly (in particular for the computation of solar local time and the use of calendar time or MJD2000) and more robust, and to provide automatic computation of solar and geomagnetic proxies without user intervention. A C/C++ wrapper has also been developed to allow users of these languages to make use of the DTM model.
In addition to the source code, which is available upon request, a set of precompiled packages is also provided. The software is accompanied by appropriate documentation, in both document and HTML format, an example of use and the proxy data files.

Data Assimilation for Global Analysis and near-real time prediction

In order to accomplish the objectives of this task, several activities were undertaken, with the following outcomes.

Achievable model corrections for near-real-time conditions as a function of TLE-data availability were determined through simulation (DA-NRTP).

In order to evaluate the capability to estimate drag scaling factors through the processing of radar-derived orbits, we have performed several simulations using the CNES GINS software. Estimation of drag scaling factors is affected by the number of observations of an object (i.e. its orbital characteristics and sensor configuration), the accuracy of the observational data and the solar activity. All these parameters are included in the analysis to understand the achievable accuracy of this coefficient.

One tracking station located in Europe is used for the generation of measurements with different observation accuracies (0.01, 0.1 and 1 metres) over a duration of 5 days. Additional test cases have been executed with several stations in order to assess the capability of estimating drag for very low orbits (poorly observable with one radar due to the short pass duration).

Several different test cases have been executed to identify the behaviour of the orbit determination process and define the best approach for the analysis. Once the configuration was established, a larger set of objects was simulated. The spacecraft used in the simulations is Starlette, a sphere with a mass of 47 kg and a diameter of 0.24 m.

From preliminary test cases, we concluded that the final analysis needed to focus on circular orbits covering different altitudes and solar activities. A set of final cases for six objects at four altitudes and three different epochs was then executed. The altitudes considered are 250, 400, 600 and 800 km, at epochs in 2009, 2005 and 2001 (minimum, medium and maximum solar activity). As expected, observations with a measurement accuracy of 1 m yield poor estimates of the drag scaling factor. This is true even when the estimation (adjustment) process is started from the true value used in the generation of the measurements. Sensors providing observations with an accuracy of 10 cm or 1 cm appear to allow a proper estimation of the scaling factor.

In regard to the solar activity (epoch), it can be concluded that cases associated with low solar activity (the 2009 cases) provide worse results than the others. This is related to the lower atmospheric density, and thus lower drag perturbation, which is harder to observe through the dynamics of the object.
Similarly, high-altitude objects (800 km), for which the drag perturbation is much lower than in low-altitude orbits, offer worse capabilities for estimating the drag scaling factor.
Conversely, the lower the orbit, the larger the drag perturbation, and the easier the estimation should be. In this case, however, we need to take into account that for low orbits the number of observations is smaller due to the shorter sensor passes (poor observability profile). This lower number of observations makes the estimation process more difficult, requiring a larger number of iterations. In spite of this difficulty, the resulting estimation is good.
Orbits at medium altitudes (400 and 600 km) provide the best results, avoiding both the problem of short observational arcs and that of low drag perturbation. In any case, the 1 m accuracy measurements do not provide an accurate estimation of the scaling factor.
Additional analyses were also performed with different spacecraft configurations, the aim being to provide information on the impact of different space object configurations (mass, area). From the analysis, it appears that larger satellites, with larger area-to-mass ratios, provide better capabilities for drag coefficient estimation. This is related to the larger influence of the drag perturbation on the dynamics of the object, which makes the effect more easily observable through the measurements obtained with the radar.
The use of tracking data of satellites or space debris for the evaluation of drag scaling factors may therefore be constrained to objects with larger area-to-mass ratios. The orbits of those objects are particularly difficult to estimate if the area and mass are not known perfectly, so it may be the case that only well-characterised spacecraft can be used for a future evaluation of scaling factors. Small debris, or objects with unknown shapes (coming from fragmentations), should be discarded from such analysis.
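The qualitative findings above can be illustrated with a simple observability estimate. The Python sketch below is schematic, not the GINS orbit-determination process; all numbers (drag coefficient, velocity, density, secular along-track growth) are assumptions for illustration only. It computes the formal least-squares error of a single drag scaling factor k from the along-track drag signal, showing how it improves with higher density, larger area-to-mass ratio and better tracking accuracy:

    import numpy as np

    def formal_error_k(sigma_meas, rho, area_over_mass, cd=2.2, v=7600.0,
                       t=np.linspace(0.0, 5 * 86400.0, 500)):
        """Formal 1-sigma error of a drag scaling factor k estimated by
        linear least squares: sigma_k = sigma_meas / ||d(signal)/dk||."""
        a_drag = 0.5 * cd * area_over_mass * rho * v**2  # drag acceleration
        basis = 1.5 * a_drag * t**2   # schematic secular along-track growth
        return sigma_meas / np.linalg.norm(basis)

    # Starlette-like sphere: 0.24 m diameter, 47 kg
    a_over_m = np.pi * 0.12**2 / 47.0
    for sigma in (0.01, 0.1, 1.0):   # tracking accuracy in metres
        print(sigma, formal_error_k(sigma, rho=1e-12, area_over_mass=a_over_m))

In this simple picture, the error in k scales linearly with the measurement noise and inversely with density and area-to-mass ratio, consistent with the trends found in the simulations.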
Development of a DTM model with near-real-time capabilities (DA-NRTP).

The DTM models are constructed by fitting to the underlying density database, as well as possible in the least-squares sense, to reproduce the mean climatology of the thermosphere. Although the mean accuracy over time frames of weeks to months has improved significantly over the years, errors on short time scales of a few days or less can still be of the order of tens of percent. The main causes of these errors are the inadequacy of the solar and geomagnetic proxies used, on short time scales in particular, and their too low temporal resolution.

A biased density model (i.e. a scaled version of the truth, which systematically under- or over-estimates densities) causes by far the largest orbit prediction errors, because the resulting position error grows quadratically over time. To improve the density prediction up to 3 days out, daily density observations are necessary in order to compute corrections to the model that above all remove model bias. We have developed a method to predict temperature corrections to the DTM model, based on observed model errors, using a neural network. The model errors are estimated using density data, which can be converted to corrections to the model's mean exospheric temperature after a relatively simple procedure. DTM2009 is used as the background model for which corrections are estimated. Results are compared to DTM2012, which is much more accurate at the lowest altitudes, in order to quantify them. The JB2008 model is the COSPAR reference model recommended for use in orbit determination. We have only compared density results with this model, because the derivation of exospheric temperatures is much too complicated with JB2008 due to its internal management of the solar proxies. The present study tests the algorithm of the prototype model: density data are not processed in near-real time, but the data of five scientific satellites are used as if they were. Therefore, we can compare our predictions to the truth and quantify the performance.

The densities are used to compute a daily correction to the mean exospheric temperature via the DTM2009 model. The model values of density and exospheric temperature are computed for the actual parameters (position, mean and daily F10.7 and am), as well as for the cases in which the mean F10.7 plus 10 sfu and minus 10 sfu are used (sfu = solar flux unit, 10^-22 W m^-2 Hz^-1). Assuming linearity, the observed density differences with the model can be converted into exospheric temperature corrections. The conversion to exospheric temperature makes it possible to use density data at different altitudes.
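A minimal Python sketch of this linearisation is given below; dtm2009 stands for a hypothetical function returning the model density and exospheric temperature for given position/time parameters and mean F10.7 flux:

    def texo_correction(rho_obs, params, f_mean, dtm2009, dsfu=10.0):
        """Convert an observed density error into a correction of the
        mean exospheric temperature, assuming local linearity."""
        rho_0, texo_0 = dtm2009(params, f_mean)          # nominal case
        rho_p, texo_p = dtm2009(params, f_mean + dsfu)   # +10 sfu case
        rho_m, texo_m = dtm2009(params, f_mean - dsfu)   # -10 sfu case
        # Sensitivity of density to exospheric temperature (central difference)
        drho_dtexo = (rho_p - rho_m) / (texo_p - texo_m)
        # Map the observed density error to a temperature correction
        return (rho_obs - rho_0) / drho_dtexo

Because the correction is expressed in exospheric temperature rather than density, observations taken at different altitudes can all contribute to the same daily correction.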

The model-inferred temperatures are used to train the neural network models, whereas unused data from all four satellites in the test period from December 18th, 2007 to December 18th, 2008 are used to assess the prediction accuracy of the models up to 72 hours out (validation data set). An artificial neural network is a more or less complex graph of elementary units called nodes or neurons, in reference to the biological vocabulary. An artificial neuron is a unit characterised by its input signals x1, x2, ..., xp, a vector of weights W, a bias b and an activation function ψ, also called the transfer function.

The architecture of the network is based on a multilayer perceptron; a single hidden layer is used for its property of universal approximation. Input variables are presented on the first layer, called the input layer. The model uses the exospheric temperatures from the past 4 days (T(t-1), ..., T(t-4)), the gradients of F10.7 for the past 4 days (∇F(t-1), ..., ∇F(t-4)), the Ap and its gradient for the past day (A(t-1), ∇A(t-1)), and a bias. Parameter tuning has led to the use of 10 hidden neurons. The choice of input variables was deduced from an ARIMAX (autoregressive integrated moving average with exogenous inputs) analysis, which in particular revealed the need to use temperatures from the past 4 days.
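For illustration, the input vector could be assembled as in the following Python sketch (function and variable names are ours, assuming daily series of Texo, F10.7 and Ap, with gradients taken as day-to-day differences):

    import numpy as np

    def build_input(texo, f107, ap, t):
        """Assemble the network input vector for day t (illustrative)."""
        grad_f = np.diff(f107, prepend=f107[:1])   # dF(i) = F(i) - F(i-1)
        grad_a = np.diff(ap, prepend=ap[:1])       # dA(i) = A(i) - A(i-1)
        return np.concatenate([
            texo[t-4:t][::-1],        # T(t-1) ... T(t-4)
            grad_f[t-4:t][::-1],      # dF(t-1) ... dF(t-4)
            [ap[t-1], grad_a[t-1]],   # Ap and its gradient for the past day
            [1.0],                    # bias input
        ])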


The artificial neural network forecasts the variations in temperature accurately. Several models were tested using combinations of the satellite data available for training; it was shown, for example, that a model trained on Stella data gives good results for other satellites (also at lower altitudes).
The model presented so far is a 24 h forecasting model for exospheric temperature. Without changing the architecture, it has been tested for 48 and 72 h forecasts. Forecasting one day further out adds approximately 10 K of RMS error, but using the neural network consistently leads to an error 10 K lower than with naive modelling. A complete description of the model and its performance is given in the journal Space Weather (Choury et al., 2013).
The accuracy of the present models is limited by the geomagnetic index Ap, for which no accurate forecast is available. The neural network therefore cannot predict a geomagnetic storm, but it reacts quickly after the onset. If an accurate forecast becomes available (based on solar imaging or solar wind measurements, for example), it can easily be added to the input parameter list to train a new model. Secondly, this prototype is constructed using daily observations, whereas the geomagnetic index is available every 3 hours. It is equally straightforward to train a new model with a higher temporal resolution (note: the exospheric temperatures must then be re-sampled through interpolation). For the model to run operationally, the most important task will be to design a robust data screening and pre-processing procedure. These improvements are all planned for testing in the near future with the Swarm data, which will be delivered in near-real time on a best-effort basis.


Development of the near-real-time model package (DA-NRTP).

The prediction model that was developed is based on analyses with the R statistical software. This software allows relatively quick testing of different neural network architectures. The different phases consist in particular of building a structure, carrying out a learning phase, and finally testing whether the neural network provides the expected results. The formerly described neural network and algorithm provide the prediction of the exospheric temperature (Texo) as Texo(t) = C*Ψ(A*X+Bin)+Bout,

where Texo(t) is the output layer, X represents the observations (input layer), and Ψ(t) = 1/(1+exp(-t)) is the non-linear transfer (or activation) function. The network that we use is implicitly defined by the elements of the matrices A and C and the bias vectors Bin and Bout. They are adjusted in the least-squares sense, and a separate set of coefficients is needed for each prediction interval of 24 h, 48 h and 72 h.
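A minimal numpy transcription of this evaluation, assuming weight arrays of the stated dimensions (10 hidden neurons), could read:

    import numpy as np

    def psi(t):
        """Logistic activation, psi(t) = 1 / (1 + exp(-t))."""
        return 1.0 / (1.0 + np.exp(-t))

    def forecast_texo(X, A, B_in, C, B_out):
        """One network evaluation: Texo = C @ psi(A @ X + B_in) + B_out.

        X     : input vector (past temperatures, proxies, bias)
        A     : (10, len(X)) hidden-layer weights (10 neurons)
        B_in  : (10,) hidden-layer biases
        C     : (1, 10) output weights; B_out : output bias
        Separate (A, B_in, C, B_out) sets apply to the 24 h, 48 h
        and 72 h prediction intervals.
        """
        return C @ psi(A @ X + B_in) + B_out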

The software is written in shell script and Fortran 90, and needs only three kinds of information:
• the data, which consist of observations (Texo) and proxies (F10.7 and Ap) from the previous days,
• the matrices that constitute the neural network,
• and, of course, the dates of the forecasts.
The shell procedure that manages the prototype only requires setting the start and end dates, and choosing the type of mission (Stella, Starlette, CHAMP, GRACE, GOCE). The matrices forming the neural network are fixed; they were adjusted to data from Stella, CHAMP and GRACE over the period from November 1993 to December 2012. GOCE was kept separate to perform independent tests.
In practice, the execution of the prediction is performed in three steps: the preprocessing phase (selection and formatting of the data); the forecast of the exospheric temperature using the neural network; and the conversion of Texo into the mean corrected solar flux (which is included in the forecast program as a subroutine).

During the execution of the program, succinct information is provided directly on screen, but a user who wants more details can refer to the file texo_forecast.info, which gives, for example, the selected observations and the potential reasons for any problems encountered.


Thermospheric data assimilation using a physical model (DA-GA)

An ensemble optimal interpolation (EnOI) data assimilation scheme was developed to assimilate in-situ CHAMP density observations into a physical model.

This work was completed in two parts: initial results demonstrating the analyses produced by the system were followed by further work to obtain a fully functioning assimilation system. For convenience, we review here the completed results, summarising the work done on the key components of the assimilation - how the density observations are localised and combined with a model background density field and an ensemble-derived background error covariance matrix to create a density analysis, and how this density analysis is then used to restart a model run, thus completing the assimilation cycle.

Observations
The key to developing the assimilation system was interpolating the satellite in-situ density observations: because of the satellite orbits, these come from a range of different altitudes, but to be suitable for assimilation they needed to be interpolated onto a fixed model pressure level, ideally the pressure level that corresponds most closely to the satellite altitude. The code was written such that the exact level used is flexible and can be adapted to the satellite used - here CHAMP, but in the future this can be extended to GOCE or GRACE.
In order to avoid creating strong gradients in the analysis, the CHAMP data are decimated from their original 10 s cadence to a 120 s cadence. At present no other quality control measures have been implemented, as those originally developed were for the raw observations and cannot be applied to the interpolated observations used here. However, new measures could be developed relatively rapidly in the future and incorporated into the assimilation cycle. Density observation errors should also be more fully addressed: at present these are assumed to be 5% of the observation value, and fully independent of other (e.g. neighbouring) observations.

Localisation
The impact of these density observations on the density analysis is limited to the observations' vicinity using a bell-curve-based localisation matrix. This not only reduces the numerical cost of the assimilation, but also effectively increases the rank of the ensemble, while the bell-curve smoothing avoids introducing gradients in the analysis liable to cause instabilities in the model. Currently this localisation matrix operates solely in the horizontal, at the main model pressure level used for the observations; the vertical localisation is performed in an ad-hoc fashion, scaling the increment from the main pressure level to adjoining levels. This means that the vertical coupling is currently not as realistic as it could be, but this could be improved in the future, as the scaling method can be applied at an earlier stage to ensure the localisation also operates in the vertical, making it fully consistent.
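One possible form of the bell-curve weighting, assuming a Gaussian taper of the great-circle distance between an observation and a grid point with an assumed length scale L, is sketched below in Python (the report does not specify the exact shape or scale used):

    import numpy as np

    def localisation_weight(lat1, lon1, lat2, lon2, L=2000.0, R=6371.0):
        """Gaussian localisation weight from great-circle distance (km)."""
        p1, p2 = np.radians(lat1), np.radians(lat2)
        dlon = np.radians(lon2 - lon1)
        # Great-circle distance (spherical law of cosines)
        d = R * np.arccos(np.clip(np.sin(p1) * np.sin(p2) +
                                  np.cos(p1) * np.cos(p2) * np.cos(dlon),
                                  -1.0, 1.0))
        return np.exp(-0.5 * (d / L)**2)   # smooth taper avoids sharp gradients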

Ensembles and the background error covariance matrix
The EnOI scheme was developed using a temporal ensemble to provide the density background error covariance matrix which determines the relative impact of the model background density on the density analysis. This temporal ensemble could either be over a fixed period (e.g. one month), or a running period, centred on the observations, which improves the characterisation of short-lived temporal effects (such as storms) in the model’s solar or geomagnetic drivers (F10.7 and Kp).

Nevertheless, to avoid the temporal smearing that both of the above methods inevitably produce (as the backgrounds used come from different times, with different driver conditions), a more rigorous ensemble was created. This "random" ensemble currently takes nine independent model runs and advances them for an hour at a time. At each hour, the temperature field of each ensemble member is perturbed with smoothed random temperature fluctuations, which differ between the ensemble members (and which are repeatable, the randomisation depending on the model time as well as the ensemble member). Each ensemble member is then run forward in time to the next hour, when it is perturbed again. The resulting differences between the density fields of all ensemble members can be used to determine a background error covariance matrix at any particular time where model density fields have been output. The advantage of this approach is that, as all members have the same geomagnetic and solar driver values at any particular point in time, the temporal characterisation of dynamic effects like storms should be improved. The disadvantage is that the ensemble member runs are not affected by the observations being assimilated into the main model run, so the background error covariance matrix does not show any benefit from the observations over time. However, such coupling could be added in the future, thus extending the EnOI method to an ensemble Kalman filter (EnKF).
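Putting these ingredients together, a single EnOI update could be sketched as follows in Python (a minimal dense-matrix illustration under the standard EnOI equations, not the operational code; H, R and loc are assumed to be given):

    import numpy as np

    def enoi_analysis(xb, ens, y, H, R, loc):
        """One EnOI update: xa = xb + K (y - H xb).

        xb  : (n,) background density field (flattened grid)
        ens : (n, m) ensemble of model density states
        y   : (p,) density observations; H : (p, n) observation operator
        R   : (p, p) observation-error covariance (e.g. (0.05*y)^2 diagonal)
        loc : (n, n) localisation matrix (Schur product with the covariance)
        """
        m = ens.shape[1]
        Xp = ens - ens.mean(axis=1, keepdims=True)    # ensemble anomalies
        B = loc * (Xp @ Xp.T) / (m - 1)               # localised covariance
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # Kalman gain
        return xb + K @ (y - H @ xb)

Dense n-by-n matrices are acceptable for a sketch; an operational system would exploit the localisation to keep the computation sparse and local.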

Closing the assimilation loop
In order to complete the assimilation cycle, the analysis needs to be used to initialise a model run. This cannot be done directly with the density analysis, as density is a derived field in the models investigated here. Rather, much as for the random ensemble method, the density analysis is converted into a temperature analysis. The difference between this temperature analysis and the temperature background is then added to the temperature background. The resulting temperature field (the background plus the analysis increment) is then used, together with the other, unaltered fields, to initialise a model run, which continues until the time of the next assimilation cycle. The density field at that point is then used as the model background for the subsequent assimilation cycle.
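The report does not detail the density-to-temperature conversion; one simple possibility, assuming ideal-gas behaviour on a constant-pressure level (density inversely proportional to temperature), is sketched below for illustration only:

    def temperature_analysis(T_bg, rho_bg, rho_an):
        """Assumed conversion: on a constant-pressure level, rho ~ 1/T."""
        T_an = T_bg * rho_bg / rho_an   # derived temperature analysis
        increment = T_an - T_bg         # difference from the background
        return T_bg + increment         # field used to restart the model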

It is worth noting a key point about the results presented here: the observations used do not span the entire six-hour assimilation cycle, only the one-hour period around the assimilation time. This restriction is made principally to ensure that the observations have a reasonable temporal correspondence to the model fields - if observations extended to the midpoint of the assimilation cycle, they could represent conditions up to three hours out of step with the model when they are assimilated (as the observations are assumed to represent conditions at the assimilation time). A side benefit is that the remaining, non-assimilated observations are left as "independent" data against which the assimilation effects can be tested. However, this is moot, as the observation decimation leaves many observations in the assimilation period untouched (hence also available for testing), and, as will be seen, the currently limited impact of the assimilation means the assimilation cycle will need to be shortened, probably to hourly intervals, meaning no observations will be outside the assimilation period.

Models used: CMAT2 & TIEGCM
The initial data assimilation package was developed for CMAT2; however, it has been extended to also work with the Thermosphere Ionosphere Electrodynamics General Circulation Model (TIEGCM, developed by the National Center for Atmospheric Research in the USA), as this provides various advantages for development, speed, robustness and scientific return.

Specifically, TIEGCM is not only significantly faster on a desktop than CMAT2, but also has MPI bindings allowing it to be run on the Met Office's supercomputers, so that large data sets can be analysed much more easily. It is also a generally robust code, running without issues during periods where CMAT2 often crashes, e.g. during solar maximum or particularly stormy conditions such as the Halloween storms.

Scientifically, including TIEGCM is of interest not only for inter-model comparisons, but also because it potentially allows easier assimilation of GRACE data in the future, as it extends up to higher altitudes than CMAT2 (from its lower boundary at ~97 km, TIEGCM extends up to ~500-700 km depending on solar activity, whereas CMAT2's lid is at ~350-500 km). TIEGCM's grid differs too, with latitudes ranging from -87.5° to 87.5° in 5° increments and longitudes from -180° to 180° in 5° increments (CMAT2's increments are 2° and 18°, respectively). Vertical coordinates are also pressure-based, increasing in half scale-height increments (rather than CMAT2's third scale-height increments). For consistency with the CMAT2 input parameters, TIEGCM has been run using the Heelis high-latitude ion convection model as magnetospheric input, which, like CMAT2, uses the F10.7 solar flux and the Kp geomagnetic index as inputs.

Comparison of analyses for TIEGCM & CMAT2
The density analysis and the derived temperature analysis have been compared to the background field for both models. The TIEGCM density plots also show the decimated CHAMP density observations from the hour centred on the assimilation time, with the values interpolated onto the TIEGCM pressure level being shown.

It is seen that these CHAMP values are less dense than the background for either model, and so result in analyses that are less dense in the vicinity of the observations, as might be expected. The discrepancy between model and observations is greater for TIEGCM than for CMAT2, and so results in a larger impact on the TIEGCM analysis, with densities reduced by up to 25%. The result is a larger temperature increment for TIEGCM - when this is used to start the next model cycle, the temperatures in the vicinity of the observations will be 100 K (~25%) warmer.

Assimilation results, and the need for shorter cycles & incremental analysis updates
A final comparison is also presented: CHAMP density results over one day, with densities interpolated to the satellite position from both the TIEGCM and CMAT2 models in free-running (non-assimilative) mode, together with interpolated results from the assimilative TIEGCM run after five days of assimilation intended to "spin up" the assimilation model. Assimilation results are available for TIEGCM, while those for CMAT2 are underway. Also shown is the performance of a "forecast", in which the assimilative results from a given point (12Z) are allowed to run free, with no further data assimilation, only driver inputs - much as might be expected in a future forecast system running assimilative results into the future using predicted driver values.

The free-running models show that overall TIEGCM typically agrees better with CHAMP than CMAT2 does, with the exception of certain parts of the orbit, which seem to be largely associated with the night-side local time coverage. It is worth noting that the 06Z results seen before coincide with one such period where CMAT2 does better, so those results should not be taken as fully representative.

Looking at the density differences with CHAMP, it turns out that the differences between the free-running and assimilative TIEGCM versions are very minor - inset boxes are provided in the report figures to show them better. The left-hand box shows that although the assimilative results are closer to CHAMP (i.e. the difference is nearer to zero), the effect is very small (a change of ~3%, as might be expected from the previous results) and is confined to the period (yellow boxes) where CHAMP data are being assimilated - once the region where data are no longer assimilated is reached, the assimilative results rapidly become identical to the free-running results, with no discernible differences by the end of the assimilation cycle.
Similar behaviour is seen for the forecast (right inset box): this also returns to the free-running state as fast as the underlying assimilative model.

A closer examination (not shown) of the model fields reveals that the limited impact of the assimilation is due to TIEGCM swiftly acting to damp the analysis increment: what went in as a ~25% increase in temperature near the observations reduces to ~3% within 15 minutes of assimilation. The subsequent decrease is slower; nevertheless, the impact becomes negligible (~0%) 105 minutes after assimilation. As no impact remains by the end of the six-hour cycle (i.e. at minute 360), the following cycle effectively starts from the same point as a free-running model, rather than from a desirable "spun-up" state in which the model is closer to the observations and thus requires a smaller adjustment from the assimilation.

One obvious improvement to cope with this effect is to shorten the assimilation cycle, so that the impact of assimilation in one cycle persists to positively affect the following cycle. Work is underway to reduce the current six hour cycle to one hour.

A more subtle point is that the drastic reduction of the analyses' impact is likely due to the large change they represent, upsetting (say) hydrostatic balance in the model. At present, the change between the background and analysis temperature, ΔT, is added to the background all at once, at the start of the model run. An alternative would be to take an incremental analysis update (IAU) approach, where ΔT would be split into smaller chunks, the first chunk being added to the background at the start of the model run, the second chunk at the next timestep, and so on. Such an approach perturbs the model by a lesser amount and is more likely to lead to the increment having a greater overall impact, so it would be worth pursuing in the future.
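A minimal sketch of such an IAU scheme is given below; model_step is a hypothetical function advancing the model state by one timestep, and the even split of the increment is an assumption for illustration:

    def run_with_iau(model_step, T, dT, n_steps):
        """Spread the analysis increment dT evenly over n_steps timesteps,
        instead of adding it all at once at the start of the run."""
        for _ in range(n_steps):
            T = T + dT / n_steps   # add one chunk of the increment
            T = model_step(T)      # then integrate to the next timestep
        return T

Because each perturbation is small, the model is less likely to shed the increment through adjustment processes, so more of the analysis information should survive to the next cycle.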


Assessment and Comparison of Thermospheric Data Assimilation results

CMAT2, DTM_2012 and TIEGCM results have been compared with CHAMP observations at solar minimum (March 2009) and solar maximum (March 2003). DTM_2013 was not available at the time of the analysis, so DTM_2012 was run for the particular altitude, latitude and longitude of the CHAMP spacecraft at the time of observation. TIEGCM was run with 6-hourly data assimilation, while CMAT2 was run without data assimilation. Note that the TIEGCM and CMAT2 results were interpolated to the particular altitude, latitude and longitude of CHAMP for comparison purposes; for consistency, the interpolation procedure was the same as that used within the data assimilation code. CHAMP measured densities and DTM_2012 densities were available every 10 seconds, while TIEGCM and CMAT2 densities were saved every 15 minutes.
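For illustration, the interpolation of a gridded model density field to the satellite position could be done as below - a sketch only, using the TIEGCM horizontal grid spacing quoted above; the altitude grid and density field are placeholders, and the real models interpolate on pressure levels:

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    lats = np.arange(-87.5, 88.0, 5.0)                     # 5 deg latitude grid
    lons = np.arange(-180.0, 180.0, 5.0)                   # 5 deg longitude grid
    alts = np.linspace(300.0, 500.0, 21)                   # km, assumed levels
    rho_grid = np.ones((lats.size, lons.size, alts.size))  # placeholder field

    interp = RegularGridInterpolator((lats, lons, alts), rho_grid)
    rho_at_champ = interp([(47.3, -12.0, 352.0)])          # lat, lon, alt (example)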

The results of this comparison show that DTM_2012 performs best at solar minimum, with its densities closest to the actual CHAMP observations. However, TIEGCM also performs very well, with densities only slightly higher than DTM_2012 overall. CMAT2 performs worst here, with densities generally higher by at least ~2 × 10^-12 kg m^-3.


The same analysis was performed on CHAMP and the three models at solar maximum, concluding that TIEGCM generally outperforms CMAT2, and even more so DTM_2012, at solar maximum. However, DTM_2012 does tend to mirror increases and decreases in the CHAMP densities better than the physical models: while TIEGCM densities do not change significantly, DTM_2012 densities mirror the sharp increase, and CMAT2 densities in fact decrease rather than increase (DTM_2012's data assimilation perhaps improving results here).

It is important to note that DTM_2013 will likely improve upon the DTM_2012 results shown here. The results for CMAT2 may also improve once the data assimilation technique has been applied to it. It is intended to compare DTM_2013 results with TIEGCM and CMAT2 results from runs using 1-hourly data assimilation as well as incremental analysis updates. Comparisons will also be made with GRACE and GOCE data, as well as CHAMP, over longer time periods than just one month at solar maximum and minimum. This will give a fairer comparison of the 'best results' from these models.

The forecast F10.7 solar flux developed in Work Package 2 has been tested with TIEGCM and DTM_2012. The models were first run using the actual observed F10.7 values, and then run using the forecast values. Results were compared at both solar minimum and solar maximum. A mean change of only ~2% was found when using the forecast F10.7, indicating that the proxy is certainly accurate enough for general use. It is intended to eventually test the am forecasts also developed in Work Package 2, in order to evaluate their impact on the models in addition to that of the F10.7 forecasts.

Potential Impact:
1. Potential Impact
The ATMOP project was aimed at providing a breakthrough in the field of satellite drag modelling, with direct benefits to several categories of users. The main foreseen impacts are:
- Strategic impact: Enable Europe to take a strategically leading position in the field of satellite drag modelling by making the DTM a reference model in this field.
- Scientific impact: Go beyond the state of the art in the understanding and modelling of the thermospheric variability, with an emphasis on satellite drag determination.
- Technical impact: Provide improved now- and forecasts of the solar EUV and geomagnetic forcings that affect the thermosphere-ionosphere system. Deliver an improved version of the DTM drag model.
1.1 Strategic Objectives and Their Impact

Europe is one of the three major players in space research, engineering and commercialization. Its joint space activities have grown considerably over the past decades, to rival those of the United States and Russia and exceed those of Japan and other spacefaring nations. Considerable investment, contributed by many European countries and associated partners, has been made to launch and maintain facilities in space for the benefit of Europe, for research as well as for public and private services. For example, the European Space Agency, ESA, has been extremely active in the research sector, the European meteorological satellite organisation, EUMETSAT, in the public service sector, and several telecommunication companies in the private service sector. Developing schemes and measures to secure European space assets is thus an important but also challenging task. It is important because the costs incurred by the damage or loss of space equipment are considerable. It is challenging because a number of natural processes, ultimately linked to solar activity, are known to continuously change the space environment and thus generate situations that pose a threat to space-borne technology.

The ATMOP project addresses the security of space assets through the development of an improved near real-time thermosphere model with forecast capabilities which has the potential to be converted into an operational model. An accurate thermosphere model with timely updates is the basis for satellite drag computation, in real time (operational nowcast) as well as in a predictive manner (operational forecast).

Semi-empirical thermosphere modelling, presently the only way toward operational thermospheric drag calculation, is pursued in both the US and Europe. The US efforts, enjoying strong national support, progress continuously, and we consider it important for Europe to keep pace with these external developments. Just as in GNSS, where Europe seeks independence from the US-owned GPS system by developing its own Galileo system, Europe would be well advised to establish independence from US models of the thermosphere and maintain its own capability for thermosphere modelling. This will ultimately lead to enhanced expertise in Europe and independence from the US in thermospheric drag calculation and prediction.

In spite of this independence, efforts to collaborate with other players on the standardisation of methods and drag evaluation approaches are beneficial, and have also been addressed by the ATMOP activity. Collaboration with the American MURI/NADIR activity has resulted in such an international collaboration, as explained in the dissemination chapter below.

Thermospheric drag does not pose a problem to all orbiting space objects, as it concerns LEO satellites only. The armada of television satellites (in geostationary orbit) and GNSS satellites (at around 20,000 km altitude) is not affected by thermospheric drag. Meteorological, land and sea observing satellites, however, tend to be in LEO and are therefore vulnerable to thermospheric drag. Space debris, e.g. dead LEO satellites and upper stages of launchers, is also typically in LEO and poses a collision threat to active satellites. The enhanced orbit determination capabilities provided by ATMOP directly contribute to improving the security of space assets against on-orbit collisions and in that sense will also be a major asset to the European SSA programme.

Even if we were to neglect the risk of collisions, we find unexpected drag changes - an effect of space weather events upon the thermosphere - to be a considerable problem. Satellite tracking would become uncertain if drag were not accounted for, accurate spatiotemporal identification might become impossible, and in the worst case ground stations might lose contact. The ATMOP project has developed a thermosphere model which allows accurate near real-time computation of atmospheric drag in order to improve the precision of orbit determination.

Thermosphere drag nowcast and forecast have a wider field of application than merely the terrestrial environment. Precise drag assessment plays an important role in planetary mission planning and execution, in particular if a lander is employed. Monitoring EUV and assessing the influence of its variability on planetary atmospheres will be a task of utmost importance. Among the many future planetary missions, the intended manned mission to Mars is a major enterprise which cannot be safely executed if accurate drag prediction is not available.

1.2 Innovative and Scientific Impact

The ATMOP project has been the first ever attempt in Europe to combine elements from a semi-empirical and a physical model of the upper atmosphere (DTM and CMAT2, respectively) and work toward a new model which takes advantage of the best of both and, in addition, incorporates near real-time data via advanced assimilation techniques. Comparing DTM and CMAT2 and identifying the elements of each which may serve to improve the other constitutes a novel approach to European thermosphere modelling. This approach, supported by research into electrodynamic and ion-neutral coupling, is expected to aid the selection of the best possible proxies in semi-empirical modelling and the definition of boundary conditions and input parameters for physical modelling. It is thus to the mutual benefit of both ways of modelling.
As a general rule, building a physical model which, given the correct input, produces the correct output provides reassurance that the key physical processes are understood. The benefit of comparing physical and semi-empirical modelling lies in the end product, namely a semi-empirical model, driven by proxies proven to be the most appropriate in terms of the underlying physical principles, which is fast and accurate enough to enable transition beyond the project end date into an operational model. In the particular case of the new thermosphere model built in ATMOP, the identification and confirmation of the most appropriate proxies is on its own a significant result which may turn out to be applicable to other research areas.
While the ATMOP project had the specific primary objective of building a new thermosphere model with the potential to be used for drag computation and orbit prediction, the outcome of the project has also improved our general understanding of the thermosphere, its 3-D density distribution, external forcing and dynamic variation. Results obtained from the improved thermosphere model built in the ATMOP project are of value for the mitigation of potentially hazardous space weather effects on LEO satellites, and also for a general improvement in the precision of orbit determination computations for all kinds of LEO objects.
It should be stressed that the results obtained in ATMOP benefit a much larger user community than the one concerned with satellite drag only. Indeed, the state of the thermosphere and that of the ionosphere are so closely connected that a better understanding of the former inevitably impacts the latter. In particular, the improvement to solar and geomagnetic proxies directly benefits the semi-empirical nowcast and forecast of the ionosphere, which matters for radio communications and satellite operations. The experience gained within ATMOP enables its partners to improve the security of space assets beyond the mere effect of atmospheric drag and to stay at the forefront of research and development to mitigate undesired space weather effects.
The solar EUV flux now- and forecast developed within ATMOP, for example, is currently a product requested by several user communities. The Galileo project plans to install EUV flux monitors on two of its satellites in order to mitigate the effect of solar irradiance variability. The solar EUV flux and its proxies are also major inputs to coupled thermosphere-ionosphere models and receive increasing attention in relation to the impact of EUV-UV radiation on climate through processes such as ozone production.
The new geomagnetic indices developed in the framework of the ATMOP project have a granularity down to 15 minutes. This 15-minute granularity better suits the time characteristics of geomagnetic activity than the 3-hour granularity of the currently used Kp index; the indices developed in the framework of ATMOP therefore benefit all the space weather communities that currently use Kp to characterise geomagnetic activity.

Neither major socio-economic impact nor wider societal implications of the project have been identified.

2. Dissemination activities and Exploitation of Results

2.1 Dissemination
In order to disseminate the outcome of the activity, several means have been put in place:

1 - Publication on the Web: relevant information about the project and the results of the WPs are made publicly available through the project website, so as to enhance timely dissemination of intermediate and final results. In particular, the proxies and the semi-empirical thermosphere models (DTM_2012 and DTM_2013) developed in the frame of the project are made available through the ATMOP web site, so that it is possible to run the thermosphere model on-line. This task includes the development of mechanisms for data export and interactive running of models.
Regarding the execution of the models, the web system provides the capability of performing:
- Single run of the models at a given position and time
- Atmosphere profile computation
- World map density calculation
The user may define the solar and geomagnetic proxies to be used, or select those developed during the ATMOP activities. These proxies are routinely updated from the scientific partners' sites to the web server to ensure up-to-date information. The on-line model provides tabular results, and plotting capabilities for atmosphere profiles (density and component concentrations) and atmospheric density maps.

2 - Brochures and presentations: different kinds of publicity material, such as PowerPoint presentation templates aimed at scientists and at potential users, and brochures addressing different levels of expertise, were produced to address the various partners and users of the thermosphere model. A public scientific document with a summary of the objectives and approach of the ATMOP project was created and has been provided to external entities for a better description of, and communication about, the project.

3 - Attendance at conferences, oral and poster presentations: the results of the different WPs have been presented, under the responsibility of their respective leaders, through communications at national and international conferences, publications in peer-reviewed journals, and dissemination of results among stakeholders in Europe. More than 30 conferences and events have been attended with a significant presentation of the ATMOP outcome.

4 - International journals: the ATMOP partners have published in various journals concentrating on different fields such as solar physics, space physics, atmospheric physics, radio science, meteorology and more. The advantage of this option is that project results reach a wide (but still specialised) audience, and are archived and easily accessible world-wide, also for future generations. Nine peer-reviewed papers have been published as a result of the research done during the project.
Special mention must be made of the collaboration with the MURI/NADIR activity. This dissemination activity aims to reinforce collaboration with relevant scientists from the USA working on similar topics. After several discussions with NADIR representatives, four topics were selected for this collaboration:
-Modeling
-Drag and drag coefficients
-Independent validation
-Data Assimilation
The DTM2012 model is being evaluated by the US Air Force Research Laboratories.

2.2 Exploitation Plan
ATMOP seeks the solution of a problem which will be useful to space agencies and other public and private organizations that own or operate LEO satellites and require enhanced security for their assets in space. The new semi-empirical atmosphere model developed in ATMOP can be used to perform near real-time LEO satellite drag computation. It is therefore quite natural that project results are advertised for exploitation by potential users. The potential users are:
- The European Space Agency, ESA, both satellite operational teams and Space Situational Awareness (SSA) representatives
- National space agencies which are involved in LEO satellite operation are also potential users, among them the French Space Agency, CNES, which is actually a consortium partner of ATMOP.
To this end, contacts have been established with ESA, both with the Space Situational Awareness team and the Space Debris Office, to inform them of the objectives and capabilities of the models. Contacts with operational teams at CNES have also been made.

List of Websites:
www.atmop.eu