Programming Model INTERoperability ToWards Exascale (INTERTWinE)

Periodic Reporting for period 2 - INTERTWINE (Programming Model INTERoperability ToWards Exascale (INTERTWinE))

Reporting period: 2017-04-01 to 2018-09-30

Computers play a critical role in all areas of science. They are used to perform virtual experiments when a traditional laboratory experiment is impractical or impossible. Many of these virtual experiments require vast amounts of computing power, far beyond what is available on a typical desktop computer. These experiments can only be performed on a supercomputer.

Up until around 2005, manufacturers made supercomputers faster by shrinking the electronic components in the central processor and thus increasing the rate at which it could compute—the so-called clock-speed. However, in the last ten years, it has not been possible to increase clock speeds further because electronics have reached a level of miniaturisation that makes it problematic to dissipate the heat generated within them.

Instead, manufacturers exploit parallelisation techniques, combining many CPU cores to increase the headline performance of the computer. In reality, however, for the scientists’ software that runs on these many-core computers to exploit the CPUs’ potential speed, it must generate enough concurrent computations to keep all of the cores busy.

In the next ten years, the world’s fastest computers are expected to reach what is often referred to as “the Exascale”, which means they will have the potential to perform one quintillion calculations per second. We expect Exascale computers will be constructed from tens of millions of CPU cores, requiring a huge number of concurrent tasks. Current scientific software cannot produce enough concurrent tasks to keep an Exascale machine busy, even though it has enough work for the machine to do.

The INTERTWinE project takes on the challenge of enabling scientific software to run at Exascale, by helping scientists to expose enough parallelism within their software for it to run on a supercomputer with tens of millions of cores.

There is a vast repository of scientific software in use today and it is not realistic to rewrite all of this software for the Exascale. INTERTWinE has adopted a progressive approach, building on the programming models that have proved their worth through their widespread adoption in current software. Approaches such as threading, distributed-memory and accelerator programming will remain, but will be enhanced with more effective runtime integration and optimised by best-practice techniques for hybrid programming.

The INTERTWinE team has worked with real software and popular programming techniques, to ensure the focus is aligned to scientists’ pressing needs. Further, INTERTWinE has prioritised training and knowledge exchange, to help disseminate the essential skills needed by the European science community.
In September 2018, the INTERTWinE project came to the end of three years of highly productive work on the important but often neglected topic of parallel programming model interoperability. Highlights include:

• The project designed several APIs to support resource management between multiple runtimes that might be active in the same application.
• Using the task pause/resume API, the project has developed task-aware versions of the MPI and GASPI communication libraries. These extend the natural style of programming with asynchronous dependent tasks to encompass inter-process communication as well as computation. They remove the possibility of deadlock that can occur if this style is attempted with, for example, standard MPI and OpenMP tasks (see the first sketch after this list).
• The use of shared memory windows provides a convenient migration path from pure MPI to hybrid MPI + GASPI applications. The project has improved the support for shared memory windows in GASPI, which has resulted in very high-performance implementations of some common communication patterns, including halo exchanges and certain collective operations (see the second sketch after this list).
• Distributed dependent task models go some way towards the “silver bullet” single-API approach by moving interoperability issues away from the application and into the runtime. To support this approach, INTERTWinE has designed and implemented a directory/cache interface which allows some distributed task runtimes to be decoupled from the underlying communication layer. In recognition of the difficulties of automatically scheduling tasks in a distributed system, the project has also explored a new approach: the Event-Driven Asynchronous Tasks (EDAT) model provides much of the convenience of the tasking model, while still retaining full programmer control over which nodes the tasks will execute on.
• Project partners have been actively engaged in the international standards bodies for MPI, OpenMP and GASPI, promoting interoperability issues and supporting the adoption of new features in these APIs to solve interoperability issues.
• The Developer Hub on the INTERTWinE website provides resource packs to support developers interested in different API combinations, containing Best Practice Guides and example codes. The project ran two very successful Exascale Application Workshops: findings from these are also available on the website.
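
The first sketch below is a minimal, hedged illustration of the dependent-task communication style referred to above, written with only standard OpenMP tasks and MPI rather than the project’s task-aware libraries. The halo size and ring-style neighbour exchange are illustrative assumptions, not project code; the comments note where the plain MPI + OpenMP combination risks the deadlock that the task pause/resume API and the task-aware libraries are designed to remove.

```c
/*
 * Minimal sketch (not project code) of a halo exchange expressed as
 * OpenMP dependent tasks that call plain MPI.  With standard MPI and
 * OpenMP only, this style risks deadlock: if every OpenMP thread ends
 * up blocked inside MPI_Recv, no thread is left to run the matching
 * send task.  The project's task-aware MPI/GASPI libraries, built on
 * the task pause/resume API, suspend such tasks instead of blocking
 * the thread, which removes that failure mode.
 */
#include <mpi.h>
#include <stdio.h>

#define N 1024   /* illustrative halo size */

int main(int argc, char **argv)
{
    int provided, rank, size;
    /* Several tasks may call MPI concurrently, so request THREAD_MULTIPLE. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double halo_out[N], halo_in[N];
    int right = (rank + 1) % size;          /* ring-style neighbours */
    int left  = (rank - 1 + size) % size;

    #pragma omp parallel
    #pragma omp single
    {
        /* Computation task produces the boundary data. */
        #pragma omp task depend(out: halo_out)
        for (int i = 0; i < N; i++) halo_out[i] = rank + i;

        /* Communication written as ordinary dependent tasks. */
        #pragma omp task depend(in: halo_out)
        MPI_Send(halo_out, N, MPI_DOUBLE, right, 0, MPI_COMM_WORLD);

        #pragma omp task depend(out: halo_in)
        MPI_Recv(halo_in, N, MPI_DOUBLE, left, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

        /* Consumer task runs once the halo has arrived. */
        #pragma omp task depend(in: halo_in)
        printf("rank %d received halo value %f\n", rank, halo_in[0]);
    }   /* the barrier at the end of single completes all tasks */

    MPI_Finalize();
    return 0;
}
```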
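The second sketch illustrates the shared-memory-window idea using standard MPI-3 calls (MPI_Comm_split_type and MPI_Win_allocate_shared) rather than the GASPI extension the project improved: ranks that share a node place their boundary data in one shared window, so a neighbour can read it with a plain load instead of a message. The window size and neighbour pattern are assumptions made for illustration only.

```c
/*
 * Hedged sketch of node-local shared memory windows with standard
 * MPI-3, shown here to illustrate the concept behind the GASPI
 * shared-window support developed by the project.
 */
#include <mpi.h>
#include <stdio.h>

#define N 1024   /* illustrative halo size */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Group the ranks that share a node. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    int node_rank, node_size;
    MPI_Comm_rank(node_comm, &node_rank);
    MPI_Comm_size(node_comm, &node_size);

    /* Each rank contributes N doubles to one node-wide shared window. */
    double *mine;
    MPI_Win win;
    MPI_Win_allocate_shared(N * sizeof(double), sizeof(double),
                            MPI_INFO_NULL, node_comm, &mine, &win);

    MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
    for (int i = 0; i < N; i++) mine[i] = node_rank;   /* fill own slice */

    /* Make the stores visible to the other ranks on the node. */
    MPI_Win_sync(win);
    MPI_Barrier(node_comm);
    MPI_Win_sync(win);

    /* Locate the neighbouring rank's slice of the shared window. */
    int right = (node_rank + 1) % node_size;
    MPI_Aint seg_size;
    int disp_unit;
    double *theirs;
    MPI_Win_shared_query(win, right, &seg_size, &disp_unit, &theirs);

    /* Direct load from the neighbour's boundary: no send/recv needed. */
    printf("node rank %d reads %f from rank %d\n",
           node_rank, theirs[0], right);

    MPI_Win_unlock_all(win);
    MPI_Win_free(&win);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}
```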

INTERTWinE has made important advances in solving interoperability problems between parallel programming APIs, and hence also towards the ultimate goal of providing practical but performant ways of programming the upcoming generation of very highly parallel machines. Although the project itself is now over, many of the ideas and implementations developed are being taken up by other projects or being developed further by the project partners.
INTERTWinE has made a difference to the supercomputing landscape in Europe and beyond, helping to forge a path towards the first Exascale supercomputers.

The project team is embedded in the most important standards bodies and runtime development teams associated with parallel programming. We have a leadership role in the MPI Forum Sessions working group, are energising the OpenMP Interoperability Subcommittee, and are collaborating with the GASPI Forum on effective data sharing between distributed-memory models. However, our impact extends beyond standards bodies to the wider Exascale research community, including the European PRACE organisation and the US Exascale Computing Project (ECP). The project team has provided guidance and input for the ETP4HPC Strategic Research Agenda, which informs policy makers and funding bodies, helping to focus investment on removing critical obstacles on the path to better computational science.

We have established a comprehensive body of advanced training material that has been delivered at key HPC events, conferences and workshops across Europe to current and future generations of scientific software developers in both industry and academia. This material has been distilled into five Resource Packs, which are publicly available on the Developer Hub section of the INTERTWinE website. Each Resource Pack includes an introduction to, and motivation for, a given API combination and its relevance to both academia and industry; a Best Practice Guide; code examples and tutorials; and applications and kernels with short guides and links to their public repositories.

We have endeavoured to redress the under-representation of women in HPC, raising awareness across the supercomputing community of the importance of embracing and promoting the benefits of diversity in the European workplace, and ensuring that our outputs follow best practice in eliminating bias and stereotyping.