
Applying Pilot Models for Safer Aircraft

Final Report Summary - A-PIMOD (Applying Pilot Models for Safer Aircraft)

Executive Summary:
The air accident and flight safety literature reports many still-open issues in relation to automation design, for example: Air France Flight 447 (2009), Spanair Flight 5022 (2008), Helios Airways Flight HCY 522 (2005), China Airlines Flight 140 (1994), and Air Inter Flight 148 (1992). Critically, several human factors problems have been documented, including automation surprises, degraded situation awareness, inattentional blindness, workload concerns, and issues pertaining to over-reliance on automation.

Today’s automation is indifferent to the emotional and cognitive state of the crew. Automation only supports the crew based on explicit and static task assignments, with no adaptive capabilities. However, it is necessary that human operators and automated systems act together, cooperatively, in a highly adaptive way. They have to adapt to each other and to the context in order to guarantee fluent and cooperative task achievement while maintaining safety at all times.

The A-PiMod project developed a new architecture for an adaptive cockpit which will reduce human error and make substantial progress in relation to Europe’s strategic vision of reducing the accident rate by 80%. Adaptiveness is the key concept and the main argument for better human-automation cooperation within this project. The developed A-PiMod architecture is adaptive in three complementary ways: adaptive mission monitoring and completion, adaptive automation and adaptive crew-automation interaction.

A Multimodal Navigation Display was developed that allows adapting the crew-automation interaction, as it supports different modalities for modifying the mission of the aircraft. Pilots can use speech, touch, a cursor control device, or a keyboard. Speech and gesture recognition software were developed as part of A-PiMod.

To support the adaptation of the mission, software modules were developed to assess the risk of the current flight plan, to detect whether a modification of the flight plan is necessary to ensure safety, and to generate possible alternatives. The alternatives can be detailed manually, using the Multimodal Navigation Display, or automatically.

To allow for an adaptation of the automation, software modules were developed to detect tasks that have to be executed, to generate possible task distributions, and to assess these in terms of error propensity. Finally, these modules propose one task distribution to the pilots. Important input for these modules is provided by the Crew State Inference, which infers the workload, intentions and situation awareness of the crew based on observations of the pilots’ behaviour. A dedicated display was developed to allow for crew-automation cooperation regarding the adaptation of the mission and the task distribution.

As a spin-off, a Training Tool was developed to improve the training of flight crews. The Training Tool shows information about the monitoring behaviour of pilots and offers the possibility for instructor ratings.

Huge effort was invested in validation activities during the project. Overall, 25 validation sessions were conducted with the Community of Practice, providing ongoing and quick feedback during the project. Two validation cycles with 13 pilots overall were conducted to allow for a detailed evaluation of the architecture and the developed components. Furthermore, an overall assessment of the safety impact was conducted. Overall, the results were positive: pilots considered the architecture and tools helpful and expected that they would help to reduce and mitigate errors. The assessment of the safety impact indicates a 43% reduction in the accident rate by applying the A-PiMod approach.

Project Context and Objectives:
Motivation

The improvement of aircraft safety is one of the fundamental goals which must be addressed to cope with the expected increase in air transportation in the future. About 1.8 times more aircraft movements are expected in 2030 compared to 2009, corresponding to a predicted 16.9 million flight movements per year under instrument flight rules (IFR) in 2030.

However, still today, most would agree that 60-80% of aviation accidents are, at least in part, due to human error. The "Flightpath 2050, Europe’s Vision for Aviation" states that, by then, "the occurrence and impact of human error is significantly reduced through new designs and training processes and through technologies that support decision making". If this vision is not reached, and if the accident rate cannot be significantly reduced, experts expect a serious aircraft accident once a week.

Based on post-hoc analysis of accidents such as China Airlines Flight 140, which crashed in Nagoya in 1994, and Air France Flight 447, which crashed in the Atlantic Ocean in 2009, it was argued in the A-PiMod proposal that current human-automation cockpit design lacks a cooperative and highly adaptive human-automation interaction. Therefore, the objective of the A-PiMod project was to provide means of improving the safety of a flight by designing a novel, highly adaptive and cooperative cockpit architecture.

The goal of A-PiMod was to develop such a novel approach through a hybrid of multimodal pilot-crew interaction, crew modelling and real-time risk assessment, integrating a new training approach as a spin-off. Adaptiveness is the key concept and the main argument for better human-automation cooperation within this project. The A-PiMod architecture is adaptive in three complementary ways: adaptive mission monitoring and completion, adaptive automation, and adaptive crew-automation interaction.

The A-PiMod concept will contribute to improved human centred design of cockpits and enable automation to adapt to the crew so that it can be considered as a member of the cockpit team. The ultimate goal is to decrease the number of accidents by reducing human errors in the cockpit and mitigating their consequences.

Objectives

The main objective of A-PiMod was to develop a new adaptive cockpit architecture that accounts, in real time, for the crew’s dynamic and context-dependent behaviour. It should utilise specifically designed and developed crew models, which are able to infer the mental and physical state of the pilots from their manifest behaviour and to calculate/predict possible future crew activities. The crew models were planned to be connected with a dynamic risk assessment of possible outcomes of the human-machine interaction. The risk assessment was intended to be based on a real-time model of what the aircraft as a whole (crew and automation) has to do at all times.

This should enable determining dynamically whether the current task distribution between the crew and automation is suitable, addressing warnings to the crew when they are not performing correctly, and taking control (e.g. upset recovery) in case of an absence of reaction. A further objective was to develop an appropriate interface to suggest to the pilots possible actions and management activities aimed at ensuring safety of flight and effective performance.

Further, an objective of A-PiMod was to develop an innovative training tool for flight crew training based on the A-PiMod crew model. The tool assists flight simulator instructors in improving the training of Crew Resource Management and other competencies required on today’s and tomorrow’s flight decks.

Finally, the effectiveness of the approach was to be validated and the overall safety impact of the proposed approach assessed. Moreover, A-PiMod envisaged training pilots to utilise this new type of adaptive system and interfaces.

Project Results:
1 A-PiMod Results – Overview
The A-PiMod project developed a new architecture for an adaptive cockpit which will reduce human error and make substantial progress in relation to Europe’s strategic vision of reducing the accident rate by 80%.
The A-PiMod project developed a novel framework for human-automation cooperation, referred to as the A-PiMod Architecture. This architecture offers a general framework for adaptive crew-automation performance in future cockpit design. Adaptiveness is the key concept and the main argument for better human-automation cooperation within this project. The developed A-PiMod architecture is adaptive in three complementary ways: adaptive mission monitoring and completion, adaptive automation, and adaptive crew-automation interaction. The architecture was developed in work package 1. More details about the architecture are provided in section 2.
During the A-PiMod Project, the modules foreseen in the Architecture were implemented and validated. When combined, these modules allow for an adaptation of the mission and of automation. The development of these modules was the task of work package 2. They are described in section 3.
Additionally, technologies and interfaces for multimodal interaction were developed. These allow for an adaptation of the crew-automation interaction. These were mainly developed in work package 3. However, some work related to the interfaces for the modules was also done in work package 2. The technologies and interfaces for multimodal interaction developed in A-PiMod are described in section 4.
As a spin-off, a Training Tool was developed. This is software used by an instructor during simulator flights; it utilizes the output of other A-PiMod modules. The Training Tool was developed in work package 4. Section 5 provides more details about this tool.
All these technologies and tools were validated within A-PiMod with various validation activities. Details and results of these activities are given in section 6.

2 A-PiMod Architecture
The whole A-PiMod architecture is based on a 3-layer hierarchy of tasks (see Figure 1). The highest level is the Mission Level. The overall mission of the aircraft (e.g. flying from A to B) is considered a task at mission level. The middle level is the Cockpit Level. The mission can be seen as a series of steps, and for each of them there are specific things to achieve; these are the cockpit level tasks. The lowest level is the Agent Level, where tasks are executed by the agents in the cockpit: the crew and automation.
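As an illustration, this task hierarchy can be represented as a simple recursive data structure. The sketch below is purely illustrative: the three levels come from the text, while the concrete tasks and their decomposition are invented examples.

```python
# Illustrative sketch of the 3-layer task hierarchy; the example tasks are
# invented, not taken from the actual A-PiMod task base.
from dataclasses import dataclass, field
from enum import Enum


class Level(Enum):
    MISSION = 1   # the overall mission, e.g. flying from A to B
    COCKPIT = 2   # what the cockpit as a whole has to achieve
    AGENT = 3     # tasks executed by the agents: crew members or automation


@dataclass
class Task:
    name: str
    level: Level
    subtasks: list["Task"] = field(default_factory=list)


mission = Task("Fly from A to B", Level.MISSION, [
    Task("Initiate go-around", Level.COCKPIT, [
        Task("Set thrust lever to TO/GA", Level.AGENT),
        Task("Retract landing gear", Level.AGENT),
    ]),
])
```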
The A-PiMod architecture introduces the components that are required for an adaptive cockpit as envisaged by A-PiMod. All these components are expected to be joint systems of crew and automation. The word module will be used to refer to the software behind the automation.
The A-PiMod architecture is based on 8 components and 2 separate software modules. A peculiarity of the A-PiMod architecture, inherited from the general architecture, is the inherently cooperative nature of the components: the components are not only software. In the A-PiMod architecture, a component is made of a software module and of the human crew; each component is thus a small cooperative system in itself. These components are accompanied by two purely software modules, which realize the interaction between the human crew and the automation. The A-PiMod architecture is shown in Figure 2.
Each component has a specified function and contributes to the overall goal of the A-PiMod project:
1) The Situation Determination at Mission Level (SD@ML) component is in charge of determining the current state of the mission and providing the context in which it is executed. This includes the progress on the Flight Plan (F-PLN) in terms of mission phase and sub-phase, the state of the A/C and its systems and the environment in which the A/C operates (e.g. weather, runway availability at destination airport, and ATC).

2) The Risk Assessment at Mission Level (RA@ML) component is in charge of determining the risk of not being able to achieve the mission as intended (e.g. can the current F-PLN be flown safely to the destination?).

3) The Situation Modification at Mission Level (SM@ML) component is in charge of reducing the risk associated with the current situation to an acceptable level, if an unacceptable risk is detected. For example, this will entail solving any threatening issue with the A/C systems (e.g. engine fire) or modifying the F-PLN, e.g. to avoid bad weather or choose an alternate destination. This component is therefore inactive when the risk level is acceptable.

4) The Task Determination at Cockpit Level (TDet@CL) component is responsible for the determination of cockpit tasks, i.e. the tasks that the cockpit as a whole has to do in a given situation. This includes all F-PLN execution tasks, all F-PLN monitoring and adaptation tasks and all task distribution tasks (e.g. choosing which cockpit agent has to do what).

5) The Situation Determination at Cockpit Level (SD@CL) component assesses the state of the cockpit, taking into account the state of the agents, pilots and automation, in terms of availability and current capabilities (e.g. crew is fatigued).

6) The Task Distribution at Cockpit Level (TDis@CL) component has two duties in the A-PiMod architecture. First, the component produces possible distributions of the tasks between the crew and the automation, based on an initial filtering of who is authorized to do which tasks. Second, the component proposes the task distribution that is associated with the lowest risk.

7) The Risk Assessment at the Cockpit Level (RA@CL) component assesses the risk associated with the possible task distributions. This risk assessment is based on the crew state and automation functioning.

8) The Crew State Inference (CSI) component permanently monitors the crew in order to infer their intentions, situation awareness, and taskload.

a) The Multimodal HMMI component is the interface between the virtual and the human pilots. It may consist of auditory, gesture and traditional (e.g. visual) interfaces.

b) The Interaction Manager component determines for the virtual pilot which interaction modality best suits a given situation.

The components above are the components of the cooperative system that achieves the mission. The components work together and in total harmony. Each component itself (except No. 8 above) is thus made of the crew and a dedicated software module that work with the crew to perform the tasks assigned to the component (and in most collaboration schemes, one could say that the module assists the crew to perform these tasks). The crew superposes the functions of each component it is involved in.
The implemented hardware and software instance of the A-PiMod architecture is shown in Figure 3. There are some minor differences between the conceptual architecture and the implemented software. One difference is that Situation Determination and Situation Modification are combined into one module; this was done as the output of both has a similar structure and the communication effort between modules could be reduced. Further, the Situation Determination at Cockpit Level was not implemented. This module collects information about the availability of automation and information about the crew provided by the Crew State Inference; as it was decided that the A-PiMod scenarios would not consider automation failures, the module provided no additional functionality and was not necessary in these scenarios. A further difference is that the implementation differentiates between the Multimodal Navigation Display, which provides a new user interface for an already existing system, and the Mission Level and Cockpit Level Management Display (ML/CL Management Display, short: MCD), which provides a user interface for the newly developed systems. A Multimodal Interaction Manager has been implemented which manages the interaction with the Navigation Display. Additionally, the connection of the Training Tool developed as a spin-off is shown in Figure 3. The various modules are explained in the following.
All software modules exchange data via the Data Pool application. The Data Pool can be considered a switch realizing the transfer of data between the modules. Data is encoded as messages on the basis of the JavaScript Object Notation (JSON) format. JSON is an open standard format that uses human-readable text to transmit data objects consisting of attribute-value pairs.
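To make this exchange concrete, the sketch below shows how a module might serialize and publish such a message. The field names, host and port are assumptions made for illustration; the report does not specify the actual A-PiMod wire format.

```python
# Hypothetical sketch of a module publishing a JSON message to the Data Pool.
import json
import socket


def publish(message: dict, host: str = "localhost", port: int = 9000) -> None:
    """Serialize a message as JSON and hand it to the Data Pool, which
    forwards it to the other modules (host/port are assumed values)."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(json.dumps(message).encode("utf-8"))


aircraft_state = {
    "sender": "SDM@ML",
    "type": "AircraftState",
    "payload": {"flightPhase": "GO_AROUND", "altitude_ft": 2300, "ias_kt": 152},
}
# publish(aircraft_state)  # requires a running Data Pool endpoint
```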

3 Mission Level and Cockpit Level Modules

3.1 Situation Determination and Modification at Mission Level (SDM@ML)
The SDM@ML module is responsible for providing the flight situation at mission level and consists of two parts. One part is integrated as a plug-in into the simulator X-Plane; it is called the SIM-IO and provides all required data already available in the simulator, including the progress on the F-PLN, the state of the A/C and its systems, and the environment in which the A/C operates. The other part, the Flight Phase Detection (FPD), detects the current state of the mission (mission phase and sub-phase) and alternative missions.
The SIM-IO part of the SDM@ML module provides data available in the simulator to the other modules by sending them as JSON messages to the Data Pool. It sends three messages: the first provides data about the aircraft state, the flight control unit (FCU), and the primary flight display (PFD); the second provides flight plan, trajectory, and progress data; the third contains the status of the push-to-command button.
The current flight phase and mission alternatives are determined by the Flight Phase Detection part of the Situation Determination component. It is implemented as an expert system based on CLIPS, an open-source framework for developing rule-based expert systems. The determination of the flight phase from the actual flight parameters is robust to deviations from the standard parameters of a flight phase. For example, the SDM@ML module is able to determine flight phases even if the aircraft is not correctly configured for them: the module will detect the flight phase go-around even if the thrust lever is not set to TO/GA. This allows generating warnings to inform the pilots about the mismatch.
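The fragment below re-creates, purely for illustration, the kind of rule the CLIPS-based detection applies; the parameter names and thresholds are invented, and the real system encodes such rules in CLIPS rather than Python.

```python
# Invented example of a configuration-independent flight phase rule.
def detect_go_around(state: dict) -> tuple[bool, list[str]]:
    """Detect a go-around from the trajectory alone and collect warnings
    if the aircraft is not configured as expected for that phase."""
    go_around = (
        state["phase_before"] == "APPROACH"
        and state["vertical_speed_fpm"] > 500   # climbing again
        and not state["on_ground"]
    )
    warnings = []
    if go_around and state["thrust_lever"] != "TO/GA":
        # The phase is derived from flight parameters, not configuration,
        # so the mismatch can be reported to the pilots.
        warnings.append("Go-around detected, but thrust lever not set to TO/GA")
    return go_around, warnings
```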
Alternative missions are added for the specific flight phase detected by the expert system; these are internally stored in a look-up table. Furthermore, the FPD receives information about which alternative a risk assessment was requested for by the pilots on the MCD. Whether risk information is to be calculated by the risk assessment is also included in the output of the FPD. All this information is combined and sent to the Data Pool.

3.2 Risk Assessment at Mission Level (RA@ML)
The general objective of the risk assessment components, both at mission and at cockpit level, is to produce a risk evaluation of the current situation, considering crew states, system availabilities and contextual dependencies.
In particular, the RA@ML module’s main purpose is the estimation of risks jeopardizing the achievement of the mission in the current and possibly forthcoming situations. Its output is a list of hazards and associated risk values for each analysed mission profile.
In aviation, classical risk assessment is a static approach: each organisation identifies the most significant hazards in its working context (safety issues) and then calculates the associated risk values using classical approaches and methodologies based on the suggestions and guidelines provided by authorities such as EASA and ICAO. A completely dynamic approach is still very far from reality and very difficult to implement. In the A-PiMod project, we decided to work towards this goal in steps, starting with the development of a quasi-static (or quasi-dynamic) approach in the RA@ML module.
Each module needs rules to run on. For the RA@ML module, the company defines these rules and specifies them in the Look-Up Table. This means that, as in classical risk assessments, the quality of the data and the variety of incident sequences in A-PiMod strictly depend on the expertise and analytical strength of the company’s Safety Management System (SMS).
On the basis of the ICAO (International Civil Aviation Organization) definitions of the elements building aviation risk assessment, the methodology underlying the RA@ML module pursues a purely prospective approach. It is a prospective estimate of possible evolutionary processes, or "occurrences", which originate from a hazard. It makes it possible to identify hazards that may be encountered during the mission and to assess the associated levels of risk.
This approach combines a qualitative analysis of the possible evolutions with a quantitative assessment of the likelihood of occurrence and of the severity of the consequences (e.g. in terms of damage to people, environment, systems or facilities). The approach is called RAMCOP, which stands for Risk Assessment for Managing Company Operational Processes; the RA@ML module focuses on Step 2 of the approach, Qualitative Event Assessment and Quantification (including assessing risk and severity).
For each specific hazard, all possible incidental evolutions are developed and the associated consequences are identified. The existing control measures (i.e. barriers) that contribute to reducing either the severity of consequences or the probability of occurrence are detected. The probability of each entire incident sequence is calculated by taking into consideration both the basic probabilities of hazards and the effects of barriers on them. Some barriers can prevent the hazard itself from being activated, while others limit the escalation of the incident sequence, once generated. The effectiveness of barriers depends on organisational, environmental and, in general, external conditions, and this can be translated into a reducing factor on probability. Conversely, boundary conditions might trigger the hazard itself, thereby augmenting its basic probability of occurrence.
The same analysis process is also applied with regard to consequence severities.
The overall risk can therefore be determined by combining the severity and probability associated with each sequence, thus identifying the relative risk index. The tool commonly used for this purpose is called a Risk Matrix. KITE Solutions has developed a specific risk matrix for RA@ML purposes, on the basis of the one proposed by ICAO in its Safety Management Manual (SMM) Doc. 9859, 3rd Edition. The A-PiMod risk matrix, shown in Figure 4, has a five-colour code, and a number has been assigned to each cell in order to rate them. This is particularly useful for the risk assessment of cells falling within the same colour code (i.e. dark green, light green, yellow, orange or red).
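A minimal sketch of such a matrix lookup is given below; the cell ratings and colour bands are invented stand-ins for the actual values of the A-PiMod risk matrix in Figure 4.

```python
# Hedged sketch of a five-colour risk matrix lookup; all numbers invented.
SEVERITY = {"negligible": 1, "minor": 2, "major": 3, "hazardous": 4,
            "catastrophic": 5}
PROBABILITY = {"extremely improbable": 1, "improbable": 2, "remote": 3,
               "occasional": 4, "frequent": 5}


def risk_index(probability: str, severity: str) -> int:
    # Each cell carries a number so cells within one colour can be ranked.
    return PROBABILITY[probability] * SEVERITY[severity]


def risk_colour(index: int) -> str:
    for threshold, colour in [(4, "dark green"), (8, "light green"),
                              (12, "yellow"), (16, "orange")]:
        if index <= threshold:
            return colour
    return "red"


print(risk_colour(risk_index("remote", "hazardous")))  # -> "yellow"
```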
The RA@ML module is composed of two separate tools, named Look-Up Table Component and Risk Component. The first one, installed “at home” by the airlines, allows them to precompile all data needed for the mission evaluation later performed in real-time in the cockpit. The second one, instead, is the actual module installed in cockpit and provides the real-time calculation of the risk, based on the current mission and the possible forthcoming hazards that the crew could meet during the flight.
The common element linking the two components is the underlying database, which is fed by the company through the Look-Up Table Component and read by the Risk Component to obtain the basic structure and values for real-time risk calculation. The module will recognize and assess only known hazards and incident sequences, which derive from the expertise and risk knowledge developed within the company. Through the Look-Up Table Component, airlines are given the chance to design risk analyses tailored to their experience and their safety know-how.
The recognition is based on the inputs the Risk Component receives, such as factors describing the current surrounding conditions, data about the actual crew status, and the flight phase. These factors are intended to cover as many operative characteristics as possible, descriptive of different aspects (e.g. weather, but also involved A/C systems and human factors). The identification of these factors and of their weight in different circumstances arises from expert judgment and from the results of retrospective analyses elaborated by the company safety officer(s) responsible for filling the Look-Up Table Component "at home". These factors also describe the incident sequence, i.e. a story starting from a specific hazard and ending in a specific consequence. In fact, one hazard could end in different consequences, or it could end in the same consequence through a different sequence of events; this makes each "story" unique. No intervention on the database is required in flight by the crew, given that its update becomes a task of company software maintenance.
With reference to the background concepts of aviation risk expressed in the previous paragraphs, the "static part" of the RA@ML approach is represented by an architecture based on the "recognition" of situations reproduced in the Look-Up Table. The dynamicity of the approach, on the other hand, is given by the fact that the recognition of the situation, and then the calculation of the risk, are based on factors the module dynamically receives as inputs from other A-PiMod modules.
The output of the module is a list of hazards and associated risk values, expressed in terms of colours and numbers and arranged from the highest risk value to the lowest.
Following the same approach and sequence described above, the tool also allows for the simultaneous evaluation of more than one mission profile, i.e. the current mission and the alternative missions. These are evaluated on temporary flight plans designed by the pilots.

3.3 Task Determination at Cockpit Level (TDet@CL)
The TDet@CL module is responsible for the automatic determination of pertinent cockpit level tasks for a given flight situation. The pertinent cockpit level tasks are the tasks that have to be achieved by the cockpit as a whole. This means they are executed by the human crew or by the automation or a combination of both.
We apply a rule satisfaction approach to determine the pertinent cockpit level tasks. Rules are pre-defined, each containing an IF-statement and a THEN-statement. The module permanently checks the state of the surrounding environment against the set of rules to identify which tasks have to be performed at the cockpit level. The cockpit level includes the human pilots and the automation.
The TDet@CL module consists of a rule processor that processes a formally defined parameter base, a task base, and a rule base. The parameter base describes the parameters of the situation at the mission level, e.g. flight phase, mission level tasks, wind direction, and aircraft fuel level. The task base contains formally defined cockpit level tasks, e.g. retract landing gear and initiate go-around. The rule base defines the circumstances under which certain tasks are pertinent. The parameter base, task base, and rule base are encoded in specific documents in XML notation; the TDet@CL module imports these documents once when it is started. The input consists of messages about the flight situation, the aircraft state, the flight plan, and selected elements on the A-PiMod user interface, which are provided by the SD@ML component and the Mission Level-Cockpit Level Management Display via the Data Pool. The output of the TDet@CL module is a message containing a list of pertinent cockpit level tasks and their current urgency. As agreed by the project consortium, the data exchange format with the Data Pool is JSON.
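The sketch below illustrates the rule satisfaction mechanism; the two rules shown are simplified stand-ins for the XML-encoded rule base.

```python
# Simplified illustration of IF/THEN rule satisfaction for task determination.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Rule:
    condition: Callable[[dict], bool]   # the IF-part, tested on the parameters
    task: str                           # the THEN-part: a cockpit level task
    urgency: str


rules = [
    Rule(lambda p: p["flight_phase"] == "APPROACH" and p["gear"] == "UP",
         task="Extend landing gear", urgency="high"),
    Rule(lambda p: p["flight_phase"] == "GO_AROUND" and p["gear"] == "DOWN",
         task="Retract landing gear", urgency="medium"),
]


def pertinent_tasks(parameters: dict) -> list[dict]:
    """Check the situation parameters against every rule and return the
    cockpit level tasks that are currently pertinent, with their urgency."""
    return [{"task": r.task, "urgency": r.urgency}
            for r in rules if r.condition(parameters)]


print(pertinent_tasks({"flight_phase": "APPROACH", "gear": "UP"}))
# -> [{'task': 'Extend landing gear', 'urgency': 'high'}]
```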

3.4 Crew State Inference (CSI)
The CSI module is responsible for the automatic inference of the cognitive state of the pilot crew. The cognitive state is a composition of different cognitive aspects. We decompose the cognitive state of each pilot (PF, PM) in the cockpit into the four target measures:
- Intentions (What tasks does the pilot intend to perform?)
- Monitoring behaviour adequacy (Does the pilot monitor the relevant displays?)
- Situation awareness (Do the tasks that the pilot intends to perform correspond to the tasks she/he is responsible for?), and
- Taskload (Is the pilot over- or underloaded by the tasks she/he is responsible for?).
In the following, we provide detailed information on the background and our approach for each of these target measures.

3.4.1 Taskload Assessment
The taskload of a pilot is described by a multi-resource model similar to the one described by Wickens (1984). The dimensions of our taskload model are Visual, Auditory, Cognitive, and Psychomotor.
According to this model, the cognitive and physical capacities of pilots in the different dimensions are limited. The execution of tasks consumes some of these capacities; the sum of the consumed capacities of a dimension is that dimension’s taskload. The cockpit level tasks were identified and described during a task analysis, the results of which can be found in D2.2. Due to changing scenarios during the project, several additional tasks were identified. The description of every identified task is stored in the task base document. The task description contains, among other elements, information about the taskload a task induces in each of the dimensions; these taskload values were collected by interviewing several domain experts. To estimate the current taskload of a pilot, the currently assigned tasks are taken into account: the taskload values stored in the task base document are summed over the pilot’s currently assigned tasks, for each dimension separately. These aggregated taskload scores reflect the actual taskload of a pilot in the different dimensions.
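A minimal sketch of this per-dimension summation follows; the demand values are invented placeholders for the expert ratings stored in the task base document.

```python
# Sketch of taskload aggregation over the four resource dimensions.
DIMENSIONS = ("visual", "auditory", "cognitive", "psychomotor")

# Invented demand ratings; the real values come from domain expert interviews.
task_base = {
    "Monitor PFD":   {"visual": 3, "auditory": 0, "cognitive": 2, "psychomotor": 0},
    "Radio contact": {"visual": 0, "auditory": 3, "cognitive": 2, "psychomotor": 1},
}


def taskload(assigned_tasks: list[str]) -> dict[str, int]:
    """Sum the demand of all tasks assigned to one pilot, separately for
    each dimension; the result is the pilot's taskload per dimension."""
    return {dim: sum(task_base[t][dim] for t in assigned_tasks)
            for dim in DIMENSIONS}


print(taskload(["Monitor PFD", "Radio contact"]))
# -> {'visual': 3, 'auditory': 3, 'cognitive': 4, 'psychomotor': 1}
```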
In order to determine whether the assigned tasks generate under- or overload, we consider defining under- and overload thresholds for each dimension in the future. These thresholds could vary between pilots, e.g. due to their experience levels. We envision integrating this concept and providing an online function for the pilots to modify these thresholds in upcoming versions of the demonstrator. So far, thresholds are not yet defined and not implemented.

3.4.2 Intention Inference
The intentions of the pilot refer to the goals he or she is trying to achieve. To accomplish a goal one usually has a plan. Plans can be more or less complex; complex plans can usually be separated into sub-plans. This way, complex plans become the goals of their sub-plans.
Tasks at cockpit level or agent level also serve the achievement of a goal. Complex tasks can be separated into sub-tasks and become the goals of their sub-tasks, too. A task of a pilot can thus be interpreted as equivalent to a plan or a goal. To actually execute a task, a pilot has to show specific behaviour consisting of certain actions. This means that for each task there exists a set of actions typical for it. Many of these actions can be observed, e.g. interactions with cockpit instruments or with the HMMI, while the corresponding flight tasks cannot be observed directly. An observable interaction can be associated with one or more tasks. A task can also be temporarily interrupted by other tasks and continued later. This means we observe sequences of interactions created by one or more tasks.
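As a simple illustration of this idea, the sketch below scores candidate tasks by how many of their typical actions appear among the observed interactions. Both the action-to-task associations and the scoring rule are assumptions; they are not the inference method actually implemented in the CSI module.

```python
# Invented example: ranking candidate intentions from observed interactions.
typical_actions = {
    "Initiate go-around": {"press TO/GA", "retract flaps one step", "gear up"},
    "Change flight level": {"turn ALT knob", "push ALT knob"},
}


def rank_intentions(observed: set[str]) -> list[tuple[str, float]]:
    """Score each task by the share of its typical actions that occur in
    the (possibly interleaved) observed interaction sequence."""
    scores = [(task, len(actions & observed) / len(actions))
              for task, actions in typical_actions.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)


print(rank_intentions({"press TO/GA", "gear up"}))
# -> [('Initiate go-around', 0.666...), ('Change flight level', 0.0)]
```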

3.4.3 Monitoring Behaviour Adequacy Assessment
In order to assess the adequacy of the pilots’ monitoring behaviour, we use an attention supply-demand model (ASDM) for assessing the demand for attention of atomic information elements and the overall monitoring behaviour adequacy. In the ASDM, the pilots are considered attention suppliers, and the information elements that can be attended to via the cockpit displays are considered attention demanders. The pilots supply visual attention to the displays in order to satisfy the demand for attention of the associated information elements. For example, by supplying visual attention to the Primary Flight Display, the pilots can satisfy the demand for attention of information elements such as the current speed. The demand for attention of an information element depends on its relevance in the context of the situation (pertinent tasks) and on its attendedness. The relevance level represents the relevance of an information element in the context of a given set of pertinent tasks at a specific point in time. The attendedness level derives from the ratio between the actual neglect time of an information element and its maximum neglect time. Monitoring behaviour is considered adequate if the demand for attention of information elements is low and inadequate if their demand for attention is high.
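A minimal sketch of this computation follows. How relevance and attendedness combine into a single demand-for-attention score is an assumption here, since the report does not give the exact formula.

```python
# Assumed combination of relevance and attendedness into one DFA score.
def demand_for_attention(relevance: float,
                         neglect_time_s: float,
                         max_neglect_time_s: float) -> float:
    """Demand for attention of one information element, in [0, 1].
    relevance: relevance of the element for the pertinent tasks.
    attendedness: ratio of actual to maximum tolerable neglect time."""
    attendedness = min(neglect_time_s / max_neglect_time_s, 1.0)
    return relevance * attendedness


# Current speed on the PFD, highly relevant during approach, not looked at
# for 12 of the tolerated 20 seconds:
print(demand_for_attention(relevance=0.9, neglect_time_s=12,
                           max_neglect_time_s=20))   # -> 0.54
```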

3.4.4. Situation Awareness Assessment
In the A-PiMod project, we focus on the assessment of the levels 1 and 2 of Situation Awareness as defined in Endsley's model. Level 1 SA is covered by the introduced approach for assessing the demand for attention of atomic information elements, and by the approach for assessing the pilot’s monitoring behaviour adequacy.
Level 2 SA is covered by a set-based approach adopted from Tversky’s (1977) ratio model similarity function. This function allows comparing entities represented as vectors of Boolean features. We apply this measure to assess the fit between what the pilots should do and what they are actually doing. What the pilots should do is determined by the current distribution of pertinent cockpit level tasks; this information is provided by the TDis@CL module. What the pilots are actually doing is driven by their intentions; this information is provided by the CSI module’s intention inference.
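Tversky’s ratio model itself is standard: sim(A, B) = |A∩B| / (|A∩B| + α|A\B| + β|B\A|). The sketch below applies it to task sets; the example sets and the weights α = β = 1 (which reduce the measure to the Jaccard index) are illustrative.

```python
# Tversky's (1977) ratio model on Boolean feature sets.
def tversky_similarity(a: set, b: set,
                       alpha: float = 1.0, beta: float = 1.0) -> float:
    common = len(a & b)
    return common / (common + alpha * len(a - b) + beta * len(b - a))


# Fit between what the pilot should do (from TDis@CL) and what the pilot
# appears to be doing (from the intention inference); the sets are invented.
should_do = {"Monitor PFD", "Extend landing gear", "Radio contact"}
doing = {"Monitor PFD", "Radio contact", "Adjust heading bug"}
print(tversky_similarity(should_do, doing))  # -> 0.5
```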

3.5 Task Distribution at Cockpit Level (TDis@CL)
The TDis@CL module is responsible for the selection of a suitable distribution of cockpit level tasks among the human crew and the automation. The Task Distribution at Cockpit Level proposes distributions of the pertinent cockpit level tasks among the individual agents (human crew members and automation). Thereby, restrictions of the tasks, preferences, taskload, and strengths and weaknesses of the agents are considered. Different distribution strategies have already been proposed (first attempts go back to the 1950s (Fitts, 1951)) and are commented on in deliverable D2.2. These strategies give some general ideas of how tasks could be distributed between the human crew and the automation, with some respect to the strengths and weaknesses of the agents. Our approach is to rely on cockpit level risk assessments that are associated with possible task distributions and to select the task distribution that is associated with the lowest calculated risk. The cockpit level risk is an evaluation of the risk associated with the current task distribution for achieving the cockpit level tasks assigned to the cockpit. TDis@CL looks for the task distribution that minimizes that risk. If an appropriate task distribution, with an acceptable risk level, cannot be found, then the current cockpit level tasks cannot be executed safely by the cockpit, which means that the current mission cannot be flown safely. This should mandate a mission modification (e.g. divert to an alternate airport), because the cockpit cannot cope with the demand.
The TDis@CL module works in close cooperation with the RA@CL module. Basically, the module produces for a given list of pertinent cockpit level tasks all possible task distributions between the available agents. The RA@CL module produces for each possible task distribution a list of risks and forwards this list to the TDis@CL module. The TDis@CL module checks the results of the risk assessment and selects that task distribution that is associated with the lowest calculated risk. This is done by the Task Distribution Selector.
The TDis@CL module also receives data from the Mission Level-Cockpit Level Management Display if a pilot modifies a task distribution manually. The modified task distribution is interpreted as a preference for the agent assignment of tasks. For each task from a manually edited task distribution, the Task Distribution Selector will respect this information for subsequent task distributions as long as the task stays pertinent. When a task is no longer pertinent, the agent assignment preferences are reset. If no preferences are available for a task, the Task Distribution module will first try to assign it to a human agent, under the condition that a human agent is able to fulfil the task. An exception is made if the criticality of a task exceeds a certain threshold: in that case the Task Distribution Selector intervenes by immediately choosing a task distribution where the concerned task is assigned to the automation.
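The selection logic can be sketched as follows; the criticality threshold, the data shapes and the helper names are illustrative assumptions.

```python
# Hedged sketch of the Task Distribution Selector's decision logic.
CRITICALITY_THRESHOLD = 0.8   # assumed value


def select_distribution(distributions: list[dict], risk_of,
                        preferences: dict, criticality: dict) -> dict:
    """distributions: candidate mappings task -> agent.
    risk_of: callable rating each distribution (stands in for RA@CL).
    preferences: task -> agent assignments manually chosen by the pilots.
    criticality: task -> criticality score."""
    # Respect the pilots' manual preferences while the tasks stay pertinent.
    candidates = [d for d in distributions
                  if all(d.get(t) == agent for t, agent in preferences.items())]
    # Tasks whose criticality exceeds the threshold must go to automation.
    candidates = [d for d in candidates
                  if all(d.get(t) == "automation"
                         for t, c in criticality.items()
                         if c > CRITICALITY_THRESHOLD)]
    # An empty candidate list would mean no acceptable distribution exists,
    # which should mandate a mission modification.
    return min(candidates, key=risk_of)
```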

3.6 Risk Assessment at Cockpit Level (RA@CL)
The primary scope of the RA@CL module is the calculation of risks threatening the successful completion of tasks at cockpit level. After receiving from the TDis@CL module all possible task distributions for the execution of the specific mission, the RA@CL module assesses them with a prospective and dynamic approach to extract the best one from a risk point of view (i.e. the one with the lowest probability of erroneous performance). The underlying methodology pursues a prospective approach, assessing the error propensity of not fulfilling a task distribution, combining qualitative and quantitative analyses, and considering organisational, technical, environmental and human factors. Moreover, the RA@CL is dynamic, since it is based on variables changing in time, evaluated on current conditions.

4 Multimodal Interaction

4.1 Mission and Cockpit Level Management Display (MCD)
The Mission and Cockpit Level Management Display (MCD) is a central component of the A-PiMod project, as it provides access to several functions of the investigated team-centred concepts for pilot-automation interaction. The MCD runs on a Microsoft Surface tablet. The MCD consists of a dedicated mission level view (top) and a cockpit level view (bottom). The display is shown in Figure 5.
In the mission level view, pilots can monitor the mission status in terms of risks and activate alternative missions if the risk for a mission turns out to be too high. In the upper row, the MCD provides information about the departure airport, the current flight phase and the destination, followed by overall risk information. Below that, the display provides information about the risks of the current mission and, when requested, of one alternative mission. The risk of the current mission is shown on the left, the possible alternatives in the middle, and the risk of the alternative on the right. In the current implementation, five different risk states are possible, color-coded from green (non-critical) to red (critical).
In the cockpit level view, the MCD supports the management of tasks associated with the current mission. This includes, e.g. an overview of the pertinent tasks and the distribution of these tasks between the pilots and the automation. Tasks assigned to the crew are shown on the left; tasks assigned to automation are shown on the right. When a task is selected, a button appears to move it to the other agent. Above the list of tasks, the risk associated with this task distribution is given. On the left, the interface provides buttons to control which tasks are shown (all tasks, only non-obvious tasks, only critical tasks). At the bottom, the interface provides buttons to accept or reject the modifications made to the task distribution. If changes to the task distribution are made, the Task Distribution component considers them for subsequent calculations of possible task distributions.
The MCD implements two strategies to augment a pilot’s monitoring performance during flight. On the one hand, a notification is issued by the MCD to prompt an inattentive pilot to check certain information sources if their demand-for-attention (DFA) scores exceed a defined threshold. In the current implementation, five different escalation states are possible. Similar to the risk states in the mission level view, the escalation states are color-coded from green (non-critical) to red (critical); the displayed colour represents the escalation state of the information source with the currently highest DFA score. On the other hand, a pilot can always enter a view showing the current DFA scores for each information source (see Figure 6). The view can be entered by pressing the button labelled ">" next to the task labelled "Monitor". The feedback view allows the pilot to get feedback on demand, e.g. in situations where there is less to do in the cockpit. The view shows a bar for each information source; the length and colour of a bar represent the DFA score of the associated information source.

4.2 Multimodal Interaction Manager
The Multimodal Interaction Manager realizes an interaction loop between the pilot and the aircraft systems. It interprets the meaning of multimodal events, plans adaptation, processes multimodal output strategies, and then executes the output through multiple output modalities. It consists of the Multimodal Data Fusion, the Multimodal Input Interpreter, and the Multimodal Output Processor. As input data sources, the Cursor Control Device (CCD), the Multifunctional Keyboard (MKB), speech and the object under the cursor were used. As multimodal output, responses in the navigation display have been implemented.

4.2.1 Multimodal data fusion
The Multimodal Data Fusion takes the separate data input from each input modality channel. It realizes low-level data/event fusion: time alignment of all input channels, recognizing events in each input channel, and identifying groups of events in different input channels that possibly form a multimodal command (e.g. events in different channels that happened at approximately the same time). The module outputs possible multimodal commands/actions to the interpreter, which interprets their meaning.

4.2.2 Multimodal input interpreter
The Multimodal Input Interpreter has two inputs: groups of fused multimodal events, and the knowledge in the HMI model, i.e. the updated status of all HMI components in the system (such as the object currently under the cursor in the navigation display, or the current state of the navigation display in general).
It interprets the meaning of fused multimodal events; for example, the speech input "direct to" and a touch input on a WPT form the command "direct to WPT". The module can be thought of as an expert system that produces responses based on a set of multimodal input rules. These rules define the possible meaningful combinations of multimodal events; if a group of multimodal events doesn’t match any meaningful combination, it is flagged as an input error. The output of this module is the pilot’s intent, interpreted as a command created from the multimodal inputs, for example to divert to a certain waypoint.
A further module plans the adaptation strategy based on predefined rules, such as crew state adaptation rules and task distribution adaptation rules. It is a standalone module developed by another partner and is not connected to the Final Prototype. In the future, all adaptation rules should be provided by the responsible partners. For example, if the crew’s cognitive load is high, high-priority information needs to be output to a more salient modality (e.g. speech instead of display). Task distribution rules can shift certain tasks from crew to automation, and the output then needs to be adapted accordingly. The output of this module is the adaptation strategy.

4.2.3 Multimodal output processor
The Multimodal Output Processor handles the following inputs: the command created from the pilot’s intent, and the HMI model, i.e. the current status of all HMI components in the system.
Given the pilot’s intent, this module determines the output strategy based on predefined multimodal output rules. Multimodal output rules define the multimodal output strategies in a default standard situation; the output is created solely based on these rules.

4.3 Multimodal Navigation Display
The A-PiMod Multimodal Navigation Display (MM ND) is an early, low-maturity (TRL3) interactive software prototype of a multifunctional ND, with a Human-Machine Interface (HMI) and philosophy based on the Honeywell Interactive Navigation (INAV) legacy. The prototype has been developed on an existing multifunctional ND prototyping platform that has a traditional HMI controlled by the Cursor Control Device (CCD) and the Multifunctional Keyboard (MKB), extended with support for speech and touch interaction.

4.3.1 Navigation display screen layout
The navigation display screen layout contains the following elements: lateral map, CCD position, range (zoom) ring with range value label, ownship, flight plan, and a pull-down menu that contains the main map features and gives access to sub-menus for customizing the ND contents. The ND tool bar is permanently displayed at the top of the display.

4.3.2 Functions
The A-PiMod MM ND has the following functionality for lateral map manipulation: moving map (MOVE), map rotation / display mode change (Heading-Up, North-Up, flexible 360°), map view change (Full, Vertical Situation Display (VSD)), centering the map (CENTER) on the cursor position or on an object (Ownship, WPT, VOR, Airport), zooming the map (ZOOM), map range change (zoom-in, zoom-out), displaying lateral map layers (Terrain, Airport, VORs, NDBs, Intersections, Hi Airways, Lo Airways, Term Airspace, Special Use, Traffic, VSA, Missed Approach, Obstacles, Cities, Roads, Minor Roads, Railroads) and displaying Weather (WX) radar.
The MM ND has the following functionality for Graphical Flight Planning (GFP): graphic representation of the flight plan, cross WPT (CROSS), hold over WPT (HOLD), direct to WPT (DIRECT TO), amend route from WPT (AMEND), delete WPT (DELETE), apply proposed GFP changes (APPLY) and activate the temporary flight plan (ACTIVATE).

4.3.3 Modalities
The prototype supports several input modalities, namely Multifunction Keyboard (MKB), Cursor Control Device (CCD), touch and speech. In general, the individual modalities support the following interaction patterns:
Cursor control device (CCD): pointing and selection (point-and-click paradigm), zooming the lateral map, moving map (with MKB), centering map, menu interaction, command invocation
Multifunction keyboard (MKB): entering values, switching modes of interaction, e.g. CCD mode for pointing and moving
Touch: Selection (tap), command invocation (tap), zooming the map (pinch-to-zoom), centering map, map rotation (rotate), moving map (pan), menu interaction
Speech: Entering values, selection (e.g. options), command invocation (when particularly supported), zooming the map, centering map, rotating the map (either north up or aircraft’s heading up) and menu interaction
Speech interaction is triggered via the push-to-command button, and the system provides its response at the bottom of the ND in white colour, as feedback to the user about the executed action, or an error when the command is not recognized. Speech can also be used cross-modally (as described below) for commands which require an object specification (e.g. a WPT). In this case, the WPT may be selected via the CCD.

4.3.4 Multimodal and cross-modal interaction
The concept behind the prototype is inherently multimodal: the user is allowed to use any supported modality that is available in a given situation. Furthermore, the user is allowed to switch between modalities within a function if the function itself requires more than one step to be executed. In this case, each step may be conducted via any supported modality.
Besides the MMI, the Final Prototype also supports cross-modal interaction. In cross-modal interaction, modalities can be combined within a single utterance, in contrast to MMI, where modalities can be switched between individual steps. The current prototype supports cross-modal interaction in the command-object use case: the command and the object may be specified via different modalities. For instance, for the "DIRECT TO KPHX" utterance, DIRECT may be specified via speech or the command line, and KPHX may be selected via touch or the CCD. However, there must be a temporal linkage between the command and the object specification: the object may be specified before (any time after the previous utterance), during, or after the command specification, but no later than 10 seconds after it.
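The command-object linkage can be sketched as a small fusion function. The event structure below is an assumption; the 10-second window comes from the text.

```python
# Sketch of cross-modal command-object fusion with a temporal linkage window.
from typing import Optional

LINKAGE_WINDOW_S = 10.0   # object no later than 10 s after the command


def fuse_command_object(events: list[dict]) -> Optional[str]:
    """events: e.g.
    {"time": 3.1, "modality": "speech", "kind": "command", "value": "DIRECT TO"}
    {"time": 4.0, "modality": "touch",  "kind": "object",  "value": "KPHX"}"""
    commands = [e for e in events if e["kind"] == "command"]
    objects = [e for e in events if e["kind"] == "object"]
    for cmd in commands:
        for obj in objects:
            # The object may precede the command, coincide with it, or
            # follow it by at most the linkage window.
            if obj["time"] <= cmd["time"] + LINKAGE_WINDOW_S:
                return f'{cmd["value"]} {obj["value"]}'   # e.g. "DIRECT TO KPHX"
    return None   # no meaningful combination: treated as an input error
```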
However, usage of individual modalities or combination of modalities may be subject to specific changes according to particular function. Table 1 summarizes the list of the supported modalities.

4.4 Speech Recognizer
The Speech Recognizer is a module capable of recognizing a trained set of phrases consisting of one or several words, pre-processing them, and sending them to the Interaction Manager for further processing.
Since the number of phrases is rather large, the Speech Recognizer implements two approaches in order to increase robustness against recognition errors (recognizing some other word or phrase than was spoken by the user):
1. Use of multiple grammars: instead of using one large set of phrases that can contain many phrases with similar pronunciation, the set is divided into several smaller (possibly overlapping) subsets, which makes recognition easier for the speech engine.
2. Filtering based on confidence level: the Speech Recognizer provides a confidence value (represented as a number) that indicates how certain it is that the recognized phrase is the same as the pronounced one. The lower this confidence, the higher the probability that the pronounced and recognized phrases differ. The Speech Recognizer can filter out recognized phrases with a confidence level below a specified threshold; this significantly lowers the possibility of sending the Interaction Manager a different command than the one pronounced.
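A minimal sketch of the confidence filter follows; the threshold value and the result format are illustrative, not the recognizer’s actual API.

```python
# Illustrative confidence-based filtering of recognition results.
CONFIDENCE_THRESHOLD = 0.7   # assumed value


def filter_recognitions(results: list[tuple[str, float]]) -> list[str]:
    """Keep only phrases whose recognition confidence reaches the threshold,
    so that unlikely recognitions never reach the Interaction Manager."""
    return [phrase for phrase, confidence in results
            if confidence >= CONFIDENCE_THRESHOLD]


print(filter_recognitions([("direct to", 0.93), ("delete", 0.41)]))
# -> ['direct to']  (the low-confidence phrase is discarded)
```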

The speech recognizer was evaluated multiple times, in Honeywell’s multi-modal cockpit as well as in the project demonstrator during Validation Cycle II.
Experimental work was also conducted in the area of building a combined language-model & grammar-driven recognizer capable of processing free speech with embedded chunks corresponding to the recognition grammar.
A significant amount of work was done on accent classification (we prefer ‘accent’ rather than ‘dialect’ to denote variations of English), using bottleneck DNN features and i-vector based scoring. Experimental results were produced on the Foreign Accented English database, showing that this architecture is usable for accent detection and that the results correspond to linguistic intuition.

4.5 Gesture Recognition Algorithms
Technology for remote passive eye tracking and for 3D implicit gesture recognition has been developed. The technology processes input from camera(s) placed in the cockpit (simulator), records the data and processes it. The output in the case of eye tracking is the direction of the operator’s gaze, tested on the task of missed event detection. In the case of 3D gesture recognition, the output consists of recognized implicit gestures of the pilot, as she is interacting with the cockpit.
Along with the development of the algorithms and technology, datasets for training and evaluation of the methods have been collected and annotated and made public along with publication of the results. The results have been published at the IEEE Intelligent Transportation Systems Conference in the form of two papers dealing with the individual approaches. A summary article has been submitted to the IEEE Journal of Intelligent Transportation Systems and is currently under review.

5 Training Tool
The Training Tool is software running on a Tablet PC, intended to be used by simulator flight instructors; it may be viewed as a spin-off product of the A-PiMod project. The purpose of the Training Tool is to use the capabilities of the A-PiMod concept to enhance cockpit crew training.
An instructor working with the Training Tool will be able to:
1. Monitor real-time and post-hoc:
a. Simulator Data
b. A-PiMod system data about the crew state
c. Scenario events and self-set markers
2. Assess the crew on the basis of:
a. Instructor ratings (subjective, manual)
b. Task management
c. Crew state information
d. Flight parameters
3. Improve/support instructional feedback after the training, based on the output of the training tool

The Training Tool consists of two main Tabs – i.e. the (1) Crew Assessment Tab and the (2) Scenario Tab.
The (1) Crew Assessment Tab is based on the SHAPE evaluation methodology. This method covers five topics: Self, Human interaction, Aircraft interaction, Procedures and Environment. This evaluation form has been used by airlines but can easily be replaced by other schemes. For the rating form, a distinction can be made between Pilot Flying (PF) and Pilot Monitoring (PM). Each aspect can be expanded or collapsed to maintain usability.
When an assessment is made, the user can add an additional comment that is registered together with the assessment. A historical overview of the assessments made during a training session is visualized in the Scenario Tab.
The (2) Scenario Tab shows real-time information on a running training scenario. In the Scenario Tab, the instructor is able to monitor:
- Aircraft State data, such as altitude and speed
- Historical overview of the pilot’s workload
- Historical overview of the pilot’s situation awareness
- Historical overview of crew assessment events
- Historical overview of instructor comments

6 Validation
The third pilot concept and its associated safety case have been advanced throughout the runtime of the project (2013 to 2016). Overall, this has involved participatory action research, including twenty-seven sessions with the A-PiMod Community of Practice (COP), and two rounds of simulator and desktop evaluation (i.e. Validation Cycles 1 and 2).
The assessment of potential impact/benefits has been undertaken in relation to actual end-user/operational scenarios. Scenarios have been developed as part of (1) formal evaluation activities (Validation Cycle 1 and Validation Cycle 2) and (2) ongoing research with the A-PiMod COP. In addition, several project members (Symbio, DLR, OFFIS, KITE and TCD) have worked together as part of the ‘A-PiMod User Interface Design Working Group’ to coordinate all relevant feedback relating to the specification and design of the MCD.
The safety impact of the A-PiMod adaptive automation and multimodal cockpit concept was quantified by a systematic approach using the Total Aviation Risk model and structured feedback on change factors for base events in this risk model. Overall, it is assessed that the A-PiMod concept facilitates a reduction in the probability of fatal accidents by 43%, from 4.0E-7 to 2.2E-7 fatal accidents per flight. This is about half of the FP7 Area 7.1.3 objective of reducing the accident rate by 80%.
The assessment of safety impact mostly relates to what has been advanced at a conceptual level (i.e. A-PiMod concept), rather than for its particular implementation as achieved in the A-PiMod project. In the course of the A-PiMod project a particular implementation of the concept was achieved by development of a set of tools, and these tools were used in validation experiments in a flight simulator context. This set of tools can be viewed as a first technical instantiation of the A-PiMod system, and the sophistication, scope and integration of the tools can be improved in future research and development.
In addition, research has resulted in the specification of the Training Tool. The A-PiMod crew model provides the instructor with real-time information on the mental/cognitive capability of flight crew during training, and combines this data with traditional data used in simulator training (i.e. instructor ratings and logs of simulated flight data).
The expected benefits/impact (as defined in D5.1) were validated through extensive field research, including twenty-seven COP sessions, Validation Cycle 1 (VC1), Validation Cycle 2 (VC2) and the evaluation at the demo day.
As validated in field research, the A-PiMod concept/approach will allow for an improved partnership between crew and automation (the "team players" idea), which will reduce human error and make substantial progress in relation to the EU aim of reducing the accident rate by 80%.
As indicated in the accident analysis research with COP members and VC2 participants, all nineteen participants indicated that A-PiMod would have played a significant role in preventing the analysed accident.
As demonstrated in Validation Cycle 2 (VC2), the availability of a real-time crew model facilitates the implementation of a new training approach for improved flight crew training in simulators. The training approach based on the A-PiMod models is particularly useful for improving Crew Resource Management. Further, flight-crew-model-based training will allow for more efficient acquisition and retention of skills, and it will make training more efficient by supporting the instructors through the application of the flight crew model monitoring the flight crews’ mental state and stress level.

Potential Impact:
With respect to the major impact, the A-PiMod system will provide benefits, contributing to a reduction of the accident rate and to the elimination of, and recovery from, human error. The ‘Third Crew Member’ provides many operational and safety benefits, including: improving teamwork between crew and automation, providing task support in safety-critical situations (operational risk assessment and decision support), providing task support in high-workload situations, supporting workload management, improving team situation awareness, augmenting pilot monitoring performance (avoiding monitoring errors that feed the error chain), and providing support for error detection and management.
Overall, this new concept will significantly improve the safety of flight, especially in abnormal situations and during crisis management. Critically, A-PiMod will not eliminate human error; rather, it will reduce it (i.e. reduce the accident rate, given improvements in error detection and error management). Pilots considered the architecture and tools helpful and expected that they would help to reduce and mitigate errors. The assessment of the safety impact indicates a 43% reduction in the accident rate.
The major socio-economic impact of A-PiMod is seen with respect to safety and cost efficiency.
A-PiMod improved the human-centred design of cockpit displays by developing a highly adaptive cockpit architecture. First, A-PiMod developed a multimodal interface that allows crews to select the modality (speech, touch, Cursor Control Device, Multifunction Keyboard) they find most appropriate in the current situation. Information about the flight crew, about the tasks required in the current situation, and about the risk resulting from possible task distributions is considered in order to inform the crew about tasks and to issue warnings when tasks are not executed. Second, to improve human-machine cooperation, A-PiMod developed real-time crew models to derive the crew’s internal states in a reliable manner, so that the human-machine interaction and the role of automation can be adapted to the crew’s needs. Third, the A-PiMod system also supports the crew in managing information from different sources: specifically, the Risk Assessment components, both at ML and at CL, collect information from different sources and present the result in a coherent manner on the MCD.
The Training Tool delivered by A-PiMod also has an impact on safety, as it supports and improves flight crew training in simulators. The Training Tool is connected to the Crew State Inference and can inform instructors about the pilots’ current situation awareness. By providing the instructor with additional data on crew performance, the assessment of selected competencies becomes more accurate, resulting in more targeted improvement of competencies.
Regarding cost efficiency, the training tool developed in A-PiMod supports the acquisition and retention of the complex competencies required of flight crews in current and next-generation cockpits. More specifically, the cost efficiency of training will be improved by assisting flight simulator instructors with a novel training tool and training techniques based on A-PiMod models and behavioural data collection.
In summary, the A-PiMod system, thoroughly designed as an adaptive multimodal cockpit, acts as a real team player with the crew and will, in the end, allow crew and automation to cooperatively fly an aircraft more safely than was possible in the past.

The A-PiMod partners disseminated the results of the project in several ways and to different audiences. The full list of dissemination activities is included in this report. The final dissemination plan, which provides more information about the individual dissemination activities and the targeted audiences, is provided with Deliverable D6.2.
Dissemination activities of A-PiMod that should be highlighted are the participation in two exhibitions: the Aerodays in London in October 2015 and the ILA Berlin Air Show in June 2016. The modules, tools, and technologies developed in A-PiMod were integrated into a transportable cockpit simulator and presented at these exhibitions. Interested visitors could experience how these components work together and support the pilots during prepared interactive scenarios.
Furthermore, a project website was set up at the beginning of the project and kept up to date during its runtime. Newsletters were prepared and made available on this website.
In advance of the A-PiMod DemoDay at the ILA Berlin Air Show 2016, a video of about five minutes was produced to present the final A-PiMod achievements. The video was shown for the entire duration of the ILA Berlin Air Show 2016 in the background of the dedicated DLR/A-PiMod booth, and was later uploaded to the project website. The target audience comprises interested fair visitors, domain experts, pilots and colleagues, as well as the wider public reached through the project website. Its main purpose is to demonstrate the benefits, approach and methods of A-PiMod in an easily understandable way.

List of Websites:
www.apimod.eu

Project Coordinator
Andreas Hasselberg
Institute of Flight Guidance, DLR
Lilienthalplatz 7 - 38108 Braunschweig Germany
Tel.: +49 (0)531 295 2427
Andreas.Hasselberg@dlr.de

Dissemination Manager
Mirella Cassani
KITE Solutions S.r.l.
Via Labiena 93 - 21014 Laveno Mombello (VA) Italy
mirella.cassani@kitesolutions.it