Project description
A smart new system to support air traffic controllers
Increased automation based on artificial intelligence (AI) and machine learning (ML) can improve performance, efficiency, problem solving, safety and process control. But technology that replaces human activity can create problems when its workings are not understandable to humans. The EU-funded MAHALO project aims to design an explainable system, based on AI, ML and deep neural learning, for problem solving between aircrews and air traffic controllers. Trained by the individual operator, the machine will be able to show the operator what it has learnt, increasing capacity, performance and safety. Specifically, MAHALO will investigate the impact of transparency (how well the AI can explain why it took a specific decision) and conformance (how similar the AI's decision is to the one the controller would choose). The system will be evaluated in real-time simulations across traffic complexity levels, measuring controller trust, acceptance and understanding. MAHALO's framework will serve as a model for future AI systems.
Objective
MAHALO asks a simple but profound question: in the emerging age of Machine Learning (ML), should we be developing automation that matches human behavior (i.e. conformal), or automation that is understandable to the human (i.e. transparent)? Further, what tradeoffs exist, in terms of controller trust, acceptance, and performance? To answer these questions, MAHALO will:
• Develop an individually tuned ML system, comprising layered deep learning and reinforcement models trained on controller performance (context-specific solutions), strategies (eye tracking), and physiological data, which learns to solve ATC conflicts (see the sketch after this list);
• Couple this to an enhanced en-route conflict detection and resolution (CD&R) prototype display that presents the machine's rationale for its ML output;
• Evaluate, in real-time simulations, the relative impact of ML conformance, transparency, and traffic complexity on controller understanding, trust, acceptance, workload, and performance; and
• Define a framework to guide design of future AI systems, including guidance on the effects of conformance, transparency, complexity, and non-nominal conditions.
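As a purely illustrative sketch of the first bullet, the snippet below shows how a small supervised model could be fit to an individual controller's past conflict resolutions. The feature layout, the single heading-change output, and all dimensions are assumptions made for illustration, not MAHALO's actual architecture (which also involves reinforcement models, eye-tracking and physiological inputs).

```python
# Minimal sketch (not the project's implementation): a supervised model that
# learns an individual controller's conflict resolutions from hypothetical
# conflict-geometry features. All names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ResolutionModel(nn.Module):
    """Maps a conflict-geometry feature vector to a resolution advisory."""
    def __init__(self, n_features: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # e.g. commanded heading change in degrees
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def train_step(model, optimizer, features, controller_actions):
    """One supervised update toward the resolutions this controller chose."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(features), controller_actions)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random stand-in data.
model = ResolutionModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
features = torch.randn(32, 8)            # conflict geometry, per sample
controller_actions = torch.randn(32, 1)  # observed heading changes
train_step(model, optimizer, features, controller_actions)
```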
Building on the collective experience within the team, past research, and recent advances in ML and ecological interface design (EID), MAHALO will take a bold step forward: to create a system that learns from the individual operator but also gives the operator insight into what the machine has learnt. Several models will be trained and evaluated to reflect a continuum from individually matched to group average. The most recent work in automation transparency, Explainable AI (XAI) and ML interpretability will be explored to afford understanding of ML advisories. The user interface will present ML outputs in terms of: current and future (what-if) traffic patterns; intended resolution maneuvers; and rule-based rationale. The project's output will add knowledge and design principles on how AI and transparency can be used to improve ATM performance, capacity, and safety.
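The hedged sketch below shows one possible way to represent such a display payload and the conformance continuum: an advisory carrying the intended maneuver, a what-if outcome, and rule-based rationale, plus a simple blend between an individually matched and a group-average advisory. Field names and the linear blend are illustrative assumptions, not the project's specification.

```python
# Hedged sketch only: one possible representation of a transparent advisory and
# of a conformance level blending individual vs. group-average solutions.
from dataclasses import dataclass, field

@dataclass
class Advisory:
    heading_change_deg: float                 # intended resolution maneuver
    predicted_separation_nm: float            # what-if outcome of the maneuver
    rationale: list[str] = field(default_factory=list)  # rule-based rationale

def blend_advisories(individual: Advisory, group: Advisory, conformance: float) -> Advisory:
    """conformance = 1.0 -> fully individually matched, 0.0 -> group average."""
    w = max(0.0, min(1.0, conformance))
    return Advisory(
        heading_change_deg=w * individual.heading_change_deg
        + (1 - w) * group.heading_change_deg,
        predicted_separation_nm=w * individual.predicted_separation_nm
        + (1 - w) * group.predicted_separation_nm,
        rationale=individual.rationale if w >= 0.5 else group.rationale,
    )

# Example: an advisory shown together with its rule-based rationale.
adv = blend_advisories(
    Advisory(15.0, 6.2, ["Turn behind crossing traffic", "Maintain 5 NM separation"]),
    Advisory(10.0, 5.8, ["Minimal-deviation group solution"]),
    conformance=0.8,
)
print(adv.heading_change_deg, adv.rationale)
```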
Funding Scheme
RIA - Research and Innovation action
Coordinator
00185 Roma
Italy
The organization defined itself as an SME (small and medium-sized enterprise) at the time the Grant Agreement was signed.