Self-driving technologies need user-friendly AI
Self-driving technologies have seen rapid advancements thanks to artificial intelligence (AI), which can process massive amounts of data to make safe and efficient driving decisions. However, citizens' concerns over how and why AI systems make these decisions must be addressed if the widespread roll-out of automated vehicles is to succeed. “When we drive a car, we know why we steer in a particular direction, change lanes or brake,” says AIthena project coordinator Oihana Otaegui from Vicomtech in Spain. “But when we are in an automated car, it is possible that we don’t fully understand what is going on. The AI is like a black box – information goes in, and a decision comes out.”
Accountable by design
The AIthena project sought to crack open this black box, allowing humans in automated vehicles, as well as highway authorities and traffic managers, to understand why driving decisions are being made. To achieve this, the project team has pioneered a new approach to developing user-friendly AI in connected, cooperative and automated mobility (CCAM) applications. “Our methodology should help to ensure that AI systems are explainable to end users, and fully comply with European regulations such as the Data Act and AI Act,” adds Otaegui. The project’s approach covers AI development across four key phases: data collection, training, testing and deployment. A central element is documenting details such as intended use, ethical considerations and performance metrics. “We have also advanced methods to protect sensitive data while maintaining AI performance,” continues Otaegui. “These methods include privacy-preserving techniques such as homomorphic encryption and federated learning, allowing AI to be trained without directly exposing raw data.”
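The federated learning idea Otaegui mentions can be illustrated with a minimal sketch of federated averaging, the basic server-side step of that technique: each vehicle computes a model update on its own data and shares only the update, never the raw data. The toy "training" routine, the client data and all numbers below are invented placeholders for illustration, not AIthena's actual pipeline.

```python
# Federated averaging (FedAvg) sketch: clients train locally on private data;
# the server averages their weights into a new global model. Raw data never
# leaves the client.

def local_update(global_weights, client_data, lr=0.1):
    """Toy local training step: one gradient-descent step pulling each weight
    toward the mean of that client's samples (stand-in for real training)."""
    grad = [w - sum(xs) / len(xs) for w, xs in zip(global_weights, client_data)]
    return [w - lr * g for w, g in zip(global_weights, grad)]

def federated_average(client_weight_sets):
    """Server step: element-wise average of the clients' model weights."""
    n = len(client_weight_sets)
    return [sum(ws) / n for ws in zip(*client_weight_sets)]

# Three simulated vehicles, each holding private samples per weight dimension
# (invented numbers).
clients = [
    [[1.0, 2.0], [3.0, 4.0]],
    [[2.0, 3.0], [4.0, 5.0]],
    [[0.0, 1.0], [2.0, 3.0]],
]

global_weights = [0.0, 0.0]
for _ in range(50):  # communication rounds
    updates = [local_update(global_weights, data) for data in clients]
    global_weights = federated_average(updates)

# The global model converges toward the average of the clients' local optima,
# here roughly [1.5, 3.5], without the server ever seeing a raw sample.
print(global_weights)
```

In a real deployment each `local_update` would be many epochs of neural-network training on the vehicle, and techniques such as secure aggregation or homomorphic encryption can additionally hide the individual updates from the server.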
Paving the road to better AI
This methodology will provide a foundation for further refinement of AI in automated vehicles and will be tested in four practical use cases as the project enters its final year. The first of these case studies will focus on how AI systems perceive and act on raw data from sensors such as cameras, lidar and ultrasound. A second case study will look at how information from different sensors can be integrated to create situational awareness. The project team is interested in understanding how AI interprets the driving environment and takes into account factors such as the behaviour of other road users. “The third use case will explore decision-making processes in autonomous driving systems, focusing on understanding the reasons behind AI-driven decisions,” notes Otaegui. “The aim is to enhance transparency and trust in AI by explaining why specific driving decisions are made. This use case emphasises explainability and alignment with ethical principles.” A final use case will investigate AI-enabled traffic management systems, focusing on how automated vehicles interact with broader transportation networks. The goal here is to understand and optimise AI’s role in ensuring smooth traffic flow, efficient resource use and cooperative mobility.
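A classic building block for the kind of sensor integration described in the second use case is inverse-variance weighting: two noisy estimates of the same quantity are combined, each weighted by how precise it is. The sketch below is a generic illustration with invented sensor readings, not AIthena's fusion method.

```python
# Inverse-variance fusion of two noisy estimates of the same quantity
# (e.g. distance to an obstacle). The more precise sensor gets more weight.

def fuse(x1, var1, x2, var2):
    """Combine two estimates, weighting each by the inverse of its variance.
    Returns the fused estimate and its (reduced) variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * x1 + w2 * x2) / (w1 + w2)
    return fused, 1.0 / (w1 + w2)

# Invented readings: the camera estimates the obstacle at 10.0 m but is noisy
# (variance 4.0 m^2); the lidar says 10.4 m and is more precise (1.0 m^2).
estimate, variance = fuse(10.0, 4.0, 10.4, 1.0)
print(estimate, variance)  # -> 10.32 0.8: the lidar dominates, and the
                           # fused variance is lower than either sensor's
```

Full perception stacks generalise this idea, for example with Kalman filters that fuse sensor streams over time, but the principle is the same: weight each source by its reliability.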
A sustainable AI landscape
These case studies will be assessed through real-life driving situations, as well as simulations for more complex and potentially dangerous scenarios. The team will assess how AI perceives the environment, understands and communicates driving situations, makes decisions, and operates within broader traffic management systems. “If we can show that our methodology is a viable way of developing trustworthy AI for CCAM applications, then this could form the basis for future projects,” explains Otaegui. “By focusing on explainability, privacy and accountability, our hope is that AIthena will help ensure that AI technologies are transparent, ethical and human-centric. This will ultimately contribute to a more trusted and sustainable AI landscape for autonomous transport.”
Keywords
AIthena, CCAM, connected, cooperative and automated mobility, vehicle, AI, artificial intelligence, mobility, lidar, transport, explainability, trustworthy AI