ECCOMAS 2024

AI4CCAM: Trustworthy AI for Connected, Cooperative Automated Driving

  • Gotlieb, Arnaud (Simula Research Laboratory)


Artificial Intelligence is considered one of the key enabling technologies that can lead to the successful deployment of automated vehicles. By leveraging the huge amounts of data collected by new and powerful sensors, artificial intelligence can revolutionise the future of automated driving functions. However, the benefits of artificial intelligence in automated driving are hampered by ethical risks that can compromise its adoption by car drivers, passengers or vulnerable road users such as pedestrians, cyclists, or persons with disabilities. The EU-funded AI4CCAM project (www.ai4ccam.eu - 14 partners, 2023-2025, 5.9MEUR budget, Grant agreement No 101076911) aims to address this hurdle by developing trustworthy-by-design artificial intelligence methods and models for automated driving functions. Leveraging the Trustworthy AI guidelines – a document created by the European Commission’s High-Level Expert Group on Artificial Intelligence – and the ethical recommendations for connected automated vehicles, AI4CCAM supports (i) the development of automated driving scenarios with ethical risks involving vulnerable road users, (ii) the design of a trustworthy deep learning architecture which embeds pedestrian and cyclist behaviour anticipation models, (iii) scene understanding through qualitative constraint acquisition, (iv) visual gaze estimation and virtual reality, (v) user acceptance studies including levers and barriers for automated vehicles, and (vi) the simulation of explainable car trajectory prediction models. Taking into account all the capabilities and potential risks of AI, AI4CCAM creates trustworthy AI models and methods that will advance the safety and user acceptance of automated vehicles. These models will be tested on scenarios illustrated in three distinct use cases, which will cover the overall Sense-Plan-Act chain of automated vehicles, as well as the user acceptance questions that naturally arise when AI models and methods are used.
The principles of trustworthiness identified at a European level will be enriched and transposed into specific indicators and practices enabling the evaluation of the systems under test.