17th European Dependable Computing Conference
13-16 September 2021
Munich, Germany


Sep 14th, 2021: Claus Bahlmann, Siemens AG
Sep 15th, 2021: Arnaud Gotlieb, Simula Research Laboratory
Sep 16th, 2021: Fabian Hüger, CARIAD and Volkswagen AG

Challenges for Safety and Dependability from Perception Tasks in Modern Rail Applications

Dr. Claus Bahlmann
Head of AI & Principal Expert Computer Vision & Artificial Intelligence, Siemens Mobility GmbH

Siemens AG

Tuesday, Sept. 14, 2021

Abstract and bio: to be announced

Claus Bahlmann

Leveraging AI methods for testing non-testable autonomous systems

Arnaud Gotlieb
Chief Research Scientist, Simula Research Laboratory

Simula Research Laboratory

Wednesday, Sept. 15, 2021

Autonomous systems are emerging entities that embed self-adapting and self-reasoning capabilities based on multiple sensors; examples include autonomous ships and industrial collaborative robots. Autonomous technologies offer a significant opportunity to enhance industrial processes and the economy, but they may also cause fatal harm if they malfunction. Thoroughly testing these systems is therefore crucial to ensure their safe and fault-free behaviour in many situations, but it is also challenging, as validation engineers can hardly predict their expected behaviours. For that reason, these systems appear to be non-testable. Fortunately, several Artificial Intelligence-based methods have been developed to help and support validation engineers in the validation of autonomous systems, but these technologies still lack support and automation. My talk will review some of these intelligent validation methods and how they are deployed to test autonomous systems. It will also address current challenges in this area.

Arnaud Gotlieb
Arnaud Gotlieb, Chief Research Scientist at Simula Research Laboratory in Norway, is an expert in the application of Artificial Intelligence to the validation of software-intensive systems and cyber-physical systems, including industrial robotics and autonomous systems. He completed his PhD on automatic test data generation using constraint logic programming in 2000 at the University of Nice-Sophia Antipolis and was habilitated (HDR) in Dec. 2011 by the University of Rennes, France. Dr. Gotlieb has co-authored more than 120 publications in Artificial Intelligence and Software Engineering and developed several tools for testing safety-critical systems. He was the scientific coordinator of the French ANR-CAVERN project (2008-2011) for Inria, dedicated to the verification of software systems with abstraction-based methods, and he led the research-based innovation centre Certus (2011-2019) at Simula, dedicated to software validation and verification. He was awarded the prestigious RCN FRINATEK grant for the T-LARGO project on testing learning robots (2018-2022). He leads the industrial pilot experiments of the H2020 AI4EU project (2019-2021), dedicated to the creation of the European AI-on-demand platform. Dr. Gotlieb has served on many programme committees, including IJCAI, AAAI, CP, ICSE-SEIP, ICST, and ISSRE, and co-chaired the scientific program of QSIC 2013, the SEIP track of ICSE 2014, and the “Testing and Verification” track of CP from 2016 to 2019. He co-chaired the first IEEE Artificial Intelligence Testing Conference in 2019 and is an associate editor of the Wiley Software Testing, Verification and Reliability journal. In 2021, he co-created RESIST, the first Inria-Simula associate team, dedicated to the development of resilient software systems.

Towards Safe AI for Automated Driving

Dr. Fabian Hüger
AI Safety Researcher

CARIAD and Volkswagen AG

Thursday, Sept. 16, 2021

Highly automated vehicles must be able to accurately perceive their environment and react appropriately. Reliable environment perception, including reliable identification and classification of all relevant road users, is a basic prerequisite for implementing autonomous driving functions. This is especially true for perception of the environment in complex urban traffic situations. Methods of artificial intelligence (AI) are considered the method of choice for perception functions, making AI a key technology. One of the greatest challenges for integrating these technologies into highly automated vehicles is ensuring and certifying the (functional) safety of such systems without the driver as a safety fallback. Existing and established safety processes cannot be directly transferred to machine learning methods. The German publicly funded project “KI Absicherung” is addressing this issue: for an urban L4 system with AI-based pedestrian detection, methods and measures for verifying the safety of the AI function are developed and investigated. An exemplary safety argumentation is being developed and may serve as a template for the industry. This talk introduces our approach and discusses DNN-specific safety concerns as well as methods and measures for their mitigation.

Fabian Hüger
Since 2017, Fabian Hüger has been working on the use of AI in safety-critical systems and has co-authored more than 20 publications in that area. He holds a PhD in Electrical Engineering and joined Volkswagen in 2010 as a researcher for connected cars and, later, autonomous driving. He is the Volkswagen technical project lead for the KI Absicherung project. In 2021, he joined CARIAD as an expert for SafeAI, with the mission to shape the processes, methods, and tools for the use of AI in safety-critical systems.