
Event

PhD defence of Aren Babikian – System-level testing of autonomous vehicles through consistent model generation with qualitative abstractions and abstract coverage

Friday, August 23, 2024, 10:30 to 12:30
McConnell Engineering Building Room 603, 3480 rue University, Montreal, QC, H3A 0E9, CA

Abstract

In recent years, autonomous vehicles (AVs) controlled by advanced machine learning techniques have gained significantly in popularity. Drawn by their many promising functionalities, citizens are quick to transition from regular road vehicles to (at least partially) autonomous vehicles for their daily travels. However, as the number of AVs on our roads increases, so do the related safety assurance concerns.

To identify and address such safety concerns, researchers and practitioners often refer to existing safety standards for autonomous vehicles (e.g. ISO 21448) and for regular vehicles (e.g. ISO 26262-1). Of note is the level of detail in such standards: although requirements exist for the individual components involved in AVs, safety standards also place system-level requirements and restrictions on the AV-under-test. Unfortunately, research has shown that, despite being effective within their scope, component-level testing approaches often do not adapt well to the system level. Therefore, system-level testing approaches must be derived independently and may make certain assumptions with respect to the correctness of the various underlying components.

System-level safety assurance approaches for AVs are often based on adaptations of existing approaches for general software systems. However, existing research suggests that such adaptations are not adequate for AV testing. On the one hand, upfront design-time verification of AVs is practically infeasible given the potentially infinite number of environments (contexts) an AV must interact with. On the other hand, runtime techniques, such as on-road monitoring, are unsustainable because they place untested or partially tested AVs on real roads, posing a serious threat to the safety of surrounding vehicles and pedestrians. To address these testing challenges, recent safety assurance approaches adopt the scenario-based testing paradigm: they test AVs by (1) automatically deriving traffic scenarios, (2) executing them in simulation, and (3) evaluating the system-level safety of the AV-under-test.
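To make the three steps concrete, the minimal Python sketch below outlines the overall loop. The Scenario and SafetyReport types and all function names are hypothetical placeholders for illustration; they are not part of the thesis artifacts or of any particular simulator API.

# Hypothetical sketch of the scenario-based testing loop (placeholder names).
from dataclasses import dataclass, field

@dataclass
class Scenario:
    # Abstract description of a traffic situation: actors, road layout, maneuvers.
    actors: list = field(default_factory=list)
    road_layout: str = ""
    maneuvers: list = field(default_factory=list)

@dataclass
class SafetyReport:
    collisions: int = 0
    near_misses: int = 0

def generate_scenarios(n: int) -> list[Scenario]:
    # (1) Automatically derive candidate traffic scenarios.
    raise NotImplementedError

def run_in_simulation(scenario: Scenario) -> SafetyReport:
    # (2) Execute the scenario in a simulator with the AV-under-test in the loop.
    raise NotImplementedError

def evaluate(reports: list[SafetyReport]) -> bool:
    # (3) Aggregate system-level safety evidence for the AV-under-test.
    return all(r.collisions == 0 for r in reports)

In practice, step (1) would be driven by scenario-generation techniques such as those proposed in the thesis, and step (2) by a driving simulator such as CARLA or BeamNG.tech.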

In this thesis, I propose a scenario-based AV testing approach that builds upon automated generation of critical test traffic scenarios. I provide contributions in accordance with three foundational research questions. As an initial step, I propose (FRQ1) a multi-faceted, formal scenario specification language that incorporates relevant traffic concepts at various levels of abstraction for adequate representation of AV test cases. To evaluate the (practical) relevance of traffic scenarios, I propose (FRQ2) various coverage metrics at different abstraction levels. In particular, these metrics incorporate concepts related to potential danger in traffic scenarios (e.g. collisions, near-misses).
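As a purely illustrative example of a danger-aware label (not the exact formulation from the thesis), one could classify each simulated scenario trace by the minimum separation between the AV and another actor; the distance thresholds below are assumed values.

# Illustrative danger label for a scenario trace; thresholds are assumed values.
import math

COLLISION_DIST = 2.0   # metres, assumed combined vehicle footprint
NEAR_MISS_DIST = 5.0   # metres, assumed near-miss threshold

def min_separation(traj_av, traj_other):
    # Smallest distance between two trajectories sampled at the same time steps.
    return min(math.dist(p, q) for p, q in zip(traj_av, traj_other))

def danger_label(traj_av, traj_other):
    # Map a scenario trace to an abstract danger category.
    d = min_separation(traj_av, traj_other)
    if d <= COLLISION_DIST:
        return "collision"
    if d <= NEAR_MISS_DIST:
        return "near-miss"
    return "safe"

# Example: a minimum separation of 1.0 m falls in the "collision" category.
av    = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
other = [(2.0, 3.0), (2.0, 1.4), (2.0, -1.0)]
print(danger_label(av, other))  # -> collision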

To complete the AV testing workflow, I then propose (FRQ3) various approaches that derive consistent, simulation-ready traffic scenarios with high abstract coverage from abstract specifications given as input. I compare these approaches according to the particularities of the input specification language each handles and to the relevance of the traffic scenarios each derives. As a practical outcome of my contributions, I provide safety evaluation data and analysis for three state-of-the-art AV controllers (TransFuser, Dave2 and BeamNG.AI) within two simulation environments (CARLA and BeamNG.tech).
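A rough sketch of how abstract coverage might be accumulated over a generated batch is shown below; the three-category abstraction is an assumption made for illustration, not the coverage model used in the thesis.

# Illustrative abstract-coverage computation over generated scenarios
# (the category set is an assumed abstraction, not the thesis model).
TARGET_CATEGORIES = {"collision", "near-miss", "safe"}

def abstract_coverage(observed_labels):
    # Fraction of target abstract categories exercised at least once.
    return len(TARGET_CATEGORIES & set(observed_labels)) / len(TARGET_CATEGORIES)

# Example: a batch whose simulations yielded these labels covers 2/3 of the categories.
print(abstract_coverage(["safe", "safe", "near-miss"]))  # 0.666...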
