Description
Our research focuses on the suitability of universal model-explanation tools and methods for the qualitative abstraction of embedded AI models for V&V purposes.
Rapidly spreading solutions based on embedded artificial intelligence in cyber-physical systems define the behavioral models of complex systems with machine learning tools. A fundamental obstacle to their wider adoption is that accurate modeling requires complex models, whose validation phase, especially in critical systems, can be problematic in terms of the interpretability of the model and the explanation of its behavior.
A growing demand is discernible in the field of Artificial Intelligence for improving the explainability of data sets and of the models derived from them (XAI). The proposed techniques either use directly explainable AI model structures, sacrificing some accuracy, or seek universal tools that derive explanations independently of the modeling paradigm.
Qualitative modeling plays a special role in the foundations of model-based supervisory control, in which a high-level overview is provided by a discrete model in accordance with the concept of hybrid modeling. The core idea behind the abstraction is to map entire subsets of the continuous state space, each corresponding to an operational domain exhibiting similar behavior, to a single qualitative state, as in the sketch below.
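To illustrate the idea, a minimal sketch follows; the state variables, thresholds, and domain labels are purely hypothetical and are not taken from the described tool chain:

```python
# Minimal sketch of qualitative abstraction: every continuous state that falls
# into the same operational domain is mapped to a single qualitative state.
# Variables, thresholds, and domain names below are illustrative assumptions.

def qualitative_state(temperature: float, pressure: float) -> str:
    """Map a continuous (temperature, pressure) state to a qualitative state."""
    if pressure > 8.0:                 # hypothetical safety threshold
        return "OVERPRESSURE"          # potentially dangerous domain
    if temperature < 60.0:
        return "WARM_UP"               # start-up domain with similar behavior
    if temperature <= 90.0:
        return "NOMINAL"               # normal operating domain
    return "OVERHEATING"               # domain requiring supervisory action

# An entire subset of the continuous state space collapses to one state:
assert qualitative_state(75.2, 4.1) == qualitative_state(83.7, 5.9) == "NOMINAL"
```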
The principle of qualitative modeling has a long tradition as a descriptive means in different fields of science; however, its use for CPS control raises specific challenges: (i) first of all, the quality of the model must be guaranteed, as the decisive factor of its faithfulness; (ii) insufficient coverage of potentially dangerous operational domains may lead to hazardous control errors; and (iii) outliers need special care in critical applications.
In our submission, a general overview will be given of the major motivations, intentions, and challenges behind the concept of XAI. Many open-source explainable-AI libraries are already available, offering comprehensive sets of algorithms that cover different dimensions of explanations and proxy explainability metrics. Since there is no formal definition of interpretability, it can be approached in various ways, and each approach is served by a different library or algorithm. Using two state-of-the-art explainability libraries (IBM AIX360, DALEX), these approaches will be reviewed by showcasing their effectiveness on a wide range of machine learning models with different interpretability characteristics.
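As a hedged illustration of such a post-hoc, model-agnostic workflow, the sketch below uses the Python port of DALEX; the data set, the model, and the parameter choices are assumptions made for the example, not details of the submission, and the exact API may differ across package versions:

```python
# Sketch of a model-agnostic explanation workflow with the Python DALEX package.
# The model and data set are illustrative; the explained model could equally be
# an embedded black-box predictor wrapped behind a predict function.
import dalex as dx
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = dx.Explainer(model, X, y, label="rf")      # wrap any fitted model
global_imp = explainer.model_parts()                    # permutation importance (global)
local_expl = explainer.predict_parts(X.iloc[[0]],       # break-down attribution (local)
                                     type="break_down")
print(global_imp.result.head())
print(local_expl.result.head())
```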
Our research proposes an approach that allows embedded models, known even only as black boxes, to be validated at the level of their qualitative abstraction. The technique combines dimensionality reduction and clustering methods that accurately separate the operational domains while also recognizing outliers using well-fitting cluster borders. Various interpretability methods will be used to highlight the cohesive factors within the operating regions and to guide the understanding of the functionality.
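A minimal sketch of this pipeline idea is given below; the concrete choices of PCA and DBSCAN, the synthetic data, and all parameters are assumptions for illustration rather than the methods fixed by the approach:

```python
# Sketch: dimensionality reduction followed by density-based clustering.
# Clusters stand for candidate operational domains; DBSCAN's noise label (-1)
# marks points outside well-fitting cluster borders, i.e. potential outliers.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Stand-in for logged system states sampled from an embedded black-box model.
states = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(200, 10)),   # operational domain A
    rng.normal(loc=3.0, scale=0.3, size=(200, 10)),   # operational domain B
    rng.uniform(low=-6, high=9, size=(10, 10)),       # scattered outliers
])

embedded = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(states))
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(embedded)

print("operational domains found:", sorted(set(labels) - {-1}))
print("outliers flagged:", int(np.sum(labels == -1)))
```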
Our approach aligns with the ongoing trend of shifting effort from domain-knowledge-intensive, computationally heavy construction and fitting of models towards the V&V and interpretation of the model. We believe that introducing increasing (semi-)automation into the modeling part leads to a better understanding of the analyzed data.