Speakers
Description
In complex systems, testing, verification, and validation based on empirical data analysis become increasingly important for assuring a proper level of service under typically varying workloads. Scaling such systems requires reusable, scale-independent models to support reconfigurability.
Speculative analytic models are of limited faithfulness and thus poorly support complex system identification; consequently, empirical system identification from observations is emerging in this field. The increasing complexity calls for explainable, well-interpretable models that follow the logic of everyday thinking, so that both the model and its use in operation can be validated.
This presentation highlights how empirical model extraction can be integrated into the system identification process and presents a qualitative reasoning-based approach to model generalization, consistency checking, and model verification using answer set programming.
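To give a flavor of qualitative consistency checking, the sketch below uses a pure-Python generate-and-test over the sign algebra {-, 0, +} common in qualitative physics. The workload model (variables `load`, `reaction`, `latency` and their constraints) is a hypothetical illustration, not from the presentation; in practice an answer set solver would perform this search far more efficiently.

```python
from itertools import product

# Qualitative sign algebra: each variable takes a value in {-, 0, +}.
SIGNS = ('-', '0', '+')

def qadd(a, b):
    """Qualitative addition: returns the set of possible result signs."""
    if a == '0':
        return {b}
    if b == '0':
        return {a}
    if a == b:
        return {a}
    return set(SIGNS)  # opposite signs: the result is ambiguous

def consistent_assignments(constraints, variables):
    """Enumerate sign assignments satisfying every constraint.

    Each constraint is a predicate over an assignment dict; this mimics
    the generate-and-test semantics of an ASP solver, transparently but
    inefficiently.
    """
    for values in product(SIGNS, repeat=len(variables)):
        asg = dict(zip(variables, values))
        if all(c(asg) for c in constraints):
            yield asg

# Hypothetical workload model: load is increasing, the capacity reaction
# opposes the load, and the latency change must be a possible qualitative
# sum of the two influences.
constraints = [
    lambda a: a['load'] == '+',
    lambda a: (a['reaction'], a['load']) in {('-', '+'), ('+', '-'), ('0', '0')},
    lambda a: a['latency'] in qadd(a['load'], a['reaction']),
]
models = list(consistent_assignments(constraints, ['load', 'reaction', 'latency']))
```

An empty `models` list would signal an inconsistent qualitative model; here the ambiguity of opposite-sign addition leaves three admissible behaviors for the latency.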
An interesting research direction combines complex, accurate machine learning with the interpretability and explainability of qualitative models. Recent fundamental research uses Logic Explained Networks (LENs) or Boolean rule generators to create explanations for complex ML models, such as neural networks, even when they are black boxes. Our research combines qualitative modeling with logic interpretation, a well-proven method in qualitative physics, to bring the entire repertoire of discrete formal methods to the validation of embedded artificial intelligence.
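The idea behind Boolean rule generators can be illustrated by exhaustively querying a black-box Boolean predictor and assembling a DNF rule from its positive responses. The `black_box` function below is a hypothetical stand-in for a trained classifier, not an actual LEN; exhaustive enumeration is only tractable for a handful of features, which is exactly the gap LENs and rule learners address.

```python
from itertools import product

def extract_dnf(black_box, feature_names):
    """Build a complete DNF explanation of a Boolean black-box model by
    querying every input combination (feasible only for few features)."""
    terms = []
    for bits in product([False, True], repeat=len(feature_names)):
        if black_box(dict(zip(feature_names, bits))):
            literals = [n if v else f"not {n}"
                        for n, v in zip(feature_names, bits)]
            terms.append(" and ".join(literals))
    return " or ".join(f"({t})" for t in terms) if terms else "False"

# Hypothetical black box: raise an alarm on overload without redundancy.
def black_box(x):
    return x["overload"] and not x["redundant"]

rule = extract_dnf(black_box, ["overload", "redundant"])
# The extracted rule is an ordinary Boolean formula, so the discrete
# formal-methods toolbox (SAT, model checking, ASP) applies to it.
```

Once the behavior is captured as a logic formula, it becomes amenable to the same consistency checking and verification machinery as hand-built qualitative models.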