Speaker
Description
Automated discovery systems appeared in the 1970s; full-fledged experiment designs using decision theory and causal models were reported around 2000, and their robotic extensions with planning followed soon after. In the last decade, large-scale quantitative studies have analyzed policies for selecting scientific experiments and the evolution of science. Artificial intelligence (AI) has provided many successful models, principled foundations, and practical frameworks throughout this process. Bayesianism, causal modeling, and semantic publishing are notable examples, and their combination is still unfolding. I will show that their unification makes it possible to construct a novel, intermediate layer of scientific knowledge between data and interpreted scientific conclusions, built on the systematic reporting of posteriors for properties of models. The availability of this probabilistic layer of scientific knowledge could help solve the long-standing problem of machine learning with background knowledge and open new avenues for artificial creativity, thus boosting the automation of science.
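As a purely illustrative sketch of what "reporting posteriors for properties of models" could look like, the snippet below computes the posterior probability of one property of a toy Beta-Binomial model and serializes it as a machine-readable record. The model family, the experiment counts, and the record fields are all assumptions made for illustration; this is not the speaker's system.

```python
# Minimal sketch (assumed example, not the speaker's framework): compute a
# posterior for one property of a simple model and emit it as a structured
# record that a probabilistic knowledge layer could aggregate.
import json
import math

def beta_binomial_posterior(successes, trials, alpha=1.0, beta=1.0):
    """Conjugate update: Beta(alpha, beta) prior with a Binomial likelihood."""
    return alpha + successes, beta + trials - successes

def prob_rate_exceeds(threshold, alpha, beta, grid=10_000):
    """P(rate > threshold) under Beta(alpha, beta), by midpoint integration."""
    step = 1.0 / grid
    norm = math.gamma(alpha + beta) / (math.gamma(alpha) * math.gamma(beta))
    total = 0.0
    for i in range(grid):
        x = (i + 0.5) * step
        if x > threshold:
            total += norm * x ** (alpha - 1) * (1 - x) ** (beta - 1) * step
    return total

# Hypothetical experiment: 34 successes in 50 trials, compared to a 0.5 baseline.
a, b = beta_binomial_posterior(successes=34, trials=50)
record = {
    "model": "beta-binomial",             # model family the claim refers to
    "property": "P(success_rate > 0.5)",  # property of the model being reported
    "posterior": round(prob_rate_exceeds(0.5, a, b), 4),
    "evidence": {"successes": 34, "trials": 50},
    "prior": "Beta(1, 1)",
}
print(json.dumps(record, indent=2))
```

Such records sit between raw data and interpreted conclusions: they state, in machine-readable form, how strongly the evidence supports a specific property of a specific model, which is the kind of statement the proposed intermediate layer would collect.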