Description
The outcome of any measurement of a scientific observable can be considered a random variable. The crucial point is to find and apply statistical methods that enable reasonable inference about the measured observables and about the scientific models used to describe the phenomena of interest. Statistical inference, however, is contingent on the notion of probability and on how this notion is applied to the description and characterization of experimental results.
Different approaches then appear, based on different interpretations of the notion of probability: the frequentist interpretation, based on set theory, and the subjective ones, also called Bayesian interpretations, based on extensions of logic. The frequentist interpretation, grounded in Kolmogorov's mathematically well-established treatment of probability, is intuitively appealing; yet, much like the concept of a phase transition, which by definition occurs only in the thermodynamic limit, frequentist probability is defined only in the limit of infinitely many repeated trials. Experimental tests of theoretical predictions are always incomplete, and measurements are limited in their accuracy. Statistical inference is the process of inferring the truth of our theories of nature from this incomplete information.
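As a minimal illustration of this limiting-frequency notion (the event probability p_true and the trial counts below are illustrative assumptions, not examples from the lecture), the following Python sketch shows the relative frequency of an event approaching its underlying probability only as the number of repeated trials grows:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
p_true = 0.3  # assumed underlying probability of the event (illustrative)

# In the frequentist interpretation, the probability of the event is the
# limit of its relative frequency as the number of trials goes to infinity;
# for any finite n the ratio only approximates it.
for n in (10, 100, 10_000, 1_000_000):
    outcomes = rng.random(n) < p_true
    print(f"n = {n:>9,}: relative frequency = {outcomes.mean():.4f}")
```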
Estimating and measuring the incomplete information carried by measurements leads to the notion of entropy and its various definitions, which is a separate problem in itself.
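One widely used definition, Shannon's entropy, can be sketched as follows (a hedged illustration only; the lecture may treat other definitions, such as Rényi or von Neumann entropy, which this sketch does not cover):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H = -sum_i p_i * log2(p_i), in bits.

    Quantifies the missing information about the outcome of a random
    variable whose outcomes occur with probabilities p_i.
    """
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # outcomes with zero probability contribute nothing
    return -np.sum(p * np.log2(p))

# A fair coin carries maximal missing information (1 bit);
# a strongly biased coin carries far less (~0.08 bit).
print(shannon_entropy([0.5, 0.5]))
print(shannon_entropy([0.99, 0.01]))
```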
The lecture will be devoted to comparing the two approaches, frequentist and Bayesian, their inference methods, and their ranges of applicability. Real-life examples from physics and astrophysics, together with the pros and cons of each approach, will be presented.
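As a minimal sketch of how the two kinds of statements differ (the counting-experiment numbers, the uniform prior, and the 68% level below are illustrative assumptions, not the lecture's examples), one can compare a frequentist confidence interval with a Bayesian credible interval for the same data:

```python
import numpy as np
from scipy import stats

# Illustrative counting experiment: k events observed in n trials.
n, k = 100, 7
p_hat = k / n

# Frequentist: approximate 68% Wald confidence interval. The statement is
# about long-run coverage of the procedure under repeated experiments.
z = stats.norm.ppf(0.84)  # central 68% interval
se = np.sqrt(p_hat * (1.0 - p_hat) / n)
print(f"frequentist: {p_hat:.3f} +/- {z * se:.3f}")

# Bayesian: with a uniform Beta(1, 1) prior the posterior is
# Beta(k + 1, n - k + 1). The credible interval is a statement of degree
# of belief about the parameter, given the observed data.
posterior = stats.beta(k + 1, n - k + 1)
lo, hi = posterior.ppf([0.16, 0.84])
print(f"Bayesian 68% credible interval: [{lo:.3f}, {hi:.3f}]")
```

For data like these the two intervals nearly coincide; the approaches differ most visibly for small samples or for parameters near a physical boundary, which is where their pros and cons become a practical matter.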