From the Origin of Probability Theory to the Current Problems of Machine Learning

Machine learning stands mainly on the pillars of probability theory and statistics. It was Kolmogorov's celebrated book ``Grundbegriffe der Wahrscheinlichkeitsrechnung'' (translated as ``Foundations of the Theory of Probability'') which in 1933 laid the foundation for modern probability theory, and in doing so superseded the many competing approaches to the mathematical description of probability that existed at the time.

Nowadays, the axiomatization suggested by Kolmogorov goes largely unquestioned, in particular in the field of machine learning. However, due to its great potential and widespread use, machine learning is facing previously unnoticed problems such as discrimination, distribution shifts, and a lack of interpretability. Research is making progress in tackling these challenges, but it often ignores the foundations on which its machinery stands.

What can we learn from different axiomatizations of probability for understanding the current difficulties of machine learning techniques? Are the probabilistic assumptions made to model data reasonable and meaningful, and to what extent? In this project, we go back in time. Primarily, we contrast the lesser-known axiomatization of probability and randomness by von Mises from 1914 with Kolmogorov's approach. Our first paper already revealed a surprising statement: randomness and fairness can be considered equivalent concepts in machine learning:

[Fairness and Randomness in Machine Learning: Statistical Independence and Relativization]
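To give a concrete flavour of the fairness notion involved: in the fairness literature, a classifier's predictions are often called fair in the sense of demographic parity if they are statistically independent of a protected attribute. The Python snippet below is a minimal illustrative sketch of this independence criterion on a finite sample; it is not the relativized formalism developed in the paper, and the function name and data in it are hypothetical.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rates across groups.

    Fairness as statistical independence: the predictions are independent
    of the protected attribute iff P(y_pred = 1 | group = g) is the same
    for every group g. An empirical gap of 0 is consistent with that.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical example: a binary protected attribute and predictions
# that depend on it, so independence (fairness) is violated.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=1000)          # protected attribute
y_hat = rng.binomial(1, 0.3 + 0.2 * a)     # group-dependent predictions
print(demographic_parity_gap(y_hat, a))    # clearly > 0: not independent
```

A gap near zero indicates empirical independence, i.e., fairness under demographic parity; the von Mises perspective explored in this project views the same kind of statistical independence as a defining property of randomness.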

Further results will be published shortly...