Since my undergraduate studies in cognitive science, I have been interested in interdisciplinary research approaches. I started my PhD with Bob in November 2021. I'm interested in a generalization of classical probability theory known as imprecise probability, where one works with sets of probabilities instead of a single probability. In this context, Bob and I have studied coherent risk measures, which are aggregation functionals that generalize the notion of an expectation. In particular, we have focused on their tail sensitivity. For machine learning, these risk measures can be useful in many ways, for instance in the context of distributional robustness and fairness. To establish a firm conceptual foundation, we (jointly with Rabanus Derr) have developed a strictly frequentist theory of imprecise probability: we show that to any data sequence there is a naturally associated imprecise probability, even if no associated precise probability exists; and conversely, any imprecise probability arises from some data sequence. Recently, we have further generalized this work by investigating models for data with inherent imprecision.
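To give a flavor of this, here is a minimal sketch in my own notation (not taken verbatim from any particular paper): coherent risk measures admit a dual "envelope" representation as an upper expectation over a set of probabilities, which is exactly where imprecise probability enters:
\[
  R(X) \;=\; \sup_{Q \in \mathcal{Q}} \mathbb{E}_Q[X], \qquad \mathcal{Q} \text{ a closed convex set of probabilities,}
\]
so the ordinary expectation is recovered when \(\mathcal{Q} = \{P\}\) is a singleton. For instance, Conditional Value-at-Risk at level \(\alpha\) corresponds to \(\mathcal{Q} = \{\, Q \ll P : dQ/dP \le 1/\alpha \,\}\), which only attends to the \(\alpha\)-tail of the distribution and hence illustrates the tail sensitivity mentioned above.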
A secondary, although related, interest of mine is fairness in machine learning and its interplay with uncertainty. Surprisingly, insurance has deep conceptual parallels to machine learning, and similar fairness problems have arisen there. In a recent paper we explore these parallels. In fact, much of the controversy again revolves around the nature of expectation (and probability), and so risk measures show up here, too.
Find me here: Twitter · Google Scholar
christian (dot) froehlich (at) uni-tuebingen (dot) de