
I've Googled this question in several different ways and I get a lot of hits, but nothing answering the question.

To flesh out the subject line a little, I'm interested in understanding where, in the big scheme of things, Kolmogorov's 1933 work stands. Was it just a "very nice to have a sound mathematical basis for probability" sort of thing, or did that sound basis allow for developments in statistics that likely could not have come about without it?

TonyK

1 Answer


This is not going to directly answer your question, but it's too long for a comment.

From Chapter 1 of "The Theory of Statistics and Its Applications" (https://www.stat.rice.edu/~dcox/Stat581/chap1-2.pdf):

Measure theory is a rather difficult and dry subject, and many statisticians believe it is unnecessary to learn measure theory in order to understand statistics. To counter these views, we offer the following list of benefits from studying measure theory:

(i) A good understanding of measure theory eliminates the artificial distinction between discrete and continuous random variables. Summations become an example of the abstract integral, so one need not dichotomize proofs into the discrete and continuous cases, but can cover both at once.

(ii) One can understand probability models which cannot be classified as either discrete or continuous. Such models do arise in practice, e.g. when censoring a continuous lifetime and in Generalized Random Effects Models such as the Beta-Binomial.

(iii) The measure theoretic statistics presented here provides a basis for understanding complex problems that arise in the statistical inference of stochastic processes and other areas of statistics.

(iv) Measure theory provides a unifying theme for much of statistics. As an example, consider the notion of likelihoods, which are rather mysterious in some ways, but at least from a formal point of view are measure theoretically quite simple. As with many mathematical theories, if one puts in the initial effort to understand the theory, one is rewarded with a deeper and clearer understanding of the subject.

(v) Certain fundamental notions (such as conditional expectation) are arguably not completely understandable except from a measure theoretic point of view.

From the start of https://statmodeling.stat.columbia.edu/2009/05/27/the_benefits_of/

Stephen Senn quips: “A theoretical statistician knows all about measure theory but has never seen a measurement whereas the actual use of measure theory by the applied statistician is a set of measure zero.”

KCd
  • You are right that it doesn't directly answer my question, but it goes a long way toward giving me a sense of the importance of the measure-theoretic approach. So, thank you for that. I will check it out and upvote because it deserves it, but I invite others to weigh in. – TonyK Dec 17 '22 at 01:11
  • 1
    @TonyK I wonder if the way statisticians use probability is somewhat analogous to the way physicists use math: they are more concerned with what concepts mean for descriptions of the real world and which properties about those concepts are true or false, but not the fussy mathematical details needed to make everything completely rigorous as math. I once read that some physicist's (Weinberg?) textbook account of quantum mechanics described Hilbert spaces as having an inner product and he added as a comment that mathematicians also included a completeness axiom "for some reason"... – KCd Dec 17 '22 at 02:47
  • @KCd: You may be right. Certainly, many of the applied statisticians I met in years long gone by never fussed with measure theory. My question was prompted by reading that Markov chains etc. pre-dated Kolmogorov's book in 1933. I had naively assumed that such developments were the result of the measure-theoretic approach. That made me wonder what the approach gave us beyond the sort of mathematical purity that Hilbert had challenged the profession to produce. – TonyK Dec 17 '22 at 04:08
  • 1
    In what way did you think Markov chains need measure theory in order to be conceived? The idea behind a discrete time Markov chain (the only kind Markov looked at, I suspect) can be understood without any fancy tools at all. Many special cases of abstract general concepts in math (stochastic process, metric space, manifold, vector space, group) were studied before there was an abstract general definition, and it was the experience with those special cases that motivated the general definitions in the first place. Measure theory was much more important for probability than for statistics. – KCd Dec 17 '22 at 13:40
  • 1
    For probability theory, measure theory is essential since it gives probabilists the tools to know what their central concepts "really are" in a mathematical sense (e.g., what is a random variable, really? Or a stochastic process? Or Brownian motion?) and it offered techniques to prove limit theorems in settings where pre-measure-theoretic tools were not up to the task. Of course that pre-1933 work was all essential in suggesting concepts to study or points of view that pure reason might never lead you to (e.g., random variables and independence). – KCd Dec 17 '22 at 13:48
  • 2
    Neyman's 1937 paper "Outline of a Theory of Statistical Estimation Based on the Classical Theory of Probability" https://royalsocietypublishing.org/doi/10.1098/rsta.1937.0005 is an example. Of course K's 1933 book was not the beginning of measure theoretic probability theory, so much as the culmination of 3 decades of work, building on (for instance) Lebesgue and Borels contributions to measure theory. Neyman was well acquainted with measure theory. – kimchi lover Dec 17 '22 at 16:41
  • Thank you both. These comments and explanations have been very helpful. The links, too, look to be very useful, although clearly I have not yet had the time to read them carefully. – TonyK Dec 17 '22 at 17:41
  • 1
    -1. Even if ``Measure theory was much more important for probability than for statistics" (and where is your evidence for this, please?), it does not imply that no advances in statistics ever were brought on with the help of measure theory. Neyman's work on confidence intervals is a prime example indeed. It should be made to an answer (and then I would comment more). – Margaret Friedland Dec 18 '22 at 00:17
  • 1
    @MargaretFriedland if you can post an answer about Neyman's work, please do so. As for measure theory being more important for probability than for statistics, that is simply what I have observed, e.g., there are academic programs in statistics that don't require measure theory and there are faculty in statistics (in the applied direction) that don't know measure theory. My impression is that non-academic statisticians don't typically use measure theory in their work (even if it relies on concepts originally motivated by measure theory, but can be understood without it). – KCd Dec 18 '22 at 15:39
  • If there are specific developments in statistics attributable to the measure-theoretic definition of probability, it would be nice to hear about them! That said, this discussion has already been helpful. – TonyK Dec 19 '22 at 00:05