Once you start working with probabilities, you quickly realize that the assumption of all possible outcomes being equally likely isn’t always reasonable. For example, if you think a coin is biased then you won’t want to assume that it lands heads with the same chance as tails.

To deal with settings in which some outcomes have a higher chance than others, a more general theory is needed. In the 1930s, the Russian mathematician Andrey Kolmogorov (1903-1987) formulated some ground rules, known as axioms, that covered a rich array of settings and became the foundation of modern probability theory.

The axioms start out with an outcome space $\Omega$. We will assume $\Omega$ to be finite for now. Probability is a function $P$ defined on events, which as you know are subsets of $\Omega$. The first two axioms just set the scale of measurement: they define probabilities to be numbers between 0 and 1.

  • Probabilities are non-negative: for each event $A$, $P(A) \ge 0$.

  • The probability of the whole space is 1: $P(\Omega) = 1$.
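The two axioms above can be checked concretely on a small finite outcome space. Below is a minimal sketch in Python, using a hypothetical biased coin (the distribution and the names `omega` and `P` are illustrative assumptions, not part of the text): a probability function on a finite $\Omega$ assigns a chance to each outcome, and the probability of an event is the sum of the chances of its outcomes.

```python
from fractions import Fraction

# Hypothetical finite outcome space for a biased coin:
# heads is assumed twice as likely as tails.
omega = {"H": Fraction(2, 3), "T": Fraction(1, 3)}

def P(event):
    """Probability of an event (a subset of omega's outcomes),
    computed as the sum of the probabilities of its outcomes."""
    return sum(omega[outcome] for outcome in event)

# Axiom 1: probabilities are non-negative.
assert all(P({outcome}) >= 0 for outcome in omega)

# Axiom 2: the probability of the whole space is 1.
assert P(set(omega)) == 1
```

Using exact `Fraction` arithmetic avoids floating-point rounding when checking that the probabilities sum to exactly 1.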

The third and final axiom is the key to probability being a “measure” of an event. We will study it after we have developed some relevant terminology.