In probability theory, a probability space or a probability triple (Ω, F, P) is a mathematical construct that provides a formal model of a random process or "experiment". For example, one can define a probability space which models the throwing of a die.
A probability space consists of three elements:[1][2] a sample space Ω, which is the set of all possible outcomes of the random process under consideration; an event space F, which is a set of events, an event being a subset of outcomes in the sample space; and a probability function P, which assigns to each event in the event space a probability between 0 and 1.
In order to provide a model of probability, these elements must satisfy probability axioms.
In the example of the throw of a standard die: the sample space Ω is typically the set {1, 2, 3, 4, 5, 6}, where each element represents a number of dots on the upper face of the die; the event space F can be taken to be the set of all subsets of Ω, so that, for example, {5} represents "the die lands on 5" and {2, 4, 6} represents "an even number is rolled"; and the probability function P maps each event to the number of outcomes in that event divided by 6, so that {5} is assigned probability 1/6 and {2, 4, 6} is assigned probability 3/6 = 1/2. A minimal construction of this triple is sketched below.
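The following short Python sketch is one way to write out the die triple explicitly; it is illustrative only (the helper names are not from any standard library), using the power set of Ω as the event space and the uniform measure:

```python
from fractions import Fraction
from itertools import chain, combinations

# Sample space for one throw of a standard die.
omega = frozenset({1, 2, 3, 4, 5, 6})

def power_set(s):
    """All subsets of s -- here used as the event space F."""
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

F = power_set(omega)

def P(event):
    """Uniform probability measure: |event| / |omega|."""
    return Fraction(len(event), len(omega))

assert P(omega) == 1                       # P(Omega) = 1
even, odd = frozenset({2, 4, 6}), frozenset({1, 3, 5})
assert P(even | odd) == P(even) + P(odd)   # additivity for disjoint events
print(len(F), P(even))                     # 64 1/2
```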
When an experiment is conducted, it results in exactly one outcome from the sample space. All the events in the event space that contain the selected outcome are said to "have occurred". The probability function must be so defined that if the experiment were repeated arbitrarily many times, the number of occurrences of each event, as a fraction of the total number of experiments, would most likely tend towards the probability assigned to that event.
The Soviet mathematician Andrey Kolmogorov introduced the notion of a probability space and the axioms of probability in the 1930s. In modern probability theory, there are alternative approaches for axiomatization, such as the algebra of random variables.
A probability space is a mathematical triplet (Ω, F, P) that presents a model for a particular class of real-world situations. As with other models, its author ultimately defines which elements Ω, F, and P will contain.
Not every subset of the sample space must necessarily be considered an event: some of the subsets are simply not of interest, others cannot be "measured". This is not so obvious in a case like a coin toss. In a different example, one could consider javelin throw lengths, where the events typically are intervals like "between 60 and 65 meters" and unions of such intervals, but not sets like the "irrational numbers between 60 and 65 meters".
In short, a probability space is a measure space such that the measure of the whole space is equal to one.
The expanded definition is the following: a probability space is a triple (Ω, F, P) consisting of: the sample space Ω, an arbitrary non-empty set; the σ-algebra F ⊆ 2^Ω (also called the event space), a set of subsets of Ω, called events, that contains Ω itself and is closed under complements and countable unions; and the probability measure P: F → [0, 1], a countably additive function on F with P(Ω) = 1.
Discrete probability theory needs only at most countable sample spaces Ω. Probabilities can be ascribed to points of Ω by the probability mass function p: Ω → [0, 1] such that ∑_{ω ∈ Ω} p(ω) = 1. All subsets of Ω can be treated as events (thus, F = 2^Ω is the power set of Ω). The probability measure takes the simple form
P(A) = ∑_{ω ∈ A} p(ω)   for all A ⊆ Ω.   (⁎)
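Read as code, the formula (⁎) is simply a sum of the mass function over the elements of the event. A minimal sketch, assuming a hypothetical biased die as the discrete space:

```python
from fractions import Fraction

# Probability mass function p for a hypothetical biased die; the values sum to 1.
p = {1: Fraction(1, 10), 2: Fraction(1, 10), 3: Fraction(1, 10),
     4: Fraction(1, 10), 5: Fraction(1, 10), 6: Fraction(1, 2)}

def P(event):
    """P(A) = sum of p(omega) over omega in A, for any subset A of the sample space."""
    return sum(p[w] for w in event)

print(P({2, 4, 6}))   # 7/10 -- probability of an even throw
print(P(p.keys()))    # 1   -- the whole sample space has probability one
```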
The greatest σ-algebra F = 2^Ω describes the complete information. In general, a σ-algebra F ⊆ 2^Ω corresponds to a finite or countable partition Ω = B1 ∪ B2 ∪ ⋯, the general form of an event A ∈ F being A = Bk1 ∪ Bk2 ∪ ⋯. See also the examples.
The case p(ω) = 0 is permitted by the definition, but rarely used, since such ω can safely be excluded from the sample space.
If Ω is uncountable, still, it may happen that P(ω) ≠ 0 for some ω; such ω are called atoms. They are an at most countable (maybe empty) set, whose probability is the sum of probabilities of all atoms. If this sum is equal to 1 then all other points can safely be excluded from the sample space, returning us to the discrete case. Otherwise, if the sum of probabilities of all atoms is between 0 and 1, then the probability space decomposes into a discrete (atomic) part (maybe empty) and a non-atomic part.
If P(ω) = 0 for all ω ∈ Ω (in this case, Ω must be uncountable, because otherwise P(Ω) = 1 could not be satisfied), then equation (⁎) fails: the probability of a set is not necessarily the sum over the probabilities of its elements, as summation is only defined for countable numbers of elements. This makes the probability space theory much more technical. A formulation stronger than summation, measure theory, is applicable. Initially the probabilities are ascribed to some "generator" sets (see the examples). Then a limiting procedure allows assigning probabilities to sets that are limits of sequences of generator sets, or limits of limits, and so on. All these sets form the σ-algebra F. For technical details see Carathéodory's extension theorem. Sets belonging to F are called measurable. In general they are much more complicated than generator sets, but much better than non-measurable sets.
A probability space (Ω, F, P) is said to be a complete probability space if for all B ∈ F with P(B) = 0 and all A ⊆ B one has A ∈ F. Often, the study of probability spaces is restricted to complete probability spaces.
If the experiment consists of just one flip of a fair coin, then the outcome is either heads or tails: Ω = {H, T}. The σ-algebra F = 2^Ω contains 2^2 = 4 events, namely: {H} ("heads"), {T} ("tails"), {} ("neither heads nor tails"), and {H, T} ("either heads or tails"); in other words, F = {{}, {H}, {T}, {H, T}}. There is a fifty percent chance of tossing heads and fifty percent for tails, so the probability measure in this example is P({}) = 0, P({H}) = 0.5, P({T}) = 0.5, P({H, T}) = 1.
The fair coin is tossed three times. There are 8 possible outcomes: Ω = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT} (here "HTH", for example, means that the first time the coin landed heads, the second time tails, and the last time heads again). The complete information is described by the σ-algebra F = 2^Ω of 2^8 = 256 events, where each of the events is a subset of Ω.
Alice knows the outcome of the second toss only. Thus her incomplete information is described by the partition Ω = A1 ⊔ A2 = {HHH, HHT, THH, THT} ⊔ {HTH, HTT, TTH, TTT}, where ⊔ is the disjoint union, and the corresponding σ-algebra F_Alice = {{}, A1, A2, Ω}. Bryan knows only the total number of tails. His partition contains four parts: Ω = B0 ⊔ B1 ⊔ B2 ⊔ B3 = {HHH} ⊔ {HHT, HTH, THH} ⊔ {TTH, THT, HTT} ⊔ {TTT}; accordingly, his σ-algebra F_Bryan contains 2^4 = 16 events.
The two σ-algebras are incomparable: neither F_Alice ⊆ F_Bryan nor F_Bryan ⊆ F_Alice; both are sub-σ-algebras of 2^Ω.
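For a finite Ω, the σ-algebra generated by a partition is simply the family of all unions of its blocks, which makes Alice's and Bryan's situations easy to check by brute force. A sketch under that assumption (the helper sigma_algebra is illustrative, not a library function):

```python
from itertools import chain, combinations, product

# Omega for three tosses of a coin, e.g. 'HTH'.
omega = frozenset(''.join(t) for t in product('HT', repeat=3))

def sigma_algebra(partition):
    """All unions of blocks of a finite partition -- the sigma-algebra it generates."""
    blocks = list(partition)
    unions = chain.from_iterable(combinations(blocks, r) for r in range(len(blocks) + 1))
    return {frozenset().union(*u) for u in unions}

# Alice's partition: by the result of the second toss.
alice = sigma_algebra([frozenset(w for w in omega if w[1] == 'H'),
                       frozenset(w for w in omega if w[1] == 'T')])

# Bryan's partition: by the total number of tails.
bryan = sigma_algebra([frozenset(w for w in omega if w.count('T') == k) for k in range(4)])

print(len(alice), len(bryan))          # 4 16
print(alice <= bryan, bryan <= alice)  # False False -- the two sigma-algebras are incomparable
```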
If 100 voters are to be drawn randomly from among all voters in California and asked whom they will vote for governor, then the set of all sequences of 100 Californian voters would be the sample space Ω. We assume that sampling without replacement is used: only sequences of 100 different voters are allowed. For simplicity an ordered sample is considered, that is, a sequence (Alice, Bryan) is different from (Bryan, Alice). We also take for granted that each potential voter knows exactly his/her future choice, that is, he/she does not choose randomly.
Alice knows only whether or not Arnold Schwarzenegger has received at least 60 votes. Her incomplete information is described by the σ-algebra that contains: (1) the set of all sequences in Ω where at least 60 people vote for Schwarzenegger; (2) the set of all sequences where fewer than 60 vote for Schwarzenegger; (3) the whole sample space Ω; and (4) the empty set ∅.
Bryan knows the exact number of voters who are going to vote for Schwarzenegger. His incomplete information is described by the corresponding partition Ω = B0 ⊔ B1 ⊔ ⋯ ⊔ B100, and his σ-algebra consists of 2^101 events.
In this case, Alice's σ-algebra is a subset of Bryan's. Bryan's σ-algebra is in turn a subset of the much larger "complete information" σ-algebra 2^Ω consisting of 2^(n(n−1)⋯(n−99)) events, where n is the number of all potential voters in California.
A number between 0 and 1 is chosen at random, uniformly. Here Ω = [0, 1], F is the σ-algebra of Borel sets on Ω, and P is the Lebesgue measure on [0, 1].
In this case, the open intervals of the form (a, b), where 0 < a < b < 1, could be taken as the generator sets. Each such set can be ascribed the probability of P((a, b)) = (b − a), which generates the Lebesgue measure on [0, 1], and the Borel σ-algebra on Ω.
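A quick numerical illustration of this assignment, assuming the standard library's uniform generator as a stand-in for the space: the empirical frequency of landing in (a, b) approaches b − a.

```python
import random

random.seed(0)
a, b, n = 0.25, 0.6, 100_000
hits = sum(1 for _ in range(n) if a < random.random() < b)
print(hits / n, b - a)   # the empirical frequency should be close to b - a = 0.35
```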
A fair coin is tossed endlessly. Here one can take Ω = {0, 1}^∞, the set of all infinite sequences of numbers 0 and 1. Cylinder sets {(x1, x2, ...) ∈ Ω : x1 = a1, ..., xn = an} may be used as the generator sets. Each such set describes an event in which the first n tosses have resulted in a fixed sequence (a1, ..., an), and the rest of the sequence may be arbitrary. Each such event can be naturally given the probability of 2^−n.
These two non-atomic examples are closely related: a sequence (x1, x2, ...) ∈ {0, 1}^∞ leads to the number 2^−1·x1 + 2^−2·x2 + ⋯ ∈ [0, 1]. This is not a one-to-one correspondence between {0, 1}^∞ and [0, 1], however: it is an isomorphism modulo zero, which allows for treating the two probability spaces as two forms of the same probability space. In fact, all non-pathological non-atomic probability spaces are the same in this sense. They are so-called standard probability spaces. Basic applications of probability spaces are insensitive to standardness. However, non-discrete conditioning is easy and natural on standard probability spaces, otherwise it becomes obscure.
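The correspondence can be sampled directly: draw fair bits and map them to a point of [0, 1] by the binary-expansion formula above. The sketch below truncates to finitely many bits, which is only an approximation of the infinite sequence space:

```python
import random

random.seed(1)

def tosses_to_number(n_bits=53):
    """Map the first n_bits fair-coin tosses (x1, x2, ...) to the sum over k of 2^-k * xk."""
    return sum(random.getrandbits(1) * 2.0 ** -(k + 1) for k in range(n_bits))

samples = [tosses_to_number() for _ in range(100_000)]
# The result behaves like a uniform draw from [0, 1]: mean near 0.5,
# and about half of the samples fall below 0.5.
print(sum(samples) / len(samples), sum(s < 0.5 for s in samples) / len(samples))
```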
A random variable X is a measurable function X: Ω → S from the sample space Ω to another measurable space S called the state space.
If A ⊂ S, the notation Pr(X ∈ A) is a commonly used shorthand for P({ω ∈ Ω : X(ω) ∈ A}).
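On a finite space this shorthand is just the measure of the preimage of A under X, which a short sketch can make explicit (the particular X and A below are illustrative):

```python
from fractions import Fraction

# Two throws of a fair die; X(omega) is the sum of the two throws, a function from
# Omega into the state space S = {2, ..., 12}.
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def P(event):
    return Fraction(len(event), len(omega))

def X(w):
    return w[0] + w[1]

A = {7, 11}
preimage = [w for w in omega if X(w) in A]
print(P(preimage))   # Pr(X in A) = P({omega : X(omega) in A}) = 2/9
```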
If Ω is countable, we almost always define F as the power set of Ω, i.e. F = 2^Ω, which is trivially a σ-algebra and the biggest one we can create using Ω. We can therefore omit F and just write (Ω, P) to define the probability space.
On the other hand, if Ω is uncountable and we use F = 2^Ω, we get into trouble defining our probability measure P because F is too "large", i.e. there will often be sets to which it will be impossible to assign a unique measure. In this case, we have to use a smaller σ-algebra F, for example the Borel algebra of Ω, which is the smallest σ-algebra that makes all open sets measurable.
Kolmogorov's definition of probability spaces gives rise to the natural concept of conditional probability. Every set A with non-zero probability (that is, P(A) > 0) defines another probability measure P(B | A) = P(B ∩ A) / P(A) on the space. This is usually pronounced as the "probability of B given A".
For any event A such that P(A) > 0, the function Q defined by Q(B) = P(B | A) for all events B is itself a probability measure.
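On a discrete space, the fact that Q is again a probability measure can be checked directly; a minimal sketch with an illustrative conditioning event A:

```python
from fractions import Fraction

omega = frozenset({1, 2, 3, 4, 5, 6})   # one throw of a fair die

def P(event):
    return Fraction(len(event), len(omega))

A = frozenset({2, 4, 6})                # condition on the throw being even

def Q(B):
    """Q(B) = P(B | A) = P(B intersect A) / P(A)."""
    return P(frozenset(B) & A) / P(A)

assert Q(omega) == 1                         # Q is normalised
assert Q({2}) + Q({4, 6}) == Q({2, 4, 6})    # Q is additive over disjoint events
print(Q({2}))                                # 1/3
```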
Two events, A and B, are said to be independent if P(A ∩ B) = P(A)P(B).
Two random variables, X and Y, are said to be independent if any event defined in terms of X is independent of any event defined in terms of Y. Formally, they generate independent σ-algebras, where two σ-algebras G and H, which are subsets of F, are said to be independent if any element of G is independent of any element of H.
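When Ω is finite, independence of two random variables can be verified exhaustively by checking the factorisation Pr(X ∈ A, Y ∈ B) = Pr(X ∈ A) Pr(Y ∈ B) over all pairs of events in their state spaces. A brute-force sketch with two fair dice (illustrative, and feasible only because the spaces are tiny):

```python
from fractions import Fraction
from itertools import chain, combinations

# Two fair dice thrown together; X is the first value, Y the second.
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def P(pred):
    return Fraction(sum(1 for w in omega if pred(w)), len(omega))

def X(w): return w[0]
def Y(w): return w[1]

values = range(1, 7)
subsets = [set(c) for c in
           chain.from_iterable(combinations(values, r) for r in range(7))]

independent = all(
    P(lambda w: X(w) in A and Y(w) in B) == P(lambda w: X(w) in A) * P(lambda w: Y(w) in B)
    for A in subsets for B in subsets
)
print(independent)   # True -- every event about X is independent of every event about Y
```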
Two events, A and B, are said to be mutually exclusive or disjoint if the occurrence of one implies the non-occurrence of the other, i.e., their intersection is empty. This is a stronger condition than the probability of their intersection being zero.
If A and B are disjoint events, then P(A ∪ B) = P(A) + P(B). This extends to a (finite or countably infinite) sequence of events. However, the probability of the union of an uncountable set of events is not the sum of their probabilities. For example, if Z is a normally distributed random variable, then P(Z = x) is 0 for any x, but P(Z ∈ ℝ) = 1.
The event A ∩ B is referred to as "A and B", and the event A ∪ B as "A or B".