Conditional expectation
In probability theory, a conditional expectation (also known as conditional mean) is the expected value of a real random variable with respect to a conditional probability distribution.
Special cases
In the simplest case, if A is an event whose probability is not 0, then

P_A(S) = \frac{P(S \cap A)}{P(A)},

as a function of S, is a probability measure on A, and E(X | A) is the expectation of X with respect to this probability P_A. In case X is a discrete random variable (that is, a random variable which with probability 1 takes on only countably many values) with finite first moment, the expectation is explicitly given by

E(X \mid A) = \sum_r r \, P_A\{X = r\} = \sum_r r \, \frac{P(\{X = r\} \cap A)}{P(A)},
where {X = r} is the event that X takes on the value r. Since X has finite first moment, it can be shown that this sum converges absolutely. The sum has only countably many nonzero terms, since {X = r} has probability 0 for all but countably many values of r.
If X is the indicator function of an event S, then E(X | A) is just the conditional probability PA(S).
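As a concrete check of the discrete formula above, here is a minimal Python sketch; the fair-die setup is an invented example. With X the value of a fair six-sided die and A the event that the roll is even, the formula gives E(X | A) = (2 + 4 + 6)/3 = 4.

    from fractions import Fraction

    # Sample space: a fair six-sided die; P assigns 1/6 to each face.
    omega = [1, 2, 3, 4, 5, 6]
    P = {w: Fraction(1, 6) for w in omega}

    A = {w for w in omega if w % 2 == 0}    # conditioning event: "roll is even"
    P_of_A = sum(P[w] for w in A)           # P(A) = 1/2

    # E(X | A) = sum over r of r * P({X = r} ∩ A) / P(A); here X(w) = w.
    E_X_given_A = sum(r * P[r] / P_of_A for r in A)
    print(E_X_given_A)                      # 4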
If Y is another real random variable, then for each value y we consider the event {Y = y}. The conditional expectation E(X | Y = y) is shorthand for E(X | {Y = y}). In general this may not be defined, since {Y = y} may have zero probability. (Note that the notation is case-sensitive: the capital Y denotes the random variable and the lower-case y one of its possible values, so they refer to different things.)
The way out of this limitation is as follows: if both X and Y are discrete random variables, then for any subset B of the range of Y with P(Y ∈ B) > 0,

E(X \mid Y \in B) = \frac{E\left(X \, 1_{\{Y \in B\}}\right)}{P(Y \in B)},

where 1 is the indicator function. For general random variables Y, however, P{Y = r} is zero for almost every r. As a first step in dealing with this problem, let us consider the case in which Y has a continuous distribution. This means there is a non-negative integrable function φ_Y on R which is the density of Y, that is,

P(Y \le a) = \int_{-\infty}^{a} \varphi_Y(t) \, dt
for any a in R. We can then show the following: for any integrable random variable X, there is a function g on R such that

E\left(X \, 1_{\{Y \le a\}}\right) = \int_{-\infty}^{a} g(y) \, \varphi_Y(y) \, dy.
This function g is a suitable candidate for the conditional expectation.
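A Monte Carlo sketch may make g more tangible. The model below (X = Y² + ε with Y and ε independent standard normals) is an invented example, chosen because there g(y) = y² exactly; bin averages of X given Y recover g approximately, and both sides of the identity above can be estimated from the same samples.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    Y = rng.standard_normal(n)
    X = Y**2 + rng.standard_normal(n)        # here E(X | Y = y) = y^2, so g(y) = y^2

    # Estimate g by averaging X over narrow bins of Y.
    bins = np.linspace(-2, 2, 41)
    idx = np.digitize(Y, bins)
    for b in (5, 20, 35):                    # a few sample bins
        y_mid = 0.5 * (bins[b - 1] + bins[b])
        g_hat = X[idx == b].mean()
        print(f"y ~ {y_mid:+.2f}:  g_hat = {g_hat:.3f},  y^2 = {y_mid**2:.3f}")

    # Check E(X 1_{Y <= a}) against the integral of g(y) φ_Y(y) up to a,
    # estimating the integral by Monte Carlo as E(Y^2 1_{Y <= a}).
    a = 0.5
    lhs = (X * (Y <= a)).mean()
    rhs = ((Y**2) * (Y <= a)).mean()
    print(lhs, rhs)                          # approximately equal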
In order to handle the general case, we need more powerful mathematical machinery.
Mathematical formalism
Let X, Y be real random variables on some probability space (Ω, M, P), where M is the σ-algebra of measurable sets on which P is defined. We consider two measures on R:
- Q, the law of Y, defined by Q(B) = P(Y^{-1}(B)) for every Borel subset B of R; this is a probability measure on the real line R.
- P_X, given by

P_X(B) = \int_{Y^{-1}(B)} X \, dP

for every Borel subset B of R.
If X is an integrable random variable, then P_X is absolutely continuous with respect to Q. In this case, it can be shown that the Radon–Nikodym derivative of P_X with respect to Q exists; moreover, it is uniquely determined almost everywhere with respect to Q. This function of y, usually written E(X | Y = y), is the conditional expectation of X given Y, or more accurately a version of the conditional expectation of X given Y.
It follows that the conditional expectation satisfies

\int_{Y^{-1}(B)} X \, dP = \int_B E(X \mid Y = y) \, dQ(y)

for any Borel subset B of R.
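On a finite probability space the Radon–Nikodym derivative can be computed atom by atom, which makes the definition easy to check. The following sketch uses an invented six-point space; dP_X/dQ at an atom {y} is simply P_X({y})/Q({y}).

    from fractions import Fraction

    # An invented six-point sample space with unequal probabilities.
    P = {0: Fraction(1, 12), 1: Fraction(2, 12), 2: Fraction(3, 12),
         3: Fraction(2, 12), 4: Fraction(3, 12), 5: Fraction(1, 12)}
    X = {w: w * w for w in P}      # an integrable random variable
    Y = {w: w // 2 for w in P}     # Y takes the values 0, 1, 2

    values = set(Y.values())
    Q   = {y: sum(P[w] for w in P if Y[w] == y) for y in values}        # law of Y
    P_X = {y: sum(X[w] * P[w] for w in P if Y[w] == y) for y in values}
    g   = {y: P_X[y] / Q[y] for y in values}   # Radon–Nikodym derivative dP_X/dQ

    # Check the defining identity for B = {0, 2}.
    B = {0, 2}
    lhs = sum(X[w] * P[w] for w in P if Y[w] in B)   # integral of X over Y^{-1}(B)
    rhs = sum(g[y] * Q[y] for y in B)                # integral of dP_X/dQ over B
    print(lhs == rhs)                                # True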
Conditioning as factorization
In the definition of conditional expectation that we provided above, the fact that Y is a real random variable is irrelevant: let U be a measurable space, that is, a set equipped with a σ-algebra of subsets. A U-valued random variable is a function Y: Ω → U such that Y^{-1}(B) is an element of M for any measurable subset B of U.
We consider the measure Q on U given as above: Q(B) = P(Y^{-1}(B)) for every measurable subset B of U. Then Q is a probability measure on the measurable space U, defined on its σ-algebra of measurable sets.
Theorem. If X is an integrable real random variable on Ω, then there is one and, up to equivalence a.e. relative to Q, only one integrable function g on U such that for any measurable subset B of U:

\int_{Y^{-1}(B)} X \, dP = \int_B g \, dQ.
There are a number of ways of proving this; one, as suggested above, is to note that the expression on the left-hand side defines, as a function of the set B, a countably additive measure on the measurable subsets of U. Moreover, this measure is absolutely continuous relative to Q. Indeed, Q(B) = 0 means exactly that Y^{-1}(B) has probability 0, and the integral of an integrable function over a set of probability 0 is itself 0. This proves absolute continuity, and the Radon–Nikodym theorem then yields g.
The defining condition of conditional expectation then is the equation

\int_{Y^{-1}(B)} X \, dP = \int_B E(X \mid Y) \, dQ.
We can further interpret this equality by considering the abstract change of variables formula to transport the integral on the right-hand side to an integral over Ω:

\int_{Y^{-1}(B)} X \, dP = \int_{Y^{-1}(B)} \left( E(X \mid Y) \circ Y \right) dP.
This equation can be interpreted to say that the diagram consisting of X : Ω → R on one side, and of Y : Ω → U followed by E(X | Y) : U → R on the other, is commutative "in the average": the integrals of X and of the composition E(X | Y) ∘ Y over sets of the form Y^{-1}(B), for B measurable, are identical.
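The factorization through U need not involve numbers at all on the U side. The sketch below (labels and probabilities invented) takes U to be a two-element label set, builds g atom by atom, and checks that X and g ∘ Y integrate identically over a set of the form Y^{-1}(B).

    from fractions import Fraction

    # Ω = {0,...,5} with uniform P; Y maps each point into the label set U.
    P = {w: Fraction(1, 6) for w in range(6)}
    X = {w: w for w in range(6)}
    Y = {0: "low", 1: "low", 2: "low", 3: "high", 4: "high", 5: "high"}

    U = set(Y.values())
    Q = {u: sum(P[w] for w in P if Y[w] == u) for u in U}              # pushforward of P
    g = {u: sum(X[w] * P[w] for w in P if Y[w] == u) / Q[u] for u in U}

    # Commutativity "in the average": X and g∘Y integrate identically
    # over every set of the form Y^{-1}(B).
    B = {"high"}
    lhs = sum(X[w] * P[w] for w in P if Y[w] in B)        # integral of X
    rhs = sum(g[Y[w]] * P[w] for w in P if Y[w] in B)     # integral of g∘Y
    print(lhs == rhs)                                     # True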
Conditioning relative to a subalgebra
There is another viewpoint for conditioning involving σ-subalgebras N of the σ-algebra M. This version is a trivial specialization of the preceding: we simply take U to be the space Ω with the σ-algebra N and Y the identity map. We state the result:
Theorem. If X is an integrable real random variable on Ω, then there is one and, up to equivalence a.e. relative to P, only one integrable function g such that for any set B belonging to the subalgebra N,

\int_B g \, dP = \int_B X \, dP,

where g is measurable with respect to N (a stricter condition than the measurability with respect to M required of X). This form of conditional expectation is usually written E(X | N). This version is preferred by probabilists. One reason is that on the space of square-integrable real random variables (in other words, real random variables with finite second moment), the mapping X → E(X | N) is the self-adjoint orthogonal projection

L^2(\Omega, M, P) \to L^2(\Omega, N, P).
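A numerical sketch of the projection property (the four-point space and partition are invented for illustration): when N is generated by a finite partition, E(X | N) is the block average of X, and the residual X − E(X | N) is orthogonal in L² to the indicator of every block, hence to every N-measurable random variable.

    from fractions import Fraction

    # Four-point space; N is the σ-algebra generated by the partition {0,1}, {2,3}.
    P = {0: Fraction(1, 4), 1: Fraction(1, 4), 2: Fraction(1, 4), 3: Fraction(1, 4)}
    X = {0: 1, 1: 3, 2: 2, 3: 6}
    blocks = [{0, 1}, {2, 3}]

    # E(X | N) is constant on each block, equal to the block average of X.
    EXN = {}
    for blk in blocks:
        avg = sum(X[w] * P[w] for w in blk) / sum(P[w] for w in blk)
        for w in blk:
            EXN[w] = avg

    # Orthogonality of the residual: E((X - E(X|N)) · 1_blk) = 0 for each block;
    # N-measurable functions are spanned by these indicators.
    for blk in blocks:
        print(sum((X[w] - EXN[w]) * P[w] for w in blk))   # 0 and 0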
Basic properties
Let (Ω, M, P) be a probability space.
- Conditioning with respect to a σ-subalgebra N is linear on the space of integrable real random variables.
- E(1|N) = 1
- Jensen's inequality holds: if f is a convex function, then f(E(X | N)) ≤ E(f(X) | N).
- Conditioning is a contractive projection L^s(Ω, M, P) → L^s(Ω, N, P) for any s ≥ 1. (A numerical check of these properties appears in the sketch below.)
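The following sketch checks each property on the same invented four-point space as above, with N generated by a two-block partition; the helper cond_exp is hypothetical, defined only for this example.

    from fractions import Fraction

    P = {0: Fraction(1, 4), 1: Fraction(1, 4), 2: Fraction(1, 4), 3: Fraction(1, 4)}
    blocks = [{0, 1}, {2, 3}]

    def cond_exp(Z):
        """E(Z | N) for N generated by `blocks`: the block average of Z."""
        out = {}
        for blk in blocks:
            avg = sum(Z[w] * P[w] for w in blk) / sum(P[w] for w in blk)
            for w in blk:
                out[w] = avg
        return out

    X = {0: 1, 1: 3, 2: 2, 3: 6}
    Y = {0: 4, 1: 0, 2: 1, 3: 1}
    EX, EY = cond_exp(X), cond_exp(Y)

    # Linearity: E(2X + Y | N) = 2 E(X | N) + E(Y | N).
    lin = cond_exp({w: 2 * X[w] + Y[w] for w in P})
    print(all(lin[w] == 2 * EX[w] + EY[w] for w in P))                # True

    # E(1 | N) = 1.
    print(all(v == 1 for v in cond_exp({w: 1 for w in P}).values()))  # True

    # Jensen's inequality with the convex function f(x) = x^2, pointwise.
    Ef = cond_exp({w: X[w] ** 2 for w in P})
    print(all(EX[w] ** 2 <= Ef[w] for w in P))                        # True

    # Contraction in L^1: E|E(X | N)| <= E|X|.
    print(sum(abs(EX[w]) * P[w] for w in P)
          <= sum(abs(X[w]) * P[w] for w in P))                        # True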
See also
- Law of total probability
- Law of total expectation
- Law of total variance
- Law of total cumulance (generalizes the other three)