Standard error (statistics)
In statistics, the standard error of a measurement, value, or quantity is the standard deviation of the process by which it was generated, after adjusting for sample size. In other words, the standard error is the standard deviation of the sampling distribution of a sample statistic (such as the sample mean, sample proportion, or sample correlation).
Standard errors provide simple measures of uncertainty in a value and are often used because:
- If the standard errors of several individual quantities are known, then the standard error of some function of those quantities can, in many cases, be easily calculated;
- Where the probability distribution of the value is known, they can be used to calculate an exact confidence interval;
- Where the probability distribution is unknown, inequalities such as Chebyshev's or the Vysochanskiï-Petunin inequality can be used to calculate a conservative confidence interval (see the sketch after this list); and
- As the sample size tends to infinity, the central limit theorem guarantees that the sampling distribution of the mean is asymptotically normal.
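As a minimal sketch of the two confidence-interval approaches above, the following Python snippet contrasts a normal-theory 95% interval with the conservative Chebyshev interval built from the same standard error. The data values are hypothetical, chosen only for illustration.

```python
import math
import statistics

sample = [4.8, 5.1, 5.0, 4.7, 5.3, 4.9, 5.2, 5.0, 4.6, 5.4]  # hypothetical data
n = len(sample)
mean = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(n)  # estimated standard error of the mean

# Normal-theory interval: appropriate when the sampling distribution of the
# mean is (approximately) normal; 1.96 is the 97.5th percentile of N(0, 1).
z = 1.96
normal_ci = (mean - z * se, mean + z * se)

# Chebyshev interval: P(|mean - mu| >= k*SE) <= 1/k^2 for any distribution
# with finite variance, so k = sqrt(1/0.05) guarantees at least 95% coverage.
k = math.sqrt(1 / 0.05)  # approximately 4.47
chebyshev_ci = (mean - k * se, mean + k * se)

print(f"mean = {mean:.3f}, SE = {se:.3f}")
print(f"95% normal CI:    ({normal_ci[0]:.3f}, {normal_ci[1]:.3f})")
print(f"95% Chebyshev CI: ({chebyshev_ci[0]:.3f}, {chebyshev_ci[1]:.3f})")
```

The Chebyshev interval is markedly wider than the normal-theory one, which is the price of making no distributional assumption beyond finite variance.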
The standard error of the mean of a sample from a population is the standard deviation of the sampling distribution of the mean, and may be estimated by the formula:

$$\operatorname{SE} = \frac{\sigma}{\sqrt{n}}$$

where σ is the standard deviation of the population distribution and n is the size (number of items) of the sample.
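The formula can be checked empirically: the standard deviation of many sample means should match σ/√n. The following simulation sketch, with assumed illustrative values σ = 2 and n = 25, does exactly that.

```python
import math
import random
import statistics

random.seed(0)
sigma, n, trials = 2.0, 25, 10_000  # assumed illustrative values

# Draw many samples of size n from a population with standard deviation
# sigma, recording the mean of each sample.
means = [
    statistics.mean(random.gauss(0.0, sigma) for _ in range(n))
    for _ in range(trials)
]

empirical_se = statistics.stdev(means)  # SD of the sampling distribution of the mean
theoretical_se = sigma / math.sqrt(n)   # sigma / sqrt(n)

print(f"empirical SE:   {empirical_se:.4f}")
print(f"theoretical SE: {theoretical_se:.4f}")  # both come out near 0.4
```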