Variance measures the spread of a sample or population, calculated by averaging the squared differences between each item and the mean. Standard deviation is the square root of variance and is more intuitive. Regression and ANOVA use variance to predict outcomes and to classify sources of variance, respectively.
Variance, like range, is a statistic that describes the spread of a given sample or population. For a population, it is calculated by summing the squared differences between each item and the mean, then dividing that total by the number of items. (For a sample, the total is divided by one less than the number of items, which corrects for bias in the estimate.) The more tightly a population is clustered around the mean, the smaller the variance will be.
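The calculation above can be sketched in a few lines of Python; the data here are hypothetical, and the result is checked against the standard library's `statistics.pvariance`:

```python
import statistics

# Hypothetical population of six measurements clustered around 70
data = [68, 70, 71, 69, 72, 70]
mean = sum(data) / len(data)

# Population variance: average of the squared deviations from the mean
variance = sum((x - mean) ** 2 for x in data) / len(data)

print(mean)      # 70.0
print(variance)  # 1.666...
print(statistics.pvariance(data))  # same value via the standard library
```

Swapping the divisor `len(data)` for `len(data) - 1` gives the sample variance, which `statistics.variance` computes.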
A closely related statistic is the standard deviation, which is the square root of the variance. Standard deviation is used more frequently in descriptive statistics because it is more intuitive and shares the same units as the mean. In the normal distribution, which is the classic bell-shaped distribution curve common to many phenomena, just over 95% of the population will be within two standard deviations of the mean.
Variance is central to predictive statistical techniques such as regression and analysis of variance (ANOVA). A regression models a variable as the sum of one or more factors affecting that variable plus an error term, whose variance measures how far the actual observed values fall from their predicted values. For example, construction employment in a city could be modeled as a baseline, plus a seasonal adjustment for the time of year, plus an adjustment for the national economy, plus this error. Regression techniques seek the model that minimizes the residual variance, so that the predicted values come as close as possible to the values eventually observed.
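A minimal sketch of this idea is ordinary least squares for a straight-line model: the slope and intercept are chosen in closed form to minimize the variance of the residuals. The data are hypothetical monthly employment figures, not taken from the article:

```python
# Fit y = a + b*x by ordinary least squares, which minimizes the
# variance of the residuals (observed minus predicted values).
xs = [1, 2, 3, 4, 5, 6]          # month index (hypothetical)
ys = [100, 110, 115, 125, 128, 140]  # employment figures (hypothetical)

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares estimates of slope and intercept
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
residual_variance = sum(r ** 2 for r in residuals) / n
print(b, a, residual_variance)
```

A useful property of the least-squares fit is that the residuals sum to zero, so the model captures the mean of the data exactly and the leftover variance is as small as a straight line allows.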
ANOVA, commonly used in clinical trials, is a statistical technique for classifying sources of variance. Observations are classified according to one or more factors of interest in an experiment, and least squares techniques are used to partition the variance into random error, factor effects, and interaction effects, with the goal of determining how much influence each factor has on the variable. For example, a company testing a new fertilizer might run an ANOVA experiment with crop yield as the variable being studied and the fertilizer used and the amount of rainfall as the factors. How the new fertilizer compared to other fertilizers would be a factor effect in the experiment; if the new fertilizer outperformed its rivals under standard precipitation but not under heavy rainfall, that would be an interaction effect.
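The variance partition at the heart of ANOVA can be shown with a one-way example: total variation in yield splits exactly into a between-group (factor) component and a within-group (random error) component. The yield figures and group names below are invented for illustration:

```python
# One-way ANOVA sketch: partition total variation in crop yield into
# a factor effect (between groups) and random error (within groups).
groups = {
    "new_fertilizer": [54, 56, 58, 57],  # hypothetical yields
    "old_fertilizer": [50, 52, 51, 49],
}

all_obs = [y for ys in groups.values() for y in ys]
grand_mean = sum(all_obs) / len(all_obs)

# Total sum of squares around the grand mean
ss_total = sum((y - grand_mean) ** 2 for y in all_obs)

# Between-group component: how far each group mean sits from the grand mean
ss_between = sum(len(ys) * (sum(ys) / len(ys) - grand_mean) ** 2
                 for ys in groups.values())

# Within-group component: spread of observations around their own group mean
ss_within = sum((y - sum(ys) / len(ys)) ** 2
                for ys in groups.values() for y in ys)

print(ss_total)                 # equals ss_between + ss_within
print(ss_between + ss_within)

# F statistic: factor variance relative to error variance
df_between = len(groups) - 1
df_within = len(all_obs) - len(groups)
f_stat = (ss_between / df_between) / (ss_within / df_within)
print(f_stat)  # large values suggest the factor matters
```

A two-factor version of the same idea would add a second grouping (for example, rainfall level) and a cross term for the fertilizer-by-rainfall interaction.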