What is exponential smoothing?

Exponential smoothing is a technique that minimizes the effect of random variation in data to reveal underlying trends, but it can also eliminate important trends if used improperly. Simple exponential smoothing is the most basic form, using a recursive formula to transform data. Triple exponential smoothing accounts for trending and cyclical changes in data and is useful for analyzing unemployment figures. Exponential smoothing is a valuable tool for presenting data and making predictions, but care should be taken when interpreting smoothed data.

Exponential smoothing is a technique for manipulating data from a series of historical observations to minimize the effects of random variation. Mathematical modeling, the creation of a numerical simulation for a data set, often treats observed data as the sum of two or more components, one of which is random error: the difference between the observed value and the underlying true value. When applied correctly, smoothing techniques minimize the effect of random variation, making it easier to see the underlying phenomenon, which is a benefit both in presenting data and in predicting future values. They are referred to as “smoothing” techniques because they remove the jagged highs and lows associated with random variation and leave a smoother line or curve when the data is graphed. The downside of smoothing techniques is that, used improperly, they can eliminate important trends or cyclical changes within the data along with the random variation, and thus distort any predictions they offer.

The simplest smoothing technique is to take an average of past values. Unfortunately, this completely obscures any trends, changes, or cycles within the data. More complicated averages, such as a moving average that uses only the most recent observations or a weighted average that values some observations more than others, eliminate some, but not all, of this obscuring, and they still tend to lag when used for forecasting, not responding to a change in trend until several observations after it has occurred. Exponential smoothing is an attempt to improve on these flaws.
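As a rough illustration in Python, a plain moving average over the last few observations might look like the sketch below; the window of three and the data are arbitrary choices for the example, and the point is only that the average trails behind recent changes.

```python
def moving_average(observations, window=3):
    """Average of the most recent `window` observations at each point;
    early points use whatever history is available."""
    averaged = []
    for i in range(len(observations)):
        recent = observations[max(0, i - window + 1):i + 1]
        averaged.append(sum(recent) / len(recent))
    return averaged

# Hypothetical noisy series; note how the average trails the latest values
print(moving_average([10.2, 9.8, 10.5, 10.1, 11.7, 10.9, 11.3]))
```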

Simple exponential smoothing is the most basic form, using a simple recursive formula to transform the data. S_1, the first smoothed point, is simply equal to O_1, the first observed datum. For each subsequent point, the smoothed point is an interpolation between the previous smoothed value and the current observation: S_n = aO_n + (1 − a)S_{n−1}. The constant a, known as the smoothing constant, takes a value between zero and one and determines how much weight is given to the raw data and how much to the smoothed data. Statistical analysis to minimize random error usually determines the optimal value for a given data set.
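A minimal Python sketch of this recursion, using made-up observations and an arbitrarily chosen smoothing constant, might look like this:

```python
def simple_exponential_smoothing(observations, a):
    """Apply the recursion S_n = a*O_n + (1 - a)*S_{n-1}, starting from S_1 = O_1."""
    smoothed = [observations[0]]              # S_1 = O_1
    for obs in observations[1:]:
        smoothed.append(a * obs + (1 - a) * smoothed[-1])
    return smoothed

# Hypothetical observations and a smoothing constant chosen only for illustration
data = [10.2, 9.8, 10.5, 10.1, 11.7, 10.9, 11.3]
print(simple_exponential_smoothing(data, a=0.3))
```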

If the recursive formula for S_n is rewritten only in terms of the observed data, we obtain the formula S_n = aO_n + a(1 − a)O_{n−1} + a(1 − a)²O_{n−2} + …, revealing that the smoothed data is a weighted average of all the data, with weights that vary exponentially in a geometric series. This is the source of the word “exponential” in the phrase “exponential smoothing”. The closer the value of a is to one, the more responsive the smoothed data will be to changes in trend, but at the expense of also being more prone to following random variation in the data.
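To see the equivalence, a short check (again with made-up data) computes the expanded weighted sum directly and reproduces the recursive results, apart from floating-point rounding:

```python
def expanded_smoothing(observations, a):
    """Compute S_n directly as the weighted sum
    a*O_n + a*(1-a)*O_{n-1} + a*(1-a)^2*O_{n-2} + ...
    Because S_1 = O_1, the first observation keeps the leftover weight (1-a)^(n-1)."""
    results = []
    for n in range(1, len(observations) + 1):
        window = observations[:n]
        total = (1 - a) ** (n - 1) * window[0]
        for k, obs in enumerate(reversed(window[1:])):
            total += a * (1 - a) ** k * obs
        results.append(total)
    return results

# Matches the recursive version above, up to floating-point rounding
data = [10.2, 9.8, 10.5, 10.1, 11.7, 10.9, 11.3]
print(expanded_smoothing(data, a=0.3))
```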

The advantage of simple exponential smoothing is that it lets the smoothed data follow changes in the underlying values. It struggles, however, to separate changes in trend from the random variation inherent in the data. For this reason, double and triple exponential smoothing are also used; they introduce additional constants and more complicated recursions to account for trend and cyclical change in the data.
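Double exponential smoothing can be formulated in several ways; one common version, often attributed to Holt, keeps a smoothed level and a smoothed trend with separate constants. The sketch below uses invented numbers and hand-picked constants purely for illustration:

```python
def double_exponential_smoothing(observations, a, b):
    """Holt's linear method: keep a smoothed level and a smoothed trend,
    where a weights the level update and b weights the trend update."""
    level = observations[0]
    trend = observations[1] - observations[0]   # one common, simple initialization
    smoothed = [level]
    for obs in observations[1:]:
        prev_level = level
        level = a * obs + (1 - a) * (prev_level + trend)
        trend = b * (level - prev_level) + (1 - b) * trend
        smoothed.append(level)
    return smoothed

# Illustrative trending series with hand-picked constants
trending = [3.0, 3.4, 4.1, 4.3, 5.0, 5.6, 5.9]
print(double_exponential_smoothing(trending, a=0.5, b=0.3))
```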

Unemployment data is an excellent example of data that benefits from triple exponential smoothing. Triple smoothing treats unemployment data as the sum of four factors: the inevitable random error in data collection, a baseline level of unemployment, cyclical seasonal variation affecting many industries, and an evolving trend that reflects the overall health of the economy. By assigning smoothing constants to the baseline, trend, and seasonal variation, triple smoothing makes it easier for the layman to see how unemployment varies over time. Choosing different constants will alter the look of the smoothed data, however, which is one reason economists can sometimes differ greatly in their predictions.
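A common form of triple smoothing is additive Holt-Winters smoothing. The sketch below assumes quarterly data (a season length of four) and uses made-up unemployment rates and hand-picked constants, so it illustrates the structure rather than any real analysis:

```python
def triple_exponential_smoothing(observations, season_length, a, b, c):
    """Additive Holt-Winters: smoothed baseline (level), trend, and seasonal indices."""
    # Crude initialization: level from the first season's average, trend from the
    # change between the first two seasons, seasonal indices from first-season deviations.
    first_season = observations[:season_length]
    second_season = observations[season_length:2 * season_length]
    level = sum(first_season) / season_length
    trend = (sum(second_season) - sum(first_season)) / season_length ** 2
    seasonal = [obs - level for obs in first_season]

    smoothed = []
    for i, obs in enumerate(observations):
        season_index = i % season_length
        prev_level = level
        level = a * (obs - seasonal[season_index]) + (1 - a) * (prev_level + trend)
        trend = b * (level - prev_level) + (1 - b) * trend
        seasonal[season_index] = c * (obs - level) + (1 - c) * seasonal[season_index]
        smoothed.append(level + seasonal[season_index])
    return smoothed

# Two years of hypothetical quarterly unemployment rates with a seasonal pattern
rates = [5.0, 5.6, 4.8, 5.2, 5.1, 5.8, 4.9, 5.4]
print(triple_exponential_smoothing(rates, season_length=4, a=0.4, b=0.2, c=0.3))
```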

Exponential smoothing is one of many methods of mathematically altering data to make more sense of the phenomenon that generated it. The calculations can be done in commonly available office software, so it is also a readily accessible technique. Used correctly, it is an invaluable tool for presenting data and making predictions. Used improperly, it can obscure important information along with the random variation, so care should be taken when working with smoothed data.
