# Hodrick–Prescott filter

The Hodrick–Prescott filter is a mathematical tool used in macroeconomics, especially in real business cycle theory, to separate the cyclical component of a time series from raw data. It is used to obtain a smoothed-curve representation of a time series, one that is more sensitive to long-term than to short-term fluctuations. The adjustment of the sensitivity of the trend to short-term fluctuations is achieved by modifying a multiplier $\lambda$. The filter was popularized in the field of economics in the 1990s by economists Robert J. Hodrick and Nobel Memorial Prize winner Edward C. Prescott.[1] However, it was first proposed much earlier by E. T. Whittaker in 1923.[2]

## The equation

The reasoning behind the methodology uses ideas related to the decomposition of time series. Let $y_t$ for $t = 1, 2, \ldots, T$ denote the logarithm of a time series variable. The series $y_t$ is made up of a trend component $\tau_t$, a cyclical component $c_t$, and an error component $\epsilon_t$, such that $y_t = \tau_t + c_t + \epsilon_t$.[3] Given an adequately chosen, positive value of $\lambda$, there is a trend component that will solve

$\min_{\tau}\left(\sum_{t = 1}^T {(y_t - \tau _t )^2 } + \lambda \sum_{t = 2}^{T - 1} {[(\tau _{t+1} - \tau _t) - (\tau _t - \tau _{t - 1} )]^2 }\right).\,$

The first term of the equation is the sum of the squared deviations $d_t = y_t - \tau_t$, which penalizes the cyclical component. The second term is a multiple $\lambda$ of the sum of the squares of the trend component's second differences; it penalizes variations in the growth rate of the trend component. The larger the value of $\lambda$, the higher the penalty. Hodrick and Prescott suggest 1600 as a value of $\lambda$ for quarterly data. Ravn and Uhlig (2002) state that $\lambda$ should vary by the fourth power of the observation frequency ratio; thus, $\lambda$ should equal 6.25 for annual data and 129,600 for monthly data.[4]
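The minimization has a closed-form solution: stacking the observations into a vector $y$ and collecting the second differences into a matrix $D$, the first-order condition gives $\tau = (I + \lambda D^{\top}D)^{-1} y$. A minimal sketch in Python, assuming this matrix formulation (the function name `hp_filter` and the dense solve are illustrative choices; production code would typically use sparse matrices or a library routine such as statsmodels' `hpfilter`):

```python
import numpy as np

def hp_filter(y, lamb=1600.0):
    """Decompose y into (trend, cycle) by solving the HP minimization.

    The first-order condition of the objective yields the linear system
    (I + lamb * D'D) tau = y, where D is the second-difference matrix.
    """
    y = np.asarray(y, dtype=float)
    T = len(y)
    # Each row of D computes tau[t+1] - 2*tau[t] + tau[t-1]; shape (T-2, T).
    D = np.diff(np.eye(T), n=2, axis=0)
    trend = np.linalg.solve(np.eye(T) + lamb * D.T @ D, y)
    return trend, y - trend
```

For an exactly linear series both penalty terms can be driven to zero, so the filter returns the series itself as the trend and a zero cycle, regardless of $\lambda$.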

## Drawbacks to the H–P filter

The Hodrick–Prescott filter is only optimal when:[5]

• Data exist in an I(2) trend. If one-time permanent shocks or split growth rates occur, the filter will generate shifts in the trend that do not actually exist.
• Noise in the data is approximately normally distributed.
• Analysis is purely historical and static (closed domain). Unlike a moving average, the minimization adjusts past values of the trend to fit the current state, so the filter produces misleading results when applied dynamically, regardless of the size of $\lambda$ used.

The standard two-sided HP filter is non-causal, as it is not purely backward-looking. Hence, it should not be used when estimating DSGE models based on recursive state-space representations (e.g., likelihood-based methods that make use of the Kalman filter). The reason is that the HP filter uses observations at $t+i,\ i>0$, to construct the trend estimate at time $t$, while the recursive setting assumes that only current and past states influence the current observation. One way around this is to use the one-sided Hodrick–Prescott filter.[6]
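One common construction of the one-sided filter re-runs the two-sided filter on the data observed up to each date $t$ and keeps only the endpoint of the resulting trend, so no future observation enters the estimate. A sketch under that assumption (function names are illustrative; short prefixes with fewer than three points are passed through unchanged):

```python
import numpy as np

def hp_trend(y, lamb):
    """Two-sided HP trend (closed-form solve) for a full sample y."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    if T < 3:  # too short to form a second difference; return data as trend
        return y.copy()
    D = np.diff(np.eye(T), n=2, axis=0)  # second-difference matrix, (T-2) x T
    return np.linalg.solve(np.eye(T) + lamb * D.T @ D, y)

def one_sided_hp(y, lamb=1600.0):
    """Causal variant: at each t, filter only the data observed up to t
    and keep the final trend value, so no future observation is used."""
    y = np.asarray(y, dtype=float)
    trend = np.array([hp_trend(y[: t + 1], lamb)[-1] for t in range(len(y))])
    return trend, y - trend
```

Because each trend value depends only on observations up to $t$, this variant is compatible with recursive state-space estimation, at the cost of re-solving the filter at every date.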