Value at risk
In economics and finance, value at risk (VaR) is a measure of how much the market value of an asset or of a portfolio of assets is likely to decrease over a certain time period (usually 1 day or 10 days) under normal market conditions. It is typically used by securities firms and investment banks to measure the market risk of their asset portfolios (market value at risk), but it is in fact a very general concept with broad applications. Other measures of risk include volatility/standard deviation, semivariance (or downside risk) and shortfall probability.
Details of the definition
VaR has three parameters:
- The time horizon (period) to be analyzed, i.e. the length of time over which one plans to hold the assets in the portfolio (the "holding period"). The typical holding period is 1 day, although a 10-day period is used, for example, to compute capital requirements under the European Capital Adequacy Directive (CAD). For some problems even a holding period of 1 year is appropriate.
- The confidence level at which the estimate is made. Popular confidence levels are 95% and 99%.
- The currency in which the value at risk (VaR) will be denominated.
The VaR is the maximum amount at risk of being lost from an investment (under 'normal' market conditions) over a given holding period, at a particular confidence level. As such, it is the converse of shortfall probability: it gives the amount to be lost with a given probability, rather than the probability that a given amount will be lost.
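In symbols, one common formalization is the following (a sketch only; sign conventions and the use of strict versus weak inequalities vary):

$\mathrm{VaR}_{\alpha}(L) = \inf\{\, l \in \mathbb{R} : \Pr(L > l) \le 1 - \alpha \,\}$

where L is the loss over the holding period and α is the confidence level (e.g. α = 0.95). Read this way, the shortfall probability fixes a loss amount l and asks for Pr(L > l), whereas VaR fixes the probability 1 − α and asks for the corresponding loss threshold.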
Note that VaR cannot anticipate changes in the composition of the portfolio during the day. Instead, it reflects the riskiness of the portfolio based on the portfolio's current composition.
Example
Consider a trading portfolio. Its market value in US dollars today is known, but its market value tomorrow is not known. The investment bank holding that portfolio might report that it has a 1-day VaR of $4 million at the 95% confidence level. This means that, provided normal conditions prevail over the next day, the bank can expect with a probability of 95% that the value of its portfolio will not decrease by more than $4 million during 1 day; equivalently, with a probability of 5% the value of its portfolio will decrease by $4 million or more during 1 day.
The key thing to note is that the target confidence level (95% in the above example) is the given parameter here; the output from the calculation ($4 million in the above example) is the maximum amount at risk (the value at risk) for that confidence level.
Common VaR calculation models
In the following, return means percentage change in value.
A variety of models exist for estimating VaR. Each model has its own set of assumptions, but the most common assumption is that historical market data is our best estimator for future changes. Common models include:
- (a) variance-covariance (VCV), assuming that risk factor returns are always (jointly) normally distributed and that the change in portfolio value is linearly dependent on all risk factor returns,
- (b) the historical simulation, assuming that asset returns in the future will have the same distribution as they had in the past (historical market data),
- (c) Monte Carlo simulation, where future asset returns are more or less randomly simulated.
The variance-covariance, or delta-normal, model was popularized by J.P. Morgan (now J.P. Morgan Chase) in the early 1990s when it published the RiskMetrics Technical Document. In the following, we take the simple case in which the only risk factors for the portfolio are the values of the assets themselves. The following two assumptions enable us to translate the VaR estimation problem into a linear algebraic problem:
(1) The portfolio is composed of assets whose deltas are linear; more exactly, the change in the value of the portfolio is linearly dependent on (i.e. is a linear combination of) the changes in the values of the assets, so that the portfolio return is also a linear combination of the asset returns.
(2) The asset returns are jointly normally distributed.
The implication of (1) and (2) is that the portfolio return is normally distributed because it always holds that a linear combination of jointly normally distributed variables is itself normally distributed.
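Concretely, using the notation introduced below: if the vector of asset returns $R = (R_1, \dots, R_N)^T$ is jointly normal with mean vector $\mu$ and covariance matrix $\Sigma$, and the portfolio return is the weighted sum $R_p = \omega^T R$, then by a standard property of the multivariate normal distribution

$R_p \sim \mathcal{N}(\omega^T \mu,\ \omega^T \Sigma\, \omega)$

which is exactly what the variance-covariance calculation below exploits.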
We will use the following notation:
- a subscript i (as in $\mu_i$, $\sigma_i$, $V_i$) means "of the return on asset i" (for $\sigma$ and $\mu$) and "of asset i" (otherwise)
- a subscript p means "of the return on the portfolio" (for $\sigma$ and $\mu$) and "of the portfolio" (otherwise)
- all returns are returns over the holding period
- there are N assets
- $\mu$ = expected value, i.e. mean
- $\sigma$ = standard deviation
- V = initial value (in currency units)
- $\omega$ = vector of all portfolio weights $\omega_i = V_i / V_p$ (the superscript T means transposed)
- $\Sigma$ = covariance matrix = the matrix of covariances between all N asset returns, i.e. an N×N matrix
The calculation goes as follows.
(i) $\sigma_p^2 = \omega^T \Sigma\, \omega$ (the variance of the portfolio return)
(ii) $\sigma_p = \sqrt{\omega^T \Sigma\, \omega}$ (the standard deviation of the portfolio return)
The normality assumption allows us to z-scale the calculated portfolio standard deviation to the appropriate confidence level. So for the 95% confidence level VaR we get:
(iii) $\mathrm{VaR} = z_{95\%}\, \sigma_p V_p = 1.645\, \sigma_p V_p$

where $z_{95\%} = 1.645$ is the one-sided 95% quantile of the standard normal distribution, and the expected portfolio return over a short holding period is commonly taken to be zero.
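A minimal numerical sketch of steps (i)–(iii) in Python. The positions and the covariance matrix below are purely hypothetical; in practice $\Sigma$ would be estimated from historical return data:

```python
import numpy as np

# Hypothetical initial asset values V_i (currency units) and a hypothetical
# covariance matrix Sigma of the assets' returns over the holding period.
V = np.array([600_000.0, 400_000.0])
Sigma = np.array([[0.0004, 0.0001],
                  [0.0001, 0.0009]])

V_p = V.sum()              # initial value of the portfolio
w = V / V_p                # portfolio weights omega_i

sigma_p2 = w @ Sigma @ w           # (i)   variance of the portfolio return
sigma_p = np.sqrt(sigma_p2)        # (ii)  standard deviation of the portfolio return
var_95 = 1.645 * sigma_p * V_p     # (iii) z-scale and convert to currency units

print(f"95% 1-day VaR: {var_95:,.0f}")   # roughly 30,000 for these illustrative numbers
```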
The benefits of the variance-covariance model are the use of a more compact and maintainable data set, which can often be bought from third parties, and the speed of calculation using optimized linear algebra libraries. Drawbacks include the assumption that the portfolio is composed of assets whose delta is linear, and the assumption of a normal distribution of asset returns (i.e. market price returns).
Historical simulation is the simplest and most transparent method of calculation. This involves running the current portfolio across a set of historical price changes to yield a distribution of changes in portfolio value, and computing a percentile (the VaR). The benefits of this method are its simplicity to implement, and the fact that it does not assume a normal distribution of asset returns. Drawbacks are the requirement for a large market database, and the computationally intensive calculation.
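A minimal sketch of this method, assuming a hypothetical matrix of historical holding-period returns (one row per past day, one column per asset) and the current positions; real market data would replace the random stand-in below:

```python
import numpy as np

def historical_var(positions, historical_returns, confidence=0.95):
    """Historical-simulation VaR: apply each historical return scenario to the
    current positions and take the loss at the requested percentile."""
    pnl = historical_returns @ positions            # P&L of today's portfolio in each past scenario
    losses = -pnl                                   # losses expressed as positive numbers
    return np.percentile(losses, 100 * confidence)  # e.g. the 95th percentile loss

# Hypothetical example: 500 days of returns for 2 assets (stand-in for market data).
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=(500, 2))
positions = np.array([600_000.0, 400_000.0])
print(historical_var(positions, returns))
```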
Monte Carlo simulation is conceptually simple, but is generally computationally more intensive than the methods described above. The generic MC VaR calculation goes as follows:
- Decide on N, the number of iterations to perform.
- For each iteration:
- Generate a random scenario of market moves using some market model.
- Revalue the portfolio under the simulated market scenario.
- Compute the portfolio profit or loss (PnL) under the simulated scenario, i.e. subtract the current market value of the portfolio from the market value of the portfolio computed in the previous step.
- Sort the resulting PnLs to give us the simulated PnL distribution for the portfolio.
- VaR at a particular confidence level is calculated using the percentile function. For example, with 5000 simulations, the estimate of VaR at the 95% confidence level would correspond to the 250th largest loss, i.e. (1 - 0.95) * 5000 = 250.
- Note that we can compute an error term associated with our estimate of VaR and this error will decrease as the number of iterations increases.
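A minimal sketch of this procedure in Python. The market model here is a simple multivariate-normal return model and the revaluation step is linear, both purely for illustration; a real implementation would substitute its own market model and full repricing of the instruments:

```python
import numpy as np

def monte_carlo_var(positions, mu, Sigma, n_iter=5000, confidence=0.95, seed=0):
    """Generic Monte Carlo VaR sketch using a multivariate-normal market model."""
    rng = np.random.default_rng(seed)
    # 1. Generate random scenarios of market moves (here: jointly normal returns).
    scenarios = rng.multivariate_normal(mu, Sigma, size=n_iter)
    # 2./3. Revalue the portfolio in each scenario and compute its PnL
    #       (a linear revaluation; option books would need full repricing).
    pnl = scenarios @ positions
    # 4. Sort the PnLs to obtain the simulated PnL distribution.
    pnl_sorted = np.sort(pnl)
    # 5. Read off the loss in the (1 - confidence) tail; for 5000 iterations at
    #    the 95% level this is the 250th worst outcome (0-based index 249).
    k = int((1 - confidence) * n_iter)
    return -pnl_sorted[k - 1]

positions = np.array([600_000.0, 400_000.0])   # hypothetical positions
mu = np.zeros(2)                               # assumed zero expected returns
Sigma = np.array([[0.0004, 0.0001],
                  [0.0001, 0.0009]])           # hypothetical covariance matrix
print(monte_carlo_var(positions, mu, Sigma))
```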
Because the computational effort required is non-trivial, Monte Carlo simulation is generally reserved for computing VaR on portfolios containing securities with non-linear returns (e.g. options). For portfolios without such securities, such as a portfolio of stocks, the variance-covariance method is perfectly suitable and should probably be used instead. Note also that MC VaR is subject to model risk if the market model is not correct.
Caveats
Unfortunately, VaR is not a panacea among risk measurement methodologies. A subtle technical problem is that VaR is not sub-additive: it is possible to construct two portfolios, A and B, such that VaR(A + B) > VaR(A) + VaR(B). This is counterintuitive, since we would expect portfolio diversification to reduce risk.
The theory of coherent risk measures describes the properties we would want any measure of risk to possess. Artzner et al. wrote the canonical paper on the subject, outlining in axiomatic fashion the properties a risk measure must satisfy in order to be considered coherent. An example of a coherent risk measure is Expected Tail Loss (ETL), also known as Conditional Value-at-Risk (CVaR); other names include expected shortfall and worst conditional expectation.
For an example of the kind of sub-additivity violation described above, see the paper by Artzner et al. cited above.
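A small, entirely hypothetical illustration of the same phenomenon: two independent bonds, each of which defaults with probability 4% for a loss of 100. At the 95% level each bond alone has a VaR of 0, but the two-bond portfolio does not:

```python
# Hypothetical illustration of VaR failing sub-additivity at the 95% level.
p_default = 0.04        # each bond defaults independently with probability 4%
loss_on_default = 100.0

# Single bond: P(loss >= 100) = 4% < 5%, so its 95% VaR is 0.
var_single = 0.0

# Two-bond portfolio: at least one default happens with probability
# 1 - (1 - 0.04)^2 = 7.84% > 5%, so the 95% VaR is at least 100.
p_at_least_one = 1 - (1 - p_default) ** 2
var_portfolio = loss_on_default if p_at_least_one > 0.05 else 0.0

print(p_at_least_one)                    # 0.0784
print(var_portfolio)                     # 100.0
print(var_portfolio > 2 * var_single)    # True: VaR(A + B) > VaR(A) + VaR(B)
```

Diversifying across the two bonds spreads the default risk, yet the VaR reported for the combined position is larger than the sum of the standalone VaRs.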
Criticism
Financial mathematician Nassim Taleb holds that Value at Risk is charlatanism, a dangerously misleading tool.
Further reading
- Crouhy, Michel, Dan Galai and Robert Mark, Risk Management, McGraw-Hill, 2001, 752 pages. ISBN 0-07-135731-9.
- Dowd, Kevin, Measuring Market Risk, 2nd Edition, John Wiley & Sons, 2005, 410 pages. ISBN 0-470-01303-6.
- Glasserman, Paul, Monte Carlo Methods in Financial Engineering, Springer, 2004, 596 pages, ISBN 0-387-00451-3.
- Grayling, Sue (ed), VAR: Understanding and Applying Value-at-Risk, Risk Books, 1997, 398 pages. ISBN 1-899332-26-X.
- Holton, Glyn A., Value-at-Risk: Theory and Practice, Academic Press, 2003, 405 pages. ISBN 0-12-354010-0.
- Jorion, Philippe, Value at Risk: The New Benchmark for Managing Financial Risk, 2nd ed., McGraw-Hill Trade, 2001, 544 pages. ISBN 0-07-135502-2.
- Pearson, Neil D., Risk Budgeting, John Wiley & Sons, 2002, 336 pages. ISBN 0-471-40556-6.
External links
- An alternative overview of Value at Risk from investopedia.com. Part 1, Part 2
- http://www.riskglossary.com/link/value_at_risk.htm is an introductory article on value-at-risk with links to more extensive information.