# Value at Risk – the construct

International and European banking supervisors are allowing banks to rely on their own internal Value-at-Risk (VaR) models to calculate their capital requirements. However, many observers who do not belong to the inner circle of financial analysts and commentators are puzzled by the concept. On the one hand, by putting an exact figure on a potential loss, the methodology seems to offer a sound basis for risk management and financial decisions. On the other hand, critics are numerous, and events such as the recent loss of JP Morgan’s “London Whale” raise questions and reinforce distrust.

As the title indicates, the following is intended as an introduction to the basic principles and structures of the value-at-risk concept. VaR is not one model but rather a group of related models for a variety of financial risks and products. It is not the only available concept, but it is by far the most widely used.

The following will not deal with the strengths and weaknesses of the countless variants and modifications but confine itself to a description of key aspects of the original RiskMetrics methodology, the approach developed and made publicly available by JP Morgan during the 1990s and described in the RiskMetrics™ Technical Document.

How much capital should a bank keep aside to cover potential losses from a portfolio of financial assets? Traditionally, the answer is found by a practice known as asset and liability management (ALM): future estimated earnings are projected periodically under assumed market scenarios, and the results are reported according to generally accepted accounting principles, mostly on an accrual basis.

For various reasons, the ALM approach became less and less reliable with the growing use of off-balance-sheet instruments such as options:

– Option contracts are *contingent claims* based on the insurance principle, which makes it impossible to calculate the stream of future cash flows in advance.

– The distribution of risks between the contracting parties is not symmetric, in contrast to other derivatives, where the risk of loss for one party mirrors the chance of profit for the other.

– Their sensitivity to changes in market conditions is different: the relation between a price change of the underlying and the price of the derivative is not linear but depends on the interplay of a variety of determinants, leaving broad room for interpretation.

– Those influences may change very rapidly, further adding to existing uncertainties.

Besides the increasing use of off-balance-sheet instruments, one driving force behind the development of a new concept for measuring financial risks was the desire to build a consistent framework that accounts for *interdependencies* between markets and instruments. VaR analysis has its roots in portfolio theory, which does not focus on the performance of an individual financial instrument but on the interaction between different risks and returns:

Also known as the *mean-variance approach*, portfolio theory assumes that investors choose between portfolios on the basis of expected returns and their volatility, with the standard deviation or variance of returns as the risk measure. Taking return *correlations* into account, diversification makes it possible to maximise the expected return for a given standard deviation of the portfolio, or to minimise the standard deviation for a given expected return.
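The diversification effect can be illustrated with a small numerical sketch. The weights, returns, volatilities and correlation below are purely hypothetical; the calculation simply applies the standard two-asset portfolio formulas:

```python
import math

# Hypothetical two-asset portfolio: weights, expected returns,
# return volatilities (standard deviations) and their correlation.
w = [0.6, 0.4]
mu = [0.08, 0.05]          # expected annual returns
sigma = [0.20, 0.10]       # annual volatilities
rho = 0.3                  # return correlation

# Expected portfolio return: weighted average of the asset returns.
port_return = w[0] * mu[0] + w[1] * mu[1]

# Portfolio variance includes the covariance term, which is where
# diversification enters: for rho < 1 the portfolio volatility is
# lower than the weighted average of the individual volatilities.
port_var = (w[0] * sigma[0]) ** 2 + (w[1] * sigma[1]) ** 2 \
    + 2 * w[0] * w[1] * sigma[0] * sigma[1] * rho
port_vol = math.sqrt(port_var)

print(round(port_return, 4), round(port_vol, 4))
```

With a correlation of 0.3, the portfolio volatility of roughly 13.7% stays well below the 16% weighted average of the individual volatilities – the diversification benefit on which portfolio theory builds.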

When JP Morgan made RiskMetrics available to the public in 1994, the system offered information on rates and volatilities and over 100,000 correlations between more than 300 financial instruments in 15 markets. As the following figure illustrates, the model consists of several *building blocks* distinguishing between various kinds of positions which differ in their valuation treatment.

Calculation of the Value at Risk proceeds in three steps, from accounting through valuation to simulation.

At the *accounting level* a line is drawn between accrual items – all those positions that are still measured at historical costs plus/minus accruals – and trading items with the latter further divided into marketable and non-marketable instruments.

At the *valuation level* those items for which a liquid secondary market exists are valued at the current value quoted in that market. Transactions for which no market value exists, but which can be decomposed into parts that do have a market value, are treated as a combination of cash flows from these parts and then mapped into so-called equivalent positions. Here, the value is approximated as the sum of market values of the component cash flows.
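As a minimal sketch of this mapping, consider a hypothetical two-year coupon bond decomposed into its component cash flows, each valued like a zero-coupon instrument with an assumed market rate (all figures illustrative):

```python
# Hypothetical mapping of a two-year 5% coupon bond (face value 100)
# into its component cash flows; the zero-coupon rates are assumed.
cash_flows = [(1.0, 5.0), (2.0, 105.0)]   # (time in years, amount)
zero_rates = {1.0: 0.03, 2.0: 0.035}      # assumed market zero rates

# Each cash flow is valued at its market discount rate; the value of
# the equivalent position is the sum of the component market values.
value = sum(cf / (1 + zero_rates[t]) ** t for t, cf in cash_flows)

print(round(value, 2))
```

The bond itself may have no quoted price, but each piece does – so the whole is approximated as the sum of its marketable parts.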

What happens with non-marketable items such as *options*? In these cases, an option pricing model is used to revalue the portfolio over a set of postulated price changes. Those price changes, in turn, are obtained either from a scenario approach or from a simulation method. While the former focuses on a limited number of specific price movements, the latter covers a more continuous range of changes and their effects on the entire portfolio.

For option valuation itself, again, two broad alternatives exist. One is the *Full Valuation* method, where the potential loss is the difference between the value of the portfolio at potentially changed rates and at the original rates, with the portfolio revalued at each price change. The other is *Delta Valuation*, which multiplies the sensitivities of the positions to changes in rates by the potential changes in rates; the change in net portfolio value is then the arithmetic sum of the deltas of all instruments and transactions in the portfolio. Several modifications can increase the accuracy of the approach, such as incorporating the gamma value to take into account the risk that the delta itself changes as the price of the underlying varies, or considering the effects of volatility changes on option prices.
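The difference between the plain delta approximation and the gamma-corrected version can be sketched with hypothetical sensitivities (the delta, gamma and price move below are illustrative, not taken from any particular pricing model):

```python
# Hypothetical option position: assumed sensitivities.
delta = 0.55      # change in option value per unit change in underlying
gamma = 0.04      # change in delta per unit change in underlying
dS = -3.0         # assumed adverse move in the underlying price

# Delta valuation: first-order (linear) approximation of the P&L.
pnl_delta = delta * dS

# Delta-gamma valuation: adds the second-order term to capture the
# curvature of the option value in the underlying.
pnl_delta_gamma = delta * dS + 0.5 * gamma * dS ** 2

print(round(pnl_delta, 2), round(pnl_delta_gamma, 2))
```

For the assumed downward move, the pure delta estimate overstates the loss on a long option position; adding the gamma term corrects part of that error – the modification mentioned above.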

The last step is the *simulation* of the effects of expected changes in market rates and prices on the value of the entire portfolio. Again, there are two alternatives: the potential price and rate changes are obtained either by designing specific scenarios or by using estimated volatilities and correlations. Where the value of a position is affected by a single rate only, its change in value is, depending on the approach chosen, a function either of that rate in each of the projected scenarios or of the rate’s volatility as estimated by a model. If the potential change in value depends on multiple rates, it is a function either of the combination of those rates in each scenario or of the volatilities and the correlations between all pairs of rates.

RiskMetrics does not prescribe a uniform approach to risk estimation but acknowledges the need to *combine* diverse concepts to account for different market conditions and environments. As the following figure illustrates, the latter can be divided into two main categories:

First, *distributions* of rate and price movements may differ. Estimating volatilities and correlations with traditional statistical means works best where rate and price movements can be statistically described as normally distributed, but such estimates become unreliable if the normality assumption does not hold. In that case, the danger of sudden, unexpected market movements should be taken into account explicitly with the help of scenarios.

Another aspect is the *functional relationship* between the value of a position and changes in rates and prices. If this relation is approximately linear, the position value is best calculated by means of sensitivity analyses. For non-linear positions such as options, however, simulations are considered the more effective tool.

The result of the procedure is one number. This is an estimate of the worst loss at a given confidence level over a specified time horizon under normal market conditions. Philippe Jorion gives an example: “… a bank might say that the daily VaR of its trading portfolio is $35 million at the 99 percent confidence level. In other words, there is only one chance in a 100, under normal market conditions, for a loss greater than $35 million to occur.”
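Under the normality assumption, such a number follows directly from the estimated volatility and the chosen confidence level. A minimal sketch with purely illustrative figures:

```python
from statistics import NormalDist

# Hypothetical inputs: portfolio value and its estimated daily
# return volatility; both figures are illustrative only.
portfolio_value = 100_000_000   # $100m
daily_vol = 0.015               # 1.5% daily return volatility
confidence = 0.99

# Under normality, the worst loss not exceeded with 99% probability
# is the 99th-percentile z-score times the volatility of the value.
z = NormalDist().inv_cdf(confidence)   # roughly 2.33
var = z * daily_vol * portfolio_value

print(round(var))   # daily VaR in dollars at the 99% level
```

That single dollar figure is the "one number" the whole framework condenses into – here roughly $3.5 million for the assumed inputs.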

As Joe Nocera mentions, one reason why VaR is so appealing is that it can measure both individual risk – for example, of a trader’s portfolio – and firmwide risk. VaR can be reported for different categories. For example, in this Bloomberg article Jonathan Weil cites a first-quarter earnings press release of April 13, 2012 in which JPMorgan said the average value-at-risk figure for its chief investment office was $67 million during the three months ended March 31. The following snippet shows the table he refers to:

Source: JP Morgan, p.42

RiskMetrics is not the only VaR approach, and since its introduction many other systems have been developed. Not all of them are based on portfolio theory; some use other risk measurement techniques, either exclusively or in addition. In general, several components of the VaR methodology can be distinguished:

– The *variance-covariance approach*, based on portfolio theory and the assumption of normally distributed returns.

– The *historical simulation* approach, which uses historical data to construct the distribution of portfolio returns from which the VaR is read off. The advantage is that the method does not rely on a particular distributional assumption. The disadvantage is that its results depend strongly on the data set used.

– The *Monte Carlo simulation*, where the distribution is derived from a large number of simulated random paths that returns could follow. The assumption is that, with a sufficiently large number of simulations, the resulting distribution, from which the VaR is inferred, will converge towards the unknown true distribution of portfolio returns.

– *Stress testing* or scenario analysis. This is not part of the actual VaR estimation but complements it in order to account for the vulnerability of portfolios to unusual events. Value at Risk is a measure of potential losses due to “normal” market movements and is bound to fail in times of financial turmoil.
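The two simulation-based approaches can be contrasted in a short sketch. The daily P&L "history" below is randomly generated for illustration only; a real application would use actual portfolio revaluations:

```python
import random
import statistics

random.seed(0)

# Hypothetical daily P&L history (in $m), generated for illustration.
history = [random.gauss(0, 1.5) for _ in range(1000)]

def var_from_pnl(pnl, confidence=0.99):
    """Read VaR off an empirical P&L distribution: the loss exceeded
    in only (1 - confidence) of the observations."""
    losses = sorted(-x for x in pnl)          # losses as positive numbers
    idx = int(confidence * len(losses))
    return losses[min(idx, len(losses) - 1)]

# Historical simulation: use the empirical distribution directly.
hist_var = var_from_pnl(history)

# Monte Carlo simulation: fit a distribution (here a normal, which is
# an assumption) and draw a large number of simulated outcomes from it.
mu = statistics.fmean(history)
sigma = statistics.stdev(history)
simulated = [random.gauss(mu, sigma) for _ in range(100_000)]
mc_var = var_from_pnl(simulated)

print(round(hist_var, 2), round(mc_var, 2))
```

Here the two estimates land close together, because the generated "history" is itself normally distributed; with fat-tailed real-world data the two approaches can diverge markedly – which is exactly why stress testing is needed as a complement.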

Amazingly simple: One number.

And a complex framework offering a large variety of choices with ample scope for interpretation and – yes, also manipulation.

—

*This is a revised and shortened version of a text that can be found here, Appendix H.*
