What Risk Models are Useful?

Risk management failures have clearly taken place. It has become fashionable to criticise risk models.

A fair amount of this criticism is naive and not well thought out. Too many people today read Nassim Taleb and pour scorn upon hapless economists for inappropriately using normal distributions. That is simply not a fair depiction of how risk analysis gets done, either in the real world or in the academic literature.

Another useful perspective is that a 99% Value at Risk estimate should fail 1% of the time. A VaR implementation that seeks that 99% threshold, but does not see actual losses exceed the VaR on two or three trading days each year, is itself faulty. Civil engineers do not design homes for once-in-a-century floods or earthquakes. When the TED Spread did unbelievable things, the loss on a short position in the TED Spread should have exceeded the Value at Risk reported by a proper model on many days.
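As a concrete illustration, here is a minimal sketch of such an exceedance count, using simulated fat-tailed returns (not real data) and a one-year rolling historical-simulation VaR. Over roughly 250 trading days, a correctly calibrated 99% VaR should be exceeded about 2.5 times.

```python
import numpy as np

# Minimal sketch of an exceedance backtest, assuming simulated fat-tailed
# daily returns (not real data) and a rolling one-day 99% historical VaR.
rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=1500) * 0.01   # illustrative returns

window = 250                    # roughly one trading year of history
exceedances = 0
days_tested = 0
for t in range(window, len(returns)):
    hist = returns[t - window:t]
    var_99 = -np.quantile(hist, 0.01)      # 99% VaR as a positive loss number
    if -returns[t] > var_99:               # today's loss worse than the VaR?
        exceedances += 1
    days_tested += 1

print(f"observed exceedances: {exceedances} over {days_tested} days")
print(f"expected at the 99% level: {0.01 * days_tested:.1f}")
```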

The really important questions lie elsewhere. Risk management was a new engineering discipline which came to be pervasively used by traders and their regulators. Does the field contain fundamental problems at its core? And does the use of risk management, in itself, create or encourage crises?

Implementation problems

There are a host of practical problems in building and testing risk models. Model selection for VaR models is genuinely hard. Regulators and boards of directors sometimes push for Value at Risk at a 99.99% level. Such a VaR estimate should be exceeded on one trading day out of ten thousand, so millions of trading days would be required to test the model with statistical precision. In most standard situations, there is a semblance of meaningful testing for VaR at a 99% level [example]; anything beyond that is essentially untested for all practical purposes.
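A back-of-the-envelope calculation makes the point. The sketch below, with illustrative backtest lengths, compares the expected number of exceedances and the binomial noise around that count for 99% and 99.99% VaR: at the 99.99% level, even a century of daily data produces an expected count of only about 2.5 exceedances, so the backtest has essentially no power.

```python
import math

# Back-of-the-envelope sketch: expected exceedance counts and the binomial
# noise around them, for hypothetical backtest lengths.
def exceedance_stats(p_exceed, n_days):
    expected = p_exceed * n_days
    std = math.sqrt(n_days * p_exceed * (1 - p_exceed))
    return expected, std

for label, p in [("99% VaR", 0.01), ("99.99% VaR", 0.0001)]:
    for n_days in (250, 2500, 25000):      # ~1, 10 and 100 years of daily data
        expected, std = exceedance_stats(p, n_days)
        print(f"{label:>10}, {n_days:5d} days: expect {expected:6.2f} "
              f"exceedances (+/- {std:.2f})")
```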

Similar concerns afflict extrapolation into longer time horizons. Regulators and boards of directors sometimes push for VaR estimates with horizons like a month or a quarter. The models actually know little about those kinds of time scales. When modellers go along with simple approximations, even though the underlying testing is weak, model risk is acute.
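The most common of these simple approximations is the square-root-of-time rule, sketched below purely as an illustration; the one-day VaR number is made up, and the rule's i.i.d. assumptions are exactly what goes untested at monthly or quarterly horizons.

```python
import math

# Sketch of the square-root-of-time rule, one common simple approximation for
# stretching a one-day VaR to longer horizons. The one-day number is made up,
# and the rule assumes i.i.d. returns with no volatility clustering.
one_day_var = 0.021     # illustrative one-day 99% VaR, as a fraction of value

for horizon_days, label in [(10, "two weeks"), (21, "one month"), (63, "one quarter")]:
    scaled_var = one_day_var * math.sqrt(horizon_days)
    print(f"{label:>11}: {scaled_var:.3f}")
```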

In the last decade, I often saw a problem that I used to call `the Riskmetrics illusion': the feeling that one only needed a short time-series to get a VaR going. What was really going on was that the Riskmetrics assumptions were driving the risk measure. Adrian and Brunnermeier (2009) emphasise that the use of short windows was actually inducing procyclicality: when times were good, the VaR would go down and leverage would go up, and vice versa. Today, we would all be much more cautious, (a) using long time-series when doing estimation, and (b) not trusting models estimated off short series when long series are unavailable.
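The mechanics are easy to see in a minimal sketch of a RiskMetrics-style EWMA volatility estimate (the standard lambda of 0.94 for daily data, fed with simulated returns). The effective memory is only a few weeks, so the VaR estimate falls quickly in a calm spell and rises only after the stress has already arrived.

```python
import numpy as np

# Sketch of a RiskMetrics-style EWMA volatility estimate (lambda = 0.94 for
# daily data), fed with simulated returns. Most of the weight falls on the
# last few weeks, so the VaR estimate tracks the recent regime.
LAMBDA = 0.94
Z_99 = 2.326                          # one-sided 99% normal quantile

def ewma_var(returns, lam=LAMBDA, z=Z_99):
    var = np.var(returns[:20])        # seed the recursion with a sample variance
    for r in returns[20:]:
        var = lam * var + (1 - lam) * r ** 2
    return z * np.sqrt(var)           # one-day 99% VaR under normality

rng = np.random.default_rng(1)
calm = rng.normal(0.0, 0.005, 500)        # a calm spell
stressed = rng.normal(0.0, 0.025, 500)    # a stressed spell
print("VaR estimated after a calm spell:    ", round(ewma_var(calm), 4))
print("VaR estimated after a stressed spell:", round(ewma_var(stressed), 4))
```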

The other area where the practical constraints are onerous is that of going from individual securities to portfolios. In practical settings, financial firms and their regulators always require estimates of VaR for portfolios and not individual instruments.

Even in the simplest case, with only linear positions and multivariate normal returns, this requires an estimate of the covariance matrix of returns. Ever since at least Jobson and Korkie (JASA, 1980), we have known that the historical covariance matrix is a noisy estimator. The state of the art in asset pricing theory has not solved this problem. So while risk measures at a portfolio level are essential, this is a setting where our capabilities are weak. Real-world VaR systems that try to make do with poor estimators of the covariance matrix of returns are fraught with model risk.
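For concreteness, here is a sketch of that simplest case: delta-normal portfolio VaR, where the entire calculation rests on the covariance matrix. The positions and the covariance matrix are made-up numbers.

```python
import numpy as np

# Sketch of the simplest case described above: delta-normal portfolio VaR,
# where everything hinges on the covariance matrix. Positions (in dollars)
# and the daily covariance matrix of returns are made-up numbers.
z_99 = 2.326
positions = np.array([400_000.0, 350_000.0, 250_000.0])
cov = np.array([[4.0e-4, 1.5e-4, 0.5e-4],
                [1.5e-4, 2.5e-4, 0.8e-4],
                [0.5e-4, 0.8e-4, 3.0e-4]])

portfolio_sd = np.sqrt(positions @ cov @ positions)   # dollar standard deviation
print(f"one-day 99% portfolio VaR: ${z_99 * portfolio_sd:,.0f}")
```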

The literature on portfolio optimisation is similarly cautious about jumping into optimisation using estimated covariance matrices. See, for example, this paper by DeMiguel, Garlappi, Nogales and Uppal, one of the first papers to gain traction in actually making progress on estimating a covariance matrix that is useful in portfolio optimisation. The paper is very recent (it appeared in May 2009), which underlines that these are not solved problems. It is easy to talk about covariance matrices, but obtaining useful estimates is genuinely hard.
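One family of remedies studied in this literature is shrinkage of the sample covariance matrix towards a structured target. The sketch below uses scikit-learn's LedoitWolf estimator on simulated data purely as an illustration of how shrinkage reduces estimation error when the number of assets is large relative to the sample.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

# Sketch of one remedy studied in this literature: shrinking the noisy sample
# covariance matrix towards a structured target. scikit-learn's LedoitWolf
# estimator and simulated data are used purely as an illustration.
rng = np.random.default_rng(2)
n_days, n_assets = 250, 50
true_cov = 1e-4 * (0.3 * np.ones((n_assets, n_assets)) + 0.7 * np.eye(n_assets))
returns = rng.multivariate_normal(np.zeros(n_assets), true_cov, size=n_days)

sample_cov = np.cov(returns, rowvar=False)
lw = LedoitWolf().fit(returns)

def error(estimate):
    return np.linalg.norm(estimate - true_cov)

print("error of the sample covariance:   ", round(error(sample_cov), 6))
print("error after Ledoit-Wolf shrinkage:", round(error(lw.covariance_), 6))
print("estimated shrinkage intensity:    ", round(lw.shrinkage_, 3))
```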

Similar problems afflict Value at Risk in multivariate settings. Sharp estimates seem to require datasets which do not exist in most practical settings. And all this is when discussing only the simplest case, with linear products and multivariate normality. The real world is not such a benign environment.

With all these implementation problems, VaR models actually fared rather well in most areas

There is immense criticism of risk models, and we are all amazed at the events which took place in (say) the money market, which were incredible in the eyes of all modellers. But it is not true that all risk models failed.

My first point is the one emphasised above: it was not wrong for VaR models to be surprised by once-in-a-century events.

By and large, the models worked pretty well with equities, currencies and commodities. The models used by clearing corporations also worked pretty well; derivatives exchanges did not get into trouble, even in the case of the eurodollar futures contract at CME, which was explicitly about the London money market.

Fairly simple risk models worked well in the determination of collateral held by futures clearing corporations. See this paper by Jayanth Varma. If the field of risk modelling were as flawed as some make it out to be, clearing corporations worldwide would not have handled the unexpected events of 2007 and 2008 as well as they did. These events could be interpreted as suggesting that, as an engineering approximation, the VaR computations done here were good enough. Jayanth Varma argues that the key required elements are the use of coherent risk measures (like expected shortfall), fat-tailed distributions and nonlinear dependence structures.
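To illustrate the first of these elements, here is a minimal sketch contrasting 99% VaR with 99% expected shortfall on a simulated fat-tailed (Student-t) loss distribution; the distributional parameters are made up.

```python
import numpy as np

# Sketch contrasting 99% VaR with 99% expected shortfall (a coherent risk
# measure) on a simulated fat-tailed loss distribution; the Student-t
# parameters are purely illustrative.
rng = np.random.default_rng(3)
losses = rng.standard_t(df=3, size=100_000) * 0.01   # symmetric, so the sign convention is immaterial

var_99 = np.quantile(losses, 0.99)          # loss threshold exceeded 1% of the time
es_99 = losses[losses >= var_99].mean()     # average loss beyond that threshold
print(f"99% VaR:                {var_99:.4f}")
print(f"99% expected shortfall: {es_99:.4f}")
```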

As boring as civil engineering?

In his article Blame the models, Jon Danielsson shows a very nice example of the simplest possible VaR problem: the estimation of VaR for a $1000 position on IBM common stock. He points out that across a reasonable range of methodologies and estimation periods, the VaR estimates range over a factor of two (from 1.77% to 3.26%).

This large range is disconcerting. But look back at how civil engineers work. A vast amount of sophisticated analysis is done, and then a safety factor of 2x or 2.5x is layered on. The highest aspiration of the field of risk modeling should be to become as humdrum and useful as civil engineering. My optimistic reading of what Danielsson is saying is that a 2x safety factor adequately represents model risk in that problem.
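A sketch of the same phenomenon on simulated data (not IBM): three reasonable estimators give noticeably different one-day 99% VaR numbers for the same $1000 position, and the civil-engineering response is to take the answer and layer a safety factor on top.

```python
import numpy as np

# Sketch of the same phenomenon on simulated data (not IBM): three reasonable
# estimators give noticeably different one-day 99% VaR numbers for the same
# $1000 position, and a safety factor is then layered on top.
rng = np.random.default_rng(4)
returns = rng.standard_t(df=5, size=1000) * 0.015
position = 1000.0

z_99 = 2.326
estimates = {
    "normal approximation":    z_99 * returns.std() * position,
    "historical, full sample": -np.quantile(returns, 0.01) * position,
    "historical, last year":   -np.quantile(returns[-250:], 0.01) * position,
}
for name, var in estimates.items():
    print(f"{name:>24}: ${var:6.2f}")
print(f"largest estimate with a 2x safety factor: ${2 * max(estimates.values()):.2f}")
```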

This suggests a pragmatic approach. All models are wrong; some models are useful. Risk modeling would then go forward as civil engineering has, with an attempt at improving the scientific foundations, and with a safety factor layered on at the end as the finishing touch. Civil engineering evolved over the centuries, learning from the cathedrals that collapsed and the bridges that were swept away, continually improving the underlying science and sharpening the horse sense about what safety factors are a reasonable tradeoff between cost and safety.

Fundamental criticism: the `Lucas critique of risk management’

When an econometric model finds a reduced form relationship between y and x, this is not a useful guide for policy formulation. Hiding inside the slope parameter on x is the optimisation of economic agents, which reflects a certain policy environment. When policy changes are made, these optimisations change, producing structural change in the slope parameter. The old model then breaks down; the modeller is surprised at the large deviations from the model that pop up. The Lucas critique is an integral part of the intellectual toolkit of every macroeconomist.

It should be much more prominent in the thinking of financial economists also. The most fundamental criticism of risk models is that they also suffer from the Lucas critique. As Avinash Persaud, Jon Danielsson and others have argued, risk modeling should not only be seen in a microeconomic sense of one economic agent using the model. When many agents use the same model, or when policy makers or clearing corporations start using the model, then the behaviour of the system changes.

As a consequence of this fundamental problem, an ARCH model estimated using historical data is vulnerable to being surprised by what comes in the future. The coefficients of the ARCH model are not deep parameters; they are reduced form parameters. They suffer structural breaks when enough traders start estimating that model and using it. The reduced-form parameters are time varying and endogenous to the decisions of traders about what models they use, and to the kinds of model-based prudential risk systems that regulators or clearing corporations use.
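A toy sketch of what such a break looks like in the data: the same reduced-form quantity (the first-order autocorrelation of squared returns, which is the moment an ARCH(1) model keys on) is estimated before and after a simulated regime change. The regimes and parameter values are entirely illustrative; the point is only that the estimated coefficient is not a constant of nature.

```python
import numpy as np

# Toy sketch of a structural break in a reduced-form parameter: the first-order
# autocorrelation of squared returns (the moment an ARCH(1) model keys on) is
# estimated before and after a simulated regime change. All parameter values
# are illustrative.
def simulate_arch1(omega, alpha, n, rng):
    # ARCH(1): sigma_t^2 = omega + alpha * r_{t-1}^2
    r = np.zeros(n)
    r[0] = rng.normal(0.0, np.sqrt(omega / (1 - alpha)))
    for t in range(1, n):
        sigma2 = omega + alpha * r[t - 1] ** 2
        r[t] = rng.normal(0.0, np.sqrt(sigma2))
    return r

def acf1_of_squares(r):
    s = r ** 2
    s = s - s.mean()
    return float((s[1:] * s[:-1]).sum() / (s ** 2).sum())

rng = np.random.default_rng(5)
before = simulate_arch1(omega=1e-5, alpha=0.2, n=4000, rng=rng)   # old regime
after = simulate_arch1(omega=1e-5, alpha=0.5, n=4000, rng=rng)    # after the break

print("reduced-form parameter, old regime:", round(acf1_of_squares(before), 3))
print("reduced-form parameter, new regime:", round(acf1_of_squares(after), 3))
```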

In the field of macroeconomics, the Lucas critique was a revolutionary idea, which pretty much decimated the old craft of macro modelling. Today, we walk on two very distinct tracks in macroeconomics. Forecasters do things like Bayesian VAR models where there are no deep parameters, but these models are not used for policy analysis. Policy analysis is done using DSGE models, which try to explicitly incorporate optimisations of the economic agents.

In addressing the problem of endogeneity of risk, or the Lucas critique, we in finance could do as the macroeconomists did. We could retreat into writing models with optimising agents, which is what took macroeconomists to DSGE models (though it took thirty years to get there). One example of this is found in Risk appetite and endogenous risk by Jon Danielsson, Hyun Song Shin and Jean-Pierre Zigrand, 2009.

In the field of macro, the Lucas critique decimated traditional work. But we should also ask about the empirical significance of the problem. While people do optimise, the extent to which the reduced form parameters change when policy changes take place might not be large enough to render reduced form models useless.

It would be very nice if we could now get a research literature on this. I can think of three avenues for progress. Simulations from the Danielsson/Shin/Zigrand paper could be conducted under different policy regimes, and the reduced form parameters compared. Researchers could look back at natural experiments where policy changes took place (e.g. a fundamental change in the rules for initial margin calculations at a futures clearing corporation) and ask whether this induced structural change in the reduced form parameters of the data generating process. And experimental economics could contribute something useful: it would be neat to set up a simulated market with 100 people trading in it, watch what reduced form parameters come out, then introduce a policy change (e.g. an initial margin requirement based on an ARCH model), and watch whether and by how much the reduced form parameters change.

In the field of macro, there is a clear distinction between problems of policy analysis and problems of forecasting. Even if the `Lucas critique' problem of risk modelling is economically significant (i.e. the parameters of the data generating process of IBM change significantly once traders and regulators start using risk models), one could argue that a non-systemic risk modelling problem remains, where reduced form models are still useful to an individual firm. I suppose Avinash Persaud and Jon Danielsson would say that in finance there is no such comparable situation. If a new time series model is useful to you in forecasting, it is useful to a million other traders, and the publication of the model generates drift in the reduced form parameters.

Regulators have focused on the risk of individual financial firms and on making individual firms safe. Today there is an increased sense that regulators need to run a capability which looks at the risk of the system and not just one firm at a time. A lot of work is now underway on these questions and it will yield improved insights and regulatory strategies in the days to come.

Why did risk models break down in some situations but not in others?

I find it useful to ask: Why did risk models work pretty well in some fields (e.g. the derivatives exchanges) but not in others (e.g. the OTC credit markets)? I think the endogenous risk perspective has something valuable to contribute in understanding this.

There are valuable insights in the 2006 ECB working paper by Lagana, Perina, von Koppen-Mertes and Persaud. They think of liquidity as made up of two stories: `search liquidity' and `systemic liquidity'. Search liquidity is about setting up a nice computer-driven market which can be accessed by as many people as possible. Systemic liquidity is about the consequences of endogenous risk. If a market is dominated by the big 20 financial firms, all of whom run the same models and face the same regulatory compulsions, that market will exhibit inferior systemic liquidity.

This gives us some insight into what went right with exchange-traded derivatives: the diversity of players on the exchanges (i.e. many different forecasting models, many different regulatory compulsions) helped to contain the difficulties.

The lesson, then, is perhaps this. If a market is populated by a diverse array of participants, then risk modelling as we know it works relatively well, as an engineering approximation. The big public exchange-traded derivatives fit this bill. We will all, of course, refine the practice of risk modeling, drawing on the events of 2007 and 2008, much as the civil engineers of old learned from spectacular disasters. But by and large, the approach is not broken.

Where the approach gets into trouble is in markets with just a few participants, i.e. `club markets’. A typical example would be an OTC derivative with just a handful of banks as players. In these settings, there is much more inter-dependence. When a market is populated by just a small set of players, all of whom think alike and all of whom are regulated alike, this is a much more dangerous world for the use of risk modeling. The application of standard techniques is going to run afoul of economically significant parameter instability and acute liquidity risk.

Implications for harmonisation of regulation

Harmonisation of regulation is a popular solution in regulatory circles these days. But if all large financial firms are regulated alike, the likelihood of the failure of risk management could go up. Until we get the tools to do risk modeling under conditions of economically significant risk endogeneity, all we can say is that we do not know how to compute VaR under those conditions. Harmonisation of regulation will give us more of those situations.

In the best of times, there seem to be limits of arbitrage; there is not enough rational arbitrage capital going around to fix all market inefficiencies. With non-harmonised regulation, if a certain firm is constrained by regulation to not take a rational trade, some other firm will be able to do so. The monoculture induced by harmonised regulation will likely make the world more unsafe.

Acknowledgement

Tarun Ramadorai, Avinash Persaud, and Viral Acharya gave me valuable feedback on this.
