Taxpayers Earn 23% Return on Goldman TARP Investment

Federal taxpayers received a 23% annualized return on their investment in Goldman Sachs. On Wednesday Goldman announced that figure, which combines the interest it paid before repaying the Treasury’s $10 billion in principal with the warrant buyback payments it made this week.

Goldman has led the way this week with surprise Q2 earnings… profits made on the back of government support for the firms.

Several analysts agree that Goldman paid fair value for the warrants. But JPMorgan Chase continues to haggle with the government over the value of its warrants. Whether JPMorgan ultimately settles or forces the government to auction the warrants on the open market, the TARP program is certainly turning out to be a boon for taxpayers.

What Risk Models are Useful?

Risk management failures have clearly taken place. It has become fashionable to criticise risk models.

A fair amount of the naive criticism is not well thought out. Too many people today read Nassim Taleb and pour scorn upon hapless economists who inappropriately use normal distributions. That’s just not a fair depiction of how risk analysis gets done either in the real world or in the academic literature.

Another useful perspective is that a 99% value at risk estimate should fail 1% of the time. If a VaR implementation that seeks to find that 99% threshold does not have actual losses exceeding the VaR on 2-3 trading days each year, then it is actually faulty. Civil engineers do not design homes for once-in-a-century floods or earthquakes. When the TED Spread did unbelievable things, the loss on a short position on the TED Spread should have been bigger than the Value at Risk reported by a proper model on many days.
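The 1%-of-days arithmetic is easy to check. Below is a minimal in-sample sketch, using a hypothetical normal P&L series rather than real data, of why a healthy 99% VaR should be breached on roughly 2-3 days per 250-day trading year:

```python
import random

random.seed(42)

# Hypothetical daily P&L: ten "years" of 250 trading days each.
pnl = [random.gauss(0, 1.0) for _ in range(250 * 10)]

# One-day 99% VaR estimated as the 1st percentile of the history.
sorted_pnl = sorted(pnl)
var_99 = -sorted_pnl[int(0.01 * len(sorted_pnl))]

# A 99% VaR should be exceeded on about 1% of days,
# i.e. roughly 2-3 days per 250-day trading year.
exceedances = sum(1 for x in pnl if -x > var_99)
per_year = exceedances / 10
print(var_99, per_year)
```

A model reporting zero exceedances year after year is not conservative but miscalibrated: it is estimating some other, more extreme quantile than the one it claims.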

The really important questions lie elsewhere. Risk management was a new engineering discipline which came to be pervasively used by traders and their regulators. Does the field contain fundamental problems at the core? And are there consequences of the use of risk management which, in themselves, create or encourage crises?

Implementation problems

There are a host of practical problems in building and testing risk models. Model selection for VaR models is genuinely hard. Regulators and boards of directors sometimes push for Value at Risk at a 99.99% level of significance. Such a VaR estimate should be exceeded on one trading day out of ten thousand, so millions of trading days would be required to test the model with statistical precision. In most standard situations there is a semblance of meaningful testing for VaR at a 99% level of significance; anything beyond that is essentially untested for all practical purposes.
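The testing-precision point can be made concrete with binomial arithmetic. This is a sketch of the logic, not any particular regulator's backtesting procedure:

```python
import math

def backtest_precision(confidence, days):
    """Expected number of VaR exceedances over `days`, and the
    binomial standard deviation of that count."""
    p = 1 - confidence
    mean = days * p
    sd = math.sqrt(days * p * (1 - p))
    return mean, sd

# One 250-day trading year at 99%: about 2.5 exceedances expected,
# with a standard deviation of roughly 1.6 -- noisy, but testable.
print(backtest_precision(0.99, 250))

# 99.99% VaR: even 40 years of data (~10,000 days) yields a single
# expected exceedance, so the threshold is essentially untestable.
print(backtest_precision(0.9999, 10_000))
```

One bad year proves little even at 99%; at 99.99% no realistic sample length can distinguish a good model from a broken one.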

Similar concerns afflict extrapolation into longer time horizons. Regulators and boards of directors sometimes push for VaR estimates with horizons like a month or a quarter. The models actually know little about those kinds of time scales. When modellers go along with simple approximations, even though the underlying testing is weak, model risk is acute.
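The simple approximation usually invoked here is square-root-of-time scaling, which silently assumes i.i.d. returns. A sketch, with a hypothetical 1-day VaR figure:

```python
import math

def scale_var(one_day_var, horizon_days):
    """The common (and weakly tested) scaling rule:
    T-day VaR ~= 1-day VaR * sqrt(T), valid only under i.i.d. returns."""
    return one_day_var * math.sqrt(horizon_days)

one_day = 2.1  # hypothetical 1-day 99% VaR, in percent
print(scale_var(one_day, 21))  # a "one month" (21 trading day) VaR
```

The arithmetic is trivial; the model risk lies in the i.i.d. assumption, which volatility clustering violates at exactly the horizons boards ask about.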

In the last decade, I often saw a problem that I used to call `the Riskmetrics illusion’: the feeling that one only needed a short time-series to get a VaR going. What was really going on was that the Riskmetrics assumptions were driving the risk measure. Adrian and Brunnermeier (2009) emphasise that the use of short windows was actually inducing procyclicality: when times were good, the VaR would go down and leverage would go up, and vice versa. Today, we would all be much more cautious, (a) using long time-series when doing estimation, and (b) not trusting models estimated off short series when long series are unavailable.
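The mechanics of the illusion can be sketched with a RiskMetrics-style EWMA volatility (lambda = 0.94 is the classic RiskMetrics choice; the return series here are simulated, purely for illustration):

```python
import random

random.seed(1)

def ewma_vol(returns, lam=0.94):
    """RiskMetrics-style EWMA volatility. With lambda = 0.94 the
    effective window is short, so the estimate tracks recent conditions."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return var ** 0.5

calm = [random.gauss(0, 0.5) for _ in range(500)]    # a quiet stretch
stormy = [random.gauss(0, 2.0) for _ in range(500)]  # a turbulent stretch

# After calm times the measured risk (hence permitted leverage) is low;
# after turbulence it is high -- the procyclicality concern.
print(ewma_vol(calm), ewma_vol(stormy))
```

The recursion looks like a model, but the lambda assumption is doing almost all the work; nothing in a short window can contradict it.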

The other area where the practical constraints are onerous is that of going from individual securities to portfolios. In practical settings, financial firms and their regulators always require estimates of VaR for portfolios and not individual instruments.

Even in the simplest case, with only linear positions and multivariate normal returns, this requires an estimate of the covariance matrix of returns. Ever since at least Jobson and Korkie (JASA, 1980), we have known that the historical covariance matrix is a noisy estimator. The state of the art in asset pricing theory has not solved this problem. So while risk measures at a portfolio level are essential, this is a setting where our capabilities are weak. Real-world VaR systems that try to make do with poor estimators of the covariance matrix of returns are fraught with model risk.
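For concreteness, here is the textbook delta-normal portfolio VaR such systems compute. Everything hinges on the covariance matrix, which in this sketch is simply assumed rather than estimated (the positions and covariances are hypothetical):

```python
import math

# Delta-normal portfolio VaR for linear positions under multivariate
# normality: VaR = z * sqrt(w' Sigma w).
z_99 = 2.3263  # one-sided 99% normal quantile

weights = [600_000.0, 400_000.0]   # dollar positions in two assets
sigma = [[0.0004, 0.0001],         # daily return covariance matrix
         [0.0001, 0.0009]]         # (assumed known here; in reality, estimated)

# The quadratic form w' Sigma w, written out for the 2x2 case.
port_var = sum(weights[i] * sigma[i][j] * weights[j]
               for i in range(2) for j in range(2))
var_99 = z_99 * math.sqrt(port_var)
print(round(var_99))
```

The formula itself is elementary; the model risk enters entirely through the sampling error and instability of the estimated covariance matrix that replaces the assumed one above.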

When we look at the literature on portfolio optimisation, there is a lot of caution about the complexity of jumping into portfolio optimisation using estimated covariance matrices. See, for example, the paper by DeMiguel, Garlappi, Nogales and Uppal, one of the first to gain some traction in actually making progress on estimating a covariance matrix that is useful in portfolio optimisation. That paper is very recent (it appeared in May 2009), which highlights the fact that these are not solved problems. It seems easy to talk about covariance matrices, but obtaining useful estimates is genuinely hard.

Similar problems afflict Value at Risk in multivariate settings. Sharp estimates seem to require datasets which do not exist in most practical settings. And all this arises in the simplest case, with linear products and multivariate normality. The real world is not such a benign environment.

With all these implementation problems, VaR models actually fared rather well in most areas

There is immense criticism of risk models, and certainly we are all amazed at the events which took place in (say) the money market, which were incredible in the eyes of all modellers. But at the same time, it is not true that all risk models failed.

My first point is the one emphasised above: it was not wrong for VaR models to be surprised by once-in-a-century events.

By and large, the models worked pretty well with equities, currencies and commodities. By and large, the models used by clearing corporations worked pretty well; derivatives exchanges did not get into trouble even when we think of the eurodollar futures contract at CME which was explicitly about the London money market.

Fairly simple risk models worked well in the determination of collateral that is held by futures clearing corporations. See this paper by Jayanth Varma. If the field of risk modelling was as flawed as some make it out to be, clearing corporations worldwide would not have handled the unexpected events of 2007 and 2008 as well as they did. These events could be interpreted as suggesting that, as an engineering approximation, the VaR computations that were done here were good enough. Jayanth Varma argues that the key elements that are required are the use of coherent risk measures (like expected shortfall), fat tailed distributions and nonlinear dependence structures.
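The gap between VaR and a coherent measure like expected shortfall shows up clearly once tails are fat. A sketch using a simulated normal-mixture return series (illustrative, not calibrated to any market):

```python
import random

random.seed(7)

# Fat-tailed returns sketched as a normal mixture: mostly quiet days,
# occasionally a high-volatility day.
returns = [random.gauss(0, 5.0) if random.random() < 0.03
           else random.gauss(0, 1.0)
           for _ in range(100_000)]

losses = sorted(-r for r in returns)       # positive numbers are losses
k = int(0.99 * len(losses))
var_99 = losses[k]                         # 99% VaR: where "bad" begins
es_99 = sum(losses[k:]) / len(losses[k:])  # expected shortfall: mean loss beyond it

# With fat tails, ES is much larger than VaR, because it asks
# "how bad is bad?" rather than "where does bad begin?".
print(var_99, es_99)
```

This is why the combination Varma points to, expected shortfall plus fat-tailed distributions, matters for collateral: the tail beyond the VaR threshold is exactly where a clearing corporation's exposure lives.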

As boring as civil engineering?

In his article Blame the models, Jon Danielsson shows a very nice example of the simplest possible VaR problem: the estimation of VaR for a $1000 position on IBM common stock. He points out that across a reasonable range of methodologies and estimation periods, the VaR estimates range over a factor of two (from 1.77% to 3.26%).
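Danielsson's dispersion is easy to reproduce. The sketch below uses a simulated series with a volatility regime change (not IBM data) and four defensible methodology/window combinations:

```python
import math, random

random.seed(3)

# A toy series with a regime change: 750 calm days, then 250 turbulent ones.
returns = [random.gauss(0, 1.0) for _ in range(750)] + \
          [random.gauss(0, 2.0) for _ in range(250)]

def hist_var(rets, level=0.99):
    """Historical simulation: empirical quantile of losses."""
    return -sorted(rets)[int((1 - level) * len(rets))]

def normal_var(rets, z=2.3263):
    """Parametric normal: z times the sample standard deviation."""
    m = sum(rets) / len(rets)
    sd = math.sqrt(sum((r - m) ** 2 for r in rets) / (len(rets) - 1))
    return z * sd

# Two methods, two estimation windows: four defensible VaR numbers.
estimates = [hist_var(returns), normal_var(returns),
             hist_var(returns[-250:]), normal_var(returns[-250:])]
print(min(estimates), max(estimates))
```

Every one of these choices can be argued for in a model-review meeting, and yet the answers span a wide range; that range is the model risk the safety factor has to cover.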

This large range is disconcerting. But look back at how civil engineers work. A vast amount of sophisticated analysis is done, and then a safety factor of 2x or 2.5x is layered on. The highest aspiration of the field of risk modeling should be to become as humdrum and useful as civil engineering. My optimistic reading of what Danielsson is saying is that a 2x safety factor adequately represents model risk in that problem.

This suggests a pragmatic approach. All models are wrong; some models are useful. Risk modeling would then go forward as civil engineering has: with continual attempts at improving the scientific foundations, and a safety factor thrown in at the end. Civil engineering evolved over the centuries, learning from the cathedrals that collapsed and the bridges that were swept away, continually improving both the underlying science and the horse sense about what safety factors represent a reasonable tradeoff between cost and safety.

Fundamental criticism: the `Lucas critique of risk management’

When an econometric model finds a reduced form relationship between y and x, this is not a useful guide for policy formulation. Hiding inside the slope parameter on x is the optimisation of economic agents, which reflects a certain policy environment. When policy changes are made, these optimisations change, producing structural change in the slope parameter: the old model breaks down, and the modeller is surprised at the large deviations from the model that pop up. The Lucas critique is an integral part of the intellectual toolkit of every macroeconomist.

It should be much more prominent in the thinking of financial economists also. The most fundamental criticism of risk models is that they also suffer from the Lucas critique. As Avinash Persaud, Jon Danielsson and others have argued, risk modeling should not only be seen in a microeconomic sense of one economic agent using the model. When many agents use the same model, or when policy makers or clearing corporations start using the model, then the behaviour of the system changes.

As a consequence of this fundamental problem, an ARCH model estimated using historical data is vulnerable to getting surprised by what comes in the future. The coefficients of the ARCH model are not deep parameters; they are reduced form parameters. They suffer from structural breaks when enough traders start estimating that model and using it. The reduced-form parameters are time varying and endogenous to decisions of traders about what models they use, and the kinds of model-based prudential risk systems that regulators or clearing corporations use.
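A toy illustration of the reduced-form point: the same ARCH(1) equation with different coefficients generates visibly different unconditional risk, so a model fitted under one behavioural regime misstates risk once the regime shifts. The parameter shift below is simply asserted, standing in for the endogenous change that the Lucas critique describes:

```python
import random

random.seed(11)

def simulate_arch1(n, omega, alpha):
    """ARCH(1): sigma_t^2 = omega + alpha * r_{t-1}^2."""
    r, out = 0.0, []
    for _ in range(n):
        sigma2 = omega + alpha * r * r
        r = random.gauss(0, sigma2 ** 0.5)
        out.append(r)
    return out

# Hypothetical reduced-form parameters before and after a behavioural
# shift, e.g. once enough traders adopt the same risk model.
before = simulate_arch1(5000, omega=1.0, alpha=0.2)
after = simulate_arch1(5000, omega=1.0, alpha=0.5)

# Unconditional variance is omega / (1 - alpha): 1.25 before, 2.0 after.
var_b = sum(r * r for r in before) / len(before)
var_a = sum(r * r for r in after) / len(after)
print(var_b, var_a)
```

An ARCH model estimated on the "before" sample would systematically understate risk in the "after" world; nothing in the estimation procedure can anticipate the break, because the coefficients are not deep parameters.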

In the field of macroeconomics, the Lucas critique was a revolutionary idea, which pretty much decimated the old craft of macro modelling. Today, we walk on two very distinct tracks in macroeconomics. Forecasters do things like Bayesian VAR models where there are no deep parameters, but these models are not used for policy analysis. Policy analysis is done using DSGE models, which try to explicitly incorporate optimisations of the economic agents.

In addressing the problem of endogeneity of risk, or the Lucas critique, we in finance could do as the macroeconomists did. We could retreat into writing models with optimising agents, which is what took macroeconomists to DSGE models (though it took thirty years to get there). One example of this is found in Risk appetite and endogenous risk by Jon Danielsson, Hyun Song Shin and Jean-Pierre Zigrand, 2009.

In the field of macro, the Lucas critique decimated traditional work. But we should be careful about the empirical significance of the problem. While people do optimise, the extent to which the reduced form parameters change when policy changes take place might not be large enough to render reduced form models useless.

It would be very nice if we could now get a research literature on this. I can think of three avenues for progress. Simulations from the Danielsson/Shin/Zigrand paper could be conducted under different policy regimes, and the reduced form parameters compared. Researchers could look back at natural experiments where policy changes took place (e.g. a fundamental change in the rules for initial margin calculations at a futures clearing corporation) and ask whether this induced structural change in the reduced form parameters of the data generating process. And experimental economics could contribute something useful: it would be neat to set up a simulated market with 100 people trading in it, watch what reduced form parameters come out, then introduce a policy change (e.g. an initial margin requirement based on an ARCH model), and watch whether and how much the reduced form parameters change.

In the field of macro, there is a clear distinction between problems of policy analysis and problems of forecasting. Even if the `Lucas critique’ problem of risk modelling is economically significant (i.e. the parameters of the data generating process of IBM change significantly once traders and regulators start using risk models), one could sometimes argue that there remains a problem of risk modelling which is not systemic. I suppose Avinash Persaud and Jon Danielsson would say that in finance there is no comparable separation: if a new time series model is useful to you in forecasting, it is useful to a million other traders, and the publication of the model generates drift in the reduced form parameters.

Regulators have focused on the risk of individual financial firms and on making individual firms safe. Today there is an increased sense that regulators need to run a capability which looks at the risk of the system and not just one firm at a time. A lot of work is now underway on these questions and it will yield improved insights and regulatory strategies in the days to come.

Why did risk models break down in some situations but not in others?

I find it useful to ask: Why did risk models work pretty well in some fields (e.g. the derivatives exchanges) but not in others (e.g. the OTC credit markets)? I think the endogenous risk perspective has something valuable to contribute in understanding this.

There are valuable insights in the ECB working paper by Lagana, Perina, von Koppen-Mertes and Persaud in 2006. They think of liquidity as made up of two stories: `search liquidity’ as opposed to `systemic liquidity’. Search liquidity is about setting up a nice computer-driven market which can be accessed by as many people as possible. `Systemic liquidity’ is about the consequences of endogenous risk. If a market is dominated by the big 20 financial firms, all of whom run the same models and have the same regulatory compulsions, this market will exhibit inferior systemic liquidity.

This gives us some insight into what went right with exchange-traded derivatives: the diversity of players on the exchanges (i.e. many different forecasting models, many different regulatory compulsions) helped to contain the difficulties.

The lesson then, is perhaps this one. If a market is populated with a diverse array of participants, then risk modelling as we know it works relatively well, as an engineering approximation. The big public exchange-traded derivatives fit this bill. We will all, of course, refine the practice of risk modeling, drawing on the events of 2007 and 2008 much as the civil engineers of old learned from spectacular disasters. But by and large, the approach is not broken.

Where the approach gets into trouble is in markets with just a few participants, i.e. `club markets’. A typical example would be an OTC derivative with just a handful of banks as players. In these settings, there is much more inter-dependence. When a market is populated by just a small set of players, all of whom think alike and all of whom are regulated alike, this is a much more dangerous world for the use of risk modeling. The application of standard techniques is going to run afoul of economically significant parameter instability and acute liquidity risk.

Implications for harmonisation of regulation

Harmonisation of regulation is a popular solution in regulatory circles these days. But if all large financial firms are regulated alike, the likelihood of the failure of risk management could go up. Until we get the tools to do risk modeling under conditions of economically significant risk endogeneity, all we can say is that we do not know how to compute VaR under those conditions. Harmonisation of regulation will give us more of those situations.

In the best of times, there seem to be limits of arbitrage; there is not enough rational arbitrage capital going around to fix all market inefficiencies. With non-harmonised regulation, if a certain firm is constrained by regulation to not take a rational trade, some other firm will be able to do so. The monoculture induced by harmonised regulation will likely make the world more unsafe.


Tarun Ramadorai, Avinash Persaud, and Viral Acharya gave me valuable feedback on this.

Gold Oil And Your Stomach

Humans eat or humans die, and with the Peak Oil specter looming this issue is becoming very pressing for about 923,000,000 people.  The Internet is an amazing series of tubes.  A friend told me that Nate Hagens, MBA, former Managing Director at Salomon Brothers and Lehman Brothers and editor of The Oil Drum, used my liquidity pyramid near the end of his presentation at the June 2009 Oil Drum/ASPO Conference in Alcatraz, Italy.  The presentation was extremely interesting.  Continuing the theme, on 8 July 2009 he authored an article:  CFTC – Futures Position Limits On Energy?


Mr. Hagens has some key insights.

Let’s return to a central theme: that finite resources are being quantified by infinite money. … Unfortunately, this ‘speculation’ issue is one of many red herrings that ignores the widening fundamental disconnect between financial and real assets. … As long as energy and resources were cheap, more long term gearing/profits were to be had from the vanilla ‘derivatives’: stocks and bonds (these are derivatives of our real capital: natural, built, social and human, that underpins them), than from the commodities themselves.

One of the reasons Brazil and Russia have such positive prospects is their tremendous endowment of real assets.  The Industrial Age allowed for obfuscation of information and inefficiencies of epic proportions.  Ironically, during the Information Age there will be a return to all things real.  This is because the Internet pulls back the curtain and exposes with bright luminescence, at the speed of light, everything formerly hidden in the darkness.

Additionally, stocks are in a long-term secular bear market while commodities are in a long-term secular bull market.  Like the seasons on Jupiter, these take a couple of decades to cycle through completely.  As commodities represent the antithesis of financial assets, this is likewise a vote of no confidence in the political structures of the earth.  And why not, when Bloomberg reports that Neil Barofsky, special inspector general for the TARP, says ‘US taxpayers may be on the hook for as much as $23,700,000,000,000 to bolster the economy and bail out financial companies’.  The European banking system is in worse condition.


Mr. Hagens continues:

As we are mired in a deepening recession, the roots of which lie in the generation long replacement of tangible things with paper and digits, the logical human reaction to oil moving back from $40 to $70 is to blame someone, in order that it retreat some and not act as economic headwind. … So what does this mean? Energy, particularly liquid fuel, is the hemoglobin of modern civilization. Price signals based on the marginal unit create long term distortions for utilities and energy policymakers.

I agree that the world has a very serious problem.  Because it has used a fiat currency with no definition for nearly 100 years, and because oil production was constantly increasing during that time, the effects of unwise capital investment were masked.  Energy Return On Energy Invested (EROEI) calculations were not performed, and because of the central bank gold price suppression scheme it was probably impossible to do so accurately.  But the damage to the economy has already been done and is a sunk cost.  What can be done going forward?


Mr. Hagens wrote, “Like M. King Hubbert, I am in favor of an energy based currency and no futures trading at all other than for producers and those taking delivery.”  After some digging around I think I found Mr. Hubbert’s plan which involved ‘energy certificates‘.

On this basis our distribution then becomes foolproof and incredibly simple. We keep our records of the physical costs of production in terms of the amount of extraneous energy degraded. We set industrial production arbitrarily at a rate equal to the saturation of the physical capacity of our public to consume. We distribute purchasing power in the form of energy certificates to the public, the amount issued to each being equivalent to his pro rata share of the energy-cost of the consumer goods and services to be produced during the balanced-load period for which the certificates are issued. These certificates bear the identification of the person to whom issued and are non negotiable. They resemble a bank check in that they bear no face denomination, this being entered at the time of spending. They are surrendered upon the purchase of goods or services at any center of distribution and are permanently canceled, becoming entries in a uniform accounting system. Being nonnegotiable they cannot be lost, stolen, gambled, or given away because they are invalid in the hands of any person other than the one to whom issued. [emphasis added]

This solution is not practical because it is immoral and inefficient at allocating resources.

First, I am not sure who Mr. Hubbert intends the ‘we’ and ‘our’ pronouns to refer to.  There are only individuals, endowed with certain unalienable rights such as the right to live and, by extension, the right to eat their food.

Second, the use of energy certificates in this manner would amount to a price control or a form of rationing.  It is basic Austrian economics that price controls cause shortages, with Zimbabwe being the latest tragic example.  How would ‘we’ implement these ‘energy certificates’ based upon ‘identification’ that are ‘non-negotiable’?  With the barrel of a gun.

Even Vladimir Putin understands what happens: “The only problem: your results were poor, and this will always be the case because the work you do is unfair and immoral. In the long run immoral policies always lose.”

The issuance of energy certificates whose use is enforced with the barrel of a gun is immoral and will always fail.  And because it relies on force, identification, price controls and currency controls to implement, it is an inefficient way to allocate energy resources.  What would be a tenable solution?


A fiat currency attempts to sustain the unsustainable, while a commodity-based currency employs the strict laws of reality to ensure the unsustainable is not encouraged.  At one of the Cambridge House investment conferences, while talking with the CEO of a mining company, I asked some questions on this topic.  He succinctly responded, “Mining is converting energy into metals.”

The storage and spoilage costs of the metals are far lower than those of wheat, cattle, oil, lumber or other commodities.  Consequently, using metals as currency in ordinary daily transactions seems like the most efficient option.

During the Industrial Age the middle class survived and thrived, but taxation exploded and concepts like economies of scope and scale led to bloated Welfare State governments.  As Mr. Hagens wrote, “We have a monumental problem – a system whose claims on the future are higher than its real assets.”

Governments around the world will massively default on their promises: Social Security, Medicare, Medicaid, the Pension Benefit Guaranty Corporation and their equivalents in Europe, Australia, etc.  What is needed is not larger organizations and institutions but smaller ones.  Individuals are the plankton of the investment world; they will likely consider how to buy gold or silver to protect themselves from the insidious inflation tax, which is a surreptitious form of confiscation without due process of law and a prime reason to Raze The Fed.  Plus, the Federal Reserve’s quantitative easing is failing.

Digital commodity currencies, like GoldMoney, which allow gold, silver and platinum to circulate as currency in ordinary daily transactions are tremendous monetary evolutions made possible by technology in the Information Age.  Like Facebook, Twitter, Google, etc. they will explode into world commerce.  They are tremendously more efficient than the fiat paper franchises.

First, from 2001-2007 the cost of storing $1,000,000 of capital in gold versus T-Bills was approximately $1,254 per month lower.  Second, the use of these private currencies is voluntary and does not rely on the barrel of a gun via legal tender laws or capital gains taxes to remain competitive.  Third, as a result the market would voluntarily choose commodities as currency, and civil liberties would be protected.  There is a great book on this topic by Jörg Guido Hülsmann called The Ethics Of Money Production.


As of 20 July 2009 there is slight gold backwardation, again.  When I chronicled the chronic nine-week silver backwardation, we watched the spot price rise from FRN$11 per ounce to almost FRN$15 per ounce, a gain of about 36%.  Usually the summer is very weak for gold, but it appears to be trading primarily as a currency right now, so the backwardation situation is intriguing.

Additionally, as I recently recommended buying silver around FRN$12.50 and platinum around $1,115, those who followed the advice may need a snorkel to breathe because of the ~10% gains in less than a month.  The speculator may want to take some gains and wait for the last pullback before this fall.

While the gold to oil ratio does matter, perhaps it will not return to previous patterns but instead normalize to a higher price in gold.  But even with perpetually higher average energy costs, this autumn and winter will likely see a tremendous fall in the price of the DOW in gold because of the relationship between unemployment, gross revenues, net incomes, PE ratios, etc.  In September the first batch of unemployment benefits expires for about 500,000 people.  Then each succeeding month the numbers will continue piling up, just like the job losses which began 18 months ago.

Perhaps because we are 70 years removed from the Great Depression we think of it as an event; in reality it played out over a couple of decades, and its first few years were marked by rising unemployment which caused people to burn through their savings before they became destitute.

A great book is Wealth, War and Wisdom by Barton Biggs.  Mr. Biggs, former chief global strategist for Morgan Stanley, advises: “you want your sanctuary to be remote enough to be inaccessible to the dispossessed hordes. Assume a breakdown of the civilized infrastructure. Your safe-haven must be self-sufficient and capable of growing food. It should be well-stocked with seed, canned food, wine, medicine, and clothes. There should be a stash of automatic weapons that you know how to use in case roving bands of hungry barbarian brigands show up.” (p. 347-8)


The world has a very serious problem regarding energy and food production.  The Information Age will continue to reveal the inefficiencies and flaws in the current monetary structure, with its barbarous relics known as central banks.  As I assert in my book, The Great Credit Contraction, capital is moving down the liquidity pyramid seeking safety and liquidity, and it has only begun.  There will likely be less capital available for metal or oil investment.

But there are digital commodity currencies that function in ordinary daily transactions.  Each ounce of silver will likely buy a nice steak dinner.  A couple of ounces of gold will usually pay rent on a nice abode, while 50-70 ounces of platinum will purchase a nice house.  Based on historical averages the gold to oil ratio should decline.  In the meantime you may want to prepare for survivalism in the suburbs.

The role of the monetary metals, particularly gold, is to enable mental calculations of value: pricing.  The sooner you begin to calculate value in terms of real things, the sooner you will understand the relationship between gold, oil and your stomach.

How will this all play out?  I do not think anyone really knows.  The collapse of the gold rig will not be a garden variety exchange rate adjustment but the collapse of a worldwide monetary system.  The decline in the supply of cheap and abundant energy, the ‘hemoglobin of the world economy’, will have tremendous implications for every cell in the food chain.  It would be nice if humanity acted morally.  But have you ever watched a pack of hungry wolves on the Nature channel?  Keep in mind that governments need fewer hungry mouths and lower unemployment.

Disclosures:  Long physical gold, silver and platinum.
