Is this a time for monetary policy easing?

‘Headline inflation’: the year-on-year change in CPI-IW, with the target zone superposed.

The best price index in India is the CPI-IW. ‘Headline inflation’ in India corresponds to the widely watched year-on-year change in the CPI-IW. The graph above shows the experience of inflation in India from 1999 onwards. The informal target of policy makers is for inflation to lie between four and five per cent; these are the two blue lines on the graph.

In February 2006, inflation breached the upper bound of five per cent. It has never come back in range. Things are so bad that even the overall average inflation of this period (the red line) is now above the upper bound of five per cent. We don’t just occasionally fail to stay within the target range of inflation; we persistently fail to get there. This inflation crisis is a major failure in Indian macroeconomic policy, and is holding back India’s growth.

Many explanations, such as supply-side factors or droughts, are offered. They fail to persuade once we look at this time-series experience. Did we have fewer droughts before 2006? Were supply-side factors not a problem before 2006? Sustained failures on inflation are always rooted in monetary policy: in the long run, inflation is always and everywhere a monetary phenomenon.

There is some tiny progress in the latest months of this graph, but we cannot claim that the inflationary spiral has been broken. Policy rates are between 7 and 8 per cent, and inflation is almost surely above 8 per cent, so the policy rate is likely to be negative when expressed in real terms.
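The back-of-the-envelope arithmetic here can be made explicit with the Fisher relation. A minimal sketch, where the inputs are the rough figures cited in the text rather than official data:

```python
# Real policy rate via the Fisher relation: (1 + n) = (1 + r)(1 + pi).
# The inputs below are rough, illustrative figures, not official data.

def real_rate(nominal_pct, inflation_pct):
    """Real rate implied by a nominal rate and inflation, in per cent."""
    return ((1 + nominal_pct / 100) / (1 + inflation_pct / 100) - 1) * 100

policy_rate = 7.5  # policy rates are between 7 and 8 per cent
inflation = 8.5    # inflation is almost surely above 8 per cent

print(round(real_rate(policy_rate, inflation), 2))  # comes out below zero
```

With any inflation reading above the nominal policy rate, the implied real rate is negative, which is the point being made above.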

Smoothed month-on-month inflation (annualised, based on the seasonally adjusted CPI-IW).

The year-on-year change is a moving average of the latest 12 month-on-month changes. We obtain information about current conditions by looking at more recent month-on-month changes. This requires seasonal adjustment. The graph above shows the 3-month moving average (3mma) [source]. Just as the y-o-y change shows average inflation over the latest 12 months, this graph shows average inflation over the latest 3 months.
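The arithmetic behind these two measures is easy to sketch. In the illustration below the seasonally adjusted index values are made up, not actual CPI-IW data: each month-on-month ratio is compounded twelve times to annualise it, and the 3mma is a trailing average of the latest three such readings.

```python
# Annualised month-on-month inflation and its 3-month moving average,
# computed from a seasonally adjusted price index.
# The index values below are hypothetical, not CPI-IW data.

def mom_annualised(index):
    """Annualised month-on-month inflation, in per cent."""
    return [((index[i] / index[i - 1]) ** 12 - 1) * 100
            for i in range(1, len(index))]

def trailing_mean(series, window=3):
    """Trailing moving average; with window=3 this is the 3mma."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

sa_index = [100.0, 100.8, 101.5, 102.4, 103.1, 103.9]  # hypothetical
mom = mom_annualised(sa_index)   # five monthly readings
mom_3mma = trailing_mean(mom)    # three smoothed readings

print([round(x, 1) for x in mom])
print([round(x, 1) for x in mom_3mma])
```

The y-o-y change is the same idea with a 12-month window, which is why it lags current conditions so much more than the 3mma does.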

There is some progress in recent months. But at the same time, in the entire period, we see many such short periods of decline in inflation. Eyeballing the graph does not give us confidence that there has been a qualitative change in inflationary conditions. As an example, consider the previous dip in inflation. We could have become quite excited by the drop in this 3mma CPI-IW inflation down to 2%. But this was a temporary gain which was quickly reversed.
We should hence be cautious about reading too much into the recent decline in month-on-month CPI-IW inflation. While it is great news if inflationary pressures in the economy are declining, and it will be great news when the cycle of high inflationary expectations is broken, there isn’t enough evidence as yet to announce that the mission of achieving low and stable inflation has been accomplished.

Big Picture, Small Cap Investing: Jim Letourneau

Examining the macro-economic environment is how Jim Letourneau, publisher of the Big Picture Speculator, likes to begin his stock-picking process. However, his understanding goes beyond headline news to reveal surprising investment themes with profit potential. In this exclusive interview with The Energy Report, Letourneau talks about hype and commodity investment cycles and where to dig for blue-sky stocks.

The Energy Report: You publish the Big Picture Speculator. What does that title imply?

Jim Letourneau: I believe the macro context is often more important than the details about an individual company. I read a broad range of material every day that helps me form my views, and one of my best skills is putting together the big picture and connecting the dots for audiences. A recent example of my method is my coverage of the natural gas sector, which focused on how the abundant supply of natural gas has led to a complete shift in the types of companies that people should be following. Rather than natural gas producers, investors should find companies that are consuming natural gas, like Methanex Corp. (MEOH:NASDAQ; MX:TSX; METHANEX:SSE), Westport Innovations Inc. (WPT:TSX) or Energy Fuels Inc. (EFR:TSX). These companies are in great shape because their costs are significantly lower. That’s a huge big-picture shift, but people get bogged down in all of the debates about fracking and other controversies.

The bottom line is the U.S. now has the cheapest natural gas in the world, and that’s not a horrible problem to have. When I talk to technical people, we just look at each other and think this is a miracle. No one saw this coming.

TER: As a geologist, how does your technical knowledge shape your investment decisions? What do you look for in potential investment opportunities?

JL: Technical knowledge has its pluses and minuses. In general, the companies I look for usually have a market cap of under $100 million (M), and for me to get excited about them, they have to have the potential to surmount the $1 billion (B) market-cap level. So there’s a potential tenbagger upside in them, if everything pans out. That potential could be in the form of a new technology backed by a strong management team or a higher-quality mineral property. Either way, management teams are critical for these types of things to play out.

TER: How far down in market cap do you go when considering investments?

JL: Sometimes I go down too far, but I think $50M is better than $5M. While you can argue that it’s easier for a $5M market-cap company to go to $50M, your odds start to dwindle. It’s a matter of finding that balance point. Obviously, it’s nicer to buy a company cheap and have it grow into something bigger, but the company is usually cheap for a reason. I don’t want to have to write about 50 companies a year that didn’t quite make it. I’d rather go up the food chain a little bit and follow ones that are going to survive, and whose progress we can track year by year.

TER: You spoke at the Cambridge House Energy and Resource Investment Conference in Calgary on March 30 and 31. What subjects did you cover?

JL: My keynote talk was called “Making Money Using Commodity and Hype Cycles.” I overlaid two kinds of cycles: The commodity cycle is a longer cycle that we’ve been in for over 10 years now. Hype cycles refer to heightened public awareness of a new technology or a particular element on the periodic table that hasn’t been speculated on yet. A recent example would be graphite. Uranium is another really good example of a hype cycle; there was a huge amount of interest about eight years ago and hundreds of companies were formed. Investors were making lots of money with uranium stocks. Then it all withered away. There is still opportunity because some of those companies are still around and advancing their businesses.

I also did a workshop called “How to Find Billion-Dollar Companies,” where I mentioned some of the companies I like that have market caps near $100M with the blue-sky potential to get up to the $1B level.

TER: What do you think the potential is on a percentage-wise basis of finding billion-dollar companies?

JL: The odds are challenging. This is more speculative and it’s much higher risk than a nice dividend-paying stock with cash flow. These companies have lower market caps for a reason; there is either skepticism about the technology or a lot of competition. We don’t need 100 new rare earth mines, but maybe we have 100 rare earth companies. So which companies are going to win that race? It’s a bit like horse racing; you pick your favorites. The odds are you’re not going to win on every one of them.

TER: For a company to get to a $1B market cap these days is probably going to involve some acquisitions and consolidations, unless it really has some amazing property or technology.

JL: That’s very true. Sometimes companies just lay it out and if you can see that it can get the sales and the trajectory, it is certainly possible, and it does happen. It’s a challenge, and that’s what we’re looking for.

The other important part of the stock-picking process is the timeframe. The commodity cycle has a long-term timeframe, whereas the hype cycles can be pretty brief. Eventually, the market turns and the interest goes away. The challenge for these companies, if they have something real, is to keep moving the project forward until the next hype cycle comes around, when people get really interested again. If you’re investing in equities related to commodities, you’re speculating both in the market and on commodities. Sometimes you can have the right commodity, but the company you pick doesn’t follow that commodity’s price performance very well.

TER: Can you point to any companies you’ve seen in the last few years that have turned out that way?

JL: There are a few. To be honest, the other part of this strategy is that for every company I talk about and like, there are probably 100 that I don’t. There’s a lot of screening and filtering to get rid of the ones that don’t have the potential. One company that I like right now is a biotech called biOasis Technologies Inc. (BTI:TSX.V), which I think we’re at a triple on right now. It has a protein that can cross the blood-brain barrier. Therapeutic molecules can be conjugated to this protein, allowing them to cross the blood-brain barrier. This can dramatically increase an existing drug’s effectiveness. That’s one. We found it under $0.50, and now it’s in the $1.40–1.50 range.

DNI Metals Inc. (DNI:TSX.V; DG7:FSE) has also performed really well. While it’s down now, it had gone from around $0.20 to more than $0.60. I like it because it’s pushing the frontiers a little bit. It has a very large, black shale metal deposit in northeastern Alberta, a bit north of the oil sands. Historically, very few geologists studied shales, but they’ve become more popular now because of shale oil and gas. The Alberta Geological Survey has done numerous studies going back to the early 1990s that mention an anomalous metal content in the Second White Speckled Shale. The grades are really low, but the deposits are very extensive. There are huge resources in place containing a whole cocktail of materials, including rare earths, nickel, iron, vanadium, uranium, zinc, copper, cobalt and molybdenum. It’s almost a conceptual play in some ways. Although the grades are not stellar, they are a little bit higher than we’d expect anywhere else.

So it’s a resource-in-place story, but it’s also a technology story because we’ve seen other industries dealing with a low-grade resource that suddenly become economic plays because of technological breakthroughs. The best example of that is probably shale gas, where people knew for a long time that there was gas in these shales, but nobody was really making any money from them. New technology comes along, and suddenly these shale deposits are worth a lot of money.

For DNI Metals, the challenge is how to get the metals out and make money doing it. The most promising extraction method is a technology called bioleaching, which is being used by a company called Talvivaara Mining Co. Plc. (TALV:LSE) in Finland. That’s the exciting part that’s pushing the frontiers.

TER: Are these metals pretty much disseminated throughout this whole deposit, or are certain metals concentrated in certain areas?

JL: The metals are widely disseminated within a fairly uniform and consistent material. That makes it similar to coal or potash mining, where the ore bodies are tabular in shape. They may not be exciting, but at least you know what to expect and you can plan very large operations around that.

TER: With bioleaching, is in situ recovery (ISR) an option?

JL: There may be some way to use ISR, but the bioleaching at Talvivaara involves actually digging it up, piling it onto pads and leaching it by letting the bugs do their work and make acid. But there may be a way to apply in-situ technology in the upper zone. Bioleaching in heaps seems to be the approach with the most potential at the moment.

The value of the minerals in this shale is probably $40 per ton (/t). Extracting the metals for less than $30/t is the challenge. No one’s done it before, so there’s a lot of skepticism. I think a really big mining company would eventually take interest in this because it’s the kind of project that, if it can get up and running, has a life-of-mine potential of over 100 years.

TER: You mentioned uranium earlier. Despite Fukushima, people are realizing that nuclear is here to stay and one of our best sources of energy generation for the foreseeable future. Is there still life after its hype cycle has ended?

JL: I think uranium’s future is very bright and it is a critical part of the world’s energy matrix. We can’t really afford to just turn it off. There actually are a lot of benefits to using it. In terms of the actual price of uranium, the market may not be as excited about it yet, but Russia said it will not renew its supply agreement with the U.S. so analysts are anticipating shortages starting in 2013, which isn’t that far away.

TER: What other companies would you like to comment on?

JL: I like the uranium companies that use ISR technology. The main plays I’ve been considering are either in Wyoming or Texas, where you don’t get the really high grades that you find in the Athabasca Basin. There were hundreds of uranium explorers in the Athabasca Basin and the only one that’s really been successful for investors was Hathor Exploration Ltd., which was recently acquired by Rio Tinto (RIO:NYSE; RIO:ASX). With an ISR uranium project, you have a degree of certainty that a company will actually be able to build the mine and get it into production.

There are three companies in that space that I like. Going from the smallest market cap to the biggest, there is Ur-Energy Inc. (URE:TSX; URG:NYSE.A), in Wyoming. It’s on track to be a producer very soon with expected permitting for its Lost Creek mine early this summer. Then it will be able to get its mine into production probably within six months.

Uranerz Energy Corp. (URZ:TSX; URZ:NYSE.A) is a similar company in Wyoming. It has actually started its mine construction and is looking to start producing 600–800 thousand pounds (Klb) uranium/year very shortly. Both are very near-term production stories.

The last one, Uranium Energy Corp. (UEC:NYSE.A), is currently producing in Texas. It has an inventory of projects coming online and the company announced property acquisitions in Paraguay and Arizona earlier this year. These are all companies with uranium resources that, once their facilities are built, enable extremely long production runs. Typically, they’ll have a centralized uranium processing plant and all of the mines around it will be satellite projects.

The challenge for all of these companies has been permitting. The various U.S. government regulatory bodies didn’t really have anyone qualified to evaluate ISR projects because there haven’t been any new ones developed for decades. The absence of a competent regulatory structure has slowed down progress on getting these mines built. These companies have typically spent a year or two longer than they expected on the regulatory process; it’s not a reflection of any gaps in the quality of their projects.

TER: At least the regulators are willing to permit these operations, which apparently was quite a problem for a while.

JL: That’s a very good point. These are viable, useful industries with quite good safety records and low environmental impact. Again, I like to talk about the big picture.

TER: What sort of capital costs do these uranium ISR projects have?

JL: There’s a range, but the costs are usually $20–30/lb. These companies are pretty comfortable that they can eke out a living at the current uranium price, which is not going to encourage a whole bunch of new projects to come along. They’re anticipating higher long-term prices, which should make them quite profitable.

TER: Do you have any thoughts on the current gold market?

JL: I just tell people to look at a 12-year gold chart. Gold is probably the best-performing investment product over that timeframe. I personally don’t think gold has that critical a role in the money supply, but it is a place to preserve wealth and look for protection. This recent consolidation pullback is probably an opportunity, but people need to remember that bull markets don’t last forever. However, gold still has legs right now, and the trend is your friend.

TER: Looking at the “big picture,” what do you suggest people do to figure out how they should invest their money these days?

JL: Investors have to do their research and be informed. We are in dangerous times. A lot of assets are correlated so it’s hard to find safety. Sometimes maybe the best safety is not even being in the market, which I hate to say. I like finding good companies that are going to grow into viable businesses. But the markets are not kind, and we’ve seen what can happen when the flow of capital gets turned off. The valuations of publicly traded companies, big and small, in all sectors, tend to drop in unison, even precious metals prices. It’s important to be mindful of the downside. I look for upside opportunities because I’m an optimist and I assume that life will go on.

We do have some structural issues in the financial system. If that breaks down, you really don’t want to own anything that’s not tangible. That’s the strongest investment thesis for owning hard assets. That doesn’t mean owning shares in a hard asset company; that means owning the physical hard asset. If you own a car, a house or some gold, those things will still be around no matter what happens to the money supply and currency valuations. The monetary system is a wild card, and that’s the thing that keeps everybody nervous. We can make informed guesses, but nobody really knows how that’s going to play out.

TER: We appreciate your time and input today.

JL: My pleasure.

Jim Letourneau is the founder and editor of the Big Picture Speculator and is a professional registered geologist living in Calgary, Alberta. He has over 20 years of experience in the oil and gas sector.

Nothing new in macroeconomic methodology? (wonkish)

Simon Wren-Lewis, a professor of economics at Oxford University, has an interesting piece (hat tip: Mark Thoma) on the distinction and choice between micro-founded macroeconomic models and top-down models such as IS/LM (Keynesian) or other variants such as Modern Monetary Theory (MMT).

I think this is an interesting discussion, and I have penned my own thoughts on microfoundations in macroeconomics here. Mr Lewis is balanced but seems, by and large, to be on the side of Paul Krugman, who has been devastating in his critique of modern macroeconomics, especially in the wake of the financial crisis.

For good order, my own views are summarized in the following snippet.

Two obvious questions impose themselves at this point. One is whether the use of representative agents in macroeconomics has something, in general, to do with the recent soul searching among macroeconomists and the critique against the profession. And the second is whether the study of macroeconomics and demographics in particular calls for the non-use of representative agent modelling.

On the first, I don’t necessarily think that it exists to the detriment of macroeconomics as a discipline, but a couple of points need mention. First of all, I will echo the point made in Hartley (1997): given the widespread use of representative-agent modelling in almost all corners of macroeconomics, and the almost religious devotion to it in graduate and PhD economics, I think it is highly problematic that we have not had a more serious debate about its methodological merits. I would emphasize this in particular because the use of representative agents leads to very inflexible (although rigorous) mathematical models, and blind faith in these models tends to steer macroeconomics onto a very narrow methodological path. During my research and initial groundwork for the thesis, I actually did write my own representative-agent model to suit my specific agenda, but found in the end that I was paying more tribute to the laws of calculus than to the connection between ageing and capital flows/open-economy dynamics; as I set up the problem, I ended up very close to the original benchmark problem.

It is interesting in this respect that Mr Lewis spends quite a bit of time trying to come up with a name for what he calls the alternative to traditionally micro-founded general equilibrium models (in either dynamic or static form). It seems that despite the fact that such “ad-hoc” models have been around for a long time, we have not been able to come up with a name for them. Here is Mr Lewis.

The issue I want to discuss now is very specific. What is the role of the ‘useful models’ that Blanchard and Fischer discuss in chapter 10? Can Krugman’s claim that they can be more useful than micro founded models ever be true? I will try to suggest that it could be, even if we accept the proposition (which I would not) that the micro foundations approach is the only valid way of doing macroeconomics. If you think this sounds like a contradiction in terms, read on. The justification I propose for useful models is not the only (and may not be the best) justification for them, but it is perhaps the one that is most easily seen from a micro foundations perspective.

For those uninitiated in the taxonomy of modern economic teaching, this will seem odd. But it isn’t.

I would venture the claim, then, that the general Keynesian framework of IS/LM (or “curve shifting” models in general) is still seen as an undergraduate tool, or a tool for business students with little or no foundation in mathematics. If this is the informed view of the economics profession as a whole (which I think it is), then there is certainly no need to elevate such models to the honour of being alternatives to conducting real and serious micro-founded macroeconomics. The sarcasm, I hope, is obvious.

The general equilibrium framework in its dynamic form, with dynamic programming problems and sophisticated econometric methodology to estimate the optimized equations, is beyond the reach of most people. As a result, the ivory tower in which many (if not most) academic economists do their research serves as an incubator for skepticism (even pity) towards those who might have the audacity to argue that what they are doing is wrong. Indeed, I would argue that despite signs that a genuine critique of micro-founded and purely mathematical economics has emerged, the general trend is still one of “physics envy” in economics.

Hence, the debate for and against micro-founded models very quickly turns into a discussion between those who do not understand the language of modern macroeconomics and those who do. Mr Lewis, obviously, does not fit this description. Indeed, the financial crisis seems to have given birth to a growing critique, from within the macroeconomic research community, of blind reliance on the micro-founded framework. Krugman’s piece from 2009 (linked above) is already a classic example of the antithesis, but there have been others.

Buiter had a go in relation to monetary economics back in 2009;

Charles Goodhart, who was fortunate enough not to encounter complete markets macroeconomics and monetary economics during his impressionable, formative years, but only after he had acquired some intellectual immunity, once said of the Dynamic Stochastic General Equilibrium approach, which for a while was the staple of central banks’ internal modelling: “It excludes everything I am interested in”. He was right; it excludes everything relevant to the pursuit of financial stability.

Menzie Chinn of Econbrowser did a useful overview of the initial flurry (see also the Economist) and ended up arguing that the “modern macroeconomic apparatus” should not be jettisoned. Indeed, Mr Chinn points out (using his own experience) that his PhD training was not doctrinaire. I believe him, of course, but I would still argue that, right from the early steps as an undergraduate, those who do not devote considerable time and effort to framing their research proposals on the basis of mathematically rigorous micro-founded models may find their chances of proceeding as academics diminished.

In my own work as an economist, which centers on demographics and economics, the issue of micro foundations is acute. The life cycle framework and the Permanent Income Hypothesis are both micro-founded models which have been widely used to form aggregate models. But we also know that many of the most obvious conclusions from such exercises are untrue (e.g. the extent of dissaving as a population ages). Perhaps these inconsistencies with economic reality can be explained at the micro level (and I certainly think we should try to address them there), but there is also a need for a pure macroeconomic theory of how population dynamics affect complex macroeconomic processes.

On the state of macroeconomics itself, a colleague once told me that economics was fine mainly because it stuck to doing what it did best. I only conditionally agree. Learning economics is a brilliant way to cultivate a sharp mind, and it also offers a reasonably good framework for making sense of the processes which govern society and human behavior. However, the way economics is often narrated as a sub-discipline of math and physics is unfortunate. I am all for quantitative analysis and use it every day in my own work (mainly empirical work), but I would think that the reason Mr Lewis finds it difficult to come up with a name for the alternative to mainstream macroeconomics is precisely that such an alternative does not currently exist. That is a pity.

Macroeconomics: A reading list

Economics is a rich and fascinating subject. But all too often, the teaching process forces young people in the field to look at the tail of the elephant, to think about macroeconomics as the game of solving dynamic models. There is actually much more going on. (On a related note, you might like to see Books that should be read before starting a Ph.D. in economics on this blog, 18 May 2011.)

In this blog post, we walk through the evolution of the key ideas in historical order, and offer suggestions for interesting readings, which will help you see the fuller picture. Many of them are on your reading list, but some are not.

The old paradigm

Nobody tells it better than The age of uncertainty by John Kenneth Galbraith.

The old paradigm is now in the dustbin of history. But in order to comprehend the revolution in macroeconomics, it is rather useful to start from there. One encounters these arguments from time to time, so it’s worth knowing about the furniture of that mind.

The revolution of modern macroeconomics

The starting point is a speech: The role of monetary policy by Milton Friedman, American Economic Review, 1968, which had enormous influence in arguing that the mainstream Keynesian paradigm was fatally flawed, and that it was not going to work as a guide to policy on a sustained basis. By the early 1970s, the empirical evidence was showing that Friedman was on the right track, which led to everything that followed. This speech is arguably the beginning of modern macroeconomics. At the same time, this was only an argument conducted in English, and not a model.

The next big milestone was the Lucas critique: Econometric policy evaluation: A critique by Robert Lucas, Carnegie-Rochester Conference Series on Public Policy, 1976. This devastated traditional macroeconomics. In addition, it’s a remarkably elegant idea.

Lucas, Sargent and others mapped out a work program in a series of non-technical pieces, which were enormously influential. They set a generation of economists going to build a class of models that were rooted in the intuition of Friedman, 1968, and were invulnerable to the Lucas critique. You should read: Understanding business cycles by Robert Lucas, Carnegie-Rochester Conference Series on Public Policy, 1977; After Keynesian Macroeconomics by Lucas and Sargent, Federal Reserve Bank of Minneapolis Quarterly Review, 1978; Methods and problems in business cycle theory by Robert Lucas, Journal of Money, Credit and Banking, 1980.

As important as the Lucas critique was Rules rather than discretion: The inconsistency of optimal plans by Kydland and Prescott. An accessible set of materials on this work is found in their 2004 Nobel Prize page.

This work came to fruition in the early 1990s in the form of the NK-DSGE model with a policy rule. Important tools got developed in a classical setting (the RBC model), and then Keynesian frictions were put in, to give the NK-DSGE model. It has many problems, but with this, the Lucas program did work out. Nice readings on the NK-DSGE model are The science of monetary policy: A new Keynesian perspective in the JEL by Clarida, Gali, Gertler (1999), and their Monetary policy rules and macroeconomic stability: Evidence and some theory in the QJE in 2000.

The new macroeconomics is nicely showcased in Technology, employment, and the business cycle: Do technology shocks explain aggregate fluctuations? by Jordi Gali in AER, 1999. This is a wonderful example of confronting empirics with theory, plus a fundamental (if highly controversial) contribution in the eternal quest for the sources of business fluctuations.

On the other side, there is a powerful critique of the micro-founded approach to macroeconomics: The scientific illusion of empirical macroeconomics by Larry Summers, Scandinavian Journal of Economics, 1992.

By the late 1990s, there was a lot of progress to report. There is a nice article: Thirty-Five Years of Model Building for Monetary Policy Evaluation: Breakthroughs, Dark Ages, and a Renaissance by John B. Taylor, Journal of Money, Credit and Banking, 2007. There is the best single book on monetary policy: Monetary Policy Strategy by Frederic S. Mishkin, 2007. And, there are two other nice articles: A stable international monetary system emerges: Inflation targeting is Bretton Woods, reversed by Andrew K. Rose, Journal of International Money and Finance, 2007, and How the World Achieved Consensus on Monetary Policy, by Marvin Goodfriend, Journal of Economic Perspectives, 2007.

The second stage

Once the basic plan was laid, important work emerged in connected fields. A critical issue that came to the fore was the role of finance in macroeconomics. Agency costs, net worth, and business fluctuations by Bernanke and Gertler, AER 1989, is the most elegant illustration that financial structure matters for macroeconomics.

We close this off with a canonical reference on fiscal policy from a macro perspective. A good recent treatment is Activist fiscal policy to stabilise economic activity by Auerbach and Gale, from the 2009 Jackson Hole symposium.

Post-crisis revisionism?

On this, see Monetary policy and financial stability: Is inflation targeting passe? by Takatoshi Ito, July 2010.

Demographics and Macroeconomics – Part 2 (Wonkish)

I don’t suspect anyone remembers part 1 of this series, so if you want to refresh your memory, you can have a look here. In that note, I treated some of the more theoretical issues: how demographics might affect long-run growth as well as open-economy dynamics. In particular, I discussed the broad tenets of the life cycle framework and how it relates to savings and investment behavior as a function of ageing, and where I think there is room for improvement and further study.

So, in this one I thought I would look at an altogether more practical topic: asset demand and prices as a function of demographics. Again, this is a substantial area in the finance and macroeconomics literature, and I will not give a detailed literature review here. Besides, if you want to move straight to investment and portfolio implications, this piece by Alicia Damley and this piece by Ed Dolan are really spot on in terms of what you need to think about. Basically, you want to buy the young guns and sell the old farts, and the key to obtaining this insight is to shift the focus from population size to population structure (age structure). I have been harping on about this since this blog’s inception 5 years ago, and I am doing a PhD about it, so it is with pleasure that I see the discourse hitting the tapes of Seeking Alpha, which indicates that it is grabbing hold of people other than those stuck in the university ivory tower.

In this sense, this is hardly a new story. Emerging markets represent the main investment story in a post-Lehman context. Everyone wants to buy India, China (although she is quite different), and Brazil, and thanks to a myriad of ETFs and other market trackers, you don’t need to know your way around the streets of Bangalore to gain exposure to the Indian growth story.

This is a turkey shoot then. And I largely agree with the main thrust of the argument.

The real maturing of the emerging world, which began some 10-12 years ago and which will continue for the next decades, is undeniably a force for good for savers and investors. The real question is whether it is too good, and thus whether there will end up being too much capital chasing too little yield. In order to understand this link, you need the second part of the equation (see part 1): how demographics affect capital flows and the transfer of savings between economies.

In this note I will talk about the idea of a life course, but in the way that it is traditionally narrated. The life course is a sociological theory which describes the phases of life, and in this sense it is broader than the idea of a life cycle, which only describes the flow of investment and savings. Indeed, in finance and economics you only hear about the life cycle, even though scholars who investigate, for example, the dynamics of house prices as a function of demographics are essentially deploying a life course framework.

What is the Life Course then?

Well, Wikipedia does a good job of explaining it for the layman, and this small snippet captures the essence quite well:

In particular, it [Life Course Theory] directs attention to the powerful connection between individual lives and the historical and socioeconomic context in which these lives unfold. As a concept, a life course is defined as “a sequence of socially defined events and roles that the individual enacts over time” (Giele and Elder 1998, p. 22). These events and roles do not necessarily proceed in a given sequence, but rather constitute the sum total of the person’s actual experience. Thus the concept of life course implies age-differentiated social phenomena distinct from uniform life-cycle stages and the life span.

The only mental leap you need to perform here is to replace socially defined events with economically defined events and you have yourself a working model. Now, if the finance geeks out there think that I am turning soft, and if the sociologists believe that I am reducing their complicated theory of human lives to numbers and equations, both groups have my sympathies.

Yet, this is a part of my master plan to elevate ageing and the change in age structure to the ultimate unit of analysis on a macroeconomic level. And in order to do this, we need more than merely the life cycle or the life course. We need them both. In fact, only by fusing the two will we be able to develop a framework rich enough to deal with the complexities of ageing and macroeconomics. Indeed, I am betting a good deal of my academic oeuvre on this.
Consequently, if a socially defined event of interest to a sociologist or demographer might be the age of marriage, age of first childbirth, age of first encounter with alcohol, age of sexual debut etc., then an economically defined event might be something along the lines of age of maximum borrowing relative to asset value, age of purchase of first home, purchase of durables as a function of age and, of course, the main topic in the financial literature as it currently stands: portfolio choice as a function of age (stocks and bonds basically, but you can vary the portfolio here as much as you like, at least in principle).

So, this inclusion of the life course into the general thinking of macroeconomics is crucial, and even though economists always talk about the life cycle, they are often implicitly assuming a life course perspective.

Beyond that, I will keep it short here. There is a myriad of sources on ageing and asset prices and demand in general. The main man in the world of economics and finance is James Poterba from MIT (just check his list of papers), and I would emphasize in particular the strand of literature that deals with housing and demographics (I have a paper coming here).
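To make the distinction concrete, the economically defined events discussed above can be sketched as a stylised data structure. The specific ages and the classic "100 minus age" equity rule below are illustrative assumptions of mine, not estimates from the literature or this post:

```python
from dataclasses import dataclass

@dataclass
class LifeCourseEvent:
    age: int    # age at which the event occurs
    label: str  # an economically defined event

# A hypothetical, stylised life course (ages are assumptions for illustration)
life_course = [
    LifeCourseEvent(25, "first durable-goods purchase"),
    LifeCourseEvent(30, "purchase of first home"),
    LifeCourseEvent(35, "maximum borrowing relative to asset value"),
    LifeCourseEvent(65, "retirement and the start of dissaving"),
]

def equity_share(age: int) -> float:
    """Portfolio choice as a function of age, using the classic
    '100 minus age' rule of thumb (purely illustrative)."""
    return max(0, min(100, 100 - age)) / 100

for event in life_course:
    print(f"age {event.age}: {event.label} "
          f"(rule-of-thumb equity share {equity_share(event.age):.0%})")
```

The point of a fused framework is precisely that the event sequence (life course) and the savings/portfolio behavior attached to each event (life cycle) live in the same model.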

Other Alpha Sources for July 2, 2010

Steve Waldman has a very good post this week about the folly of the austerity vs non-austerity discussion which seems to be doing the rounds at the moment. In fact, if you take a mental picture of the current financial market discourse, most arguments can be bracketed along the two axes of austerity vs non-austerity (as a matter of preference) and inflation vs deflation (as a matter of prediction). Note in particular the following from Steve:

I think the austerity debate is unhelpful. There are complicated trade-offs associated with government spending. If the question is framed as “more” or “less”, reasonable people will disagree about costs and benefits that can’t be measured. Even in a depression, cutting expenditures to entrenched interests that make poor use of real resources can be beneficial. Even in a boom, high value public goods can be worth their cost in whatever private activity is crowded out to purchase them. Rather than focusing on “how much to spend”, we should be thinking about “what to do”. My views skew activist. I think there are lots of things government can and should do that would be fantastic. A “jobs bill”, however, or “stimulus” in the abstract, are not among them. If we do smart things, we will do well. If we do stupid things, or if we hope for markets to figure things out while nothing much gets done, the world will unravel beneath us. We have intellectual work to do that goes beyond choosing a deficit level. The austerity/stimulus debate is make-work for the chattering classes. It’s conspicuous cogitation that avoids the hard, simple questions. What, precisely, should we do that we are not yet doing? What are the things we do now that we should stop doing? And how can we make those changes without undermining the deep social infrastructure of our society, resources like legitimacy, fairness, and trust?

Elsewhere, in the world of academia, I also noted this piece by Mark Bauerlein, Mohamed Gad-el-Hak, Wayne Grody, Bill McKelvey, and Stanley W. Trimble in the Chronicle of Higher Education on the avalanche of poor research. The authors point towards a growing problem of sub-par research, citing, as far as I can see, three things: first, that the growing amount of poor research is a strain on the system of peer-reviewed work (too many articles to review by too few able reviewers); second, that the pressure to produce in academic circles leads to quantity over quality; and third, that the increasing tendency for funding to flow by default to those with the most publications exacerbates the problem.

While brilliant and progressive research continues apace here and there, the amount of redundant, inconsequential, and outright poor research has swelled in recent decades, filling countless pages in journals and monographs. Consider this tally from Science two decades ago: Only 45 percent of the articles published in the 4,500 top scientific journals were cited within the first five years after publication. In recent years, the figure seems to have dropped further. In a 2009 article in Online Information Review, Péter Jacsó found that 40.6 percent of the articles published in the top science and social-science journals (the figures do not include the humanities) were cited in the period 2002 to 2006.


Our suggestions would change evaluation practices in committee rooms, editorial offices, and library purchasing meetings. Hiring committees would favor candidates with high citation scores, not bulky publications. Libraries would drop journals that don’t register impact. Journals would change practices so that the materials they publish would make meaningful contributions and have the needed, detailed backup available online. Finally, researchers themselves would devote more attention to fewer and better papers actually published, and more journals might be more discriminating.

In the context of the world of academic economics which I am accustomed to, I can see most of the issues the authors point to. Especially, I would point towards the pressure to produce, which is extensive in economics. However, I am not sure about the argument that a large bulk of research is bad simply because it takes a lot of time to digest. I like to think that a study which might not be deemed relevant today may find its day in the sun in the future if the consensus and discourse change.

Economist Kartik Athreya from the Richmond Fed (Virginia) is not too fond of econbloggers voicing their opinions on macroeconomics because, as he says, it is a topic much too complicated for econbloggers to understand (the original link to the essay is gone, but FT Alphaville and Scott Sumner provide good coverage and quotes). Now, I don’t even know where to begin here, but as both an econblogger and a semi-academic economist I naturally ought to be able to muster some opinion. But really, where do you start? Well, I especially noted this:

So far, I’ve claimed something a bit obnoxious-sounding: that writers who have not taken a year of PhD coursework in a decent economics department (and passed their PhD qualifying exams), cannot meaningfully advance the discussion on economic policy. Taken literally, I am almost certainly wrong. Some of them have great ideas, for sure. But this is irrelevant. The real issue is that there is extremely low likelihood that the speculations of the untrained, on a topic almost pathologically riddled by dynamic considerations and feedback effects, will offer anything new. Moreover, there is a substantial likelihood that it will instead offer something incoherent or misleading.

Let me be very, very clear here. The ability to solve dynamic optimization problems, to solve complex differential equations, and to derive, on paper, various statistical estimators does not make a good economist. You do all this in order to become part of the initiated crowd and in order to speak a language which dazzles colleagues and the greater public with its complexity and which, crucially, is the main reason why economists today still form a gated community. This is natural since it takes half a mathematics degree to say anything which your fellow colleagues will accept as a real economic argument.

But I digress (and rant too). Math is not the problem as such but a symptom of some of the problems with modern economics. In general, math makes you smart and helps build rigorous arguments, which helps in any scientific context. As such, I will reciprocate Mr. Athreya’s point; just as the econbloggers are not stupid, neither are academic economists (they are devilishly smart for the most part). Yet, the latter have remained stuck too long and too far up the ivory tower to see that the econbloggers are not leeches who prey on the public through simplification of a complex topic, but in fact help to bring an otherwise unworldly macroeconomic discourse down to earth.

We as economists should encourage this, not move further up the ivory tower.

The Macroeconomic Effects Of Stimulus Spending

Robert Barro and Charles Redlick wrote an op-ed in the WSJ (link) on their original paper (link) in which they discuss the macroeconomic effects of fiscal stimulus. They construct long-term time series of U.S. macroeconomic data to examine whether real GDP increases in line with the spending multipliers, and whether reductions in marginal tax rates, rather than spending increases, tend to exert a stronger effect on GDP growth.

“Our research also shows that greater weakness in the economy raises the estimated multiplier: It increases by around 0.1 for each two percentage points by which the unemployment rate exceeds its long-run median of 5.6%. Thus the estimated multiplier reaches 1.0 when the unemployment rate gets to about 12% … For data that start in 1950, we estimate that a one-percentage-point cut in the average marginal tax rate raises the following year’s GDP growth rate by around 0.6% per year. However, this effect is harder to pin down over longer periods that include the world wars and the Great Depression.”
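The quoted estimates imply a simple linear rule of thumb, which can be sketched as follows. The functional form and the implied baseline are my own back-of-the-envelope reading of the quote, not the authors' actual specification:

```python
def estimated_multiplier(unemployment_rate, median=5.6):
    """Back-of-the-envelope reading of the Barro-Redlick quote: the
    spending multiplier rises by ~0.1 for every 2 percentage points of
    unemployment above its long-run median of 5.6%, reaching ~1.0 when
    unemployment hits about 12%.  The implied baseline at the median is
    therefore 1.0 - 0.1 * (12 - 5.6) / 2 = 0.68."""
    baseline = 1.0 - 0.1 * (12.0 - 5.6) / 2
    return baseline + 0.1 * (unemployment_rate - median) / 2

print(round(estimated_multiplier(5.6), 2))   # baseline at the median
print(round(estimated_multiplier(12.0), 2))  # roughly 1.0, as quoted
```

Note how modest even the depression-level multiplier is under this reading: at 12% unemployment a dollar of spending only just pays for itself in output.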

Rok Spruk is a supply-side economist and a libertarian. He (currently) lives in Slovenia, where he studies economics and business. His fields of research are economic growth, macroeconomics, international economics, global competitiveness, and tax reforms. His views, observations and ideas are posted on his blog.

The Father Of Macroeconomics

June 5th is the birthday of John Maynard Keynes, a brilliant economist whose influential work during the 1930s changed the course of history. He has had a great deal of influence on generations of economists, including advisers to our current president and Congress. It’s too bad he was wrong in virtually all of his innovations.

Keynes is considered the father of macroeconomics, one of the two major divisions of modern mainstream economics. Microeconomics is the description of reality, the study of how people interact and how markets work. Macroeconomics, on the other hand, is the study of how government can efficiently manipulate markets and people.

In the present world, economic reality and truth is largely ignored. The vast body of brilliant intellectuals involved in economics occupy themselves with building and analyzing macro models for government to more easily control the economy. They use their massive mathematical and analytical brainpower to try to develop more clever and complex models to predict the future and show politicians which strings to pull.

It can be clearly seen that the macroeconomists have failed miserably in their interventions to achieve a stable economy and well-being for the people. It was a vast experiment over many decades and is a profound and horrible tragedy. All macroeconomists who promoted the interventionist state should be ashamed that they brought this great country to its knees. They should be crawling under a rock in embarrassment. That is not the way of the intellectual, however. The problem, they say, is that they didn’t intervene enough.

All of the macro models and manipulation are built on two false premises. The first is that government intervention can succeed at bringing long-term prosperity to people in an economy. The second is that government should intervene at all, even if success were possible.

Keynes conceived that, by measuring and controlling aggregates such as aggregate demand, total unemployment and gross domestic product, the central planning gurus pulling the strings could make everything coordinate, put everyone to work and advance toward a post-scarcity utopia.

The coordination problem is one that central planners have always had to deal with, and the former Soviet Union was one of the clearest examples of the problem and its results. The abolition of voluntary markets and the institution of central planning after the Bolshevik Revolution resulted in mass starvation and deprivation for many millions of people. Lenin was forced by reality to enact the New Economic Policy in 1921, a limited reinstitution of markets, to prevent further deaths and the possible overthrow of the regime.

Macroeconomics is, in its very essence, the rationalization of central planning. The core fallacy with all of macroeconomics is that data aggregated over a large, diverse area can be used to coordinate the activities in each locality and each transaction between actors in the markets. Each locality in a vast economy has its own peculiarities of weather, geography, demographics, culture and a host of other characteristics. The people each have their own goals, hopes, dreams, advantages and limitations.

It is not possible to impose a uniform solution on 300 million different people over millions of square miles of coastline, mountains, deserts and tundra. The problems and opportunities of small desert communities are vastly different from those of northern metropolitan centers. Macroeconomic policy is necessarily a generic solution to particular problems. The inevitable result is discord, waste and conflict. Because macroeconomics is inherently political, macro solutions pit one group against another for control of the strings.

This brings us to the second inherent weakness of macroeconomic policy. Even if it were possible to devise efficient macro solutions, it would be wrong to impose them. A slave owner might become an expert at wringing the most productivity from slaves. That he is able to do so does not mean he should. He should, rather, not enslave them. He should respect their rights and only enter into voluntary trade.

The same applies to national governments. Many people assume that it is a proper role of government to use coercion and confiscation to make people do things that will increase employment, aggregate income, gross domestic product or any other artificial measure. People in a free country, however, are not slaves of the state. Whether a policy will increase GDP or not does not give a politician the right to interfere with the voluntary interaction of market participants.

J.M. Keynes was indeed a brilliant man. Like so many brilliant people today, he was profoundly wrong and arrogant in his wrongness.