One more time for the world: There is no simple relationship (if any) between Taylor-rule coefficients and policy preferences

The lack of a relationship between the size of the coefficients in a Taylor rule for monetary policy and the underlying preferences for stabilization of macroeconomic goals is well known. I often use it as an exam question in monetary economics. When I present the result to students for the first time—it is fleshed out in Lars Svensson's "Inflation Forecast Targeting: Implementing and Monitoring Inflation Targets" (European Economic Review 41, 1997, 1111-1146) for a simple backward-looking IS/AS model—I often state that many tend to overlook this, and that it is a common misconception that, e.g., a relatively high coefficient on the output gap in the rule indicates a relatively high preference for output-gap stabilization.

I sometimes fear that I thereby make the classic mistake of putting up a straw man for lack of a better motivation for why this is an important result to emphasize. But I just became aware of a new example of the peculiar endurance of this misconception. In John Cochrane's positive and very appetizing review of John Taylor's new book, "First Principles" (W. W. Norton & Company, Inc., 2012), Cochrane writes:

The Taylor rule actually stands quite a bit to the left of the “inflation targeting” tradition that says central banks should only respond to inflation, ditching the whole GDP response — because, in John’s words (p. 127)

‘Some Federal Reserve officials worry that a focus on the goal of price stability would lead to more unemployment. But history shows just the opposite.’

John answers that the "dual response" really is a "single mandate." It is a worthy effort, but one I find strained. The reason for the GDP response is, explicitly in the models, to accomplish a tradeoff between inflation and output volatility.

There are two intertwined mistakes here. As Svensson's model clearly shows, even a strict inflation-targeting central bank, i.e., one that cares only about inflation stability, would respond optimally to the output gap. So following Taylor's rule would be a good idea (given that the "magic numbers" 1.5 and 0.5 somehow were appropriate for all countries around the globe). Why is that? Well, output may be a good predictor of future inflation. So it serves as an intermediate target worth responding to—even for a completely right-wing "inflation nutter". It is not a goal variable per se.
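
To see the logic, consider a stripped-down version of Svensson's backward-looking model (my notation, with constants suppressed and all coefficients positive):

\[
\pi_{t+1} = \pi_t + \alpha y_t + \varepsilon_{t+1}, \qquad
y_{t+1} = \beta y_t - \gamma (i_t - \pi_t) + \eta_{t+1}.
\]

The interest rate affects output with a one-period lag and inflation with a two-period lag, so a strict inflation targeter simply sets \(E_t\pi_{t+2} = \pi^*\), which yields the rule

\[
i_t = \pi_t + \frac{1}{\alpha\gamma}\,(\pi_t - \pi^*) + \frac{1+\beta}{\gamma}\, y_t.
\]

The output-gap coefficient \((1+\beta)/\gamma\) is strictly positive even though output carries zero weight in the loss function: it reflects nothing but output's role as a predictor of future inflation. (With a positive weight on output stabilization, the coefficients change, but into convolutions of preference and structural parameters that cannot be inverted back to preferences by simple inspection.)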

Therefore, it is also false to claim that the reason for the GDP response is "explicitly in the models, to accomplish a tradeoff between inflation and output volatility". To emphasize this point with a different model example, note that in the simple New Keynesian model, a Taylor rule that responds only to inflation may actually be the one that secures the optimal tradeoff between inflation and output volatility. This has been known at least since Clarida, Galí and Gertler, "The Science of Monetary Policy: A New Keynesian Perspective" (Journal of Economic Literature 37, 1999, 1661-1707).

All of these matters are simple facts which are completely orthogonal to your own policy preferences, to whether you favor the Taylor-rule approach to monetary policymaking or not, and so on. So one more time for the world: There is no simple relationship (if any) between Taylor-rule coefficients (their size or existence) and policy preferences.

But at least I apparently don't put up straw men when teaching this point. Hopefully, the point will sink in over time.

Apologies in advance if I have misinterpreted Cochrane's post in this dimension.


Countdown for Krugman

I recently wrote that I thought Paul Krugman wrote slightly too many blog posts, and that too many people spent time commenting on them, and commenting on others' comments, and so on and on—a "Krugman multiplier". Now an explanation for Krugman's exceptional blog productivity is beginning to offer itself.

The New York Times, which hosts Krugman's blog, has introduced a counter on its web edition such that you can only read ten articles per month. So, only ten Paul Krugman blog posts per month if that's your only reason for visiting the NYT. A rough guesstimate tells me that this amounts to only around ten to twenty percent of his output. The problem with this, of course, is that you then have to be careful when picking a post to read (if you don't feel like paying for potential scientific commentary). With Krugman's indisputable talent for picking inviting titles, this is a daunting task. I just spent one of my ten shots on the post "Eurodämmerung", which is mainly a link to a YouTube video of the finale of Wagner's Götterdämmerung. Great music, but a little disappointing.

And, of course, I had to see what “Raygunomics” was. As you can see from the screen shot, this will be the last I can get from Krugman this month. As you can also see, it was indeed really funny, and well worth the click:

Copyright: New York Times, 2012. All rights reserved.

Good luck to the New York Times with this new pricing initiative. I will consider the offer, but maybe I will be able to free-ride on the Krugman multiplier, and get his more substantive posts elsewhere?


When "failure" in economics is "success" and vice versa?

This post fully lives up to the mantra of the blog, as it contains a lot of "stochastic ramblings" (a nod to Greg Mankiw's mantra of "random observations"). "Stochastic" because I wander unplanned around important subjects concerning the development of economic science, and "ramblings" because most of it is scientifically unsubstantiated talk with little coherence, which just emerges from my gut. You have been warned.

The backdrop for the following is a festive occasion—an occasion I am truly and deeply happy about. The Institute for New Economic Thinking (INET), which is partly funded by George Soros, has given a grant to establish a center on Imperfect Knowledge Economics (IKE) at the University of Copenhagen. It was therefore a deservedly happy colleague, Katarina Juselius (who will be director of the center), who opened the program marking the launch of the center. Katarina Juselius and Søren Johansen have for a few years now worked on applying Søren Johansen's econometric methods (which could have earned him a Nobel Prize, in my honest opinion) to asset-pricing behavior, together with Roman Frydman and Michael D. Goldberg. The latter two, in turn, have developed a new concept of rational behavior that takes into account that agents may not know the true underlying probability distributions (i.e., Knightian uncertainty). This theory is presented in their 2007 book "Imperfect Knowledge Economics: Exchange Rates and Risk". The new center is devoted to this line of research.

The executive director of INET, Robert A. Johnson, gave a speech on the necessity of new thinking (well, isn't new thinking always needed?). The audience was presented with a standard statement of the kind "I know mathematics, and I like math, but . . ." followed by the now well-established, and politically correct, tirade against the excessive use of mathematics in economics, and the potential blame one can put on the profession for not having foreseen the current financial crisis, since it mistook math for beauty and/or truth. I could not help smiling. Not so much because this kind of statement is an extremely cheap shot at economics, but mostly because IKE modeling and the co-integrated VAR model both use math with hair on its chest. But maybe the math used is sufficiently ugly to be politically correct? I don't know, but Robert A. Johnson concluded his talk with some words on economic education, where the punch line was that any economics student should learn to be critical towards the models and concepts they are presented with. I can only concur, and I am happy that this is what we actually teach at my department. Indeed, a group of our students just won an international econometrics game, and the spokesperson for the group said that the Copenhagen students were better precisely because they were more critical towards the assignment, the methods, and so on. In all fairness, some students, representing "critical students" (I hope we don't have non-critical students), later talked about what they saw as currently dogmatic teaching with undue emphasis on mathematics and rational expectations. I wonder how these students will receive IKE and the co-integrated VAR: these are models made by very dogmatic persons (indeed, I hope the authors believe in what they are writing). Right now it seems that to some, "anything different" is better.

Roman Frydman talked about the limits of knowledge and, of course, promoted his research with Goldberg, Juselius and Johansen. Most of the time was devoted to a critique of current paradigms in economic thinking and why it is important to break them. On this occasion, Frydman was understandably in a good mood, so the critique of "mainstream" thinking was not as stern as in his writings with Goldberg. Occasionally, in those writings, they get quite vile and confrontational. As emphasized by Kevin Hoover in his review essay of Frydman and Goldberg's recent book, "Beyond Mechanical Markets: Asset Price Swings, Risk, and the Role of the State", there is something about talk of "The Orwellian world of 'Rational Expectations'" that doesn't make people embrace your new ideas right away. I may add that they also use metaphors such as "A World of Stasis and Thought Uniformity". Even though their thoughts here wander towards Germany, they thoughtfully do not draw the parallel to other kinds of (earlier) German thought uniformity. In any case, they apparently feel it necessary to be insulting in order to help sell their new, very interesting, ideas. I would probably have spent more time trying to pitch my ideas, instead of labeling a whole profession mindless sheep. (They do, e.g., write that the Rational Expectations Hypothesis "leads economists to imagine a world of perfect knowledge and universal thought uniformity" (p. 65); i.e., it is a theory that controls scientists. Please count me out.)

Something definitely new in Frydman's talk was the announcement that the most recent paper he, Goldberg, Juselius and Johansen had completed had just been submitted to the American Economic Review. Yes, he actually mentioned the name of the journal, which is a bit unusual. Of even more interest was his ensuing comment, said with a smile, that he did not have high hopes for acceptance. It got a laugh from the audience, but it struck me as quite serious. It touches on basic questions about how quality is measured in research. Normally, publishing in a journal like the AER would be taken as a good sign. But clearly, in the case of these authors, who challenge Stasi thought uniformity, an acceptance would be a failure: it would mean that they are part of the uniformity they try to escape. So my guess is that Frydman actually hopes for a rejection, as this will confirm that he is up against Orwellian forces. If they are accepted at the AER, everything he and Goldberg have written about the profession is at worst wrong, or at best outdated. So conventional success would be failure for the new thinkers. And this is the core issue: How is the quality of truly new thinking measured? Is it of good quality when it is acknowledged by some academics, but not the majority? Is it of bad quality if the majority embraces it?

All these important scientific matters did not get much attention in the press (and what the press did pick up, it mostly got wrong). What did get a lot of attention, on the other hand, was that George Soros himself came for the opening. This fact made me arrive in good time, as I anticipated that the conference venue and its surroundings would be blocked by "Occupy Wall Street"-type protesters. After all, we are talking about a financial speculator of Olympic proportions, who more or less singlehandedly brought down the British pound in the early 1990s and who was convicted of insider trading in France. I reckoned he would be the symbol of all the financial speculation everybody has turned against since the crisis started. But there were no protesters in sight. I found out that I was completely out of sync with reality. Soros is now considered a "good" speculator (whatever that is), and his recent criticism of financial speculation and his philanthropic activities have apparently made him politically correct. The most left-wing newspaper in Denmark gave him relatively good press, and they normally hunt down anybody earning more than a million a year (in whatever currency).

Soros was featured in a conversation with my colleague Niels Thygesen, where the subject was mainly the current European crisis. Soros was not optimistic about the euro, and he conveyed a quite strong aversion towards Germany and the leading role it plays. He also criticized the ECB's alleged adoption of a "German Bundesbank model" of anti-inflationary policies. Much to my surprise, Soros saw such a price-stability objective as one that could be compatible with deflation. In relation to the main subject of the event, Soros mentioned that he had never examined rational expectations, as he found the idea unrealistic. But he mentioned quite a few alternative theories he had developed over the years.

Finally, it was interesting that several speakers felt a need to emphasize their nationality, ethnicity and religion. This is normally, for good reasons, considered irrelevant at academic events. I will, however, follow suit and note that I am as pale as one can be, and born on the outskirts of Aarhus, which is located in Jutland, Denmark (and I have no God that I know of). A prominent feature of people from Jutland is humility. That is why I cannot help ending my ramblings by noting that this new center is simply doing what such a center should do (and is going to do): carry out basic research at a high level. But apparently, to get attention and money, one must at a minimum present one's work as revolutionary and as a complete change of paradigm, while desecrating years of "failed thinking" in the profession. With my ethnic background, I am not cut out for that game.

But I do acknowledge that for a festive occasion, the truth that “we are going to do what we have always been doing” is definitely not a good sales pitch.


US Output Gap: Still negative

John Taylor recently showed that the United States is currently much farther from returning to "potential output" than it was in the recession of the early 1980s, when above-average output growth during the recovery secured a return to the potential-output path. Apart from the obvious implications for the evaluation of the current US recovery, this has led to a deeper discussion about the dangers of extrapolating "potential" output from past values (e.g., maybe the 2007 value was just too high?).

James Bullard of the St. Louis Fed argues (pdf of speech) that the financial crisis led to a very persistent negative wealth shock that has pushed potential output down. Hence, the output gap is not necessarily as negative as simple statistical detrending methods may suggest. In consequence, there could be a danger that the US is about to repeat the mistakes of the 1970s, when the output gap was believed to be negative but in retrospect was determined not to have been. The result was too loose a monetary policy with ensuing high inflation. This is, e.g., argued by John Cochrane here, although he doesn't endorse the wealth-shock idea. Neither does Paul Krugman here, based on the strong argument that a fall in asset prices doesn't destroy productive capacity.

The heart of the matter is that the output gap is a phantom; it is immeasurable. We observe output, but we do not observe what it is to be compared to in order to produce an output gap. And what it is compared to also differs from writer to writer and is named more or less appropriately. For example, "potential output" is a strange term. I read it as output when all resources are used to the fullest extent. So in the strong version it is the outcome of a centrally planned slave economy (where everybody works, say, 16 hours per day); in the milder version it is the efficient level of output. But nobody, and in particular no monetary policymaker, would attempt to steer the economy such that output matches efficient output.

So what most have in mind—but do not say out loud in the US—is a version of the concept of the "natural rate of output". Since that term was coined by Milton Friedman, many Keynesians shy away from it, mostly for the wrong reasons. In New Keynesian theory, the natural rate of output is a well-defined concept: output in the absence of the distortions created by price and wage rigidities. Targeting this output level is feasible for monetary policy. The level may for many reasons be lower than efficient output, but these reasons predominantly arise from frictions that are outside the scope of monetary policy. New Keynesian theory also has a theoretical shot at how to measure the output gap using only observables. The idea was synthesized by Jordi Galí in his 2010 Zeuthen Lectures at the University of Copenhagen, now published as "Unemployment Fluctuations and Stabilization Policies: A New Keynesian Perspective" (The MIT Press), and it builds on the ideas put forth by Galí, Gertler and López-Salido in "Markups, Gaps, and the Welfare Costs of Business Fluctuations", Review of Economics and Statistics 89 (2007), 44-59.

The approach is based on theoretically identifying the welfare-reducing fluctuations in an economy, and Galí (2011) narrows them down to those due to the presence of monopoly power in the goods and labor markets. Specifically, the associated markups involved in price and wage setting result in too low output and employment but, more importantly for monetary policy, cause inefficient fluctuations in output when prices and wages are subject to nominal rigidities. These fluctuations will be reflected in variations in output relative to the efficient level. As noted above, measuring the associated output gap requires knowledge about the unobserved efficient output level or the natural rate of output (output under flexible prices, which may be inefficient). However, the theory shows that fluctuations in either output-gap measure will be driven by fluctuations in the price and wage markups. These, in turn, are theoretically shown to be proportional to observables: labor's share of income (in logs) and the unemployment rate, respectively.
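
In symbols, and in my own back-of-the-envelope rendering (log utility and constants suppressed; Galí's exact expressions are richer), the mapping is

\[
\hat{\mu}^p_t = -\,\hat{s}_t, \qquad \hat{\mu}^w_t = \varphi\, u_t, \qquad
y_t - y^{\mathrm{eff}}_t \;=\; -\,\frac{\hat{\mu}^p_t + \hat{\mu}^w_t}{\sigma + \frac{\varphi+\alpha}{1-\alpha}},
\]

where \(\hat{s}_t\) is the log labor income share, \(u_t\) the unemployment rate, \(\varphi\) the inverse Frisch elasticity of labor supply, \(\alpha\) the degree of decreasing returns to labor, and \(\sigma\) relative risk aversion.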

Output relative to the efficient level can then be computed as a weighted average of these two observables. This requires the calibration of just two theoretical parameters: the Frisch elasticity of labor supply and the degree of decreasing returns to labor in production. Below, I choose the benchmark values of Galí (2011), which imply a Frisch elasticity of 1/5 and decreasing returns to labor of 1/4 (which secures a reasonable average price markup). The computed output gap is sensitive to the choice of these two values, but only with respect to its average. The fluctuations, which are what matter for monetary policy, are largely invariant to changes within realistic bounds.
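
For the interested reader, here is a minimal Python sketch of the computation just described, using the mapping sketched above. The FRED series codes and the scaling of the labor-share index are my own assumptions for illustration, so the level of the resulting gap will not exactly match the figures below (the fluctuations are the point):

```python
# Rough sketch of a Gali (2011)-style welfare-relevant output gap.
# Series codes and transformations are illustrative assumptions.
import numpy as np
from pandas_datareader import data as pdr

start, end = "1959-01-01", "2011-12-31"

# Unemployment rate (monthly, percent) and nonfarm business labor share
# (quarterly index), both expressed at quarterly frequency.
u = pdr.DataReader("UNRATE", "fred", start, end)["UNRATE"].resample("QS").mean() / 100
s = pdr.DataReader("PRS85006173", "fred", start, end)["PRS85006173"].resample("QS").mean()

# Calibration from the text: Frisch elasticity 1/5, decreasing returns 1/4.
phi, alpha, sigma = 5.0, 0.25, 1.0   # inverse Frisch, returns parameter, log utility

# Markup fluctuations: the price markup mirrors minus the log labor share;
# the wage markup is proportional to the unemployment rate.
mu_p = -np.log(s / 100)              # index crudely rescaled to a share-like level
mu_w = phi * u

# Welfare-relevant gap (its level is only meaningful up to the scaling assumption).
gap = -(1 - alpha) / (sigma * (1 - alpha) + phi + alpha) * (mu_p + mu_w)

print(f"average gap: {100 * gap.mean():.1f}%")
print((100 * (gap - gap.mean())).tail())  # fluctuations around the average
```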

Based on data from the St. Louis Fed's FRED database, I compute the theory-based, welfare-relevant output gap for the US, 1959q1-2011q4. It is shown in the figure below, which is essentially a version of Galí's (2011) Figure 2.1. One sees that output is always inefficiently low, and that there are substantial fluctuations around an average welfare-relevant output gap of -4.8%. The fluctuations around the average are what I consider relevant for monetary policy, and one sees that output is currently below this average. So the output gap is still negative, and of non-negligible magnitude: by this theory-based approach, US output is around 2% below the natural rate of output (and nearly 7% below the efficient level—but this latter number is sensitive to the calibration).

“Going backwards,” one can use this output gap measure together with actual output data in order to recover the natural rate of output. This is done in the figure below:

In the last two figures, I take the figure above and zoom in on the periods that John Taylor focuses on, in order to assess the differences between the current recession and the recession of the early 1980s. So I present below the same numbers for the sub-periods considered by Taylor: 2007q1-2011q4 and 1981q1-1985q4. Note that the scaling is the same, such that the vertical span of each figure is 10% of output in both instances (i.e., the distance between the horizontal lines is 1% of output).

It appears correct that the US recovery is currently slower than in the 1980s. However, the current recovery only begins in the middle of the considered 5-year span (the third quarter of 2009), whereas the recovery starts earlier in the 5-year span from the 1980s considered by Taylor. Moreover, the simple "potential output" figures used by the CBO and Taylor are very smooth, and it is notable that by the theory-based measure there was still a negative output gap of around 0.75% in the fourth quarter of 1985. Only in mid-1987 was the gap closed.

So, while the current recovery indeed seems slower, the recovery in the 1980s also took more time than the "potential output" figures may suggest. Interestingly, the theory-based natural-rate measure grows equally slowly in both periods. From 2007 to 2011, the natural rate of output grew on average by 1.4% annually, while from 1981 to 1985 the number was 1.2%. In both periods, growth in the natural rate is hampered by periods of no growth.


That strange feeling of Déjà Vu: EU’s New Fiscal Compact

The European Union's new "Fiscal Compact" is now ready to be signed. The purpose of the compact is to strengthen fiscal discipline among member countries (at least those that sign). The desire for enhanced discipline is obviously triggered by the debt crises felt by many EU countries recently. It is, however, still an open question to what extent current debt performance is due to the global recession or to prior fiscal indiscipline. As debt is cumulated deficits, it is hard to separate the two. It is nevertheless clear from the data that the crisis itself is associated with a substantial worsening of the average government deficit relative to GDP (the numbers are for the EU-17):

The issue is then whether deficits were too big before the crisis. At the average level they are not dramatically big, but the average, of course, hides some countries with worse and some with better performance. And those with worse performance may need some discipline, as a lack of discipline has negative externalities on the other countries in the monetary union.

But didn't the EU already have a Stability and Growth Pact with an excessive deficit procedure that opened up for pecuniary punishment of countries whose deficit exceeded 3% of GDP and/or whose debt exceeded 60% of GDP? Well, yes, but obviously those laws were not really used (perhaps, and just perhaps, because one of the first countries to violate the Pact was Germany?). And laws and treaties that you agree on, but do not enforce, tend to lose their meaning.

Hence, in the midst of the debt crises, European politicians felt they had to do something. So this new Fiscal Compact was negotiated. Reading the "TREATY ON STABILITY, COORDINATION AND GOVERNANCE IN THE ECONOMIC AND MONETARY UNION" (pdf here) gave me that strange feeling of déjà vu—the sensation so eloquently explained, and dramatically experienced, by Michael Palin in the Monty Python sketch above. The Treaty really offers little that is substantially new compared with what is already in place. Perhaps saying the same things twice is believed to make up for past lack of discipline in EU governance (there, the lack of discipline, or indecisiveness, is indisputable as I see it).

Actually, the new Treaty is in economic terms broadly identical to the existing rules (it makes ample reference to the existing excessive deficit procedure). One new element is a peculiar "Balanced Budget Rule", which is a misnomer if there ever was one. It stipulates that the structural balance of every government must not fall below -0.5% of GDP, unless exceptional circumstances occur. Now, what is the "structural balance"? The Treaty states:

” ‘annual structural balance of the general government’ refers to the annual cyclically-adjusted balance net of one-off and temporary measures. ‘Exceptional circumstances’ refer to the case of an unusual event outside the control of the Contracting Party concerned which has a major impact on the financial position of the general government or to periods of severe economic downturn as defined in the revised Stability and Growth Pact, provided that the temporary deviation of the Contracting Party concerned does not endanger fiscal sustainability in the medium term” (Article 3(3))

It is therefore not a balanced-budget requirement (budgets can fluctuate within flexible bounds, as under the old rules), and it introduces a new measure which will be prone to interpretation and endless debate. "Cyclically adjusted"? We know there are a million methods for measuring that, and the Treaty does not acknowledge that such a measure will be time-varying and country-dependent (maybe they will use ECOFIN's measure, but that does not invite less speculation). Referring to the figure above, I can understand that the Commission's intention is to push up the average of the whole curve, but you hardly accomplish that by introducing an immeasurable concept. And naming it a "balanced budget rule" will make it quite difficult to "sell" to those who are skeptical of budget rules in general.
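
To see why, note that a typical cyclical adjustment (this is the flavor of the ECOFIN-style approach as I understand it, not the exact formula) is something like

\[
\text{structural balance}_t \;=\; b_t \;-\; \varepsilon \,\hat{y}_t,
\]

where \(b_t\) is the actual balance relative to GDP, \(\varepsilon\) a budget semi-elasticity, and \(\hat{y}_t\) the output gap. Both \(\varepsilon\) and \(\hat{y}_t\) must be estimated—and, as I argued in the post on the US output gap above, the output gap is itself a phantom.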

The Treaty also focuses on the need for more policy coordination. It states:

“. . . the Contracting Parties shall take the necessary actions and measures in all the domains which are essential to the good functioning of the euro area in pursuit of the objectives of fostering competitiveness, promoting employment, contributing further to the sustainability of public finances and reinforcing financial stability” (Article 9)

A lot of positive words, but the numerical rules on which the compact is based have nothing to do with policy coordination. Only in very simple setups would a (credible) numerical bound on a variable represent a truly coordinated outcome. Generally, such bounds are a way of saying "every country for itself". The Treaty even seems to encourage whistleblowing with this weird paragraph:

“If, on the basis of its own assessment or of an assessment by the European Commission, a Contracting Party considers that another Contracting Party has not taken the necessary measures to comply with the judgment of the Court of Justice referred to in paragraph 1, it may bring the case before the Court of Justice” (Article 11(2)) 

In terms of policy coordination, the "Fiscal Compact" is extremely unambitious. What would be needed for coordination is a fiscal transfer system that insures member states against asymmetric shocks. (Big symmetric shocks that cause fiscal problems for all presumably mean that all countries pay fines to each other; cf. the quoted Article 3(3).) Such transfer mechanisms work fairly well in other monetary unions, like those among American states, Danish municipalities, and so on. But politically that would probably have been too difficult to achieve. Instead, one is left with a mild rehash of the old system.

In sum, if you are a skeptic concerning fiscal rules within the EU, this Treaty does not offer significantly more fiscal restraint than was already there in the first place (in letter, not in reality). It furthermore adds room for interpretation, which more than undoes the new, supposedly automated, punishment procedures for cases where a country (perhaps) violates the rules.


Fed “Fan Charts”

I recently wrote that the USA had now entered the club of inflation-targeting central banks. This occurred when the Federal Reserve officially started mentioning an explicit inflation target and, in April last year, introduced press conferences after its policy meetings. Thereby, central criteria for being considered an inflation targeter were met.

Following its January 25 meeting, the Fed initiated immediate publication of projections for the paths of the main macroeconomic indicators (such projections have been available since at least October 2007 in a slightly different style, but only along with the minutes of the meetings, which are published three weeks after the policy decision). The projections are presented along with "confidence bands," in a manner visually similar to the presentational style of many inflation-targeting central banks (e.g., the Bank of England, Sveriges Riksbank and Norges Bank). Figure 1 shows the current projections for real GDP, unemployment and inflation:

Where these charts differ from those of many other inflation-targeting central banks is that the uncertainty embedded in the fans is not statistically based, in the sense of being derived from some economic model (and the standard deviations drawn from it). Instead, the variation reflects the FOMC members' individual projections. Thereby, the charts reveal the range of disagreement over economic developments, and thus over monetary policy, among members. This disagreement is further highlighted in the figure that explicitly shows the distribution of the members' views on the appropriate interest rate, now as well as in the future. This is an example of a high degree of transparency in Fed policymaking. Previously, dissent was mentioned immediately, but here it is quantified (like the projections, this information has been provided since at least October 2007, but only after the fact, along with the publication of the minutes).

While there is strong agreement on keeping rates unchanged in the 0-0.25% interval throughout this year, 8 of 17 members judge the appropriate rate to be 1% or more at the end of 2014. In the recent policy statement, this is considered an unlikely scenario. This brings me to a point of criticism. While the figures are clear, and honestly describe the (non-trivial) disagreement over economic developments and thus monetary policy, the signal to markets may be obscured by the anonymity. Since monetary policy actions are determined by only a subset of the 17 FOMC members, and since the composition of the FOMC varies according to a fixed rotation scheme, it would be helpful to get faces on the dots. If the two members desiring 2% and 1.75%, respectively, in 2013 are from Reserve Banks without voting rights that year, one could discount those dots heavily. If, on the other hand, one of those dots belonged to Ben Bernanke, one clearly could not (this is in all likelihood not the case, but just for illustration's sake).

So, while the immediate publication of projections and individual views on the policy stance is a big step towards greater transparency, and yet another confirmation that the label "inflation targeter" is appropriate, a lot of ambiguity remains as long as this anonymity is retained.


New-Keynesian explosions: The Cochrane interpretation and explosive solution

John Cochrane offers some interesting comments on New Keynesian economics in his latest blog post, "New Keynesian Stimulus". The interesting part is not its contribution to the blog literature on mudslinging in fiscal-stimulus discussions—about which prominent economist got basic theory wrong, about who is acting most disrespectfully, and whatnot; i.e., the extremely counterproductive style of "debate" that was basically initiated by he who shall, for once, go unmentioned. I normally find that Cochrane behaves quite academically and adheres to scientific arguments (which is not an unfair expectation, given that he is a professor of economics), but even he has to defend himself every once in a while, and then the ball is rolling.

His post, however, does transcend the usual gibberish by advertising and describing his latest paper on New Keynesian theory. Here, "paper" means a peer-reviewed piece of academic research published in an international journal, not some opinionated self-published article. It is "Determinacy and Identification with Taylor Rules", Journal of Political Economy, Vol. 119, No. 3, June 2011. (It does, however, manage to slip in surprisingly many purely speculative statements—see below—but it definitely contains many interesting, provocative and analytically strong results.)

It is difficult in a blog post to do justice to all the contents of the long paper, but briefly, it has two main messages: 1) New Keynesian models that include a Taylor-type interest rule as a description of monetary policy do a poor job of providing a story for how aggregate prices are formed, or "determined". 2) Those trying to infer anything about monetary policy conduct from econometric estimation of Taylor-type rules are in for an ugly surprise, as it is virtually impossible to identify the rules' coefficients, for a number of statistical reasons (and even if one can overcome the identification problems, Cochrane does not find the estimated coefficients of interest, due to message no. 1—indeed, he calls them "mongrels").

Message 1) is interesting and arises from deep issues of equilibrium determination in economic theory. Basically, to pin down an equilibrium of a model, it is crucial to describe not just what happens in the model's equilibrium, but also what happens "out of" equilibrium. That is, a fully specified model must have a description of where the economy goes if it is not in equilibrium. One often says that it is the specification of "out-of-equilibrium" events that "supports" the occurrence of the equilibrium of interest. If I have a model of people's behavior when they embark on a trip across a pedestrian bridge without fences spanning an abyss between two mountains, my prediction of an equilibrium where people walk straight and carefully on the bridge to get across is supported by the out-of-equilibrium behavior where people walk carelessly and erratically, and occasionally fall into the abyss and die. If the model captures reality, we should observe careful walkers and never observe the out-of-equilibrium behavior. But the latter is nevertheless important for the prediction, since it is the embedded "threat" of falling into the abyss that keeps people walking carefully and straight.

Macroeconomics is full of such cases, where one picks a particular equilibrium for scrutiny by ruling out others with more or less good arguments. In particular, many dynamic models involving expectations about the future have this feature. In the standard Ramsey growth model, for example, the determination of consumption is attained by ruling out "explosive" paths for consumption (as these will eventually violate optimal consumer behavior). In asset-pricing models, a price based on fundamentals like dividends is attained by ruling out explosive paths for the asset price, i.e., by ruling out bubbles—both negative (which is easy) and positive (which is less so). In either case, equilibrium determination is attained by deeming the out-of-equilibrium events unattractive for various reasons. And often the out-of-equilibrium paths for the variables under consideration are "explosive"; i.e., you fall into the abyss. Hence, the analyst rules them out and focuses on the often unique non-explosive equilibrium. Its relevance, of course, depends crucially on the properties of the out-of-equilibrium behavior, as this is what supports the focus on the particular equilibrium.
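
In the asset-pricing case, for instance, the logic can be sketched in two lines (a standard textbook rendering, with a constant discount rate \(r\)):

\[
p_t = \frac{E_t\left[p_{t+1} + d_{t+1}\right]}{1+r}
\;\Longrightarrow\;
p_t = \sum_{j=1}^{\infty} \frac{E_t\, d_{t+j}}{(1+r)^j} \;+\; b_t,
\qquad E_t\, b_{t+1} = (1+r)\, b_t.
\]

Any nonzero bubble component \(b_t\) must grow at rate \(r\) and thus explode relative to fundamentals; imposing a no-bubble (transversality) condition sets \(b_t = 0\) and leaves the unique fundamentals-based price.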

In New Keynesian models, equilibrium determination is non-trivial. In fact, for a fixed nominal interest rate, the typical model features infinitely many non-explosive equilibria: whatever is expected will happen. Such "indeterminacy" is of course not a desirable property if the model is to be used for normative guidance in real life (as that real life could be heavily disturbed whenever people merely expect it—not nice). To obtain a unique non-explosive equilibrium in these models, one therefore has to specify particular policies that ensure that any deviation from this unique equilibrium leads to mayhem (makes you fall off the bridge). One such policy is the famous Taylor rule for the nominal interest rate. It stipulates that the interest rate should respond to inflation sufficiently aggressively that an increase in inflation leads to a more than one-for-one increase in the nominal interest rate, thereby increasing the real interest rate. In the most common versions of the New Keynesian model, adherence to this "Taylor principle" in monetary policy leads to a unique equilibrium.
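
To illustrate the mathematics, here is a small check of the eigenvalue condition in the textbook two-equation New Keynesian model. The parameter values are standard illustrative choices of mine, not numbers from any particular paper:

```python
# Determinacy check for the textbook New Keynesian model under a Taylor rule:
#   x_t  = E_t x_{t+1} - (1/sigma) * (i_t - E_t pi_{t+1})   (IS curve)
#   pi_t = beta * E_t pi_{t+1} + kappa * x_t                (Phillips curve)
#   i_t  = phi_pi * pi_t                                    (Taylor rule)
# Substituting the rule gives E_t z_{t+1} = M z_t with z = (x, pi). Both
# variables are non-predetermined ("jump") variables, so a unique bounded
# equilibrium requires BOTH eigenvalues of M outside the unit circle --
# exactly the deliberate "instability" discussed in the text.
import numpy as np

def eigen_moduli(phi_pi, sigma=1.0, beta=0.99, kappa=0.1):
    M = np.array([
        [1 + kappa / (sigma * beta), (phi_pi - 1 / beta) / sigma],
        [-kappa / beta,              1 / beta],
    ])
    return np.abs(np.linalg.eigvals(M))

for phi_pi in (0.8, 1.5):
    mods = eigen_moduli(phi_pi)
    verdict = "determinate" if np.all(mods > 1) else "indeterminate"
    print(f"phi_pi = {phi_pi}: |eigenvalues| = {np.round(mods, 3)} -> {verdict}")
```

With \(\phi_\pi = 0.8\), one eigenvalue lies inside the unit circle and a continuum of bounded equilibria exists; with \(\phi_\pi = 1.5\), both lie outside and only one bounded equilibrium survives.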

But given the story above, something bad must be prescribed by the theory out of equilibrium (even though we will never observe it, since it supports the choice of the particular equilibrium we will observe according to the theory). Indeed, what secures uniqueness in the basic New Keynesian model is that a deviation from the equilibrium will make economic variables explode; e.g., inflation will explode over time (not just increase somewhat, but increase a lot and keep doing so). Cochrane is not convinced that such explosive out-of-equilibrium behavior is a viable support for the unique equilibrium. Therefore, he does not believe that Taylor rules are a way of determining prices and inflation. In his blog post he summarizes his position in plain language:

“For example, the common-sense story for inflation control via the Taylor rule is this:  Inflation rises 1%, the Fed raises rates 1.5% so real rates rise 0.5%, “demand” falls, and inflation subsides.  In a new-Keynesian model, by contrast, if inflation rises 1%, the Fed engineers a hyperinflation where inflation will rise more and more! Not liking this threat, the private sector jumps to an alternative equilibrium in which inflation doesn’t rise in the first place. New Keynesian models try to attain “determinacy” — choose one of many equilibria — by supposing that the Fed deliberately introduces “instability” (eigenvalues greater than one in system dynamics). Good luck explaining that honestly!”

I have no quarrel with the fact that it is instability that secures uniqueness—just as it is the missing fences that make people walk carefully on the bridge. It is a mathematical result, as he remarks ("eigenvalues greater than one"). In his words from the JPE paper, "to rule out equilibria, people must believe that the government will choose to blow up the economy" (p. 568). What I would like to quarrel with is his particular interpretation of what happens out of equilibrium; that is, his colorful storytelling, which is constructed to sound sufficiently crazy to be impossible to "explain honestly." Mostly, however, I will quarrel with his alternative solution to price determination (he mentions in the blog post that he "solves" the problem—I don't think so), as well as with the simple fact that the models he considers in detail with mathematical rigor are not New Keynesian at all (so there may be no problem to "solve").

First, the storytelling. The intuition provided by Cochrane ignores the output effects of monetary policy. But monetary-policy-induced output effects are central to the basic New Keynesian model, and they arise through the New Keynesian Phillips curve. In fact, they are the main reason these models were dubbed something with "Keynes" in the first place: demand plays a role for output determination. Moreover, an important determinant of inflation, apart from output, is expectations about future inflation. With this in mind, the explosive paths that are ruled out can instead be explained like this: "Assume the Fed follows a Taylor rule. If it responds sufficiently actively to inflation, this stabilizes inflation. For example, if inflation expectations go up for no underlying economic reason, this will increase current inflation. When the Fed then raises the nominal interest rate sufficiently, it depresses demand and output, which reduces the initial impact on current inflation. The considered increase in expected inflation can therefore only be an equilibrium if inflation expectations keep on increasing, i.e., are on an explosive path. Hence, the Fed's commitment to stabilizing inflation implies that self-fulfilling equilibria can only be explosive." This sounds perhaps less provocative, but it is much more in accordance with the basics of the New Keynesian model. It is, however, no coincidence that Cochrane ignores output effects in his storytelling, as his formal model is one of a flex-price endowment economy. Yes, correctly understood: in endowment economies there are, by definition, no output effects of anything—policy included.
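
For comparison, in the frictionless endowment economy that Cochrane actually analyzes, the whole system boils down to a Fisher equation plus the policy rule (a stylized sketch of his setup, not his exact equations):

\[
i_t = r + E_t\pi_{t+1}, \qquad i_t = r + \phi_\pi \pi_t
\;\;\Longrightarrow\;\;
E_t\pi_{t+1} = \phi_\pi \pi_t .
\]

With \(\phi_\pi > 1\), any inflation rate other than the targeted one puts expected inflation on the explosive path \(E_t\pi_{t+j} = \phi_\pi^{\,j}\pi_t\). Ruling out such paths delivers uniqueness—but since prices are flexible and output is endowed, nothing real happens along these purely nominal explosions, which is precisely why Cochrane questions what licenses ruling them out.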

Then, on to the proposed solution to price determination. Cochrane essentially adheres to what is known as the Fiscal Theory of the Price Level. This theory, controversial to some, puts the interaction between monetary and fiscal policy, and thus the government budget constraint, at center stage. The theory ensures that the government's flow budget constraint is honored at all dates. This is no different from any other consistent model. Where it differs is in the so-called "terminal conditions", i.e., in what the model builder assumes about fiscal policy as time progresses far into the future. Many researchers add a terminal condition to the government's budget constraint that disallows explosive real debt. Essentially, for any path of prices (or other variables), this corresponds to an assumption that the government cannot run Ponzi schemes. With this assumption, one can solve the government's flow constraint into a compact expression stating that current liabilities (like real debt) must match the present value of current and future net surpluses.
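
In a stylized version with one-period nominal debt and a constant real discount factor \(\beta\) (this is the flavor of the condition; Cochrane's equation (21) is the model-specific analogue), the compact expression reads

\[
\frac{B_{t-1}}{P_t} \;=\; E_t \sum_{j=0}^{\infty} \beta^{\,j} s_{t+j},
\]

where \(B_{t-1}\) is outstanding nominal debt, \(P_t\) the price level, and \(s_{t+j}\) real primary surpluses. Read as a constraint, surpluses must adjust for any \(P_t\); read as an equilibrium condition, \(P_t\) adjusts to whatever surplus path the government commits to.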

In the Fiscal Theory of the Price Level, the no-Ponzi-game condition is not seen as a constraint on fiscal policymaking. Instead, it is viewed as an equilibrium condition that, by implication, only holds in equilibrium. Hence, the compact expression mentioned above becomes an equilibrium condition: the price level will adjust such that the real value of current liabilities exactly matches the present value of current and future net surpluses. An example of such an equilibrium condition is given by equation (21) in Cochrane's JPE article. What does this imply for price determination? It implies that the government in theory can commit to an unsustainable path of net surpluses, i.e., a path that leads to explosive debt (and Ponzi schemes)—such policies are called "non-Ricardian"—for all price levels except the one that satisfies the equilibrium condition. Hence, the price level secures the sustainability of fiscal policy even for the most bizarre fiscal paths. Put differently, it is a fiscal commitment to blow up the world that secures determinacy of the price level. In Cochrane's words:

“If P is too low, then the real value of government debt explodes. In response to a shock, P jumps to the unique value that prevents such an explosion. (. . . ) If the price level is below the value specified by (21), nominal government bonds appear as net wealth to consumers. They will try to increase consumption. Collectively, they cannot do so; therefore, this increase in “aggregate demand” will push prices back to the equilibrium level. Supply equals demand and consumer optimization are satisfied only at the unique equilibrium.” – Cochrane (2011, JPE, p. 580)

Good luck explaining that honestly to a non-academic! Indeed, a more colorful telling of this mechanism could be: "Under the so-called fiscal theory of price-level determination, the government commits itself to engage in Ponzi schemes if the price level is slightly lower than desired. Not liking this threat of forever-exploding debt, the private sector jumps to the price level at which this threat will not be carried out by the government. So, determinacy is attained as the government deliberately introduces instability into the system." To repeat: good luck explaining that story "honestly".

So, is Cochrane not just relocating the "blowing-up-the-world" story from the monetary authority to the fiscal authority? Yes, but he notes that the identity of the authority makes a fundamental difference for whether the explosions each of them creates can support a unique equilibrium. In the monetary case, the inflationary explosions are not seen as costly by Cochrane—they are merely nominal. There is nothing fundamental in the model that forbids them. We may not like them, we may not think they are realistic, but they are still valid equilibria. The fiscal explosions, by contrast, are real explosions and, as mentioned by Cochrane, ones that will be inconsistent with consumer optimization. So, within the model framework Cochrane formally presents, and if one accepts the Fiscal Theory of the Price Level, he has a valid point. Inflation is indeed irrelevant for individuals in the model economy. And therein lies the problem with his approach, as I see it. When inflation is irrelevant, it is a good indication that you do not have a model that is remotely New Keynesian. And indeed, as noted above, the model economy under scrutiny is a flex-price endowment economy, where prices are obviously irrelevant. Hence, Cochrane is basically using a straw-man argument against price determination by Taylor rules in New Keynesian theory. He presents verbally the flaws of Taylor rules and of New Keynesian theory, and then proves the flaws in a model that is classical. Endowment economies with flexible prices are by no standard New Keynesian. I should think that this is not just my opinion, but an irrefutable fact.

Well, he is aware of this weakness: when, 13 pages later, he starts analyzing a truly New Keynesian model (to examine the empirical identification problems of Taylor-rule coefficients), he immediately presents the simple linearized model, and he offers no proof that Taylor-rule-induced determinacy is based on explosive equilibria that are arbitrarily ruled out, as he did for the flex-price model. Instead, the reader gets the rather disappointing message:

“One might complain that I have not shown the full, nonlinear model in this case, as I did for the frictionless model. This is a valid complaint, especially since output may also explode in the linearized nonlocal equilibria. I do not pursue this question here since I find no claim in any new-Keynesian writing that this route can rule out the nonlocal equilibria. Its determinacy literature is all carried out in simpler frameworks, as I have done. And there is no reason, really, to suspect that this route will work either. Sensible economic models work in hyperinflation or deflation. If they do not, it usually reveals something wrong with the model rather than the impossibility of inflation. In particular, while linearized Phillips curve models can give large output effects of high inflations, we know that some of their simple abstractions, such as fixed intervals between price changes, are useful approximations only for low inflation. The Calvo fairy seems to visit more often in Argentina.” – Cochrane (2011, JPE, p. 593)

I would say it is much more than a "valid complaint". It is a reason to discard his criticism of price determination in New Keynesian models altogether. The paper raises, as said before, important issues, but its results are for models that are not New Keynesian: he shows that price-level determination by active Taylor rules in flex-price models lacks a solid foundation. Well, most would not really care that much, as Taylor rules in flex-price models involve counterfactual interest-rate effects at business-cycle frequencies; cf. Jordi Galí's textbook treatment "Monetary Policy, Inflation, and the Business Cycle" (in particular, Chapter 2). Actually, they perform so weirdly that contractionary interest-rate disturbances have expansive equilibrium effects on the interest rate (!).

More importantly, as academics we should not merely "suspect" things, as Cochrane at this point apparently feels is sufficient. We should prove them rigorously; I am confident Cochrane would agree with that. Also, in the non-linearized New Keynesian model, inflation causes price dispersion, which reduces output (one does not see this in the standard linearized model, where one examines small perturbations around zero inflation). So explosive inflation would ultimately drive output toward zero. That is a REAL explosion (or, in output terms, an implosion). Would this be "allowed" as a reasonable equilibrium? I doubt it, but it obviously needs careful scrutiny.

[As an aside, most people would never have been able to publish such speculation as the above, along with a more or less inside joke about "Calvo fairies" and Argentina, in a Top-5 professional journal. I can, of course, not imagine that it is easier to slip these, at best vague, at worst silly, comments into the JPE when one is located in Chicago, where the journal also happens to reside. I mean, it is a respected research outlet!]

So, as I see it, Cochrane raises a number of interesting questions, but he does not manage to deliver the serious blow to the New Keynesian literature that his abstract and first 20 pages seem to promise. In many respects, his analyses are simply done in a framework too far from (if not orthogonal to) the New Keynesian framework.

As for his second main message, the paper is (also) an interesting read. He shows in many analytical examples and numerical simulations how estimations of Taylor rules are biased and/or meaningless. I like this a lot, but I am also biased in this respect, as I wrote similar stuff back in 2002 (albeit much less generally). Cochrane gracefully acknowledges this in the extensive online appendix to his paper, "Appendix B from John H. Cochrane, 'Determinacy and Identification with Taylor Rules' (JPE, vol. 119, no. 3, p. 565)". The appendix is a great paper in itself. My own humble output was eventually published as "Estimated Interest Rate Rules: Do They Determine Determinacy Properties?", The B.E. Journal of Macroeconomics, Vol. 11, Iss. 1 (Contributions), Article 11 (2011). The answer, by the way, is "no".


Aaa – aaa – PLUS! Gesundheit!

Rating agencies dominate the financial markets and the news these days. Standard & Poor's recent downgrading of French, Spanish and Austrian (and other, but not German) government bonds from "AAA" to "AA+" caused waves in the media and markets even before it was official. But maybe it is much ado about nothing. Bond yields didn't go up in France and Spain, as markets seem to have downplayed the downgrade. Maybe common sense is kicking in?

Because, what is it that these rating agencies actually can do? They could rate junk financial instruments "AAA" before the financial crisis. Standard & Poor's rated Lehman Brothers "A" in September 2008 (just before Lehman went bankrupt). That rating means "Strong capacity to meet financial commitments, but somewhat susceptible to adverse economic conditions and changes in circumstances."

Confronted with these peculiarities during the US Senate hearings on “2008 Financial Crisis & Rating Agencies”, the agencies themselves emphasized that they only offer “opinions”. Yes, “opinions”. This clip from the Oscar-winning 2010 documentary Inside Job shows the agencies in live action:

So maybe the media and politicians should spend less time in the future on the opinions of a few private companies with poor track records.
