Saturday, August 30, 2008

Why I disagree with Jim Manzi

This month the excellent Cato Unbound has a discussion about global warming, with Jim Manzi, Joseph Romm, Indur Goklany, and Michael Shellenberger and Ted Nordhaus. As we should expect from the publication, this issue is both informative and entertaining, but unfortunately it is deficient in one key respect: a lack of high-level economic discussion. Jim Manzi is the only panelist interested in a genuine economic analysis of global warming, and many of his points are left unrebutted.

I feel compelled, then, to offer a list of reasons why I think his analysis is gravely incomplete. It is by no means comprehensive, or fully reasoned; the exact extent to which the points below change the outcome of a cost-benefit analysis is unclear, and should be investigated further. Still, I am convinced that they point to conclusions radically different from those offered by Manzi.

1. Extreme uncertainty about the extent of climate change and feedback mechanisms

This is the argument championed by Harvard economist Martin Weitzman in a recent paper. Using a clever mathematical derivation, Weitzman shows that under reasonable assumptions, the expected cost (a term economists use for the probability-weighted average of costs in different scenarios) of a disaster may be infinite. This is because "fat-tail uncertainty" overwhelms the calculation: the likelihood of far-off disasters does not fall as rapidly as their damages increase.

Does this mean that the expected costs of global warming are actually "infinite"? Of course not. After all, even with a planetary issue like climate change, there is some upper bound on the damage that can occur -- perhaps the end of human civilization as we know it -- and this bounds the calculation below infinity. Further, direct conclusions from Weitzman's "Dismal Theorem" are unwarranted, because in theory it can apply to any type of disaster, not only global warming; needless to say, we cannot run around declaring every risk to be infinite. In his criticism of Weitzman's work, Manzi refers to it as a "Generalized Precautionary Principle" whose almost unlimited implications cannot be consistently applied.

In reality, Weitzman's mathematical argument is best viewed as a theoretically compelling illustration of the need to analyze the full range of climate change scenarios, especially those that appear most unlikely and speculative. The moral of the Dismal Theorem is that under extreme uncertainty, seemingly marginal possibilities may be dominant. We cannot pursue a narrow, superficially precise analysis and wave off the exclusion of low-likelihood disaster scenarios as the inevitable cost of rigor. Instead, an insistence on precision is likely to make our analysis completely wrong, by excluding the possibilities that make climate change grave in the first place.
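To see the mechanics in miniature, here is a toy numerical sketch -- my own illustration of the fat-tail effect, not Weitzman's actual model. When the probability of a disaster of size x falls off as a power law rather than exponentially, the expectation is dominated by the tail: a truncated version of the calculation keeps growing as we raise the cap on maximum damage, instead of settling down.

```python
def mean_damage(alpha, cap):
    """Expected damage E[min(X, cap)] when P(X > x) = x**(-alpha) for x >= 1.

    Closed form of the truncated-mean integral; valid for alpha != 1.
    """
    return alpha / (1 - alpha) * (cap ** (1 - alpha) - 1) + cap ** (1 - alpha)

for cap in (1e3, 1e6, 1e9):
    thin = mean_damage(3.0, cap)  # tail thins out fast: mean settles near 1.5
    fat = mean_damage(0.9, cap)   # fat tail: the mean keeps growing with the cap
    print(f"cap={cap:.0e}: thin-tail mean = {thin:.2f}, fat-tail mean = {fat:.1f}")
```

With alpha below 1 the untruncated mean is infinite, which is why the assumed upper bound on planetary damage, rather than the "central" scenarios, ends up driving the answer.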

And indeed, although they are difficult to quantify with only today's research, the risks of feedback-induced disaster scenarios appear to be quite serious. Recent research has found evidence of astonishing changes in planetary and regional temperatures in the past, including the discovery that during a sudden event called the Palaeocene-Eocene thermal maximum 55 million years ago, the North Pole was subtropical. How did this happen? Our best guess is that some climatic change (perhaps a jump in solar radiative forcing) triggered a runaway cycle in which CO2 and other greenhouse gases were released into the atmosphere in enormous quantities. Since our sudden and unprecedentedly rapid emission of carbon represents a similar disturbance to Earth's thermal equilibrium, the possibility that history may repeat itself should loom heavy in our minds.

Using estimates of the geophysical feedback factor from a paper by geologists Margaret Torn and John Harte, and aggregating 22 probability distributions of climate sensitivity published in reputable scientific journals and cited by the IPCC, Weitzman arrives at a crude probability distribution that gives a 5% probability of a more than 11.5°C increase in global mean temperature and a 1% probability of an overwhelming 22.6°C jump. Such estimates, far above those provided by the IPCC (which does not consider geophysical feedback), raise the possibility of almost unthinkable climatic devastation.

Manzi responds:
"You really don’t need all of the complicated mathematical formalism that follows in Weitzman’s paper if you accept his distribution of possible climate sensitivities. At a practical level, he’s saying that there is a reliably-quantified 1% chance that the average year-round temperature on Earth will be about 100 degrees Fahrenheit within the foreseeable future. This is about the average summer temperature in Death Valley, California. I think that any rational person who accepted this premise would support pretty aggressive measures to reduce carbon emissions without needing a lot of further analysis...

Weitzman characterizes his analysis of the PDF of climate sensitivity using the following terms: “back-of-envelope” (page 2), an “extraordinarily crude approximation” (page 2), “ballpark” (page 2), “extremely crude ballpark numerical estimates” (page 5), “simplistic” (page 5), “naively assume” (page 6), “simplistically aggregated” (page 6), “very approximately” (page 7), “some VERY rough idea” (page 7), “without further ado I am just going to assume for the purposes of this simplistic exercise” (page 8), “wildly-uncertain unbelievably crude ballpark estimates” (page 8), “crude ballpark numbers” (page 9), and “overly simplistic story” (page 9). Weitzman is a well-known economist, so I assume he could pass peer review with a paper titled “My Shopping List”, but at a certain point you just have to ask whether one should be able to publish an estimate in an academic paper with these kinds of qualifications around it."
Anticipating this criticism, Weitzman writes:
"These small probabilities of what amounts to huge climate impacts occurring at some indefinite time in the remote future are wildly-uncertain unbelievably-crude ballpark estimates -- most definitely not based on hard science. But the subject matter of this paper concerns just such kind of situations and my overly simplistic example here does not depend at all on precise numbers or specifications. To the contrary, the major point of this paper is that such numbers and specifications must be imprecise and that this is a significant part of the climate-change economic-analysis problem, whose strong implications have thus far been ignored."
Do we need more thorough research along the same lines? Of course. But I still believe that Weitzman's crude, back-of-the-envelope calculation is superior to the estimates of other climate models, because it stresses the importance of the unlikely but cataclysmic possibilities that almost certainly dominate the risk profile of global warming. Given this, it is difficult to view William Nordhaus's estimates of the cost of climate change, which provide the starting point for Manzi's argument, as anything but an extreme lower bound for the true expected cost.

2. Questionably high discount rates

Nordhaus's analysis, along with many other analyses of climate change, uses a high "discount rate." Discount rates are a mainstay of economic analysis; if I am trying to tabulate costs over time, a rate of 5% means that I should literally discount a cost next year by 5%, a cost fifty years from now by 92.3%, and a cost one hundred years from now by 99.4%. It is easy to see why the choice of discount rate may have a big impact on the calculated costs from climate change! In fact, the Stern Review's unusual choice of discount rate was almost singlehandedly responsible for its urgent conclusions.
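The compounding behind those percentages is a one-liner. Here is a minimal sketch, using the same knock-off-5%-per-year convention as the text:

```python
# Compounding a 5% annual discount: each year multiplies a future
# cost's weight by (1 - 0.05), so distant costs shrink geometrically.
rate = 0.05
for years in (1, 50, 100):
    weight = (1 - rate) ** years
    print(f"{years:>3} years out: weight {weight:.4f} "
          f"=> discounted by {100 * (1 - weight):.1f}%")
```

Running this reproduces the figures above: 5% after one year, 92.3% after fifty, and 99.4% after a hundred.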

A famous equation in economics, the Ramsey equation, gives us a formula to calculate the "correct" discount rate. It is r = delta + eta*g.

The 'g' is the rate of per-capita economic growth, and the 'eta' is a mathematical parameter called the coefficient of relative risk aversion. In plain English, 'eta' represents how we compare different changes in income. If eta=1, we treat all percentage changes the same: a move from an income of $1000 to $1500 gives the same additional "utility" as a move from $100,000 to $150,000, even though the latter change is much larger in absolute terms. If eta is higher than 1, we are even more interested in increasing income when we are at lower levels: in this case, a change from $1000 to $1500 is better than a change from $100,000 to $150,000. Alternatively, if eta is lower than 1, we place less weight on income increases from low baselines, and if eta is 0, we only care about the absolute totals of wealth -- $5000 is just as useful to a person with $1,000,000 as it is to someone with $1000. (If you think this sounds ridiculous, you're right -- no one uses eta=0, or even close to it.)
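These comparisons are easy to reproduce with the standard constant-relative-risk-aversion (CRRA) utility function, which is the usual way eta enters the math. A quick sketch, using the illustrative incomes from the paragraph above:

```python
import math

def crra_utility(c, eta):
    """CRRA utility: log for eta = 1, c**(1 - eta) / (1 - eta) otherwise."""
    return math.log(c) if eta == 1 else c ** (1 - eta) / (1 - eta)

def gain(c_before, c_after, eta):
    return crra_utility(c_after, eta) - crra_utility(c_before, eta)

for eta in (0, 1, 2):
    low = gain(1_000, 1_500, eta)       # a 50% raise at a low income
    high = gain(100_000, 150_000, eta)  # the same 50% raise at a high income
    print(f"eta={eta}: low-income gain {low:.6f} vs high-income gain {high:.6f}")
```

At eta = 1 the two gains come out identical; at eta = 2 the low-income gain is a hundred times larger; at eta = 0 only the raw dollar amounts matter.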

Observing the revealed preferences of market participants, almost all economists think that a reasonable value for eta is at least 1, and many think that it is 2 or even a little higher. Why does this matter, and what does it have to do with economic growth? The idea is that as economic output per person grows, we become richer, and a real loss of $X becomes less important to us. Using some math, we find that this effect is best incorporated into the discount rate by multiplying the growth rate g by eta.

What's the other term -- the delta? This is fuzzier, and somewhat more controversial. Delta is our "pure rate of time preference," the amount by which we judge our well-being today to be more important than our well-being in the future, solely because of the time difference. If you're thinking "why should we prefer one period of time over another at all?", you are echoing a long line of critiques of the entire concept of pure time preference.

Of course, there are some theoretical reasons for a positive delta: Stern's report justified a rate of 0.1% by pointing to the possibility that humanity will go extinct for some reason other than global warming, and a more generic uncertainty about the future may warrant some time preference as well. But it is difficult to see how rates like the 1% used in Nordhaus's calculations can be derived from a priori ethical considerations.
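Plugging representative numbers into the Ramsey equation shows how much rides on these choices. In the sketch below, the Stern-style inputs follow his report (delta = 0.1%, eta = 1, growth around 1.3%); the Nordhaus-style eta and growth values are illustrative assumptions on my part, paired with the 1% delta cited above:

```python
def ramsey_rate(delta, eta, g):
    """Ramsey discount rate: r = delta + eta * g."""
    return delta + eta * g

# Stern-style inputs: near-zero pure time preference.
print(f"Stern-ish:    r = {ramsey_rate(delta=0.001, eta=1.0, g=0.013):.1%}")
# Nordhaus-style inputs: the 1% delta cited above, illustrative eta and g.
print(f"Nordhaus-ish: r = {ramsey_rate(delta=0.010, eta=2.0, g=0.020):.1%}")
```

The resulting rates -- roughly 1.4% versus 5% -- diverge enormously once compounded over a century or two.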

Instead, pure rates of time preference are justified by observing the market. Depending on the time frame we select for our statistics, the average return on equity in the United States is somewhere between 6 and 7 percent. Such a high return is extremely difficult to justify without some level of pure time preference.

But here we run headlong into the great conundrum of financial econometrics: the equity premium puzzle. The average real rate of return on a simple risk-free asset, the short-term Treasury bill, is closer to 1%. To some extent, the gap between the return on equities and the return on risk-free bonds can be explained by compensation for risk, but most attempts at modeling this risk fail to show why the equity premium should be so large. A wild, confused set of economic conclusions thus arises. You can use the high return on equity, along with other market evidence, to conclude that there is indeed some implicit pure time preference in the market, and leave the low rate on risk-free assets as either a lingering curiosity or a deviation to be explained by another model. Alternatively, you can insist that risk-free assets provide a better approximation of the true discount rate, and that some hidden source of risk or uncertainty causes the market to demand such a high return on equities.

I think the second approach is more reasonable. If you're looking to use market evidence to determine the rate of pure time preference, interest rates on risk-free assets are the obvious place to start. It is inconsistent to claim a high-minded desire for an empirically derived discount rate and then turn around and ignore the natural implication of the gap between equity and risk-free returns: that the true expected return on equity isn't really 6 or 7 percent, and that the market is pricing in catastrophic risk that financial economists have difficulty including in their models.

It is also worth stressing just how absurd the implications of pure time preference are. One classic objection to pure time discounting asks how a person today would feel to learn that upon turning 21, he will die of cancer because Cleopatra made a welfare-optimizing decision to have an extra helping of dessert. Yet if anything, the actual results of pure time discounting are even more dramatic. Nordhaus's 1% rate means that society 6,000 years ago would be valued 154,000,000,000,000,000,000,000,000 times more than society today. Under this cost-benefit analysis, if a caveman in 4000 BC had to choose between scratching a fly off his back and saving the planet from destruction six millennia hence, the best decision for social welfare would be clear: get rid of the fly!

But this kind of analysis is critical to Nordhaus's -- and, by extension, Manzi's -- conclusions. After one century, 1% pure time discounting causes a 2.7-fold drop in the estimated present value of costs. (In other words, kids born in 2100 are already nearly three times less important than kids born in 2000.) After two centuries, the costs of global warming are deflated by a factor of 7.5. Putting aside all the other problems with Nordhaus's work, simply axing the pure time preference in his model would transform it into a much stronger tract demanding action against CO2 emissions.
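All of the figures in the last two paragraphs follow from the same compounding, again under the convention of shaving 1% off per year:

```python
# Relative weight of the present versus a future date under 1% pure time
# preference: each year's welfare is multiplied by a factor of (1 - 0.01).
delta = 0.01
for years in (100, 200, 6000):
    ratio = (1 - delta) ** (-years)  # how much more today counts than then
    print(f"{years:>5} years: today counts {ratio:.3g} times as much")
```

This prints roughly 2.73, 7.46, and 1.54e+26 -- the century drop, the two-century deflation, and the caveman's 154 septillion.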

3. Uneven impact of climate change across nations

Like most economic modelers of climate change, Nordhaus and Manzi use aggregate world production as the only input in their welfare function. On its face, this appears a little absurd -- do they really think that the worldwide sum of material wealth is all that matters? Yet cost-benefit analysis over many centuries is a complicated endeavor, and to get tangible results we'll inevitably need to make some strong simplifications. The key is to pick simplifications that are least likely to skew the analysis in a particular direction.

Needless to say, this particular assumption is not so neutral, as a simple example demonstrates. Consider a hypothetical world where half the inhabitants make $40,000 and half the inhabitants make $1000. Now say that global warming causes an equally distributed 10% drop in income worldwide. With a risk aversion coefficient of 1.85, used by Nordhaus in his other calculations, it turns out that the magnitude of the utility drop hinges crucially on whether we aggregate wealth worldwide or consider different individuals separately. I won't bore you with the calculations, but the damage is almost 7 times higher when we cast off the "aggregate wealth" simplification.
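For readers who do want the calculation, here is a quick check of that factor -- a sketch using the standard CRRA utility form with eta = 1.85:

```python
def crra_utility(c, eta=1.85):
    return c ** (1 - eta) / (1 - eta)

def utility_loss(before, after, eta=1.85):
    return crra_utility(before, eta) - crra_utility(after, eta)

# "Aggregate wealth" shortcut: one representative agent at the mean income.
mean_income = (40_000 + 1_000) / 2
aggregate = utility_loss(mean_income, 0.9 * mean_income)

# Disaggregated: average the losses of the rich and poor halves separately.
separate = 0.5 * utility_loss(40_000, 36_000) + 0.5 * utility_loss(1_000, 900)

print(f"damage ratio: {separate / aggregate:.1f}")  # roughly 6.7
```

The poor half's utility loss dominates the disaggregated figure, which is exactly what the aggregate-wealth shortcut erases.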

Admittedly, the world income distribution isn't quite as unequal as the one in my example, which inflates the "7" figure. But the "proportional damage" part of my example is also a simplification, and it works in the other direction. Most evidence indicates that the damage from climate change will not be evenly distributed among the economies of the globe, but will instead fall disproportionately upon poor regions like Africa and the Indian subcontinent. Although these issues are too complicated for me to pinpoint the exact level of downward bias caused by the practice of using aggregate world GDP, I hope it is clear that the effects may be quite large.



When considering the implications of the three concerns I have raised, one question is particularly important to keep in mind. What happens if the concerns are simultaneously valid -- if the Nordhaus/Manzi damage estimates are biased down by more than one of the weaknesses I mention? To a first approximation, the biases multiply. This means that if the failure to analyze extreme uncertainty, the high discount rates, and the crude worldwide aggregation of damages in Nordhaus's model each lower the optimal carbon tax by a factor of two, a better estimate would start with a carbon tax approximately eight times as high. Since Nordhaus's "optimal policy ramp" starts with a tax of $7.40 per ton of carbon dioxide, such a recalculation would bring us to almost $60 per ton, which is close to what many carbon tax advocates suggest. More strikingly, if each of the above failures causes a miscalculation by a factor of three, the total estimate may be off by a factor of twenty-seven, bringing us to a very high carbon tax of $200 per ton.
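The back-of-the-envelope multiplication looks like this; the $7.40 starting point is Nordhaus's, as noted, while the per-concern bias factors are the hypothetical ones from the paragraph above:

```python
base_tax = 7.40  # Nordhaus's starting carbon tax, $ per ton of CO2

for bias in (2, 3):
    combined = bias ** 3  # three independent downward biases multiply
    print(f"each bias x{bias}: corrected tax ~ ${base_tax * combined:.0f} per ton")
```

Three factor-of-two biases yield about $59 per ton; three factor-of-three biases yield about $200.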

Granted, the situation is not quite so simple, and crude multiplication does not account for the complicated ways in which changes in all parts of the model may interact with each other. But it does provide a good mental estimate, and most of all it illustrates how reasonable modifications to the Nordhaus/Manzi analytical regime drastically change the urgency of its recommendations.

2 comments:

Anonymous said...

Matt:

Thanks for taking the time to reply to some of the things that I’ve written with such care.

However, I think there are problems with each of your criticisms. I’ll index my comments to your numbering scheme.

1. You say:

“…I still believe that Weitzman's crude, back-of-the-envelope calculation is superior to the estimates of other climate models, because it stresses the importance of the unlikely but cataclysmic possibilities that almost certainly dominate the risk profile of global warming.”

But Weitzman’s calculation doesn’t STRESS THE IMPORTANCE of these risks, it ASSERTS THEIR PROBABILITY. This seems to me to be a crucial distinction. The IPCC presents probability distributions for the relative odds of various levels of warming by scenario through 2100 (links in my Cato dialogue). Weitzman does his own armchair climate science, disputes these projections and presents his own as superior. But to accept his projections is not to accept merely his economic analysis of the implications of projected warming, but to accept his climate science about the amount of warming we are likely to experience.

2. I won’t reprise it, but you can find my detailed reaction to the discounting question here: http://theamericanscene.com/2008/06/17/scientific-american-and-climate-change-ii-discounting

3. Yes, but. First, if there is (on a risk- and time-adjusted basis) greater cost from reduction in growth rate from mitigation efforts than benefits from reduced AGW on an average basis, then all else equal the utility calculation will tilt even worse against poorer regions because they have a lower starting income. Certainly the behavior of China, India, Brazil and every other significant actual poor country that I know of indicates where they stand on this trade-off. Second, it's pretty artificial to isolate this from every other actual political decision and demand that voters in the US, Europe and so on be expected to weigh the benefits created in other countries many decades from now as if they are identical to similar benefits that might be created (or costs taken from) people living in the US, Europe and so on today.

Best,
Jim Manzi

Anonymous said...

Matt:
I really enjoy your blog -- it's always nice to discover a like-minded person (in our case, a fellow math/econ student with a wonky streak). Here was my take on the Cato Unbound article.
Keep writing!

Best,
Sarah Constantin