Thursday, August 20, 2009

Is summer vacation really the problem?

In an aside, Scott Sumner vents about proposals for a longer school year:
BTW, I highly recommend Tyler’s book—it’s full of fascinating insights. He finally explained to me why I was so bored in school, despite my love of learning. All I recall from school is staring at the clock waiting for it to hit 3:20, and waiting for summer vacation. When I read proposals for a 12-month school year all I can think of is the book 1984. And I am an educator.
What's the possible justification for these proposals? There are studies suggesting that summer vacation is a primary source of the learning gap between students with low and high socioeconomic status. As measured by test scores, poorer students often improve as much as or more than affluent students during the school year, only to fall behind massively during the summer. This raises the possibility that we could slash the achievement gap simply by extending the school year.

Yet I'm not sure that the logic underlying such conclusions is sound. As a useful analogy, imagine two countries: Niceland and Struggalia. Over the past 20 years, Niceland has experienced an extremely consistent rate of economic growth, with a 3% increase in GDP every year. Struggalia, on the other hand, alternates between contractions and recoveries, and its average annual growth rate in the last two decades has been only 1%.

Economists parachute into Struggalia and discover an interesting empirical fact. During times of recovery, Struggalia grows at an average rate of 3%—just as much as Niceland! The cause of Struggalia's long-term growth problems, the economists conclude, is straightforward. If only Struggalia didn't suffer from intermittent recessions, it would experience the same long-term growth as Niceland.

Another economist, however, begs to differ. She notes that Struggalia's rapid growth during periods of recovery comes primarily from increased use of capacity that already existed: workers and factories that idle during the recession return to productive use. If Struggalia didn't have such deep recessions, she declares, GDP wouldn't grow at the same speed during good times, because it would no longer be bouncing back from a period of idle capacity. The boom/bust cycle might be bad in its own right, and it might have more complicated effects on long-term growth, but eliminating it certainly wouldn't make Struggalia as prosperous as Niceland. In fact, the maverick economist claims, it's not clear that Struggalia could grow much faster than 1% at all, even if it smoothed the business cycle completely.

Does this sound plausible to you? It does to me—the latter economist is arguing that Struggalia's economy is mostly trend stationary rather than difference stationary. Is student learning mostly trend stationary as well? I don't know, but it certainly isn't difference stationary, and naive extrapolations of the rates of improvement we see during the school year are unlikely to hold if summer vacation is slashed. In fact, if students are anything like Struggalia, we might not see very much improvement at all...
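
For readers who want the distinction made concrete, here is a minimal simulation sketch—all parameters invented—contrasting a trend-stationary series, where being below trend predicts faster growth (the Struggalia story), with a difference-stationary one, where observed growth rates are an honest guide to the trend:

```python
import numpy as np

rng = np.random.default_rng(0)
T, mu, rho = 400, 1.0, 0.7          # trend growth per period; shock persistence
eps = rng.normal(0, 3, size=T)      # invented parameters throughout

# Trend-stationary "Struggalia": deviations from the trend die out over time.
u = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + eps[t]
y_ts = mu * np.arange(T) + u

# Difference-stationary economy: every shock permanently shifts the level.
y_ds = np.cumsum(mu + eps)

# Being below trend predicts faster growth only in the trend-stationary case,
# so extrapolating recovery-period growth there badly overstates the trend.
for name, y in (("trend-stationary", y_ts), ("difference-stationary", y_ds)):
    gap = y - mu * np.arange(T)
    corr = np.corrcoef(gap[:-1], np.diff(y))[0, 1]
    print(f"{name:>22}: corr(gap, next-period growth) = {corr:+.2f}")
```

In the first series the correlation comes out clearly negative—fast growth is mostly the gap closing—while in the second it is roughly zero.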

I do not understand the public plan

Part of the reasoning behind a public plan is that it will put public health care to a rigorous market test. If millions of customers voluntarily flee traditional insurers and sign up for the public plan, so much the better; if not, fair enough.

Yet I cannot see any good explanations of why the public plan should have even a modest chance of outcompeting traditional insurers without subsidies. Proponents' arguments include (in italics):

Since it doesn't need to make any profits, the public plan will be able to charge less for the same care.

Why, then, are nonprofit insurers not already dominant, since the exact same reasoning applies to them?

The public plan will not waste so much money trying to deny coverage to sick people.

This confuses the possible gains in a world where the public plan is the only option—so that competition no longer produces zero-sum games like denying coverage to the sick—with the world actually under discussion, where the public plan is forced to compete with existing insurers. Insurers are not idiots who foolishly spend more money than they make by working to deny coverage. They do it because it saves them money. If the public plan refuses to play such games, it will be uncompetitive—end of story.

The public plan will have lower administrative costs and save money by using its bargaining power to push down payments.

I put very little stock in hypotheses that could just as easily apply to government involvement in any industry. To make a serious case for the public plan, you can't just offer generic arguments about why the government might be good at running a business. You need to provide specific points about why the special economics of health care make a government plan more competitive. But while there are economic arguments for why a government monopoly on health care might be more efficient, these do not apply to the case where a public plan is merely an equal, subsidy-free competitor to existing plans.

So what? If you're right, then a public plan with no subsidies will fizzle out, and we'll be back to where we started. What's the harm in trying?

Frankly, I don't find guarantees of no subsidy credible. This isn't because I think proponents are dishonest schemers who will provide subsidies the instant everyone else stops looking. To the contrary, I think they are perfectly honest in their belief that the public plan will operate subsidy-free. Even the best of intentions, however, cannot make this outlook believable.

Imagine that the government sets up a public plan. It needs to choose a price for the plan. At any price, however, it will be disproportionately frequented by sick patients taking advantage of the public plan's refusal to deny coverage; in fact, the higher the price, the sicker the pool of customers for the public plan will be, since at a sufficiently high price the only people with an incentive to purchase the plan will be the ones trying to escape massive medical bills.

The adverse selection problem here is so overwhelming that there is almost certainly no price at which the government can break even. But this will only be discovered after the public plan has already swung into operation, and millions of people have signed up. What does the government do now? Throw up its hands, announce that the plan isn't solvent, and force millions of customers who have placed their trust in the public plan to join the ranks of the uninsured? Of course not. Subsidies are inevitable.
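
To see how overwhelming the selection problem is, consider a toy simulation. All numbers here are invented; only the mechanism matters: whoever expects to cost more than the premium signs up, so raising the premium to break even just makes the pool sicker.

```python
import numpy as np

# Toy adverse-selection spiral (all numbers invented for illustration).
rng = np.random.default_rng(1)
costs = rng.lognormal(mean=7.0, sigma=1.5, size=100_000)  # skewed annual costs

premium = costs.mean()  # start by pricing at the population-average cost
for step in range(6):
    buyers = costs > premium          # enrollees self-select into the plan
    breakeven = costs[buyers].mean()  # what the plan must charge to break even
    print(f"premium ${premium:>9,.0f} -> pool {buyers.mean():6.1%} of market, "
          f"avg enrollee cost ${breakeven:>9,.0f}")
    premium = breakeven               # chase costs... and shrink the pool again
```

The premium chases the pool's average cost upward forever while the pool shrinks toward the very sickest customers—which is exactly why there is no break-even price.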

There is one complication. If, as described in my last post, new rules against denying care for preexisting conditions cause all insurers to ratchet down their coverage to whatever minimum is defined by law, a public plan that also offers this minimum will not be at any particular disadvantage. Of course, it will not have any advantage either, but it may survive without subsidies. Under this scenario, however, the government already has control over the extent of all insurance coverage; the public plan is merely a direct manifestation of an insurance system that has effectively been nationalized anyway.

In other words, the public plan is only viable if it barely matters.

The best critique of reform that no one is making

Paul Starr writes:
Some other provisions of the reform package aren't separable. For example, Congress can't tell insurers they have to cover pre-existing conditions unless there is an individual requirement (or mandate) to buy insurance. Otherwise, people would rationally not buy coverage until they get sick, and the whole insurance system would break down.
It's more problematic than this. In order to mandate health insurance, Congress must endow some administrative body with the ability to decide what exactly qualifies as "health insurance." If private insurers decide to offer coverage beyond the minimum required to qualify, they'll be swamped by customers with preexisting conditions (whom they are no longer allowed to deny) seeking to take advantage of the superior coverage. This is untenable, and there is almost surely no stable equilibrium other than providing the minimum.

The effects are profound. Under this system, the economics of adverse selection imply that the level of coverage will effectively be determined by government fiat. I am not sure that this is bad; the benefits from universal care may outweigh the loss of competition and variety in the marketplace. It will represent, however, a massive shift toward government control in health care. And it means that progressives cannot respond to complaints about rationing by saying "well, you can always buy extra coverage on the private market." When a law banning discrimination against customers with preexisting conditions effectively prevents insurers from offering any extra care, this is simply not true.

When judges are bad at arithmetic

Richard Posner is a very smart man who enjoys feigning expertise even when he has little understanding of the topics at hand. Mark Thoma, Brad DeLong, and Menzie Chinn provide some well-deserved criticism of his latest; DeLong notes that Posner missed the appropriate comparison between stimulus money and GDP change by a factor of sixteen. (!)

Posner's lack of facility with arithmetic comes as little surprise. It's a defining feature of his commentary, and was thrown into particularly sharp relief for me when he opined on drunk driving back in 2006:
This is actually a plausible inference. If there are only 2,000 nonpassenger deaths (other than that of the drunk driver himself) caused by drunk driving every year (and how many of the accidents in which a drunk driver is involved are actually caused by the drinking?), then the probability of being killed by a drunk driver is very small...
The second parenthetical comment is utter nonsense—a clever debater's jibe that cannot hold up to even the slightest quantitative scrutiny. Consider the fraction of traffic fatalities associated with drunk driving: about a third, according to the NHTSA. Now imagine the number of miles driven drunk as a fraction of the total vehicle miles in the United States. When you consider that most Americans do not drive over the legal limit, and even those who do drive drunk generally do so for only a small fraction of their total driving time (much of which consists of routine driving like the daily commute), it's hard to imagine that this ratio is any more than one in a thousand.

If, say, 30% of deaths associated with drunk driving are not actually caused by drunk driving, then those drivers still account for a tenth of all traffic fatalities while driving at most a thousandth of all miles—which means the kinds of people who drive drunk must be at least a hundred times more likely than the rest of the population to cause a fatality per mile driven, even setting the alcohol aside. Granted, I am sure that the ratio is more than one, but I very much doubt it is anywhere near 100.
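
Here's that back-of-the-envelope arithmetic spelled out (every input is a rough guess from the discussion above, not a precise statistic):

```python
# Back-of-the-envelope check on Posner's parenthetical.
share_of_deaths = 1 / 3       # traffic fatalities involving drunk driving (NHTSA, roughly)
share_of_miles = 1 / 1000     # vehicle-miles driven drunk, as a generous upper bound
not_caused_by_drink = 0.30    # Posner's suggestion, taken at face value

# Deaths these drivers would supposedly cause anyway, as a share of the total:
residual_share = not_caused_by_drink * share_of_deaths   # 10% of all fatalities
implied_multiple = residual_share / share_of_miles       # ...on 0.1% of the miles
print(f"implied per-mile risk, drunk drivers vs. average: {implied_multiple:.0f}x")
```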

You can quibble with my assumptions if you like. No matter how you slice the numbers, however, it is impossible to describe a world where more than a small fraction of deaths involving drunk driving would have occurred without it. Yet this analysis clearly never occurred to Posner. He is a clever writer and debater, but he lacks the quantitative intuition that makes such insights obvious. And while this isn't so bad for a judge, it's a serious problem for anyone who purports to be an authority on all matters economic.

The fallacies of shared space

On my recent skeptical post about "shared space"—the principle that many roads are actually safer when traffic rules are removed—commenter Dan points out an even more significant flaw in advocates' analysis:
People may be avoiding this road, decreasing the total traffic. Apparently the cars are going slower than before. So the new accident statistics should be compared to the expected decrease due to reduced traffic/slower speeds. Combine that with the decrease in accidents citywide and small N (both mentioned elsewhere in the article) and you have a strong case that the conclusion is overstated.
This is an incredibly important point. The relevant consideration is not whether shared space decreases accidents on the roads where it is implemented; it is whether shared space decreases accidents on these roads per passenger mile. If the slow, harrowing experience of driving on a road without well-defined rules causes most drivers to seek an alternate route, it's not surprising at all that we see a measured decrease in accidents. The accidents are simply redistributed elsewhere. To make a real case for shared space, you'd have to show that accidents decrease after you adjust for the much lower level of traffic. Using raw totals instead is one of the most egregious examples of statistical malpractice I've ever seen.
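
A quick illustration of why the distinction matters. The traffic volumes here are invented, since the article doesn't report them, but the mechanism is the whole point:

```python
# Hypothetical before/after comparison: casualty counts are loosely based on
# the Guardian figures, but the traffic volumes are invented for illustration.
before_casualties, after_casualties = 70, 40
before_traffic, after_traffic = 100, 50   # indexed passenger-miles (assumed!)

raw_change = after_casualties / before_casualties - 1
rate_change = ((after_casualties / after_traffic)
               / (before_casualties / before_traffic)) - 1
print(f"raw casualties:                {raw_change:+.0%}")   # -43%, the headline
print(f"casualties per passenger-mile: {rate_change:+.0%}")  # +14%: worse!
```

With these hypothetical numbers, the headline 43% drop in casualties masks a per-mile accident rate that actually got worse.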

Monday, August 17, 2009

Why John Taylor is wrong to blame the Fed, redux

John Taylor is a renowned monetary economist who has decided that the Federal Reserve is at fault for the housing boom, and thus most of the current crisis. As I've blogged before, his arguments are curiously deficient: they all seem to boil down to "well, the Fed didn't follow the Taylor rule, and the Taylor rule can't possibly be wrong." One can imagine economists putting a little too much faith in rules named after themselves...

First, it's important to understand what economics does not tell us. Monetary theory does not provide any mechanism through which the Fed's decisions about the money supply can produce massive distortions in the relative prices of assets, cause banks to make loans to customers who clearly can't pay them back, or lead investors to systematically misprice risk. Monetary theory does tell us that if the Fed keeps rates unnaturally low, we'll suffer from inflation—but as Brad DeLong points out, this is not at all what happened in the current crisis.

Let's take a look at the only publicly available and even slightly rigorous argument that Taylor has provided for his position: this speech from September 2007. In it, he performs a "counterfactual simulation," comparing the path of housing starts simulated under the actual Fed rate to an alternate path computed under what Taylor believes the rate should have been. This is what he obtains:

Indeed, this is already far from rigorous: he's using the historical correlation with the federal funds rate to predict housing starts even when there are innumerable other variables and directions of causality in play. As we can see, however, his simulations don't capture the dynamics of the housing bubble at all—and it's revealing that he doesn't even show us a curve of housing prices, which are the most difficult part to rationalize in an economic model.

To better fit the data, he adds a new dimension to the model:
However, such sharp falls frequently occur at the end of booms because of rapid changes in housing inflation expectations. In fact, there is a close interactive relation between housing price inflation and housing construction (technically, two-way Granger causality). Placing housing inflation directly into the housing starts equation, and adding a simple equation to explain housing inflation, helps explain more of the decline as shown in Figure 3, but psychological factors (a Shiller swoosh) still seem to have been at work as the boom ended.
The result is a slightly more impressive picture:

Hidden in the discussion, however, is an important clue about what's really going on. He adjusts the model for the "two-way Granger causality" (which is really correlation with a lag, not actual causality) between housing starts and housing price inflation. More housing starts lead to greater price inflation, which in turn leads to more housing starts, and so on; it's easy to see why this makes the paths in his simulation more dramatic! This embedded assumption, however, is enough to generate bubbles completely independent of the Fed's policy. It's a classic feedback loop where price increases beget additional price increases—perhaps the essential characteristic of a speculation-driven bubble. Maybe in this case he thinks that the Fed was the initial trigger (I'd disagree), but if Taylor really believes in the methodology he's using for these simulations, he has to admit that all kinds of other events could have triggered the same sequence. In this model, all you need is some initial run-up in prices and construction, and then the speculative feedback loop—the real driver of the bubble—takes over. The Fed is no more to "blame" for the bubble than the proverbial butterfly flapping its wings is to blame for the subsequent tornado.
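
To see how little the trigger matters once you've assumed this kind of feedback, here's a deliberately crude sketch—not Taylor's actual equations, just the qualitative loop, with coefficients I've invented:

```python
# A caricature of the feedback loop: construction responds to price gains,
# and price gains respond to the construction boom. Any small shock will do.
starts, inflation = 100.0, 2.0     # index of housing starts; price inflation (%)
for year in range(8):
    starts *= 1 + 0.05 * inflation                          # prices spur building
    inflation = 0.6 * inflation + 3.0 * (starts / 100 - 1)  # booms feed prices
    print(f"year {year}: starts {starts:6.1f}, housing inflation {inflation:5.2f}%")
```

Seed it with any small disturbance—monetary, regulatory, or purely psychological—and the loop produces the same accelerating boom.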

And if speculative feedback—enhanced by regulatory failure and widespread mispricing of risk—is the real culprit, we're brought to a very different set of conclusions. Pinning responsibility on the Fed, Taylor makes it seem as if our real problem is simple and technical: just set rates according to his formula and everything will be fine. Reality, however, isn't so simple, and we'll need more than slight tweaks to monetary policy to make ourselves safe in the future.

Why Mandarin won't be a world language: part 3

A few additional points to follow up on my previous posts about why Mandarin won't replace English as the world's most prominent second language:
  1. In the last few centuries, we have seen two large Asian languages shift from Chinese characters to alphabets: Korean and Vietnamese. In Korea, the homegrown Hangul alphabet has almost completely replaced Chinese characters in written usage, and knowledge of Chinese logograms is steadily diminishing from generation to generation. In Vietnam, meanwhile, the vast majority of the country became literate on a modified Latin alphabet, promoted by nationalists and colonialists alike as the path to reading and writing, and it now also dominates that language. In both cases, alphabets won because they are simply much easier to learn than logographic systems. When there are thousands of non-phonetic characters to master, the barriers to entry are formidable.

  2. I know many extremely smart Chinese-American students, people who came over from China somewhere between the ages of 7 and 12 and are fluent in the spoken language, who are nevertheless barely literate (or not literate at all) in the written one. I have not seen this happen to a comparable degree with any other group of immigrants. If young people who grew up in China—and whose entire extended families, in many cases, remain in China—can't muster the energy to become fluent in Chinese writing, how can the rest of the world?

Is shared space really safer?

Matthew Yglesias points us to the shared space concept, the idea that cities should abolish the traditional separation between vehicles and other road users, in addition to removing traffic signals and signs. This innovation has led to suggestions that a road with fewer rules is actually safer, like this one from the Guardian:
Aesthetics were partly behind the changes - the street is now clad in York stone and granite - but public safety was the other motivation, and accident figures out today seem to justify the council's initiative. Figures for 1998 to 2000, before the changes, show there were 70 casualties on the high street, including eight people killed or seriously injured and 62 suffering slight injuries. In the two years from September 2003 to August 2005 there have been 40 casualties (four killed or seriously injured) and 36 slight injuries - a 43%-plus decrease.
It's easy to see the appeal: the idea is deliciously counterintuitive, and it appeals to the widespread sentiment that cars receive too much special treatment.

Yet I'm skeptical. Certainly I can see how a limited implementation of this idea, for a limited period of time, reduces accidents. Drivers are shocked by the sudden change and take special care to avoid accidents. Since better attention can make an enormous difference in road safety, we see an initial decline in casualties, especially if the new policy is limited to a small set of streets, and drivers are startled every time they enter the rule-free space. It's not clear, however, that this is either sustainable or scalable. Once shared space becomes the norm, motorists will no longer apply the same extreme caution every time they drive, and you'll start to see everyone pushing the limits.

Perhaps this isn't a fair comparison, but some form of "shared space" already exists in the vast majority of third world countries, and the resulting safety record isn't exactly positive...

Friday, August 14, 2009

Tax interaction: the ultimate unintuitive, general equilibrium effect

Greg Mankiw and Brad DeLong differ on the effect of carbon pricing when the revenues are not used to cut income taxes.

Here's Mankiw:
But if most of those allowances are handed out rather than auctioned, the government won’t have the resources to cut other taxes and offset that price increase. The result is an increase in the effective tax rates facing most Americans, leading to lower real take-home wages, reduced work incentives and depressed economic activity...
DeLong:
I have been trying to think of a model of the economy in which Mankiw's claim is true. I am not having an easy time doing so...

We currently have a tax on carbon of zero. We don't think that a zero tax on carbon is where we should be right now, do we? It is good to raise the tax on carbon whether you use the revenues to reduce the tax rate on labor or whether you distribute them in some other way. Only if our tax on carbon were already equal to the pollution externality v, and we were thinking of raising it even higher, would it be a bad bet unless we used the revenues to reduce the tax on labor income--or if our tax on carbon was inescapably also a tax on labor income via joint production.

There are, I think, some very strong assumptions about the form of the production function and the demand for and supply of labor underlying Mankiw's claim: for his claim to be true, production has to be joint or nearly joint. Nearly every way of employing labor from playing Peruvian shepherd pipes in the public square to making aluminum via a Hall process powered by low-pressure steam generated by the combustion of lignite must involve nearly the same carbon footprint. And I can't figure out why he thinks that the production function takes that form...
I think Mankiw is actually right here, for a reason he drives at but never quite states explicitly (perhaps because it seems too technical for a general audience): the existence of tax interactions. The reasoning is simple but unintuitive. A tax on carbon lowers the returns to spending on consumption, and thus like any consumption tax acts implicitly as a tax on income, albeit without the intertemporal distortions. If our starting income tax rate were zero, this would be fine—one can work out the math and show that a carbon tax would discourage labor only to the extent that it was efficient to do so. Yet when we are starting with income taxes well above zero, the implicit labor taxation created by a carbon tax reduces work incentives far more than is warranted—rather than taking us from 0% to 2%, it pushes the rate from 28% to 30% (or 25% to 27%, or...), which is far more costly. If we recycle the revenues and lower income taxes, we relieve most of this negative effect, but otherwise the costs of the policy can increase by several hundred percent. Contra DeLong, you don't even need to consider the impact of carbon in manufacturing or services to get this effect (although it certainly increases the magnitude). It's enough to consider carbon—in the form of coal-fired electricity, or oil-powered transportation—as part of the consumption bundle.
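
The arithmetic behind "several hundred percent" is easy to sketch. Under the textbook Harberger approximation, deadweight loss grows with the square of the tax wedge, so the cost of adding two percentage points depends enormously on the rate you start from (stylized numbers, ignoring everything else):

```python
# Stylized: deadweight loss proportional to the square of the labor-tax wedge.
def dwl(t):
    return t ** 2 / 2   # units are arbitrary; only the ratios matter here

carbon_wedge = 0.02     # implicit labor tax created by the carbon price
for base in (0.00, 0.28):
    extra = dwl(base + carbon_wedge) - dwl(base)
    print(f"income tax at {base:.0%}: marginal deadweight loss {extra:.4f}")
# at 0%:  0.0002; at 28%: 0.0058 -- roughly 29 times larger, unless the
# revenues are recycled into income tax cuts
```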

This primer by Stanford professor Lawrence Goulder is a very good introduction to the topic.

Why the world needs economists

Joe Romm claims that Cash for Clunkers is a good deal (via Bradford Plumer):
BUT as a stimulus that saves oil while cutting CO2 for free — it has turned out to be a slam dunk, far better than I had expected. Indeed, Borenstein points out that “America will be using nearly 72 million fewer gallons of gasoline a year because of the program.” At $3 a gallon — hardly what the price is likely to average over the next decade — that is $216 million a year in gasoline savings.

So the billion dollar program pays the taxpayers back in oil savings in 5 years. That means the CO2 savings are for free!
This all-too-common argument is enough to give an economist an aneurysm. What Romm misses is that consumers already internalize the savings from spending less on gasoline. Paying them extra to junk their old cars for new, fuel-efficient ones amounts to compensating them twice for improved fuel economy, leading to wasteful exchanges where the benefits hardly justify the costs.

This feels like too easy a target, but since otherwise very smart people like Romm inexplicably make this mistake all the time, I think I should clarify with an example. Say that everyone in America attends Mattland, widely revered as the best amusement park in the world. Since it provides such exquisite entertainment, Mattland charges a reasonable price: either $30 for an all-day pass or $5 per hour. Assume further that while Mattland provides a great time to its visitors, its charms naturally wear thin after a while, and the vast majority of Americans choose to attend Mattland for about 4 hours. As optimizing economic creatures, visitors to Mattland choose to pay $5 per hour for 4 hours ($20) rather than the flat $30 fee.

Now the government comes in with its Amusement Efficiency Package and offers to subsidize all-day passes by $15. Everyone switches to the $30 pass, and advocates breathlessly declare that their efforts have saved the average American $20 in hourly fees. Success!

Of course, here it's obvious that there was no actual gain. Prodded by the subsidy, visitors needlessly bought the $30 pass when they would have been just as happy spending $20, and the net combined loss to consumers and the government was $10. Yet this is precisely the argument that Romm is making (and Plumer seems to find convincing)! Perhaps you feel that this is a silly example, ill-suited for a serious topic like energy policy. But as Paul Krugman memorably argued—and I think this case confirms—we need silly, playful examples to understand the essence of our arguments. Somehow the logic is much clearer when we're discussing amusement parks rather than fuel economy.
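
For the skeptical, here is the Mattland arithmetic spelled out line by line (all numbers from the example above):

```python
hourly_rate, hours, pass_price, subsidy = 5, 4, 30, 15  # the example's numbers

before = hourly_rate * hours           # $20: what visitors actually paid
consumer_after = pass_price - subsidy  # $15: everyone switches to the pass
government_after = subsidy             # $15: the taxpayers' contribution
total_after = consumer_after + government_after

print(f"claimed 'savings': ${before} in hourly fees avoided")
print(f"combined spending: ${total_after} after vs ${before} before "
      f"-> net loss ${total_after - before}")
```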

I don't claim to make any all-encompassing argument against cash-for-clunkers here. Its possible effect as a stimulus is far too complicated to examine in such a small space. Still, as Romm admits and an understanding of the economics of carbon policy confirms, cash-for-clunkers is an incredibly inefficient way to lower carbon emissions; and as I hope I've established here, tallying every fuel dollar saved as a direct benefit of the bill is patently illogical.

It seems that this is a general phenomenon. For whatever reason, otherwise very smart non-economists are prone to bizarre lapses in critical thinking when the conversation turns to economic efficiency. And this is why—whatever our faults may be—economists will remain an important part of the policy discussion.

Tuesday, August 04, 2009

The most important chart for understanding carbon policy

I've commented in the past on how there's a serious popular misconception about the changes we'll make to confront global warming. Many people think that we'll cut back dramatically on oil-fueled transportation, when in reality that's one of the polluting activities whose cost is least affected by a carbon price.

Let's consider what would happen if the price per ton of CO2 emissions shot to $100 tomorrow. This is much higher than we're likely to see in the near future, of course, but it's useful to understand what might happen a couple decades down the road. It translates into an increase in the price of electricity from coal of about 10.5 cents per kilowatt-hour, enough to increase the whole cost of coal power to over 15 cents. At this point, many alternative sources of electricity become extremely competitive with coal, and indeed probably cheaper: solar thermal, nuclear, wind, combined-cycle gas, etc. The corresponding rise in the price of gasoline, on the other hand, is slightly less than a dollar per gallon. This is hardly insignificant, but it's smaller than the swings we've seen over the past few years, and not nearly enough to threaten the basic viability of car transportation.
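
Here's a rough reconstruction of those numbers. The emission factors are standard ballpark figures, and the baseline prices are my own assumptions rather than data from the sources in the footnote below:

```python
carbon_price = 100.0                # $ per metric ton of CO2
coal_kg_per_kwh = 1.05              # ~kg CO2 per kWh of coal-fired electricity
gasoline_kg_per_gal = 8.9           # ~kg CO2 per gallon of gasoline burned

coal_adder = carbon_price / 1000 * coal_kg_per_kwh     # $/kWh
gas_adder = carbon_price / 1000 * gasoline_kg_per_gal  # $/gallon

coal_base, gas_base = 0.048, 2.35   # assumed baselines: $/kWh and $/gallon
print(f"coal power: +{coal_adder * 100:.1f} c/kWh "
      f"({coal_adder / coal_base:+.0%} on a {coal_base * 100:.1f} c/kWh baseline)")
print(f"gasoline:   +${gas_adder:.2f}/gal "
      f"({gas_adder / gas_base:+.0%} on a ${gas_base:.2f}/gal baseline)")
```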

To grasp the difference in magnitude here, I think it's best to look at a chart:


At current prices, coal power skyrockets 217% in cost while gasoline inches up only 38%. Gasoline (and thus car transportation) is one of the least responsive prices to a carbon charge, while coal power is one of the most. This is a good illustration of why explicit carbon pricing is so important—without it, we're liable to miss vast differences in the cost-effectiveness of different strategies to reduce emissions. Coal is the low-hanging fruit; motor vehicles are not.



*To arrive at these numbers, I used data on the carbon intensity of coal available here and on the carbon intensity of gasoline available here. I took the price of coal generation from this MIT report on the future of coal, averaging the prices for the four standard types of coal generation on page 19 and adjusting slightly for inflation since the report was published. Gas prices are taken from EIA data here.

Edit: As Prent points out in comments, I forgot to define my units! The coal price is in cents per kilowatt hour.

Hoisted from comments: why Mandarin won't become the second language of choice

Shane follows up on my last post about Mandarin as a world language in comments:
You're right that Mandarin Chinese is a difficult language to learn. I think you're a little bit off as to why. In addition, you are wrong about how quickly one can type in Chinese, for the same reason.

The greatest difficulty I saw when watching Americans learn Chinese is the tones. I watched a small number master the writing system, but I never saw a person who could differentiate the tones like a native speaker. And so once you get into an intermediate level, it becomes far easier to read and write Chinese than it does to understand and speak it.

Because the tones carry meaning, Chinese has actually developed into a language where the "meaning density" (I'm just making up a term here) is very high with regard to the syllables...

This becomes a huge problem when trying to watch the news, when they use Chinese acronyms (think the way the military abbreviates "Central Command" to "CENTCOM" or how the "Ministry of Peace" was abbreviated to "minipax" in Orwell's 1984). The anchors simply say so much in such a short amount of time that non-native speakers (or even less-educated native speakers) are quickly overwhelmed.

...the Mandarin Chinese that we learn as students of a second language is completely different from what is spoken on the ground. Like your examples of French and Latin, it's only spoken and understood by the elites in China. I can't understand someone from Sichuan or Shandong speaking "Mandarin", the same way I can't understand some English accents from the British Isles. I would even venture to say that Standard Mandarin Chinese has fewer native speakers than English. For these reasons I think Mandarin Chinese has almost no chance of ever overtaking English in importance.
There's much more, including an example of the tremendous difference in "meaning density," in the original comment.

My roommate tells me that tones were also his greatest barrier in trying to learn Mandarin—as Shane says, this seems to be a problem for the vast majority of learners. It's yet another way in which Chinese departs from most other languages. This emphasizes, I think, how "English versus Chinese" alone isn't really the right way to look at the competition. At the very least, it's also "Spanish and Portuguese and French and Italian and German and Dutch and more against Chinese," as all the former languages are far more similar to English, with the same basic alphabet, related syntax, some shared vocabulary, and no tones. It will be extremely difficult to convince speakers of these languages to adopt Chinese as their second language of choice—the learning curve is just too steep. (To be fair, there are also additional languages more similar to Chinese, but they're spoken by a much smaller population.)

Shane's point that Standard Mandarin is to a large extent an artificial construction, spoken by a much smaller number of people on the ground, is also very good. One of the most remarkable aspects of American English is how it remains almost constant across large swathes of the country: when I was eleven, I moved from Phoenix, Arizona to Portland, Oregon (1000 miles away!) and barely noticed any difference in speech. In more established regions of the country, there are more distinct dialects, but even they're starting to disappear as General American continues its relentless push.

Admittedly, at one point this was an artificial "standard" dialect as well, elevated by television's somewhat arbitrary selection of Midwestern speech as its style of choice. It's possible that China will someday experience the same transformation. But it's starting with greater fragmentation of dialects than ever existed in America (arguably even in the English-speaking world at large), and in the meantime English can only solidify its dominance.

Monday, August 03, 2009

Mandarin: not a world language anytime soon

Every so often we see wide-eyed futurists talking about how Mandarin will usurp English's role as the world's most important second language, and become the "language of the 21st century." At first, this seems vaguely plausible: China is much larger than the United States and all other English-speaking countries put together, and it may eclipse them in economic influence sometime in the next 50 to 100 years. The status of English, after all, has mostly been a function of the economic power of the United States—why won't China take the crown as its economy grows?

Well, first of all, you have to contend with the massive base of people who already speak English as a second language. More students currently study English in China alone than in the US. Add this to a strong desire for English proficiency in other East Asian countries, extremely strong penetration of English in continental Europe, and English's importance among the economic and social elite in India (soon to be even more populous than China), and you have formidable momentum on English's side. You certainly don't see millions of German schoolchildren lining up to learn Mandarin, or ambitious Indian students attending Mandarin institutions to land a plum job in an office park.

Sure, you say, but Latin and French were the "world languages" of their time—what's to stop English from suffering the same fate? Frankly, there's no equivalence. Latin and French may have been the languages of diplomacy, science, and the cosmopolitan elite in Western Europe, but they had nowhere near the worldwide prominence that English now enjoys, nor its wide base of second-language speakers of varied economic standing.

And at the same time, Chinese is hard. Yes, more people speak Mandarin or Cantonese as a first language than English, but that's not necessarily the most accurate comparison. One of the most difficult aspects of learning Chinese is the massive number of characters used for writing, which are far harder to learn than an alphabet. In fact, if we tally up the number of native speakers of languages using the Latin alphabet alone, we easily pass a billion: English, Spanish, Portuguese, German, French, Italian, Turkish, Polish, and so on. These people—and indeed almost anyone who uses an alphabet rather than a logographic script, which is the vast majority of the world—will have a far easier time reading and writing in English than Chinese. This is a particularly critical advantage in the Internet age, as it's faster and simpler to write in English characters than to painstakingly hammer out the pinyin for Chinese ones.

The conclusion is inescapable: English is here to stay. Mandarin may well become the world's second most prominent language, but we English speakers will continue to unfairly benefit from stumbling into the world's lingua franca at birth—and given my limited linguistic capacity, I'm damn glad about that.

-----------

Edit: Follow-ups here and here.

Sunday, August 02, 2009

Nuclear power and bias

When I started this blog as a 17-year-old with too much time on his hands, I promised that I would regularly review my past posts and note when I'd been wrong. Since I haven't made many explicit predictions here (with a few successful exceptions), it's not clear how to decide whether I'm objectively "wrong," but I can still look back and see where I strayed.

I feel a particular tinge of embarrassment when I read some of my past writing on nuclear power, because I realize that I had an unusually crude case of confirmation bias. Essentially, I interpreted every fact about nuclear power in the least favorable way possible, and I interpreted every fact about renewable power in the most favorable way possible. It's true, for instance, that nuclear power plants have frequently gone over budget, and that official projections of cost ignore the reality that capital outlays are often much higher than first estimated. This is still no excuse for handpicking the most speculatively high cost estimates for nuclear power and placing them alongside the most optimistic estimates for renewable electricity. It also elides the main issue: why is nuclear power going over budget anyway? If it's something that we could cure with streamlined regulatory processes or simple economies of scale, it may be just as amenable to cost improvements as the most promising renewables.

So that's cognitive bias number one: interpreting every new piece of evidence in a way predestined to support a single position. But I think there's more to this story. Why wasn't I more interested in a favorable evaluation of nuclear power, since it holds the promise to provide carbon-free electricity at a massive scale? Partly I was put off by legitimate drawbacks of nuclear power: the risk of accidents, the problem of waste, and the specter of proliferation. Yet I also suffered from simple overcertainty. I was convinced that an ambitious program of solar thermal power, improved efficiency, and possibly cheap photovoltaics could achieve the necessary reduction in coal-fired electricity.

A moment's reflection should reveal that this is a highly speculative hope, rife with assumptions about technology, scalability, infrastructure, and cost. Perhaps it is no less reasonable than its mirror image, the assumption that nuclear power can swoop in and solve all our energy needs the instant we muster the political will, but it is still fundamentally misplaced. Indeed, part of the reason why global warming is such a threat is that our limited models of the climate can't account for all its potentially devastating complications and feedbacks, and we're left with a dangerous uncertainty. In this light, it's positively crass to exclude possible solutions to climate change based on a simple mental model that tells us they'll be unnecessary. We can't be sure what will work: that's why we need to try every good option.

Confirmation bias and overcertainty are everywhere, and though I'm sorry I was such a pathetic victim, I'm glad that this issue provides such a nice example of common flaws in our reasoning. With any luck, my future punditry will be better.

Unproductive education

Dean Baker complains about a Post article on South Korea:
The decline in South Korea's saving rate, which is the main issue in the story, turns out to be much less of a story when you read through it. According to the article, one reason for the low saving rate is the large amount of money that Koreans spend on education in the form of private schools, tutors, and other expenditures to ensure that children do well in school.

In GDP accounts, education spending by households is counted as consumption. In reality, it is a form of investment. More educated workers are more productive workers. If the next generation of South Koreans all have the equivalent of medical degrees or PhDs, they will not have to worry about their lack of saving.
This is theoretically plausible, but I don't think it matches the facts on the ground. Education in Korea is essentially a desperate struggle for admission to prestigious universities, where acing the test and getting in is much more important than the actual education that comes afterward. You see some of the same tendencies in America, but they're taken to an extreme in Korea: high school students stay in private cram schools until after midnight, and then wake up early to study the next morning. Once they manage to get into college, most students don't try nearly as hard—the name of the university, not their record once there, is the credential that will define their working lives.

It's hard to imagine that a system with such bizarre extremes is really a productive way of educating people, and indeed it bears all the hallmarks of a signaling equilibrium run amok. Education is prized not for its productive value but as a means to societal prestige, and as the level of competition ratchets ever upward, the wasted resources from the struggle to climb to the top grow vast. Baker is right that spending on education can be a useful form of investment, but just as I don't think he would say that billions spent on test prep courses in the US are accomplishing anything societally useful, it's wrong to assume that similar spending in Korea will pay dividends for the economy.