Sunday, August 29, 2010

Why no question-and-answer sites for economists?

To me, one of the most intriguing online developments in the last several years has been the emergence of sites like Stack Overflow and Math Overflow: online communities where reputation is important and highly technical questions are met with highly technical answers. These sites are clearly superior to the listservs of yore—it's much easier to search for relevant content, and the best answers are "voted up" and displayed at the top of the page. Top contributors are visibly distinguished from the rest of the pack, which encourages good contributions and makes the best answers even easier to find.

At the moment, however, there is no comparable site for economics. Why? It's possible that one will emerge in time, but I think that economics faces difficulties unlike any in math or programming. Specifically, it's hard to wall off a community from non-experts.

In math, this is easy enough. Everyone has some academic exposure to mathematics (if only in grade school), and virtually everyone who is not qualified to comment on advanced mathematics is self-aware enough to realize it. The vocabulary alone is nearly impossible for an outsider to crack: at the moment, one of the top questions on Math Overflow is "Morphism Between Polarized Abelian Varieties." (Not very inspiring raw material for a troll!)

In economics, however, most people have opinions about the "big issues" regardless of their academic background. How much deficit spending can we tolerate? Is another round of trade liberalization warranted? Should the Fed be loosening monetary policy further? Any site advertising itself as a forum for economists would quickly be overwhelmed by outsiders preaching their own decidedly non-academic opinions. Good luck having a technical discussion about how to specify a vector autoregression when the board is filled with posts about impending hyperinflation and the evils of the Federal Reserve.

The only way to avoid an overwhelming influx of non-economists would be to label the site so that only economists have any clue what it's about. A site titled "Sunspot Equilibrium" wouldn't attract any aspiring investors searching for advice on Google, or monetary cranks chatting about the need to buy gold. With sufficiently aggressive moderation, it could remain the exclusive domain of experts almost indefinitely.

There's always the risk, of course, that a horde of ideologues would run in to do battle with mainstream economists, but I'm afraid that comes with the territory. Math Overflow has it easy; only the wildest cranks believe that the Evil Mathematics Establishment is ruining numbers. Inevitably, however, a lot of people believe just that about economists, and it'll be tough to develop the online institutions that are improving communication in so many other fields.

Saturday, August 28, 2010

Information and financial crises

No economics paper better captures the essence of the 2008 financial crisis than Ricardo Caballero and Alp Simsek's Fire Sales in a Model of Complexity. From the abstract:
In this paper we present a model of fire sales and market breakdowns, and of the financial amplification mechanism that follows from them. The distinctive feature of our model is the central role played by endogenous (payoff relevant) complexity: As asset prices implode, more "banks" within the financial network become distressed, which increases each (non-distressed) banks' likelihood of being hit by an indirect shock. As this happens, banks face an increasingly complex environment since they need to understand more and more interlinkages in making their financial decisions. This complexity brings about confusion and uncertainty, which makes relatively healthy banks, and hence potential asset buyers, reluctant to buy since they now fear becoming embroiled in a cascade they do not control or understand. The liquidity of the market quickly vanishes and a financial crisis ensues.
This all underscores the centrality of information to the stability of the financial system. If everyone knew exactly which banks were insolvent, in theory the resolution to a crisis would be rather simple: insolvent banks would fail, and the rest of the financial system would keep chugging along. In reality, of course, the lack of any orderly way to liquidate large banks (now purportedly fixed by Dodd-Frank) leads to a long and costly bankruptcy process, but in a world with better legal institutions this arguably wouldn't be as much of an issue. So why should we be concerned with protecting "systemically important" institutions?

As Caballero and Simsek point out, the problem is that indirect exposure to risks is extremely difficult to measure. Even if banks have a reasonably good idea about which institutions are likely to fail, they may not know which additional banks have exposure to those highly troubled institutions. Banks lacking any obvious proximity to the crisis may nevertheless be in danger if they're dependent on institutions that were affected.

And this doesn't stop at the second degree! Even if you have a good handle on which banks are exposed to banks with bad assets, you're unlikely to know which banks are exposed to banks that are exposed to banks that are exposed to banks that are exposed to banks with bad assets. (Just writing the sentence gives me a headache.) The risks to any individual bank may fall as we move farther from the source of the original imbalance, but the vastly larger pool of banks connected in some way to the crisis makes risk management just as difficult.
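To see how quickly indirect exposure balloons, here's a minimal sketch that counts the banks within k hops of a single troubled institution. The network below is made up (1,000 banks, each with 5 random counterparties), not data from the paper:

```python
import random
from collections import deque

def banks_within_k_hops(adj, source, k):
    """Return the set of banks reachable from `source` in at most k hops (BFS)."""
    seen = {source}
    frontier = deque([(source, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == k:
            continue
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, dist + 1))
    return seen

# Toy interbank network: 1,000 banks, each exposed to 5 random counterparties.
random.seed(0)
n, degree = 1000, 5
adj = {i: random.sample([j for j in range(n) if j != i], degree) for i in range(n)}

for k in range(1, 5):
    exposed = banks_within_k_hops(adj, source=0, k=k) - {0}
    print(f"banks within {k} hop(s) of the troubled bank: {len(exposed)}")
```

Even with only five counterparties per bank, the set of indirectly exposed banks grows geometrically with each degree of separation, which is exactly why tracing third- and fourth-degree exposure is hopeless in practice.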

Meanwhile, as Caballero and Simsek establish in their model, all this risk and complexity induces banks to hoard liquidity. This necessitates selling assets, but in a world where virtually every institution is in some way connected to the crisis, the pool of willing buyers isn't very deep. As a result, we see plummeting asset values and fire sales, which further damage balance sheets and introduce entirely new stresses into the system. This drives further liquidity hoarding and fire sales, and the vicious cycle continues until either the government steps in or our financial system is in tatters.
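The feedback loop described above can be caricatured in a few lines. Everything below (the debt levels, the linear price impact, the 1% initial shock) is an illustrative assumption of mine, not Caballero and Simsek's actual model:

```python
# Stylized fire-sale spiral: an initial price shock pushes the most fragile
# banks underwater, their forced sales depress the asset price further,
# and each round of selling drags more banks into distress.

def fire_sale_rounds(debts, holdings=100.0, price=1.0, impact=0.000005):
    sold = set()
    history = [price]
    while True:
        # A bank is distressed once its marked-to-market assets fall below its debt.
        distressed = [i for i, d in enumerate(debts)
                      if i not in sold and holdings * price < d]
        if not distressed:
            break
        for i in distressed:
            sold.add(i)
            price -= impact * holdings  # price impact of each forced sale
        history.append(price)
    return history, len(sold)

# 1,000 banks with debt loads spread from safe to fragile (made-up numbers),
# hit by a 1% initial drop in asset prices.
debts = [60 + 0.04 * i for i in range(1000)]
history, failed = fire_sale_rounds(debts, price=0.99)
print(f"rounds of selling: {len(history) - 1}")
print(f"banks forced to sell: {failed} of {len(debts)}")
print(f"asset price: 0.99 -> {history[-1]:.2f}")
```

With these parameters the cascade runs all the way through the system; a smaller price impact or sturdier balance sheets lets it die out after a few rounds. That knife-edge character is the vicious cycle in a nutshell.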

Friday, August 27, 2010

The strangest line in US immigration law

Take a look at Section 132(c) of the Immigration Act of 1990 (emphasis mine):
(c) DISTRIBUTION OF VISA NUMBERS- The Secretary of State shall provide for making immigrant visas provided under subsection (a) available in the chronological order in which aliens apply for each fiscal year, except that at least 40 percent of the number of such visas in each fiscal year shall be made available to natives of the foreign state the natives of which received the greatest number of visas issued under section 314 of the Immigration Reform and Control Act (or to aliens described in subsection (d) who are the spouses or children of such natives).
This raises the question: which foreign state received the "greatest number of visas issued under section 314", qualifying it for fully 40% of visas issued under this statute in fiscal years 1992 through 1994?

Ireland. The text of the Act obviously avoids mentioning the country by name, but as Anna Law explains in her article on the history of the visa lottery, the handout to Ireland was no accident. Apparently in 1990 there was still a substantial number of undocumented Irish immigrants in the United States. The visa lottery was primarily an attempt to legalize this population and restore the flow of legal immigrants from Ireland, who had difficulty competing with Asians and Latin Americans through the standard channels. To ensure that Ireland—and other supposedly disadvantaged European states—received the intended benefits, the bill's sponsors inserted a three-year transitional period that gave special visas to "adversely affected" states, most of which were located in Europe. 40% of these visas were reserved for Ireland specifically.

That's right: the "diversity" visa was originally a ploy to bring more white people into the country.

But here's the wonderful part. Since the visa's proponents felt compelled to disguise their handout as part of a general attempt to increase diversity, the visa they designed ultimately became a legitimate source of diversity. In 2009, the 5 largest sources of diversity visa immigrants were Ethiopia, Nigeria, Egypt, Bangladesh, and Uzbekistan. Immigrants from countries without a large preexisting population in the US—traditionally a group with no legislative voice whatsoever—were the unlikely beneficiaries of a bill intended to favor established ethnicities.

It's... oddly heartwarming?

Thursday, August 26, 2010

Asking the wrong questions about the minimum wage

Often discussions of the minimum wage center around its aggregate effect on employment—under a certain minimum wage, is total employment higher or lower than it would have been otherwise? Basic economic intuition says that the answer should be "lower", but alternative models are less clear, and empirical evidence is ambiguous.

Upon further examination, this isn't even the right question. Total employment isn't all that matters—among other things, we want to know who is holding the jobs. In a model of imperfect competition, a minimum wage has two effects. First, it decreases employment by putting a wage floor above some workers' marginal product of labor; second, it increases employment by inducing more people to enter the labor force. While the aggregate outcome may involve either higher or lower total employment, this will hide important shifts in the composition of the workforce.

What are the welfare effects of these shifts? Consider the group of people induced to enter the labor force by a small increase in the minimum wage. Since this group, by definition, consists of people who would opt to stay out of the labor force under a slightly lower wage, it gains relatively little utility from working—members of the group are still close to being indifferent toward work. Workers who lose their jobs because of the minimum wage, on the other hand, are likely to derive much higher utility from work. In particular, since they will tend to be individuals from poor backgrounds with little education, they are typically in special need of the income from a steady job.
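A deliberately crude simulation makes the composition point concrete. The uniform distributions of productivity and reservation wages, and the simple hire-if-productive rule, are assumptions for illustration only, not an estimated model:

```python
import random

random.seed(1)

# Hypothetical population: each worker has a productivity p and a
# reservation wage r (both made-up uniform distributions).
workers = [(random.uniform(4, 12), random.uniform(4, 12)) for _ in range(100_000)]

def employed(p, r, wage):
    # Hired if productivity covers the wage; works if the wage
    # beats the reservation wage.
    return p >= wage and r <= wage

w0, w1 = 7.0, 8.0  # old and new minimum wage

displaced = [(p, r) for p, r in workers if employed(p, r, w0) and not employed(p, r, w1)]
entrants  = [(p, r) for p, r in workers if employed(p, r, w1) and not employed(p, r, w0)]

avg = lambda xs: sum(xs) / len(xs)
print(f"displaced: {len(displaced)}, avg surplus lost (w0 - r): {avg([w0 - r for _, r in displaced]):.2f}")
print(f"entrants:  {len(entrants)},  avg surplus gained (w1 - r): {avg([w1 - r for _, r in entrants]):.2f}")
```

In this toy population the new entrants are nearly indifferent about working by construction (their reservation wages sit just below the new minimum), while the displaced workers were, on average, far from indifferent. That asymmetry is the welfare concern described above.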

This doesn't mean that the minimum wage is necessarily a bad idea. From a social welfare perspective, it's still possible that the direct income-increasing effects of the minimum wage will outweigh the disemployment effects. But it is critical that we know what those disemployment effects are, and looking at aggregate employment alone won't tell us.

Wednesday, August 25, 2010

The crucial difference between patents and copyrights

After correctly making the case against copyright protection in fashion, Reihan Salam states that "in my view, copyright protection is a bad idea in general". He follows up by citing an interesting example from Against Intellectual Monopoly, which discusses how James Watt's "invention" of the steam engine really consisted of making improvements to existing steam engines and being assertive about patent rights. The punch line? The steam engine only took off after Watt's patents expired.

I hate to be so pedantic, but we need to remember that patents and copyrights are very different. While the steam engine example is useful for understanding the effects of patent protection, it offers very little support for Reihan's opposition to copyrights "in general".

The key difference is that copyrights don't allow ownership over an idea. If Reihan writes a book about how iPhone apps will revolutionize lighthouse construction (or something similarly Reihan-esque), I can come along and write a very similar book without penalty. As long as I avoid outright plagiarism, I am free to remix Reihan's ideas in my own bestselling work. "Intellectual monopoly" isn't really the right phrase to describe text copyrights; no one has a monopoly on ideas (and no one should!).

It's especially ironic—though admirable in its way—for a professional writer to advocate the abolition of copyright. In a world without copyright, it would be extremely difficult for National Review to make any money. I could come along and extract National Review's content onto my own, nearly ad-free page; the magazine would need to remove ads on its own site to compete, and revenues would plummet. An online subscription model like the Wall Street Journal's or Financial Times's would be dead in the water.

This isn't to say, of course, that our current copyright regime is optimal. There is no reason why the copyright on Reihan's blog posts should last any longer than, say, ten years. But abolishing copyright altogether would result in a radical change in the incentives to create written content, and as I hope my earlier post illustrates, the economic case for no copyright is very convoluted and difficult to make.

Patents are a completely different story, because they do produce intellectual monopolies and all kinds of pernicious rent-seeking behavior. Granting software patents is surely a destructive practice, and our entire patent system deserves a comprehensive and skeptical reexamination. But please, don't conflate justified skepticism about patents with opposition to intellectual property in general. The two are very different.

Does this tell us something about the Israeli psyche?

Upon learning that the US* won this year's International Olympiad in Informatics—the premier algorithmic programming contest for high school students—I came across an article on the Israeli website ynetnews discussing the Israeli team's performance. The spin? "Iran beats Israel in Informatics Olympiad". The article opens:
The Iranian team won four medals in this year's International Olympiad in Informatics, hosted last week in Canada. Israel's team had to make do with just three.
And so it goes, as the article closes by mentioning this "disappointment" and a sidebar displays a picture of the Israeli team—carefully cropped to display the three team members who smiled the least—with the heading "Slightly disappointed?" No other context for the performance is provided.

I shouldn't overgeneralize from a single article—especially one associated with an extremely popular but tabloid-y newspaper—but I have to say I found this hilarious. Iran has ten times the population of Israel. If you think that Iran is filled with poorly educated fundamentalists, maybe you'll be surprised by their performance in the contest. If you have a realistic view of the country, however, you'll realize that it is a complicated nation with a well-educated and cosmopolitan elite. In fact, Iran has ranked higher than Israel in the International Math Olympiad—the top high school mathematics contest in the world—in every year since 2001, when it ranked immediately below Israel.

Needless to say, Israel is a country with a lot of smart people, and it has contributed a disproportionate share of top academics in my own field of economics. But it's not so disproportionately brilliant that it can hope to outperform a far larger nation with an apparent interest in high-level high school academic contests. The fact that "Iran beats Israel" is considered the newsworthy spin for these events is just sad.

* By the way, go USA! And since two of the four team members have Chinese surnames, we should also probably be cheering the H-1B visa.

Tuesday, August 24, 2010

The mix of legal immigration

One oddity of the American immigration system is that comparatively few people are allowed to immigrate for work. The number of employment-based green cards available each year (140,000) is small compared with both the family-based green cards limited by quota (226,000) and the unlimited immediate relative category (~535,000 in 2009). Once you add the diversity visa and refugee and asylum admissions, employment accounted for only 12.7% of green cards issued in 2009. (Note that the 12.7% includes spouses and children included in an employment-based green card application.)

Compare this to Canada, where fully 64% of new permanent residents came as "economic immigrants" in 2008. In fact, Canada's share of immigrants selected in some way for their employability is higher than the employment-based share of immigration from any country that sent more than 1000 new permanent residents to the US in 2009. The ten highest shares:
  1. South Korea: 54.7%
  2. France: 45.9%
  3. United Kingdom: 44.7%
  4. Canada: 42.9%
  5. Netherlands: 41.8%
  6. South Africa: 41.2%
  7. Malaysia: 37.3%
  8. Israel: 36.2%
  9. Sweden: 35.4%
  10. India: 35.4%
India, which we'd stereotype as a nation that sends mostly high-tech workers, contributes a proportion of employment-based immigrants that is far less than what we see overall in Canada.

I'm not arguing that we should lower the quotas on family-based immigration. In fact, at the very least we should increase the 7% per-country cap that results in absurd waiting times for countries like Mexico and the Philippines. But even if you're an immigration skeptic, it's worth noting that even a dramatic increase in the number of employment-based green cards would have a relatively minor impact on the overall level of immigration, while offering many profound benefits.

(Data extracted from the Office of Immigration Statistics' Profiles on Legal Permanent Residents, by country of birth.)

Why do grocery store labels list unit price?

Whenever I go to the store, I'm overwhelmed by the pervasiveness of price discrimination. Slightly superior products that cost only a few cents extra to manufacture are sold for a dollar more; products cost twice as much as their competitors simply because they carry a brand name. The relationship between volume and unit price is inconsistent: on average products sold in bulk tend to be cheaper, but for individual product types this isn't always true.

Presumably the store's goal is to sort out the careful, price-sensitive shoppers—who pay attention to subtleties like unit price—from the careless customers who grab whatever looks good. But this strategy depends on making it difficult to compare products. If comparison is too easy, everyone will do it, and the benefits from charging inconsistent prices will evaporate. So why do labels tell me what the unit price is, making comparisons trivially easy? I don't even need to do division!
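The comparison the label spares shoppers really is a single division per item. With made-up shelf prices:

```python
# Hypothetical shelf items: (label, price in dollars, volume in grams).
items = [
    ("store brand, 500 g", 2.49, 500),
    ("name brand, 500 g",  3.19, 500),
    ("store brand, 150 g", 0.99, 150),
]

# The division the unit-price label does for you: price per 100 g.
for label, price, grams in items:
    print(f"{label}: ${100 * price / grams:.3f} per 100 g")

cheapest = min(items, key=lambda it: it[1] / it[2])
print("cheapest per gram:", cheapest[0])
```

Note that in this (invented) example the small package actually has the highest unit price, so the shopper who grabs the cheap-looking 150 g box pays a third more per gram.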

The answer isn't kind to the intelligence of the typical shopper. Apparently, in the absence of unit price labels, too few shoppers do the arithmetic to make this a viable price discrimination strategy. Most customers remain pooled in a single group, and the small minority that compares unit prices may be so good at cheap shopping that no store even wants to attract it.

With posted unit prices, on the other hand, a substantial fraction of shoppers is smart enough to make comparisons and select the cheapest item. Yet for price discrimination to make sense, a sizable chunk of shoppers must still be indifferent to unit price. Even when they see two identical goods sold in different volumes with a large disparity in unit price, they either don't pay attention or don't care.

This is particularly baffling when the low-volume good is cheaper (which happens more often than one would think). With certain perishable goods, or goods that many people need in only small quantities, you can imagine how shoppers might not want to pay for high-volume items with low unit price. But when the smaller volume is cheaper, this doesn't matter: the smaller volume gives you more freedom to select exactly the amount you want. In a few cases shoppers might still have a preference for the higher volume—if they really, really need 200 grams of some item and no more, they might prefer the 200 gram version to a significantly cheaper 150 gram version. Except for the most price-insensitive shoppers, however, I can't imagine this happening frequently enough to matter.

And that's the rub: in the end, the only way that this pricing strategy makes sense is if many shoppers are extremely bad at making low-level optimizations. I wonder about these people. Are they rich enough that it doesn't matter? Lazy (or especially averse to spending a few extra seconds shopping)? Do they not realize why the numbers in small print matter? Or am I just a crazy economist who thinks about these issues when most people don't really care?

Edit: As Michael Webster points out in comments, there are unit pricing laws that vary at the state level. Apparently there is a very strong one in Massachusetts that requires both unit and item pricing -- and since this post was inspired by a trip through a Massachusetts grocery store, the unit price law is a pretty big omission on my part. Many of the observations in this post are still curious -- since I've made them in North Carolina too, where there are no unit price requirements -- but regulation surely explains a great deal.

Monday, August 23, 2010

Where has fertility dropped the most?

According to the World Bank's World Development Indicators database, the world's average total fertility rate dropped 6.6 percent from 2000 to 2008, from 2.72 to 2.54 children per woman. Needless to say, this is one of the most important developments of our time. Around the world, fertility rates have changed much more quickly than expected, relieving us—at a global level, at least—of the fear of overpopulation.

What countries have experienced the largest individual drops in fertility? Among nations with populations above 5 million, they are:

  Country         2000    2008    % decline
  Saudi Arabia    4.20    3.12    -25.6
  El Salvador     2.93    2.32    -20.9

It's hard to draw specific conclusions from data that shows declines in fertility happening all around the world—there aren't many commonalities between Brazil and Cambodia, and all these countries are starting from very different levels of initial fertility. The impact of these changes, however, is almost impossible to overstate. If the data for Mexico is correct, its fertility is now roughly the same as the United States'! El Salvador, the second largest source of Central American immigrants, isn't far behind. The impoverished but rapidly improving Southeast Asian nations of Cambodia and Laos are beginning to see fertility rates consistent with sustained prosperity, while Brazil is cementing its status as a middle-income country with a rate as low as many developed nations'.

The only dim spot in these statistics is that a few nations still have so far to go. The Democratic Republic of the Congo, for instance, saw a drop in fertility from 6.91 to 6.03 over the same eight years—a decline of almost thirteen percent! Yet even at this blistering pace, Congo would need four and a half decades to fall to 3 children per woman. With luck, change in countries like Congo will accelerate, but sheer inertia makes a massive (and potentially crippling) rise in population inevitable.
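The back-of-envelope projection is easy to check, assuming the decline continues at a constant proportional rate:

```python
import math

# Congo, DR: total fertility rate 6.91 (2000) -> 6.03 (2008), per the figures above.
tfr_2000, tfr_2008, years = 6.91, 6.03, 8

# Assume the decline continues at the same constant proportional rate.
annual_factor = (tfr_2008 / tfr_2000) ** (1 / years)
years_to_3 = math.log(3 / tfr_2008) / math.log(annual_factor)

print(f"implied annual decline: {1 - annual_factor:.2%}")   # ~1.7% per year
print(f"years from 2008 to reach 3.0 children per woman: {years_to_3:.0f}")
```

At that pace Congo reaches 3 children per woman roughly four decades after 2008, in the neighborhood of the figure above.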

Sunday, August 22, 2010

The optimal level of ability sorting

Our system of higher education is designed heavily to achieve ability sorting. States and municipalities create institutions that cater to different academic backgrounds. Flagship state universities are intended to attract the best students, other state universities serve a wider swath of the student population, and community colleges attract unconventional students who often lack the preparation necessary to enter four-year universities right away. Universities are differentiated in other ways as well—for instance, in my home state of Oregon (as in many states), Oregon State University has engineering departments while the University of Oregon does not—but the general design almost always involves a hierarchy of institutions. And within institutions there are clear hierarchies as well.

In general, I think this is a good idea. My experience tells me that I am far more likely to learn when placed alongside people at a similar (or slightly higher) level. For an academically capable person, there is nothing more infuriating than being trapped in an environment with no real peers.

Needless to say, however, our current system of ability sorting is far from complete. There are some very smart (and very dumb) people almost everywhere. Students choose universities for financial or personal reasons rather than academic strength alone. This makes me wonder: what is the optimal level of ability sorting?

Many models will say that we should have perfect ability sorting. But regardless of whether perfect sorting would be desirable, it's clearly unrealistic: there will always be frictions and informational asymmetries that keep us from achieving it. A better question, then, is this: if some imperfection in ability sorting is inevitable, what is the optimal policy given that imperfection?

The intuitive answer is that we should come as close as possible to perfect ability sorting. But depending on our model, this isn't necessarily true at all; even if perfect sorting is the first-best solution, once imperfections exist it may be optimal for us to add additional noise to the sorting process.

To see why, suppose that there are two universities in the world, A and F, and two types of students, Good and Bad. In an ideal world, all the Good students attend university A while all the Bad students attend university F. Imperfections in the sorting process, however, mean that at best 1% of the students at university F will actually be Good. Now consider a policy that shifts students so that 2% of university F will be Good. Obviously, the students who moved from university A to university F will be worse off. The Good students who were already at university F, however, will be better off—they have a larger group of Good peers to learn from. It's entirely possible that the second effect will exceed the first. In other words, as long as university F has some Good students, it's possible that the benefits from providing a "critical mass" of Good students outweigh the harm to the additional Good students moved to university F.
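A tiny numeric sketch makes the point. The threshold payoff here is a made-up assumption (a Good student learns only with at least one Good peer), chosen purely to illustrate the critical-mass effect:

```python
# Toy "critical mass" payoff: a Good student gets payoff 1 only if
# there is at least one other Good student at the same university.
def payoff(good_peers):
    return 1 if good_peers >= 1 else 0

def total_welfare(good_at_F, total_good=200):
    # All Good students are at A or F; each student's peers exclude herself.
    good_at_A = total_good - good_at_F
    w_A = good_at_A * payoff(good_at_A - 1)
    w_F = good_at_F * payoff(good_at_F - 1)
    return w_A + w_F

print(total_welfare(1))  # the isolated Good student at F learns nothing -> 199
print(total_welfare(2))  # both Good students at F now have a peer -> 200
```

Moving the second Good student away from the "better" sorting outcome raises total welfare here, because the payoff function rewards having any Good peer at all more than it rewards piling everyone into one place. With a concave payoff instead, the result flips, so the conclusion is model-dependent, exactly as argued above.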

Broadly speaking, this is an example of how the second-best policy solution may involve deliberately moving away from the first-best solution.

The reality of child labor

I recently came across a 1997 report by UNICEF that listed "four myths about child labor". I found some of the discussion on myths three and four to be particularly illuminating:
The third myth is that most child labourers are at work in the sweatshops of industries exporting cheap goods to the stores of the rich world. Soccer balls made by children in Pakistan for use by children in industrialized countries may provide a compelling symbol, but in fact, only a very small proportion of all child workers are employed in export industries - probably less than 5 per cent. Most of the world’s child labourers actually are to be found in the informal sector - selling on the street, at work in agriculture or hidden away in houses — far from the reach of official labour inspectors and from media scrutiny.

Myth four is that the only way to make headway against child labour is for consumers and governments to apply pressure through sanctions and boycotts. While international commitment and pressure are important, boycotts and other sweeping measures can only affect export sectors, which are relatively small exploiters of child labour. Such measures are also blunt instruments with long-term consequences that can actually harm rather than help the children involved. UNICEF advocates a comprehensive strategy against hazardous child labour that supports and develops local initiatives and provides alternatives - notably compulsory primary education of high quality - for liberated children. (Emphasis mine.)
There's a tendency to view this issue as a morality play, with evil multinational corporations exploiting children for cheap labor. Yet as this report notes, most children actually work outside the formal sector: they hawk goods on the street, work long hours on a farm, or work in a small factory producing goods that will never come close to the Western market. In this world, heavy-handed attempts by advocates in rich countries to end child labor can easily be harmful, as they depress the export market and hence the nascent manufacturing base that is necessary to pull a country out of poverty.

When is no copyright optimal?

My last post carries an implicit question: when is it optimal to have no copyright law?

First, let's examine a (fallacious) argument for why copyright is always a good idea. It proceeds as follows: suppose that someone would release a work in the absence of copyright law. Then they can already release that work without copyright today; the law doesn't force them to protect it. Hence copyright can only be beneficial—it preserves the copyright-free content while incentivizing the creation of additional content.

The fallacy here, of course, is that sometimes people who would release a work even without copyright protection still prefer to use copyright when given the choice. If open source content is better for overall welfare than equivalent copyrighted content—a likely proposition—then if this group of people is big enough, copyright may be a net negative after all.

When might this happen? First we have to consider what induces people to create non-copyrighted content. The possibilities include:
  1. They are able to capture monetary returns from content even when it is legally unprotected. This is surely the case in the fashion industry, where a "first mover" benefits from setting a trend even if a horde of copycats swiftly follows.
  2. They derive non-monetary returns from content. Perhaps they just enjoy the process of creating content, or sharing it with the wider world. This blog is an example: I freely share my writing (even without ad revenue) because I enjoy swapping opinions with others and seeing my ideas discussed elsewhere.
My sense is that the abolition of copyright is only reasonable (although not necessarily optimal) when (1) predominates rather than (2). Under (1), where returns are monetary, it's likely that many content producers would still prefer to have exclusive rights to their content. Hence, with copyright, we'd see many producers switch from making copyright-free content to making similar copyrighted content—which, as I observed above, is necessary for no copyright to be socially optimal.

Under (2), however, the interest of the content producer is often already to spread content as freely and widely as possible. If I'm writing essays in a copyright-free world because I want to make my views more popular (or bolster my reputation), it doesn't make much sense for me to exercise copyright even when it's available.

It's still possible, of course, that someone with these interests would opt for copyright once it became possible. Maybe I'm writing a book that will be wildly popular and massively enhance my reputation regardless of whether I make it freely available. Then I would still write (for reputation purposes) in a world without copyright, but I'd choose to extract profits from my work once copyright law allowed me.

This seems much less likely, however, than the corresponding situation in (1). My tentative conclusion is that abolishing copyright only makes sense in fields where there is a large monetary incentive for copyright-free production. Am I missing anything?

Saturday, August 21, 2010

No copyright?

Via Tyler Cowen, an article in Der Spiegel outlines the hypothesis that Germany experienced rapid industrial expansion in the 19th century due to the absence of copyright law. Apparently, the publishing industry in Germany was far more vibrant and open to the masses than its counterpart in England, a difference that seems attributable to the legal disparity in copyright protection.

I'm no defender of current copyright law, but it's still important to understand that this historical episode won't necessarily generalize to the present day. The only reason German authors were able to earn any income at all was that reproduction carried costs: plagiarism was always a threat, but it wasn't free. Today, however, the costs of reproduction are rapidly approaching zero. Books wouldn't disappear if we eliminated copyright tomorrow, but the monetary incentives to become a successful author would vanish.

Friday, August 20, 2010

The era of recognized genius

When Vinay Deolalikar (falsely, it now appears) announced that he had proven that P does not equal NP—thereby purportedly solving the most important open problem in computer science—I was intrigued but skeptical. I was intrigued because Deolalikar's background is far from that of the typical crank who announces a "proof" of some longstanding conjecture: he has a master's degree from IIT Bombay and a Ph.D. from USC, along with a few publications in good math journals. His manuscript was typeset in LaTeX (an easy way to filter out most completely disreputable attempts), long, and clearly fluent in the relevant technical vocabulary.

But I was also skeptical. Why? First, most attempts at proving such an important problem turn out to be flawed, even if they initially seem credible. Second, I've noticed that in recent years, virtually all successful efforts at solving key mathematical puzzles have come from people whose exceptional brilliance was already recognized via standard channels.

Consider perhaps the most impressive mathematical feat of our era, Grigori Perelman's proof of the Poincare Conjecture. A great deal of media attention focused on Perelman's apparent eccentricity: he lived with his mother, declined the Fields Medal, and now even seems to be declining the million-dollar Millennium Prize. But while he fits the "lone genius" cliche, he was hardly an unrecognized lone genius. As a high school student, he received a perfect score at the International Mathematical Olympiad, an incredibly difficult feat accessible to only an elite few. He went on to receive a Ph.D. from what is now St. Petersburg State University—one of the top institutions in Russia—and held positions at elite American math departments like SUNY Stony Brook, NYU's Courant Institute and UC Berkeley. Apparently he was offered professorships at Stanford and Princeton after leaving UC Berkeley, but turned them down in favor of returning to the Steklov Institute in Russia.

In other words, Perelman made his way to the very top of the mathematics profession long before he vanquished the Poincare conjecture.

Or consider Andrew Wiles, whose proof of Fermat's Last Theorem in the 1990s was easily the most widely recognized mathematical accomplishment of the decade. When he proved the theorem, he was a professor at Princeton University, which is as good a gig in mathematics as one can imagine. He was an undergraduate at Oxford, obtained his Ph.D. at Cambridge, and held a professorship at Oxford in between stints at Princeton.

Compare these examples to some of the widely publicized false proofs from recent years. In 2006 we saw a flawed attempt at solving the Navier-Stokes equations (another of the Clay Mathematics Institute's "Millennium Problems") from Penelope Smith of Lehigh University. Smith was hardly a crank (which is why her attempt received so much attention), but her professional background didn't come close to that of Wiles or Perelman. She was an associate professor, never promoted to full professor despite almost three decades since her Ph.D., at a department in the lower half of the National Research Council rankings.

I don't mean to be elitist—I could never be a math professor anywhere. And my evidence is admittedly anecdotal. But I do see a compelling pattern here: more than ever before, the most important advances in mathematics come from people who were already at the top of their profession. The era of Swiss patent clerks making major contributions is over.

Is this because our current set of institutions is better at identifying talent early on? Because math at the research frontier has become so ungodly complicated that one needs to be plugged into the research elite to understand it? Or, on a related note, because ever more arcane mathematics requires more time to understand deeply, driving up the traditionally low median age of mathematical accomplishment and offering more time for our institutions to recognize geniuses before they make their greatest advances? All of the above?

I can't say.

Apples to apples in higher education

Via Ezra Klein, Michelle Singletary in the Washington Post points out that federal student loan repayment rates at nonprofit and public colleges aren't so hot themselves:
Only 36 percent of students at for-profit schools were paying down their student loans in 2009, according to an analysis of Education Department data by the Institute for College Access and Success, a nonprofit group whose mission is to help make higher education more affordable. At public colleges, 54 percent of borrowers were paying down the principal on their loans, compared with 56 percent of those from private, nonprofit schools. These are not great percentages, either.
I'd go even further. Given these numbers, it's not clear that for-profit schools are serving their students worse at all. The pool of students attending for-profit colleges is very different from the pool elsewhere—coming in, these students tend to be from less wealthy backgrounds and have weaker academic records. Being older on average, they may have family or other financial obligations that are rarely an issue for 22-year-old students graduating from conventional four-year universities. To really understand how these universities compare, we'd need to see how they perform conditional on admitting students with similar backgrounds. Since unobserved factors will always weigh heavily in both the choice to attend a certain college and performance afterward, it's impossible to do a perfect analysis, but I'd still like to see us give it our best shot.
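To make the selection problem concrete, here's a toy simulation in which every number is invented: for-profit and other schools serve students equally well by construction, but for-profit students enter with weaker backgrounds. The raw repayment gap looks damning; comparing students with similar backgrounds makes it mostly disappear.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Invented setup: 30% of borrowers attend for-profits, and those students
# come in with weaker academic/financial backgrounds (selection, not quality)
for_profit = rng.random(n) < 0.3
background = rng.normal(size=n) - 1.0 * for_profit

# Repayment depends only on background: schools are equally effective here
repaying = rng.random(n) < 1 / (1 + np.exp(-(0.2 + 0.8 * background)))

raw_gap = repaying[~for_profit].mean() - repaying[for_profit].mean()

# Compare only students with similar backgrounds and the gap mostly vanishes
similar = np.abs(background) < 0.25
cond_gap = (repaying[similar & ~for_profit].mean()
            - repaying[similar & for_profit].mean())

print(f"raw gap: {raw_gap:.2f}, conditional gap: {cond_gap:.2f}")
```

The crude windowing on `background` stands in for the serious matching or regression methods a real study would use, but the moral is the same: the raw comparison confounds school quality with student composition.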

That said, even if it turns out that for-profit universities do just as well as other ones given the same inputs, they shouldn't be immune to criticism. Sadly, some people may not have the academic background to make any college a worthwhile option, for-profit or not. If for-profit universities recruit such people much more intensively than their nonprofit or public counterparts do, they still may be causing net social harm, even if they train these students just as well as other schools would.

But "some students attending for-profit universities should attend other colleges instead" is very different from "some students attending for-profit universities just shouldn't be going to college," and it's important to clarify what our data actually implies.

Thursday, August 19, 2010

The silliness of the Coase Theorem

Easily the most famous result in law and economics, the Coase Theorem states that whenever an externality is possible and there are no transaction costs, two parties will bargain themselves to an efficient solution regardless of the initial assignment of property rights. In Coase's original example, a cattle-raiser's herd damages the crops of a nearby farmer, and Coase argues that even if the farmer has no legal right against damage from his neighbor's cattle, the two will arrive at the same efficient outcome that would have occurred in a world with farmer rights (albeit perhaps with a different distribution of wealth).

In a narrow sense, the Coase Theorem is correct. If the parties are bargaining about a single, static externality, the outcome will be "efficient" regardless of initial property rights (assuming that we exclude distributive concerns from our definition of efficiency). But this completely ignores the impact on incentives to create an externality.

Suppose that we inhabited a world where farmers had no property rights against encroachment by cattle. In this world, the first thing I'd do is start a company (preferably with a nefarious-sounding name like Multinational United) that raised cattle specially bred to ravage cropland. I would locate these cattle near a large number of farmers and announce that each of them had to pay me $100,000 to avoid being overrun by the herd. For most farmers, this would be worthwhile, and I'd make a bundle of money.

Of course, this would be inefficient: if any farmers didn't find it worthwhile to pay the $100,000, their farms would be destroyed. And even if we assumed that evil Matt had perfect information and could charge a special rate to each farmer to make sure that it was worthwhile for everyone to pay, this would be inefficient in a different way. Payments that varied according to ability to pay would effectively be a tax on productive assets, which in the long run would discourage the accumulation of those assets.

After I set up a lucrative business exploiting farmers, I would retire by moving into dense residential neighborhoods, buying houses, and blaring extremely loud music at different corners of each house until I was paid compensation by the nearest homeowner. Again, this would be monstrously inefficient. If I committed to charging $50,000 a year to everyone, some people might not be able to pay, and would be forced to suffer the hearing damage and general psychological trauma that comes from listening to my music. If I charged different rates according to accumulated wealth and income (making it so that everyone could pay), I'd be creating a large implicit tax on wealth and income. Either way, this is far from the efficient utopia envisioned by Coase—and it can all happen in a world without transaction costs!

If there was ever a case where you had to wonder whether some economists were actually confused impostors from the planet WZ-15, this would be it...

Wednesday, August 18, 2010

Why do we read classics?

I've expressed skepticism in the past that reading "classic" works should be an important part of education.

There are several reasons to be doubtful. First, today we have a much larger talent base than has ever existed in the past. It seems unlikely that many of the best philosophers were ancient Greeks when today we have literally a million times more people with the education and leisure to do philosophy. Second, regardless of the intergenerational distribution of talent, the human knowledge base tends to improve over time. Each generation of thinkers builds upon the insights of its predecessors, refining arguments and clarifying our understanding of the issues. Given this (almost) continual process of improvement, the chances that the best introduction to any topic was written centuries ago strike me as slim. Finally, even if the ideas in classic works are as good as those in any modern competitor, the style and approachability of modern works are almost always superior. Krugman on Ricardo is a far more pleasant read than Ricardo himself.

So why are the classics, at least in many subjects, still a mainstay of our educational system? I can think of several possible explanations, some charitable and others less so.

The Positive
  1. Even if some newer works have great ideas, centuries of deliberation have made us more confident about which specific older works contain good material. In general, we're bound not by a shortage of clever ideas but by a lack of time in which to assimilate them, and thus we want to concentrate our limited attention on books whose merit is clear.
  2. Older thinkers built up their ideas from first principles in a way that modern ones (for whom the first principles are second nature) almost never do. The attempts of an intelligent person—lacking knowledge of future developments—to lay out the core issues in a field provides a useful sense of perspective to all of us.
  3. Even if the classics are inferior at conveying specific knowledge or ideas, wrestling with them is a productive intellectual exercise in its own right. It's often more important that we develop abstract verbal and analytical skills than that we actually "learn" something, and classics that are bad at providing the latter may nevertheless be good at promoting the former.
  4. In certain fields, the seemingly overwhelming numerical advantage of the modern world may not really be so compelling. Even if we have 1000 times as many educated people as were alive several centuries ago, today our intellectual elites pursue a much broader set of fields, while in the past they concentrated in a few specific areas like philosophy.
  5. Classics carry inherently valuable historical or cultural context.
The Not-so-Positive
  1. We need a certain canon of material to provide a basis for common discussion, and classics are simply the current equilibrium in this coordination game. Classics may be inferior, but they offer a Schelling-style focal point that is not easily changed. This may explain why classics are so much more common in non-scientific fields; in science, there is an objective set of core material that provides a foundation for more advanced work, but in fuzzier subjects the lack of any objective core forces us to use personalities and books instead.
  2. Classics are a possibly wasteful signalling equilibrium, where budding intellectuals demonstrate their devotion to a field of study by combing through dense and unrewarding texts. (This argument is the evil twin of #3 above.)
  3. Focus on classics is a symptom of educational inertia. People who have been successful within a certain system are loath to change that system, and we're often stuck with whatever framework seemed optimal centuries ago.
In the end, there are definitely several good reasons to read classics—they should have some presence in our intellectual portfolio. I'm not convinced, however, that a move away from classics heralds irrevocably declining standards in the academy. In a world where many educated people don't understand basic statistics, spending hours trapped in the library with Aristotle doesn't strike me as a particularly useful pursuit.

Tuesday, August 10, 2010

The morality of randomization

Since my recent post on the importance of randomization, I've heard several people argue that randomized experiments in education are somehow unethical. I don't want to dismiss these concerns out of hand, but in most cases I think they reflect a muddled set of moral intuitions. It's easy to freak out at the idea of "experiments" where children are the subjects, even when we'd view a policy as perfectly moral when presented in a slightly different light.

For instance, suppose that we have limited resources for early childhood education, and we can only provide a certain program to a handful of at-risk children. Perhaps 1000 families apply when the program has a capacity of 500. How do we decide who to let in? Often we will use a lottery, specifically because it's viewed as the fairest and most ethical solution! In this case, my argument is simply that we should keep better data on the randomization, and record outcomes for both lottery winners and losers. The same holds for lotteries at oversubscribed charter schools, or assigning kids to kindergarten teachers. As Kevin Carey points out: "Given that teacher assignment often unfairly reflects parental pressure, periodic random assignment could be a net increase in fairness for students."

In most instances, you can reframe randomization in education as "allocating scarce resources in a way that doesn't reward parental connections", or "making sure that a program works before we open it to everybody", and it sounds great from an ethical perspective. It will be very sad if knee-jerk aversion to experiments keeps us in the dark about what's really effective in education.

Carbon taxes will not increase the price of gas by more than a dollar

Megan McArdle addresses the debate on whether a carbon tax will spur innovation. I'll save my commentary on this topic for a later, more comprehensive treatment; for now, I just want to reiterate that oil consumption in transportation is not a key area in limiting carbon emissions. Virtually all the low-hanging fruit is elsewhere, and although they may be valuable for other reasons, grand debates about the future of American transportation are mostly peripheral to the question of carbon.

Allow me to repeat some arithmetic I've done in the past. Consider a tax on carbon dioxide of $100 per ton. From the carbon intensity of coal, I calculate that this would increase the price of coal electricity by 10.5 cents per kilowatt hour—an enormous increase that would make it uneconomical compared with combined-cycle natural gas, and probably solar thermal (given more investment), wind, and nuclear power as well. Such a tax would be more than five times the current spot price on the European Climate Exchange, and it would massively alter energy consumption in the United States. In other words, $100 per ton of carbon dioxide is essentially a dream scenario for those of us who are concerned about climate change.

What would this mean for gas prices? Using the carbon intensity of gasoline, we can calculate an answer: 97 cents per gallon. This isn't nothing, but it's far less than the gas taxes we see in Europe, and it's even smaller than fluctuations in gas prices that we've experienced in the last half-decade. The large shifts in energy consumption that will make the most difference in our carbon output will happen elsewhere.
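The arithmetic here is easy to reproduce. A quick sketch, where the carbon intensities are the approximate values implicit in my figures (roughly 1.05 kg of CO2 per kWh of coal electricity and 9.7 kg of CO2 per gallon of gasoline; published estimates vary somewhat):

```python
tax_per_tonne = 100.0                # dollars per metric ton of CO2
tax_per_kg = tax_per_tonne / 1000.0  # dollars per kg of CO2

coal_intensity = 1.05      # kg CO2 per kWh of coal electricity (approximate)
gasoline_intensity = 9.7   # kg CO2 per gallon of gasoline (approximate)

coal_surcharge = tax_per_kg * coal_intensity      # dollars per kWh
gas_surcharge = tax_per_kg * gasoline_intensity   # dollars per gallon

print(f"coal electricity: {100 * coal_surcharge:.1f} cents/kWh")
print(f"gasoline: {100 * gas_surcharge:.0f} cents/gallon")
```

The asymmetry falls out immediately: a surcharge that would upend the economics of coal-fired electricity barely moves gasoline prices relative to the swings drivers already absorb.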

The incredible children of the H-1B visa

Since Charles Schumer is spreading malicious nonsense about "chop shops" using H-1B visas, I think this is an appropriate time to revisit the fact that the H-1B visa produces an incomprehensibly large fraction of America's young math and science superstars.

Stuart Anderson tallied the numbers several years ago in a report appropriately titled The Multiplier Effect. Key quote (emphasis mine):
Seven of the top 10 award winners at the 2004 Intel Science Talent Search were immigrants or their children. (In 2003, three of the top four awardees were foreign-born.) In fact, in the 2004 Intel Science Talent Search, more children (18) have parents who entered the country on H-1B (professional) visas than parents born in the United States (16). To place this finding in perspective, note that new H-1B visa holders each year represent less than 0.04 percent of the U.S. population, illustrating the substantial gain in human capital that the United States receives from the entry of these individuals and their offspring.
This isn't some fluke of the Intel contest. For the U.S. Math Olympiad—the country's premier mathematics competition for high school students—Anderson finds:
Among the top scorers of the 2004 U.S. Math Olympiad, 65 percent (13 of 20) were the children of immigrants. A remarkable 50 percent were born outside of the United States (10 of 20)...

More of the Math Olympiad top scorers have parents who received H-1B visas (10) than parents born in the United States (7).
At the highest levels of competition, the tiny fraction of high school students whose parents came to the United States via the H-1B visa do better than the entire population with American-born parents. This program is a really, really big deal.

Even supposing that high school contests overstate the fraction of children of immigrants (whose parents are perhaps more likely to push them at that stage) among the nation's scientific elite, the H-1B visa is responsible for an enormous percentage of our future top scientists. Maybe the bias is a factor of four—but even then this one small program alone will still give us 12%.
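To spell out that back-of-the-envelope adjustment (the only input is Anderson's Math Olympiad tally; the factor of four is my deliberately generous guess at the contest bias):

```python
# Share of top 2004 U.S. Math Olympiad scorers whose parents arrived on H-1Bs
h1b_share = 10 / 20  # 50%, from Anderson's tally

# Suppose contests overstate this group's share of the eventual scientific
# elite by a factor of four (an assumed, deliberately pessimistic correction)
bias_factor = 4
adjusted_share = h1b_share / bias_factor

print(f"adjusted share: {adjusted_share:.1%}")  # 12.5%
```

Even after quartering the headline number, a program admitting a few tens of thousands of workers a year accounts for roughly an eighth of the top of the distribution.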

Yet the number of H-1B visas each year remains capped at 65,000, and thanks to our bizarre system of per-country limits, the EB-2 green card (to which the H-1B often leads) is backlogged to 2006 for applicants from China and India. The EB-3 is backlogged to 2002! Why should fixing this antiquated system wait until "comprehensive immigration reform"?

Aid to states does not create moral hazard (unless it's done terribly)

As I've pointed out before, aid to states suffering from cyclical budget shortfalls is the best kind of stimulus: by directly saving jobs that would otherwise disappear, it is more certain to improve the employment picture than any other type of spending.

Yet many very intelligent commentators—even those who do support some kind of aid to states—seem to think that we run the risk of creating moral hazard by providing the money. Tyler Cowen, for instance, advocates helping states via federalizing Medicaid on the grounds that "as a one-time change it reduces the moral hazard problems from ongoing outright grants".

I don't understand how this is supposed to work. Granted, I can imagine how aid to states done badly would create moral hazard. If the amount each state receives depends in part on its financial difficulties, then you certainly get moral hazard; states have more of an incentive to run their finances into the ground in the hope of obtaining bailout money. But aid doesn't have to work that way to be effective. During a recession as deep and prolonged as this one, virtually every state is suffering from financial problems. At the margin, aid based on a simple metric like population would do a great deal of good without creating any moral hazard at all. And indeed, the tragically underfunded State Fiscal Stabilization Fund in the 2009 stimulus allocated funds to each state based almost entirely on population: 61% on school and college-aged population (5 to 24), and 39% on total population.
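For concreteness, a formula like the Stabilization Fund's is trivial to write down. In this sketch the 61/39 weighting is the real one described above, but the state names, populations, and fund size are all invented:

```python
# Hypothetical state data: (school/college-age population, total population)
states = {
    "A": (1_200_000, 5_000_000),
    "B": (300_000, 1_000_000),
    "C": (2_500_000, 9_000_000),
}

total_fund = 48_000_000_000  # hypothetical appropriation, in dollars

school_total = sum(school for school, _ in states.values())
pop_total = sum(pop for _, pop in states.values())

def share(school, pop):
    # 61% weighted by school/college-age population, 39% by total population
    return 0.61 * school / school_total + 0.39 * pop / pop_total

allocation = {name: total_fund * share(school, pop)
              for name, (school, pop) in states.items()}
print(allocation)
```

Nothing in this rule rewards fiscal mismanagement: a state's check depends only on who lives there, not on the size of the hole in its budget.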

You might argue that since the decision to rescue states at all is a result of poor budgetary planning by the states, there are still some perverse incentives implicit in aid. Remember, however, that providing aid to states is a national decision. A single state's irresponsibility only improves its prospects to the extent that it makes some difference in the overall level of state budgetary problems in the US. Most states are too small for this to be relevant at all. Perhaps it's a minimal issue for California or Texas, but even then it could be eliminated entirely by including a small penalty for having a higher deficit than other states. (This would, of course, diminish the efficiency of the policy a little, as states that need aid the most would receive slightly less.) It could also be eliminated by making aid automatic in future recessions, conditional on a certain level of unemployment or decline in GDP.

Perhaps the possibility that federal aid will render stabilization funds unnecessary makes states less likely to accumulate them. Note, however, that if federal aid were automatic (or even highly likely) in bad economic times, this wouldn't be an issue; federal transfers would lower the need for stabilization funds, making smaller funds efficient. Inefficiency only arises when the federal response to recessions is inconsistent—in other words, the problem is policy uncertainty, not moral hazard.

Moreover, the notion that it is politically feasible for states to amass large stabilization funds strikes me as deeply naive. I'm not very old, but I've already seen the same story again and again: during boom times, every governor cuts taxes and increases spending, appearing to be a genius in the process. Especially with term limits, there just isn't much of an incentive to build an expensive stabilization fund that might have benefits down the road—not when compared to the outsize political returns from tax cuts or benefit hikes. I've rarely seen voters display much interest in holding state politicians accountable for solvency in the distant future.

In the end, the only model where I can imagine countercyclical aid to states causing adverse consequences is not an economic model—rather, it is a model of a dysfunctional political process, where states tend to accumulate wasteful spending that can only be purged through a budget crisis. The problem with this hypothesis is that it doesn't seem to match the real world. Maybe a little fiscal pain prods legislators to cut the fat from their budgets (and as long as aid doesn't balance the budget for every state, it will deliver that pain in the most egregious cases). Today, however, we're seeing cuts like massive teacher layoffs. That doesn't seem like "fat" to me, and it is tough to imagine why a recession should change the returns to a long-term investment like public education.

Meanwhile, massive cuts in state and local government jobs are the worst possible response to a lingering recession. I can't see any argument that justifies continued inaction.

Saturday, August 07, 2010

Why randomization is so important

Kevin Carey calls for randomized trials in education:
The few randomized control trials that exist continue to be enormously influential–I recently heard Nobel prize-winning economist James Heckman give a fascinating presentation on his recent analysis of results from the Perry Preschool project, which involved 128 students (64 treatment, 64 control) from Ypsilanti who were randomly assigned to preschool 48 years ago. What if future Heckmans had a thousand times as many data sets from which to choose? Given that teacher assignment often unfairly reflects parental pressure, periodic random assignment could be a net increase in fairness for students.
Randomization can't answer every important question. Economists trying to understand the impact of a large-scale merger in the airline industry, for instance, can't randomly allow and disallow the merger 500 times each to see what happens. We only have one economy. But when it does apply, randomization is an incomparably useful empirical technique. Nothing else really comes close.

Suppose you want to answer a basic question in education policy, like the effect of class size on student performance. First you look at a simple correlation—how do students in small classes score compared with students in large classes? The problems with this strategy, however, quickly become apparent. What if students with academic disabilities tend to be assigned to smaller classes, dragging down the scores in that category? Or, alternatively, what if students with the most motivated parents (who will tend to be more successful anyway) squeeze into the smallest classes?

More generally, do you look at class size variation within schools, across schools, or both? If it's the former, you must contend with the fact that within-school variation in class size is quite small, and whatever differences in achievement are actually caused by the variation in class size will surely be swamped by the effects of whatever rule (say, the "whiny parent" rule) is used to assign students to those classes.

If it's the latter, you're faced with the enormous empirical challenge of separating the actual effect of average class size from all the confounding factors that are associated with it—type of school, level of resources, student population, general academic philosophy, and so on. If you're extremely, extremely smart and lucky, you might find that a tradition dating back to a 12th century rabbinic scholar induces an arbitrary discontinuity in class size with respect to school size, which can be used to identify the effect of class size on student performance. More likely, however, you'll be forced to resort to running an ugly regression where you attempt to "control" for other factors to isolate the impact of class size. (This is the methodology that produces the famously erratic epidemiological "studies" that produce breathless news stories about the connection between cauliflower and lung cancer.) The problem, of course, is that you can't control for everything; correlation with unobserved variables will inevitably distort your results. Meanwhile, in adding controls to your regression model you make strong assumptions about the functional form. If you assume linearity when it isn't appropriate, your results may be completely wrong—but it's very difficult to know whether this is happening or not.

With rare exceptions, non-randomized evidence on questions like class size sucks.

Randomization, meanwhile, offers extraordinary credibility. Worried that class size will correlate with other important determinants of student performance? Worry not—randomization makes systematic correlation impossible, and to the extent that it happens by chance in finite samples, it's governed by the well-trod principles of basic statistics. Even if cagey parents convince school officials to switch their children to another class, as long as you retain data from the original randomization you can use it as an "instrument" for class assignment and recover a meaningful estimate.
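A small simulation makes the contrast vivid. All the numbers here are made up: I assume a true small-class effect of 2 points, plus an unobserved "parental motivation" variable that both shrinks class size and raises scores.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
motivation = rng.normal(size=n)  # unobserved confounder
TRUE_EFFECT = 2.0                # assumed benefit of a small class

def score(small_class):
    # Test scores: class-size effect, a large motivation effect, and noise
    return TRUE_EFFECT * small_class + 3.0 * motivation + rng.normal(size=n)

# "Whiny parent" assignment: motivated parents get their kids into small classes
small_obs = motivation + rng.normal(size=n) > 0
scores_obs = score(small_obs)
naive_est = scores_obs[small_obs].mean() - scores_obs[~small_obs].mean()

# Randomized assignment: class size is independent of motivation by construction
small_rand = rng.random(n) < 0.5
scores_rand = score(small_rand)
rand_est = scores_rand[small_rand].mean() - scores_rand[~small_rand].mean()

print(f"naive estimate: {naive_est:.2f}")       # far above the true effect
print(f"randomized estimate: {rand_est:.2f}")   # close to the true effect
```

The naive comparison attributes most of the motivation effect to class size; the randomized comparison recovers the truth without needing to measure motivation at all, which is exactly the point.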

I've written before about the dangers of extrapolating from randomized experiments that don't resemble reality. But education is one of the fortunate cases where randomization is a perfectly appropriate way to gather evidence—it's best suited for understanding the effects of a "treatment", and public education is really just a long and expensive treatment. The fact that a nation of over 300 million people has no better evidence about the benefits of preschool for at-risk children than an experiment with 128 subjects, 48 years ago, is outrageous.