I recently came across a commentary on the SAT written several years ago by a Rutgers English professor. His thesis is plain: the SAT, or at least the SAT verbal section, is not the meaningless "test on test-taking" that its opponents caricature. Instead, Professor Dowling claims, the exam effectively identifies the verbal competency that is so important for successful college work.
First, I should note that I'm not an entirely unbiased observer here. I placed the stamp on my last college application just over six months ago, and I took the SAT six months before that. It's fresh in my memory. My egoistic subconscious will want to claim, since I scored a 2400, that the SAT is indeed a glorious, inerrant measure of intellectual promise. It isn't.
As I mentioned, Dowling focused on the old verbal section of the SAT. This was fortunate. The verbal section, and the "critical reading" section that replaced it, is probably the most effective part of the SAT. It resists attempts at simplification. While a prospective examinee can theoretically prepare for almost any problem "type" on the math or writing sections, the variety of passages and vocabulary on the verbal section makes such fake mastery close to impossible. I'll admit that I don't have any hard data to support this, but my experience helping others prepare for the test leaves me with that impression.
Yet as I just indicated, the math and writing sections are quite different from the verbal. For instance, the SAT is fine for students without strong interest (or talent) in mathematics. It takes a reasonably accurate snapshot of their ability, which admission officers may find useful. But these students, in general, are not those who are applying to the most selective universities, for whom the SAT is most important. Indeed, at the high end of the math SAT scale, predictive power is hazy at best.
For high-scoring students, the questions become so easy that the main challenge isn't solving them. It's avoiding trivial arithmetic errors. Since so many students find the math questions so easy, the "scaling" process is brutal, turning these trivial arithmetic errors into apparently meaningful differences in score. Make three little miscalculations on an endless exam? You would have scored 800; now you're lucky to have a 730.
The writing section possesses its own set of flaws. Just as many academics proclaim, the SAT essay is a farce that reinforces formulaic writing with claims to a "standardized" scoring process. I suspect that the five-paragraph essay has destroyed more writing talent than any other scholastic artifice—as you can probably tell, I still haven't recovered (a double tragedy, since I never had much talent). In tenth grade, I tried to write an essay for history class in a different style. It sought to imitate the clever organization I saw in real essays—bits from the Atlantic and Harper's and the New Republic. What happened?
Smack! B-minus. Back to the five-paragraph shell.
I should mention that the essay is only one-third of the writing score. The rest consists of multiple-choice questions. This seems paradoxical, reflecting some absurd attempt to impose the SAT philosophy: "How will we measure writing? Well, isn't it obvious? Bubble sheets!"
But I suspect that it's actually more effective than the essay. My gripe with the multiple choice section actually stems from a particular variety of question: "identifying the sentence error." Each problem is a sentence: four parts of the sentence are underlined, labeled "A," "B," "C" and "D". A "no error" option follows, corresponding to "E." If an underlined portion contains a grammatical error, students must bubble the corresponding letter. If none do, they bubble "E."
Maybe it's a bit pedantic, but it sounds innocuous, right? Not really. The other parts of the writing section call upon students to improve sentences or paragraphs. They must consider both style and grammar to find the best option, and usually it's not very difficult. But the work of "identifying sentence errors" requires that students set aside any sense of fluid writing and focus simply on identifying technical errors. Now for the obvious question: is there any circumstance, outside of testing, where this is actually useful? Where the fantastically bad phrasing in a sentence doesn't matter but the exact use of pronouns does?
The SAT, needless to say, is not a perfect test. It's probably a decent one—I'll leave that open to argument. But as the established test, it plays a vital role in contemporary education. I want to make several additional points.
First, there is precious little evidence that the SAT makes admission to the most elite schools (Ivy Leagues, MIT, Stanford, etc.) more difficult for disadvantaged students. It might even help them. Consider: according to a report by the Century Foundation, 15.3% of Caltech's students are Pell Grant recipients, versus 6.8% for Harvard, 7.4% for Princeton, and 11.7% for Stanford. Caltech also happens to have the most test-focused admissions policy in the country, which results in an average SAT that is usually around 1510. That's higher than Harvard—in fact, higher than any school. How, then, does Caltech admit so many more disadvantaged students?
There are several possible explanations, but my best guess is that the "other factors" considered by elite schools are often quite discriminatory. Harvard, for instance, seeks out high-level involvement in extracurricular activities: the state champion violinists, the debaters who win regional contests, etc. But just who are these students? How does a violin virtuoso start? Private lessons—and does anything scream "affluent middle class" more than expensive lessons at a very early age? How do debaters become so successful? Well, to start, they usually have strong programs at their high schools. These aren't found at your inner-city schools, but they're in strong supply at private academies.
Let's be honest: SAT tutoring isn't the only advantage possessed by rich students. And as someone who never received any help for the test, I'm frankly annoyed at how it's fingered as a mere pawn of wealthy elites. Yes, the SAT can be "gamed" to some extent, but I suspect the system would be far more vulnerable to free-spending parents without it. I want better testing, not conspiratorial neo-Marxist rhetoric about testing in general.
My second point is directed toward the ludicrous use of "score ranges," which plague standardized testing in general. The initial idea isn't so bad: a test will inevitably be imprecise in its measurements, and a "score range" provided by the testing agency can illustrate the degree of this variance.
But many universities then decide to incorporate "score ranges" into their admissions procedures. I've heard about this with MIT: they have a "750-800" range, a "700-740" range, and so on. Supposedly, this recognizes that minor differences in score don't mean much. But in effect it arbitrarily ignores all incremental differences in score except one (740 to 750), which is blown beyond reasonable proportion. This doesn't actually solve anything; it just sets inefficient cutoff points that distort results and preserve the very weakness they seek to eliminate.
Third, this discussion shouldn't just be about the SAT. We already have tests that are arguably more effective, and we should work to create tests that are even better. Fairtest.org, the leading opponent of standardized tests, has a fact sheet comparing the SAT I, SAT II, and ACT. You would expect the sheet, entitled "Different Tests, Same Flaws," to provide evidence selected for its damning rejection of all three tests. Yet a validity study cited in the SAT II section, from the University of California system—an ideal environment thanks to its relatively wide range of students—actually shows that SAT II results had more predictive power than high school GPA. That's pretty remarkable, given Fairtest's blithe assertions that all standardized tests are worthless, soul-destroying shells.
Soul-destroying? Somewhat. Worthless? No. Further evidence of an elite conspiracy to keep poor students out of top colleges? I don't think so.
The main problem with SAT IIs (and AP exams) is that their subject-oriented character leads to potential unfairness. Students at stronger schools, the argument asserts, are far better prepared to handle the tests. This is probably true, but we can work to alleviate this with some imaginative thinking: create a system that automatically informs colleges of the average scores on each test at a particular high school. Why not?
Finally, I want to emphasize the inability of these tests to deal with truly extraordinary talent in a particular area. Even if the SAT perfectly measured ability in each section, we would still have 2200s with far more promise than 2400s. Why? The 2200 might be an extraordinary writer with middling math skills; 800s on the critical reading and writing sections wouldn't begin to describe her brilliance. And the SAT certainly doesn't perfectly measure ability—there are enormous inherent limitations.
I realize that all this writing may sound incoherent: one minute I'm defending standardized tests and the next I'm ridiculing them. But I do have a consistent philosophy, which I'd describe as "realist." Yes, the SAT has flaws; yes, it can and should be better. Yes, there are broad problems with testing in general. My point is that we can't view these issues in isolation. It's not enough to criticize the system—you have to look at the alternatives. The SAT came about as a way to loosen ignorant WASPs' stranglehold on the Ivy League. It hasn't succeeded fully, but we can't just dismiss it.