Idea of the day: forget acceptance rates

Acceptance rate is often perceived as an important indicator of a university’s quality.  In the U.S. News and World Report’s college rankings, acceptance rate determines 10% of the student selectivity score for a university, which is itself 15% of the total score.  U.S. News also publishes a list of colleges with the lowest acceptance rates.  The top three spots were taken by the Curtis Institute of Music (4.0%), Jarvis Christian College (4.5%), and Rust College (7.6%), respectively.

The fact that you probably have never heard of these schools illustrates my point perfectly.  Comparing colleges based on acceptance rates is meaningless at best, and counterproductive at worst.  Although schools such as Harvard (7.9%), Yale (8.6%), and Stanford (9.5%) also make the top of the “lowest acceptance rate” list, they share spots with schools such as College of the Ozarks (11.7%) and Alice Lloyd College (10.5%).  Perhaps College of the Ozarks and Rust College are fine institutions as well, but who could argue seriously that they are comparable to Princeton (9.9%) and Columbia (10.0%)?  What would such a comparison even mean?

Acceptance rate is nothing more than the number of students a school admits divided by the number of applications it receives.  A low acceptance rate might enhance an institution’s image of exclusivity and prestige, but it says nothing about the substantive quality of its faculty or student body, which can be, should be, and already is measured by other, more objective criteria in the rankings, such as test scores, retention rates, and peer assessment of reputation and academic quality.

Moreover, rewarding low acceptance rates creates a perverse incentive for a college to encourage applications from students who are not suited for the institution while limiting the number of students it admits, in order to artificially depress its acceptance rate and gain a spot or two in the rankings.  If you think that 1.5% of the total score does not matter, keep in mind that colleges in the rankings are often separated by a single point.  And that is after the scores are rescaled (recalibrated to a scale of 100 based on the highest score received), which means that the original scores are separated by even smaller margins.  For schools at the margin, it may very well be worthwhile to do exactly what I just described.
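
To make the arithmetic concrete, here is a minimal sketch in Python with hypothetical numbers (they are not real admissions figures): the same entering class looks considerably more “selective” once a school drums up extra applications, even though acceptance rate drives only 1.5% of the total ranking score.

```python
# A minimal sketch with hypothetical numbers (not real admissions data),
# illustrating two points from above: how easily acceptance rate can be
# pushed down by soliciting more applications, and how small a share of
# the overall ranking score it actually drives.

def acceptance_rate(admitted, applicants):
    # Acceptance rate: students admitted divided by applications received.
    return admitted / applicants

admitted = 2_000  # the hypothetical school admits the same class either way

baseline = acceptance_rate(admitted, applicants=20_000)
inflated_pool = acceptance_rate(admitted, applicants=28_000)
print(f"baseline rate: {baseline:.1%}")                           # 10.0%
print(f"after recruiting more applicants: {inflated_pool:.1%}")   # ~7.1%

# Acceptance rate is 10% of the selectivity score, which is 15% of the
# total, so it accounts for 0.10 * 0.15 = 1.5% of the ranking score.
weight_in_total = 0.10 * 0.15
print(f"share of total ranking score: {weight_in_total:.1%}")     # 1.5%
```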

More troubling, however, is the whole notion that exclusivity is a good thing.  Academic quality should be the real objective here.  Why should exclusivity matter?  In an excellent essay in the Chronicle of Higher Education last week, Kevin Carey faulted Harvard for not using its immense endowment, which grew by nearly $32 billion in less than two decades before a spectacular loss of $10 billion last year, to educate more undergraduates.  (In both his original essay and his follow-up blog post, Carey considers counterarguments, such as Harvard’s recent increases in financial aid for undergraduates and the need to keep class sizes small, and rejects them as justifications for not expanding the undergraduate class, so I won’t go into that here.)  It is indisputable that many more extremely qualified applicants apply to Harvard (or any of the elite colleges and universities) than there are spots for them.  Shouldn’t a spectacular expansion of resources mean that a top college also expands its capacity to fulfill its basic educational mission, and therefore increases its acceptance rate?

And to make that possible, let’s start by discarding “acceptance rate” as a measure of a school’s quality.