Not quite post-racial at the Chicago public schools

Has Chicago, the city that gave us Obama, arrived at a post-racial era in its public school system?  Last week, the New York Times reported that the city’s public schools have decided to use student socio-economic profiles rather than race to assign students to schools.

Those who believe that race is merely a clumsy and inaccurate proxy for socio-economic status will surely welcome the change as long overdue.  For them, removing race as a factor would allow Chicago schools to deal directly with the true underlying concern of school integration — combating the devastating effect poverty has on the education of our children.  It would be a welcome first step toward moving beyond our fixation on race.  The result would be more equitable and accurate as well.

The reality in Chicago, however, is far more complicated.  For one thing, the Times article makes it clear that Chicago officials are implementing the reform reluctantly.  They are doing so only in response to the Supreme Court’s 2007 decision that prohibited school districts in Seattle and Louisville, Kentucky, from using race as a factor in school assignments.

More importantly, the objective of using these types of criteria to sort and assign students remains peculiarly fixated on race.  The goal that school officials say they hope to achieve, and the standard by which they evaluate their success, is racial integration, not socio-economic parity.  As a result, socio-economic profiling is perceived and spoken of as a second-best solution, a crude proxy for race.

Unfortunately, if racial integration is the objective, then the Chicago policy is likely to fail.  San Francisco, which has been using socio-economic factors instead of race in school assignment for the past few years, has seen less racial integration in its schools since adopting the new policy.  Denver and Charlotte have reported similar trends.

Part of the problem may be technical.  Defining and measuring socio-economic status turns out to be more elusive than defining race.  According to the New York Times, Chicago will be using a variety of factors that evaluate the student’s neighborhood — “income, education levels, single-parent households, owner-occupied homes and the use of language other than English as the primary tongue” — in placing students in selective-enrollment schools.

Using neighborhood characteristics as a proxy for socio-economic status may be just as inaccurate as using race as a proxy.  The system is also more easily gamed, since the calculus depends on assumptions about where people choose to live and on families reporting their actual addresses.  San Francisco, which uses a similar system, is reevaluating the effectiveness of those factors and considering additional ones, such as whether the student attended pre-school.

In short, if Chicago’s true objective is more racial integration, it is likely to be sorely disappointed.  None of these criticisms, however, touches the largest problem, one that neither method of school integration can address: a paucity of middle- or upper-middle-class white students.  Students who attend urban, inner-city schools are overwhelmingly minorities and poor.  In Chicago, only 9% of students are white, while 45% are African-American and 41% Latino.  According to the school district website, 85% of the public school students come from a “low-income family.”

And Chicago is not unique.  Seventy percent of Denver’s students are Latino or African-American, and roughly the same percentage are low-income students eligible for the federal free lunch program.  In San Francisco’s school district, nearly one-third of the students are immigrant “English language learners,” and white students make up only 10% of the student population.  More than half of the students are eligible for the free lunch program.

Integration is only meaningful and sensible when there are diverse groups to integrate.  The Chicago officials themselves acknowledge the absurdity of trying to “integrate” a district where the vast majority of students are uniformly low-income and non-white.  Short of busing these students to wealthy suburbs, with whom is Chicago supposed to integrate these children?

Rather than achieving the ideal of running schools where race does not matter, Chicago’s new policy shows us that race is still an issue that is very much front and center in people’s minds — and at the same time, it is an issue that is beside the point.


Religious schools and church-state relations

As is well known, the First Amendment to the U.S. Constitution provides, among other things, that “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof”.  This is usually understood to mean, in Jefferson’s words, that there should be a “wall of separation” between church and state.  Of course, Jefferson meant that the wall would apply only to the national government and not to the states, several of which did have established religions (Massachusetts, Connecticut, New Hampshire, and Maryland, for example).  The practice of maintaining state-established religions was discontinued over time, however, and definitively died out with the passage of the Fourteenth Amendment.

Today, however, it can’t be said that there is a “wall of separation” between religion and government.  The Supreme Court has allowed a few cracks in that wall.  Here are just three examples.  In Everson v. Board of Education, the Court held that reimbursing the cost of transportation, even for students of private religious schools, does not violate the Establishment Clause.  In Mitchell v. Helms, the Court ruled that disbursing federal funds to local educational agencies that lend educational materials and equipment to public and private schools for secular, neutral, and non-ideological programs is constitutional as well, even though some of those materials go to Catholic schools.  Finally, in Agostini v. Felton, the Court held that public school teachers may instruct at religious schools, so long as the material is secular in nature.

In all these cases the Court has insisted that there be no excessive entanglement between government and religion, and, as long as this condition is met (along with other requirements), the allocation of funds and resources to religious schools is constitutional.  The flip side of this separation, of course, is that government doesn’t have a say over the management of these schools.

Of course, this doesn’t have to be the case.  One can ask why, as long as federal (or state) funds are given to religious schools, government shouldn’t have more of a say in how those schools are run.  If, as Greta suggested, education should be considered a basic right, perhaps we should be more inclined to increase state regulation of private religious schools.  The problem is that this type of involvement can create exactly the difficulties the Framers sought to avoid.

Consider a recent British case.  A 12-year-old boy, the son of a Jewish father and a Jewish convert mother, applied to the Jewish Free School.  The school, like other parochial institutions, is partially funded by the British government, and yet British law allows it to decide admissions based on criteria set by a designated religious authority.  Under those criteria the boy was denied admission: according to the school, his mother did not undergo an Orthodox Jewish conversion but a progressive one, so under Orthodox rules the boy is not considered a Jew.

The parents sued, and though they lost at first, the appeals court reversed, holding that the test of whether someone is Jewish — whether one’s mother is Jewish — is discriminatory, no matter what the rationale.  The court reasoned that the school was not applying a religious distinction at all: because its decision rested on the status of the child’s mother, it was an ethnic test, which is illegal.  In the court’s words, “The requirement that if a pupil is to qualify for admission his mother must be Jewish, whether by descent or conversion, is a test of ethnicity which contravenes the Race Relations Act”.  It concluded that admissions criteria must depend not on family ties but on “faith, however defined”.

The ruling, which has been appealed to the Supreme Court, has rattled the Jewish community in Britain.  The case is problematic on many grounds.  We can talk about the usual problem of judicial overreach, but more important is the problem of a secular authority attempting to determine something that is clearly the purview of religious doctrine.  Of course, the rationale for intervening is the fact of public financing.  If public monies are used, why shouldn’t the larger community have a say in who gets admitted?

The problem, I think, lies with this dichotomy, the either-or thinking that once there is some state involvement, everything becomes fair game.  To be sure, I don’t know what the appropriate balance is, though my sense is that, at least politically, it would have been better for the court not to intervene and to let this vexing issue be resolved within the Jewish community.  Legally, it’s hard for me to see why this is an “ethnic” issue and not also, or mainly, a “religious” one.

Be that as it may, my point is that the U.S. model is not the only one available to us.  At present, there is very little regulation of how religious schools are managed or of the content they teach.  But with the increased allocation of public funds, more regulation might be expected, and that might lead us to grapple with far more complicated questions than books and busing.  Increased regulation necessarily means making decisions about religious dogma in the name of constitutional principles.  For now, that opens up more problems than it purports to solve.

Idea of the day: ditch standardized tests

Why do colleges and universities require millions of students each year to take standardized tests as part of their college applications?  The conventional rationale goes something like this: there is a vast difference in quality among the nation’s high schools, so GPA alone may not accurately reflect a student’s true ability.  After all, an A in English at Phillips Exeter is not the same as an A at East Memphis Public High, but a 2400 on the SAT is the same everywhere.  Standardized tests therefore put students on the same evaluative plane, and their scores are more useful than GPAs for gauging a student’s academic aptitude.

Sounds good, except that it’s not really true.  Critics have long charged that flaws and biases in these tests make them bad predictors of college success.  The latest and most important evidence comes from the new book Crossing the Finish Line.  In a brief review of the book, Chad Aldeman notes that its data show that standardized test scores have little power to predict college graduation rates.  High school GPAs are “three to five times more important” than SAT or ACT scores in predicting whether a student will graduate from college.  Moreover, when high school quality is accounted for, the predictive power of the SAT and ACT disappears entirely and even turns negative.

What should we make of this data?  If the data shows what it purports to show, then a major — perhaps the only — rationale for requiring and using standardized test scores in college admissions has just been dealt a fatal blow.

And on the other side of the ledger, there is a host of possible negative effects of using standardized tests.  Critics have accused the tests of being biased against low-income students in favor of more affluent ones, of spawning a $310-million-a-year test-prep industry, of encouraging rote preparation in schools, and of harboring racial and gender biases.  The sad truth is that even the supporters of standardized tests may favor them, in spite of all these flaws, only because “this is the only thing we have.”

Is it really the only thing we have?  If we want to predict student performance and admit only those who are likely to succeed in college, colleges could simply look at objective measures of high school quality and use them in conjunction with student grades to determine how much a given GPA should be weighted against other factors and other candidates.
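To make that concrete, here is a minimal sketch of what such a weighting might look like.  This is purely my own illustration, not a method proposed in the book or used by any college: the quality index and the blending constants are invented, and a real admissions model would be far richer.

```python
# Toy sketch: weighting a GPA by an objective measure of high school
# quality.  The 0.0-1.0 quality index and the blending constants are
# invented for illustration only.
def adjusted_gpa(gpa: float, school_quality: float) -> float:
    """Blend a raw GPA with a school-quality index, capped at 4.0."""
    # A GPA earned at a stronger school counts for somewhat more,
    # but never exceeds the 4.0 ceiling.
    return min(4.0, gpa * (0.8 + 0.4 * school_quality))

print(adjusted_gpa(3.6, 0.9))  # strong school -> 4.0 (capped)
print(adjusted_gpa(3.6, 0.2))  # weaker school -> about 3.17
```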

And if we wish to reward the students who succeeded in spite of adversity, we should use the same objective measure of high school quality and give students who did well in the worst schools extra points in their applications.

This brings me to another concern I have with standardized test scores.  The scores, as meaningless and unpredictive as they might be, give ammunition to those who accuse affirmative action programs of being unfair to meritorious applicants who lose out to “less qualified” minorities.  The scores are often presented as a neutral and quantifiable indicator of how much less qualified the minority candidates are.

The plaintiff in Gratz v. Bollinger, for example, attacked the University of Michigan for awarding race 20 points on a 150-point scale while awarding only 12 points for a perfect SAT score.  And as I have written here previously, a Princeton researcher found that being African-American is equivalent to having 230 extra points on a 1600-point SAT exam, and being Hispanic is equivalent to 185 points.

How closely a school’s admissions track standardized test scores has become a proxy for whether the school is “meritocratic.”  Disparities in test scores among different ethnic groups are a favorite example that opponents of affirmative action use to criticize it.  But the criticism has far less force if the test scores themselves are poor proxies for student ability.

A perfect score on the SAT is indeed the same everywhere.  But it turns out that the perfect score doesn’t tell us very much at all about what we want to know.

Asking for accountability in “legacy” admissions

It’s definitely constitutional for a university, even a public one, to give admissions preference to the children and relatives of its alumni — “legacies” in the vernacular.  But does it work?  Critics call the preference “affirmative action for the rich.”  Defenders claim that the practice is both legitimate and useful because it forges a stronger connection between alumni and the school and encourages alumni giving.  The latter argument seems to make sense, until you notice how few people cite actual figures when they make it.

How much does legacy preference actually increase alumni giving?  Earlier this year, an article published in the Santa Clara Law Review argued that, aside from suffering from various constitutional and statutory problems, giving preferences to legacies simply does not work the way its defenders say it does: the policy made no discernible difference in the level of alumni giving in the sample of “100 elite universities” studied.  Granted, the sample seems rather small, but I would love to see some data from the other side of the debate.

Three Princeton researchers found in a 2004 study that being a legacy at one of three elite universities is roughly equivalent to a 160-point boost on a 1600-point SAT exam.  This is quite an advantage, hence the outcry of unfairness.  But it does not necessarily mean that the net effect of the policy is negative.  The question is: point for dollar, what do the universities get in return?

The case for disclosing and studying such data is especially strong for tax-supported public universities (though even private universities are supported by federal funding and should be held to some degree of accountability).  As taxpayers, the public should demand accountability for this policy.  Only with more information can we sensibly debate whether “affirmative action for the rich” is worth it.

And if the data is inconclusive, as it very well might be, then we ought to err on the side of fairness, follow the lead of the University of California system, and jettison a policy with many downsides and no obvious benefits.

Idea of the day: forget acceptance rates

Acceptance rate is often perceived as an important indicator of a university’s quality.  In the U.S. News and World Report’s college rankings, acceptance rate determines 10% of a university’s student selectivity score, which in turn is 15% of the total score.  U.S. News also publishes a list of colleges with the lowest acceptance rates.  The top three spots were taken by the Curtis Institute of Music (4.0%), Jarvis Christian College (4.5%), and Rust College (7.6%).

The fact that you probably have never heard of these schools illustrates my point perfectly.  Comparing colleges based on acceptance rates is meaningless at best, and counterproductive at worst.  Although schools such as Harvard (7.9%), Yale (8.6%), and Stanford (9.5%) also make the top of the “lowest acceptance rate” list, they share spots with schools such as College of the Ozarks (11.7%) and Alice Lloyd College (10.5%).  Perhaps College of the Ozarks and Rust College are fine institutions as well, but who could argue seriously that they are comparable to Princeton (9.9%) and Columbia (10.0%)?  What would such a comparison even mean?

Acceptance rate is nothing more than the number of students a school admits divided by the number of applications it receives.  A low acceptance rate might enhance an institution’s image of exclusivity and prestige, but it says nothing about the substantive quality of its faculty or student body, which can be, should be, and already is measured by other, more objective criteria in the rankings, such as test scores, retention rates, and peer assessments of reputation and academic quality.

Moreover, rewarding low acceptance rates creates a perverse incentive for colleges to encourage applications from students who are not suited for the institution while limiting the number of admitted students, all to artificially lower the acceptance rate and gain a spot or two in the rankings.  If you think that 1.5% of the total score does not matter, keep in mind that colleges in the rankings are often separated by a single point.  And that is after the scores are rescaled (recalibrated to a scale of 100 based on the highest score received), which means that the original scores are separated by even smaller margins.  For schools at the margin, it may very well be worthwhile to do exactly what I described above.
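To see how small those margins really are, here is a minimal sketch of the rescaling arithmetic.  It assumes, since U.S. News does not spell out its raw formula, that each school’s displayed score is its raw score divided by the highest raw score, times 100; the school names and numbers are invented for illustration.

```python
# Hypothetical rescaling of ranking scores, assuming the formula
# displayed = raw / top_raw * 100.  All numbers below are invented.
top_raw = 80.0  # assumed raw score of the highest-ranked school

raw_scores = {"School A": 68.0, "School B": 67.4}  # hypothetical raw scores

for school, raw in raw_scores.items():
    displayed = raw / top_raw * 100
    print(f"{school}: raw {raw:.2f} -> displayed {displayed:.2f}")

# School A: raw 68.00 -> displayed 85.00
# School B: raw 67.40 -> displayed 84.25
#
# A 0.75-point gap in the published rankings corresponds to only a
# 0.60-point gap in raw scores, so the 1.5% of the score tied to
# acceptance rate can plausibly swap two schools' positions.
```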

More troubling, however, is the whole notion that exclusivity is a good thing.  Academic quality should be the real objective here.  Why should exclusivity matter?  In an excellent essay in the Chronicle of Higher Education last week, Kevin Carey faulted Harvard for not using its immense endowment, which grew by nearly $32 billion in less than two decades before a spectacular $10 billion loss last year, to educate more undergraduates.  (In both his original essay and his follow-up blog post, Carey considers arguments such as Harvard’s recent increases in undergraduate financial aid and the need to keep class sizes small, and rejects them as justifications for not expanding the undergraduate class, so I won’t go into them here.)  It is indisputable that many more extremely qualified applicants apply to Harvard (or any of the elite colleges and universities) than there are spaces for them.  Shouldn’t a spectacular expansion of resources mean that a top college also expands its capacity to fulfill its basic educational mission, and therefore increases its acceptance rate?

And a first step toward that is to discard “acceptance rates” as a measure of a school’s quality.