Idea of the day: using e-books to learn about learning

Amazon.com tells me that the Kindle is the hottest gift this holiday season.  And not long ago, several law professors had a lively discussion of law e-textbooks on a prominent law blog.  It seems that, ready or not, widespread use of e-textbooks is just around the corner.

To be sure, traditional paper textbooks will not disappear immediately, but e-textbooks will surely gain in popularity as prices for e-readers fall, the technology matures, and more publishers make their textbooks available electronically.

I must admit that I am not yet enamored of e-readers, and do not own one.  I prefer the texture, the weight, the craftsmanship of real books.  I love certain publishers just because I like the designs of their paperbacks, and a beautiful book cover always sends a little shiver of pleasure down my spine.  To think that a new generation of children will be weaned on a screen and never know the loveliness of paper makes me a bit sad.

Nonetheless, I have lately grown more excited about the prospect of e-textbooks, having realized that e-readers might be used — indeed, are probably already being used — to track the reading habits of their users and to generate data about reading trends.

To the average adult reading for pleasure, the idea that a machine might meticulously track his or her reading habits may seem like an enormous invasion of privacy.  But collecting such data may be indispensable to educators who are trying to understand effective learning behavior.

Most of the data that we collect about education measures the output of the learning process.  We test student knowledge in standardized, nationwide or state-wide exams given periodically, and then analyze the resulting scores to determine how well the learning process has succeeded.  But e-readers and similar devices can generate an enormous amount of data about the learning process itself.

What types of data might be collected?  The possibilities are virtually endless.  The device might track how much time a student spends reading each day, what he reads, the speed at which he reads, the amount of time he spends on particular pages, and so on.  If, as I believe will inevitably happen, such devices come equipped with quizzes, problem sets, and exams, students can also be tracked by how long they spend on a particular problem and, of course, by how they score.
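To make this concrete, here is a minimal sketch, in Python, of what such a tracking record might look like and how the time spent on each page could be tallied.  The field names and structure are my own invention for illustration; they are not the format used by any actual e-reader.

```python
# Hypothetical sketch of the kind of per-student reading log an e-textbook
# device could collect.  All field names here are invented for illustration.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class ReadingEvent:
    student_id: str
    book_id: str
    page: int
    seconds_on_page: float   # time spent before the page was turned
    timestamp: str           # e.g. "2009-12-01T19:32:00"

def time_per_page(events):
    """Total up the seconds spent on each page of each book."""
    totals = defaultdict(float)
    for e in events:
        totals[(e.book_id, e.page)] += e.seconds_on_page
    return totals

# Two separate visits to the same page simply add up.
log = [
    ReadingEvent("s01", "algebra-1", 12, 40.0, "2009-12-01T19:32:00"),
    ReadingEvent("s01", "algebra-1", 12, 25.0, "2009-12-01T21:05:00"),
]
print(dict(time_per_page(log)))   # {('algebra-1', 12): 65.0}
```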

There is a bonanza of information here that researchers could use to study learning behavior.  But the tracking could be used not only for academic research, but also as a way for schools and teachers to ensure that students are completing their homework.  Many employers already do a form of this tracking by, for example, requiring employees to take an online training program consisting of PowerPoint slides with periodic questions to ensure that the content of the slides is being read and understood.

There are a few well-known problems in education that researchers have long puzzled over.  Take, for example, the racial achievement gap.  Why do children of different races perform differently on standardized exams even when one accounts for other factors, such as socio-economic background?  Elsewhere in this blog, I have argued that we have an unhealthy focus on race, and that other, even wider achievement gaps should trouble us more.  But it is undeniable that racial achievement gaps exist.  Learning more about children’s learning habits might give us insights into this and other puzzles.

Of course, understanding how students learn is not the only or even the most important goal.  Through better understanding of how people learn, educators can figure out ways to improve the way students learn and the way teachers teach.

Does all of this sound Big Brother-esque?  Perhaps.  But like it or not, retailers, advertisers, and website developers already do very similar things to understand and track consumer behavior.  The amount of time a shopper spends on a webpage, the links he clicks, and the search terms that brought him there are all meticulously tracked and fed into sophisticated programs.  It is time our educators took advantage of these tools, for the sake of our students.

Idea of the day: ditch standardized tests

Why do colleges and universities require millions of students each year to take standardized tests as part of their college applications?  The conventional rationale goes something like this: there is a vast difference in quality among the nation’s high schools, so GPA alone may not accurately reflect a student’s true ability.  After all, an A in English at Phillips Exeter is not the same as an A at East Memphis Public High, but a 2400 on the SAT is the same everywhere.  Standardized tests therefore put students on the same evaluative plane, and their scores are more useful than GPAs for gauging a student’s academic aptitude.

Sounds good, except that it’s not really true.  Critics have long charged that flaws and biases in these tests make them bad predictors of college success.  The latest important piece of evidence comes from the new book Crossing the Finish Line.  In a brief review of the book, Chad Alderman notes that the data in the book show that standardized test scores have little power to predict whether students will graduate from college.  High school GPA is “three to five times more important” than SAT or ACT scores in predicting whether a student will graduate.  Moreover, when high school quality is accounted for, the predictive power of the SAT and ACT disappears entirely and even turns negative.
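To see what a comparison of “predictive power” might look like in practice, here is a toy sketch of the kind of analysis involved: a logistic regression of graduation on standardized GPA and test scores.  The data below is entirely synthetic and invented by me; it is not the book’s data, and the model is only a stand-in for whatever the authors actually did.

```python
# Toy illustration of comparing predictors of graduation; synthetic data only,
# not the Crossing the Finish Line dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gpa = rng.normal(3.0, 0.5, n)      # high school GPA
sat = rng.normal(1500, 200, n)     # SAT score on the 2400 scale

# Invented "truth": graduation depends mostly on GPA, only weakly on SAT.
p = 1 / (1 + np.exp(-(-4.0 + 1.5 * gpa + 0.0005 * sat)))
graduated = rng.binomial(1, p)

# Standardize both predictors so their coefficients are directly comparable.
X = np.column_stack([(gpa - gpa.mean()) / gpa.std(),
                     (sat - sat.mean()) / sat.std()])
model = LogisticRegression().fit(X, graduated)
print("standardized coefficients (GPA, SAT):", model.coef_[0])
# The GPA coefficient comes out several times larger than the SAT coefficient,
# which is the sense in which one predictor is "more important" than another.
```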

What should we make of these findings?  If the data shows what it purports to show, then a major — perhaps the only — rationale for requiring and using standardized test scores in college admissions has just been dealt a fatal blow.

And on the other side of the ledger, there is a host of possible negative effects of using standardized tests.  Critics have accused the tests of being biased against low-income students in favor of more affluent ones, of spawning a $310-million-a-year test-prep industry, of encouraging rote preparation in schools, and of harboring racial and gender biases.  The sad truth is that even the supporters of standardized tests may favor them in spite of all their flaws only because “this is the only thing we have.”

Is it really the only thing we have?  If we want to predict student performance and admit only those who are likely to succeed in college, colleges could simply look at objective measures of high school quality and use them in conjunction with student grades, weighting each applicant’s GPA according to the school where it was earned before comparing it against other factors and other candidates.

And if we wish to reward the students who succeeded in spite of adversity, we should use the same objective measure of high school quality and give students who did well in the worst schools extra points in their applications.
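Here is a rough sketch of both ideas in code.  The school-quality index, weights, and cutoffs below are placeholders I made up for illustration; they are not a real admissions formula.

```python
# Sketch: weight GPA by an objective school-quality index (0-10), and add an
# "adversity" bonus for strong grades earned at the lowest-rated schools.
# All numbers here are invented placeholders.
def adjusted_gpa(gpa, school_quality, max_quality=10.0):
    """Scale a 4.0-scale GPA by how demanding the high school is."""
    return gpa * (0.5 + 0.5 * school_quality / max_quality)

def adversity_bonus(gpa, school_quality, cutoff=3.0):
    """Extra credit for doing well at one of the weakest schools."""
    return 0.3 if school_quality <= cutoff and gpa >= 3.5 else 0.0

def application_score(gpa, school_quality):
    return adjusted_gpa(gpa, school_quality) + adversity_bonus(gpa, school_quality)

# The same 3.8 GPA counts differently at a weak school and a demanding one.
print(application_score(3.8, school_quality=2.0))   # 2.28 + 0.30 = 2.58
print(application_score(3.8, school_quality=9.5))   # 3.705
```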

This brings me to another concern I have with standardized test scores.  The scores, however meaningless and weakly predictive they may be, give ammunition to those who accuse affirmative action programs of being unfair to meritorious applicants who lose out to “less qualified” minorities.  The scores are often presented as a neutral and quantifiable indicator of how much less qualified the minority candidates are.

The plaintiff in Gratz v. Bollinger, for example, attacked the University of Michigan for giving race 20 points on its 150-point admissions index (100 points guaranteed admission) while giving only 12 points for a perfect SAT score.  And as I have written here previously, a Princeton researcher found that being African-American is equivalent to having 230 extra points on a 1600-point SAT, and being Hispanic is equivalent to 185 extra points.

How closely a school’s admissions track standardized test scores has become a proxy for whether the school is “meritocratic.”  Disparities in test scores among different ethnic groups are a favorite example that opponents of affirmative action use to criticize it.  But their criticism has a lot less force if the test scores themselves are poor proxies for student ability.

A perfect score on the SAT is indeed the same everywhere.  But it turns out that the perfect score doesn’t tell us very much at all about what we want to know.

Idea of the day: forget acceptance rates

Acceptance rate is often perceived as an important indicator of a university’s quality.  In the U.S. News and World Report’s college rankings, acceptance rate determines 10% of the student selectivity score for a university, which in turn makes up 15% of the total score.  U.S. News also publishes a list of colleges with the lowest acceptance rates.  The top three spots were taken by Curtis Institute of Music (4.0%), Jarvis Christian College (4.5%), and Rust College (7.6%).

The fact that you probably have never heard of these schools illustrates my point perfectly.  Comparing colleges based on acceptance rates is meaningless at best, and counterproductive at worst.  Although schools such as Harvard (7.9%), Yale (8.6%), and Stanford (9.5%) also make the top of the “lowest acceptance rate” list, they share spots with schools such as College of the Ozarks (11.7%) and Alice Lloyd College (10.5%).  Perhaps College of the Ozarks and Rust College are fine institutions as well, but who could argue seriously that they are comparable to Princeton (9.9%) and Columbia (10.0%)?  What would such a comparison even mean?

Acceptance rate is nothing more than the number of students a school admits divided by the number of applications it receives.  A low acceptance rate might enhance an institution’s image of exclusivity and prestige, but it says nothing about the substantive quality of its faculty or student body, which can be, should be, and already are measured by other, more objective criteria in the rankings, such as test scores, retention rates, or peer assessment of reputation and academic quality.

Moreover, rewarding low acceptance rates creates a perverse incentive for colleges to solicit applications from students who are not suited for the institution while limiting the number of students admitted, in order to artificially lower their acceptance rates and gain a spot or two in the rankings.  If you think that 1.5% of the total score does not matter, keep in mind that colleges are often separated by a single point in the published rankings.  And that is after the scores are rescaled (recalibrated to a scale of 100 based on the highest score received), which means that the underlying raw scores are separated by even smaller margins.  For schools at the margin, it may very well be worthwhile to do exactly what I described above.
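For anyone keeping score, the arithmetic looks roughly like this, using the U.S. News weights cited above and an invented set of raw scores to show what the rescaling does:

```python
# Acceptance rate's share of the total ranking score, per the weights above.
selectivity_share_of_total = 0.15
acceptance_share_of_selectivity = 0.10
print(selectivity_share_of_total * acceptance_share_of_selectivity)  # 0.015, i.e. 1.5%

# Rescaling to the top score: raw scores are multiplied by 100 / (highest raw
# score), so a one-point gap in the published scores corresponds to an even
# smaller gap in the raw scores.  The raw scores below are made up.
raw = {"College A": 87.3, "College B": 80.1, "College C": 79.2}
top = max(raw.values())
published = {name: round(score * 100 / top) for name, score in raw.items()}
print(published)   # {'College A': 100, 'College B': 92, 'College C': 91}
# B and C differ by one published point but only 0.9 raw points.
```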

More troubling, however, is the whole notion that exclusivity is a good thing.  Academic quality should be the real objective here.  Why should exclusivity matter?  In an excellent essay in the Chronicle of Higher Education last week, Kevin Carey faulted Harvard for not using its immense endowment, which grew by nearly $32 billion in less than two decades before a spectacular loss of $10 billion last year, to educate more undergraduates.  (In both his original essay and his follow-up blog post, Carey considers arguments such as the fact that Harvard has increased financial aid for its undergraduates in recent years and the need to keep class sizes small, and rejects them as justifications for not expanding the undergraduate class, so I won’t go into that here.)  It is indisputable that many more extremely qualified applicants apply to Harvard (or any of the elite colleges and universities) than there are spaces for them.  Shouldn’t a spectacular expansion of resources mean that a top college also expands its capacity to fulfill its basic educational mission, and therefore increases its acceptance rate?

And to encourage that, let’s discard “acceptance rate” as a measure of a school’s quality.

Idea of the day: longer and more school days

The AP reports that the Obama administration is proposing a longer school year and longer school days.  Children in other nations, say President Obama and Secretary of Education Arne Duncan, spend up to 30% more time in school.  America needs to align itself with the international norm.

As someone who spent all of her elementary school years and part of middle school in China, I can attest to the astonishment that my 12-year-old self felt when I first came to the States and discovered that a normal school day was over by 2:30 p.m.  In China, I often stayed in class until 7:00 or even 8:00 p.m.  My memory of 5th and 6th grade is of going to school in the dark and coming home long after dark.  By comparison, the American school schedule, as well as the content of its classes, seemed like child’s play.

The long 2.5- or even 3-month summers were also a novelty to me.  In Beijing, school let out in mid-July and resumed in early September.  We had at most a month and a half of summer vacation.

Personally, I think the change is long overdue.  President Obama hit the nail on the head when he called the current American school calendar an outdated one based on the “agrarian calendar.”  The only surprise, for me, is how long it took for people to catch on to this fact.

Obama justified his proposal in terms of catching up with international standards and making American students competitive against students in other (Asian?) countries.  But his proposal has an additional benefit: narrowing the achievement gap between students of different socio-economic classes.

Just yesterday, my friend and I talked about Malcolm Gladwell’s book Outliers (which I still have to read), a book that, as I have since learned, is really about education.  My friend mentioned the well-known study on “summer learning loss” by Johns Hopkins sociology professor Karl Alexander, which Gladwell cites in his book.  Put briefly, the Alexander study shows that in the Baltimore public schools, low-income students actually learn more during the school year than their middle- and upper-middle-class classmates, but they fall behind during the summer while their richer peers gain ground.  Gladwell concluded in his book that although the conventional wisdom is that we must “improve” the inner-city schools, school itself is likely not the problem.  Too little school is.
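To see how this pattern can compound, here is a purely hypothetical illustration with numbers I made up; they are not Alexander’s Baltimore data, just a demonstration of the arithmetic:

```python
# Hypothetical illustration of cumulative "summer loss" (all numbers invented).
# Lower-income students gain slightly more each school year but lose ground each
# summer, so the cumulative gap still widens over time.
school_year_gain = {"lower_income": 1.10, "higher_income": 1.00}   # grade-level equivalents
summer_change    = {"lower_income": -0.20, "higher_income": 0.15}

cumulative = {"lower_income": 0.0, "higher_income": 0.0}
for year in range(1, 6):
    for group in cumulative:
        cumulative[group] += school_year_gain[group] + summer_change[group]
    print(year, {g: round(v, 2) for g, v in cumulative.items()})
# After five years the higher-income group leads by 1.25 grade levels, even
# though it learned less during each school year.
```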

If Gladwell and Alexander are correct, increasing the length of the school year and the school day will not only make American students more competitive globally; it will also help eliminate the advantage that richer students have over their poorer counterparts and make our education system more equitable.  Seems like a worthy goal to me.