Monday, December 10, 2007

Do Value-Added Models Add Value? A New Paper Says "Not Yet."

Value-added models are all the rage. Last week, the Gates Foundation donated $4.5 million to Houston to fund value-added measurement of teachers' effectiveness and to award bonuses based on teachers' "value-added." Similarly, NYC is developing a system to provide value-added estimates to principals to aid in tenure decisions; last year, Dept of Ed official Robert Gordon suggested that up to 25% of new teachers should be dismissed based on such estimates. In other words, districts are planning to make significant decisions based on these measures.

A formidable challenge to value-added models is the non-random assignment of students to teachers. (See longer posts about this issue here and here.) Accurately measuring teacher effects requires random assignment: only then can differences in students' performance be cleanly attributed to something the teacher did.

Of course, students are not randomly assigned to teachers. Parents do not mindlessly flip a coin and leave their child’s placement with a bad teacher up to chance; we know that principals and guidance counselors often heed parents’ wishes in the teacher placement process. Parents aside, we also know that principals non-randomly assign kids to teachers based on their sense of which teachers are good with certain kinds of kids.

My central point here is this: if assignment is non-random, some teachers will spuriously appear to be doing much better than others. And spuriousness is an ugly problem to have on your hands when teachers' incomes and jobs are in question.
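To see how sorting alone can manufacture a "great" teacher, here is a toy simulation of my own (not from any of the papers discussed here; the numbers are made up). Both teachers have exactly zero true effect, but the faster-growing students are steered to teacher A:

```python
# Hypothetical illustration: two teachers with ZERO true effect.
# Students differ only in a persistent growth rate (home background,
# tutoring, etc.).  Sorting the fast growers to teacher A makes A
# look far more effective than B.
import random
import statistics

random.seed(0)

# Each student's persistent annual growth rate.
growth = [random.gauss(0, 5) for _ in range(1000)]

def observed_gain(g, teacher_effect=0.0):
    """Test-score gain = student's own growth + teacher effect + noise."""
    return g + teacher_effect + random.gauss(0, 3)

# Non-random assignment: top half of growers go to teacher A.
ranked = sorted(growth)
gain_b = statistics.mean(observed_gain(g) for g in ranked[:500])
gain_a = statistics.mean(observed_gain(g) for g in ranked[500:])
print(f"sorted assignment:  A - B gap = {gain_a - gain_b:.2f}")  # large

# Random assignment: the spurious gap disappears.
random.shuffle(growth)
rand_gap = (statistics.mean(observed_gain(g) for g in growth[:500])
            - statistics.mean(observed_gain(g) for g in growth[500:]))
print(f"random assignment:  A - B gap = {rand_gap:.2f}")  # near zero
```

A value-added system that paid bonuses on the first comparison would be rewarding teacher A for her roster, not her teaching.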

Jesse Rothstein, a Princeton economist whose work I've written about before, wondered just how big a problem non-random assignment is for value-added models. He had a clever idea that I will call the "back to the future hypothesis." Rothstein reasoned that students' future 5th grade teachers cannot have causal effects on their 4th grade achievement gains. In other words, the future should not be able to predict the past if students are randomly assigned to teachers.

In an elegant new paper, Rothstein finds that 5th grade teachers - teachers in whose classes 4th grade students have never sat - have apparent effects on students' 4th grade gains almost as large as those of their actual 4th grade teachers. That's pretty good evidence of non-random assignment.
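The logic of the falsification test can be sketched in a few lines of hypothetical simulation (again mine, not Rothstein's actual specification, which uses regression with teacher fixed effects). If students are tracked by growth rate into their grade-5 classrooms, the future teacher "predicts" grade-4 gains she could not possibly have caused:

```python
# Hypothetical sketch of the "back to the future" test.  Grade-4 gains
# are generated with NO grade-5 teacher input; tracking alone makes the
# future teacher assignment line up with past gains.
import random
import statistics

random.seed(1)

growth = [random.gauss(0, 5) for _ in range(1000)]   # persistent student traits
g4_gain = [g + random.gauss(0, 3) for g in growth]   # grade-4 gains: trait + noise

# Tracking: grade-5 teacher X gets the high-growth half, teacher Y the rest.
order = sorted(range(1000), key=growth.__getitem__)
future_gap = (statistics.mean(g4_gain[i] for i in order[500:])    # teacher X
              - statistics.mean(g4_gain[i] for i in order[:500])) # teacher Y
print(f"grade-4 gain gap by FUTURE teacher: {future_gap:.2f}")    # large

# Under random grade-5 assignment, the future teacher predicts nothing.
random.shuffle(order)
random_gap = (statistics.mean(g4_gain[i] for i in order[500:])
              - statistics.mean(g4_gain[i] for i in order[:500]))
print(f"same gap under random assignment: {random_gap:.2f}")      # near zero
```

Since a large "effect" of the future teacher is causally impossible, any such gap in real data is a red flag that the model's estimated teacher effects are contaminated by sorting.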

Needless to say, this is very bad news for the folks hoping that value-added models will give them an accurate and reliable measure of individual teachers' performance. Read the whole paper at the link above.


Sherman Dorn said...

Thanks for the heads-up! I can now go to bed happy today, having learned something new.

Stuart Buck said...

Can someone explain in English why non-random assignment means that you can't look at student fixed effects? I'm not an econometrician by any means, and this may be way off, but even under non-random assignment, couldn't we tell something about teacher effects if Student A improved by 5 percentile points in Grade 5 but by 15 percentile points in Grade 6? That is, even if the Grade 6 teacher seems to "predict" the Grade 5 achievement because she's being sent the high-achieving students non-randomly, couldn't you still tell if the Grade 6 teacher has a greater-than-expected effect? Or maybe I'm just totally misunderstanding the reasoning here.