Friday, October 19, 2007

What lessons does KIPP offer for urban education reform?

If KIPP schools themselves aren't part of the answer for most urban kids, does KIPP offer any lessons for urban school reform?

I think it does. Increasing the amount of time that kids spend in school is a promising strategy for improving their academic achievement. The literature on summer learning gaps provides insight into why this could work.

Here's how researchers have gone about studying summer learning gaps. Imagine we give a kindergarten student a test at the beginning and end of kindergarten, and then again at the beginning and end of first grade. We can now calculate a rate of learning during the school year, as well as a rate of learning during the summer. Since the early 1970s, we've known that poor kids learn at a much lower rate during the summer than their advantaged peers.
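The rate calculation behind this design can be sketched in a few lines of Python. The scores below are invented; only the arithmetic is the point:

```python
# Hypothetical scale scores for one student (invented numbers -- only the
# rate arithmetic matters).
fall_k, spring_k = 30.0, 50.0   # start and end of kindergarten
fall_1 = 52.0                   # start of first grade

SCHOOL_YEAR_MONTHS = 9.0
SUMMER_MONTHS = 3.0

# Learning rate during the school year (points per month)
school_year_rate = (spring_k - fall_k) / SCHOOL_YEAR_MONTHS
# Learning rate over the intervening summer (points per month)
summer_rate = (fall_1 - spring_k) / SUMMER_MONTHS

print(f"school-year rate: {school_year_rate:.2f} points/month")
print(f"summer rate:      {summer_rate:.2f} points/month")
```

Researchers then compare these per-month rates across poor and advantaged students; it's the summer rates that diverge.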

This adds up over time. Karl Alexander found that two-thirds of the reading achievement gap between poor and rich 9th graders in Baltimore is explained by how much they learned during their elementary school summers. (For more, see Karl Alexander's recent work, written up in Ed Week, or the paper by Doug Downey and colleagues using data from the Early Childhood Longitudinal Study.)

During the school year, every afternoon and evening from 3PM to 11PM is a mini-summer. Advantaged kids participate in tutoring, art, music, and sports when school lets out, but disadvantaged kids don't have access to activities that keep them learning through the evening. While it is an expensive intervention, keeping poor kids in school longer - both during the school year and during the summer - is a policy option that deserves more attention.

Enjoy the weekend, everyone!

Thursday, October 18, 2007

Cool people you should know: Linda Renzulli

A charter school scholar to keep an eye on is Linda Renzulli, a sociologist at the University of Georgia. Renzulli has written a number of papers on the social and political conditions that influence the founding of charter schools, and you can find her papers here. She's currently working on a study comparing teacher satisfaction in charter and traditional neighborhood schools.

Cool findings:
  • In her study of the state-to-state diffusion of charter school legislation, Renzulli found that states were more likely to pass their own charter school legislation (between 1991 and 1999) when neighboring states already had charter legislation.
  • In another study, Renzulli found that the number of nonreligious private schools in a county was a strong predictor of the number of applications submitted to open charter schools, i.e. counties with more nonreligious private schools have more charter school applications.

Does KIPP provide a solution to the problems of urban education?

On Monday, I asked whether KIPP has a positive effect on the students it serves; I concluded that KIPP schools likely have positive effects on their students, but that these effects are not as large as they appear when one compares outcomes at KIPP and neighborhood schools.

Today, I want to write about a different question: is a KIPP education likely to pay dividends for students who don't select into KIPP lotteries? In other words, let's imagine that we head over to a neighborhood school, pick every 10th student, and assign them to a KIPP school. What's likely to happen?

Richard Rothstein said it best in Class and Schools, when he wrote:

[KIPP students] are not typical lower-class students....KIPP's strategy works well for them but there is no evidence that it would be as successful for students whose parents are not motivated to choose such a school and help enforce its academic rules. In the Bronx KIPP school, 41% of students entered at or above grade level in reading, and 48% entered at or above grade level in math...If these schools are unusually effective (as they probably are), they can post even higher achievement. This is admirable, but it does not indicate that KIPP has shown how to get middle-class results from typical lower-class students without addressing the social and economic causes of failure.

As I wrote Tuesday, KIPP's own enrollment policies basically make this point; that the school requires families to meet with the school to discuss their expectations and to sign a contract indicates that KIPP knows that this approach won't work for everyone. This is not to say that KIPPsters don't benefit, but that we are going to have to look elsewhere for solutions that work for the majority of kids.

KIPP's own patterns of attrition also demonstrate that this is an approach that doesn't work consistently across the urban population. Ed Week wrote a long article about attrition earlier this year, and the tables below show that in some KIPP schools, there appears to be significant attrition. I pulled the enrollment data for the KIPP schools in Texas for both 2005-2006 and 2004-2005, and here's what I found:

2005-2006 Number Enrolled

| School | Grade 5 | Grade 6 | Grade 7 | Grade 8 | Grade 9 | Grade 10 |
| --- | --- | --- | --- | --- | --- | --- |
| KIPP Truth - Dallas | 61 | 33 | 37 | - | - | - |
| KIPP Academy - Houston | 92 | 87 | 94 | 84 | 95 | 59 |
| KIPP Aspire - San Antonio | 83 | 90 | 66 | - | - | - |
| KIPP 3D - Houston | 91 | 89 | 75 | 63 | - | - |
| KIPP Austin College Prep | 85 | 79 | 57 | 35 | - | - |


2004-2005 Number Enrolled

| School | Grade 5 | Grade 6 | Grade 7 | Grade 8 | Grade 9 | Grade 10 |
| --- | --- | --- | --- | --- | --- | --- |
| KIPP Truth - Dallas | 43 | 48 | - | - | - | - |
| KIPP Academy - Houston | 85 | 86 | 87 | 74 | 60 | |
| KIPP Aspire - San Antonio | 78 | 70 | - | - | - | - |
| KIPP Austin College Prep | 51 | 55 | 41 | - | - | - |

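One way to summarize the enrollment tables above is to follow a cohort from one year's grade to the next year's grade. A sketch using the KIPP Truth numbers from the tables (with the same caveat discussed below: without student-level data, leavers and grade-repeaters look identical):

```python
# Grade-by-grade enrollment for KIPP Truth - Dallas, from the tables above.
enrollment = {
    "2004-2005": {5: 43, 6: 48},
    "2005-2006": {5: 61, 6: 33, 7: 37},
}

def cohort_ratio(data, year1, year2, grade):
    """Year-2 enrollment in grade+1 as a share of year-1 enrollment in grade.

    Note: this conflates attrition with grade retention -- aggregate counts
    can't distinguish students who left from students who repeated a grade.
    """
    return data[year2][grade + 1] / data[year1][grade]

r = cohort_ratio(enrollment, "2004-2005", "2005-2006", 5)
print(f"KIPP Truth, 5th -> 6th grade: {r:.0%} of the cohort remains")
```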

Because these numbers don't allow us to follow individual kids, it's difficult to know what to make of them. For example, that only 33 of the 43 KIPP Truth 5th graders in 2004-2005 appear as 6th graders in 2005-2006 could mean that students left, or that KIPP retained a number of them in 5th grade. On the other hand, we don't know how many kids were there on the first day (these are October 31st numbers); KIPP Truth reports that it has 90 5th grade seats, so an enrollment of only 61 5th graders by October 31st may itself represent significant attrition. And if the lowest-scoring kids are the ones leaving, there may be large effects on test scores. Here are the test score data for these schools for each of these two years.

2005-2006 Percent Passing State Tests

| School | G5 Read | G6 Read | G7 Read | G8 Read | G5 Math | G6 Math | G7 Math | G8 Math |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| KIPP Truth - Dallas | 64 | 86 | 87 | - | 67 | 86 | 94 | 81 |
| KIPP Academy - Houston | * | 96 | 96 | 99 | 77 | 94 | 95 | 99 |
| KIPP Aspire - San Antonio | 88 | 99 | 85 | - | 88 | 97 | 92 | - |
| KIPP 3D - Houston | * | 98 | 80 | 98 | 66 | 93 | 81 | 98 |
| KIPP Austin College Prep | 71 | 95 | 92 | 99 | 62 | 96 | 94 | 97 |

2004-2005 Percent Passing State Tests

| School | G5 Read | G6 Read | G7 Read | G8 Read | G5 Math | G6 Math | G7 Math | G8 Math |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| KIPP Truth - Dallas | 55 | 87 | - | - | 64 | 83 | - | - |
| KIPP Academy - Houston | 83 | 99 | 95 | 97 | 80 | 94 | 96 | 99 |
| KIPP Aspire - San Antonio | 70 | 90 | - | - | 89 | 94 | - | - |
| KIPP Austin College Prep | 51 | 85 | 93 | - | 78 | 75 | 83 | - |


(* means that Texas did not report their scores.)

To sum up, while KIPP offers a viable alternative for a subset of kids, it's not a reform strategy that can be brought to scale. However, as I will discuss tomorrow, there are a few important elements of the KIPP approach that public schools can successfully adopt.

Wednesday, October 17, 2007

KIPP Reader Comments Roundup

A handful of readers have taken the time to post thoughtful comments on KIPP, and I wanted to respond to a few of them.

One reader asked:

Why is it you think that kids who spent 5 years in school and only got to the 34th percentile nationally would not logically stay at that percentile when they hit the middle school years in the same system? It is statistically extremely likely that they would see very little movement, so the KIPP increase to the 58th percentile is, one can be quite sure (even without your methodology), a dramatic improvement over the district school system.

Anyone who is selecting into an intervention is expressing dissatisfaction with how things are going. Suppose I have been lax about going to the gym for five years. I finally decide that I'm horribly out of shape and opt into a lottery for a personal trainer, but I don't win. I'm likely to exercise on my own anyway, even though I don't have a trainer. Treatment for depression is another good example of why you can't just compare pre- and post-tests without a control group. If you attributed all of the improvement following an anti-depressant to the drug itself, you would conclude that the effects of these drugs are more than twice as large as they truly are. People who select into an intervention are making a statement that they want to make an improvement - and some improvement may happen anyway in the absence of the treatment. That said, I agree with you that there are likely positive KIPP effects; I would just like a better estimate of how large they are and of the kinds of kids for whom KIPP works.

Another reader made this point:

Their students have just as many problems at home as the kids in the normal public school. Your assertion that of course THESE students at KIPP are special and smarter and have better families is just wrong.

Of course KIPP is serving very disadvantaged kids – kids who have problems at home and who are coming in with low achievement levels. But the question is what keeps their peers from choosing in and/or from staying enrolled; that not everyone opts into a KIPP lottery tells us that these families differ in some regard. So I disagree that these families are identical. Most prominently, KIPP families may be more likely to benefit from a KIPP education because they have the support at home to back it up.

Joanne Jacobs, in her comment, related that KIPP schools change over their life course; in reporting a story on KIPP Heartwood, she found that the kids who enroll early on may be more disadvantaged than those enrolling after the school's success has been demonstrated. This is worth following up on.

Finally, I think that some readers misinterpreted the purpose of my posts this week. I have nothing but respect for the educators who’ve worked at KIPP schools, who work tirelessly to improve their kids’ performance. They work grueling hours, care deeply about closing the achievement gap, and make incredible personal sacrifices to meet these goals.

My intent is not to diminish KIPP’s accomplishment, but to ask what the possibilities and limits of the KIPP approach are for urban education reform more broadly. 75% of my dinner party conversations with non-educators involve discussion of KIPP as a viable reform strategy for urban education. Answering these questions requires us to think about what KIPP can and can’t do, and for whom. Public policymakers can’t make good decisions without this information, and our failure to ask them, I think, is a disservice to the children whose lives are in the balance.

When a Lottery is Not a Lottery II: The NYC Small Schools

I received a number of emails last week in response to my post comparing Evander Childs High School and the new small schools in the same building, which found that the small schools enrolled a very different group of kids. The most common question was how these differences were possible if these students are assigned “by lottery.”

I spent a few days looking into this question, and here's what I learned: it's a common misconception that the NYC small schools admit students by lottery. The majority of the new small schools fall under the city's "limited unscreened" selection mechanism. Here's how it works: every 8th grader applies to high school, ranking up to 12 schools in order of preference, and these applications are entered into a central database. Each school then receives a list of the students who have applied to it, but the schools do not know whether a student ranked them 1st or 12th.

Limited unscreened schools then dichotomously rank their students (yes or no); students are chosen if the school can verify that the student is making an “informed choice” to attend the school. Students that choose the school and that are also chosen by the school are admitted in order of the students' preferences (i.e. students who ranked the school first are admitted before those who ranked the school 10th).
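The selection step just described can be sketched as code. This is a toy model of a single school's slice of the process; the names and numbers are hypothetical, and the real NYC system is a city-wide match:

```python
# Toy sketch of one "limited unscreened" school's admission step, as
# described above. Hypothetical names/numbers; the real process is a
# city-wide match, and this models only one school.

def admit(applicants, verified, capacity):
    """applicants: list of (student, rank) pairs, where rank is the
    position (1-12) the student gave this school on her application.

    The school marks each applicant yes/no ("verified informed choice");
    among the yeses, seats are filled in order of the student's own
    ranking of the school.
    """
    eligible = [(student, rank) for student, rank in applicants
                if student in verified]
    eligible.sort(key=lambda pair: pair[1])  # rank-1 choosers seated first
    return [student for student, _ in eligible[:capacity]]

applicants = [("amy", 1), ("ben", 10), ("cara", 2), ("dev", 1), ("eli", 3)]
verified = {"amy", "ben", "cara", "eli"}     # "dev" skipped the info session
print(admit(applicants, verified, capacity=3))  # ['amy', 'cara', 'eli']
```

Note that "dev" ranked the school first but is shut out entirely by the verification step - which is exactly where the discretion discussed below enters.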

Schools vary in the criteria they use for verifying informed choice. Some schools require students to attend information sessions; in the past, schools required that students attend a session with a parent or guardian, but this has now been forbidden by the Department of Education. Other schools have used "applications" that students needed to fill out to verify informed choice. For example, a New Yorker passed along the application used in the 2004-2005 school year by the network of schools affiliated with Replications - "limited unscreened" schools that include the Frederick Douglass Academy and Mott Hall replications. Here are the essay questions:
1) What are three things your teachers would say about you?
2) What makes you want to attend a school that will demand your very best academically and will expect you to work harder than you probably ever have before?
3) What are five future goals you have for yourself?
4) Mention the title and authors of some books you would like to discuss during your interview.
5) What are some activities to which you belong either in school or outside of school?
In addition, until this year, all limited unscreened schools had access to individual students' prior attendance, grades, test scores, dates of birth, addresses, sending junior high schools, and special education and English language learner status. Interestingly, the Department of Education stopped providing this information to limited unscreened schools beginning with admission for the 9th grade class of 2007 - was this choice made because unscreened schools were using data they weren't supposed to?

What this means, however, is that the limited unscreened schools about which we’ve heard a lot of crowing had access to a great deal of students’ achievement data. Given the data I presented last week, which show large differences between large and small school incoming populations, I have a hard time believing that schools ignored these data entirely.

Certainly the formal rules of the system prohibit these schools from doing so – but how tightly was this regulated? The Department of Education could easily demonstrate that “informed choice” doesn’t limit lower-achieving students’ access to these schools by testing for differences in the mean test scores, attendance, etc of the applicants, the applicants chosen by the schools themselves, and the students who ultimately matriculate. This is a trivial (i.e. easy) analysis, and certainly something that the Department of Education needs to do if they want to maximize educational opportunity for New York City kids.
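The analysis proposed above really is simple. A sketch with invented prior-year scores (a real check would use all applicants citywide and a formal test, e.g. a t-test, rather than eyeballing means):

```python
# Sketch of the check described above: compare mean prior achievement at
# each stage of the admissions funnel. Scores are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

applicants = [55, 60, 48, 72, 65, 50, 58, 80]   # everyone who applied
chosen = [60, 72, 65, 58, 80]                   # verified "informed choice"
matriculated = [72, 65, 80]                     # actually enrolled

for label, xs in [("applicants", applicants),
                  ("chosen", chosen),
                  ("matriculated", matriculated)]:
    print(f"{label:>12}: mean prior score {mean(xs):.1f}")
# If the mean climbs at each stage, "informed choice" may be screening
# out lower-achieving students.
```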

To sum up, it appears that some of these disparities are created by who applies to the small schools in the first place, by which students the schools verify as making an "informed choice," and potentially by these schools' use of students' achievement characteristics available through the database.

Tuesday, October 16, 2007

When a Lottery is Not a Lottery

Even KIPP’s critics acknowledge that KIPP students are chosen by lottery. What it means to be chosen by lottery is that there is no intermediate step between picking your name out of a hat and deciding whether you get to show up on the first day of school. From media depictions of KIPP, I was under the impression that a certain number of names are chosen and, give or take a handful of kids that represent random attrition, these are the kids who matriculate at KIPP.

It turns out that's not exactly true. For example, consider the four KIPP schools that are located in New York City. From their websites, I found that students are chosen and then asked to come to the school for a meeting, or to have a phone conversation, to discuss the school's expectations; after these interactions, parents - or potentially the schools - decide whether the children will enroll at KIPP.

Contrary to many of KIPP’s critics, I don’t see this as strategic action (i.e. an attempt to cream the best students). KIPP wants to run schools where kids are committed to the culture, and this makes a lot of sense. Meeting kids individually before school starts is a necessary part of this process. KIPP schools ask kids to do things that are not required by regular public schools, and it’s impossible to run such a school without student and parent buy-in.

That said, here is the text from the KIPP A.M.P. application form, which is available at their website:

KIPP: A.M.P. Academy will extend offers of admission to the first 77 children whose names are picked in the lottery. If my child’s name is selected in the lottery, KIPP: A.M.P. Academy will contact me at the above-listed address and phone number to schedule a meeting. After my meeting, I will have to decide whether to accept the offer of admission and register my child at KIPP: A.M.P Academy for the 2007-2008 school year. If KIPP: A.M.P. makes a good faith effort to reach me but is unable to schedule a meeting, my child’s place at KIPP: A.M.P. Academy will automatically be surrendered to the first child on the waiting list.

I thought this might be specific to New York, so I checked out KIPP Indianapolis, where there is not just a phone call or a meeting, but a home visit before formal enrollment:

KIPP Indianapolis College Preparatory will extend offers of admission to the first 85 children whose names are picked in the lottery…If my child’s name is selected in the lottery, KIPP Indianapolis College Preparatory will contact me at the above-listed address and phone number to schedule a home visit.

It’s certainly possible that there is not significant attrition as a result of this process. From KIPP’s end, it would be helpful to know how much lottery-to-first-day-of-school attrition we’re talking about. In other words, if a KIPP school draws 100 names, I would like to know what proportion of the original 100 attend KIPP, and also know the reasons why lottery winners chose not to attend. (This is a different kind of attrition than the post-matriculation attrition covered in Ed Week’s recent article; I’ll discuss this on Thursday.)

What’s the big deal? From the researcher's and public policymakers' perspective, the primary benefit of lottery selection is that it results in balancing the treatment and control groups on observable characteristics – i.e. students’ test scores, free lunch status, etc – and their “unobservable” characteristics – i.e. their motivation, aspirations, and commitment to KIPP’s principles. On any dimension we can think of, the treatment and control groups should be indistinguishable. We can then compare these groups to figure out how effective KIPP is, as I wrote in yesterday's post.

But if we draw a lottery of 100 people and after learning more about KIPP’s approach, 30 kids who were less motivated select out or are counseled out because they are a bad fit, we end up comparing very different treatment and control groups. The treatment group is now much more motivated/committed than the control group – many of whom might have opted out as well if they had these meetings before entering the lottery.
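The scenario above can be made concrete with a small simulation. Every number here (the effect size, the motivation scale, the 30-student dropout rule) is invented purely for illustration:

```python
# Small simulation of the selective-attrition scenario described above.
# All numbers are invented for illustration.
import random

random.seed(0)
TRUE_EFFECT = 5.0   # the "real" KIPP effect, identical for every student

def outcome(motivation, treated):
    # Test score depends on motivation plus the treatment effect.
    return 50 + 10 * motivation + (TRUE_EFFECT if treated else 0.0)

winners = [random.random() for _ in range(100)]  # motivation of lottery winners
losers = [random.random() for _ in range(100)]   # motivation of lottery losers

# The 30 least-motivated winners select out after the post-lottery meeting.
stayers = sorted(winners)[30:]

naive = (sum(outcome(m, True) for m in stayers) / len(stayers)
         - sum(outcome(m, False) for m in losers) / len(losers))
print(f"true effect = {TRUE_EFFECT}, naive estimate = {naive:.1f}")
# The naive estimate overstates the true effect, because the remaining
# treatment group is more motivated than the control group.
```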

Again, I'm not sure that school leaders are aware that these post-lottery screening processes effectively result in non-lottery admission. For example, an old friend of mine recently started a KIPP-style charter school. This is a remarkable and brilliant guy who is deeply committed to improving educational options for disadvantaged kids; he’s not trying to play the system. His school drew 150 kids in the lottery for 100 seats at his school. His rationale for over-drawing was that many kids would realize they wouldn’t like the extra time, their parents wouldn’t accept the culture of the school, or the school would feel the family wasn’t sufficiently committed, and about 50 kids would withdraw even before school began. He still believed his students represented random draws from the lottery pool – after all, he argued, these are all poor and minority kids. I am not sure I was successful in convincing him otherwise.

The take-home lesson from this post is that we need to beware of the term “lottery” – it doesn’t always mean what we think it does.

Monday, October 15, 2007

Do KIPP schools have a positive effect on their students' achievement?

KIPP has been held up as a savior by its supporters and used as a perennial punching bag by its critics. In my view, these exchanges have not been particularly illuminating. And in most battles, both sides have neglected some important questions. So each day this week, I'll write about some of the issues raised by the KIPP case.

The Knowledge is Power Program (KIPP) is a network of 57 schools in 17 states. KIPP has received a significant amount of media attention for the results their schools have posted with urban students. A quick tour through the KIPP Annual Report reveals some impressive gains. What do we need to know before confirming that KIPP has a positive effect on their students’ academic achievement?

One way of outlining the challenge with evaluating KIPP is to think about how we would design a study to estimate its effects on the students KIPP serves. Recall that I am not asking whether KIPP provides a solution for students beyond those currently attending KIPP schools – this is a question I will address on Wednesday. But the first question we need to ask is whether a student attending KIPP is better off than she would have been had she attended a non-KIPP school.

Many journalists have attempted to answer this question by comparing the performance of students in a KIPP school with those at the closest neighborhood schools. For example, Paul Tough wrote in last year’s New York Times magazine:

When the scores [at KIPP Bronx Academy] are compared with the scores of the specific high-poverty cities or neighborhoods where they are located…it isn’t even close: 86 percent of eighth-grade students at KIPP Academy scored at grade level in math this year, compared with 16 percent of students in the South Bronx.

Tough observes that KIPP students are doing much better, and concludes that KIPP is effective. What’s wrong with this argument?

First, the fact that some students selected into a KIPP lottery makes them different from those who did not. It may be that their parents are more involved in their education, that they are having a particularly bad experience at their neighborhood school, or that their parents can no longer pay for private school. Whatever the reason, families who select in, even if they are all poor and minority families, are different by virtue of choosing a non-neighborhood school.

Lots of choice advocates will spar on this point, and argue that everyone wants a better choice for their children, so there is no selection problem. While rhetorically effective, anyone arguing that families that choose into a charter school are the same as those who don’t is simply wrong. Random assignment is the gold standard of causal inference in the natural and social sciences, and kids are not randomly assigned to KIPP lotteries. Saying that 80% of the kids are poor and 90% are African-American and Hispanic doesn't solve this problem. Even if KIPP kids had test scores identical to their neighborhood school peers, we still couldn't compare KIPP and neighborhood school kids who didn't opt in with any confidence because there is selection on "unobservables" - things like motivation and aspirations that are not measured by administrative datasets used to make these comparisons.

To get around this problem, KIPP has compared students' baseline performance with their performance after three years in a KIPP school. Jay Mathews summarizes the results of these analyses in an article earlier this year:

A KIPP analysis of the scores of about 1,400 students in 22 cities who have completed three years at KIPP show they went from the 34th percentile at the beginning of fifth grade to the 58th percentile at the end of the seventh grade in reading and from the 44th percentile to the 83rd percentile in math....Gains like that for that many disadvantaged children in one program have never happened before.

The argument is that students’ growth demonstrates the effect of the KIPP school. The problem with this approach is that we have no way of knowing whether these students would have made similar gains anyway.

The best approach, and one that Mathematica has been contracted to carry out, is to compare students who entered the lottery and won with those who entered the lottery and lost. To keep this example as straightforward as possible, let’s assume that all lottery winners enroll and stay, and all lottery losers go to their neighborhood school, and there is no attrition in either case. We can now compare the achievement of these two groups and call the average difference the “treatment effect on the treated” – the effect of receiving a KIPP education on the students who received it. If the KIPP students are better off, we can say that KIPP “worked” for them.
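Under these ideal assumptions, the estimate is nothing more than a difference in group means. A minimal sketch, with invented scores:

```python
# Under the ideal assumptions above (full compliance, no attrition), the
# "treatment effect on the treated" is just a difference in group means.
# Scores are invented for illustration.
winners = [68, 72, 75, 70, 80]   # lottery winners, who attended KIPP
losers = [65, 70, 66, 69, 71]    # lottery losers, at neighborhood schools

tot = sum(winners) / len(winners) - sum(losers) / len(losers)
print(f"treatment effect on the treated: {tot:.1f} points")  # 4.8
```

The randomization is what licenses this simple comparison; the two assumptions discussed next are what can break it.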

The effectiveness of the research design I just proposed rests on two assumptions. First, we need to be sure that the lottery is actually a lottery, i.e. that we are not dealing with a “broken experiment.” (For a good explanation of this problem, see the Cullen and Jacob paper I wrote about last week.) A second way in which the design I described falls apart is when there is “selective attrition.” Selective attrition is the idea that people who choose to leave an experiment are different in one way or another. I’ll discuss these two issues later in the week.

For now, I'll conclude that we know that 1) KIPP kids perform better, on average, than their neighborhood school peers, 2) KIPP kids exhibit very large value-added gains on standardized tests. But we actually don’t know if KIPP kids are better off academically by virtue of attending KIPP than they would have been if KIPP didn't exist. There are certainly good reasons to believe that they are - i.e. they are in school substantially more - but the size of the "KIPP effect" is probably much smaller than we currently believe it to be.

It perplexes me that journalists continue to downplay these concerns. For example, in an article from a few years back, Jay Mathews wrote:

Whatever the academic or family characteristics of incoming KIPP students, they are clearly disadvantaged -- 82 percent of all KIPP students qualify for federal lunch subsidies -- and at KIPP have achieved gains in reading and mathematics far above those of other programs trying to help such children.

And Paul Tough wrote:

In some ways, the debate seems a trivial one — KIPP is clearly doing a great job of educating its students; do the incoming scores at a single school really matter?

I hope this post convinces you that these evaluation concerns are non-trivial. If we really want to know if these schools are working and how large their effects are (and for whom), we need to take these issues seriously. Perhaps Jay Mathews himself said it best when he wrote:

I understand why we education reporters try to make KIPP sound like more than it is. We are starved for good news about low-income schools. KIPP is an encouraging story, so we are tempted to gush rather than report. We don't ask all the questions we should.

Sunday, October 14, 2007

The KIPP Question

The KIPP model has been held up in many circles as a silver bullet solution for urban education, and the “KIPP question” that I’ve heard many times is, “If KIPP can achieve dramatically better results with urban kids, why can’t everyone else?” This week, I’ll consider some of the issues raised by the KIPP case.

Monday: Does KIPP have a positive effect on the academic achievement of the kids they serve, and what kind of data do we need to decide?

Tuesday: What does it mean to hold a lottery, and how is the idea of a lottery relevant for evaluating KIPP?

Thursday: Does KIPP present a solution to the problems of urban education? In other words, would KIPP-style schools improve the achievement of the kids who currently do not attend KIPP schools? What are the implications of attrition – that is, that kids who begin in 5th grade may not stay through 8th grade – for our understanding of KIPP’s effects?

Friday: What lessons does KIPP offer for urban education reform?