Poverty - An Excuse or A Reality?

Numerous studies have been conducted that establish the correlation between poverty and student standardized test scores.  I am providing a few here.

The importance of this relationship cannot be stressed enough, nor publicized sufficiently, until the prevailing system of "consequential accountability" promulgated by the Bush administration's No Child Left Behind legislation is itself recognized, as I believe it one day will be, as a major provocateur of the failed education reform effort that has taken on a life of its own.

Every time the argument is rightly made against the use of high stakes standardized testing as a punitive accountability measure using the rationale of this poverty factor, proponents of the testing regime will label it as just another excuse or allege that it represents a belief that "poor kids can't learn."  This most recently happened to me during a televised panel discussion debating the Common Core Initiative.  You can hear the host hurl that accusation at me in this video from the Louisiana Public Broadcasting presentation.

http://beta.lpb.org/index.php/publicsquare/topic/01_14_-_decoding_common_core/01_14_-_decoding_common_core

I was responding to the usual use of Louisiana NAEP scores, which rank Louisiana as a low achiever compared to other states.  In fact, here is a chart of Louisiana NAEP scores for every year tested since 1992.  You can see the steady improvement EVERY year and a comparison of our AVERAGE scores to the national AVERAGE.  This website also allows you to compare Louisiana's scores to those of every other tested state.

http://nces.ed.gov/nationsreportcard/states/

Now look here and see that Louisiana ranks 49th out of the 50 states and D.C. in poverty, with a higher poverty rate than all but Mississippi and D.C.

http://en.wikipedia.org/wiki/List_of_U.S._states_by_poverty_rate

Here is an analysis done by Louisiana researcher Noel Hammett showing the correlation between the poverty level of Louisiana schools and their performance scores.

https://www.facebook.com/photo.php?fbid=10202198231061725&set=gm.537365276342463&type=1&relevant_count=1

A Google search on your own will yield much more research on the subject.  My purpose in pointing this out AGAIN is to reiterate that NONE of the corporate-style education reforms forced down our throats since NCLB have addressed strategies that would improve learning opportunities in the classroom or reduce poverty.  Reform has been hyper-focused on accountability via the use of standardized test scores as a false measure.  You can't measure quality (learning) with a quantitative measure (a test score); you can only measure how many answers were marked correctly on a test.  As a teacher, I know that simply failing a student based on a test score does not affect future performance.

It's as simple as that.



Grading Schools on Poverty
Professor of Biological Science
Florida State University
Tallahassee, FL 32306-4370
The Bush Administration and the legislature, after months of lobbying, arguing, wrangling, dealing and agonizing, have given us the A-Plus Plan with its School Accountability Report (http://www.firn.edu/doe/schoolgrades/account.htm). Upon analysis, this Report turns out to be merely an elaborate and expensive way to grade schools on the poverty or affluence of their students.
The Bush/Brogan School Accountability Report assigns each school a grade primarily on its raw, overall standardized test scores. Because the standardized test performance of schools is very reliably predicted by poverty, the poverty level of a school is by far the strongest predictor of its grade. In fact, if you tell me the percent of a school's students that are on supported lunch (an indicator of low family income), I will tell you its Bush/Brogan grade with 80% accuracy. That is, I will be wrong only one time out of five.
If you think I’m bluffing, let me show you that it’s true. Let us simply classify schools by their affluence/poverty makeup: very affluent, moderately affluent, moderately poor, very poor. Next, let us grade them on their affluence or poverty. The most affluent schools get an A, the next group gets a B, and so on, with the poorest schools getting a D. The graph and the table below show how closely the grades based on poverty correspond to those assigned by the Bush/Brogan School Accountability Report. Simply by considering school affluence/poverty, we are able to assign the same grade as the Bush/Brogan "performance-based" system in about 80% of the cases, that is 26 of the 33 schools. And we did this without looking at a single test score.




Graph 1. This graph shows that the percent of the children coming from poor families predicts over three-fourths of the differences in school performance on standardized tests. Grading schools simply on test performance is thus almost equivalent to grading them on poverty, as the correspondence between my "poverty grade" and the Bush/Brogan "performance" grade shows. See table below for a summary.


% supported lunch    School category        "Poverty grade"    Bush/Brogan performance grades identical to poverty grades
less than 20%        very affluent          A                  6 of 7
20 to 35%            moderately affluent    B                  4 of 8
35 to 63%            moderately poor        C                  9 of 10
more than 63%        very poor              D                  7 of 8
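The four-way classification above is easy to make concrete. Here is a minimal sketch in Python (my reconstruction, not the author's code; how values falling exactly on the boundaries 20, 35, and 63 are assigned is my assumption, since the article's ranges overlap at the edges):

```python
def poverty_grade(pct_supported_lunch):
    """Assign the article's 'poverty grade' from the percent of students on supported lunch.

    Bands follow the article: <20% very affluent, 20-35% moderately affluent,
    35-63% moderately poor, >63% very poor. Boundary handling is my assumption.
    """
    if pct_supported_lunch < 20:
        return "A"   # very affluent
    elif pct_supported_lunch <= 35:
        return "B"   # moderately affluent
    elif pct_supported_lunch <= 63:
        return "C"   # moderately poor
    else:
        return "D"   # very poor
```

Run over the district's schools, this lunch-percentage grade matches the Bush/Brogan "performance" grade in 26 of the 33 cases, as the table above summarizes.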
Is this a fair, or even a sensible, way to grade our schools? Only if you think poverty should be punished. Does the Bush/Brogan grade tell us anything new about a school's educational performance? Of course it does not. It tells us what proportion of the student body comes from poor families. In this system, both "good" and "bad" schools attain their status not through their efforts and expertise, but mostly through the good or bad fortune of having few or many children from poor families. It is not my purpose here to dwell on the poverty-performance link. Whatever its reasons, it is a reality faced in much greater measure by schools at the low end of the socioeconomic scale than the high end. No school grading system that does not take this socioeconomic factor into account is useful in telling us how well our schools are really doing, given the socioeconomic status of the students they are asked to educate. Would it not be much fairer to adjust school performance for poverty before grading schools?
I think it would, and hereby offer the Prof. Walter's Level-Playing-Field School-Grading System, which provides a poverty-adjusted estimate of school performance as an alternative to the Bush/Brogan School Accountability Report. We begin with a regression analysis of the school performance data (3 standardized tests) against the poverty level of the student body (% on supported lunch). This statistical method shows that about 80% of the variation in test scores is predicted by the poverty level of the student body. I detailed this relationship in a previous My View column on March 14 (this article can be found on my website at http://www.fsu.edu/~biology/faculty/wrt.html). For every percent that poverty increases, the school's scores drop by an average of 1.6 points. The most affluent schools, those with fewer than 15% poor students, have scores higher than 230, while the poorest, with more than 75% poor students, have scores below 120, less than about half those of the most affluent schools. This same relationship was equally strong in all seven major school systems in Florida. We can work toward eliminating this effect of poverty, but success will come only when we figure out how to help all students learn equally well.
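The regression step can be sketched with numpy, using the per-school (% supported lunch, test score) pairs transcribed from the table later in this article. This is a reconstruction under my assumptions, not the author's original analysis:

```python
import numpy as np

# (% of students on supported lunch, combined score on 3 standardized tests),
# transcribed from the school table in this article
schools = [
    (14, 253), (7, 259), (4, 259), (15, 230), (5, 243), (9, 233), (26, 198),
    (24, 242), (33, 213), (10, 247), (26, 202), (32, 188), (63, 175),
    (56, 184), (83, 136), (39, 209), (56, 179), (44, 195), (54, 162),
    (29, 197), (33, 179), (54, 133), (44, 147), (43, 146), (24, 175),
    (93, 116), (100, 94), (84, 107), (93, 92), (94, 89), (74, 120),
    (65, 134), (45, 115),
]

pct = np.array([p for p, s in schools], dtype=float)
score = np.array([s for p, s in schools], dtype=float)

slope, intercept = np.polyfit(pct, score, 1)   # least-squares line
predicted = intercept + slope * pct            # score expected at each poverty level
residuals = score - predicted                  # the basis for Level-Field grades

# Fraction of score variance explained by poverty (the article's "about 80%")
r_squared = np.corrcoef(pct, score)[0, 1] ** 2
```

On these data the fitted slope comes out near -1.66 points per percent of poverty, consistent with the article's rounded "1.6 points," and r-squared lands around 0.8.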
Next, we take the difference between each school’s actual test scores and the test score predicted by the regression for a school of that socioeconomic condition. These differences are called the residuals, and tell us how much better or worse than average a school tested, given its particular level of poverty. By doing this, we have removed the effect of poverty on test scores. The result is that the maximum difference in test scores has shrunk from 175 points to only about 70 (the lost 105 points are the effect of poverty). Residuals less than zero indicate that (with poverty effects removed) a school did less well than average, and residuals above zero indicate that it did better than average.
Let us now assign letter grades to these residual scores. Here is my scale: above 25 gets an A; between 5 and 25 gets a B; between -20 and 5 gets a C (note that C’s straddle the average, which is zero); between -35 and -20 gets a D; anything below -35 gets an F. Before I assign grades to schools, I want to emphasize that I am doing so only to make a point. I am not advocating such simple data for grading schools.
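The grading scale just described translates directly into a small function (the cutoffs are the article's; how a residual falling exactly on a cutoff is graded is my assumption):

```python
def level_field_grade(residual):
    """Convert a poverty-adjusted residual score to a Level-Field letter grade."""
    if residual > 25:
        return "A"
    elif residual > 5:
        return "B"
    elif residual >= -20:
        return "C"   # C's straddle the average residual, which is zero
    elif residual >= -35:
        return "D"
    else:
        return "F"
```

Applied to the residuals in the table below, Raa's 30.30 maps to an A and Steele's -61.80 to an F.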
The table below lists our elementary and middle schools in the order of the grades assigned by the Bush/Brogan Plan.


School            % supported lunch   Test performance from   After adjustment   Bush/Brogan   Level-Field
                                      Accountability Report   for % lunch        grade         grade
HAWKSRISE         14                  253                      24.69             A             B
KILLEARN          7                   259                      19.06             A             B
DEERLAKE          4                   259                      14.07             A             B
SWIFTCREEK        15                  230                       3.35             A             C
DESOTO            5                   243                      -0.26             A             C
BUCKLAKE          9                   233                      -3.61             A             C
GRIFFIN           26                  198                     -10.36             A             C
RAA               24                  242                      30.30             B             A
SEALEY            33                  213                      16.26             B             B
GILCHRIST         10                  247                      12.04             B             B
MOORE             26                  202                      -6.36             B             C
SULLIVAN          32                  188                     -10.39             B             C
WOODVILLE         63                  175                      28.10             C             A
HARTSFIELD        56                  184                      25.47             C             A
PINEVIEW          83                  136                      22.33             C             B
ASTORIA           39                  209                      22.23             C             B
RUEDIGER          56                  179                      20.47             C             B
FT BRADEN MIDL    44                  195                      16.53             C             B
BELLEVIEW         54                  162                       0.15             C             C
COBB              29                  197                      -6.38             C             C
CHAIRES           33                  179                     -17.73             C             C
APALACHEE         54                  133                     -28.84             C             D
FT BRADEN ELEM    44                  147                     -31.46             C             D
FAIRVIEW          43                  146                     -34.12             C             D
SPRINGWOOD        24                  175                     -36.69             C             F
RILEY             93                  116                      18.95             D             B
BOND              100                 94                        8.58             D             B
OAK RIDGE         84                  107                      -5.00             D             C
WESSON            93                  92                       -5.04             D             C
BREVARD           94                  89                       -6.38             D             C
SABAL             74                  120                      -8.61             D             C
NIMS              65                  134                      -9.57             D             C
STEELE            45                  115                     -61.80             D             F
When graded according to the Level-Field system, there are grade-changes, both up and down, within each Bush/Brogan grade. Now we can recognize that schools like Riley, Hartsfield, and Woodville (to name a few) are doing relatively well compared to other schools of similar socioeconomic makeup. My system recognizes this and rewards them with A’s and B’s instead of the C’s and D’s assigned by the Bush/Brogan system. It might pay us to find out what these schools are doing to rise above the average. On the other hand, my system also shows that schools like Swift Creek, Buck Lake and Griffin do not deserve their Bush/Brogan A’s because they are only average as compared to other schools of similar socioeconomic makeup. Hence, the Level-Field system assigns them a C, because the Level-Field system does not reward schools for being lucky enough to be teaching mostly affluent students.
The case of Griffin highlights another flaw of the Bush/Brogan School Accountability Plan. Griffin received an A, not because of its terrific performance on standardized tests, but because it met several additional criteria. These were: 1) the percentage of long absences or suspensions among students was below state averages; 2) greater than 95% of the student body was tested; 3) no subgroup test performance fell below the minimum criterion; 4) improvement in reading scores without a decline in math and writing over 1998. Only the last two can actually be considered academic performance. The first two are bureaucratic tricks. It is a bit like requiring that an athlete run the 100 yd. dash in 10 seconds, but you credit him with half a second if he wears the right color shorts, and another half second if he pulls his socks up before starting. Neither has anything to do with performance, and both serve to obscure real performance.
You may ask, "Well, how are we supposed to know how our schools are really doing?" I suggest that we insist on a much more sophisticated analysis of school data by the state Department of Education, instead of letting them just plunk it onto their web site or onto a newspaper page so the public can worry about what it means. At the very least, school performance needs to be adjusted for the nature of the student body. Better yet, let us not pretend that a single number can adequately assess the performance of our schools. Performance must be measured, not by any single number, but by the relationship between what goes into a school and what comes out. The large and expensive bureaucracy at DOE can reasonably be expected to explain to the public how the data are related to each other, what they mean and how our schools are really doing. This will allow us to discover what works and what doesn’t work, and thus to spend money more effectively. Only a full statistical analysis of the data followed by publication of the results can lead to an effective plan that enjoys widespread public understanding and support. Then, perhaps well-intentioned but pointless maneuvers such as the Bush/Brogan School Accountability Plan will have a more difficult time in coming to life.