Ericka Mellon wrote an article about the performance of high school students at HISD schools on a "college ready" assessment in the Houston Chronicle.
The article describes advances HISD has made in the number of "college ready" students on the TAKS Exit Level exam. "College ready" is defined (as far as I can tell) as a student receiving a scale score of at least 2200 on each of the Math and Reading sections of the exam. My first two questions are:
- Do these scale scores correlate with any future success, such as college performance or acceptance?
- Scale scores are a way of applying a "fudge factor" to try to normalize score reporting across different years' tests (just like SAT scores, for example); are we sure they represent something meaningful? Can they easily be adjusted year to year to affect the numbers?
Here's an example of the scale score issue: one year, a student can get 23 of 36 questions correct and receive a scale score of 2100. The next year, the test is determined to be "easier," so a student needs 26 of 37 correct for the same scale score. What's not clear to me is whether the composition of the 23 or 26 correct answers affects the student's scale score. On the first test, if a student shows complete mastery of most topics but gets 13 wrong, representing 0% mastery of two core topics, is that the same as a student who misses 13 questions scattered here and there, but has a decent grasp of all the concepts? Can you compare two students who each get 23 correct - one who gets 23 of the 24 "easy" questions right and none of the 12 "hard" ones, and another whose 23 correct answers span a mix of hard and easy questions? Does it make sense to map a scale score to just a raw score, or should the questions or their distributions be weighted? See the TEA web site documenting the conversion of raw scores to scaled scores on the TAKS.
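To make the concern concrete, here is a minimal sketch of how a pure raw-to-scale conversion table works. The numbers below are invented for illustration (the real tables are published by the TEA per administration); the point is that a lookup keyed only on the raw score discards which questions were answered correctly:

```python
def scale_score(raw_correct, table):
    """Look up the scale score for a raw score in a given year's table."""
    return table[raw_correct]

# Invented conversion tables for two administrations of the "same" test.
# Suppose in year 1, 23 of 36 correct maps to 2100; in year 2 the test is
# judged "easier," so 26 of 37 is needed to reach that same 2100.
table_year1 = {22: 2070, 23: 2100, 24: 2140, 25: 2170, 26: 2200}
table_year2 = {23: 2020, 24: 2050, 25: 2080, 26: 2100, 27: 2140}

# Two students with the same raw score get the same scale score,
# regardless of WHICH questions they answered correctly:
student_a = 23  # 23 of the 24 "easy" questions, 0 of the 12 "hard" ones
student_b = 23  # a mix of easy and hard questions
assert scale_score(student_a, table_year1) == scale_score(student_b, table_year1)

print(scale_score(23, table_year1))  # 2100
print(scale_score(26, table_year2))  # 2100 - same scale score, harder year
```

The lookup has no input for topic coverage or question difficulty, which is exactly the weighting question raised above.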
Never mind. Let's say students who meet the scale score test are all equally "ready for college". One of the documents Ms. Mellon attaches at the bottom of her article shows the achievement levels per high school in HISD. The numbers are interesting. DeBakey has an impressive record of preparing their students for the TAKS exit, the best in town. Almost all their kids score at least proficient in Math and Reading. Bellaire last year saw 82% of their kids "pass" in Math, 76% in Reading. Carnegie, 95%/94%, and that's way up from 87%/72% (!!) in 2007. I love the HSPVA numbers, which kind of buck the trend of doing better in math than reading: 84%/96%. Lamar's numbers are 65%/63%.
What does this mean for a parent trying to decide which HS is right for their kid? On the one hand, if you are not worried about your kids passing these standards, maybe these aggregate numbers aren't that important to your individual case. On the other hand, I worry that schools have been or will be looking at these numbers, setting campus goals, and then expending substantial resources trying to get those numbers up. Although that's not a bad reaction (again, assuming these metrics actually measure something meaningful), in practice I fear it means that fewer or no resources at those schools will be focused on the students who are in no danger of missing these goals - the advanced kids who could also use more attention to better develop their own skills and interests. I worry more and more that campus educational resources are a zero-sum game, and when the balance shifts inordinately toward bringing the bottom students up, the top students get less attention. What should that balance be?
As parents, should we focus on sending our kids to schools with the best records, assuming their staff already feels confident in their students' performance and can focus on deeper or broader curricula? Or should we worry that schools at the top are there because they're focusing so many resources on passing these tests, and our children may languish?