My Bubble Sheets’ Flaws
In the ongoing debate over how to improve education, the newly popular idea is to evaluate teachers based on how their students perform on tests. One conservative Minnesota plan in particular would require that 50 percent of a teacher's evaluation be based on test scores. This approach is seriously flawed for several reasons.
First, studies from academia and education think tanks have shown that the correlation between test scores and teacher performance is erratic. The theory is that tests can measure the value a teacher has added to a student's learning (known as the "value-added model"), but one year student test scores can make a teacher look like a miracle worker, and the next like a failure.
The MET Project (an endeavor funded by the Gates Foundation to design a valid teacher evaluation tool) issued a report last September that assumed test scores can be correlated with other measures of teacher effectiveness. Respected Berkeley economist Jesse Rothstein found fault with the MET assumption, however. In his study of the project, Rothstein found the assumption skewed the results. In fact, "(e)ven in math (which was more stable than reading), a teacher with a value-added score at the 25th percentile in one year (less effective than 75 percent of other teachers) is just as likely to be above average the next year as she is to be below average. There is only a one-in-three chance that she will be that far below average in year two compared with year one," according to the Shanker Blog, published by the respected education think tank, The Shanker Institute.
So many factors affect testing (home and school environment, a fight with friends, a question that makes the student give up, lack of breakfast) that to peg someone's job security to such a mercurial tool is wrong.
Tests are also seriously flawed in their design and administration. The MCA tests used in Minnesota provide only a snapshot of student performance, with no baseline against which a teacher's contribution can be measured. Longitudinal tests, those that measure a student's performance throughout the year, have been implemented in St. Paul this year. While better in theory, these tests also have their limitations. Students must either complete the test in a class hour or return (missing more class) the next day. Sometimes they do, sometimes they don't, rendering their scores useless. Additionally, students who complain about the number of tests they have to take are not taking these tests seriously and randomly mark answers. They see no personal consequence. How can such a test be used to measure a teacher's effect on that child?
The design of the tests is also flawed. Two years ago, the state decided to end the constructed response on the GRAD MCA Reading test that allowed students to write a response and defense of their interpretation of literature—not to improve the test, but because the state did not want to spend the money it would take to hire evaluators to score the written responses. Now Minnesota evaluates interpretation of literature by multiple choice, meaning there is only one right interpretation. Anyone who has read complex text knows this not to be true. This is a test students are required to pass in order to graduate, and it is flawed because of financial considerations. Now, Rep. Pat Garofalo and his followers want to stake teachers' careers on it, too.
Another strike against using tests for teacher evaluation is the chilling effect doing so will have on student learning. Garofalo's proposal, which has now passed the Minnesota House, calls for the creation of even more tests so that all teachers, not just those in English and math, can be evaluated.
What are we doing to our children? Does he realize how much time students already lose to testing? How much more money will the state have to spend implementing this? At a time when everyone says they recognize the importance of critical thinking, such a measure would pressure teachers, for their own protection, to drill and pretest. Do we really want an evaluation tool that draws teachers' attention away from student learning to themselves?
Should teachers be evaluated? Absolutely. But we have to consider what we want out of that evaluation. If we want a richer, more rigorous education for our children, then we must use tools that lead us to that end. The Peer Assistance and Review (PAR) program started by the Toledo Federation of Teachers nearly 30 years ago is such a tool. It treats teaching like a profession and creates a system of teachers coaching and evaluating teachers that is far superior to the traditional model of evaluation by principals. It recognizes that teaching is a multi-level, complex task that cannot be reduced to a test score, and it has won national acclaim and awards.
If implemented with fidelity, it weeds out weak teachers in their first year, helps underperforming teachers improve or counsels them out, and makes the whole system stronger. We have begun using PAR in St. Paul schools this year, and everyone from administrators to classroom teachers is impressed by its strength in supporting and evaluating new teachers in this pilot year.
So the question is: what kind of education do we want for our children? Do we want them treated like widgets that can be measured by a single score at the end of an assembly line, or do we want them honored and nurtured by teachers who look beyond the test to their future?