Preliminary PARCC assessment results tell us what we already knew: If we test students against harder standards, their scores will be lower than on earlier exams. Despite the state paying $40 million annually for the exams, they provide only minimal information. Why?
First, the exam itself has flaws. Practice tests featured ambiguous and poorly worded items. We do not know if those errors remained or how they affected scores. Spanish-language math questions were never field-tested before they were administered.
PARCC itself is frustratingly opaque. Technical reports on 2014 field tests were never published. Basic questions remain unanswered: How did device differences (e.g., iPads, Chromebooks, or PCs) affect results? How did students who experienced technical glitches perform versus those who did not? We do not know because PARCC has released no relevant data.
Second, the exams cannot answer many questions parents have. Are school curricula properly aligned with Common Core State Standards (CCSS)? PARCC did not collect data on school curricula, so we cannot answer that question. Was the average score in a grade low because a few students did very poorly or because most did slightly sub-par? Score distributions are not reported, so we cannot know. And because no test, even one as long as PARCC, can assess more than a limited range of what students know, the results offer little guidance for instruction.
Third, the proficiency ranges or “cut-scores” — the scores that mark the thresholds between the levels of mastery on a test — are inherently arbitrary. If a different, equally qualified group had set the cut-scores, the ranges would be different. Cut-scores are not psychometric properties but committee judgments, subject to political pressure, and PARCC has not released the rules it used to set them. Setting proficiency bands now is unnecessary; the scores alone provide a baseline. Changing the ranges later will be difficult, because testing advocates will charge that PARCC has weakened standards.
We can, however, make some predictions confidently. Wealthier districts and students from richer families will do better on average than less fortunate ones. When Illinois adopted the CCSS and decided to use computer-based exams, it never provided sufficient funds to improve curricula or technology. General State Aid is over $530 million lower in real terms today than it was then.
Test-boosters will tell us that the exams will get better. That same promise has been made every year since test-based accountability began under No Child Left Behind. We’re still waiting.
Christopher Ball is a member of the board of Raise Your Hand for Illinois Public Education, an education reform advocacy group.