April 2017

Closing today’s assessment gaps

It’s the time of year when assessments are on nearly every educator’s mind and calendar. Ironically, even as it becomes clear that we’re spending too much time on assessment, we may not be spending the right “kind” of time. So, amid the frenzy, testing season is an opportunity to share two assessment gaps we’ve identified, gaps that need closing if we want literacy to improve.

#1. The gap between test design and test use

On one end of the continuum, we have high-stakes tests. These group-administered “outcome” assessments were designed for program evaluation – a tool for leaders to look at their population’s achievement in broad content areas (ELA, math, etc.) against grade-level expectations. That’s it.

But in practice, item-level data from these tests are often analyzed to see how each student did and to plan accordingly, even though these assessments were not designed to provide information about an individual’s profile and don’t measure the foundational skills that build strong readers (see our December newsletter).

On the other end of the continuum, teachers administer in-depth assessments to establish each student’s reading level. These “formative” assessments are done periodically and take 30-45 minutes per student (at 45 minutes each, that’s 45 hours a year for a classroom of 20 students assessed three times). These teaching or diagnostic tools were designed to get to know the reader, determine her instructional level when working with specific text(s), and inform instructional planning for that reader.

But often these data are aggregated and scores compared across readers, even though each result necessarily rests on observations and judgments that vary with who administers the assessment. And the assigned levels are regularly used to group students, despite the fact that in any given group one child could have trouble with decoding while another has underdeveloped vocabulary (see our November newsletter), and the two would need different supports.

#2. The gap created by a missing assessment type

The examples above illustrate the problem of using assessments in ways they weren’t designed for and, as a result, spending inordinate amounts of time assessing students; in both cases, the data aren’t the right kind for aggregating results or grouping students by profile. These practices signal efforts to use assessment to improve instruction, but with the wrong assessment types. Closing the gap between the high-stakes test for population-level program evaluation, on the one hand, and the formative assessment for in-depth diagnosis, on the other, means turning to “screening” assessments: brief, standardized tools that measure specific skills against an established reference point outside the curriculum.

Much like the screenings in a doctor’s office (e.g., blood pressure, heart rate), screeners allow educators to quickly pinpoint which component skills (word reading fluency, vocabulary, etc.) are progressing at age-appropriate rates, and which areas, and which readers, need further attention and investigation. Screening takes about 15 minutes per child, or roughly 15 hours a year for a classroom of 20 students screened three times.

With these gaps addressed, less time could be spent administering assessments, and children could be grouped and supported based on their skill profiles, moving us toward the effective, targeted teaching every student needs.
