This is the second in a three-part blog series highlighting HumRRO's experience evaluating state K-12 assessment systems and exploring some of our early lessons learned. The first installment focused on interim assessments. The third blog, to be published tomorrow, will focus on competency-based local assessments.

Caroline Wiley, Ph.D., Principal Scientist at HumRRO, co-authored this blog.

Educators, like most professionals, are given more responsibility as their expertise and experience grow. Most school principals, content specialists, and district-level personnel have been classroom teachers. Because the results of summative end-of-year assessments did not meet their needs when they were teachers, they are understandably skeptical that those tests will meet their needs as educational leaders. Consequently, they look for better options and often adopt commercially available diagnostic/growth assessment systems.
Whereas Part 1 of this blog series focused on statewide interim assessments, we now turn to commercially available diagnostic/growth assessments. These are usually administered by individual schools or districts rather than statewide, and their results are therefore not typically used in states' school accountability determinations. Like statewide interim assessments, diagnostic assessments have grown in use in recent years as a supplement to summative end-of-year assessments, in the hope that they will provide more individualized feedback for students and teachers.
Many commercial systems are available to educators, but they vary widely in scope and in the information and support they provide. The common thread is the claim that they give educators different, and potentially more useful, information than end-of-year summative assessments for teaching relevant content to their students. This claim should be vigorously investigated and evaluated, both to ensure that the cost of these programs is justified and to verify that educators and their students benefit from participating in them. Because these systems do not receive the same attention as the statewide assessments used in federal accountability, however, such investigations rarely receive the same scrutiny. Several vendors, to their credit, seek external validation and maintain rigorous research agendas for their assessment systems, but even those efforts are not subjected to the same peer review as statewide assessments.
Lessons Learned
HumRRO has conducted evaluations for several of these programs, each with its own unique challenges. There are, however, some recurring lessons we have learned while evaluating diagnostic/growth assessments that are worth pointing out.