Vocabulary-Matching in Preservice Coursework: Learning Critical Terminology and Principles of Progress Monitoring

Michelle Popham Poster Session

All levels of education require valid and reliable assessments to measure student learning. Curriculum-Based Measurement (CBM) is one assessment method that has demonstrated reliability and validity for measuring content-area knowledge in secondary-level classes (Espin et al., 2013). Using CBM in a preservice setting allows vocabulary knowledge to be assessed while simultaneously modeling the curriculum-based measurement procedure for future teachers.

The purpose of the presentation was to describe the technical features of vocabulary-matching measures used in a special education course, as well as preservice teachers' satisfaction with vocabulary activities that supported their learning about progress monitoring. The research questions were: (a) Are vocabulary-matching measures used in an introductory course on special education reliable and valid? and (b) How do preservice teachers rate their satisfaction with using vocabulary-matching progress measures across the semester?

Research Methods

The study was conducted across a 15-week semester in three sections of an introductory-level special education course. Participants included 114 undergraduates either majoring or minoring in education. Every other week, participants took a 4-minute, 20-item vocabulary-matching measure that covered critical terms addressed in the textbook (Hallahan, Kauffman, & Pullen, 2015). At pre- and posttest, participants took a 74-item, multiple-choice vocabulary assessment. Participants received scores on their previous vocabulary measure and graphed their progress during weeks in which a vocabulary probe was not administered. At the end of the semester, a cumulative, 70-item, multiple-choice final exam covering knowledge across the course was administered. Participants completed a questionnaire in which they rated their satisfaction with taking the vocabulary measures across the course and the contribution of the vocabulary activities to their knowledge.
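The biweekly probes described above lend themselves to the standard progress-monitoring summary: graphing each score and fitting a trend line whose slope is the student's rate of growth. The sketch below illustrates that idea with invented scores (not data from the study), using a least-squares line over a hypothetical student's 4-minute, 20-item probe results:

```python
import numpy as np

# Hypothetical biweekly scores (items correct out of 20) for one student
# on the 4-minute vocabulary-matching probes across a 15-week semester.
weeks = np.array([2, 4, 6, 8, 10, 12, 14])
scores = np.array([7, 9, 8, 12, 13, 15, 16])

# Least-squares trend line: the slope estimates the student's rate of
# improvement in items correct per week.
slope, intercept = np.polyfit(weeks, scores, 1)
print(f"Growth rate: {slope:.2f} items correct per week")
print(f"Trend-line estimate at week 15: {intercept + slope * 15:.1f}")
```

In practice, students in the course graphed their scores by hand during off weeks; the trend-line calculation simply makes the "rate of progress" interpretation explicit.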

Data Analysis and Results

Pearson correlations were calculated to examine the validity of the measures as indicators of performance. Results for internal consistency demonstrated evidence of moderate reliability (r = .45–.75). Correlations between the vocabulary-matching measures and the multiple-choice vocabulary assessment and final exam demonstrated moderate concurrent validity (r = .47–.58) and low-to-moderate predictive validity (r = .28–.49). Student feedback indicated that the vocabulary-matching probes helped students better understand progress monitoring, as well as their own performance across the course.
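The coefficients reported above are ordinary Pearson correlations. As a minimal sketch of how a concurrent-validity estimate like these might be computed (the scores below are invented for illustration, not the study's data):

```python
import numpy as np

# Invented data: total vocabulary-matching score and multiple-choice
# assessment score for ten hypothetical students.
probe_totals = np.array([55, 62, 48, 71, 66, 52, 78, 60, 45, 69])
posttest     = np.array([50, 58, 44, 65, 60, 49, 70, 55, 42, 63])

# Pearson correlation: covariance of the two score sets divided by the
# product of their standard deviations; np.corrcoef returns the full
# 2x2 correlation matrix, so we take the off-diagonal entry.
r = np.corrcoef(probe_totals, posttest)[0, 1]
print(f"Concurrent validity estimate: r = {r:.2f}")
```

The same calculation, applied to probe scores and final-exam scores, would yield the predictive-validity coefficients.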


Discussion

The use of vocabulary-matching measures with preservice teachers may be a practical way to provide hands-on experience with progress-monitoring probes in content areas. The activities can be used to illustrate principles of progress monitoring, and the graphed results give individual students feedback about their progress in the course. Further investigation is needed to establish the validity and reliability of vocabulary-matching measures for special education vocabulary.


References

Espin, C. A., Busch, T. W., Lembke, E. S., Hampton, D. D., Seo, K., & Zukowski, B. A. (2013). Assessment for Effective Intervention, 38(4), 203–213.

Hallahan, D. P., Kauffman, J. M., & Pullen, P. D. (2015). Exceptional learners: An introduction to special education (13th ed.). Boston, MA: Pearson.