Monitoring Progress of Students with Learning Disabilities in Content Areas
While students identified as having learning disabilities are often given supports in the core academic areas of reading, math, and writing, these students also engage in content areas such as science and social studies. Content areas require students to learn a wide range of new vocabulary within the academic language (Mooney & Lastrapes, 2016). Additionally, teachers in both special and general education may struggle to monitor student progress in learning new concepts and academic language frequently, since most assessments in these areas are summative in nature (i.e., end-of-unit tests). Because students with learning disabilities are regularly monitored on their skills in the core academic areas, it stands to reason that teachers would also want a method to observe growth in the content areas. This article discusses recent developments in content area Curriculum-Based Measurement (CBM; Deno, 2003) and how vocabulary-matching measures can be developed and incorporated into inclusive classrooms.
CBM has been established as a formative measure used in the core academic areas to help teachers plan their instruction and subsequently monitor progress (Hosp, Hosp, & Howell, 2016). It has been recognized as a technically adequate measure, as it is both reliable and valid (Deno, 2003). More recently, CBM has been developed in the content areas of social studies (e.g., Beyers, Lembke, & Curs, 2013; Espin, Busch, Shin, & Kruschwitz, 2001; Espin, Shin, & Busch, 2005; Mooney & Lastrapes, 2016) and science (e.g., Espin et al., 2013; Johnson, Semmelroth, Allison, & Fritsch, 2013).
Two forms of CBM that have been specifically investigated in the content areas are maze and vocabulary-matching. Maze has been established as a screening and progress-monitoring tool in reading in both elementary and secondary settings; however, the idea of using maze in the content areas has primarily been investigated at the middle school level (Johnson et al., 2013). Less investigation has been conducted on the use of content area maze at the elementary level. Furthermore, at the secondary level, when maze and vocabulary-matching measures are compared, vocabulary-matching CBM appears to produce more reliable measures and to be more effective at predicting performance in content areas (Espin & Foegen, 1996; Mooney & Lastrapes, 2016).
Vocabulary-Matching CBM in Content Areas
A wealth of vocabulary exists within content area subjects such as science, social studies, and even math. Success in the content area requires knowledge of general concepts and academic language (describe, compare, analyze) as well as content-specific terms (photosynthesis, gerrymandering, algorithm) that are described as "tiers" of words in the literature (Beck, McKeown, & Kucan, 2013). Not only must teachers identify pertinent vocabulary within a lesson, they also must provide instruction on these terms to develop vocabulary knowledge at the required level. Vocabulary knowledge has been described as a continuum, ranging from no knowledge of a word to a deep understanding that includes the ability to identify its meaning out of context, its relationship to other words, and its metaphorical uses (Phythian-Sence & Wagner, 2007). Furthermore, to ensure that students acquire the relevant terminology, it is necessary to assess whether students have mastered the vocabulary to the desired level of knowledge. Because teachers are actively involved in the development of vocabulary-matching CBMs for their curriculum, they can assess students' progress toward the desired level of vocabulary knowledge mastery.
Vocabulary-matching CBMs are a formative measure that teachers implement to monitor a student's movement through a curricular area over the school year. There are a variety of benefits to incorporating this type of measure in the classroom. Deno (2003) notes that CBM is a process that is easy to learn and highly time efficient (i.e., only 5 minutes). Furthermore, the measures are sensitive over time, meaning they can detect small changes; that is, they provide teachers with important information that allows them to modify or change instruction as needed at any given point during a school year. For students with a learning disability, vocabulary-matching CBM is an ideal measure to assist with monitoring because it is a tool that is teacher-developed and can be tailored to the students' curriculum. In mostly upper elementary and middle school samples (e.g., Beyers et al., 2013; Espin et al., 2013; Mooney & Lastrapes, 2016), these measures have demonstrated promising results in predicting and monitoring student performance in social studies and science.
While these measures take some time to develop since the vocabulary needs to be compiled from the curriculum, creating the vocabulary matching probes is a relatively easy, systematic process (See Figure 1). An overview of this process, based on previous research, is outlined below.
To expedite the process of creating a pool of terms, special education teachers may consider collaborating with general education content area teachers and school psychologists (Busch & Espin, 2003). As a first step, teachers, collectively or individually, need to identify relevant terms for the pool. To do so, teachers may consider using the glossary of the content area textbook, exams, quizzes, and even grade-level or state-level standards. Vannest et al. (2012) suggest identifying the terms in groups after each unit or in one sitting. Once the initial pool has been identified, teachers can collectively review it to ensure that all relevant and key terms have been included. The collection of key vocabulary may result in upwards of 200 terms for the pool. Once the key vocabulary has been determined, consideration should be given to the definitions for each term. In more recent studies, definitions have been kept to a maximum of 15 to 16 words (Beyers et al., 2013; Espin et al., 2005). This allows students taking the probe to focus on key elements within the definition.
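The definition-length guideline above is easy to enforce automatically. The following Python sketch (all terms, definitions, and the function name are illustrative, not from the cited studies) flags pool entries whose definitions exceed the word cap so they can be shortened before probes are built:

```python
MAX_DEF_WORDS = 16  # recent studies capped definitions at 15-16 words

def check_pool(pool):
    """Return (term, definition) pairs whose definitions exceed the
    word cap, so they can be shortened before probes are built."""
    return [(term, d) for term, d in pool
            if len(d.split()) > MAX_DEF_WORDS]

# Illustrative pool entries
pool = [
    ("photosynthesis", "the process by which green plants use sunlight to make food"),
    ("mitosis", "a type of cell division that results in two daughter cells each "
                "having the same number and kind of chromosomes as the parent nucleus"),
]
too_long = check_pool(pool)
print([term for term, _ in too_long])  # ['mitosis']
```

A quick check like this is most useful when several teachers contribute definitions to a shared pool, since writing style (and definition length) tends to vary across contributors.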
Once teachers collaboratively establish the pool of words and definitions, the subsequent step is to create individual probes, which is also a simple process. From the pool, randomly select 20 terms and definitions. Two additional definitions that do not have a corresponding term are also chosen in order to reduce answering by process of elimination, resulting in a total of 22 definitions (Espin et al., 2001; Larson & Ward, 2006). To design the probe, the 20 terms are placed on the left-hand side of the page in alphabetical order, and the 22 definitions are randomly placed on the right-hand side (Espin et al., 2001). See Figure 2 for a sample probe that was created for eighth grade science. This process is then repeated until the desired number of probes has been created. For instance, if the teacher plans to monitor vocabulary acquisition weekly, then approximately 30 probes are needed. It is important to reiterate that each probe consists of terms and definitions that are covered over the course of the entire year. For example, Probe 1, given during the first week of the semester, will include words from the beginning, middle, and end of the year. This setup is what allows teachers to see a student's progress through the curriculum over the school year.
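For teachers (or a tech-savvy colleague) who want to generate probes automatically, the steps above can be sketched in a short script. This Python sketch assumes the pool is a simple list of (term, definition) pairs; all names are illustrative:

```python
import random

def make_probe(pool, n_terms=20, n_distractors=2, seed=None):
    """Build one vocabulary-matching probe from a pool of
    (term, definition) pairs.

    Returns the terms (alphabetized, for the left-hand column), the
    shuffled definitions (right-hand column), and an answer key mapping
    each term to the letter of its definition.
    """
    rng = random.Random(seed)
    sample = rng.sample(pool, n_terms + n_distractors)
    items, distractors = sample[:n_terms], sample[n_terms:]

    # 20 terms in alphabetical order for the left-hand column
    terms = sorted(term for term, _ in items)

    # 22 definitions (20 matches + 2 distractors), randomly ordered
    definitions = [d for _, d in items] + [d for _, d in distractors]
    rng.shuffle(definitions)

    # Answer key: the letter (A-V) of each term's definition
    letters = "ABCDEFGHIJKLMNOPQRSTUV"
    key = {term: letters[definitions.index(defn)]
           for term, defn in items}
    return terms, definitions, key

# Example with a tiny placeholder pool (a real pool may hold ~200 terms)
pool = [(f"term{i}", f"definition {i}") for i in range(40)]
terms, definitions, key = make_probe(pool, seed=1)
print(len(terms), len(definitions))  # 20 22
```

Repeating the call (with different seeds) about 30 times yields a year's worth of weekly probes, each sampling across the whole curriculum, which matches the design described above.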
Administration of Vocabulary-Matching CBM
Administration can be quick and efficient if it is built into a weekly routine. While the initial explanation may take some time at the beginning of the school year, a few weeks in, the process will become straightforward. Preparing folders with probes, or incorporating technology as a means to administer the probe, ahead of time will expedite the administration process. Administration of the probe itself is only 5 minutes long; distribution and collection of materials take an additional few minutes. Thus, the time frame is relatively short. The following standardized directions, based on Espin (n.d.), can be used:
“When I say begin, you may start matching the terms to their corresponding definitions. Match each term. Please note that on the left hand side of the page there are 20 vocabulary terms, and on the right, there are 22 definitions. Therefore, you will have two terms that will not be used. I will be timing you to determine how long it takes to complete the probe. Are there any questions? You may begin”.
The final phase in the process includes the inter-related aspects of scoring, graphing, and decision-making. After administration, probes must be scored. It is best to score the probe immediately after administration and to build this into the classroom routine as well; it should also be noted that scoring the probe is fairly quick.
There are two different options to choose from in terms of how to score the probes. The first is to assist students with self-scoring their own probes; this allows for the students to see their errors immediately. A second option would be to have students switch with a nearby partner and score the partner’s probe (Larson & Ward, 2006). Teachers should consider their classroom and students, and choose the option that is best for them.
After probes are scored, data should be graphed so that instructional decisions can easily be made; these two steps go hand in hand. Student data can be graphed individually. Again, depending on your classroom routines, students could graph their data themselves or the teacher could graph it. Graphing individual student data allows the teacher to examine each student's progress over time, ultimately assessing a student's movement through the curriculum. The teacher may also choose to graph student data collectively by calculating the class average per probe and then graphing that average. This allows teachers to compare individual student scores to the average scores, which provides additional information to the teacher.
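The class-average calculation described above is simple arithmetic; the following sketch shows one way to compute it from weekly scores (all student names and scores are invented for illustration):

```python
from statistics import mean

# Hypothetical correct matches per 5-minute probe, for three students
# across five weekly probes; all data are illustrative.
scores = {
    "Student A": [6, 7, 7, 9, 10],
    "Student B": [4, 4, 5, 5, 6],
    "Student C": [8, 9, 9, 11, 12],
}

# Class average per probe, for comparing individuals to the group
class_average = [mean(week) for week in zip(*scores.values())]
print(class_average)
```

The resulting list can be plotted alongside each student's individual series (on paper or with any graphing tool) to show, at a glance, who is keeping pace with the class and who may need instruction adjusted.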
Teachers can expect an increase of 1 word every two weeks (Busch & Espin, 2003; Espin et al., 2005), which can help teachers and students set a goal to work towards. For more guidance on setting goals with vocabulary-matching CBM, please see Goran, Conoyer, and Hoffman (2015). Teachers can also conduct an error analysis to determine which terms students miss. If students miss terms already covered during the year, this provides valuable information to teachers, as they may need to revisit and/or reteach some of the vocabulary with individual students or with a group. These two pieces (graphing data and analyzing errors) allow teachers to make decisions based on data, which, as Stecker, Lembke, and Foegen (2008) note, is a critical piece in this process.
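The expected growth rate of about 1 word every two weeks translates directly into a goal line on a student's graph, and the error analysis is a simple tally. Both can be sketched as follows (the function name, baseline, and missed terms are illustrative):

```python
from collections import Counter

def goal_line(baseline, n_weeks, words_per_two_weeks=1):
    """Aim line: expected correct matches each week, assuming a gain
    of one word every two weeks (Busch & Espin, 2003)."""
    return [baseline + week * (words_per_two_weeks / 2)
            for week in range(n_weeks)]

# A student starting at 6 correct matches, monitored over 8 weeks
print(goal_line(6, 8))  # [6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5]

# Error analysis: tally which terms were missed across recent probes
missed = Counter(["photosynthesis", "mitosis", "photosynthesis", "osmosis"])
print(missed.most_common(1))  # [('photosynthesis', 2)]
```

Plotting weekly scores against the goal line makes it easy to see whether a student is on track, and the tally of missed terms points directly to vocabulary worth reteaching.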
Related to the process of creating vocabulary-matching measures, we encourage teachers to work smarter, not harder, by turning to technology that can assist with development, administration, scoring, and graphing. See Figure 3 for a quick reference guide to potential assistance in each of these areas. Along with this graphic, we urge teachers to consider the resources that they have access to within their classroom and school district. These resources may provide an outlet for supporting teachers in creating and implementing these formative measures for progress monitoring.
Overall, incorporating CBMs into content area classrooms may increase a teacher's awareness of key concepts and terms that are more challenging for students with learning disabilities. As previously noted, research suggests that progress monitoring vocabulary knowledge may offer teachers a better indication of content area knowledge and comprehension of concepts. While developing vocabulary-matching CBM is a relatively systematic process, it does take time and resources to initially organize and implement. However, this preparation gives teachers the ability to identify the specific areas in which students may need extra assistance and to gather information that is relevant to their daily instruction.
References
Beck, I. L., McKeown, M. G., & Kucan, L. (2013). Bringing words to life: Robust vocabulary instruction. Guilford Press.
Beyers, S. J., Lembke, E. S., & Curs, B. (2013). Social studies progress monitoring and intervention for middle school students. Assessment for Effective Intervention, 38, 224-235. doi:10.1177/1534508413489162
Busch, T. & Espin, C.A. (2003). Using curriculum-based measurement to prevent failure and assess learning in content areas. Assessment for Effective Intervention, 28, 49-58. doi:10.1177/073724770302800306
Deno, S. L. (2003). Developments in curriculum-based measurement. The Journal of Special Education, 37, 184-192.
Espin, C. A. (n.d.). Expert connection: Curriculum-based measures: Are there ways to use CBM in content areas? Retrieved July 29, 2015, from TeachingLD website: http://teachingld.org/questions/12
Espin, C. A., Busch, T., Lembke, E. S., Hampton, D. D., Seo, K., & Zukowski, B. A. (2013). Curriculum-based measurement in science learning: Vocabulary-matching as an indicator of performance and progress. Assessment for Effective Intervention, 38, 203-213. doi:10.1177/1534508413489724
Espin, C.A., Busch, T., Shin, J., & Kruschwitz, R. (2001). Curriculum-based measures in the content areas: Validity of vocabulary-matching measures as indicators of performance in social studies. Learning Disabilities Research and Practice, 16, 142-151. doi: 10.1111/0938-8982.00015
Espin, C. A., & Foegen, A. (1996). Validity of general outcome measures for predicting secondary students' performance on content-area tasks. Exceptional Children, 62, 497-514.
Espin, C. A., Shin, J., & Busch, T. W. (2005). Curriculum-based measurement in the content areas: Using vocabulary matching as an indicator of progress in social studies learning. Journal of Learning Disabilities, 38, 353-363. doi:10.1177/00222194050380041301
Goran, L., Conoyer, S.J., & Hoffman, K.E. (2015). 5 ways: To incorporate vocabulary curriculum based measurement into your secondary content-area classrooms. LD Forum. 2-7.
Hosp, M. K., Hosp, J. L., & Howell, K. W. (2016). The ABCs of CBM: A practical guide to curriculum-based measurement (2nd ed.). New York, NY: Guilford.
Johnson, E., Semmelroth, C., Allison, J., & Fritsch, T. (2013). The technical properties of science content maze passages for middle school students. Assessment for Effective Intervention, 38, 214-223. doi:10.1177/1534508413489337
Larson, W. C. & Ward, C. (2006). Developing a predictor of university students’ course grades using curriculum-based measurement: An initial investigation. Journal of College Reading and Learning, 37, 45-60. doi:10.1080/10790195.2006.10850192
Mooney, P., & Lastrapes, R. E. (2016). The benchmarking capacity of a general outcome measure of academic language in science and social studies. Assessment for Effective Intervention. doi:10.1177/1534508415624648
Phythian-Sence, C., & Wagner, R. K. (2007). Vocabulary acquisition: A primer. In R. K. Wagner, A. E. Muse, & K. R. Tannenbaum (Eds.), Vocabulary acquisition: Implications for reading comprehension (pp. 1-14). New York: Guilford Press.
Stecker, P. M., Lembke, E. S., & Foegen, A. (2008). Using progress-monitoring data to improve instructional decision making. Preventing School Failure, 52, 48-58. doi:10.3200/PSFL.52.2.48-58
Vannest, K. J., Smith, S. L., Hoskins, J. L., Williams, L. E., & Parker, R. I. (2012). Progress monitoring in science using key word vocabulary. Teaching Exceptional Children, 44, 66-72. doi:10.1177/004005991204400607