I am looking for some insight on the use of mazes to progress monitor reading comprehension. I teach in a middle school (6-8) and am struggling with using it to measure reading comprehension in fluent readers. So much of their reading comprehension in class is measured by determining the main idea, recalling basic facts, making inferences, and analyzing the use of literary elements. It seems that when the Maze is used to monitor reading comprehension, it doesn’t offer much information about the reader. Often students rush through it, circling words just to complete it in the time allotted, and score exactly the same as students who are reading carefully and choosing the correct words but do not finish in the allotted time. It seems like student motivation is a critical component of the accuracy of these scores.
Is the Maze an effective way to measure passage comprehension, or is it simply a way to measure sentence comprehension? Do you have any suggestions on what else could be used? I appreciate your help with this and look forward to your response.
John Guthrie developed Maze in the 1970s to determine how well students could read particular texts. Let’s say you have a 7th grade science book and want to know who in your class is likely to struggle with that book.
To figure this out, you’d test students on several passages from that science book. According to Guthrie, students who score 50% or higher on Maze should be able to handle this book.
The benefit of Maze is that it is easy to construct, administer, and score, and Maze results are reasonably accurate and reliable. (To design a Maze test, you select a passage of 150-200 words, delete a word from the second sentence and every 5th or 7th word after that, and provide the students with three word choices in random order: the correct word, a word that is the same part of speech but incorrect, and a word that is the wrong part of speech.)
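If it helps to see that recipe laid out mechanically, here is a minimal sketch in Python of how the gaps and items could be marked up. The interval value, the helper names, and the sample passage are my own illustrative assumptions; the two distractor words for each gap (one matching the part of speech, one not) would still be chosen by the teacher.

```python
import random

def maze_gap_positions(words, interval=7):
    """Indexes of words to replace: leave the first sentence intact,
    then mark every `interval`-th word after it (every 5th or 7th word)."""
    first_sentence_end = next(
        i for i, w in enumerate(words) if w.endswith((".", "!", "?"))
    )
    return list(range(first_sentence_end + interval, len(words), interval))

def make_item(correct, same_pos_distractor, other_pos_distractor, rng):
    """One Maze item: the correct word plus two teacher-chosen distractors,
    presented in random order."""
    choices = [correct.strip(".,"), same_pos_distractor, other_pos_distractor]
    rng.shuffle(choices)
    return choices

# Usage: locate the gaps in a passage, then hand-pick two distractors
# for each deleted word (one with the same part of speech, one without).
passage = (
    "The water cycle describes how water moves through the environment. "
    "Heat from the sun causes water in oceans and lakes to evaporate, "
    "and the vapor later condenses into clouds and falls as rain."
)
words = passage.split()
rng = random.Random(1)
for i in maze_gap_positions(words, interval=7):
    print(i, words[i])
print(make_item("evaporate,", "condense", "cloudy", rng))
```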
As you point out, Maze tells you nothing about what comprehension skills students have or how well they can answer certain kinds of questions. However, question-and-answer comprehension tests can’t tell you that either, so switching tests won’t solve that problem for you.
I was at the University of Delaware during the 1970s, where John Guthrie was working at the time. He told the late Aileen Tobin, my office mate, a funny thing about Maze: they had tried it out with individual sentences and with passages (as described above), and it didn’t make any difference. Even when sentences were presented randomly, students seemed to perform equally well.
We laughed a lot about that. It just didn’t make sense to us. We wondered if that was also true of other popular measures such as Cloze tests. (Cloze is similar to Maze, but harder to administer because, instead of multiple choice, it requires students to fill in the blanks.)
Our banter over this issue ended up in a series of research studies that I carried out. We found just what you surmised. Students performed as well on passages with the sentences in their original order as on passages in which we had scrambled the sentence order. Imagine reading Moby Dick, starting with sentence 16, then 5, then 32, then 1, etc. (Randomizing sentence order doesn’t hurt Maze or Cloze performance, but it wreaks havoc on summary writing.)
I also found that Cloze correlated best with multiple-choice reading comprehension tests that asked questions based on information from single sentences. Correlations were lower when students had to synthesize information across a passage.
Cloze and Maze tests provide reasonable predictions of reading comprehension, but they do this based on how well students interpret single sentences. For most readers, the prediction works because it is unusual for someone to develop the ability to read sentences without developing the ability to read texts.
If you want to know who is going to struggle with your literature anthology, Maze can be a tool that will help you accomplish that. If you want to identify specific reading comprehension skills so you can provide appropriate practice, Maze won’t help, but neither will the testing alternatives you could consider.
You say you want to monitor your students’ reading comprehension. I suspect that means you need a way of determining, at various points during the year, whether your students are reading better. For this, I would suggest that you use a collection of graded passages (using Lexiles or some other text evaluation method to place them on a difficulty continuum). Identify the levels of difficulty your students can handle successfully (this could be done with Maze tests of those passages), and then, later in the year, check to see whether the students can now handle passages that are even harder.
Monitoring comprehension in this sense means not tabulating which specific skills have been accomplished, but determining what complexity of text language students can negotiate. Perhaps early in the year, your students will be able to score 50% or higher with texts written at 800 Lexiles. By mid-year you’d want them to score like that with harder passages (e.g., 900L-950L). That kind of testing regimen would allow you to identify who is improving and who is not.
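To make the bookkeeping for that regimen concrete, here is a small hypothetical sketch (Python): score each Maze passage as percent correct, then record the hardest Lexile band at which a student meets the 50% criterion at each testing point. The function names and the sample scores are illustrative assumptions; only the 50% threshold comes from the discussion above.

```python
def maze_percent_correct(responses, answer_key):
    """Score one Maze passage: percent of items answered correctly.
    Unanswered items count as wrong, so a rushed or unfinished test
    still yields a comparable percentage."""
    correct = sum(1 for r, a in zip(responses, answer_key) if r == a)
    return 100.0 * correct / len(answer_key)

def highest_band_passed(scores_by_lexile, threshold=50.0):
    """Hardest Lexile band at which the student met the 50% criterion,
    or None if no band was passed."""
    passed = [band for band, pct in scores_by_lexile.items() if pct >= threshold]
    return max(passed) if passed else None

# Illustrative numbers only: one student's percent-correct Maze scores
# on graded passages at two points in the year.
fall   = {800: 62.0, 900: 41.0}
winter = {800: 79.0, 900: 58.0, 950: 47.0}
print(highest_band_passed(fall), "->", highest_band_passed(winter))  # 800 -> 900
```

A table of those "highest band passed" values by testing date, one row per student, would show at a glance who is moving up the difficulty continuum and who is stalled.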