What’s a History Test?

History teachers have lots of cheerful arguments about assessments these days (projects? essays? tests?), but one thing everyone seems to take for granted in the conversations I’ve been in is what a history test looks like. In a time of great change in our field, the test conversation seems mired in two camps: multiple choice or essay tests.

It might be time to take a longer look at that.

Many people are writing better multiple-choice questions than they used to, thanks to recent conversations around standardized testing. If you follow best practices (make each distractor capture a likely misunderstanding, write questions that require applying understanding to a new example rather than identifying a memorized factoid, eliminate any possible sources of confusion, and check yourself for cultural assumptions), I think multiple choice can be a valid tool in standardized assessments, where it’s needed to handle testing on a massive scale. As a teacher, however, such tests give me limited information about why a student did not succeed: was it nerves? careless marking on the wrong line? lack of preparation? or a fundamental misunderstanding? The only time I personally use multiple-choice questions is in computerized factual-knowledge quizzes with unlimited retakes, where they actually function as a sort of stealth flash-card system that motivates students to keep reviewing until they get 100%. In that scenario, I am not so much assessing anything as incentivizing repetition.

Essay questions certainly give me much more information about a student’s thoughts and abilities, but that information is often muddy. If a student writes a very weak essay, there is sometimes so little right that it can be hard to figure out the root cause. Even in a mid-range essay, it can be easier to say what’s wrong than why. If a student writes a clear, literate essay with interesting ideas but little evidence, there are several possible explanations: maybe she hasn’t understood what it looks like to properly support an argument, maybe she knows perfectly well how to use evidence but prepared so poorly that she has none to offer, or maybe she was rushing and forgot to include it. Each of those three is a quite different problem, requiring different actions from the student and different help from me. This is especially true for beginners (I teach mostly ninth graders, and they have been the catalyst for a shift in my own test-writing), but I have certainly read college essays where I was left unable to say whether the writer was, for instance, careless or ignorant.

Writing is a critical skill for historians (and others), and we must never stop teaching it. I certainly would not want to stop assigning students to write–early and often–or teaching them how to do it. Writing and discussion, taken together, are the only ways to truly assess the depth, complexity, and nuances of student understanding.

I do think, however, that when what we want to know is more specific than that, we should also assess it more specifically. If we want to know whether a student can analyze evidence, they don’t always need a whole essay to show us that. To find out whether a student knows, in the abstract, what constitutes concrete proof, we can ask them just that.

Here’s what I do these days:

1) Assess each specific skill I’ve taught separately (“Which of the following is a debatable thesis?” “How does the specific word choice in this sentence affect your confidence in the writer’s description of India?” “List three pieces of concrete evidence that support the following claim.”).

2) Include targeted skills questions, but then ask the student to use their answers to those questions to write an essay. The starter questions both serve as scaffolding for the student and let me track down exactly where things went wrong.

3) As these skills become second nature, drop the skills questions (which might be replaced by other questions targeting more sophisticated skills) and keep the essay.

The result is that I can, very quickly, understand much more about what, precisely, my students can and can’t do. Maybe more importantly, so can they: I can write “needs evidence!” in the margins of essays till I’m blue in the face, but when a student gets a question wrong that asked him to provide three examples of evidence, he is somehow much more likely to show up at my office saying, “I don’t understand why what I wrote isn’t evidence.” And then the conversation can start.

In case you’re wondering, I’ve included some examples of what that might look like for me. I’d love to see what your targeted skills questions look like.

 

Some examples of targeted skills assessment:
