A couple of years ago, a thoughtful colleague challenged our faculty to think more systematically about how we assess class discussions. (Thanks, Jeff!) I’ve been puzzling over it ever since, and this is where I’ve gotten so far.
For most history classes, discussion is a central part of the experience, whether it’s formal Socratic seminars or something more free-flowing: when you’re trying to figure out whether students have understood the complexities of a historical situation, there’s no substitute for hearing them talk about it. Academic discussion lets students practice making meaning of facts on their own, digging beyond the textbook narrative to actually do something with what they know. It lets them hear ideas they may not have had themselves and practice talking about disagreements. At its best, academic discussion is thinking out loud, collectively, each member of the conversation adjusting and intensifying the insights of the others until the group reaches an understanding no one would have reached alone. We all know, though, that student academic discussion is not always at its best. Participating in a discussion is a high-level skill, and if we want the results, we have to teach the skill. Part of that is valuing it, assessing it, and giving students feedback.
For all its importance in our classes, relatively few teachers I know assess discussion in any systematic way. Discussions get rolled into a generalized “participation” grade at the end of the term (which helps perpetuate the student conviction that what we’re measuring in discussions is quantity, not quality), or they become the fudge factor that lets us discreetly adjust a grade to fit our subjective sense of where a student should be. Many of us seem uncomfortable actually putting a hard grade on students’ discussion performance. It can be hard to do: actively leading a discussion is much like conducting an orchestra–doing it well requires all one’s attention. And discussions happen in real time, so you can’t go back and reconsider details later. Still, I think it’s important: assessment is one way we tell students what we value, and it’s clear that we do value discussions. Assessment is also how we make them better.
So here are some things I’ve tried, and a few thoughts on what has worked for me so far. First, what didn’t work. In moments of insanity, I’ve tried videotaping discussions to view and grade afterward. For about a year, I kept transcripts of entire discussions in shorthand and went back through them after class. Both approaches were too demanding on my time to do often and too slow to make the feedback useful to students: by the time I had pored over the discussion and written some feedback, the students had forgotten what they said. I’ve come to the conclusion that good discussion feedback has to be quick to give and more or less immediate, since, unlike an essay, the “document” is ephemeral.
I’ve tried various approaches to giving quicker, shorter feedback while still facilitating the discussion. One helpful trick is to give feedback to only a few students each time and rotate. I use index cards for this, and I’ve used them in various ways. Sometimes, I pick specific students beforehand and pay special attention to them; as they speak, I jot down a thought or two on a card and hand it to them on their way out. In the course of a couple of weeks, I can get to everybody. Other times, I keep a stack of cards handy and write a single note with a name on each when someone does something particularly praiseworthy or problematic. For classes struggling with problems like interrupting or disrespecting each other, I actually color-code the cards and deliver green, yellow, and red ones in real time (a red card, in this case, means you have to leave the table for a five-minute “penalty”). Cards work very nicely as formative assessments, giving students an immediate response, but they have some drawbacks. The biggest is that they distract me from facilitating: so long as I keep the number of cards low, I can just about do both tasks, but only barely, and it produces the sort of divided attention I try not to model. Cards are also not terribly useful as summative assessment, since I don’t have time to make a record before I hand them to the students.
I also give feedback orally to the whole group at the end of a discussion: I invite students themselves to reflect on what went well and what they’d like to improve next time, and then I add my own comments, offering shout-outs to students who made particularly good moves. (I don’t deliver specific criticism to specific people, which is one limitation of this approach.) This works well but provides more collective than individual feedback, and, like the cards, it doesn’t give me anything I can put in a grade book.
For myself, at least, I’ve come to the conclusion that I need to separate the tasks of facilitation and assessment. I just can’t do both really well at the same time. Independent discussion is something I emphasize anyway: from the beginning of the year, I encourage students to run their own discussions, and I slowly diminish my role in them. That lets me designate some discussions as graded discussions, which I simply observe. Not every discussion needs to be graded, of course. So what I’ve ended up with is a rotation: some discussions I facilitate, to a greater or lesser degree; some I just watch, giving oral feedback in our debrief at the end; and some I grade on a formal rubric. I aim for one graded discussion every two to three weeks to give students a clear picture of their progress.
Even when I’m not simultaneously leading the discussion, recording enough to provide good feedback, without creating a lot of rework for myself later, is a challenge. I’ve tried all sorts of note-taking approaches, from table maps to annotated class lists. I think I’ve finally got a system that works, though.
My rubric has four categories–participation level, use of evidence, adding value, and showing leadership–so I create a chart with those four categories across the top and student names down the side. As students speak, I don’t try to take notes on what they say. Instead, I make a tally mark next to their name each time they speak and one of a few marks in the category boxes. If they do something particularly good in a category, I put a plus in that box. If they do something egregious, it’s a minus. If it’s a no-harm-no-foul sort of thing, I just put a hash mark in the box, and if the comment didn’t qualify in that category at all, I leave it blank. If I have time, I try to jot down a phrase explaining anything particularly bad or good. That lets me look back over my notes and see at a glance that Suzy spoke nine times but only supported a comment with evidence once, or that Mary spoke rarely but provided unusually good insights every time. From that snapshot, I have enough to fill out the rubrics and jot down a short comment–one strength, one thing to focus on for next time–for each student. Doing that takes me less than five minutes per student, and I can send them the rubric electronically by the end of the day. It’s not perfect: there are times, going back over something with a student, that I wish I had given myself more clues as to why I wrote that minus sign in the value column, but those times are rare, and these days I have a pretty good sense of when I need to write myself an extra note.
I’m going to keep fine-tuning, but for the first time, I have a system that lets me give specific, data-based discussion feedback to every student without overwhelming myself with work. It’s working for me. Do you have an approach that works for you? Drop me a line in the comments and share!