This is the second in a series of posts about testing a standards-based approach in my 9th-grade classroom. I’ve described the project here.
Any thinking about standards-based grading has to start with the standards themselves. My basic assumption in putting mine together was that they should be based on high-level skills or habits of mind that can reasonably be expected to develop over the course of a whole year. That ruled out standards based on unit-specific content. Instead, I tried to capture that knowledge in one broad standard I called “historical understanding.”
I’ve been using skills-based rubrics and talking a lot about skills sequences over the last few years, so it wasn’t hard to know, in broad strokes, what categories I wanted to include. Here are the standards I used for the first semester:
- Use of evidence (Here, I lumped together primary source skills like close-reading and sourcing, secondary source skills like finding and evaluating information online and in the library, and the skills of building an evidence-based argument: making deductions, supporting those deductions with concrete evidence, and connecting evidence to deductions with persuasive analysis. I’m still thinking about whether I should sub-divide this category, but it has worked better in practice than it sounds as though it would in the abstract)
- Communication of arguments (writing and academic discussions–next term, I think I’m going to split these two up)
- Historical understanding (factual recall, but also habits of mind like thinking about cause and effect, applying historical concepts to new situations, showing geographical awareness, etc. I see factual recall as the base level, emerging proficiency, while deeper understanding is necessary to achieve mastery)
- Ownership of learning (more about this one later)
The details are giving me more trouble. Here are some of the questions about which I’m still on the fence. My hope is to use the rest of this year to experiment with them.
There is overlap between “use of evidence” and “communication of arguments.” I wanted a category that addressed evidence and argument independently of writing, because I teach and assess it as its own building block. When I set up the system, it seemed logical that this standard would remain a portion of every writing or discussion grade, with the skills specific to writing or discussion (essay structure, grammar, active listening, etc.) making up the “communication” grade. In practice, however, this doesn’t communicate what I wanted it to. Under this logic, for example, thesis statements would fall under “evidence” rather than under “communication,” so a student could score mastery in writing without being able to create a thesis statement. That sends a problematic message, however internally consistent it might be. Instead, I’ve been including all the aspects of my writing rubric in my “communication” standard, even those that could logically fall under “evidence” instead. This double-dipping doesn’t really bother me, because crafting an argument seems to me so central to being a good history student that it is not inappropriate for it to influence two different standards.
For the future: I may move the argument and deduction aspects from the “use of evidence” standard into the “communication of arguments” standard. I’m also thinking about how to distinguish more clearly between making and communicating a deduction, and about renaming the categories entirely (to something like “making arguments” and “writing mechanics”).
How much detail?
Each of the categories above is a constellation of quite different, if related, skills. Lumping them all together means less precision: the student who shows an instinctive understanding of chronological arguments but doesn’t bother to study for tests and the student who dutifully memorizes lists of vocabulary but cannot apply those concepts to another historical situation are both lacking in the category of “historical understanding,” but in fundamentally different ways.
Broad categories also make it difficult to show students their progress: while a student’s second essay is sometimes dramatically better than the first in every way, more often the student simply makes different mistakes the second time around. I want to be able to celebrate a student’s newfound ability to, say, write a debatable thesis while making it clear that other aspects of her writing still need attention.
So for the second semester, I decided to track a finer level of detail, based on the categories I already used in rubrics for each type of task. I think this level of detail might be overwhelming for parent reports (I chose to keep my original 4 standards for that), but my plan was to share the more fine-grained standards with students and then resolve each category of standards into a single mark at the end. You can see my detailed standards here.
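For anyone curious what that "resolve each category into a single mark" step might look like if automated, here is a minimal sketch in Python. The 1–4 proficiency scale, the skill names, and the most-recent-mark-then-median policy are all my assumptions for illustration, not details from my actual grade book:

```python
# Hypothetical sketch: collapse fine-grained rubric marks into a single
# mark for the parent standard. The 1-4 scale and the resolution policy
# (latest mark per skill, median across skills) are assumptions.

from statistics import median

def resolve_standard(marks_by_skill):
    """Resolve each skill's history of marks into one standard-level mark.

    For each fine-grained skill, keep only the most recent mark (so
    later growth replaces earlier struggles), then take the median
    across skills so one weak area doesn't dominate the result.
    """
    latest = [history[-1] for history in marks_by_skill.values() if history]
    return median(latest)

# Example: three fine-grained skills under one standard
marks = {
    "thesis statement": [2, 3, 4],  # improved over the semester
    "essay structure":  [3, 3],
    "grammar":          [2],
}
print(resolve_standard(marks))  # 3
```

Taking the latest mark per skill is one defensible reading of standards-based grading (current proficiency matters more than the average of the journey); a different policy, like averaging the last two marks, would be just as easy to swap in.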
That led to a new set of problems, however. On a practical level, breaking the standards down like that slowed grade entry so much that it felt impractical. It also didn’t solve the issue of tracking progress as well as I had hoped, because it was challenging to be sure I was collecting data on all the standards often enough to give students a real chance to improve. In a perfect world, with enough time, both problems would be surmountable, but for now I was left feeling as though assessment was dominating both my time–because I was spending more of it entering grades–and my students’, because I found myself giving more summative assessments to capture all the detailed standards.
I realized that, for many categories at least, the detailed standards are already available to students in the form of rubrics for each type of task. For now, my plan is to return to a simple list of 4 or 5 standards in the grade book, to look for ways to encourage students to keep their rubrics and track their own progress across them, and to be very clear about the relationship between those rubrics and their progress on the standards.
For the future, I wonder whether there is a technological way to go directly from my task rubrics to the grade book. I also suspect that I could rework my assessments to specifically target more standards at the same time, and that this would make the assessments better overall.