This week I’ve been thinking a lot about the purpose of summative assessments and about my students’ performance.
I’m not thrilled about how my students performed on their final exams this week. A few of my students scored at the “proficient” level on the recent district periodic assessment; most of them scored in the “below basic” or “far below basic” categories. It’s really difficult to look at these results and not feel like a total failure. However, I do feel that my students are learning something; we’re moving in the right direction, but we’re just not all at the goal yet. I’m still hoping that by June a lot more of my students will be in the “proficient” or “advanced” category.
So that I don’t feel quite so bad, I am also reminding myself about the three or four weeks of lost instructional time at the beginning of the semester. If we had had three or four more weeks together, I’m confident students would have done better on their exams. Many of the questions that students got wrong covered material that we simply didn’t get to.
Because students did so poorly, I’ve been giving them the opportunity to make up a large portion of the points lost if they correct their mistakes (they have to include a written explanation of their work). If students make an effort, they can raise their grade on any exam to an A or B. I even let students work together, and I offer help. Still, some students don’t take advantage of the make-up points. I know that I have to wean students off the make-up points eventually so that they can get the problems right the first time, but at least for now I feel this scheme helps them learn more math, learn how to learn from a mistake, and not feel so bad about their exam scores.
This experience is making me wonder why I don’t offer exam make-up points in my college classes…
One thought on “Results of final exams”
Part of what you are talking about raises the question to me about what grades are for. I taught in a school without grades for a long time, and I often question their value. In my school we assigned scores for tests, and included numerical scores as data with narrative evaluations, but never assigned letter grades. If the student corrected a test, we could show both scores — before and after corrections.
To me, the requirement of assigning a range of letter grades to the various students in a class impedes the ability to give students “credit” for learning by redoing. This is because it is not considered OK for everyone to be able to achieve an “A.” The grades are there to put students into categories: “A” students are “the best students.” People talk about a “good test” as one that produces a wide spread of grades. In contrast, I think of a good test as one that enables all students to demonstrate their level of proficiency, diagnose their misconceptions, and engage with a bit of challenge. Maybe they will even learn something, have a bit of fun, and move some information from short-term to long-term memory in the process. A narrow range of scores on a test would mean that the class has successfully moved to a certain level of understanding and proficiency together. That doesn’t seem like such a bad thing, especially if they are moving forward at a reasonable pace. I am also fine with individualized instruction, in which students can move at their own pace. In that case, evaluations can focus on whether the students are meeting their individual goals, not on how they compare to each other.
Short version: I think that feedback aids learning, but grades are really for someone other than the student.