This post is part of a series in which I am blogging my way through a new course on partial differential equations (PDEs) that I am about to teach in a few weeks. (Links to part 1 and part 2.)
This post by Lani Horn reminded me that timed tests can harm students’ self-efficacy as mathematics learners and their perceptions of mathematics as a whole. People have always argued, and will continue to argue, for and against timed tests. In my view, most of the underlying disagreement stems from conflating automaticity with speed.
A friend told me that teaching basically involves moving students from
- unconscious ignorance to
- conscious ignorance to
- conscious competence to
- unconscious competence.
Embedded in this pithy statement is some notion of automaticity: the state of having so deeply internalized a mathematical skill or concept that you know when to use it and can use/apply it correctly with relatively little cognitive demand.
A colleague of mine at Mudd often says that in calculus students finally learn all their pre-calculus skills well, and in differential equations they learn all their calculus skills well. I think what he is saying relates to automaticity too. As much as educators sometimes pooh-pooh it, practice is necessary for developing automaticity. Automaticity matters in mathematics because skills and concepts build on top of one another.
Here’s an example: To correctly perform an integral such as
\[ \int \frac{dx}{x^3 + x}, \]
one has to (1) factor the denominator, (2) decompose the rational expression into simpler fractions (which also requires one to solve a system of linear equations), (3) use a substitution to integrate one of the pieces, and of course, (4) perform all of those algebraic manipulations without making any mistakes. If the cognitive demand of any of these subordinate steps is too high, a student can easily lose sight of the forest for the trees. In this problem there are many opportunities for tiny errors. To perform this integral correctly, it helps if those subordinate skills are automatic.
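The four steps above, worked out for a representative integral of this kind (the specific integrand here is my illustrative choice; the one the post had in mind may differ):

```latex
\begin{align*}
\int \frac{dx}{x^3 + x}
  &= \int \frac{dx}{x(x^2+1)}
     && \text{(1) factor the denominator} \\
  &= \int \left( \frac{1}{x} - \frac{x}{x^2+1} \right) dx
     && \text{(2) partial fractions: solve } A(x^2+1) + (Bx+C)x = 1 \\
  &= \ln\lvert x\rvert - \tfrac{1}{2}\ln\!\left(x^2+1\right) + \text{const.}
     && \text{(3) substitute } u = x^2+1 \text{ in the second piece}
\end{align*}
```

Step (4), of course, is getting each of these manipulations right: solving the linear system in step (2) gives \(A = 1\), \(B = -1\), \(C = 0\), and a sign or coefficient slip anywhere derails the whole computation.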
With the ubiquity of computer algebra systems and online services like Wolfram Alpha, some might wonder why automaticity still matters. But think about how frustrating it would be to read a novel in an unfamiliar language. Yes, you could slog through it by looking up the definition of every other word, but you wouldn’t suggest that as a way for someone to learn a language; it would be too off-putting for most. If the goal is just to help someone get by in a foreign country, then that approach is fine. My goal is to help students become deeply fluent in mathematics, so I believe some automaticity is desirable and necessary.
Whether or not you agree with me about the importance of automaticity in mathematics, the central issue of this post is that automaticity is not the same as speed, though the two are closely related. The problem is that the speed at which one student completes a task with automaticity may differ from the speed at which another student completes the same task with the same level of automaticity. To disallow that variation in speed is to assume that all students think and do mathematics in exactly the same way.
Perhaps some of the arguments about timed tests could be resolved if instructors were more forthcoming and conscious about whether their underlying goal is automaticity. And if automaticity is a goal, instructors should find ways of measuring it that don’t use speed as a crude proxy. I believe it is the instructor’s job to give students enough practice (perhaps through homework or in-class tasks) that they have the opportunity to develop automaticity, and to help students become self-aware enough to recognize when they have internalized a skill or concept to the desired level.
The best way to observe whether students have developed automaticity is to watch them doing those tasks. My suggestion is that we formatively assess for automaticity (perhaps through in-class tasks) rather than using timed tests to do so, and to reserve summative assessments for things that don’t rely as much on the automaticity of certain skills.
Another reason why I think that timed tests are harmful is that they introduce a non-trivial amount of anxiety (which leads to lower performance) for some students, particularly those who previously performed poorly in mathematics, or those who tend to doubt their skills. In college/university mathematics courses, these groups of students tend to overlap more with underrepresented minority and female students. If you agree with these two assertions, then is it not the case that timed tests can sometimes be a form of institutional racism or sexism? Let us not forget the theory of disparate impact, which holds that any practice or policy may be considered discriminatory if it has a disproportionate “adverse impact” on persons in a protected class.
(inserting a pause here so people can think about that…)
So back to PDEs. I was considering avoiding exams completely, but given my other plans for the class (more in another post), I think it would be best to give one comprehensive final exam that contributes a relatively small percentage of students’ final grades (maybe 10 to 15%). The goal of this final exam is to see whether students can synthesize the many skills and concepts that they will need to master in this course. It will focus on the first two learning objectives I listed in part 1. I can’t avoid computations on this final exam, but I can limit their complexity and focus on problems that ask students to synthesize or evaluate ideas rather than demonstrate automaticity of certain computations.
At Harvey Mudd, we have the luxury of being able to assign take-home exams with relatively little concern about academic dishonesty. This is all the more reason that traditional timed exams can be replaced with something better on our campus. My current plan is to write a comprehensive take-home final exam.
Here’s my usual test-writing practice: After writing a test, I take the test myself and time how long it takes me. I multiply that time by 5 or 6 to arrive at a suggested duration for the test. That suggested duration is clearly indicated on the cover sheet of the exam, along with instructions telling students that they can take more than the suggested time if they need it, without penalty. I ask students to take the exam in one contiguous block (with only short potty breaks), and to write the start and end times of the exam on the cover sheet. There are two reasons why I give a suggested duration instead of simply allowing unlimited time: (1) it helps students know roughly how much time to set aside in their schedules to take the exam, and (2) it helps students avoid using an excessive amount of time. There are some students (especially at Mudd) who, if given an unlimited amount of time, would use so much time that they would neglect other obligations (like eating, sleeping, or bathing–eewww). If I write my exam so that there are no “tricks” that require creative inspiration, then there should be some hard limit to the amount of time that students can productively spend on my exams. I don’t want them to use more time than that.
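The timing heuristic above can be sketched in a few lines (the function and parameter names are my own, not from any grading tool):

```python
def suggested_duration(instructor_minutes, multiplier=5):
    """Suggested exam duration for the cover sheet: the time the
    instructor took to complete the test, scaled by a generosity
    multiplier (5 or 6 in the practice described above)."""
    return instructor_minutes * multiplier

# A test the instructor finishes in 30 minutes gets a suggested
# duration of 150 minutes at multiplier 5, or 180 at multiplier 6.
```

The multiplier absorbs the variation in speed discussed earlier: it is deliberately generous so that the suggested duration informs scheduling rather than acting as a deadline.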
Love to hear your comments. In my next post, more on reducing time pressure, but on a much longer time scale.
2 thoughts on “PDEs Course Design (Part 3): Relieving Time Pressure During Tests”
I remember that honor code… I don’t have that luxury here 😦 In fact, quite the opposite. This is an aside, but a little related: we often give students “assignments,” essentially progressive take-home problem-solving activities, throughout the semester. Regularly, students either help each other and write the same answers or (unfortunately) just copy each other. That has me starting to wonder: what exactly is it to be academically dishonest on such an assessment? Obviously, if you do nothing other than copy out a friend’s work, that’s dishonest. But what if you copy it out AND learn from it? What if a friend tells you how they did the problem, shows you their solution, and then you write it up? Where is the line?
For what it’s worth, I definitely agree that practice is important and that guiding students (somehow) to develop that degree of automaticity is part of the job we have. Particularly with math majors… perhaps a different story when we are just teaching into another program like business or engineering.
Dann’s comment seems to get at the difference between learning through collaboration (and maybe even from demonstration) and independence. In some courses and assignments, the goal is teamwork. For example, on a ropes course you can’t complete some tasks alone, so the goal is not independence but teamwork and contributions from all team members. Most courses have as a goal some degree of independence for each student. That’s where collaboration gets tricky. How do we facilitate and allow collaboration while still promoting independence?
I think that’s why we test students individually, even though in real jobs it is almost never against the rules to collaborate. There’s a perception in academia that to be competent (or to be a worthy contributor) you have to go it alone. Is that valid given the reality of the workplace? I really don’t know the answer. I do know that I struggle with the whole idea of giving letter grades as a proxy for competence, because I think there are many ways to be competent and to contribute that are hard to capture in a grade-based evaluation system. I appreciate the ways you are thinking about testing here because I think you are doing a nice job of working to align what you value with the way students will demonstrate and be judged on their competence.
The issue with academic honesty also relates to the workplace for me, because I think there are many ways/opportunities to give credit (or not) when it is due. I hope that in academia we can help students enter the workplace with the inclination to give attribution.