Published: Mar 10, 2023
Last Modified: Mar 15, 2024
As of March 2024, I'm still figuring out these ideas. I am going to try the review assignment suggested at the end of this write-up with a class, and update this note accordingly.
I’m going to share some thoughts about assessment design in mathematics. The reason that I’m writing this note is to think through my deeply ingrained (and probably maladaptive) assumptions about assignment design. Most of what I say will be most applicable to low-level courses such as introductory calculus. However, some of the ideas discussed below are also relevant to proof-heavy courses.
Contemporary assessment design practices in mathematics suffer from a sustainability issue. Typically, instructors create assessments which have essentially unique “right answers”. In this approach to assessment design, student submissions are assessed against a set of solutions which must remain hidden from the students. If the students get the solutions before the assessment is graded, all sorts of havoc breaks loose. The students copy solutions verbatim^{1}. Academic integrity issues arise. The assessment becomes invalidated as a tool for measuring progress. The link between assessments and their solutions is too tight.
First, I will consider a bit of a straw-man. The following assignment is so silly that no one in tertiary mathematics education would ever assign it to their students. But I feel that this arithmetic assessment makes a strong point, and has several features worth considering.
Q1 Calculate the following sums.
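(The items on such an assignment might look like the following; these particular sums are my own illustration, not the original list.)

$$3 + 5, \qquad 12 + 49, \qquad 106 + 318.$$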
If this seems too silly to consider, then I will offer a sample from a recent multivariable calculus course that I taught. The problems are essentially arithmetical, but involve passing to a limit. I would argue that the multivariable calculus assignment has the same issues as the arithmetic assignment, even though it is much more plausible as an assessment in tertiary education.
Q1 Evaluate the following surface integrals $\displaystyle \iint\limits_S \mathbf{F} \cdot d\mathbf{S}$.
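(For concreteness, one item on such an assignment might read as follows; this particular integral is my own illustration, not one of the original questions. Compute the outward flux of $\mathbf{F}(x,y,z) = x\,\mathbf{i} + y\,\mathbf{j} + z\,\mathbf{k}$ through the unit sphere $S$. By the divergence theorem,

$$\iint\limits_S \mathbf{F} \cdot d\mathbf{S} = \iiint\limits_B \nabla \cdot \mathbf{F} \, dV = 3 \cdot \tfrac{4}{3}\pi = 4\pi.$$

Calculate, get a number, write it down, move on.)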
There are several issues with these assessments. First, as we have noted above, the solutions must remain secret in order for them to be useful evaluations of progress. Once this assessment has been used, it cannot be re-used in a future iteration of the course. Producing a new assignment, with its secret set of solutions, each time a course is offered is unsustainable. And so, to put a positive spin on it, I’m going to call this first issue^{2} “sustainability”.
The sustainability issue is especially relevant now that the internet is ubiquitous. When I first started teaching, I inherited a collection of multivariable calculus assignments spanning 1995–2020. Looking at the collection, it was clear that prior to 2010, there was a lot of re-use of material. Every four years or so, the same midterm and exam questions would pop up. The same assignments, essentially verbatim, would be re-used. Prior to 2010, it was much more reasonable to assume that materials were not being archived online. And now, in 2024, you can assume that everything is being stored somewhere.
The second issue with the arithmetic questions above is that, at some level, the questions and their answers are essentially meaningless. It does not matter to anyone that $3 + 5 = 8$. Students rightly find this sort of drudgery boring. Calculate, get a number, write it down, move on. All calculation-based questions have this essential boring-ness^{3}. There is, however, a higher level sense in which this assignment is pointless. Once the student has completed the assignment, and the instructor graded it, there is no useful by-product of the work done. The completed assignment is, in a certain sense, disposable.
To address this issue, the open educational resources (OER) community developed a notion of “non-disposable” assignments. Seraphin et al.^{4} (p. 85) characterized non-disposable assignments as something which:
I think that the first three points characterize “assignments” broadly. Anything which you do with your students, as an instructor, ought to satisfy these very mild conditions. The fourth condition is the non-trivial one. It is the condition which creates a novel sort of assessment. Elsewhere in their paper, Seraphin et al.^{4} write: “At its most basic level, an NDA could be considered any assigned task that adds value to the world.” (Emphasis in the original.) And that’s the crux of the matter: how can a calculation-based mathematics assignment add value to the world?
And so, it remains to discuss some possible assessment designs that might be sustainable or renewable. I’m going to start with something which represents an extremal position: the most intense, real-world, relevant, renewable, and sustainable assessment that I can imagine.
This assessment would be extremely challenging for early undergraduates. It deploys skills that are usually not taught until graduate school, if they are taught at all. However, I like this task for a number of reasons: it asks students to do something that real scientists do frequently, it has no single right answer which must be kept hidden from students, it can be re-used indefinitely, and it requires students to communicate with the outside (or out-of-class) world.
My major concern with this assessment is that there is so much material that is possibly relevant, and distinguishing the possibly relevant from the really relevant is very difficult. To be a bit more concrete: suppose that this were assigned to a first-year calculus class. There are lots and lots of papers which do things that look like taking a derivative. However, there are (probably) no papers which say: let $\phi(x)$ be the such-and-such function; we take its derivative and get $\phi^\prime(x)$. Given this state of affairs, it is hard to imagine first-year students finding anything. Supporting students through this assessment would require a massive amount of scaffolding. Scaling back a bit from looking for a paper on the arXiv, one might assign an assessment like the following.
This assessment effectively replaces the arXiv at large with a narrowly defined list of relevant texts, and scales back from “the world at large” or “the author of the paper” to the rest of the class. The thing that I like about this assessment is that the problem is defined narrowly enough that there will be fewer possibly-but-not-really relevant leads. There will be material that aligns closely with the material the students are studying. Moreover, I like that this assessment has help-seeking behaviour built into it. It provides a reason for students to go out and engage with other people about mathematics.
And who can blame a student for copying solutions verbatim? It seems like a great strategy. If your goal is to maximize your grade, then copying the solutions produced by the instructor seems like it must maximize the correctness of the solution. ↩︎
Another name for this issue could be “re-usability”. ↩︎
I hesitate to say it is a “design flaw” that calculation-based questions are boring. There is a strong case to be made for doing boring things repeatedly. For example, lifting weights is rather boring per se, but lifting has incredible health benefits. ↩︎
Seraphin, Sally B., et al. “A conceptual framework for non-disposable assignments: Inspiring implementation, innovation, and research.” Psychology Learning & Teaching 18.1 (2019): 84-97. https://doi.org/10.1177/1475725718811711 ↩︎
Thanks for reading! If you have any comments or questions about the content, please let me know. Anyone can contact me by email.