Instructors often balk when students ask “will this be on the test?” However, from a student perspective this is an entirely rational question, especially in a large-lecture class, where the sheer number of students often leads instructors to assign a small number of high-stakes assessments in an effort to minimize grading pressure.
Whether you like it or not, your students will interpret the assessments you choose, and the share of the course grade you assign to each, as signals of what you want them to learn and how you want them to learn it. Therefore, make sure that your assessments are sending the right message!
On this page:
- Choose assessments that support your learning goals
- Clear standards
- Machine-grading & peer-review
- Multiple-choice exams
Choose assessments that support your learning goals
The Field-tested Learning Assessment Guide (FLAG) has detailed information about several different types of assessment and a chart that shows how the different assessment types support different types of learning goals.
The Carleton College “Cutting Edge” Course Design tutorial also contains detailed explanations of several assessment types. Although these sites are aimed at science, technology, engineering and mathematics (STEM) faculty, the advice and resources they contain are not unique to STEM, so instructors from other fields will also find them useful.
Regardless of assessment type, the standards for assessment need to be incredibly clear
Rubrics are a great way to make standards known, to make grading less subjective, and to make grading faster. Furthermore, if you rely on TAs for help with grading, taking the time to construct a detailed rubric will make grading more consistent and transparent across sections. Within Canvas, you can create rubrics, align them with course outcomes (i.e., learning goals), and use them to grade assignments and quizzes.
Need some inspiration for creating a rubric? The educational management system Rcampus hosts a vast public gallery of rubric examples.
Machine-grading and peer-review can lessen the human grading burden
In a large lecture, it can be difficult to construct assignments and assessments that measure student progress in a meaningful way but don’t present an undue grading burden. A multiple-choice exam is probably the easiest assessment type to grade, especially if you have it machine-graded through the Office of Educational Assessment’s ScorePak System (a UW budget number is required). In addition to student scores, the ScorePak report contains a detailed item analysis for test improvement and validation.
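As a rough illustration of the kind of statistics an item analysis typically includes, the sketch below computes two common measures for each question: item difficulty (the proportion of students answering correctly) and a discrimination index (how well the item separates high and low scorers). The function and data here are hypothetical and are not ScorePak’s actual output or format.

```python
import numpy as np

# Hypothetical item-analysis sketch: assumes a 0/1 matrix of responses
# (rows = students, columns = questions). Not ScorePak's actual report,
# just the kind of statistics such reports typically include.

def item_analysis(responses: np.ndarray, group_fraction: float = 0.27):
    """Return per-question difficulty and discrimination indices."""
    totals = responses.sum(axis=1)          # each student's total score
    difficulty = responses.mean(axis=0)     # proportion answering each item correctly

    # Discrimination: compare the top- and bottom-scoring groups on each item.
    k = max(1, int(round(len(totals) * group_fraction)))
    order = np.argsort(totals)
    bottom, top = order[:k], order[-k:]
    discrimination = responses[top].mean(axis=0) - responses[bottom].mean(axis=0)
    return difficulty, discrimination

# Example: 5 students, 3 questions (1 = correct, 0 = incorrect)
scores = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
    [1, 1, 1],
])
difficulty, discrimination = item_analysis(scores)
print("Difficulty:", difficulty)            # fraction of students correct per question
print("Discrimination:", discrimination)    # positive values favor high scorers
```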
For written or project-based assessments, having students review each other’s work can be an efficient way to provide feedback. It also benefits the reviewers by giving them an opportunity to develop analysis skills and expert-like thinking (National Research Council, 2000). You can assign peer reviews through Canvas, and students can even use a rubric that you’ve constructed for the assignment to guide their feedback. Note that as of this writing, anonymous peer review is not possible in Canvas: your students will know whom they are reviewing and who is reviewing them.
The peer-review system SWoRD does support anonymous peer review. It handles many different file types, so it can be used for reviewing presentations, videos, art projects, and the like, in addition to text, and it uses algorithms that prevent students from colluding to impact the class average on an assignment. Note that SWoRD is not free: departments can purchase a SWoRD license for a given number of students, or an instructor can require students to buy their own access.
It is possible to write multiple-choice exams that assess more than simple recall of information
Although multiple-choice exams have a reputation for testing only low-level thinking skills, it is in fact possible to write multiple-choice questions (MCQs) that assess conceptual understanding and the higher-order levels of Bloom’s taxonomy (Bloom, 1956). The FLAG page on multiple-choice questions contains a comprehensive and well-researched description of how to write an effective MCQ.
Students can develop meta-cognitive skills by reflecting on their performance on a multiple-choice test
Alison Crowe, Clarissa Dirks and Mary Pat Wenderoth at UW Seattle developed the Blooming Biology Tool (BBT) for assigning the proper level of Bloom’s Taxonomy to exam questions and other course materials. Once students have applied the BBT to an assessment, they can use the Bloom’s-based Learning Activities for Students (BLASt), which the team also developed, to strengthen their study skills for each level of Bloom’s Taxonomy (Crowe, Dirks and Wenderoth, 2008). Although these tools are specific to the biological sciences, the approach could be adapted to any field.
Joann Montepare, a psychology professor at Emerson College, uses a “self-correcting” approach to multiple-choice testing (Montepare, 2005). After her students complete the exam in class, they are allowed to take a copy of the exam home to correct. They turn in the corrected exam later to augment their in-class exam score. Although Prof. Montepare teaches classes of around 60 students, this approach could easily be adapted to the large-lecture format: students could record their answers on a Scantron form and then take the exam packet home to correct. TAs could grade the corrected exams, but the time burden would be minimal if the TAs are grading only for corrections and not explanations.
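To make the arithmetic concrete, here is a minimal sketch of one way the augmented score might be computed. The half-credit weight for take-home corrections is an illustrative assumption, not a weighting drawn from Montepare’s article.

```python
# Minimal sketch of combining in-class and take-home correction scores.
# The half-credit weight for corrections is an illustrative assumption.

def combined_score(in_class_correct: int, corrected_at_home: int,
                   total_questions: int, correction_credit: float = 0.5) -> float:
    """Percentage score: full credit in class, partial credit for corrections."""
    points = in_class_correct + correction_credit * corrected_at_home
    return 100.0 * points / total_questions

# A student who answered 32 of 50 questions correctly in class and then
# correctly fixed 12 more at home would earn 76%.
print(combined_score(32, 12, 50))  # 76.0
```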
- Field-tested Learning Assessment Guide (FLAG)
- Carleton College “Cutting Edge” course design tutorial discussion of several assessment types
- Canvas help guides:
  - Creating rubrics
  - Aligning rubrics with course outcomes (i.e., learning goals)
  - Using rubrics to grade assignments and quizzes
  - Assigning peer reviews
- Rcampus’s public gallery of rubric examples
- Office of Educational Assessment ScorePak System
- Computer-aided rhetorical analysis tool Docuscope
- Advice on how to assess higher-order thinking skills and conceptual understanding with multiple-choice questions:
- https://mirjamsophiaglessmer.wordpress.com/tag/voting-cards/
- http://chronicle.com/blogs/profhacker/multiple-choice-questions-on-exams/23020
- Tobias, S., and Raphael, J. (1997). The hidden curriculum: Faculty-made tests in science, Part I: Lower-division courses. New York: Plenum Press.
- Wiggins, G. P. (1998). Educative assessment: Designing assessments to inform and improve student performance. San Francisco: Jossey-Bass.
- Balch, W. R. (1998). Practice versus review exams and final exam performance. Teaching of Psychology, 25, 181-185.
- Bloom, B. S. (Ed.). (1956). Taxonomy of educational objectives: The classification of educational goals: Handbook I, cognitive domain. New York: Longmans, Green.
- Friedman, H. (2002). Immediate feedback, no return test procedure for introductory courses. In R. Griggs (Ed.), Handbook for teaching introductory psychology (p. 132). New York: Lawrence Erlbaum Associates.
- McClain, L. (1983). Behavior during examinations: A comparison of A, C, and F students. Teaching of Psychology, 10, 69-71.
- Crowe, A., Dirks, C., and Wenderoth, M. P. (2008). Biology in bloom: implementing Bloom’s taxonomy to enhance student learning in biology. CBE-Life Sciences Education, 7(4), 368-381.
- Montepare, J. (2005). A self-correcting approach to multiple choice tests. Observer, 18.
- National Research Council. (2000). How people learn: Brain, mind, experience, and school (Expanded edition). Washington, DC: The National Academies Press.