John Powers ’26 is a Public Policy major hailing from Brooklyn, N.Y. He is a Resident Assistant in Hardy Hall, a member of the Undergraduate Moot Court competition team and a member of the Sigma Alpha Epsilon fraternity. John is a huge Adele fan. Email him at jdpowers@wm.edu.
The views expressed in this article are the author’s own.
Change is in the air as students at the College of William and Mary return from winter break, setting the stage for a new semester full of fresh opportunities and growth. With classes beginning, new course widgets adorn students’ Blackboard pages, and our campus grows lively again. Some things never change, though: the participation grade remains a mainstay of syllabi and a perennial source of discontent for students. As syllabus week commences, now is an appropriate time to examine the problems with participation grades.
Before examining those issues, it is important to acknowledge that the idea of evaluating a student’s communication skills is entirely reasonable. Public speaking is a vital tool needed for a well-rounded education and a successful career. The idea that we should axe participation grades because some students feel uncomfortable speaking in front of others is akin to saying exams should be abolished because they cause anxiety.
Student participation spurs engagement in the classroom, making discourse more exciting. If discussion is moderated successfully, participation can help students make connections between different ideas and improve understanding of the material. Non-oral exercises can also improve the quality of learning.
The problem is not participation grades per se, but participation grades in practice. Consider when participation grades are posted. All semester, students are told they should come to class prepared and ready to share their thoughts on the assigned materials or respond to questions from the professor. Typically, students only know how they performed on this aspect of the course at the very end of the semester. By grading participation last, professors leave no room for their students to improve.
Because this grading practice is a cemented feature of higher education, it is easy to lose sight of how unreasonable it is. Imagine taking a calculus class where your final course grade is the average of a few exams. If you did not know your grade on each exam until the final course grade was posted to Banner, how could you assess your achievement in the course or adjust your preparation for future exams? The participation grade is marked by a similar lack of feedback. While participation is not usually weighted as heavily as exams, that is no excuse for withholding feedback. Students receive feedback, whether through a grade or written comments, on all sorts of assignments, big and small. Participation should not be an exception.
Moreover, the delayed release of participation grades also contributes to an imbalanced professor-student relationship. A student wishing to challenge a grade on a test, assignment or presentation could point to an identifiable deliverable to support their claims. Likewise, the professor could use that deliverable to defend the original grade.
Yet in the case of participation grades, there is nothing substantial to point to. In my experience with various discussion-heavy courses, most professors do not have the time to moderate a discussion, present content and write detailed participation notes all at once. They may note whether a student spoke, but it is unlikely they can recall the quality of each student’s contribution in each class when it is time to grade. This can easily produce grades that misrepresent a student’s actual participation. My intention is not to paint professors as categorically unable to grade fairly. Rather, the point is that grading participation is a uniquely challenging endeavor that carries a higher risk of an unfair grade.
Indeed, University of Notre Dame professor James M. Lang echoed this point in a 2021 article for the Chronicle of Higher Education. He rightly noted that participation is “subject to too many biases.” As the Columbia Business School’s Arthur J. Samberg Institute for Teaching Excellence said, participation grades could be affected by the recency effect, in which a professor might remember a student who participated more at the end of the semester than one who participated more at the beginning. Biases like these can cloud judgment, and combined with a lack of identifiable deliverables for an abstract and subjective concept like participation, they can yield an unfair result.
A final factor to consider is the profound lack of expectation-setting with respect to participation grades. Syllabi will usually say something along the lines of “exemplary participation means coming to class prepared to share questions and thoughts with others,” yet this statement is unclear. Is it simply sharing those thoughts that constitutes exemplary participation, or the quality of those thoughts? Unclear written instructions on assignments are usually mitigated by the fact that professors discuss what they like to see in those assignments during class time, providing students with valuable insights. That is unfortunately not a feature of participation grades.
This lack of expectation-setting has a ripple effect: unsure of what they should be doing, students over-participate in an effort to secure a good grade, diminishing the quality of discussion. When that happens, not everyone has a chance to make their voice heard.
So, what could be done to solve this problem? For one, Professor Lang stopped grading student participation in his classes. I wrote another op-ed arguing for the use of a completely different discussion format in more classes. However, these solutions, even with their merits, are not scalable. Incremental steps that apply to a broader set of circumstances, on the other hand, can make a real difference.
Professors could start by providing more feedback, which could be as simple as splitting the participation grade into two grades, one before fall or spring break and one after. That way, students have a chance to change their behavior and work towards a higher grade. This is a no-brainer.
We might also consider shifting emphasis to other means of participation in addition to oral contributions. I had one professor who based our participation grade on both in-class discussion and Blackboard blog posts. The K. Patricia Cross Academy offered another idea called the 3-2-1 technique, in which participants write three things they learned in the lecture, two things they found interesting from the lecture, and one question. Approaches like these mitigate the lack of deliverables problem with participation grades, which can make grading more accurate and grading appeals easier.
Clarifying expectations should be another goal to work towards. It would be unduly burdensome to ask professors to devise complicated rubrics, but more guidelines would certainly help. Professors should expand upon possible grade criteria such as engagement with assigned readings, relevance of class comments, concision of class comments and civility. Not only should these expectations be clearly spelled out in the syllabus, but they should also be discussed in class, just as professors would do with other assignments.
This semester, let’s improve participation grading by providing more feedback, diversifying criteria and clarifying expectations. It’s long overdue.