Student course evaluations opened Friday, Nov. 23, for students to fill out online through Blackboard and will close after the last day of classes, Friday, Dec. 7. At the College of William and Mary, these evaluations are used to assess teaching effectiveness and to inform decisions about professorial advancement. However, the ability of students to accurately and fairly evaluate the quality of their professors has consistently been called into question.
How are student course evaluations used?
Student evaluations are used by department chairs and promotion committees in decisions about tenure, merit and promotion. They are also used by faculty as a way to gain feedback from students about their perceived performance during the semester.
Each department has the freedom to create its own questions for the evaluation; the only requirement is that it include a question about teaching effectiveness. According to Planning Analyst Carol MacVaugh, there are over 600 different questions across all departments. Departments have complete autonomy to alter the questions for their courses; however, most questions in circulation were inherited from the old paper system and continue to be used today.
Some programs have noticed that different classes within their department necessitate different methods of evaluation. Art and art history department chair Sibel Zandi-Sayek said that her department decided to split the student evaluations depending on whether a course was part of studio art or art history. Likewise, Alexander Prokhorov, associate chair of educational policy in the modern languages department, said that his department decided to split evaluations between content courses and language courses.
“Every department program does its own thing,” Dean of Undergraduate Studies Janice Zeman said. “That’s one of the wonderful things about William and Mary is that we have a lot of department and program autonomy. Of course that makes for a much more complicated, complex system because everybody is its own little fiefdom. But it does allow departments and programs to tailor their questions to ask exactly what it is they are trying to get at.”
The results are used in official evaluations and promotions of faculty and are included in tenure dossiers. A professor’s student evaluations serve as one of two metrics of teaching during pre-tenure review, tenure review and promotion. Professors are also reviewed on their merit every year, and student evaluations may be used in these reviews as well.
“There are many different layers in the tenure process, so at each layer those teaching evaluations are considered,” Zeman said.
However, the entire evaluation is not always included. According to Zandi-Sayek, in the art and art history department, the only answers reported are those in response to the teaching effectiveness question, the single standard question across all departments. The other quantitative questions serve to help students gradually recall what they learned over the course of the semester and encourage them to think about the class statistically. She said that when assessing these scores, faculty committees pay specific attention to the progression of a professor’s rating in order to evaluate their growth and trajectory over time.
Despite this sometimes narrow application of evaluations, professors themselves review their own course evaluations in much more depth. The qualitative portion of the questions provides professors with more comprehensive student feedback. Zandi-Sayek said that because her department values these questions so highly, she would like to move away from numerical evaluations toward more qualitative feedback.
“Students are generously sharing their comments and we take these very seriously,” Zandi-Sayek said. “They really give you a good sense of what is going on in that classroom.”
According to Zeman, if a professor receives consistently negative evaluations from a significant number of students in their class, it might trigger action. After noticing these red flags, the chair of the department or the program director may initially contact the professor. They could meet to discuss whether the issues are systemic or the result of extenuating circumstances, and what changes could be made to address the cause and improve the class.
“The red flags are when you have a consistently negative message that’s held by a lot of students, and we certainly do take note and look at those,” Zeman said.
According to Anne Pietrow ’20, being able to provide feedback on student evaluations is one way students have the power to make change on campus and express their agency.
“I want the professors who I think have done a fantastic job to be rewarded and I want those who have done a poor teaching job to be reprimanded or not offered a full time position here,” Pietrow said in a written statement.
Professors also appreciate student feedback because it helps them evaluate what has been working well in their courses and what they need to tweak, Zeman said. Course evaluations provide an anonymous forum for students to tell professors what they really think.
“I think it’s a really great form to get some feedback if you’re trying new things and students often have great feedback in terms of how you can make a class better,” Zeman said. “And it’s also nice to know what you’re doing well. So just like students like to get feedback on papers and assignments about what they’ve done well, as well as corrective feedback, it’s the same for faculty.”
Student evaluations: Valuable metric of professor performance?
Student evaluations measure student satisfaction more than anything else, according to the Chronicle of Higher Education. Student answers are also prone to bias, even regarding supposedly empirical questions. Furthermore, questions often ask students to extrapolate beyond their knowledge of the subject.
All of these factors pose problems for the evaluations’ influence over significant decisions of hiring and promotion. This raises the question of whether student evaluations should be weighed as heavily as they are now or changed to more accurately assess professor performance.
“There’s a lot of literature and debate about what is the best way to evaluate one’s classes; it’s actually a really contentious issue among faculty, and it sounds like also for students,” Zeman said. “I don’t think we’ve arrived, nationwide, at a good system for evaluating classes. I think this is the best we have right now, but there’s certainly lots of room for improvement.”
A potential source of inaccuracy is that the course evaluations are not mandatory. Even after several reminders and much promotion, both over email from Zeman and often from professors during class, the response rate is roughly 70 percent.
The missing 30 percent of students is a potential source of skew in the evaluation results, because students who feel strongly about a class or professor might be more inclined to fill out an evaluation than students who feel more neutral.
According to MacVaugh, the student response rate dropped when the College moved from paper to online evaluations in the 2012-13 academic year. She said this might be due to students feeling less personal responsibility when filling out digital forms as opposed to physical ones, as well as possible fears about anonymity.
The administration has attempted to quell these fears by adding language to emails about the evaluations that emphasizes students’ anonymity. Only two people at the College have access to the identities of respondents: MacVaugh and a fellow administrative employee.
According to Zandi-Sayek, there is also a correlation between how a student rates their professor and their anticipated grade in the class. Students may tend to be more flattering of their instructor if they have been receiving better grades in the class, and alternatively may wish to punish a professor that they do not believe graded them fairly.
“That doesn’t mean that the professor is not good and effective,” Zandi-Sayek said. “Sometimes, and oftentimes, the professor might be holding students to very high standards. And we know enough to be able to say, ‘Well here is a student who has been dissatisfied clearly and does not want to live up to these standards’ … with professors who hold students to high standards you often see perhaps evaluations that are slightly less flattering.”
Zandi-Sayek said that a further limitation of the evaluations is a result of their quantitative nature. There are open-ended questions as well, but these usually come at the end and are few in number.
Most of the questions ask students to rank professor performance on a scale. These numerical responses are then aggregated into summary statistics within departments, such as the mean and median.
MacVaugh said that students filling out these evaluations may interpret a specific numerical score differently than their classmates. This difference creates a problem when attempting to interpret quantitative data without context, and even more so when that data is averaged with other students’ responses that may not adhere to the same scale.
“It’s important to give students a voice in their own education, so on the one hand this is a very important tool but, again, the reservations about the metrics,” Zandi-Sayek said.
Madison Miller ’20 said that she takes the evaluation process seriously and strives to provide valuable feedback in order to improve the College’s academics and the experiences of students who come after her.
“I give honest feedback based on my experiential knowledge of that course [and] professor to help our school improve, not only for the professor, but for future students too,” Miller said in a written statement.