More On Evaluating Student Evaluations

Braga, Paccagnella and Pellizzari’s (2014) article is a useful contribution to the debate about student evaluations. The research does have limitations, e.g. it is based on a single school. Furthermore, the very strength of the article (random assignment of students to classes) also means that the context may be unrepresentative. My most significant quibble is that the authors use “objective measure of teacher effectiveness” (page 72) to describe later student grades. Grades in later classes are not an objective measure of teaching quality. Nor are higher grades in later classes the objective of education. It is quite possible that students learned more but got worse grades in later classes. (Learning to think rather than obsess about grades is one of the key things we teach.)

That said, the authors have made reasonable choices given the constraints of real-world research. My main criticism is that their recommendations are generally impractical or may create more problems than they solve.

They suggest peer review of teaching, which makes me nervous. I worry that academics already spend more time congratulating and denigrating each other than engaging with non-professors.

They suggest questionnaires administered long after the end of a course. I’m not convinced even they think this will cut down on bias: “Obviously, this would also pose problems in terms of recall bias and possible retaliation for low grading” (page 86).

They suggest weighting the opinions of good students more heavily, while noting this is at odds with evaluations being anonymous. Interestingly, good, motivated students seem not to reward “easy” professors. Trying to ensure students are motivated should help alleviate the problems with student evaluations that they note. (Plus, forcing curves stops professors “buying” positive evaluations with easy As.)

They also suggest having exams graded by someone other than the teacher and using new exam formats. These may be worth exploring, but the obvious problems with these alternatives mean it isn’t clear the change would be for the better. A fair conclusion might be that we should test alternative approaches.

My main worry is that casual readers might conclude that student evaluations should be scrapped. The real test of whether student evaluations should be used is not whether they are accurate in some academic sense. The real test of whether they represent good policy is whether they lead to a better world. Randomly assigning students to classes is not the experiment that answers that question. We would need to randomly assign policies to schools: some would use student evaluations and others would not. Over time we would observe which deliver better results. My hunch is that student evaluations encourage professors to take teaching more seriously, lifting the quality of education. These evaluations could therefore be “unfair” to professors yet still be a good idea if they raise the quality of education.

Randomly assigning policies to schools would be a great experiment to run. Note that I, too, can make impractical recommendations.

Read: Michela Braga, Marco Paccagnella and Michele Pellizzari (2014), Evaluating Students’ Evaluations of Professors, Economics of Education Review 41, 71–88.