From Carl Seaquist's "Ahann Ahim," Fall '97

My former department just sent me the student evaluations for the course I taught last summer. Student evaluations are always hard to interpret, and this semester's were no exception. When students have to fill in the little machine-readable ovals, most professors in most courses get average evaluations: lots of threes, plus a couple of twos and fours out of five. What can a person conclude from feedback like this? Everything's basically all right, but students don't know what they want, or else want conflicting things. So this sort of feedback is not much help to the instructor who wants to improve his teaching and is looking to his clients, his students, for that feedback.

I have also had students tell me they always try to give graduate students good evaluations because they know we do not have much power in the system, and they figure we need good recommendations to get jobs in future semesters. I appreciate the sentiment these students express, but such an attitude does little good for either the University or, in many cases, the graduate student himself. At the University of Pittsburgh, for example, student evaluations never go to the department: they are sent directly to the instructor. So there is no need in a case like that to be easy on the instructor. And since graduate students are just learning how to teach, this sort of kindness cuts off one good source of feedback.

Another common criticism is that students like easy courses and are unhappy with challenging ones. This argument has a certain a priori appeal, because it fits the attitude a lot of instructors have toward undergraduates, and it probably has some empirical support. But I suspect that, at least at a school like Penn, such criticisms are exaggerated. After a fairly careful reading of the entries for certain departments in the Penn Course Review's latest Undergraduate Course Guide, I suspect the overall grade given to a course is correlated less with the difficulty of the course and more with the quality of the instructor. In fact, most courses have a difficulty rating between 2.0 and 3.0 out of 4.0, whereas both instructor and course ratings seem to vary more widely. So even if easy courses did receive higher marks, it would appear there are not that many easy courses being offered. The exceptions to this generalization are fairly easy to locate: language- and mathematics-heavy courses tend to be seen as more difficult than general education and survey courses. The fact that Physics 361 (Electromagnetism I), the course that made me abandon a Physics major some years ago, is given a difficulty rating of only 3.3 indicates to me that Penn students in fact DO expect to do a fair amount of work in their courses.

The positive correlation between instructor and course rankings is more interesting than the negative correlation with difficulty. As an undergraduate, I always thought that most learning was supposed to go on outside of the classroom. After all, one hour of class time is supposed to mean two to three hours of homework, plus time studying for exams or writing papers. But not everyone feels this way. Compare the rankings in the course guide with the appended commentary, and you can see which aspects of teaching are of the most concern to Penn undergraduates.
Instructors with good ratings are usually praised for being interesting and for giving good lectures, whereas instructors with low ratings tend to be seen as boring. Course design and selection of readings are mentioned in the written comments, but they seem to correlate much less with instructor quality than do lecturing skills and the appearance of accessibility to students.

Students are, as I said earlier, our clients and our customers. One of the great things about the corporatization of the academy is that this is now becoming more widely acknowledged. Because they are our customers, we should be accountable to them for the education we provide. And quality of instruction should certainly be considered in hiring and promotion decisions.

If we recognize which aspects of teaching student evaluations are good at measuring, then they become a necessary and useful part of the process of judging instructors and departments. But they are not transparent documents, and simply to look at the aggregate numbers they provide is to misuse an important source of information on teaching at the University.

If a university wants to evaluate the quality of its teaching, it must first hold a public discussion of what constitutes good teaching, then create means of evaluation that focus on the qualities it wants to measure. The current system used at most universities does not do this. It tacitly presumes that everyone knows what constitutes good teaching, and therefore assumes that all measures of teaching quality are compatible. Such an approach will not improve teaching, and might end up, if applied universally, selecting teachers for odd and restricted abilities.
