Course evaluations have existed at McGill for over two decades, but how effective and useful are they to the university community? That's the question being tackled by a working group headed by Associate Provost (Academic Programs) Dr. Martha Crago (who is also dean of Graduate and Postdoctoral Studies). She recently spoke to the Reporter about the challenge of improving the course evaluation process for the benefit of instructors, administrators and students.
What's the objective of course evaluations?
There are three main objectives. Evaluations can be useful for students in terms of providing relevant information related to their course selection. Evaluations are also a useful tool — but certainly not the only tool — for evaluating a professor's teaching performance. Last, but not least, there is a formative benefit, in that feedback from students can be discussed by the chair/director and the instructor and used to improve an instructor's teaching.
How does the existing course evaluation process work at McGill?
Departments and/or faculties decide on the content of the course evaluation questionnaire. In some faculties, the faculty sets the content and all departments adhere to it. In other faculties, it's done on a department-by-department basis. So we have varying questionnaires, which may suit the varied nature of different disciplines. However, this makes it hard for students to get comparative information across units at a glance. Then there is the question of permission. Professors are asked if they are willing to disseminate the results of their evaluation within McGill. If the answer is yes, the results are disseminated. If the answer is no, the results go only to the professor and the chair. As you may imagine, it's hard to chase down all these permissions, and even harder to do it every semester. Then there are also certain exceptions. There are no course evaluations for courses where the professor has taught at McGill for fewer than three years, or in courses where groups are considered too small to generate meaningful results.
And how's that working for us?
Not well. One of the first things our working group did was review the existing process for course evaluations. We looked at a cross-section of departments to see how many of the courses were actually being evaluated and how many of the evaluated courses' results were available to students. We came up with shocking results. Although McGill policy stipulates that all courses are to be evaluated, data was collected for only approximately 60 percent of the course lectures. Even fewer instructors gave permission for dissemination, so fewer results still were actually available for students to consult. Many courses were not being evaluated at all, and we could not figure out exactly why. We don't know whether students ever filled out evaluation cards, whether the cards were ever taken to Network and Communication Services (NCS), or what happened post-NCS. Because evaluations are handled by so many different parts of the university, there are lots of cracks to fall through. In a nutshell, we do not find the current system very informative for students, since so many courses' results are inaccessible and the output is not user-friendly.
How is the working group attacking this problem?
During the summer, our group worked hard to find ways to improve the system. One of the things that we've piloted is a system called MOLE, the McGill Online Evaluation system, which collects and disseminates course evaluation information electronically. The online system avoids many of the cracks. Under the current system, cards are physically distributed to students in classes and then scanned by computer at NCS. With MOLE, no cards have to be delivered around campus. Students fill out evaluations online from their own computers. The results can be converted into statistics automatically and students can see the results online. Under the leadership of Laura Winer, a member of our working group, we tried out MOLE during the summer semester and are now completing larger trials in the fall and winter terms. The reaction from students has been very positive. The number of students completing online evaluations has gone up and we're continuing to expand MOLE.
What's next for the working group?
We're looking into all the other factors connected with the course evaluation policy and process. That covers everything from disseminating results in a more user-friendly format to collecting professor permissions in a better way. We're also in the final stages of redrafting the policy, which was written in 1980 and revised in 1992 and 2001. We want to make the policy simpler and bring it up to date. If we do switch over to a university-wide online system, we also need a plan for how and when this would happen.
Have your own courses ever been evaluated?
Certainly. I'm a believer in the intrinsic value of course evaluations. I completed course evaluations when I was a student and I've benefitted from the results of evaluations as a professor. I've always found the process to be useful and positive, though I have seen one or two odd comments over the years, like "She really should be taller." I'm working on that.