Interpreting Student Evaluations
One way to assess our effectiveness as teachers from the student's point of view is through student evaluations. Yet student evaluations can be confusing at best and discouraging at worst, especially given the effort that we put into our teaching. Talking through your evaluations with a trusted colleague or a McGraw Center consultant can help offset the complex reactions sometimes set in motion by reading them. Keeping our perspective is a necessary first step to making practical use of evaluations. Rather than viewing student evaluations primarily as judgments of teaching performance, we may find it more meaningful to look at student reports as reflecting the spectrum of ways that students as novices learn and think within our disciplines. Below are some suggestions to help you interpret your evaluations and use them most effectively to inform your teaching and course planning.
Thinking through the numbers
Looking at the distribution of numerical ratings across an item yields more insight than noting only the average. Although the average is better suited than other statistical measures to giving an overall impression of students' views on an issue, it may not reflect much of a consensus.
Making sense of the comments
Accepting the positive and interpreting the negative with caution can help you maintain balance in your view of the overall tone of the comments. Because student evaluations are anonymous, positive comments are usually genuine, and you should not minimize their importance. Extremely negative comments, on the other hand, can reflect pressures students feel and their dissatisfaction with a broad range of educational issues above and beyond your teaching, so don't overemphasize them.
Organizing your student comments can help you make sense of the variety of responses. You may find it useful to read all of the responses to each question together; for example, reading all the comments under “strengths of the course” may help you see which aspect of the teaching process students found most effective (e.g., course structure, faculty/student interaction, or feedback on written work). Likewise, are there patterns that emerge when you read what students listed as weaknesses of the course? If so, these may be areas that you want to think about further.
Comparing the organized written comments with the quantitative feedback can help you differentiate between idiosyncratic student responses and more general learning issues. Do the written comments help explain the overall ratings on particular items? Are the comments consistent or variable? Recurring comments may help you identify a specific teaching/learning issue, whereas comments that range from very positive to very negative may reflect differences in students' expectations of the course, their backgrounds, or their intellectual development or preferred learning styles.
Expanding your sources of student feedback
Giving a mid-semester evaluation and responding to student suggestions at that time can help you identify issues early enough to make productive changes and communicate your course goals more clearly. These evaluations are used only by the instructor and may be designed as open-ended responses or rated items. (See the McGraw Center web site, www.princeton.edu/mcgraw, for some examples.)
Gathering more information about the meaning of your evaluations after the course is over may provide you with significant insights. You may ask a group of students to get together with you at the beginning of the next semester and elaborate on some of the suggestions or concerns raised in the evaluations. To encourage students to speak more freely, you may ask a McGraw Center consultant to conduct such a focus group.
For additional information on gleaning useful information from your evaluations, please contact the McGraw Center for Teaching and Learning at 609-258-2575 or email@example.com.