13. Engaging the Fear: How to Utilize Student Evaluations, Accept Feedback, and Further Teaching Practice
Courtney Vengrin
Introduction
While nearly every campus and class conducts evaluations of teaching in some capacity, the effect and use of these evaluations vary widely. The majority of these evaluations are done at the end of the semester, and if any changes are made, they only impact future students. This feedback is also difficult for faculty to digest: it can be scary and uncomfortable to “look under the hood” and see what is working and what is going wrong. Furthermore, these evaluations are often discussed in high-stakes contexts, such as annual reviews. In this chapter, we dissect the teaching evaluation process and ask critical questions: Are these evaluations helping us become better educators? How are these data used? Is this process equitable for students and instructors? How should evaluations be framed? Should students be trained? What about the mid-semester evaluation process? Are there other options? What improvements can we make?
Teaching evaluations are useful, but they have to be structured, reviewed, and administered in the appropriate context to garner maximum value from both the educator’s and the student’s time. Research on both end-of-semester and mid-semester evaluation processes will be discussed, a variety of instruments and approaches will be presented, and an outline for training students will be included.
This chapter will discuss…
- The history of student evaluations of teaching
- The collection process for student evaluation data
- How to create a plan for utilizing your student evaluation data
History of Student Evaluations of Teaching (SETs)
The methods by which we determine what “good teaching” looks like have been the topic of much debate for almost as long as the field of education has existed. As such, there are a multitude of techniques for evaluating the effectiveness of one’s teaching, ranging from peer evaluations and committee reviews to the Student Evaluation of Teaching (SET). Teaching evaluations take many forms throughout K-12 and higher education classrooms and are often used both to identify improvements to be made and to inform decisions about promotion and advancement (Dodeen, 2013). One of the most common metrics by which teaching is judged is the SET.
Much of the current literature discusses appropriate use of the SET, but there is no single, uniform means by which to assess and utilize student evaluations of teaching (Braga et al., 2014; Dodeen, 2013). Traditional evaluations of teaching are often done at the end of the semester, collected, and left unread until an official review by administration or some other outside impetus forces us to confront how our students felt. It’s all too easy to put off reading and reflecting on these evaluations, as we typically receive the information near holiday or summer break. We have other things to do, family to see, grants to write. Additionally, we question the usefulness of these evaluations, wondering, for example, whether it is just a popularity contest. Students also question how these evaluations are used, if at all (Brown, 2008).
In addition to the traditional end-of-semester SET, mid-semester evaluations are a burgeoning topic in higher education. By using a mid-semester evaluation, faculty are able to adjust their teaching practice to better address current student needs and thereby provide a better educational experience (Blash et al., 2018). Quality teaching means constant improvement, and that improvement should not come only at the end of the semester; it can and should be a continual process of reflection.
Finally, we would be remiss not to acknowledge the place of equity in the subjective nature of student evaluations of teaching. Research has shown that SETs tend to favor the white male instructor archetype (Boring, 2017; Dodeen, 2013). One study in particular used an online class in which students had no face-to-face time with the instructor but were given an instructor name that sounded either traditionally female or traditionally male (MacNell et al., 2015). The course content was identical, but the presumed-male instructor received higher ratings. Female-presenting instructors are also more likely than male-presenting instructors to receive comments on their appearance.
Any subjective measure is open to bias, and the causes of that bias vary widely, from implicit bias to bias stemming from an expected grade (Boring, 2017; Nasser-Abu Alhija, 2017). We must recognize that these measures are imperfect and utilize the data accordingly. The appropriate use of SET data, taking these sources of bias into account, is discussed later in this chapter, along with some potential methods of mitigating bias in SETs.
Collecting SETs
Almost every institution uses some form of teaching evaluation. These processes are often driven by a top-down approach, resulting in less-than-ideal feedback for the instructor. As a new instructor, make it a practice to look into the form or forms available, understand the process, and see what options exist. Are you able to add additional questions? Are there department-specific questions? What is the review process like for these forms? Can you conduct your own “off cycle” evaluation? Become involved in the process so that you can advocate for evaluations that will benefit your teaching and your students.
It is important to understand the form: what questions are on it, when it will be available to students, when you as the instructor will receive the results, and who else receives them. In some cases, departments or colleges may limit the number of evaluations or who is evaluated. For example, if you are a veterinary faculty member specializing in dentistry, you may guest lecture for only two hours all semester, with the rest of your teaching done in a clinical setting. Due to survey burden, there may be rules in place stating that instructors with fewer than five hours of teaching time are not evaluated. Or perhaps you are a graduate student doing some guest lecturing; you have only one hour of teaching time, but you need evaluations of your teaching to add to your job packet for future employment. Seek out assistance from your department to see if exceptions can be made. Additionally, with “off cycle” evaluations, it is important to know the timing. If you guest lecture in the third week of the course but the evaluations do not become available to students until week 12, students are unlikely to remember you well and are therefore more likely to give inaccurate feedback.
Mid-semester evaluations offer an alternative to waiting until the end of the term. By “taking the temperature” of a class at the midpoint of the semester, faculty can check in and see what students are enjoying and what changes may need to be made. Students also respond positively to mid-semester evaluations because they can directly benefit from the results, and they are therefore more inclined to respond; at the end of the semester, they are more apt to skip the survey, get their grade, and head on to the next course (Brown, 2008; Frick et al., 2009). When assessment and feedback opportunities are provided during a course rather than at its end, students gain the power to advocate for their own educational experiences, and faculty are empowered to make changes that better the educational environment for all involved.
Engaging with SET Data
Once we have collected some form of SET data, whether through our own means or those of external forces, the next step is to engage with it. Too often it sits ignored for a long while as we head into the holiday break or the summer research season. But eventually you have to look at it.
Engaging with our feedback data as part of a reflexive teaching practice is one of the most beneficial things we can do, but it is often one of the most difficult. Looking at our evaluations and feedback often brings up feelings of judgment, apprehension, and inadequacy. To push past this discomfort, it is important to remember that the data are not there to sit in judgment of you. They are there to help you, and for you to respond to. No, not directly to those students, but in most cases to the next class. Your response comes in the changes you make, the ways you continue to improve over the course of your career. Sometimes the feedback is painful. Sometimes it’s funny. Sometimes it could be more constructive, but it’s feedback nonetheless. It’s an assessment, and those data need to be used: not just by administration and end-of-semester reports, but by you. How will you respond? How will you use YOUR data?
Sometimes, ripping off the Band-Aid is the best approach. Just sit down with a strong cup of coffee and skim through it. You may be surprised. Often, you will find at least one compliment or positive note to move forward with. Sometimes it’s even just a simple “Thank you” that helps us feel appreciated. Find those and hold on to them.
We know that, absent any incentive, it is typically the most satisfied and most dissatisfied individuals who respond to evaluations. They love you or they hate you. So don’t go in expecting all sunshine and daisies. Expect to get feedback that is less than pleasant, and know that you have room to improve. We all do.
When scoring SETs, especially at the departmental or program level, ranking is not advised (Sachs & Parsell, 2014; Schmelkin et al., 1997). This includes ranking within a course for which you are the instructor in charge. The scores are typically on a Likert scale, and when they are reported to faculty, they have usually been averaged, which some recommendations consider inappropriate for Likert-scaled items. To further extrapolate such a score into a ranking is not appropriate. Furthermore, ranking employees, faculty, or staff in general lowers morale and is inadvisable (Sachs & Parsell, 2014). It is also important to recall the bias present in SETs when reviewing them as part of a department or program review. Given these issues, the data gathered from these subjective measures are most useful as formative feedback for the instructor and course, not for making personnel- or department-related decisions.
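Where you control how scores are summarized, one commonly recommended alternative to mean-scoring an ordinal Likert item is to report its median and full response distribution. Below is a minimal sketch of that idea in Python; the ratings are hypothetical, and this is an illustration rather than a prescribed reporting format.

```python
# A minimal sketch: summarize a Likert item with its median and
# frequency distribution instead of a single mean, since the
# underlying scale is ordinal. Ratings below are hypothetical.
from collections import Counter
from statistics import median

ratings = [5, 4, 4, 5, 3, 2, 5, 4, 1, 4]  # one SET item, 1-5 scale

print("Median rating:", median(ratings))
print("Distribution:", dict(sorted(Counter(ratings).items())))
# Output: Median rating: 4.0
#         Distribution: {1: 1, 2: 1, 3: 1, 4: 4, 5: 3}
```

The distribution makes visible what a lone average hides: here, most students chose 4 or 5, but a few outliers would have dragged a mean-only report downward without explanation.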
However, scores, even mean scores, can be useful if we first work to understand them in context. What was going on during this time? What occurred this semester? Were there any abnormalities or issues that arose? Were there any large-scale world or campus events that may have impacted students? What bias may be present within the data? Additionally, how was the SET distributed? Was it available for a long or a short period of time? Was it discussed in class or only sent out via a link? Research indicates that all of these variables can impact the score; in one study, even the weather significantly impacted teaching effectiveness scores (Braga et al., 2014).
Moving past the somewhat arbitrary score, the bulk of the SET tends to lie in the nebulous comments section. The comments can be the most daunting and analytically confusing section to engage with, given that students’ responses vary widely from constructive feedback to inappropriate comments on appearance. So how do you examine the comments?
There are two methods for examining the comments: an in-depth analysis and a surface-level view. The end use of the insight gained should be considered when determining which approach to take. Did you try something new with students and want to see how it went? Was there a specific question about a portion of your teaching that you added to the evaluation form? What level of data would be most useful for your purposes? Answering these questions will help determine how to go about breaking down the comment section.
Surface analysis
To do a surface analysis, all you need is a quick read-through of each comment to see where the majority of student opinion falls. Get a blank piece of paper (or a spreadsheet) and divide it into three columns labeled “theme,” “positive,” and “negative.” Do a quick scan of the comments: read each one, write the general theme down in the first column, and then place a check or tally mark under either positive or negative. Each time you come across the same theme, add a new mark in that theme’s row, and create new themes as you go. Using this method, it should not take very long to paint a concise picture of what is going well and what changes you may want to make. See table 13.1 for an example, and the code sketch that follows it.
Table 13.1: Sample student comments with assigned codes.

| Comment | Code |
| --- | --- |
| Not give as much time for questions. | Pace & Timing |
| She was very enthusiastic and eager to improve my education. She also presented the material in a way that was accessible to me. | Engagement |
| Overall, the instructor did a great job. She was very passionate and knowledgeable about the material which made learning more interesting and enjoyable. | Engagement |
| The PowerPoints were hard to follow. The font was too small from where I was sitting. | PowerPoints |
| You talk really fast sometimes and it was hard for me to keep up as English is not my first language. | Pace & Timing |
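If you prefer a script to pen and paper, the same tally can be kept programmatically. The sketch below uses Python rather than the paper method described above; the theme and sentiment labels mirror the hypothetical codes in table 13.1, and you would assign them yourself as you read each comment.

```python
# A minimal sketch of the surface analysis tally: record one
# (theme, sentiment) pair per comment as you read, then count them.
# Themes here follow the examples in table 13.1.
from collections import Counter

coded_comments = [
    ("Pace & Timing", "negative"),
    ("Engagement", "positive"),
    ("Engagement", "positive"),
    ("PowerPoints", "negative"),
    ("Pace & Timing", "negative"),
]

tallies = Counter(coded_comments)
for (theme, sentiment), count in tallies.most_common():
    print(f"{theme:15} {sentiment:10} {count}")
```

The output is the same concise picture the paper version gives: which themes recur, and whether the marks pile up on the positive or negative side.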
In-depth analysis
In some cases, an in-depth analysis will be more useful. You may be just starting out in your teaching career and want to better understand how things are going, or maybe you tried a new technique like a flipped classroom and want to use the feedback to decide whether changes are needed. As with the surface analysis, you can create an Excel spreadsheet of your comments; often, you can get the comments already loaded into Excel from your evaluation software or the entity that collects the evaluation data. Mirroring the surface analysis, you will create additional columns beside the comment column; however, instead of positive and negative, you will create two to three theme columns, since lengthy comments often address several elements. As you read through your comments, note the themes. Once you have finished, use Excel’s “sort” function to sort the themes alphabetically. You can then summarize all the comments within each theme and determine what is working well and what you may need to change, making additional notes on which areas you want to focus on moving forward. It is sometimes useful to also calculate the percentages of the themes, thereby quantifying the data for further review. In doing this, you may find, for example, that 40% of students commented that the pace of the course was too slow, while 10% found it too fast. See what changes can be made, and save this file for comparison against your next SET. It can also serve in your annual review as a metric of your efforts and of the changes you have made to better your teaching practice.
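The workflow above assumes Excel; the sort-and-percentage step can also be scripted. Here is a minimal sketch in Python under assumed conditions: the file name and the theme-column layout are hypothetical choices for illustration, not a format any particular evaluation system exports.

```python
# A minimal sketch of the in-depth analysis step: read coded comments
# from a CSV (hypothetical file and column names), count each theme,
# and report the percentage of comments touching on it.
import csv
from collections import Counter

theme_counts = Counter()
total_comments = 0

# Assumed layout: one row per comment, with assigned themes in
# columns "theme1", "theme2", "theme3" (left blank when unused).
with open("set_comments_coded.csv", newline="") as f:
    for row in csv.DictReader(f):
        total_comments += 1
        for col in ("theme1", "theme2", "theme3"):
            if row.get(col):
                theme_counts[row[col]] += 1

for theme, count in sorted(theme_counts.items()):
    pct = 100 * count / total_comments
    print(f"{theme}: {count} of {total_comments} comments ({pct:.0f}%)")
```

Saving the script's output alongside each semester's file gives you the same before-and-after comparison the chapter recommends, with the percentages already computed.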
However, when engaging with this process, remember to take all the feedback, good and bad, with a grain of salt. You may be perfect in one student’s eyes: the one who got an A, loves the discipline, knew you from another class, and thinks you walk on water. But are you really everything that student makes you out to be? No. And are you really the horrible, terrible, mean imbecile the other student described? Also no. Where are you? Probably somewhere in between. So be humble and self-reflective. Know that you aren’t perfect, but know that by reading and analyzing your feedback, you can make your teaching practice better. Remember that these are your data: an assessment of your efforts (but not of you as a person, remember that). So use them to your advantage.
Using SET Data
There are several ways you can use your SET data once you’ve gone over them. One option is to add them to your teaching portfolio as evidence of your teaching effectiveness, for example as summarized student evaluations that include the response rate and the relationship to the departmental average. You may also want to include a select number of written comments from students on class evaluations, supplemented by comments from a peer observer or a colleague teaching the same course.
Another use of your SET data is for course improvement. This will take some additional searching and analysis to determine what data you have access to. Many institutions use two forms of evaluation: a teaching evaluation and a course evaluation. If you are the lead instructor for the course, you will most likely receive both; however, if you are a secondary instructor, you may not receive the course evaluation. It is a good idea to reach out to the lead instructor to ask if you might see the course-level data, which may need to be amended to remove comments about other faculty members. Once you have access to these data, you can analyze the comments regarding your teaching as well as those regarding the course structure. What changes can you make? What are reasonable suggestions? What worked well? These data also offer a great opportunity to demonstrate to future students that you are listening! You may wish to take a few comments and add them to your syllabus, or go over them on the first day of class and let students know about the changes you have made based on last year’s comments. Remember, these are YOUR data, there to help you improve your teaching as well as your course. Use them! Don’t just let them go to waste!
A final use for evaluation data is as a component of overarching programmatic improvement, often at the department or college level. The process involves reviewing the data as a whole. As stated earlier, this department-level review should not rely solely on SET data but should build a full picture of the program, including interviews, observations, and document analysis. Once the data are collected, areas for improvement can be pinpointed, much as with a smaller, classroom-level evaluation. Once these areas are identified, leadership should meet with teaching teams as well as individuals to discuss the findings and agree upon a direction forward. Done correctly, this should be a positive experience: evaluations and evaluation data should never be used to punish or shame departments, programs, or individuals.
Conclusion
In summary, remember that your SET data are just that: yours! You can use them to help improve your teaching, your course, and your department. As with all subjective measures, not all of your evaluators will agree. Find the evaluation comments and sections that you feel are the most true and applicable to your teaching. Harness your data and make improvements where you can. Never stop growing as an educator!
Reflection Questions
- How can you make better use of your SET data?
- If you could ask your former students for feedback on any one thing related to your teaching, what would it be?
- What parts of your current SET process are working well for you? Which parts could be improved?
References
Blash, A., Schneller, B., Hunt, J., Michaels, N., & Thorndike, J. (2018). There’s got to be a better way! Introducing faculty to mid-course formative reviews as a constructive tool for growth and development. Currents in Pharmacy Teaching and Learning, 10(9), 1228–1236. https://doi.org/10.1016/j.cptl.2018.06.015.
Boring, A. (2017). Gender biases in student evaluations of teaching. Journal of Public Economics, 145, 27–41. https://doi.org/10.1016/j.jpubeco.2016.11.006.
Braga, M., Paccagnella, M., & Pellizzari, M. (2014). Evaluating students’ evaluations of professors. Economics of Education Review, 41, 71–88. https://doi.org/10.1016/j.econedurev.2014.04.002.
Brown, M. J. (2008). Student perceptions of teaching evaluations. Journal of Instructional Psychology, 35(2), 177–182.
Dodeen, H. (2013). Validity, reliability, and potential bias of short forms of students’ evaluation of teaching: The case of UAE University. Educational Assessment, 18(4), 235–250. https://doi.org/10.1080/10627197.2013.846670.
Frick, T. W., Chadha, R., Watson, C., Wang, Y., & Green, P. (2009). College student perceptions of teaching and learning quality. Educational Technology Research and Development, 57(5), 705–720.
MacNell, L., Driscoll, A., & Hunt, A. N. (2015). What’s in a name: Exposing gender bias in student ratings of teaching. Innovative Higher Education, 40(4), 291–303. https://doi.org/10.1007/s10755-014-9313-4.
Nasser-Abu Alhija, F. (2017). Teaching in higher education: Good teaching through students’ lens. Studies in Educational Evaluation, 54, 4–12. https://doi.org/10.1016/j.stueduc.2016.10.006.
Sachs, J., & Parsell, M. (2014). Introduction: The place of peer review in learning and teaching. In J. Sachs & M. Parsell (Eds.), Peer Review of Learning and Teaching in Higher Education (pp. 1–9). Springer Netherlands. https://doi.org/10.1007/978-94-007-7639-5_1.
Schmelkin, L. P., Spencer, K. J., & Gellman, E. S. (1997). Faculty perspectives on course and teacher evaluations. Research in Higher Education, 38(5), 575–592.
- How to cite this book chapter: Vengrin, C. 2022. Engaging the Fear: How to Utilize Student Evaluations, Accept Feedback, and Further Teaching Practice. In: Westfall-Rudd, D., Vengrin, C., and Elliott-Engel, J. (eds.) Teaching in the University: Learning from Graduate Students and Early-Career Faculty. Blacksburg: Virginia Tech College of Agriculture and Life Sciences. https://doi.org/10.21061/universityteaching. License: CC BY-NC 4.0.