A course evaluation is a paper or electronic questionnaire that asks for written or selected responses to a series of questions in order to evaluate the instruction of a given course. The term may also refer to the completed survey form or to a summary of responses to such questionnaires.
Course evaluations produce feedback that teachers and schools can use to improve the quality of instruction. The process of (a) gathering information about the impact of learning and of teaching practice on student learning, (b) analyzing and interpreting this information, and (c) responding to and acting on the results is valuable for several reasons. It allows instructors to see how others perceive their teaching methods and to improve their instruction accordingly. Administrators can also use the information, along with other input, to make summative decisions (e.g., about promotion, tenure, or salary increases) and formative recommendations (e.g., identifying areas where a faculty member needs to improve). Typically, these evaluations are combined with peer evaluations, supervisor evaluations, and students' test scores to create an overall picture of teaching performance. Course evaluations take one of two forms: summative or formative.
Course evaluation instruments
Course evaluation instruments generally include variables such as communication skills, organizational skills, enthusiasm, flexibility, attitude toward the student, teacher–student interaction, encouragement of the student, knowledge of the subject, clarity of presentation, course difficulty, fairness of grading and exams, and global student rating. Examples of standardized course evaluation instruments are provided by evaluation tools such as TrainingCheck and CE-Gen.
Summative evaluation occurs at the end of a semester, usually a week or two before the last day of class, and is performed by the current students of the class. Because course evaluations are confidential and anonymous, students can reflect on the teacher's instruction without fear of reprisal. Evaluations can be administered in one of two ways: on a paper form or online. In a paper-based format, the form is typically distributed by a student while the teacher is out of the room; it is then sealed in an envelope, and the teacher does not see it until after final grades are submitted. The online version can be identical to the paper version or more detailed, using branching questions to glean more information from the student. Both formats allow students to provide useful and honest feedback, which teachers can use to improve the quality of their instruction. The information can also be used to evaluate the overall effectiveness of a teacher, particularly for tenure and promotion decisions.
Formative evaluation typically occurs when changes can still take place during the current semester, although many institutions also consider written comments on how to improve a course to be formative. This form of evaluation is usually performed by peer consultation: experienced colleagues observe a peer's instruction so that the teacher can receive constructive criticism on their teaching. Generally, peer teachers sit in on a few lessons, take notes on the teacher's methods, and later meet with the teacher to provide useful, non-threatening feedback. The peer team offers suggestions for improvement, which the teacher can choose to implement.
Peer feedback is typically given to the instructor in an open session meeting. The peers first reflect on the strengths of the instruction and then move on to areas that need improvement. Finally, the instructor makes suggestions for improvement and receives feedback on those ideas.
Student feedback can be an important part of formative evaluation. Student evaluations are formative when their purpose is to help faculty members improve and enhance their teaching skills. Teachers may ask their students to complete written evaluations or to participate in ongoing dialogue or directed discussions over the course of the semester. The 'Stop, Start, Continue' format for student feedback has been shown to be highly effective at generating constructive feedback for course improvement. At the Faculty of Psychology of the University of Vienna, Twitter was used for formative course evaluation.
Criticism of course evaluations as measures of teaching effectiveness
Summative student evaluations of teaching (SETs) have been widely criticized, especially by teachers, as inaccurate measures of teaching effectiveness. Surveys have shown that a majority of teachers believe that raising the level of standards or content would result in worse SETs, and that students filling out SETs are biased by teachers' personalities, looks, disabilities, gender, and ethnicity. The evidence cited by some of these critics indicates that factors other than effective teaching are more predictive of favorable ratings. To obtain favorable ratings, teachers may pitch their content at a level the slowest students can understand, diluting the material. Many critics of SETs have suggested that they should not be used in decisions regarding faculty hires, retention, promotion, and tenure. Some have suggested that using them for such purposes leads to the dumbing down of educational standards. Others have said that the typical way SETs are now used at most universities is demeaning to instructors and has a corrupting effect on students' attitudes toward their teachers and toward higher education in general.
The economics of education and economic education literatures are especially critical. For example, Weinberg et al. (2009) find that SET scores in first-year economics courses at Ohio State University are positively related to the grades instructors assign but are unrelated to learning outcomes once grades are controlled for. Others have also found a positive relationship between grades and SET scores but, unlike Weinberg et al. (2009), do not directly address the relationship between SET scores and learning outcomes. Krautmann and Sander (1999) find that the grades students expect to receive in a course are positively related to SET scores. Isely and Singh (2005) find that the difference between the grades students expect to receive and their cumulative GPA is the relevant variable for obtaining favorable course evaluations. Carrell and West (2010) use a data set from the U.S. Air Force Academy, where students are randomly assigned to course sections (reducing selection problems). They find that calculus students got higher marks on common course examinations when they had instructors with high SET scores but did worse in later courses requiring calculus. The authors discuss a number of possible explanations, including that instructors with higher SET scores may have concentrated their teaching on the common examinations rather than giving students the deeper understanding needed for later courses. Hamermesh and Parker (2005) find that students at the University of Texas at Austin gave attractive instructors higher SET scores than less attractive instructors. The authors note, however, that it may not be possible to determine whether attractiveness increases an instructor's effectiveness, possibly resulting in better learning outcomes; it may simply be that students pay more attention to attractive instructors.
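The "controlling for grades" logic behind the Weinberg et al. (2009) finding can be illustrated with a small synthetic-data sketch (the coefficients and noise levels below are illustrative assumptions, not the paper's data): if SET scores merely track assigned grades, SET scores appear to predict learning in a simple regression, but that association vanishes once grades are added as a control.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Illustrative assumption: grades drive both SET scores and later learning.
grades = rng.normal(3.0, 0.5, n)                    # grades assigned by the instructor
set_score = 0.8 * grades + rng.normal(0, 0.3, n)    # SET scores track grades
learning = 1.0 * grades + rng.normal(0, 0.3, n)     # learning depends on grades, not on SET itself

def ols(y, *predictors):
    """Ordinary least squares with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

naive = ols(learning, set_score)              # learning ~ SET: positive slope
controlled = ols(learning, set_score, grades) # learning ~ SET + grades: SET slope near zero

print(f"SET slope without control: {naive[1]:.2f}")
print(f"SET slope with grade control: {controlled[1]:.2f}")
```

In the uncontrolled regression the SET coefficient is substantially positive, but once grades enter the model the SET coefficient collapses toward zero, mirroring the pattern Weinberg et al. report.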
The empirical economics literature is in sharp contrast to the educational psychology literature, which generally argues that teaching evaluations are a legitimate method of evaluating instructors and are unrelated to grade inflation. However, like the economics literature, researchers outside educational psychology have also reported negative findings on course evaluations. For example, some papers have examined online course evaluations and found them to be heavily influenced by the instructor's attractiveness and willingness to give high grades in return for very little work.
Another criticism of these assessment instruments is that the data they produce are often difficult to interpret for purposes of self- or course improvement, given the number of variables that can affect evaluation scores. Finally, paper-based course evaluations can cost a university thousands of dollars over the years, while an electronic survey can be offered at minimal cost.
Instructors have also raised the concern that response rates to online course evaluations are lower than those for paper-based in-class evaluations, and therefore that the results may be less valid. The situation is more complex than response rates alone would indicate: student–faculty engagement has been offered as an explanation for response behavior, whereas course level, instructor rank, and other variables lacked explanatory power.
See also

- Educational assessment
- Educational evaluation
- Donald Kirkpatrick, founder of the 'Four Level Model' of training evaluation
- Ronald Ferguson (economist), a researcher who studied student evaluation of teachers
References

- Rahman, K. (2006). Learning from your business lectures: using stepwise regression to understand course evaluation data. Journal of American Academy of Business, Cambridge, 19(2), 272–279.
- Dunegan, K. J., & Hrivnak, M. W. (2003). Characteristics of mindless teaching evaluations and the moderating effects of image compatibility. Journal of Management Education, 27(3), 280–303.
- Kim, C., Damewood, E., & Hodge, N. (2000). Professor attitude: its effect on teaching evaluations. Journal of Management Education, 24(4), 458–473.
- Tang, T. L.-P. (1997). Teaching evaluation at a public institution of higher education: factors related to the overall teaching effectiveness. Public Personnel Management, 26(3), 379–391.
- Mohanty, G., Gretes, J., Flowers, C., Algozzine, B., & Spooner, F. (2005). Multi-method evaluation of instruction in engineering classes. Journal of Personnel Evaluation in Education, 18(2), 139–151.
- Hoon, A. E., Oliver, E., Szpakowska, K., & Newton, P. M. (2014). Use of the 'Stop, Start, Continue' method is associated with the production of constructive qualitative feedback by students in higher education. Assessment and Evaluation in Higher Education. DOI: 10.1080/02602938.2014.956282
- Stieger, S., & Burger, C. (2010). Let's go formative: continuous student ratings with Web 2.0 application Twitter. Cyberpsychology, Behavior, and Social Networking, 13(2), 163–167.
- Emery, C. R., Kramer, T. R., & Tian, R.G. (2003). Return to academic standards: a critique of student evaluations of teaching effectiveness. Quality Assurance in Education, 11(1), 37–46. Retrieved 2011-06-16.
- Merritt, D. (2008). Bias, the brain, and student evaluations of teaching. St. John's Law Review, 82, 235–287. Retrieved 2011-06-16.
- Armstrong, J. S. (2012). Natural learning in higher education. Encyclopedia of the Sciences of Learning.
- Birnbaum, M. H. (1999). A survey of faculty opinions concerning student evaluations of teaching. The Senate Forum (California State University, Fullerton), 14(1), 19–22. Longer version with references. Retrieved 2011-06-16.
- Gray, M., & Bergmann, B. R. (September–October 2003). "Student teaching evaluations: inaccurate, demeaning, misused", Academe Online, 89(5). Retrieved 2011-06-16.
- Platt, M. (1993). What student evaluations teach. Perspectives on Political Science, 22(1), 29–40. Retrieved 2011-06-16.
- Weinberg, B. A., Hashimoto, M., & Fleisher, B. M. (2009). Evaluating teaching in higher education. Journal of Economic Education, 40(3), 227–261.
- McPherson, M. A., Jewell, R. T., & Kim, M. (2009). What determines student evaluation scores? A random effects analysis of undergraduate economics classes. Eastern Economic Journal, 35(1), 37–51.
- Langbein, L. (2008). Management by results: student evaluation of faculty teaching and the mis-measurement of performance. Economics of Education Review, 27(4), 417–428.
- Krautmann, A. C., & Sander, W. (1999). Grades and student evaluations of teachers. Economics of Education Review, 18(1), 59–63.
- Isely, P., & Singh, H. (2005). Do higher grades lead to favorable student evaluations? Journal of Economic Education, 36(1), 29–42.
- Carrell, S. E., & West, J. E. (2010). Does professor quality matter? Evidence from random assignment of students to professors. Journal of Political Economy, 118(3), 409–432. Retrieved 2011-06-16.
- Hamermesh, D. S., & Parker, A. (2005). Beauty in the classroom: instructors’ pulchritude and putative pedagogical productivity. Economics of Education Review, 24(4), 369–376.
- Felton, J., Mitchell, J., & Stinson, M. (2004a). Web-based student evaluations of professors: the relations between perceived quality, easiness and sexiness. Assessment & Evaluation in Higher Education, 29(1), 91–108.
- Felton, J., Mitchell, J., & Stinson, M. (2004b). Cultural differences in student evaluations of professors. Journal of the Academy of Business Education, Proceedings. Retrieved 2011-06-16.
- Marks, P. (2012). Silent Partners: student course evaluations and the construction of pedagogical worlds. Canadian Journal for Studies in Discourse and Writing, 24(1).
- Anderson, J., Brown, G., & Spaeth, S. (Aug/Sept 2006). Online student evaluations and response rates reconsidered. Innovate (Fischler School of Education and Human Services, Nova Southeastern University), 2(6). Retrieved 2011-06-16.