Abstract
Recent developments in higher education are likely to lead to increased evaluation of teaching and courses and, in particular, increased use of student evaluation of teaching and courses by questionnaire. Most studies of the validity of such evaluations have examined the relationship between students' ratings of teaching and courses and traditional measures of ‘how much’ students learn. But there have been few, if any, studies of the relationship between students' ratings of teaching and the quality of student learning, or of how students approached their learning.
For the evaluation of teaching and courses by questionnaire to be valid, we would expect that (1) those students reporting that they adopted deeper approaches to study would rate the teaching and the course more highly than those adopting more surface approaches and, more importantly, (2) those teachers and courses which received higher mean ratings would also have, on average, students adopting deeper approaches.
In this paper we report the results for eleven courses in two institutions. The results, in general, support the validity of student ratings, and suggest that courses and teaching in which students adopted deeper approaches to learning also received higher student ratings.
Prosser, M., Trigwell, K. Student evaluations of teaching and courses: Student study strategies as a criterion of validity. High Educ 20, 135–142 (1990). https://doi.org/10.1007/BF00143697