
An index of teaching performance based on students’ feedback

  • Marco Marozzi and Shovan Chowdhury

Abstract

Evaluation of the teaching performance of faculty members, on the basis of students’ feedback, is routinely performed by almost all tertiary education institutions. Objective assessment of faculty members requires a comprehensive index of teaching performance. A composite indicator is proposed to assess the teaching performance of faculty members. It combines several items evaluated by students, such as punctuality, communication ability and subject coverage. The robustness of the indicator is assessed by applying uncertainty analysis. An application to a data set from an Indian institution is presented. It is shown that the proposed index can be used to rank faculty members from the best to the worst performer according to students’ feedback.
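
The following is a minimal sketch, not the authors’ exact method: it illustrates the general idea of aggregating student-rated items into a composite index and checking how robust the resulting ranking is under uncertainty in the weights. The item names, the 1–5 rating scale, the min–max normalisation, the weighted-mean aggregation and the Dirichlet weight perturbation are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed mean student ratings (1-5 scale) for each faculty member on each item.
items = ["punctuality", "communication", "subject_coverage"]
ratings = np.array([
    [4.6, 4.2, 4.4],   # faculty A
    [3.9, 4.5, 4.1],   # faculty B
    [4.8, 3.7, 4.0],   # faculty C
])

def composite(ratings, weights):
    """Min-max normalise each item across faculty, then take a weighted mean."""
    lo, hi = ratings.min(axis=0), ratings.max(axis=0)
    z = (ratings - lo) / np.where(hi > lo, hi - lo, 1.0)
    return z @ (weights / weights.sum())

# Baseline index with equal weights; rank 1 = best performer.
base = composite(ratings, np.ones(len(items)))
base_ranks = base.argsort()[::-1].argsort() + 1
print("baseline index:", np.round(base, 3))
print("baseline ranks:", base_ranks)

# Simple uncertainty analysis: perturb the weight vector many times
# and record how the ranks change.
n_sim = 5000
ranks = np.empty((n_sim, ratings.shape[0]), dtype=int)
for s in range(n_sim):
    w = rng.dirichlet(np.ones(len(items)))        # random weights summing to 1
    idx = composite(ratings, w)
    ranks[s] = idx.argsort()[::-1].argsort() + 1

print("median ranks over simulations:", np.median(ranks, axis=0))
print("share of runs keeping the baseline rank:",
      np.round((ranks == base_ranks).mean(axis=0), 3))
```

A ranking whose median simulated ranks match the baseline, and which keeps the baseline rank in most runs, would be considered robust to the weighting choice; large rank volatility would signal that conclusions about individual faculty members depend heavily on the aggregation settings.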

MSC 2010: 62H99; 62P25; 62-07

Award Identifier / Grant number: IRIDE-B-10-2016-9-2018

Funding statement: The first author’s research has been partially supported by the following research grant: IRIDE-B-10-2016-9-2018, Department of Environmental Sciences, Informatics and Statistics, Ca’ Foscari University of Venice.

Received: 2019-02-18
Accepted: 2020-03-12
Published Online: 2020-04-15
Published in Print: 2020-06-01

© 2020 Walter de Gruyter GmbH, Berlin/Boston
