
How do you define and measure research productivity?

Abstract

Productivity is the quintessential indicator of efficiency in any production system. In bibliometrics it has seemingly become the norm to define research productivity as the number of publications per researcher, distinguishing it from impact. In this work we operationalize the economic concept of productivity for the specific context of research activity and show the limits of the commonly accepted definition. We then propose a measurable form of research productivity through the indicator “Fractional Scientific Strength (FSS)”, in keeping with the microeconomic theory of production. We present the methodology for measuring FSS at various levels of analysis: individual, field, discipline, department, institution, region and nation. Finally, we compare the rankings of Italian universities obtained under the two definitions of research productivity.
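For orientation, the paper's starting point is the standard microeconomic notion of productivity as output per unit of input employed. A minimal schematic rendering of a researcher-level indicator built this way, consistent with the scaling factors described in the notes below but not to be read as the paper's exact FSS formulation, is:

\[
  P_R = \frac{1}{w_R}\,\frac{1}{t}\sum_{i=1}^{N}\frac{c_i}{\bar{c}_i}\,f_i
\]

where, for researcher R observed over t years, w_R is the cost of labor, N the number of publications, c_i the citations received by publication i, \(\bar{c}_i\) the average citations of cited publications of the same year and subject category (Note 7), and f_i the researcher's fractional contribution to the byline (Note 8).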


Notes

  1. Although the overall coverage of the two databases (Web of Science and Scopus) differs significantly, evidence suggests that, for large-scale comparisons in the hard sciences, either source yields similar results (Archambault et al. 2009).

  2. The subject category of a publication corresponds to that of the journal where it is published. For publications in multidisciplinary journals the scaling factor is generally calculated as the average of the standardized values for each subject category.

  3. A host of studies have demonstrated the positive effect of proximity of private research on the research productivity of public laboratories (Siegel et al. 2003).

  4. The complete list is accessible on http://attiministeriali.miur.it/UserFiles/115.htm, last accessed on Feb. 13, 2014.

  5. Mathematics and computer sciences; physics; chemistry; earth sciences; biology; medicine; agricultural and veterinary sciences; civil engineering; industrial and information engineering.

  6. We assume that other production factors are equally available to all researchers. If not, their value should be taken into account.

  7. A preceding article by the same authors demonstrated that the average of the distribution of citations received for all cited publications of the same year and subject category is the most effective scaling factor (Abramo et al. 2012c).

  8. The weighting values were assigned following advice from senior Italian professors in the life sciences. The values could be changed to suit different practices in other national contexts.

  9. In a preceding article the authors demonstrated that the average of the productivity distribution of researchers with productivity above zero is the most effective scaling factor for comparing the performance of researchers in different fields (Abramo et al. 2013c). A toy numerical sketch combining the scaling steps of Notes 7–9 and 11 appears after these notes.

  10. We note again that a field is not an organizational unit but rather a classification of researchers by their scientific qualifications. This does not mean that all the researchers in the same field and organization will necessarily form a single research group that works together. As an example, we quote the SDS description for FIS/03-Condensed matter physics: “The sector includes the competencies necessary for dealing with theory and experimentation in the state of atomic and molecular aggregates, as well as competencies suited to dealing with properties of propagation and interaction of photons in fields and with material. Competencies in this sector also concern research in fields of atomic and molecular physics, liquid and solid states, semiconductors and metallic element composites, dilute and plasma states, as well as photonics, optics, optical electronics and quantum electronics”. In the Italian academic system it is quite common to find “Condensed matter physics” researchers working in two different departments (physics and engineering) at the same university.

  11. The weight represents the relative size (in terms of cost of labor) of the SDS of each university.

  12. In this case, universities with fewer than 30 research staff in the hard sciences were excluded.
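
For concreteness, the following toy sketch (in Python) strings together the scaling steps described in Notes 7–9 and 11. It is purely illustrative: all publication records, field averages and payroll figures are hypothetical placeholders, and the position-dependent co-authorship weights merely stand in for the values referred to in Note 8.

    # Illustrative only: a toy computation combining the scaling steps of
    # Notes 7-9 and 11. All data, weights and figures below are hypothetical.

    publications = [
        # citations received; average citations of cited publications of the same
        # year and subject category (Note 7); number of co-authors; byline position
        {"citations": 12, "field_year_avg": 6.0, "n_authors": 4, "position": "first"},
        {"citations": 3,  "field_year_avg": 6.0, "n_authors": 2, "position": "middle"},
        {"citations": 20, "field_year_avg": 8.0, "n_authors": 5, "position": "last"},
    ]

    def author_share(n_authors, position):
        """Fractional contribution of the researcher to one publication.
        The position-dependent weights are placeholders; Note 8 says the actual
        values were set on the advice of senior life-science professors."""
        weights = {"first": 1.5, "last": 1.3, "middle": 1.0}   # hypothetical
        return weights[position] / n_authors

    def normalized_output(pub):
        """Citations scaled by the field/year average (Note 7), weighted by the
        researcher's share of the byline (Note 8)."""
        return pub["citations"] / pub["field_year_avg"] * author_share(
            pub["n_authors"], pub["position"])

    years_observed = 5      # length of the observation period (hypothetical)
    cost_of_labor = 1.2     # relative salary index of the researcher (hypothetical)

    # Output per year and per unit cost of labor, i.e. productivity in the
    # economic sense discussed in the abstract.
    raw_productivity = sum(normalized_output(p) for p in publications) / (
        years_observed * cost_of_labor)

    # Note 9: to compare researchers across fields, scale by the average
    # productivity of researchers of the same field with productivity above zero.
    field_avg_productivity = 0.45    # hypothetical field average
    field_scaled_score = raw_productivity / field_avg_productivity

    # Note 11: at the university level, field (SDS) scores are combined as a
    # weighted average, the weights being each SDS's share of the cost of labor.
    sds_scores  = {"FIS/03": 1.10, "MAT/05": 0.85}    # hypothetical SDS-level scores
    sds_payroll = {"FIS/03": 2.4,  "MAT/05": 1.6}     # hypothetical payroll (M EUR)
    total_payroll = sum(sds_payroll.values())
    university_score = sum(sds_scores[s] * sds_payroll[s] / total_payroll
                           for s in sds_scores)

    print(f"researcher, raw:          {raw_productivity:.3f}")
    print(f"researcher, field-scaled: {field_scaled_score:.3f}")
    print(f"university, weighted:     {university_score:.3f}")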

References

  • Abramo, G., Cicero, T., & D’Angelo, C. A. (2012a). What is the appropriate length of the publication period over which to assess research performance? Scientometrics, 93(3), 1005–1017.

  • Abramo, G., Cicero, T., & D’Angelo, C. A. (2012b). A sensitivity analysis of researchers’ productivity rankings to the time of citation observation. Journal of Informetrics, 6(2), 192–201.

  • Abramo, G., Cicero, T., & D’Angelo, C. A. (2012c). Revisiting the scaling of citations for research assessment. Journal of Informetrics, 6(4), 470–479.

  • Abramo, G., Cicero, T., & D’Angelo, C. A. (2012d). Revisiting size effects in higher education research productivity. Higher Education, 63(6), 701–717.

  • Abramo, G., Cicero, T., & D’Angelo, C. A. (2013a). Individual research performance: A proposal for comparing apples to oranges. Journal of Informetrics, 7(2), 528–529.

  • Abramo, G., & D’Angelo, C. A. (2007). Measuring science: Irresistible temptations, easy shortcuts and dangerous consequences. Current Science, 93(6), 762–766.

  • Abramo, G., D’Angelo, C. A., & Di Costa, F. (2008). Assessment of sectoral aggregation distortion in research productivity measurements. Research Evaluation, 17(2), 111–121.

  • Abramo, G., D’Angelo, C. A., & Di Costa, F. (2011). Research productivity: Are higher academic ranks more productive than lower ones? Scientometrics, 88(3), 915–928.

  • Abramo, G., D’Angelo, C. A., & Di Costa, F. (2013b). Investigating returns to scope of research fields in universities. Higher Education. doi:10.1007/s10734-013-9685-x.

  • Abramo, G., D’Angelo, C. A., & Rosati, F. (2013c). The importance of accounting for the number of co-authors and their order when assessing research performance at the individual level in the life sciences. Journal of Informetrics, 7(1), 198–208.

  • Abramo, G., D’Angelo, C. A., & Rosati, F. (2013d). Measuring institutional research productivity for the life sciences: The importance of accounting for the order of authors in the byline. Scientometrics, 97(3), 779–795.

  • Abramo, G., D’Angelo, C. A., & Solazzi, M. (2010). National research assessment exercises: A measure of the distortion of performance rankings when labor input is treated as uniform. Scientometrics, 84(3), 605–619.

  • Archambault, É., Campbell, D., Gingras, Y., & Larivière, V. (2009). Comparing bibliometric statistics obtained from the Web of Science and Scopus. Journal of the American Society for Information Science and Technology, 60(7), 1320–1326.

  • Banker, R. D., Charnes, A., & Cooper, W. W. (1984). Some models for estimating technical and scale inefficiencies in data envelopment analysis. Management Science, 30(9), 1078–1092.

  • Butler, L. (2007). Assessing university research: A plea for a balanced approach. Science and Public Policy, 34(8), 565–574.

  • Charnes, A., Cooper, W. W., & Rhodes, E. (1978). Measuring the efficiency of decision making units. European Journal of Operational Research, 2, 429–444.

  • Garfield, E. (1979). Is citation analysis a legitimate evaluation tool? Scientometrics, 1(4), 359–375.

  • Glänzel, W. (2008). Seven myths in bibliometrics: About facts and fiction in quantitative science studies. In H. Kretschmer & F. Havemann (Eds.), Proceedings of the WIS fourth international conference on webometrics, informetrics and scientometrics and ninth COLLNET meeting, Berlin, Germany.

  • Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences, 102(46), 16569–16572.

  • King, D. A. (2004). The scientific impact of nations—What different countries get for their research spending. Nature, 430, 311–316.

  • Laudel, G., & Origgi, G. (2006). Introduction to a special issue on the assessment of interdisciplinary research. Research Evaluation, 15(1), 2–4.

  • Leydesdorff, L., & Bornmann, L. (2011). How fractional counting of citations affects the impact factor: Normalization in terms of differences in citation potentials among fields of science. Journal of the American Society for Information Science and Technology, 62(2), 217–229.

  • Leydesdorff, L., & Opthof, T. (2011). Remaining problems with the “New Crown Indicator” (MNCS) of the CWTS. Journal of Informetrics, 5(1), 224–225.

  • Lotka, A. J. (1926). The frequency distribution of scientific productivity. Journal of the Washington Academy of Sciences, 16(12), 317–324.

  • May, R. M. (1997). The scientific wealth of nations. Science, 275(5301), 793–796.

  • Moed, H. F. (2005). Citation analysis in research evaluation. Dordrecht: Springer.

  • Moed, H. F., Burger, W. J. M., Frankfort, J. G., & Van Raan, A. F. J. (1985). The application of bibliometric indicators: Important field- and time-dependent factors to be considered. Scientometrics, 8(3–4), 177–203.

  • Opthof, T., & Leydesdorff, L. (2010). Caveats for the journal and field normalizations in the CWTS (“Leiden”) evaluations of research performance. Journal of Informetrics, 4(3), 423–430.

  • Pepe, A., & Kurtz, M. J. (2012). A measure of total research impact independent of time and discipline. PLoS ONE, 7(11), e46428.

  • Pontille, D. (2004). La signature scientifique: Une sociologie pragmatique de l’attribution. Paris: CNRS Éditions.

  • QS-Quacquarelli Symonds. (2013). World University Rankings 2013. Retrieved February 10, 2014, from http://www.topuniversities.com/university-rankings/world-university-rankings.

  • RIN (Research Information Network). (2009). Communicating knowledge: How and why researchers publish and disseminate their findings. London, UK: RIN. Retrieved February 10, 2014, from www.jisc.ac.uk/publications/research/2009/communicatingknowledgereport.aspx.

  • Scheel, H. (2000). EMS: Efficiency Measurement System. A data envelopment analysis (DEA) software. Retrieved March 6, 2014, from http://www.holger-scheel.de/ems/.

  • SJTU-Shanghai Jiao Tong University. (2013). Academic Ranking of World Universities 2013. Retrieved November 14, 2013, from http://www.shanghairanking.com/ARWU2011.html.

  • THE-Times Higher Education. (2013). World University Rankings 2013–2014. Retrieved November 8, 2013, from http://www.timeshighereducation.co.uk/world-university-rankings/2013-14/world-ranking.

  • Thompson, B. (1993). GRE percentile ranks cannot be added or averaged: A position paper exploring the scaling characteristics of percentile ranks, and the ethical and legal culpabilities created by adding percentile ranks in making “high-stakes” admission decisions. Paper presented at the Annual Meeting of the Mid-South Educational Research Association, New Orleans, LA.

  • Waltman, L., Calero-Medina, C., Kosten, J., Noyons, E. C. M., Tijssen, R. J. W., Van Eck, N. J., et al. (2012). The Leiden ranking 2011/2012: Data collection, indicators, and interpretation. Journal of the American Society for Information Science and Technology, 63(12), 2419–2432.

  • Waltman, L., Van Eck, N. J., Van Leeuwen, T. N., Visser, M. S., & Van Raan, A. F. J. (2011). Towards a new crown indicator: Some theoretical considerations. Journal of Informetrics, 5(1), 37–47.

Author information

Corresponding author

Correspondence to Giovanni Abramo.

Cite this article

Abramo, G., & D’Angelo, C. A. How do you define and measure research productivity? Scientometrics 101, 1129–1144 (2014). https://doi.org/10.1007/s11192-014-1269-8
