Research article · DOI: 10.1145/3194164.3194173

Evaluating domain-specific metric thresholds: an empirical study

Published: 27 May 2018

ABSTRACT

Software metrics and their thresholds provide a means to quantify several quality attributes of software systems. Indeed, they have been used in a wide variety of methods and tools for detecting different sorts of technical debt, such as code smells. Unfortunately, these methods and tools do not take the characteristics of software domains into account, such as the intrinsic complexity of geo-localization and scientific software systems or the simple protocols employed by messaging applications. Instead, they rely on generic thresholds derived from heterogeneous systems. Although the derivation of reliable thresholds has long been a concern, we still lack empirical evidence about how thresholds vary across distinct software domains. To tackle this limitation, this paper investigates whether and how thresholds vary across domains through a large-scale study of 3,107 software systems from 15 domains. We analyzed the derivation and distribution of thresholds based on 8 well-known source code metrics. As a result, we observed that software domain and size are relevant factors to consider when building benchmarks for threshold derivation. Moreover, we observed that domain-specific metric thresholds are more appropriate than generic ones for code smell detection.
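
To make the threshold-derivation idea concrete, the sketch below illustrates a per-domain, percentile-based derivation in the spirit of benchmark approaches such as Alves et al. (ICSM 2010). It is a minimal illustration under stated assumptions, not the paper's exact procedure: the 70th/80th/90th percentile scheme, the sample data, and all names (measurements, derive_thresholds) are hypothetical and chosen only for the example.

    # Hypothetical sketch of per-domain, percentile-based threshold derivation,
    # in the spirit of benchmark approaches (e.g., Alves et al., ICSM 2010).
    # The 70/80/90 percentile scheme and all names are illustrative assumptions,
    # not the procedure evaluated in the paper.
    from collections import defaultdict
    from statistics import quantiles

    # One record per measured entity: (domain, metric value), e.g., WMC per class.
    measurements = [
        ("messaging", 4.0), ("messaging", 7.0), ("messaging", 12.0),
        ("scientific", 23.0), ("scientific", 41.0), ("scientific", 58.0),
        # ... in the study, metric values from 3,107 systems across 15 domains
    ]

    def derive_thresholds(records, cuts=(70, 80, 90)):
        """Return {domain: (moderate, high, very_high)} metric thresholds."""
        by_domain = defaultdict(list)
        for domain, value in records:
            by_domain[domain].append(value)
        thresholds = {}
        for domain, values in by_domain.items():
            # quantiles(..., n=100) yields the 1st through 99th percentiles,
            # so pct[69] is the 70th percentile of this domain's benchmark.
            pct = quantiles(values, n=100, method="inclusive")
            thresholds[domain] = tuple(pct[c - 1] for c in cuts)
        return thresholds

    print(derive_thresholds(measurements))

Under this scheme, a smell detector would flag, say, a class whose metric value exceeds the "very high" threshold of its own domain, rather than a single generic cut-off pooled over heterogeneous systems, which is the distinction the paper's evaluation targets.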


Published in
TechDebt '18: Proceedings of the 2018 International Conference on Technical Debt
May 2018, 157 pages
ISBN: 9781450357135
DOI: 10.1145/3194164
Copyright © 2018 ACM


Publisher: Association for Computing Machinery, New York, NY, United States



Acceptance rate: 14 of 31 submissions (45%)
