DOI: 10.1145/3478431.3499294
Research Article

Hyperstyle: A Tool for Assessing the Code Quality of Solutions to Programming Assignments

Published: 22 February 2022

ABSTRACT

In software engineering, it is not enough to write code that merely works as intended, even if it is free from vulnerabilities and bugs. Every programming language has a style guide and a set of best practices defined by its community, which help practitioners build solutions that have a clear structure and are therefore easy to read and maintain. To introduce the assessment of code quality into the educational process, we developed a tool called Hyperstyle. To make it reflect the needs of the programming community and at the same time remain easily extendable, we built it upon several existing professional linters and code checkers. Hyperstyle supports four programming languages (Python, Java, Kotlin, and JavaScript) and can be used as a standalone tool or integrated into a MOOC platform. We have integrated the tool into two educational platforms, Stepik and JetBrains Academy, where it has processed about one million submissions every week since May 2021.
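
The abstract describes Hyperstyle as a layer over existing professional linters rather than a checker written from scratch. As a rough illustration of that design (this is not Hyperstyle's actual code, which lives at https://github.com/hyperskill/hyperstyle), the sketch below shells out to two real Python linters, flake8 and pylint, and normalizes their findings into a single unified issue list. All class and function names are invented for this example, and it assumes Python 3.9+ with both linters installed and on PATH.

```python
"""Illustrative sketch only: aggregate findings from two real Python
linters (flake8 and pylint) into one unified issue list, in the spirit
of wrapping existing code checkers. Not Hyperstyle's implementation."""
import json
import subprocess
import sys
from dataclasses import dataclass


@dataclass
class Issue:
    origin: str   # which linter reported the issue
    line: int
    column: int
    code: str     # linter-specific rule id, e.g. E501 (flake8) or C0103 (pylint)
    message: str


def run_flake8(path: str) -> list[Issue]:
    # flake8's default output format is "path:line:col: CODE message";
    # a non-zero exit code only means that issues were found.
    result = subprocess.run(["flake8", path], capture_output=True, text=True)
    issues = []
    for raw in result.stdout.splitlines():
        location, _, rest = raw.partition(": ")
        _, line_no, col_no = location.rsplit(":", 2)
        code, _, message = rest.partition(" ")
        issues.append(Issue("flake8", int(line_no), int(col_no), code, message))
    return issues


def run_pylint(path: str) -> list[Issue]:
    # pylint can emit machine-readable JSON via --output-format=json.
    result = subprocess.run(
        ["pylint", "--output-format=json", path], capture_output=True, text=True
    )
    return [
        Issue("pylint", item["line"], item["column"], item["message-id"], item["message"])
        for item in json.loads(result.stdout or "[]")
    ]


if __name__ == "__main__":
    submission = sys.argv[1]  # path to the student's solution file
    report = sorted(run_flake8(submission) + run_pylint(submission),
                    key=lambda issue: (issue.line, issue.column))
    for issue in report:
        print(f"{issue.line}:{issue.column} [{issue.origin}:{issue.code}] {issue.message}")
```

Delegating the actual checks to established linters and only normalizing their output is what keeps such an aggregator aligned with community conventions and easy to extend, which is the design goal the abstract states.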

Supplemental Material

hyperstyle_sigcse2022_final.mov (MOV video, 176.1 MB)


Published in

SIGCSE 2022: Proceedings of the 53rd ACM Technical Symposium on Computer Science Education - Volume 1
February 2022, 1049 pages
ISBN: 9781450390705
DOI: 10.1145/3478431

Copyright © 2022 ACM


Publisher

Association for Computing Machinery, New York, NY, United States



Acceptance Rates

Overall Acceptance Rate: 1,595 of 4,542 submissions, 35%

