
ASIF: An Internal Representation Suitable for Program Transformation and Parallel Conversion

  • Conference paper
  • Ubiquitous Communications and Network Computing (UBICNET 2021)

Abstract

Adding transformation and parallelization capabilities to a compiler requires selecting a suitable language for representing the given program internally. The higher-level language used to develop the code is an obvious choice, but supporting transformations at that level would require major rework to support other higher-level languages. Another choice is to use the assembly representation of the given program for implementing transformations, but this would require rework when supporting multiple targets. These considerations lead to the development of an internal representation that is not tied to any specific higher-level language or hardware architecture. However, creating a new internal representation for a compiler, which ultimately determines the quality and capabilities of that compiler, offers challenges of its own. Here we explore the design choices that determine the flavor of a representation, and propose a representation comprising Instructions and Annotations that together effectively represent a given program internally. The instruction set has operators that most resemble a Reduced Instruction Set architecture format and uses three explicit memory operands, which are sufficient for translation purposes and also simplify Symbolic Analysis. In addition to instructions, we support Annotations, which carry additional information about the given program in the form of keyword-value pairs. Together, instructions and annotations contain all the information necessary to support Analysis, Transformation and Parallel Conversion processes. ASIF, which stands for Asterix Intermediate Format, is at the time of writing comparable to the cutting-edge solutions offered by the competition, and in many instances, such as suitability for Program Analysis, superior.




Copyright information

© 2021 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

About this paper


Cite this paper

Kalyur, S., Nagaraja, G.S. (2021). ASIF: An Internal Representation Suitable for Program Transformation and Parallel Conversion. In: Kumar, N., Vinodhini, M., Venkatesha Prasad, R.R. (eds) Ubiquitous Communications and Network Computing. UBICNET 2021. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 383. Springer, Cham. https://doi.org/10.1007/978-3-030-79276-3_12


  • DOI: https://doi.org/10.1007/978-3-030-79276-3_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-79275-6

  • Online ISBN: 978-3-030-79276-3

  • eBook Packages: Computer Science (R0)
