ABSTRACT
In this paper, we present a general approach to collecting data flow information for shared-memory parallel languages. The approach applies to any language that supports concurrent execution of threads with producer-consumer or barrier synchronization between them. We assume that the traditional serial data flow information for each thread is available, and we build on it to derive new techniques and equations for computing the reaching-definitions, available-expressions, and live-variables sets of parallel programs.
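The serial per-thread analysis the abstract assumes as a starting point is the standard iterative fixed-point computation over each thread's control flow graph. As a minimal sketch (the block names and use/def sets below are hypothetical, and this shows only the serial live-variables building block, not the paper's parallel extensions):

```python
def live_variables(succ, use, defs):
    """Classic serial live-variable analysis over one thread's CFG.

    Iterates the standard equations to a fixed point:
        LIVEout[B] = union of LIVEin[S] for S in succ(B)
        LIVEin[B]  = use[B] | (LIVEout[B] - def[B])
    """
    live_in = {b: set() for b in succ}
    live_out = {b: set() for b in succ}
    changed = True
    while changed:
        changed = False
        for b in succ:
            # Merge liveness flowing backward from all successors.
            out = set().union(*(live_in[s] for s in succ[b])) if succ[b] else set()
            inn = use[b] | (out - defs[b])
            if inn != live_in[b] or out != live_out[b]:
                live_in[b], live_out[b] = inn, out
                changed = True
    return live_in, live_out


# Hypothetical three-block thread: B1 defines x, B2 uses x and defines y,
# B3 uses y.
succ = {"B1": ["B2"], "B2": ["B3"], "B3": []}
use = {"B1": set(), "B2": {"x"}, "B3": {"y"}}
defs = {"B1": {"x"}, "B2": {"y"}, "B3": set()}

live_in, live_out = live_variables(succ, use, defs)
# live_in  == {"B1": set(), "B2": {"x"}, "B3": {"y"}}
# live_out == {"B1": {"x"}, "B2": {"y"}, "B3": set()}
```

The paper's contribution lies in how such per-thread sets are then combined across producer-consumer and barrier synchronization points; that combination is not shown here.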
Index Terms
- Data flow analysis for parallel programs