Optimal mapping of sequences of data parallel tasks

DOI: 10.1145/209936.209951
Published: 1 August 1995

ABSTRACT

Many applications in a variety of domains including digital signal processing, image processing and computer vision are composed of a sequence of tasks that act on a stream of input data sets in a pipelined manner. Recent research has established that these applications are best mapped to a massively parallel machine by dividing the tasks into modules and assigning a subset of the available processors to each module. This paper addresses the problem of optimally mapping such applications onto a massively parallel machine. We formulate the problem of optimizing throughput in task pipelines and present two new solution algorithms. The formulation uses a general and realistic model for inter-task communication, takes memory constraints into account, and addresses the entire problem of mapping which includes clustering tasks into modules, assignment of processors to modules, and possible replication of modules. The first algorithm is based on dynamic programming and finds the optimal mapping of k tasks onto P processors in O(P⁴k²) time. We also present a heuristic algorithm that is linear in the number of processors and establish with theoretical and practical results that the solutions obtained are optimal in practical situations. The entire framework is implemented as an automatic mapping tool for the Fx parallelizing compiler for High Performance Fortran. We present experimental results that demonstrate the importance of choosing a good mapping and show that the methods presented yield efficient mappings and predict optimal performance accurately.
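The dynamic-programming idea behind the first algorithm can be illustrated with a much simplified sketch. Assume tasks are clustered into contiguous modules, each module's time scales ideally with its processor share, and communication costs, memory constraints, and module replication are all ignored (these are simplifications relative to the paper's model, which accounts for each of them). The pipeline's throughput is then the reciprocal of the slowest module's time, so the DP minimizes that bottleneck:

```python
from functools import lru_cache

def best_bottleneck(work, P):
    """Minimum bottleneck (maximum per-module) time for mapping a chain
    of tasks onto at most P processors; pipeline throughput is the
    reciprocal of this value. `work` gives each task's total work, and a
    module of tasks i..j-1 on q processors is assumed to take
    sum(work[i:j]) / q -- ideal speedup, a simplification of the paper's
    communication- and memory-aware cost model."""
    work = tuple(work)
    k = len(work)

    @lru_cache(maxsize=None)
    def dp(j, p):
        # Best bottleneck for the first j tasks using at most p processors.
        if j == 0:
            return 0.0
        if p == 0:
            return float("inf")   # tasks remain but no processors left
        best = float("inf")
        for i in range(j):              # last module covers tasks i..j-1
            for q in range(1, p + 1):   # processors given to that module
                t = max(sum(work[i:j]) / q, dp(i, p - q))
                best = min(best, t)
        return best

    return dp(k, P)

# With work [4, 2, 2] and 2 processors, the best mappings (one module of
# all three tasks on both processors, or the split 4 | 2+2 with one
# processor each) both give bottleneck 4.0; with 4 processors it drops to 2.0.
print(best_bottleneck([4, 2, 2], 2))  # 4.0
print(best_bottleneck([4, 2, 2], 4))  # 2.0
```

This naive sketch fills a k×P table with O(kP) work per entry; the paper's O(P⁴k²) algorithm solves the harder problem that also models inter-task communication, memory limits, and replication of modules.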

References

  1. BOKHARI, S. Assignment Problems in Parallel and Distributed Computing. Kluwer Academic Publishers, 1987.
  2. CHANDY, M., FOSTER, I., KENNEDY, K., KOELBEL, C., AND TSENG, C. Integrated support for task and data parallelism. International Journal of Supercomputer Applications 8, 2 (1994), 80-98.
  3. CHAPMAN, B., MEHROTRA, P., VAN ROSENDALE, J., AND ZIMA, H. A software architecture for multidisciplinary applications: Integrating task and data parallelism. Tech. Rep. 94-18, ICASE, NASA Langley Research Center, Hampton, VA, Mar. 1994.
  4. CHOUDHARY, A., NARAHARI, B., NICOL, D., AND SIMHA, R. Optimal processor assignment for a class of pipelined computations. IEEE Transactions on Parallel and Distributed Systems 5, 4 (Apr. 1994), 439-445.
  5. CROWL, L., CROVELLA, M., LEBLANC, T., AND SCOTT, M. The advantages of multiple parallelizations in combinatorial search. Journal of Parallel and Distributed Computing 21 (1994), 110-123.
  6. DINDA, P., GROSS, T., O'HALLARON, D., SEGALL, E., STICHNOTH, J., SUBHLOK, J., WEBB, J., AND YANG, B. The CMU task parallel program suite. Tech. Rep. CMU-CS-94-131, School of Computer Science, Carnegie Mellon University, Mar. 1994.
  7. FOSTER, I., AVALANI, B., CHOUDHARY, A., AND XU, M. A compilation system that integrates High Performance Fortran and Fortran M. In Proceedings of the 1994 Scalable High Performance Computing Conference (Knoxville, TN, Oct. 1994), pp. 293-300.
  8. GROSS, T., O'HALLARON, D., AND SUBHLOK, J. Task parallelism in a High Performance Fortran framework. IEEE Parallel & Distributed Technology, 3 (1994), 16-26.
  9. HIGH PERFORMANCE FORTRAN FORUM. High Performance Fortran Language Specification, Version 1.0, May 1993.
  10. RAMASWAMY, S., SAPATNEKAR, S., AND BANERJEE, P. A convex programming approach for exploiting data and functional parallelism. In Proceedings of the 1994 International Conference on Parallel Processing (St. Charles, IL, Aug. 1994), vol. 2, pp. 116-125.
  11. SARKAR, V. Partitioning and Scheduling Parallel Programs for Multiprocessors. The MIT Press, Cambridge, MA, 1989.
  12. SUBHLOK, J., O'HALLARON, D., GROSS, T., DINDA, P., AND WEBB, J. Communication and memory requirements as the basis for mapping task and data parallel programs. In Supercomputing '94 (Washington, DC, Nov. 1994), pp. 330-339.
  13. SUBHLOK, J., STICHNOTH, J., O'HALLARON, D., AND GROSS, T. Exploiting task and data parallelism on a multicomputer. In ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (San Diego, CA, May 1993), pp. 13-22.
  14. VONDRAN, G. Optimization of latency, throughput and processors for pipelines of data parallel tasks. Master's thesis, Dept. of Electrical and Computer Engineering, Carnegie Mellon University, 1995. In preparation.
  15. WEBB, J. Latency and bandwidth consideration in parallel robotics image processing. In Supercomputing '93 (Portland, OR, Nov. 1993), pp. 230-239.
  16. YANG, B., WEBB, J., STICHNOTH, J., O'HALLARON, D., AND GROSS, T. Do&merge: Integrating parallel loops and reductions. In Sixth Annual Workshop on Languages and Compilers for Parallel Computing (Portland, OR, Aug. 1993).
  17. YANG, T. Scheduling and Code Generation for Parallel Architectures. PhD thesis, Rutgers University, May 1993.

Published in:
PPOPP '95: Proceedings of the fifth ACM SIGPLAN symposium on Principles and practice of parallel programming, August 1995, 234 pages.
ISBN: 0-89791-700-6
DOI: 10.1145/209936
Copyright © 1995 ACM
Publisher: Association for Computing Machinery, New York, NY, United States