DOI: 10.1145/237502.237508
Article
Free Access

Optimal latency-throughput tradeoffs for data parallel pipelines

Authors: Jaspal Subhlok, Gary Vondran
Published: 24 June 1996



Published in

SPAA '96: Proceedings of the Eighth Annual ACM Symposium on Parallel Algorithms and Architectures
June 1996
337 pages
ISBN: 0897918096
DOI: 10.1145/237502

              Copyright © 1996 ACM

              Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

              Publisher

              Association for Computing Machinery

              New York, NY, United States

              Publication History

              • Published: 24 June 1996


              Qualifiers

              • Article

              Acceptance Rates

SPAA '96 Paper Acceptance Rate: 39 of 106 submissions, 37%
Overall Acceptance Rate: 447 of 1,461 submissions, 31%
