
Single-Message vs. Batch Communication

Chapter in *Algorithms for Parallel Processing*

Part of the book series: The IMA Volumes in Mathematics and its Applications (IMA, volume 105)

Abstract

The selection of appropriate communication mechanisms is a key issue in parallel computing. We argue that the current emphasis on single-message communication has led to inefficient systems and unnecessarily confusing code. In contrast, batch communication has substantial implementation advantages, is suitable for almost all parallel applications, and encourages a programming paradigm that is easy to reason about.

“The art of being wise is the art of knowing what to overlook.”

William James, American philosopher, 1842–1910
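The contrast the abstract draws can be made concrete with a small sketch. In single-message communication, each send is an independent event the programmer must track; in batch (bulk-synchronous) communication, sends are merely buffered and the runtime delivers everything at a barrier, which is what gives the implementation freedom to combine and reorder traffic. The following is a minimal, single-process simulation of that batched model; the names (`BSPMachine`, `send`, `sync`) are illustrative, not taken from any real library.

```python
class BSPMachine:
    """Toy simulation of P processes exchanging messages in batched supersteps."""

    def __init__(self, p):
        self.p = p
        self.outbox = [[] for _ in range(p)]  # messages buffered this superstep
        self.inbox = [[] for _ in range(p)]   # messages delivered at the last sync

    def send(self, dest, payload):
        # A "send" only buffers the message; nothing moves until the barrier,
        # so the runtime is free to combine, reorder, or route traffic in bulk.
        self.outbox[dest].append(payload)

    def sync(self):
        # The barrier: every buffered message is delivered at once, and each
        # process starts the next superstep with a fresh, empty outbox.
        self.inbox, self.outbox = self.outbox, [[] for _ in range(self.p)]


# Usage: an all-to-all exchange of (sender, value) pairs in one superstep.
m = BSPMachine(4)
for src in range(4):
    for dst in range(4):
        m.send(dst, (src, src * 10))
m.sync()
# After sync, every process holds exactly one message from each peer,
# and reasoning about state reduces to reasoning superstep by superstep.
```

Because no message is visible before `sync`, there are no in-flight sends to reason about mid-superstep — which is the ease-of-reasoning property the abstract claims for batch communication.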




Copyright information

© 1999 Springer Science+Business Media New York


Cite this chapter

Goudreau, M.W., Rao, S.B. (1999). Single-Message vs. Batch Communication. In: Heath, M.T., Ranade, A., Schreiber, R.S. (eds) Algorithms for Parallel Processing. The IMA Volumes in Mathematics and its Applications, vol 105. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-1516-5_3


  • DOI: https://doi.org/10.1007/978-1-4612-1516-5_3

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-1-4612-7175-8

  • Online ISBN: 978-1-4612-1516-5

  • eBook Packages: Springer Book Archive
