Cost-effectiveness of concurrent supercomputers

Abstract

This article introduces a mathematical function that describes the latent concurrency inherent in an arbitrary program with given initial conditions. The concurrency function is used to derive asymptotic estimates for speedup, including Amdahl's Law. It provides a new method for analyzing cost-effectiveness of the processor-memory-communications constituents of a computing system for applications where system cost and execution time are mutually elastic variables. The costs of programming and input/output are not taken into account in the present study. The methods are applied to study the relative advantages of serial versus concurrent processing; the relationship of the memory/processor ratio to cost-effectiveness; the conditions that determine the relative advantages of SIMD and MIMD control structures; the effects of various interprocessor communication strategies; and cost-effectiveness implications of the choice of data path width.
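
For orientation, the fixed-workload speedup bound referred to in the abstract as Amdahl's Law can be written in its standard textbook form (this is not the article's own notation; here f denotes the inherently serial fraction of the work and N the number of processors):

    S(N) = \frac{1}{\,f + (1 - f)/N\,}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{f}.

Under this bound speedup saturates at 1/f, so adding processors yields diminishing returns; the concurrency function introduced in the article is used to derive asymptotic estimates of this kind.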

References

  1. Amdahl, G. M. 1967. Validity of the single processor approach to achieving large scale computing abilities. Proc. AFIPS, 30, 483–485.

  2. Batcher, K. E. 1980. Design of a massively parallel processor. IEEE Trans. on Computers, C-29, 9(Sept.), 836–840.

  3. Batcher, K. E. 1976. The flip network in STARAN. 1976 Proc. Int. Conf. Parallel Processing, pp. 65–71.

  4. Brent, R. P., and Kung, H. T. 1980. The chip complexity of binary arithmetic. Proc. 12th Annual ACM Symp. Theory Computing, April, pp. 190–200.

  5. Dongarra, J. J., and Duff, I. S. 1985. Advanced architecture computers. Argonne National Laboratory Technical Memorandum No. 57, 4 Sept. (Draft).

  6. Graham, W. R. 1970. The parallel and the pipeline computer. Datamation (Apr.), 68–71.

  7. Greenberg, R. I., and Leiserson, C. E. 1985. Randomized routing on fat-trees (Preprint, 1 May).

  8. Kung, H. T., and Leiserson, C. E. 1980. Systolic arrays (for VLSI). In Introduction to VLSI Systems (C. A. Mead and L. Conway, eds.), Addison-Wesley, Reading, Mass.

  9. Lau, R. L., Siewiorek, D. P., and Mizell, D. W. 1982. A survey of highly parallel computing. IEEE Computer (June), 9–24.

  10. Leiserson, C. E. 1985. Fat-trees: universal networks for hardware-efficient supercomputing. 1985 Int. Conf. on Parallel Processing, to appear.

  11. Mead, C. A., and Conway, L. 1980. Introduction to VLSI Systems, Addison-Wesley, Reading, Mass.

  12. Minsky, M., and Papert, S. 1971. On some associative, parallel, and analog computations. In Associative Information Techniques (E. J. Jacks, ed.), Elsevier, New York.

  13. Patton, P. 1985. Multiprocessors: architecture and applications. IEEE Computer (June), 29–40.

  14. Schwartz, J. T. 1980. Ultracomputers. ACM Trans. on Programming Languages and Systems, 2, 484–521.

  15. Thompson, C. D. 1979. Area-time complexity for VLSI. Proc. 11th Annual ACM Symp. Theory Computing (Apr–May), 81–88.

  16. Ullman, J. D. 1984. Computational aspects of VLSI. Computer Science Press.

  17. Ware, W. H. 1972. The ultimate computer. IEEE Spectrum, 9, 3, 84–91.

Cite this article

Resnikoff, H.L. Cost-effectiveness of concurrent supercomputers. J Supercomput 1, 231–262 (1987). https://doi.org/10.1007/BF00128048
