
A new approach to I/O performance evaluation: self-scaling I/O benchmarks, predicted I/O performance

Published: 1 November 1994

Abstract

Current I/O benchmarks suffer from several chronic problems: they quickly become obsolete; they do not stress the I/O system; and they do not help much in understanding I/O system performance. We propose a new approach to I/O performance analysis. First, we propose a self-scaling benchmark that dynamically adjusts aspects of its workload according to the performance characteristic of the system being measured. By doing so, the benchmark automatically scales across current and future systems. The evaluation aids in understanding system performance by reporting how performance varies according to each of five workload parameters. Second, we propose predicted performance, a technique for using the results from the self-scaling evaluation to estimate quickly the performance for workloads that have not been measured. We show that this technique yields reasonably accurate performance estimates and argue that this method gives a far more accurate comparative performance evaluation than traditional single-point benchmarks. We apply our new evaluation technique by measuring a SPARCstation 1+ with one SCSI disk, an HP 730 with one SCSI-II disk, a DECstation 5000/200 running the Sprite LFS operating system with a three-disk disk array, a Convex C240 minisupercomputer with a four-disk disk array, and a Solbourne 5E/905 fileserver with a two-disk disk array.
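To make the workload model concrete: the benchmark described above varies five parameters (unique bytes touched, average request size, read fraction, sequential fraction, and number of concurrent processes). The C sketch below is an illustrative assumption rather than the authors' benchmark code; it shows one plausible way a single-process workload described by those parameters could be issued against an open file descriptor and timed.

    /*
     * Minimal single-process sketch of the five-parameter workload model.
     * The struct fields mirror the paper's workload parameters; everything
     * else (function names, timing, request loop) is a hypothetical
     * illustration, not the authors' implementation.
     */
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    struct workload {
        long   unique_bytes; /* total distinct bytes touched                */
        size_t size_mean;    /* average size of a request, in bytes         */
        double read_frac;    /* fraction of requests that are reads         */
        double seq_frac;     /* fraction of requests sequential to the last */
        int    process_num;  /* concurrent processes (unused in this sketch)*/
    };

    /* Issue n_requests against fd and return achieved throughput in MB/s. */
    static double run_workload(int fd, const struct workload *w, long n_requests)
    {
        char  *buf      = malloc(w->size_mean);
        long   n_blocks = w->unique_bytes / (long)w->size_mean;
        off_t  block    = 0;
        struct timespec t0, t1;

        if (n_blocks < 1)
            n_blocks = 1;
        memset(buf, 0xAB, w->size_mean);
        clock_gettime(CLOCK_MONOTONIC, &t0);

        for (long i = 0; i < n_requests; i++) {
            /* Sequential with probability seq_frac, otherwise a random block. */
            if ((double)rand() / RAND_MAX < w->seq_frac)
                block = (block + 1) % n_blocks;
            else
                block = rand() % n_blocks;
            lseek(fd, block * (off_t)w->size_mean, SEEK_SET);

            /* Read with probability read_frac, otherwise write. */
            if ((double)rand() / RAND_MAX < w->read_frac)
                read(fd, buf, w->size_mean);
            else
                write(fd, buf, w->size_mean);
        }

        clock_gettime(CLOCK_MONOTONIC, &t1);
        free(buf);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        return (double)n_requests * (double)w->size_mean / secs / 1e6;
    }

A self-scaling run would wrap a routine like this in an outer loop that raises or lowers each parameter in turn until the workload stresses the I/O system rather than the CPU or the buffer cache; the search strategy itself is not shown here.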


Reviews

Clement R. Attanasio

Input/output benchmarks should stress the I/O subsystem. Patterson and Chen claim that many existing I/O benchmarks do not. They therefore propose self-scaling, by which the benchmark observes its own performance and drives the load it generates into the range that stresses the I/O capacity of the system rather than, for example, the CPU or memory. The five workload parameters they use are the number of unique data bytes read or written; the average size of a request; the fraction of requests that are reads; the fraction of requests that sequentially follow the previous one; and the number of processes running the I/O benchmark. By varying these parameters across their five-dimensional space and observing the shape of the resulting curves, the authors develop a predictive methodology that projects the performance of workloads that have not been directly measured.

This paper is worthwhile. Through examples, the authors illustrate the kind of information about system behavior that is usually obvious in retrospect but not always beforehand. For example, when might a system perform better on writes than on reads? The answer is when it batches many small writes into a few large ones.

Unsurprisingly, the authors' claims for I/O performance prediction are more arguable than the benefits of I/O benchmark self-scaling. Chen and Patterson observe that there are transition regions in the performance curves, generally when the amount of data touched by the benchmark grows past the size of the buffer cache. Although they allow themselves to predict performance separately in those two domains, it is not clear that they treat competing predictive techniques similarly in their comparisons.
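The review's summary of predicted performance suggests a simple way one might combine the single-parameter throughput curves that the self-scaling run produces. The sketch below assumes, purely for illustration, that the parameters affect throughput independently and multiplicatively around the focal point; the function name and the numbers are hypothetical and not taken from the paper.

    /*
     * Hypothetical combination rule: scale the focal-point throughput by the
     * ratio each single-parameter curve reports at the target workload's
     * value of that parameter.  Independence of the parameters is an
     * assumption made for this sketch, not a claim quoted from the paper.
     */
    #include <stdio.h>

    #define NPARAMS 5

    double predict_throughput(double focal_throughput,
                              const double per_param_throughput[NPARAMS])
    {
        double estimate = focal_throughput;
        for (int i = 0; i < NPARAMS; i++)
            estimate *= per_param_throughput[i] / focal_throughput;
        return estimate;
    }

    int main(void)
    {
        /* Made-up throughputs (MB/s): one per workload parameter, each read
         * off the curve measured while varying that parameter alone. */
        double focal = 2.0;
        double curves[NPARAMS] = { 1.6, 2.4, 2.1, 1.8, 2.2 };
        printf("predicted throughput: %.2f MB/s\n",
               predict_throughput(focal, curves));
        return 0;
    }

As the review notes, the curves change shape once the data touched exceeds the buffer cache, so any such combination would have to use curves taken from the matching region on either side of that transition.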


Published in

ACM Transactions on Computer Systems, Volume 12, Issue 4 (special issue on computer architecture), November 1994, 101 pages.
ISSN: 0734-2071
EISSN: 1557-7333
DOI: 10.1145/195792
Copyright © 1994 ACM
Publisher: Association for Computing Machinery, New York, NY, United States
