
Theory and practice in parallel job scheduling

  • Conference paper

In: Job Scheduling Strategies for Parallel Processing (JSSPP 1997)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1291)

Abstract

The scheduling of jobs on parallel supercomputers is becoming the subject of much research. However, there is concern about the divergence of theory and practice. We review theoretical research in this area and recommendations based on recent results. This is contrasted with a proposal for standard interfaces among the components of a scheduling system that has grown from requirements in the field.



Editor information

Dror G. Feitelson, Larry Rudolph


Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Feitelson, D.G., Rudolph, L., Schwiegelshohn, U., Sevcik, K.C., Wong, P. (1997). Theory and practice in parallel job scheduling. In: Feitelson, D.G., Rudolph, L. (eds) Job Scheduling Strategies for Parallel Processing. JSSPP 1997. Lecture Notes in Computer Science, vol 1291. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-63574-2_14

  • DOI: https://doi.org/10.1007/3-540-63574-2_14

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63574-1

  • Online ISBN: 978-3-540-69599-8

  • eBook Packages: Springer Book Archive
