
A new approach to the maximum-flow problem

Published: 01 October 1988

Abstract

All previously known efficient maximum-flow algorithms work by finding augmenting paths, either one path at a time (as in the original Ford and Fulkerson algorithm) or all shortest-length augmenting paths at once (using the layered network approach of Dinic). An alternative method based on the preflow concept of Karzanov is introduced. A preflow is like a flow, except that the total amount flowing into a vertex is allowed to exceed the total amount flowing out. The method maintains a preflow in the original network and pushes local flow excess toward the sink along what are estimated to be shortest paths. The algorithm and its analysis are simple and intuitive, yet the algorithm runs as fast as any other known method on dense graphs, achieving an O(n³) time bound on an n-vertex graph. By incorporating the dynamic tree data structure of Sleator and Tarjan, we obtain a version of the algorithm running in O(nm log(n²/m)) time on an n-vertex, m-edge graph. This is as fast as any known method for any graph density and faster on graphs of moderate density. The algorithm also admits efficient distributed and parallel implementations. A parallel implementation running in O(n² log n) time using n processors and O(m) space is obtained. This time bound matches that of the Shiloach-Vishkin algorithm, which also uses n processors but requires O(n²) space.
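For concreteness, here is a minimal sketch, in Python, of the generic preflow-push ("push-relabel") method the abstract describes: a preflow is maintained, excess is pushed toward the sink along arcs that step down the distance-label estimate, and a vertex is relabeled when no such arc remains. This is an illustrative sketch only; the graph representation, the names, and the unspecified order in which active vertices are processed are our own choices, and no FIFO or highest-label selection and no dynamic trees are used, so it realizes only the simple generic bound rather than the O(n³) or O(nm log(n²/m)) bounds of the paper.

    def max_flow_push_relabel(n, capacity, source, sink):
        """Generic preflow-push sketch. capacity: dict (u, v) -> nonnegative
        capacity; vertices are 0..n-1. Returns the maximum flow value."""
        # Residual capacities, including zero-capacity reverse arcs.
        cap = {}
        for (u, v), c in capacity.items():
            cap[(u, v)] = cap.get((u, v), 0) + c
            cap.setdefault((v, u), 0)

        excess = [0] * n          # flow into a vertex minus flow out of it
        label = [0] * n           # distance labels: estimates of distance to the sink
        label[source] = n         # standard initialization

        # Saturate all arcs out of the source to create the initial preflow.
        for (u, v) in list(cap):
            if u == source and cap[(u, v)] > 0:
                delta = cap[(u, v)]
                cap[(u, v)] -= delta
                cap[(v, u)] += delta
                excess[v] += delta
                excess[source] -= delta

        def active():
            return [v for v in range(n) if v not in (source, sink) and excess[v] > 0]

        work = active()
        while work:
            v = work[0]
            pushed = False
            # Push excess along an admissible arc: positive residual capacity
            # and a distance label exactly one smaller.
            for w in range(n):
                if cap.get((v, w), 0) > 0 and label[v] == label[w] + 1:
                    delta = min(excess[v], cap[(v, w)])
                    cap[(v, w)] -= delta
                    cap[(w, v)] += delta
                    excess[v] -= delta
                    excess[w] += delta
                    pushed = True
                    break
            if not pushed:
                # Relabel: raise v's label just enough to create an admissible arc.
                label[v] = 1 + min(label[w] for w in range(n) if cap.get((v, w), 0) > 0)
            work = active()

        return excess[sink]

    # Example (hypothetical network): source 0, sink 3; the maximum flow value is 5.
    caps = {(0, 1): 3, (0, 2): 2, (1, 2): 1, (1, 3): 2, (2, 3): 3}
    print(max_flow_push_relabel(4, caps, 0, 3))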

References

  1. AHUJA, R. K., AND ORLIN, J. B. A fast and simple algorithm for the maximum flow problem. Tech. Rep. 1905-87, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Mass., 1987.
  2. AHUJA, R. K., ORLIN, J. B., AND TARJAN, R. E. Improved time bounds for the maximum flow problem. Unpublished manuscript.
  3. AWERBUCH, B. Complexity of network synchronization. J. ACM 32, 4 (Oct. 1985), 804-823.
  4. CHERIYAN, J., AND MAHESHWARI, S. N. Analysis of preflow push algorithms for maximum network flow. Department of Computer Science and Engineering, Indian Institute of Technology, New Delhi, India, 1987.
  5. CHERKASKY, R. V. An algorithm for constructing maximal flows in networks with complexity of O(V²√E) operations. Math. Methods Solution Econ. Probl. 7 (1977), 112-125. (In Russian.)
  6. DINIC, E. A. Algorithm for solution of a problem of maximum flow in networks with power estimation. Sov. Math. Dokl. 11 (1970), 1277-1280.
  7. EDMONDS, J., AND KARP, R. M. Theoretical improvements in algorithmic efficiency for network flow problems. J. ACM 19, 2 (Apr. 1972), 248-264.
  8. EVEN, S. Graph Algorithms. Computer Science Press, Potomac, Md., 1979.
  9. FORD, L. R., JR., AND FULKERSON, D. R. Maximal flow through a network. Can. J. Math. 8 (1956), 399-404.
  10. FORD, L. R., JR., AND FULKERSON, D. R. Flows in Networks. Princeton University Press, Princeton, N.J., 1962.
  11. FORTUNE, S., AND WYLLIE, J. Parallelism in random access machines. In Proceedings of the 10th ACM Symposium on Theory of Computing (1978), pp. 114-118.
  12. GABOW, H. N. Scaling algorithms for network problems. J. Comput. Syst. Sci. 31 (1985), 148-168.
  13. GALIL, Z. An O(V^(5/3)E^(2/3)) algorithm for the maximal flow problem. Acta Inf. 14 (1980), 221-242.
  14. GALIL, Z., AND NAAMAD, A. An O(EV log²V) algorithm for the maximal flow problem. J. Comput. Syst. Sci. 21 (1980), 203-217.
  15. GALLAGER, R. G., HUMBLET, P. A., AND SPIRA, P. M. A distributed algorithm for minimum-weight spanning trees. ACM Trans. Program. Lang. Syst. 5, 1 (Jan. 1983), 66-77.
  16. GOLDBERG, A. V. A new max-flow algorithm. Tech. Rep. MIT/LCS/TM-291, Laboratory for Computer Science, Massachusetts Institute of Technology, Cambridge, Mass., 1985.
  17. GOLDBERG, A. V. Efficient graph algorithms for sequential and parallel computers. PhD dissertation, Massachusetts Institute of Technology, Cambridge, Mass., Jan. 1987. Also available as Tech. Rep. TR-374, Laboratory for Computer Science, Massachusetts Institute of Technology, Cambridge, Mass., 1987.
  18. GOLDBERG, A. V., AND TARJAN, R. E. A new approach to the maximum flow problem. In Proceedings of the 18th ACM Symposium on Theory of Computing. ACM, New York, 1986, pp. 136-146.
  19. GOLDBERG, A. V., AND TARJAN, R. E. Finding minimum-cost circulations by successive approximation. Math. Oper. Res., to appear.
  20. KARZANOV, A. V. Determining the maximal flow in a network by the method of preflows. Sov. Math. Dokl. 15 (1974), 434-437.
  21. LAWLER, E. L. Combinatorial Optimization: Networks and Matroids. Holt, Rinehart, and Winston, New York, 1976.
  22. LEISERSON, C., AND MAGGS, B. Communication-efficient parallel graph algorithms. In Proceedings of the International Conference on Parallel Processing. IEEE Computer Society Press, Silver Spring, Md., 1986, pp. 861-868.
  23. MALHOTRA, V. M., PRAMODH KUMAR, M., AND MAHESHWARI, S. N. An O(|V|³) algorithm for finding maximum flows in networks. Inf. Process. Lett. 7 (1978), 277-278.
  24. OGIELSKI, A. T. Integer optimization and zero-temperature fixed point in Ising random-field systems. Phys. Rev. Lett. 57 (1986), 1251-1254.
  25. PAPADIMITRIOU, C. H., AND STEIGLITZ, K. Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall, Englewood Cliffs, N.J., 1982.
  26. PICARD, J. C., AND RATLIFF, H. D. Minimum cuts and related problems. Networks 5 (1975), 357-370.
  27. SHILOACH, Y. An O(nI log²I) maximum-flow algorithm. Tech. Rep. STAN-CS-78-802, Computer Science Dept., Stanford Univ., Stanford, Calif., 1978.
  28. SHILOACH, Y., AND VISHKIN, U. An O(n² log n) parallel max-flow algorithm. J. Algorithms 3 (1982), 128-146.
  29. SLEATOR, D. D. An O(nm log n) algorithm for maximum network flow. Tech. Rep. STAN-CS-80-831, Computer Science Dept., Stanford Univ., Stanford, Calif., 1980.
  30. SLEATOR, D. D., AND TARJAN, R. E. A data structure for dynamic trees. J. Comput. Syst. Sci. 26 (1983), 362-391.
  31. SLEATOR, D. D., AND TARJAN, R. E. Self-adjusting binary search trees. J. ACM 32, 3 (July 1985), 652-686.
  32. TARJAN, R. E. Data Structures and Network Algorithms. Society for Industrial and Applied Mathematics, Philadelphia, Pa., 1983.
  33. TARJAN, R. E. A simple version of Karzanov's blocking flow algorithm. Oper. Res. Lett. 2 (1984), 265-268.


      Reviews

      Charles Martel

Network flow algorithms are powerful tools that can be used to solve a wide range of optimization problems in management and logistics, such as scheduling production in manufacturing facilities, routing packets in communication networks, and assigning routes in transportation networks. It is therefore not surprising that these algorithms have been studied extensively. The four papers reviewed below have all contributed new ideas for improving the performance of network flow algorithms, and a large number of new results based on these ideas have appeared over the last five years. The study of network flow provides an elegant setting in which a variety of algorithmic and data-structure techniques can be applied.

Assume that we are dealing with a network that has n nodes connected by m edges; the nodes may or may not be arranged into layers. Prior to 1985, virtually all the best network flow algorithms were based on the approach Dinic suggested in 1970 [1]: finding a maximum flow requires computing a maximal flow in each of at most n − 1 layered networks (a maximal flow is also called a blocking flow because it blocks further forward augmentations). Karzanov [2] was the first to show that the method of preflows, which allows nodes temporarily to have more incoming flow than outgoing flow, could be used to find a blocking flow in O(n²) time. This method results in an algorithm that runs in O(n³) time. Both the algorithm of Malhotra, Kumar, and Maheshwari [3] and the wave algorithm of Tarjan [4], however, are much simpler and also achieve this time bound, which is still the best bound known for dense graphs with large capacities (newer results have led to better algorithms for sparse graphs, for graphs with small capacities, and for parallel implementations). More details of network flow algorithms can be found in the books by Papadimitriou and Steiglitz [5] and Tarjan [6], and two recent surveys provide comprehensive treatments of newer results on network flows [7, 8].

Shiloach and Vishkin

The main focus of the paper by Shiloach and Vishkin is to find a maximum flow algorithm that can be implemented efficiently on a PRAM (parallel random access machine). The authors first describe an O(n³) sequential algorithm that is a modification of Karzanov's algorithm. Their algorithm is more flexible with respect to the order in which it can process arcs (directed edges), and they show that it can be implemented to run in O(n² log n) time on a concurrent-read exclusive-write PRAM with n processors. The algorithm still deals with up to n − 1 layered graphs sequentially; it gets its speedup by finding a blocking flow in each layered network in O(n log n) parallel time. The key to the parallel implementation is the introduction of several tree data structures that store information about the flows in the arcs incident to a node. This organization allows many processors to work in parallel, with each processor modifying the flow in a different arc. The authors' complete description of the parallel details also covers exactly how processors are assigned to tasks. The resulting algorithm is fairly efficient: using n processors, it speeds up the best sequential algorithm by a factor of n/log n. (Note that since the maximum flow problem is P-complete [9], it is unlikely that the extreme speedups of an NC parallel algorithm can be achieved.)
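Both Karzanov's method and the Shiloach-Vishkin algorithm organize their work phase by phase on layered networks, as described above. For reference, here is a minimal sketch (ours, not code from either reviewed paper) of how the layered network for one phase can be obtained by breadth-first search over arcs that still have residual capacity:

    from collections import deque

    def level_graph(n, residual_cap, source):
        """BFS levels over arcs with positive residual capacity.

        residual_cap: dict (u, v) -> residual capacity; vertices are 0..n-1.
        Returns level[v] for every vertex (None if unreachable). An arc (u, v)
        belongs to the layered network exactly when level[v] == level[u] + 1."""
        # Adjacency lists restricted to arcs with remaining capacity.
        adj = [[] for _ in range(n)]
        for (u, v), c in residual_cap.items():
            if c > 0:
                adj[u].append(v)

        level = [None] * n
        level[source] = 0
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if level[v] is None:
                    level[v] = level[u] + 1
                    queue.append(v)
        return level

A phase then computes a blocking flow using only arcs (u, v) with level[v] = level[u] + 1; the overall method stops once the sink receives no level, i.e., becomes unreachable in the residual graph.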
Sleator and Tarjan

In an effort to improve the performance of Dinic's algorithm, several researchers have developed new data structures that store and manipulate the flows in individual arcs of the network. The first such data structures were developed independently by Shiloach in 1978 [10] and by Galil and Naamad in 1980 [11]; both made it possible to find a blocking flow in a layered network in O(m log² n) time, resulting in an O(mn log² n) time maximum flow algorithm. Sleator and Tarjan introduce the dynamic tree, a refinement of the previous data structures, and show how to use it to find a blocking flow in O(m log n) time, yielding an O(mn log n) maximum flow algorithm. Dynamic trees essentially allow the flows in all arcs on one augmenting path to be updated in a single operation. The details of implementing dynamic trees are quite messy, but Sleator and Tarjan describe them as abstract data structures and go over the operations for manipulating them. They prove the running times and make it possible for the reader to design and analyze algorithms that use dynamic trees without becoming involved with all the details of their low-level implementation. The authors provide some fairly detailed code that implements the basic operations, which is nice if one wants to do an actual implementation. The code itself contains no comments, however (the accompanying text describes the code); I think comments would have made it much more useful. Dynamic trees have been used to improve a number of recent flow algorithms, but the computational experience reported in this paper, as well as more recent results [7], suggests that dynamic trees do not lead to faster running times in practice. The authors also demonstrate the use of dynamic trees to speed up a number of other graph algorithms.

Gabow

Gabow suggests a general framework for designing efficient algorithms using scaling. The scaling approach as applied to network flow is to (1) halve all the capacities (rounding down), (2) recursively find a maximum flow for the reduced problem to get a flow f, and (3) double the flow in each arc and then use Dinic's algorithm to increase f to a maximum flow. Step (3) always starts with a flow that is at most m units less than the optimum, so Dinic's algorithm runs in O(mn) time. The number of halving phases is log N (where N is the largest capacity), so the algorithm runs in O(mn log N) time. Under the similarity assumption that N is bounded by a polynomial function of n, this bound is O(mn log n), which equals the bound of Sleator and Tarjan but is achieved by a much simpler algorithm. Gabow shows that scaling is a reasonable approach even when the capacities are not integral, and he applies his scaling technique to a number of other weighted graph problems.
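Here is a minimal sketch of the scaling idea, assuming integer capacities. Two hedges: this sketch uses the "threshold" (delta-scaling) formulation, which handles the capacity bits from most significant to least by restricting augmentations to arcs with large residual capacity, and it augments along simple breadth-first paths rather than with Dinic's blocking flows, so it only illustrates how scaling organizes the work and does not reproduce Gabow's algorithm or its O(mn log N) bound. All names are ours.

    from collections import deque

    def max_flow_scaling(n, capacity, source, sink):
        """Capacity-scaling maximum flow sketch. capacity: dict (u, v) -> integer
        capacity; vertices are 0..n-1. Returns the maximum flow value."""
        cap = {}
        for (u, v), c in capacity.items():
            cap[(u, v)] = cap.get((u, v), 0) + c
            cap.setdefault((v, u), 0)           # zero-capacity reverse arcs
        adj = [[] for _ in range(n)]
        for (u, v) in cap:
            adj[u].append(v)

        def augment(threshold):
            """Push flow along one path whose residual capacities are all >= threshold."""
            parent = {source: None}
            queue = deque([source])
            while queue and sink not in parent:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in parent and cap[(u, v)] >= threshold:
                        parent[v] = u
                        queue.append(v)
            if sink not in parent:
                return 0
            # Bottleneck along the path found, then update residual capacities.
            delta, v = float("inf"), sink
            while parent[v] is not None:
                delta = min(delta, cap[(parent[v], v)])
                v = parent[v]
            v = sink
            while parent[v] is not None:
                cap[(parent[v], v)] -= delta
                cap[(v, parent[v])] += delta
                v = parent[v]
            return delta

        flow = 0
        threshold = 1
        while threshold * 2 <= max(capacity.values(), default=0):
            threshold *= 2                      # largest power of two <= max capacity
        while threshold >= 1:                   # one scaling phase per bit
            while True:
                pushed = augment(threshold)
                if pushed == 0:
                    break
                flow += pushed
            threshold //= 2
        return flow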
Goldberg and Tarjan

All the algorithms discussed above are based on Dinic's layered graph approach; Goldberg discovered the first efficient algorithm that is not [12, 13]. His algorithm uses the preflow approach suggested by Karzanov, in which unbalanced nodes, whose incoming flow exceeds their outgoing flow, try to balance themselves by sending additional flow to their neighbors. In Karzanov's algorithm (as well as in the Shiloach-Vishkin algorithm) a layered graph is used to determine which vertices an unbalanced vertex can send flow to. Goldberg's innovation was to use distance labels, which estimate the distance from a node to the sink t; thus a node with label d tries to move its excess toward t by sending it to a node with label d − 1. This approach leads to a more uniform treatment of vertices, allows vertices to be processed in a more flexible order, and provides a global measure of the progress of the algorithm. A straightforward implementation leads to an O(n³) running time.

Goldberg and Tarjan describe the details of the algorithm and its analysis. They also show how to implement the algorithm using dynamic trees to obtain a running time of O(mn log(n²/m)). The flexible and uniform treatment of vertices in this approach makes it easier to add dynamic trees to the algorithm. The size of the dynamic trees used is restricted to at most n²/m; since the dynamic tree operations take O(log k) time (where k is the size of the tree), a bound of O(mn) on the number of dynamic tree operations yields the time bound. The algorithm's flexibility also makes it easy to parallelize: it can be implemented to run in O(n² log n) time on an exclusive-read exclusive-write PRAM using low-level details very similar to those of the Shiloach-Vishkin implementation. The Goldberg-Tarjan (GT) algorithm has the advantage of using only O(m) space, while the Shiloach-Vishkin algorithm seems to require Ω(n²) space. Goldberg has implemented the parallel version of his algorithm on the Connection Machine and has reported speedups of over 100 compared to a sequential implementation [13]. It is difficult, however, to judge how much these results depend on the particular flow problems used. The GT algorithm can also be adapted fairly easily to run on asynchronous distributed systems.

Perhaps the strongest case for the importance of Goldberg and Tarjan's paper is the algorithms that have been developed from it. Gallo, Grigoriadis, and Tarjan [14] have shown how to use the GT algorithm to solve a large class of parametric problems a factor of n faster than previous algorithms. Ahuja and Orlin [15] have shown how to combine the ideas of the GT algorithm with the scaling approach to develop a fairly simple maximum flow algorithm that runs in O(mn + n² log N) time. In a more recent paper, Ahuja, Orlin, and Tarjan [16] have added dynamic trees to the Ahuja-Orlin algorithm, which results in somewhat better time bounds.
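Spelling out the arithmetic behind that bound (a restatement of the reasoning above, not new analysis), in LaTeX notation:

    k \le n^2/m \quad\Longrightarrow\quad T(n, m) \;=\; O(mn) \cdot O(\log k) \;=\; O\bigl(mn \log(n^2/m)\bigr).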


Published in

Journal of the ACM, Volume 35, Issue 4 (Oct. 1988), 242 pages
ISSN: 0004-5411
EISSN: 1557-735X
DOI: 10.1145/48014
Copyright © 1988 ACM

Publisher

Association for Computing Machinery, New York, NY, United States
