Research Article · Open Access
DOI: 10.1145/3350755.3400255

Randomized Incremental Convex Hull is Highly Parallel

Published: 9 July 2020

ABSTRACT

The randomized incremental convex hull algorithm is one of the most practical and important geometric algorithms in the literature. Due to its simplicity, and the fact that many points or facets can be added independently, it is also widely used in parallel convex hull implementations. However, to date there have been no non-trivial theoretical bounds on the parallelism available in these implementations. In this paper, we provide a strong theoretical analysis showing that the standard incremental algorithm is inherently parallel. In particular, we show that for n points in any constant dimension, the algorithm has O(log n) dependence depth with high probability. This leads to a simple work-optimal parallel algorithm with polylogarithmic span with high probability.

Our key technical contribution is a new definition and analysis of the configuration dependence graph, which extends the traditional configuration space and allows for asynchrony in adding configurations. To capture the "true" dependence between configurations, we define the support set of a configuration c to be the set of already-added configurations that it depends on. We show that for problems where the size of the support set can be bounded by a constant, the configuration dependence graph is shallow (O(log n) depth with high probability for input size n). In addition to convex hull, our approach also extends to several related problems, including half-space intersection and finding the intersection of a set of unit circles. We believe that the configuration dependence graph and its analysis are a general technique that could potentially be applied to more problems.
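For concreteness, the sequential 2D version of the randomized incremental algorithm analyzed here can be sketched as follows. This is a minimal illustration, not the paper's code: it assumes distinct input points in general position (no three collinear), and uses exact integer coordinates to sidestep robustness issues.

```python
import random

def cross(o, a, b):
    # z-component of (a - o) x (b - o); positive when o -> a -> b turns counterclockwise
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def incremental_hull(points):
    """Randomized incremental 2D convex hull.

    Assumes distinct points, no three collinear.  Returns the hull
    vertices in counterclockwise order.
    """
    pts = list(points)
    random.shuffle(pts)  # random insertion order, as in the analyzed algorithm
    # Start from a non-degenerate triangle, oriented counterclockwise.
    for k in range(2, len(pts)):
        if cross(pts[0], pts[1], pts[k]) != 0:
            pts[2], pts[k] = pts[k], pts[2]
            break
    hull = pts[:3]
    if cross(hull[0], hull[1], hull[2]) < 0:
        hull[1], hull[2] = hull[2], hull[1]
    for p in pts[3:]:
        n = len(hull)
        # Edge i runs from hull[i] to hull[(i+1) % n]; it is "visible"
        # from p when p lies strictly to its right (outside the hull).
        vis = {i for i in range(n) if cross(hull[i], hull[(i + 1) % n], p) < 0}
        if not vis:
            continue  # p is inside the current hull
        # Visible edges form one contiguous cyclic chain; find its two ends.
        start = next(i for i in vis if (i - 1) % n not in vis)
        end = next(i for i in vis if (i + 1) % n not in vis)
        # Drop the vertices strictly between the two tangents and splice in p.
        new_hull = [p]
        k = (end + 1) % n
        while True:
            new_hull.append(hull[k])
            if k == start:
                break
            k = (k + 1) % n
        hull = new_hull
    return hull
```

The random insertion order is what the analysis exploits: each insertion depends only on the facets it destroys, and the paper shows that the resulting dependence structure has O(log n) depth with high probability, so a parallel implementation can add many such points asynchronously.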


Published in

SPAA '20: Proceedings of the 32nd ACM Symposium on Parallelism in Algorithms and Architectures
July 2020, 601 pages
ISBN: 9781450369350
DOI: 10.1145/3350755
Copyright © 2020 ACM
Publisher: Association for Computing Machinery, New York, NY, United States


Overall acceptance rate: 447 of 1,461 submissions (31%)
