
Analyzing growing plants from 4D point cloud data

Published: 01 November 2013

Abstract

Studying the growth and development of plants is of central importance in botany. Current quantitative methods are either limited to tedious and sparse manual measurements, or to coarse image-based 2D measurements. The availability of cheap and portable 3D acquisition devices has the potential to automate this process and easily provide scientists with volumes of accurate data, at a scale far beyond the reach of existing methods. However, during their development, plants grow new parts (e.g., vegetative buds) and bifurcate into different components, violating the central incompressibility assumption made by existing acquisition algorithms and making them unsuited for analyzing growth. We introduce a framework to study plant growth, focusing particularly on accurate localization and tracking of topological events like budding and bifurcation. This is achieved by a novel forward-backward analysis, wherein we track robustly detected plant components back in time to ensure correct spatio-temporal event detection using a locally adapting threshold. We evaluate our approach on several groups of time-lapse scans, often spanning days to weeks, of a diverse set of plant species, and use the results to animate static virtual plants or directly attach them to physical simulators.
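To make the forward-backward idea concrete, the following is a minimal, illustrative sketch in Python (not the authors' implementation): a component detected at a later frame is tracked backward through earlier point clouds, and the event is placed at the first frame lacking supporting geometry. The nearest-neighbour test and the density-based, locally adapting threshold shown here are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a forward-backward consistency check for event localization,
# assuming per-frame point clouds as (n_i, 3) NumPy arrays.
import numpy as np
from scipy.spatial import cKDTree

def local_spacing(cloud: np.ndarray, k: int = 8) -> float:
    """Median distance to the k-th nearest neighbour; a simple local scale estimate."""
    tree = cKDTree(cloud)
    d, _ = tree.query(cloud, k=k + 1)   # first column is the point itself (distance 0)
    return float(np.median(d[:, -1]))

def backward_track_event(component: np.ndarray, clouds: list[np.ndarray],
                         t_detect: int, scale: float = 3.0) -> int:
    """Return the frame index at which a detected component (e.g., a new bud) first appears.

    component : (m, 3) points of a robustly detected plant part at frame t_detect
    clouds    : list of (n_i, 3) point clouds, one per time step
    """
    for t in range(t_detect - 1, -1, -1):
        tree = cKDTree(clouds[t])
        d, _ = tree.query(component)                   # distance of each component point to frame t
        threshold = scale * local_spacing(clouds[t])   # locally adapting threshold
        if np.median(d) > threshold:                   # no supporting geometry at frame t
            return t + 1                               # event localized between t and t+1
    return 0                                           # component present from the first frame
```

A usage example under the same assumptions: given `clouds[0..T]` and a bud segmented at frame `T`, `backward_track_event(bud_points, clouds, T)` returns the frame where the budding event is placed; tying the threshold to the local point spacing keeps the test meaningful across scans of varying density.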



Published in

ACM Transactions on Graphics, Volume 32, Issue 6
November 2013
671 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/2508363

            Copyright © 2013 ACM

            Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

            Publisher

            Association for Computing Machinery

            New York, NY, United States


