
Combining Color, Depth, and Motion for Video Segmentation

  • Conference paper
Computer Vision Systems (ICVS 2009)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 5815)


Abstract

This paper presents an innovative method to interpret the content of a video scene using a depth camera. Cameras that provide distance rather than color information belong to a promising but still young technology, and they come with many difficulties: noisy signals, low resolution, and measurement ambiguities, to name a few.

By taking advantage of the robustness to noise of a recent background subtraction algorithm, our method is able to extract useful information from the depth signals. We further enhance the robustness of the algorithm by combining this information with that of an RGB camera. In our experiments, we demonstrate this increased robustness and conclude by showing a practical example of an immersive application taking advantage of our algorithm.
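
The sketch below illustrates, in broad strokes, the kind of pipeline the abstract describes: running background subtraction on both the color and the depth streams and fusing the resulting foreground masks. It is not the authors' implementation; OpenCV's MOG2 subtractor stands in for the background subtraction algorithm used in the paper, the two streams are assumed to be already registered to the same image plane, and `read_rgb_frame` / `read_depth_frame` are hypothetical frame sources.

```python
# Minimal sketch (assumptions noted above): fuse foreground masks obtained from
# an RGB camera and a depth (time-of-flight) camera.
import cv2
import numpy as np

# One subtractor per modality; MOG2 is a stand-in, not the paper's algorithm.
rgb_subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
depth_subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

def segment(rgb_frame: np.ndarray, depth_frame: np.ndarray) -> np.ndarray:
    """Return a binary foreground mask built from both modalities."""
    # Background subtraction on the color stream.
    rgb_mask = rgb_subtractor.apply(rgb_frame)

    # The depth map is noisy and low resolution: smooth it, rescale it to
    # 8 bits, and upsample it to the color resolution before subtraction.
    depth = cv2.medianBlur(depth_frame.astype(np.float32), 5)
    depth = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    depth = cv2.resize(depth, (rgb_frame.shape[1], rgb_frame.shape[0]),
                       interpolation=cv2.INTER_NEAREST)
    depth_mask = depth_subtractor.apply(depth)

    # Fuse the two masks: flag a pixel as foreground if either modality does,
    # then clean the result with a small morphological opening.
    fused = cv2.bitwise_or(rgb_mask, depth_mask)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(fused, cv2.MORPH_OPEN, kernel)
```

Any fusion rule could replace the simple OR above (for instance, requiring agreement between the two masks, or weighting depth more heavily where the color model is unreliable); the point of the sketch is only the two-stream structure.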





Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Leens, J., Piérard, S., Barnich, O., Van Droogenbroeck, M., Wagner, JM. (2009). Combining Color, Depth, and Motion for Video Segmentation. In: Fritz, M., Schiele, B., Piater, J.H. (eds) Computer Vision Systems. ICVS 2009. Lecture Notes in Computer Science, vol 5815. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04667-4_11


  • DOI: https://doi.org/10.1007/978-3-642-04667-4_11

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04666-7

  • Online ISBN: 978-3-642-04667-4

  • eBook Packages: Computer Science, Computer Science (R0)
