
Gestural Interaction Using Feature Classification

  • Conference paper
Articulated Motion and Deformable Objects (AMDO 2008)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 5098)


Abstract

This paper describes our ongoing research on deviceless interaction using hand gesture recognition with a calibrated stereo system. Video-based interaction is one of the most intuitive forms of human-computer interaction for virtual-reality applications, because users are not wired to a computer. For interaction with three-dimensional environments, pointing, grabbing and releasing are the gestures humans use most intuitively. Our video-based gesture recognition system observes the user in front of a large display screen, identifies three different hand gestures in real time using 2D feature classification, and determines 3D information such as the 3D position of the user’s hand or, when the pointing gesture is performed, the pointing direction. Several scenario applications, such as a virtual chess game against the computer and an industrial scenario, have been developed and tested. To estimate how many gestures can be distinguished, a sign language recognition application has been developed and tested using only a single uncalibrated camera.
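
The paper itself does not include source code; the following minimal Python sketch (our illustration, not the authors' implementation) shows the two ingredients named in the abstract: classification of a hand silhouette from a few 2D shape features, and linear (DLT) triangulation of the hand position from a calibrated stereo pair. The feature set, the nearest-centroid classifier and all names below are illustrative assumptions.

    import numpy as np

    def shape_features(mask):
        """A few 2D features of a binary hand silhouette (illustrative choice)."""
        ys, xs = np.nonzero(mask)
        area = float(len(xs))
        h = ys.max() - ys.min() + 1          # bounding-box height
        w = xs.max() - xs.min() + 1          # bounding-box width
        return np.array([area / mask.size,   # fraction of the image covered
                         h / w,              # elongation of the bounding box
                         area / (h * w)])    # extent: how full the bounding box is

    def classify(features, centroids):
        """Nearest-centroid decision over the three gestures (point, grab, release)."""
        return min(centroids, key=lambda g: np.linalg.norm(features - centroids[g]))

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one point seen at pixel x1 in camera 1 and
        x2 in camera 2, given the 3x4 projection matrices of the calibrated rig."""
        A = np.vstack([x1[0] * P1[2] - P1[0],
                       x1[1] * P1[2] - P1[1],
                       x2[0] * P2[2] - P2[0],
                       x2[1] * P2[2] - P2[1]])
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]                  # dehomogenise to a 3D hand position

In practice the gesture centroids would be learned from labelled training frames, and the projection matrices P1 and P2 would come from the stereo calibration; both are assumed to be given inputs here.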




Author information

Authors and Affiliations


Editor information

Francisco J. Perales, Robert B. Fisher


Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Malerczyk, C. (2008). Gestural Interaction Using Feature Classification. In: Perales, F.J., Fisher, R.B. (eds) Articulated Motion and Deformable Objects. AMDO 2008. Lecture Notes in Computer Science, vol 5098. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-70517-8_22


  • DOI: https://doi.org/10.1007/978-3-540-70517-8_22

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-70516-1

  • Online ISBN: 978-3-540-70517-8

  • eBook Packages: Computer Science, Computer Science (R0)
