Research article · CHI Conference Proceedings · DOI: 10.1145/2556288.2556984

Exploring the use of hand-to-face input for interacting with head-worn displays

Published: 26 April 2014

ABSTRACT

We propose the use of Hand-to-Face input, a method to interact with head-worn displays (HWDs) that involves contact with the face. We explore Hand-to-Face interaction to find suitable techniques for common mobile tasks. We evaluate this form of interaction with document navigation tasks and examine its social acceptability. In a first study, users identify the cheek and forehead as the predominant areas for interaction and agree on gestures for tasks involving continuous input, such as document navigation. These results guide the design of several Hand-to-Face navigation techniques and reveal that gestures performed on the cheek are more efficient and less tiring than interactions directly on the HWD. Initial results on the social acceptability of Hand-to-Face input allow us to further refine our design choices, and reveal unexpected findings: some gestures are considered culturally inappropriate, and gender plays a role in the selection of specific Hand-to-Face interactions. From our overall results, we provide a set of guidelines for developing effective Hand-to-Face interaction techniques.
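As a purely illustrative sketch (not the authors' implementation, which the abstract does not detail), a continuous cheek-swipe navigation technique of the kind described above can be modeled as a displacement-to-scroll mapping with momentum, so the document keeps coasting briefly after the finger lifts. All names and constants below are assumptions:

```python
# Hypothetical sketch of continuous gesture-driven document scrolling.
# A stream of 1-D displacements (e.g. vertical finger movement sensed on
# the cheek) drives a scroll offset; after lift-off, velocity decays.

class MomentumScroller:
    def __init__(self, friction=0.92, gain=1.0):
        self.friction = friction  # per-frame velocity decay (0..1), assumed value
        self.gain = gain          # displacement-to-scroll scale factor
        self.offset = 0.0         # current scroll position (pixels)
        self.velocity = 0.0       # pixels per frame

    def on_move(self, delta):
        """Finger is in contact and moved by `delta` since the last frame."""
        self.velocity = delta * self.gain
        self.offset += self.velocity

    def on_frame(self):
        """Finger lifted: coast and decelerate each frame until rest."""
        self.velocity *= self.friction
        if abs(self.velocity) < 0.1:
            self.velocity = 0.0
        self.offset += self.velocity

scroller = MomentumScroller()
for d in (12, 15, 10):      # three frames of a downward cheek swipe
    scroller.on_move(d)
while scroller.velocity:    # finger lifted: momentum phase runs to rest
    scroller.on_frame()
```

The momentum phase is what makes short, socially discreet gestures sufficient for navigating long documents: a brief flick keeps scrolling after contact ends.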

Supplemental Material

pn0222-file3.mp4 (MP4, 33.5 MB)


Published in

CHI '14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
April 2014 · 4206 pages
ISBN: 978-1-4503-2473-1
DOI: 10.1145/2556288

Copyright © 2014 ACM
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States


Acceptance Rates

CHI '14 paper acceptance rate: 465 of 2,043 submissions (23%). Overall acceptance rate: 6,199 of 26,314 submissions (24%).
