DOI: 10.1145/3397481.3450645
research-article
Open Access

TeethTap: Recognizing Discrete Teeth Gestures Using Motion and Acoustic Sensing on an Earpiece

Published: 14 April 2021

ABSTRACT

Teeth gestures are an emerging alternative input modality for a range of situations and accessibility purposes. In this paper, we present TeethTap, a novel eyes-free and hands-free input technique that can recognize up to 13 discrete teeth-tapping gestures. TeethTap adopts a wearable 3D-printed earpiece with an IMU sensor and a contact microphone behind each ear, which work in tandem to capture jaw movement and sound, respectively. TeethTap uses a support vector machine to distinguish gestures from noise by fusing acoustic and motion data, and classifies gestures with a K-Nearest-Neighbor (KNN) classifier using a Dynamic Time Warping (DTW) distance measure over the motion data. A user study with 11 participants demonstrated that TeethTap could recognize 13 gestures with a real-time classification accuracy of 90.9% in a laboratory environment. We further uncovered accuracy differences across teeth gestures when sensors were worn on a single side versus both sides. Moreover, we explored an activation gesture under real-world conditions, including eating, speaking, walking, and jumping. Based on our findings, we discuss potential applications and practical challenges of integrating TeethTap into future devices.
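
As a rough illustration of the classification stage described above, the following Python sketch (not the authors' implementation) pairs a K-Nearest-Neighbor vote with a Dynamic Time Warping distance over multivariate IMU windows. The window length, six-axis feature layout, gesture labels, and training templates are illustrative assumptions; the separate SVM stage that gates gestures from noise is omitted.

import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW distance between two multivariate sequences of shape (T, D)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return float(cost[n, m])

def knn_dtw_classify(query: np.ndarray,
                     templates: list[tuple[np.ndarray, str]],
                     k: int = 3) -> str:
    """Label a motion window by majority vote among its k DTW-nearest templates."""
    dists = sorted((dtw_distance(query, t), label) for t, label in templates)
    top_labels = [label for _, label in dists[:k]]
    return max(set(top_labels), key=top_labels.count)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical 6-axis IMU windows (accel + gyro), ~50 frames per gesture template.
    templates = [(rng.normal(size=(50, 6)) + i, f"tap_{i}") for i in range(3)]
    query = rng.normal(size=(48, 6)) + 1   # a noisy instance resembling "tap_1"
    print(knn_dtw_classify(query, templates, k=1))

Because DTW tolerates variation in gesture duration, the same template set can match slower or faster executions of a tap, which is one common reason to prefer it over a fixed-length Euclidean distance for this kind of motion data.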


      • Published in

        IUI '21: Proceedings of the 26th International Conference on Intelligent User Interfaces
        April 2021
        618 pages
ISBN: 9781450380171
DOI: 10.1145/3397481

        Copyright © 2021 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

        Publisher

        Association for Computing Machinery

        New York, NY, United States



        Qualifiers

        • research-article
        • Research
        • Refereed limited

        Acceptance Rates

Overall Acceptance Rate: 746 of 2,811 submissions, 27%
