DOI: 10.1145/3544549.3585726 (CHI Conference Proceedings)
Work in Progress

VR, Gaze, and Visual Impairment: An Exploratory Study of the Perception of Eye Contact across different Sensory Modalities for People with Visual Impairments in Virtual Reality

Published: 19 April 2023

ABSTRACT

As social virtual reality (VR) becomes more popular, avatars are being designed with realistic behaviors that incorporate non-verbal cues such as eye contact. However, perceiving eye contact during a conversation can be challenging for people with visual impairments. VR presents an opportunity to display eye contact cues in alternative ways, making them perceivable for people with visual impairments. We performed an exploratory study to gain initial insights into designing eye contact cues for people with visual impairments, beginning with a focus group to build a deeper understanding of the topic. We implemented eye contact cues via visual, auditory, and tactile sensory modalities in VR, tested these approaches with eleven participants with visual impairments, and collected qualitative feedback. The results show that visual cues indicating the gaze direction were preferred, but auditory and tactile cues were also well received, as they do not superimpose additional visual information.
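The paper does not specify its implementation, but the underlying idea of triggering a cue when an avatar's gaze falls on the observer can be sketched as a simple gaze-cone test. The function names, the 10° cone threshold, and the vector convention below are illustrative assumptions, not the authors' method:

```python
import math

def gaze_angle_deg(gaze_dir, to_observer):
    """Angle (degrees) between the avatar's gaze direction and the
    vector from the avatar's eyes to the observer."""
    dot = sum(g * t for g, t in zip(gaze_dir, to_observer))
    norm = (math.sqrt(sum(g * g for g in gaze_dir))
            * math.sqrt(sum(t * t for t in to_observer)))
    # Clamp to guard against floating-point overshoot before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def eye_contact_cue(gaze_dir, to_observer, cone_deg=10.0):
    """True when the gaze lies within the eye-contact cone, i.e. when
    a cue (visual highlight, tone, or vibration) would be triggered."""
    return gaze_angle_deg(gaze_dir, to_observer) <= cone_deg
```

Each frame, the VR application would evaluate this test per avatar and route the result to whichever sensory modality the user selected.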

Supplemental Material

3544549.3585726-talk-video.mp4 (mp4, 18 MB)


Published in

CHI EA '23: Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems
April 2023, 3914 pages
ISBN: 9781450394222
DOI: 10.1145/3544549

Copyright © 2023 Owner/Author

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher: Association for Computing Machinery, New York, NY, United States

Qualifiers

• Work in Progress
• Research
• Refereed limited

Acceptance Rates

Overall Acceptance Rate: 6,164 of 23,696 submissions, 26%
