DOI: 10.1145/3491102.3502020
Research Article · Open Access

ProtoSound: A Personalized and Scalable Sound Recognition System for Deaf and Hard-of-Hearing Users

Published: 29 April 2022

ABSTRACT

Recent advances have enabled automatic sound recognition systems for deaf and hard of hearing (DHH) users on mobile devices. However, these tools use pre-trained, generic sound recognition models, which do not meet the diverse needs of DHH users. We introduce ProtoSound, an interactive system for customizing sound recognition models by recording a few examples, thereby enabling personalized and fine-grained categories. ProtoSound is motivated by prior work examining sound awareness needs of DHH people and by a survey we conducted with 472 DHH participants. To evaluate ProtoSound, we characterized performance on two real-world sound datasets, showing significant improvement over state-of-the-art (e.g., +9.7% accuracy on the first dataset). We then deployed ProtoSound's end-user training and real-time recognition through a mobile application and recruited 19 hearing participants who listened to the real-world sounds and rated the accuracy across 56 locations (e.g., homes, restaurants, parks). Results show that ProtoSound personalized the model on-device in real-time and accurately learned sounds across diverse acoustic contexts. We close by discussing open challenges in personalizable sound recognition, including the need for better recording interfaces and algorithmic improvements.
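The abstract sketches the core idea: a user records a few examples of a sound, and the recognizer is personalized on-device from those examples. The paper's exact method is not reproduced on this page, but the system's name and few-shot framing suggest a prototype-based classifier in the spirit of prototypical networks (Snell et al., 2017). The sketch below is a minimal, hypothetical illustration of that idea; the embed() placeholder, the helper names, and the toy data are all assumptions for illustration, not ProtoSound's actual implementation.

```python
import numpy as np

def embed(clip: np.ndarray) -> np.ndarray:
    """Placeholder embedding. A real system would run a pretrained audio
    encoder (e.g., a log-mel CNN); these summary statistics only keep the
    sketch self-contained and runnable."""
    return np.array([clip.mean(), clip.std(), np.abs(clip).max()])

def build_prototypes(support: dict) -> dict:
    """Average the embeddings of each class's few user-recorded examples
    to form one prototype vector per class."""
    return {label: np.mean([embed(c) for c in clips], axis=0)
            for label, clips in support.items()}

def classify(query: np.ndarray, prototypes: dict) -> str:
    """Label a query clip with its nearest prototype (squared Euclidean)."""
    q = embed(query)
    return min(prototypes, key=lambda label: np.sum((q - prototypes[label]) ** 2))

# Toy usage: two hypothetical user-defined categories, three examples each.
rng = np.random.default_rng(0)
support = {
    "doorbell": [rng.normal(0.0, 1.0, 16000) for _ in range(3)],
    "kettle":   [rng.normal(0.0, 0.2, 16000) for _ in range(3)],
}
prototypes = build_prototypes(support)
print(classify(rng.normal(0.0, 0.2, 16000), prototypes))  # expected: "kettle"
```

One appeal of this family of methods, consistent with the abstract's claim of real-time on-device personalization, is that the encoder stays fixed: adding a new sound category reduces to averaging a handful of embeddings, which is cheap enough to run on a phone.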


Supplemental Material

3491102.3502020-talk-video.mp4 (mp4, 86.4 MB)


Published in

CHI '22: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems
April 2022, 10459 pages
ISBN: 9781450391573
DOI: 10.1145/3491102
Copyright © 2022 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

CHI overall acceptance rate: 6,199 of 26,314 submissions (24%)
