
The FATE Landscape of Sign Language AI Datasets: An Interdisciplinary Perspective

Published: 21 July 2021

Abstract

Sign language datasets are essential to developing many sign language technologies. In particular, datasets are required for training artificial intelligence (AI) and machine learning (ML) systems. Though the idea of using AI/ML for sign languages is not new, technology has now advanced to a point where developing such sign language technologies is becoming increasingly tractable. This critical juncture provides an opportunity to be thoughtful about an array of Fairness, Accountability, Transparency, and Ethics (FATE) considerations. Sign language datasets typically contain recordings of people signing, which is highly personal. The rights and responsibilities of the parties involved in data collection and storage are also complex and involve individual data contributors, data collectors or owners, and data users who may interact through a variety of exchange and access mechanisms. Deaf community members (and signers, more generally) are also central stakeholders in any end applications of sign language data. The centrality of sign language to deaf culture and identity, coupled with a history of oppression, makes usage by technologists particularly sensitive. This piece presents many of the issues that characterize working with sign language AI datasets, based on the authors' experiences living, working, and studying in this space.

References

  1. 2006. Convention on the Rights of Persons with Disabilities (CRPD) Enable. Retrieved from https://www.un.org/development/desa/disabilities/convention-on-the-rights-of-persons-with-disabilities.html.Google ScholarGoogle Scholar
  2. 2016. Sunderland v. Bethesda Health, Inc. Vol. 184. Dist. Court, SD Florida. https://www.leagle.com/decision/infdco20160204a03.Google ScholarGoogle Scholar
  3. United Nations. 2016. WFD and WASLI Issue Statement on Signing Avatars. Retrieved from https://wfdeaf.org/news/wfd-wasli-issue-statement-signing-avatars/.Google ScholarGoogle Scholar
  4. Michael Sawh. 2017. Ontenna smart hair clip will help deaf people sense sound. Wareable Ltd. Retrieved from https://www.wareable.com/wearable-tech/ontenna-smart-hair-clip-for-deaf-sense-sound-4524.Google ScholarGoogle Scholar
  5. Registry of Interpreters for the Deaf, Inc. (RID). 2019. 2018 Annual Report. Retrieved from https://rid.org/2018-annual-report/.Google ScholarGoogle Scholar
  6. BBC News. 2019. Google sign language AI turns hand gestures into speech. Retrieved from https://www.bbc.com/news/technology-49410945.Google ScholarGoogle Scholar
  7. Alex Abenchuchan. 2015. The Daily Moth. Retrieved from https://www.dailymoth.com/blog.Google ScholarGoogle Scholar
  8. Chadia Abras, Diane Maloney-Krichmar, Jenny Preece, et al. 2004. User-centered design. Bainbridge, W. Encyclopedia of Human-computer Interaction. Sage Publications, Thousand Oaks, 37, 4 (2004), 445–456.Google ScholarGoogle Scholar
  9. Nikolas Adaloglou, Theocharis Chatzis, Ilias Papastratis, Andreas Stergioulas, Georgios Th. Papadopoulos, Vassia Zacharopoulou, George J. Xydopoulos, Klimnis Atzakas, Dimitris Papazachariou, and Petros Daras. 2020. A comprehensive study on sign language recognition methods. arXiv:2007.12530 [cs] (2020).Google ScholarGoogle Scholar
  10. Sedeeq Al-khazraji, Larwan Berke, Sushant Kafle, Peter Yeung, and Matt Huenerfauth. 2018. Modeling the speed and timing of American sign language to generate realistic animations. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS’18). Association for Computing Machinery, New York, NY, 259–270. DOI:https://doi.org/10.1145/3234695.3236356Google ScholarGoogle Scholar
  11. Oliver Alonzo, Abraham Glasser, and Matt Huenerfauth. 2019. Effect of automatic sign recognition performance on the usability of video-based search interfaces for sign language dictionaries. In Proceedings of the 21st International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS’19). Association for Computing Machinery, New York, NY, 56–67. DOI:https://doi.org/10.1145/3308561.3353791Google ScholarGoogle ScholarDigital LibraryDigital Library
  12. Amazon. 2005. Amazon Mechanical Turk. Retrieved from https://www.mturk.com/.Google ScholarGoogle Scholar
  13. Robert J. Amdur and Elizabeth A. Bankert. 2010. Institutional Review Board: Member Handbook. Jones & Bartlett Publishers.Google ScholarGoogle Scholar
  14. Diego Roberto Antunes, André L. P. Guedes, and Laura Sánchez García. 2015. A context-based collaborative framework to build sign language databases by real users. In Proceedings of the International Conference on Universal Access in Human-computer Interaction. Springer, 327–338.Google ScholarGoogle ScholarCross RefCross Ref
  15. Diego R. Antunes, Cayley Guimarães, Laura S. García, Luiz Eduardo S. Oliveira, and Sueli Fernandes. 2011. A framework to support development of sign language human-computer interaction: Building tools for effective information access and inclusion of the deaf. In Proceedings of the 5th International Conference on Research Challenges in Information Science. IEEE, 1–12.Google ScholarGoogle ScholarCross RefCross Ref
  16. Giuseppe Ateniese, Luigi V. Mancini, Angelo Spognardi, Antonio Villani, Domenico Vitali, and Giovanni Felici. 2015. Hacking smart machines with smarter ones: How to extract meaningful data from machine learning classifiers. Int. J. Secur. Netw. 10, 3 (Sept. 2015), 137–150. DOI:https://doi.org/10.1504/IJSN.2015.071829Google ScholarGoogle ScholarDigital LibraryDigital Library
  17. R. A. Augustus, E. Ritchie, and S. Stecker. 2013. The Official American Sign Language Writing Textbook. ASLized, Los Angeles, CA.Google ScholarGoogle Scholar
  18. Britta Bauer and Hermann Hienz. 2000. Relevant features for video-based continuous sign language recognition. In Proceedings of the 4th IEEE International Conference on Automatic Face and Gesture Recognition. IEEE, 440–445.Google ScholarGoogle ScholarCross RefCross Ref
  19. Britta Bauer and Karl-Friedrich Kraiss. 2002. Towards an automatic sign language recognition system using subunits. In Gesture and Sign Language in Human-computer Interaction, Ipke Wachsmuth and Timo Sowa (Eds.). Lecture Notes in Computer Science, Vol. 2298. Springer Berlin, 123–173.Google ScholarGoogle Scholar
  20. B. Bauer, S. Nießen, and H. Hienz. 1999. Towards an automatic sign language translation system. In Proceedings of the International Workshop on Physicality and Tangibility in Interaction: Towards New Paradigms for Interaction Beyond the Desktop.Google ScholarGoogle Scholar
  21. Larwan Berke, Sushant Kafle, and Matt Huenerfauth. 2018. Methods for evaluation of imperfect captioning tools by deaf or hard-of-hearing users at different reading literacy levels. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’18). Association for Computing Machinery, New York, NY, 1–12. DOI:https://doi.org/10.1145/3173574.3173665Google ScholarGoogle ScholarDigital LibraryDigital Library
  22. T. Bluche, H. Ney, J. Louradour, and C. Kermorvant. 2015. Framewise and CTC training of neural networks for handwriting recognition. In Proceedings of the International Conference on Document Analysis and Recognition (ICDAR’15). 81–85. DOI:https://doi.org/10.1109/ICDAR.2015.7333730Google ScholarGoogle Scholar
  23. M. Borg and K. P. Camilleri. 2019. Sign language detection “in the wild” with recurrent neural networks. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP’19). 1637–1641. DOI:https://doi.org/10.1109/ICASSP.2019.8683257.Google ScholarGoogle Scholar
  24. Mark Borg and Kenneth P. Camilleri. 2020. Phonologically-meaningful subunits for deep learning-based sign language recognition. In Proceedings of the European Conference on Computer Vision Workshops (ECCVW’20).Google ScholarGoogle Scholar
  25. Harry Bornstein, Karen Luczak Saulnier, Lillian B. Hamilton, and Ralph R. Miller. 1983. The Comprehensive Signed English Dictionary. Gallaudet University Press.Google ScholarGoogle Scholar
  26. Rachel Botsman. 2017. Big data meets big brother as China moves to rate its citizens. Wired UK 21 (2017), 1–11.Google ScholarGoogle Scholar
  27. Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. 2016. Domain separation networks. In Advances in Neural Information Processing Systems 29, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (Eds.). Curran Associates, Inc., 343–351.Google ScholarGoogle Scholar
  28. Daren C. Brabham. 2013. Crowdsourcing. The MIT Press.Google ScholarGoogle Scholar
  29. Danielle Bragg, Oscar Koller, Mary Bellard, Larwan Berke, Patrick Boudreault, Annelies Braffort, Naomi Caselli, Matt Huenerfauth, Hernisa Kacorri, Tessa Verhoef, Christian Vogler, and Meredith Ringel Morris. 2019. Sign language recognition, generation, and translation: An interdisciplinary perspective. In Proceedings of the 21st International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS’19). Association for Computing Machinery, New York, NY, 16–31. DOI:https://doi.org/10.1145/3308561.3353774.Google ScholarGoogle ScholarDigital LibraryDigital Library
  30. Danielle Bragg, Oscar Koller, Naomi Caselli, and William Thies. 2020. Exploring collection of sign language datasets: Privacy, participation, and model performance. In Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility. 16–31.Google ScholarGoogle ScholarDigital LibraryDigital Library
  31. Danielle Bragg, Kyle Rector, and Richard E. Ladner. 2015. A user-powered American sign language dictionary. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing. 1837–1848.Google ScholarGoogle Scholar
  32. Ben Braithwaite. 2019. Sign language endangerment and linguistic diversity. Language 95, 1 (2019), e161–e187.Google ScholarGoogle ScholarCross RefCross Ref
  33. Ben Braithwaite. 2020. Ideologies of linguistic research on small sign languages in the global south: A Caribbean perspective. Lang. Commun. 74 (2020), 182–194.Google ScholarGoogle ScholarCross RefCross Ref
  34. M. Brand, N. Oliver, and A. Pentland. 1997. Coupled hidden Markov models for complex action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’97). 994–999. DOI:https://doi.org/10.1109/CVPR.1997.609450.Google ScholarGoogle ScholarCross RefCross Ref
  35. Constituição BRASIL. 2009. Decreto nº 6.949, de 25 de agosto de 2009. Promulga a Convenção Internacional sobre os Direitos das Pessoas com Deficiência e seu Protocolo Facultativo, assinados em Nova York, em 30 de março de 2007. Diário Oficial da União163 (2009).Google ScholarGoogle Scholar
  36. Chamber of Deputies BRAZIL. 2015. Law n 13,146, of July 6, 2015. institutes the Brazilian law for inclusion of persons with disabilities (statute of persons with disabilities ê ncia). Official J. Union (2015), 43.Google ScholarGoogle Scholar
  37. Diane Brentari. 1996. Trilled movement: Phonetic realization and formal representation. Lingua 98, 1 (Mar. 1996), 43–71. DOI:https://doi.org/10.1016/0024-3841(95)00032-1.Google ScholarGoogle ScholarCross RefCross Ref
  38. Robert V. Bruce. 2020. Bell: Alexander Graham Bell and the conquest of solitude. Plunkett Lake Press.Google ScholarGoogle Scholar
  39. Jeremy L. Brunson. 2008. Your case will now be heard: Sign language interpreters as problematic accommodations in legal interactions. J. Deaf Stud. Deaf Educ. 13, 1 (2008), 77–91.Google ScholarGoogle ScholarCross RefCross Ref
  40. Cihan Camgoz, Simon Hadfield, Oscar Koller, Hermann Ney, and Richard Bowden. 2018. Neural sign language translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’18). 7784–7793.Google ScholarGoogle ScholarCross RefCross Ref
  41. Necati Cihan Camgoz, Simon Hadfield, Oscar Koller, and Richard Bowden. 2016. Using convolutional 3D neural networks for user-independent continuous gesture recognition. In Proceedings of the International Conference on Pattern Recognition Workshops (ICPRW’16). 49–54.Google ScholarGoogle ScholarCross RefCross Ref
  42. Necati Cihan Camgoz, Simon Hadfield, Oscar Koller, and Richard Bowden. 2017. SubUNets: End-to-end hand shape and continuous sign language recognition. In Proceedings of the IEEE International Conference on Computer Vision (ICCV’17). 22–27.Google ScholarGoogle ScholarCross RefCross Ref
  43. Necati Cihan Camgoz, Oscar Koller, Simon Hadfield, and Richard Bowden. 2020. Sign language transformers: Joint end-to-end sign language recognition and translation. arXiv:2003.13830 [cs] (Mar. 2020).Google ScholarGoogle Scholar
  44. Lee W. Campbell, David A. Becker, Ali Azarbayejani, Aaron F. Bobick, and Alex Pentland. 1996. Invariant features for 3-D gesture recognition. In Proceedings of the 2nd International Conference on Automatic Face and Gesture Recognition. IEEE, 157–162.Google ScholarGoogle ScholarCross RefCross Ref
  45. Xiujuan Chai, Hanjie Wang, and Xilin Chen. 2014. The DEVISIGN Large Vocabulary of Chinese Sign Language Database and Baseline Evaluations. Technical Report. Key Lab of Intelligent Information Processing of Chinese Academy of Sciences.Google ScholarGoogle Scholar
  46. James I. Charlton. 2000. Nothing about Us Without Us: Disability Oppression and Empowerment. University of California Press.Google ScholarGoogle Scholar
  47. Ka Leong Cheng, Zhaoyang Yang, Qifeng Chen, and Yu-Wing Tai. 2020. Fully convolutional networks for continuous sign language recognition. arXiv:2007.12402 [cs] (July 2020).Google ScholarGoogle Scholar
  48. Nicholas Confessore, Michael LaForgia, and Gabriel J. X. Dance. 2018. Facebook’s data sharing and privacy rules: 5 takeaways from our investigation. Retrieved from https://www.nytimes.com/2018/12/18/us/politics/facebook-data-sharing-deals.html.Google ScholarGoogle Scholar
  49. H. Cooper, R. Bowden, and S. CVSSP. 2007. Sign language recognition using boosted volumetric features. In Proceedings of the IAPR Conference on Machine Vision Applications. 359–362.Google ScholarGoogle Scholar
  50. R. Orin Cornett. 1967. Cued speech. American Annals of the Deaf 112, 1 (Jan. 1967), 3–13. https://www.jstor.org/action/doBasicSearch?Query=cued+speech+cornett&filter=jid%3A10.2307%2Fj50013925.Google ScholarGoogle Scholar
  51. Onno Crasborn. 2010. What does “informed consent” mean in the internet age? Publishing sign language corpora as open content. Sign Lang. Stud. 10, 2 (2010), 276–290.Google ScholarGoogle ScholarCross RefCross Ref
  52. Onno Crasborn and Inge Zwitserlood. 2008. The corpus NGT: An online corpus for professionals and laymen. In LREC Workshop on the Representation and Processing of Sign Languages. 44–49.Google ScholarGoogle Scholar
  53. Runpeng Cui, Hu Liu, and Changshui Zhang. 2017. Recurrent convolutional neural networks for continuous sign language recognition by staged optimization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’17). 7361–7369.Google ScholarGoogle ScholarCross RefCross Ref
  54. R. Cui, H. Liu, and C. Zhang. 2019. A deep neural framework for continuous sign language recognition by iterative training. IEEE Trans. Multimedia (2019). DOI:https://doi.org/10.1109/TMM.2018.2889563.Google ScholarGoogle Scholar
  55. Maartje De Meulder. 2015. The legal recognition of sign languages. Sign Lang. Stud. 15, 4 (2015), 498–506.Google ScholarGoogle ScholarCross RefCross Ref
  56. Ronice Müller de Quadros, Kathryn Davidson, Diane Lillo-Martin, and Karen Emmorey. 2020. Code-blending with depicting signs. Ling. Approach. Biling. 10, 2 (2020), 290–308.Google ScholarGoogle ScholarCross RefCross Ref
  57. Dorothy Elizabeth Robling Denning. 1982. Cryptography and Data Security. Vol. 112. Addison-Wesley Reading.Google ScholarGoogle Scholar
  58. Cynthia Dwork. 2008. Differential privacy: A survey of results. In Proceedings of the International Conference on Theory and Applications of Models of Computation. Springer, 1–19.Google ScholarGoogle ScholarCross RefCross Ref
  59. Cynthia Dwork, Adam Smith, Thomas Steinke, and Jonathan Ullman. 2017. Exposed! A Survey of Attacks on Private Data. Ann. Rev. Statist. Applic. 4, 1 (2017), 61–84. DOI:https://doi.org/10.1146/annurev-statistics-060116-054123.Google ScholarGoogle ScholarCross RefCross Ref
  60. Ralph Elliott, Javier Bueno, Richard Kennaway, and John Glauert. 2010. Towards the integration of synthetic SL animation with avatars into corpus annotation tools. In Proceedings of the 4th Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies.Google ScholarGoogle Scholar
  61. Michael Erard. 2017. Why Sign-Language Gloves Don’t Help Deaf People. Retrieved from https://www.theatlantic.com/technology/archive/2017/11/why-sign-language-gloves-dont-help-deaf-people/545441/.Google ScholarGoogle Scholar
  62. European Sign Language Center. 2018. Spreadthesign. Retrieved from https://www.spreadthesign.com.Google ScholarGoogle Scholar
  63. Shelly Fan. 2016. This Smart Vest Lets the Deaf “Hear” with Their Skin. Retrieved from https://singularityhub.com/2016/10/09/this-smart-vest-lets-the-deaf-hear-with-their-skin/.Google ScholarGoogle Scholar
  64. Gaolin Fang, Wen Gao, and Debin Zhao. 2007. Large-vocabulary continuous sign language recognition based on transition-movement models. IEEE Trans. Syst., Man, Cyber. 37, 1 (Jan. 2007), 1–9. DOI:https://doi.org/10.1109/TSMCA.2006.886347.Google ScholarGoogle ScholarDigital LibraryDigital Library
  65. Lindsay Ferrara and Gabrielle Hodge. 2018. Language as description, indication, and depiction. Front. Psychol. 9 (2018), 716.Google ScholarGoogle ScholarCross RefCross Ref
  66. Jami N. Fisher, Julie Hochgesang, and Meredith Tamminga. 2016. Examining variation in the absence of a “main” ASL corpus: The case of the Philadelphia signs project. In Proceedings of the 7th Workshop on the Representation and Processing of Sign Languages: Corpus Mining. 75–80.Google ScholarGoogle Scholar
  67. Jami N. Fisher, Meredith Tamminga, and Julie A. Hochgesang. 2018. The historical and social context of the Philadelphia ASL community. Sign Lang. Stud. 18, 3 (2018), 429–460.Google ScholarGoogle ScholarCross RefCross Ref
  68. Center for Data Innovation. 2019. When Is It Okay to Use Data for AI? Retrieved from https://www.datainnovation.org/2019/10/when-is-it-okay-to-use-data-for-ai/.Google ScholarGoogle Scholar
  69. Jens Forster, Oscar Koller, Christian Oberdörfer, Yannick Gweth, and Hermann Ney. 2013. Improving continuous sign language recognition: Speech recognition techniques and system design. In Proceedings of the Workshop on Speech and Language Processing for Assistive Technologies. 41–46.Google ScholarGoogle Scholar
  70. Jens Forster, Christoph Schmidt, Oscar Koller, Martin Bellgardt, and Hermann Ney. 2014. Extensions of the sign language recognition and translation corpus RWTH-PHOENIX-weather. In Proceedings of the International Conference on Language Resources and Evaluation (LREC’14). 1911–1916.Google ScholarGoogle Scholar
  71. Nelly Furman, David Goldberg, and Natalia Lusin. 2010. Enrollments in languages other than English in United States institutions of higher education, Fall 2009. In Modern Language Association. ERIC.Google ScholarGoogle Scholar
  72. Carrie Lou Garberoglio, Stephanie Cawthon, and Mark Bond. 2016. Deaf People and Employment in the United States. Washington, DC: US Department of Education, Office of Special Education Programs, National Deaf Center on Postsecondary Outcomes. https://scholar.google.com/scholar?hl=en&as_sdt=0%2C47&q=deaf+people+and+employment+in+the+united+states&btnG.Google ScholarGoogle Scholar
  73. Ann E. Geers, Christine M. Mitchell, Andrea Warner-Czyz, Nae-Yuh Wang, Laurie S. Eisenberg, CDaCI Investigative Team et al. 2017. Early sign language exposure and cochlear implantation benefits. Pediatrics 140, 1 (2017).Google ScholarGoogle Scholar
  74. Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. 2006. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the International Conference on Machine Learning (ICML’06). ACM, New York, NY, 369–376. DOI:https://doi.org/10.1145/1143844.1143891.Google ScholarGoogle ScholarDigital LibraryDigital Library
  75. K. Grobel and M. Assan. 1997. Isolated sign language recognition using hidden Markov models. In Proceedingsof the IEEE International Conference on Systems, Man, and Cybernetics (SMC’97), Vol. 1. 162–167. DOI:https://doi.org/10.1109/ICSMC.1997.625742.Google ScholarGoogle Scholar
  76. Nicola Grove and Margaret Walker. 1990. The Makaton vocabulary: Using manual signs and graphic symbols to develop interpersonal communication. Augment. Altern. Commun. 6, 1 (1990), 15–28.Google ScholarGoogle ScholarCross RefCross Ref
  77. Yannick Gweth, Christian Plahl, and Hermann Ney. 2012. Enhanced continuous sign language recognition using PCA and neural network features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW’12), 55–60.Google ScholarGoogle ScholarCross RefCross Ref
  78. Wyatte C. Hall. 2017. What you don’t know can hurt you: The risk of language deprivation by impairing sign language development in deaf children. Matern. Child Health J. 21, 5 (2017), 961–965.Google ScholarGoogle ScholarCross RefCross Ref
  79. Wyatte C. Hall, Leonard L. Levin, and Melissa L. Anderson. 2017. Language deprivation syndrome: A possible neurodevelopmental disorder with sociocultural origins. Soc. PsychiatryPsychiatric Epidem. 52, 6 (June 2017), 761–776. DOI:https://doi.org/10.1007/s00127-017-1351-7.Google ScholarGoogle ScholarCross RefCross Ref
  80. Thomas Hanke. 2004. HamNoSys-representing sign language data in language resources and language processing contexts. In Proceedings of the International Conference on Language Resources and Evaluation, Vol. 4. 1–6.Google ScholarGoogle Scholar
  81. Raychelle Harris, Heidi M. Holmes, and Donna M. Mertens. 2009. Research ethics in sign language communities. Sign Lang. Stud. 9, 2 (2009), 104–131.Google ScholarGoogle ScholarCross RefCross Ref
  82. Peter C. Hauser. 2000. American sign language and cued English. Biling. Ident. Deaf Commun. 6 (2000), 43.Google ScholarGoogle Scholar
  83. Ellen S. Hibbard and Deb I. Fels. 2011. The Vlogging phenomena: A deaf perspective. In Proceedings of the 13th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS’11). Association for Computing Machinery, New York, NY, 59–66. DOI:https://doi.org/10.1145/2049536.2049549.Google ScholarGoogle Scholar
  84. Joseph Hill. 2020. Do deaf communities actually want sign language gloves?Nat. Electron. (2020). DOI:https://doi.org/10.1038/s41928-020-0451-7.Google ScholarGoogle Scholar
  85. Joseph Hill, Carolyn McCaskill, Ceil Lucas, and Robert Bayley. 2009. Signing outside the box: The size of signing space in Black ASL. New Ways Anal. Variat. 38 (2009).Google ScholarGoogle Scholar
  86. Joseph C. Hill. 2017. The importance of the sociohistorical context in sociolinguistics: The case of Black ASL. Sign Lang. Stud. 18, 1 (2017), 41–57.Google ScholarGoogle ScholarCross RefCross Ref
  87. Jonathan Hook, Sanne Verbaan, Abigail Durrant, Patrick Olivier, and Peter Wright. 2014. A study of the challenges related to DIY assistive technology in the context of children with disabilities. In Proceedings of the Conference on Designing Interactive Systems. 597–606.Google ScholarGoogle ScholarDigital LibraryDigital Library
  88. Jie Huang, Wengang Zhou, Qilin Zhang, Houqiang Li, and Weiping Li. 2018. Video-based sign language recognition without temporal segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, 2257–2264.Google ScholarGoogle ScholarCross RefCross Ref
  89. Matt Huenerfauth, Elaine Gale, Brian Penly, Sree Pillutla, Mackenzie Willard, and Dhananjai Hariharan. 2017. Evaluation of language feedback methods for student videos of American sign language. ACM Trans. Access. Comput. 10, 1 (Apr. 2017). DOI:https://doi.org/10.1145/3046788.Google ScholarGoogle ScholarDigital LibraryDigital Library
  90. Tom Humphries. 1975. Audism: The making of a word. Unpublished essay. https://scholar.google.com/scholar?hl=en&as_sdt=0%2C47&q=humphries+audism+the+making+of+a+word&btnG=.Google ScholarGoogle Scholar
  91. Tom Humphries, Raja Kushalnagar, Gaurav Mathur, Donna Jo Napoli, Carol Padden, Christian Rathmann, and Scott Smith. 2013. The right to language. J. Law, Medic. Ethics 41, 4 (2013), 872–884.Google ScholarGoogle ScholarCross RefCross Ref
  92. KinTrans Inc. 2013. Project KinTrans Hands Can Talk. Retrieved from https://www.kintrans.com/.Google ScholarGoogle Scholar
  93. MotionSavvy Inc.2013. UNI. Retrieved from http://motionsavvy.com.Google ScholarGoogle Scholar
  94. SignAll Technologies Inc. 2016. SignAll 1.0. Retrieved from https://www.signall.us/.Google ScholarGoogle Scholar
  95. Meredith Minkler and Nina Wallerstein. 2011. Community-Based Participatory Research for Health: From Process to Outcomes. John Wiley & Sons.Google ScholarGoogle Scholar
  96. Robert E. Johnson. 2006. Cultural constructs that impede discussions about variability in speech-based educational models for deaf children with cochlear implants. Perspectiva 24, 3 (2006), 29–80.Google ScholarGoogle Scholar
  97. Trevor Johnston. 2006. W(h)ither the Deaf community? Population, genetics, and the future of Australian sign language. Sign Lang. Stud. 6, 2 (2006), 137–173.Google ScholarGoogle ScholarCross RefCross Ref
  98. Hamid Reza Vaezi Joze and Oscar Koller. 2018. MS-ASL: A large-scale data set and benchmark for understanding American sign language. arXiv:1812.01053 [cs] (Dec. 2018).Google ScholarGoogle Scholar
  99. Hernisa Kacorri and Matt Huenerfauth. 2016. Continuous profile models in ASL syntactic facial expression synthesis. In Proceedings of the 54th Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2084–2093. DOI:https://doi.org/10.18653/v1/P16-1196.Google ScholarGoogle ScholarCross RefCross Ref
  100. Sushant Kafle, Abraham Glasser, Sedeeq Al-khazraji, Larwan Berke, Matthew Seita, and Matt Huenerfauth. 2020. Artificial intelligence fairness in the context of accessibility research on intelligent systems for people who are deaf or hard of hearing. SIGACCESS Access. Comput.125 (Mar. 2020). DOI:https://doi.org/10.1145/3386296.3386300.Google ScholarGoogle Scholar
  101. Sushant Kafle and Matt Huenerfauth. 2019. Predicting the understandability of imperfect English captions for people who are deaf or hard of hearing. ACM Trans. Access. Comput. 12, 2 (June 2019). DOI:https://doi.org/10.1145/3325862.Google ScholarGoogle ScholarDigital LibraryDigital Library
  102. Avi C. Kak. 2002. Purdue RVL-SLLL ASL database for automatic recognition of American sign language. In Proceedings of the IEEE International Conference on Multimodal Interfaces (ICMI’02), 167–172. DOI:https://doi.org/10.1109/ICMI.2002.1166987.Google ScholarGoogle ScholarDigital LibraryDigital Library
  103. Lori M. Kaufman. 2009. Data security in the world of cloud computing. IEEE Secur. Priv. 7, 4 (2009), 61–64.Google ScholarGoogle ScholarDigital LibraryDigital Library
  104. Michael Kipp. 2017. Anvil. Retrieved from https://www.anvil-software.org/.Google ScholarGoogle Scholar
  105. Sang-Ki Ko, Chang Jo Kim, Hyedong Jung, and Choongsang Cho. 2019. Neural sign language translation based on human keypoint estimation. Appl. Sci. 9, 13 (Jan. 2019), 2683. DOI:https://doi.org/10.3390/app9132683.Google ScholarGoogle ScholarCross RefCross Ref
  106. Oscar Koller. 2020. Quantitative survey of the state of the art in sign language recognition. arXiv:2008.09918 [cs] (Aug. 2020).Google ScholarGoogle Scholar
  107. Oscar Koller, Necati Cihan Camgoz, Hermann Ney, and Richard Bowden. 2019. Weakly supervised learning with multi-stream CNN-LSTM-HMMs to discover sequential parallelism in sign language videos. IEEE Trans. Pattern Anal. Mach. Intell. accepted for publication (2019), 15.Google ScholarGoogle Scholar
  108. Oscar Koller, Jens Forster, and Hermann Ney. 2015. Continuous sign language recognition: Towards large vocabulary statistical recognition systems handling multiple signers. Comput. Vis. Image Underst. 141 (Dec. 2015), 108–125. DOI:https://doi.org/10.1016/j.cviu.2015.09.013.Google ScholarGoogle Scholar
  109. Oscar Koller, Hermann Ney, and Richard Bowden. 2013. May the force be with you: Force-aligned signwriting for automatic subunit annotation of corpora. In Proceedings of the International Conference on Automatic Face and Gesture Recognition (FG’13), 1–6.Google ScholarGoogle ScholarCross RefCross Ref
  110. Oscar Koller, Hermann Ney, and Richard Bowden. 2014. Read my lips: Continuous signer independent weakly supervised viseme recognition. In Proceedings of the European Conference on Computer Vision (ECCV’14). 281–296.Google ScholarGoogle ScholarCross RefCross Ref
  111. Oscar Koller, Hermann Ney, and Richard Bowden. 2015. Deep learning of mouth shapes for sign language. In Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCVW’15). 477–483.Google ScholarGoogle ScholarDigital LibraryDigital Library
  112. Oscar Koller, Hermann Ney, and Richard Bowden. 2016. Automatic alignment of HamNoSys subunits for continuous sign language recognition. In Proceedings of the LREC Workshop on the Representation and Processing of Sign Languages. 121–128.Google ScholarGoogle Scholar
  113. Oscar Koller, Hermann Ney, and Richard Bowden. 2016. Deep hand: How to train a CNN on 1 million hand images when your data is continuous and weakly labelled. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’16), 3793–3802.Google ScholarGoogle ScholarCross RefCross Ref
  114. Oscar Koller, Sepehr Zargaran, and Hermann Ney. 2017. Re-sign: Re-aligned end-to-end sequence modelling with deep recurrent CNN-HMMs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’17), 4297–4305.Google ScholarGoogle ScholarCross RefCross Ref
  115. Oscar Koller, Sepehr Zargaran, Hermann Ney, and Richard Bowden. 2016. Deep sign: Hybrid CNN-HMM for continuous sign language recognition. In Proceedings of the British Machine Vision Conference (BMVC’16), 1–12. DOI:https://doi.org/10.5244/C.30.136.Google ScholarGoogle ScholarCross RefCross Ref
  116. Oscar Koller, Sepehr Zargaran, Hermann Ney, and Richard Bowden. 2018. Deep sign: Enabling robust statistical continuous sign language recognition via hybrid CNN-HMMs. Int. J. Comput. Vis. 126, 12 (Dec. 2018), 1311–1325. DOI:https://doi.org/10.1007/s11263-018-1121-3.Google ScholarGoogle ScholarDigital LibraryDigital Library
  117. Steven Komarov and Krzysztof Z. Gajos. 2014. Organic peer assessment. In Proceedings of the CHI Learning Innovation at Scale Workshop.Google ScholarGoogle Scholar
  118. Reiner Konrad, Thomas Hanke, Gabriele Langer, Dolly Blanck, Julian Bleicken, Ilona Hofmann, Olga Jeziorski, Lutz König, Susanne König, Rie Nishio, Anja Regen, Uta Salden, Sven Wagner, and Satu Worseck. 2019. MY DGS – Annotated. Public Corpus of German Sign Language, 2nd Release. DOI:https://doi.org/10.25592/dgs-corpus-2.0.Google ScholarGoogle Scholar
  119. Robert Kraut, Michael Patterson, Vicki Lundmark, Sara Kiesler, Tridas Mukophadhyay, and William Scherlis. 1998. Internet paradox: A social technology that reduces social involvement and psychological well-being? Amer. Psychol. 53, 9 (1998), 1017.
  120. Jette H. Kristoffersen, Thomas Troelsgård, Anne Skov Hardell, Bo Hardell, Janne Boye Niemelä, Jørgen Sandholt, and Maja Toft. 2016. Ordbog over Dansk Tegnsprog [Dictionary of Danish Sign Language] 2008–2016. Retrieved from http://www.tegnsprog.dk/.
  121. Alexa Kuenburg, Paul Fellinger, and Johannes Fellinger. 2015. Health care access among deaf people. J. Deaf Stud. Deaf Educ. 21, 1 (Sep. 2015), 1–10. DOI: https://doi.org/10.1093/deafed/env042.
  122. Raja Kushalnagar. 2018. Legibility of videos with ASL signers. In Proceedings of the ICT Accessibility Testing Symposium: Mobile Testing, 508 Revision, and Beyond (ICT'18).
  123. William Labov. 1972. Some principles of linguistic methodology. Lang. Soc. 1, 1 (Apr. 1972), 97–120.
  124. Emil Ladner. 1931. Silent talkies. Amer. Ann. Deaf 76, 3 (May 1931), 321–325.
  125. Harlan Lane. 1989. When the Mind Hears: A History of the Deaf. Vintage.
  126. Harlan L. Lane, Robert Hoffmeister, and Benjamin J. Bahan. 1996. A Journey into the DEAF-WORLD. DawnSign Press.
  127. Harry G. Lang et al. 2000. A Phone of Our Own: The Deaf Insurrection against Ma Bell. Gallaudet University Press.
  128. Hai-Son Le, Ngoc-Quan Pham, and Duc-Dung Nguyen. 2015. Neural networks with hidden Markov models in skeleton-based gesture recognition. In Knowledge and Systems Engineering, Viet-Ha Nguyen, Anh-Cuong Le, and Van-Nam Huynh (Eds.). Advances in Intelligent Systems and Computing, Vol. 326. Springer International Publishing, 299–311. DOI: https://doi.org/10.1007/978-3-319-11680-8_24.
  129. Boris Lenseigne and Patrice Dalle. 2005. Using signing space as a representation for sign language processing. In Proceedings of the International Gesture Workshop. Springer, 25–36.
  130. Clayton Lewis. 2020. Implications of developments in machine learning for people with cognitive disabilities. SIGACCESS Access. Comput. 124 (Mar. 2020). DOI: https://doi.org/10.1145/3386308.3386309.
  131. Talila A. Lewis. 2014. Police Brutality and Deaf People. Retrieved from https://www.aclu.org/blog/national-security/police-brutality-and-deaf-people.
  132. Dongxu Li, Cristian Rodriguez Opazo, Xin Yu, and Hongdong Li. 2020. Word-level deep sign language recognition from video: A new large-scale dataset and methods comparison. arXiv:1910.11006 [cs] (Jan. 2020).
  133. J. F. Lichtenauer, E. A. Hendriks, and M. J. T. Reinders. 2008. Sign language recognition by combining statistical DTW and independent classification. IEEE Trans. Pattern Anal. Mach. Intell. 30, 11 (Nov. 2008), 2040–2046. DOI: https://doi.org/10.1109/TPAMI.2008.123.
  134. Scott K. Liddell and Robert E. Johnson. 1989. American sign language: The phonological base. Sign Lang. Stud. 64 (1989), 195–277.
  135. Jessica Litman. 1990. The public domain. Emory Law J. 39 (1990), 965.
  136. Paul A. Lombardo. 2008. Three Generations, No Imbeciles: Eugenics, the Supreme Court, and Buck v. Bell. JHU Press.
  137. David Loshin. 2002. Knowledge Integrity: Data Ownership. Retrieved from http://www.datawarehouse.com/article/?articleid=3052.
  138. Ceil Lucas. 2001. The Sociolinguistics of Sign Languages. Cambridge University Press.
  139. Ceil Lucas. 2014. The Sociolinguistics of the Deaf Community. Elsevier.
  140. Emily Lund. 2016. Vocabulary knowledge of children with cochlear implants: A meta-analysis. J. Deaf Stud. Deaf Educ. 21, 2 (2016), 107–121.
  141. Yongsen Ma, Gang Zhou, Shuangquan Wang, Hongyang Zhao, and Woosub Jung. 2018. SignFi: Sign language recognition using WiFi. Proc. ACM Interact. Mob. Wear. Ubiq. Technol. 2, 1 (Mar. 2018), 23:1–23:21. DOI: https://doi.org/10.1145/3191755.
  142. Maartje De Meulder. 2020. Acadeafic. Retrieved from https://acadeafic.org/.
  143. Kelly Mack, Danielle Bragg, Meredith Ringel Morris, Maarten W. Bos, Isabelle Albi, and Andrés Monroy-Hernández. 2020. Social app accessibility for deaf signers. Proc. ACM Hum.-Comput. Interact. 4, CSCW2 (2020), 1–31.
  144. Rachel I. Mayberry and Robert Kluender. 2018. Rethinking the critical period for language: New insights into an old question from American sign language. Biling.: Lang. Cog. 21, 5 (2018), 886–905.
  145. Carolyn McCaskill, Ceil Lucas, Robert Bayley, and Joseph Hill. 2011. The Hidden Treasure of Black ASL: Its History and Structure. Gallaudet University Press, Washington, DC.
  146. David McKee, Rachel McKee, Sara Pivac Alexander, and Lynette Pivac. 2015. The Online Dictionary of New Zealand Sign Language. Retrieved from http://nzsl.vuw.ac.nz/.
  147. Michael McKee, Deirdre Schlehofer, and Denise Thew. 2013. Ethical issues in conducting research with deaf populations. Amer. J. Pub. Health 103, 12 (2013), 2174–2178.
  148. Michael McKee, Denise Thew, Matthew Starr, Poorna Kushalnagar, John T. Reid, Patrick Graybill, Julia Velasquez, and Thomas Pearson. 2012. Engaging the deaf American sign language community: Lessons from a community-based participatory research center. Prog. Commun. Health Partner.: Res., Educ. Act. 6, 3 (2012), 321.
  149. Irit Meir, Wendy Sandler, Carol Padden, Mark Aronoff et al. 2010. Emerging sign languages. Oxford Handb. Deaf Stud., Lang. Educ. 2 (2010), 267–280.
  150. Ross E. Mitchell, Travas A. Young, Bellamie Bachleda, and Michael A. Karchmer. 2006. How many people use ASL in the United States? Why estimates need updating. Sign Lang. Stud. 6, 3 (2006), 306–335. DOI: https://doi.org/10.1353/sls.2006.0019.
  151. Rhonda J. Moore, Ross Smith, and Qi Liu. 2020. Using computational ethnography to enhance the curation of real-world data (RWD) for chronic pain and invisible disability use cases. SIGACCESS Access. Comput. 127 (July 2020). DOI: https://doi.org/10.1145/3412836.3412840.
  152. Donald F. Moores. 2010. The history of language and communication issues in deaf education. Oxford Handb. Deaf Stud., Lang. Educ. 2 (2010), 17–30.
  153. Meredith Ringel Morris. 2020. AI and accessibility. Commun. ACM 63, 6 (May 2020), 35–37. DOI: https://doi.org/10.1145/3356727.
  154. Michael J. Muller and Sarah Kuhn. 1993. Participatory design. Commun. ACM 36, 6 (1993), 24–28.
  155. Milad Nasr, Reza Shokri, and Amir Houmansadr. 2019. Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning. In Proceedings of the IEEE Symposium on Security and Privacy (SP'19), 739–753. DOI: https://doi.org/10.1109/SP.2019.00065.
  156. Brenda Nicodemus and Karen Emmorey. 2015. Directionality in ASL-English interpreting: Accuracy and articulation quality in L1 and L2. Interpreting 17, 2 (2015), 145–166.
  157. Kim E. Nielsen. 2012. A Disability History of the United States. Vol. 2. Beacon Press.
  158. U.S. Department of Justice, Civil Rights Division. 2014. Effective Communication. Retrieved from https://www.ada.gov/effective-comm.htm.
  159. Eng-Jon Ong, Oscar Koller, Nicolas Pugeault, and Richard Bowden. 2014. Sign spotting using hierarchical sequential patterns with temporal intervals. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR'14), 1931–1938.
  160. Sylvie C. W. Ong and Surendra Ranganath. 2005. Automatic sign language analysis: A survey and the future beyond lexical meaning. IEEE Trans. Pattern Anal. Mach. Intell. 27, 6 (2005), 873–891.
  161. Eleni Orfanidou, Bencie Woll, and Gary Morgan. 2014. Research Methods in Sign Language Studies: A Practical Guide. John Wiley & Sons.
  162. Carol Padden and Jacqueline Humphries. 2020. Who goes first? Deaf people and CRISPR germline editing. Perspect. Biol. Med. 63, 1 (2020), 54–65.
  163. European Parliament and Council of the European Union. 2016. Regulation on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (Data Protection Directive). OJ L 119, 1–88.
  164. David M. Perlmutter. 1992. Sonority and syllable structure in American sign language. Linguistic Inquiry 23, 3 (Summer 1992), 407–442.
  165. Roland Pfau, Markus Steinbach, and Bencie Woll. 2012. Sign Language. De Gruyter Mouton, Berlin, Boston. DOI: https://doi.org/10.1515/9783110261325.
  166. Rob Picheta. 2020. High-tech glove translates sign language into speech in real time. Retrieved from https://www.cnn.com/2020/06/30/health/sign-language-glove-ucla-scn-scli-intl/index.html.
  167. Chen Pichler, Julie Hochgesang, Doreen Simons, and Diane Lillo-Martin. [n.d.]. Community input on re-consenting for data sharing. In Proceedings of the Language Resources and Evaluation Conference.
  168. Lionel Pigou, Sander Dieleman, Pieter-Jan Kindermans, and Benjamin Schrauwen. 2014. Sign language recognition using convolutional neural networks. In Proceedings of the European Conference on Computer Vision Workshops (ECCVW'14), Lourdes Agapito, Michael M. Bronstein, and Carsten Rother (Eds.), Vol. I. Springer International Publishing, 572–578.
  169. V. Pitsikalis, S. Theodorakis, and P. Maragos. 2010. Data-driven sub-units and modeling structure for continuous sign language recognition with multiple cues. In Proceedings of the LREC Workshop on the Representation and Processing of Sign Languages, 196–203.
  170. V. Pitsikalis, S. Theodorakis, C. Vogler, and P. Maragos. 2011. Advances in phonetics-based sub-unit modeling for transcription alignment and sign language recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW'11), 1–6. DOI: https://doi.org/10.1109/CVPRW.2011.5981681.
  171. Soraia Silva Prietch, Polianna dos Santos Paim, Ivan Olmos-Pineda, Josefina Guerrero García, and Juan Manuel Gonzalez Calleros. 2019. The human and the context components in the design of automatic sign language recognition systems. In Proceedings of the Iberoamerican Workshop on Human-Computer Interaction. Springer, 369–380.
  172. Soraia Silva Prietch, Ivan Olmos Pineda, Polianna dos S. Paim, J. M. G. Calleros, J. G. García, and R. Resmin. 2019. Discussion on image processing for sign language recognition: An overview of the problem complexity. United Academic Journals (2019), 112–127.
  173. Renée Punch. 2016. Employment and adults who are deaf or hard of hearing: Current status and experiences of barriers, accommodations, and stress in the workplace. Amer. Ann. Deaf 161, 3 (2016), 384–397.
  174. Ronice Müller Quadros, Deonísio Schmitt, Juliana Lohn, and Tarcísio de Arantes Leite. [n.d.]. Corpus de Libras. Retrieved from http://corpuslibras.ufsc.br/.
  175. Lawrence R. Rabiner. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proc. IEEE 77, 2 (1989), 257–286.
  176. Qasim Mahmood Rajpoot and Christian Damsgaard Jensen. 2015. Video surveillance: Privacy issues and legal compliance. In Promoting Social Change and Democracy Through Information Technology. IGI Global, 69–92.
  177. Octavian Robinson and Jonathan Henner. 2018. Authentic voices, authentic encounters: Cripping the university through American sign language. Disab. Stud. Quart. 38, 4 (2018).
  178. Richard A. Rogers. 2006. From cultural exchange to transculturation: A review and reconceptualization of cultural appropriation. Commun. Theor. 16, 4 (2006), 474–503.
  179. David Rybach. 2006. Appearance-based Features for Automatic Continuous Sign Language Recognition. Ph.D. Dissertation. Human Language Technology and Pattern Recognition Group, RWTH Aachen University, Aachen, Germany.
  180. Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, and Klaus-Robert Müller. 2019. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Vol. 11700. Springer Nature.
  181. Adam Schembri, Kearsy Cormier, and Jordan Fenlon. 2018. Indicating verbs as typologically unique constructions: Reconsidering verb "agreement" in sign languages. Glossa: J. Gen. Ling. 3, 1 (2018).
  182. Adam Schwartz. 2012. Chicago's video surveillance cameras: A pervasive and poorly regulated threat to our privacy. Nw. J. Tech. & Intell. Prop. 11 (2012), ix.
  183. Ann Senghas, Sotaro Kita, and Asli Özyürek. 2004. Children creating core properties of language: Evidence from an emerging sign language in Nicaragua. Science 305, 5691 (2004), 1779–1782.
  184. Dylan A. Simon, Andrew S. Gordon, Lisa Steiger, and Rick O. Gilmore. 2015. Databrary: Enabling sharing and reuse of research video. In Proceedings of the 15th ACM/IEEE-CS Joint Conference on Digital Libraries, 279–280.
  185. T. Simonite. 2019. The best algorithms struggle to recognize black faces equally. WIRED.
  186. Kristin Snoddon and Maartje De Meulder. 2020. Introduction: Ideologies in sign language vitality and revitalization. Lang. Commun. 74 (2020), 154–163.
  187. Anthony Spadafora. 2019. Microsoft's new "Data Dignity" team aims to give users more control over their data. Retrieved from https://www.techradar.com/news/microsofts-new-data-dignity-team-aims-to-give-users-more-control-over-their-data.
  188. Rose Stamp, Adam Schembri, Jordan Fenlon, Ramas Rentelis, Bencie Woll, and Kearsy Cormier. 2014. Lexical variation and change in British sign language. PLoS One 9, 4 (2014), e94053.
  189. T. Starner and A. Pentland. 1995. Real-time American sign language recognition from video using hidden Markov models. In Proceedings of the International Symposium on Computer Vision, 265–270.
  190. Thad Starner, Joshua Weaver, and Alex Pentland. 1998. Real-time American sign language recognition using desk and wearable computer based video. IEEE Trans. Pattern Anal. Mach. Intell. 20, 12 (Dec. 1998), 1371–1375.
  191. William C. Stokoe. 1960. Sign language structure: An outline of the visual communication systems of the American deaf. Stud. Ling.: Occas. Papers 8 (1960).
  192. William C. Stokoe, Dorothy C. Casterline, and Carl G. Croneberg. 1965 (reissued 1976). A Dictionary of American Sign Language on Linguistic Principles. Linstok Press.
  193. Ted Supalla and Patricia Clark. 2014. Sign Language Archaeology: Understanding the Historical Roots of American Sign Language. Gallaudet University Press.
  194. Valerie Sutton. 1995. Lessons in SignWriting. SignWriting.
  195. Nazif Can Tamer and Murat Saraçlar. 2020. Keyword search for sign language. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'20), 8184–8188. DOI: https://doi.org/10.1109/ICASSP40776.2020.9054678.
  196. Joshua G. Tanenbaum, Amanda M. Williams, Audrey Desjardins, and Karen Tanenbaum. 2013. Democratizing technology: Pleasure, utility and expressiveness in DIY and maker practice. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2603–2612.
  197. Sarah F. Taub. 2001. Language from the Body: Iconicity and Metaphor in American Sign Language. Cambridge University Press.
  198. Liz Jackson. 2019. "Disability Dongle: A well intended, elegant, yet useless solution to a problem we never knew we had. Disability Dongles are most often conceived of and created in design schools and at IDEO." Retrieved from https://twitter.com/elizejackson/status/1110629818234818570?s=20.
  199. Jelle Ten Kate, Gerwin Smit, and Paul Breedveld. 2017. 3D-printed upper limb prostheses: A review. Disab. Rehab.: Assist. Technol. 12, 3 (2017), 300–314.
  200. Bernard T. Tervoort. 1953. Structurele Analyze van Visueel Taalgebruik binnen een Groep Dove Kinderen [Structural Analysis of Visual Language Use Within a Group of Deaf Children]. Deel 2. Materiaal, Registers, Enz. North-Holland Publishing Company.
  201. The Language Archive, Max Planck Institute for Psycholinguistics. 2018. ELAN. Retrieved from https://tla.mpi.nl/tools/tla-tools/elan/elan-description/.
  202. The Office of Research Integrity (ORI), U.S. Department of Health & Human Services. 2020. Responsible Conduct in Data Management: Data Ownership. Retrieved from https://ori.hhs.gov/education/products/n_illinois_u/datamanagement/dotopic.html.
  203. Emily A. Tobey, Donna Thal, John K. Niparko, Laurie S. Eisenberg, Alexandra L. Quittner, Nae-Yuh Wang, and CDaCI Investigative Team. 2013. Influence of implantation age on school-age language performance in pediatric cochlear implant users. Int. J. Audiol. 52, 4 (2013), 219–229.
  204. Andrea Toliver-Smith and Betholyn Gentry. 2017. Investigating Black ASL: A systematic review. Amer. Ann. Deaf 161, 5 (2017), 560–570.
  205. Shari Trewin. 2018. AI fairness for people with disabilities: Point of view. arXiv:1811.10670 (2018).
  206. Shari Trewin, Sara Basson, Michael Muller, Stacy Branham, Jutta Treviranus, Daniel Gruen, Daniel Hebert, Natalia Lyckowski, and Erich Manser. 2019. Considerations for AI fairness for people with disabilities. AI Matters 5, 3 (Dec. 2019), 40–63. DOI: https://doi.org/10.1145/3362077.3362086.
  207. Shari Trewin, Meredith Ringel Morris, Stacy Branham, Walter S. Lasecki, Shiri Azenkot, Nicole Bleuel, Phill Jenkins, and Jeffrey P. Bigham. 2020. Workshop on AI fairness for people with disabilities. SIGACCESS Access. Comput. 125 (Mar. 2020). DOI: https://doi.org/10.1145/3386296.3386297.
  208. Mieke Van Herreweghe, Myriam Vermeerbergen, Eline Demey, Hannes De Durpel, Hilde Nyffels, and Sam Verstraete. 2015. Het Corpus VGT. Een Digitaal Open Access Corpus van Videos en Annotaties van Vlaamse Gebarentaal, Ontwikkeld aan de Universiteit Gent i.s.m. KU Leuven [The VGT Corpus: A Digital Open-Access Corpus of Videos and Annotations of Flemish Sign Language, Developed at Ghent University in Collaboration with KU Leuven]. Retrieved from www.corpusvgt.be.
  209. Jaipreet Virdi. 2020. Hearing Happiness: Deafness Cures in History. University of Chicago Press.
  210. C. Vogler and D. Metaxas. 1997. Adapting hidden Markov models for ASL recognition by using three-dimensional computer vision methods. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC'97), Orlando, 156–161.
  211. Christian Vogler and Dimitris Metaxas. 1999. Parallel hidden Markov models for American sign language recognition. In Proceedings of the IEEE International Conference on Computer Vision (ICCV'99), Vol. 1, 116–122.
  212. Christian Vogler and Dimitris Metaxas. 1999. Toward scalability in ASL recognition: Breaking down signs into phonemes. In Gesture-based Communication in Human-Computer Interaction (Lecture Notes in Computer Science). Springer, Berlin, 211–224. DOI: https://doi.org/10.1007/3-540-46616-9_19.
  213. Christian Vogler and Dimitris Metaxas. 2001. A framework for recognizing the simultaneous aspects of American sign language. Comput. Vis. Image Underst. 81, 3 (2001), 358–384.
  214. Christian Vogler and Dimitris Metaxas. 2004. Handshapes and movements: Multiple-channel American sign language recognition. In Gesture-based Communication in Human-Computer Interaction (Lecture Notes in Computer Science), Antonio Camurri and Gualtiero Volpe (Eds.), Vol. 2915. Springer, Berlin, 247–258. DOI: https://doi.org/10.1007/978-3-540-24598-8_23.
  215. Ulrich von Agris, Moritz Knorr, and K.-F. Kraiss. 2008. The significance of facial features for automatic sign language recognition. In Proceedings of the International Conference on Automatic Face and Gesture Recognition (FG'08), 1–6.
  216. Matthew J. Vowels, Necati Cihan Camgoz, and Richard Bowden. 2020. NestedVAE: Isolating common factors via weak supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9202–9212.
  217. World Wide Web Consortium (W3C). 2018. Web Content Accessibility Guidelines (WCAG) 2.1. Retrieved from https://www.w3.org/TR/2018/REC-WCAG21-20180605/.
  218. Elyse Wanshel. 2016. Students Invented Gloves That Can Translate Sign Language into Speech and Text. Retrieved from https://www.huffpost.com/entry/navid-azodi-and-thomas-pryor-signaloud-gloves-translate-american-sign-language-into-speech-text_n_571fb38ae4b0f309baeee06d.
  219. R. Yang, S. Sarkar, and B. Loeding. 2010. Handling movement epenthesis and hand segmentation ambiguities in continuous sign language recognition using nested dynamic programming. IEEE Trans. Pattern Anal. Mach. Intell. 32, 3 (Mar. 2010), 462–477. DOI: https://doi.org/10.1109/TPAMI.2009.26.
  220. Zhaoyang Yang, Zhenmei Shi, Xiaoyong Shen, and Yu-Wing Tai. 2019. SF-Net: Structured feature network for continuous sign language recognition. arXiv:1908.01341 [cs] (Aug. 2019).
  221. YouTube. [n.d.]. YouTube.com search results for the query "ASL homework". Retrieved from https://www.youtube.com/results?search_query=ASL+homework.
  222. YouTube, LLC. 2005. YouTube. Retrieved from https://www.youtube.com/.
  223. Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. 2017. Understanding deep learning requires rethinking generalization. In Proceedings of the International Conference on Learning Representations (ICLR'17).
  224. Hao Zhou, Wengang Zhou, Yun Zhou, and Houqiang Li. 2020. Spatial-temporal multi-cue network for continuous sign language recognition. arXiv:2002.03187 [cs] (Feb. 2020).
  225. Zhihao Zhou, Kyle Chen, Xiaoshi Li, Songlin Zhang, Yufen Wu, Yihao Zhou, Keyu Meng, Chenchen Sun, Qiang He, Wenjing Fan et al. 2020. Sign-to-speech translation using machine-learning assisted stretchable sensor arrays. Nat. Electron. (2020). DOI: https://doi.org/10.1038/s41928-020-0428-6.

Published in ACM Transactions on Accessible Computing, Volume 14, Issue 2 (June 2021), 174 pages. ISSN: 1936-7228. EISSN: 1936-7236. DOI: 10.1145/3477222.

Copyright © 2021 ACM. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher: Association for Computing Machinery, New York, NY, United States.

Publication history: Received August 1, 2020; accepted November 1, 2020; published July 21, 2021.
