Abstract
Sign language datasets are essential to developing many sign language technologies. In particular, datasets are required for training artificial intelligence (AI) and machine learning (ML) systems. Though the idea of using AI/ML for sign languages is not new, technology has now advanced to a point where developing such sign language technologies is becoming increasingly tractable. This critical juncture provides an opportunity to be thoughtful about an array of Fairness, Accountability, Transparency, and Ethics (FATE) considerations. Sign language datasets typically contain recordings of people signing, which is highly personal. The rights and responsibilities of the parties involved in data collection and storage are also complex, encompassing individual data contributors, data collectors or owners, and data users who may interact through a variety of exchange and access mechanisms. Deaf community members (and signers, more generally) are also central stakeholders in any end applications of sign language data. The centrality of sign language to deaf cultural identity, coupled with a history of oppression, makes usage by technologists particularly sensitive. This piece presents many of the issues that characterize working with sign language AI datasets, based on the authors' experiences living, working, and studying in this space.
- 2006. Convention on the Rights of Persons with Disabilities (CRPD) Enable. Retrieved from https://www.un.org/development/desa/disabilities/convention-on-the-rights-of-persons-with-disabilities.html.Google Scholar
- 2016. Sunderland v. Bethesda Health, Inc. Vol. 184. Dist. Court, SD Florida. https://www.leagle.com/decision/infdco20160204a03.Google Scholar
- United Nations. 2016. WFD and WASLI Issue Statement on Signing Avatars. Retrieved from https://wfdeaf.org/news/wfd-wasli-issue-statement-signing-avatars/.Google Scholar
- Michael Sawh. 2017. Ontenna smart hair clip will help deaf people sense sound. Wareable Ltd. Retrieved from https://www.wareable.com/wearable-tech/ontenna-smart-hair-clip-for-deaf-sense-sound-4524.Google Scholar
- Registry of Interpreters for the Deaf, Inc. (RID). 2019. 2018 Annual Report. Retrieved from https://rid.org/2018-annual-report/.Google Scholar
- BBC News. 2019. Google sign language AI turns hand gestures into speech. Retrieved from https://www.bbc.com/news/technology-49410945.Google Scholar
- Alex Abenchuchan. 2015. The Daily Moth. Retrieved from https://www.dailymoth.com/blog.Google Scholar
- Chadia Abras, Diane Maloney-Krichmar, Jenny Preece, et al. 2004. User-centered design. Bainbridge, W. Encyclopedia of Human-computer Interaction. Sage Publications, Thousand Oaks, 37, 4 (2004), 445–456.Google Scholar
- Nikolas Adaloglou, Theocharis Chatzis, Ilias Papastratis, Andreas Stergioulas, Georgios Th. Papadopoulos, Vassia Zacharopoulou, George J. Xydopoulos, Klimnis Atzakas, Dimitris Papazachariou, and Petros Daras. 2020. A comprehensive study on sign language recognition methods. arXiv:2007.12530 [cs] (2020).Google Scholar
- Sedeeq Al-khazraji, Larwan Berke, Sushant Kafle, Peter Yeung, and Matt Huenerfauth. 2018. Modeling the speed and timing of American sign language to generate realistic animations. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS’18). Association for Computing Machinery, New York, NY, 259–270. DOI:https://doi.org/10.1145/3234695.3236356Google Scholar
- Oliver Alonzo, Abraham Glasser, and Matt Huenerfauth. 2019. Effect of automatic sign recognition performance on the usability of video-based search interfaces for sign language dictionaries. In Proceedings of the 21st International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS’19). Association for Computing Machinery, New York, NY, 56–67. DOI:https://doi.org/10.1145/3308561.3353791Google ScholarDigital Library
- Amazon. 2005. Amazon Mechanical Turk. Retrieved from https://www.mturk.com/.Google Scholar
- Robert J. Amdur and Elizabeth A. Bankert. 2010. Institutional Review Board: Member Handbook. Jones & Bartlett Publishers.Google Scholar
- Diego Roberto Antunes, André L. P. Guedes, and Laura Sánchez García. 2015. A context-based collaborative framework to build sign language databases by real users. In Proceedings of the International Conference on Universal Access in Human-computer Interaction. Springer, 327–338.Google ScholarCross Ref
- Diego R. Antunes, Cayley Guimarães, Laura S. García, Luiz Eduardo S. Oliveira, and Sueli Fernandes. 2011. A framework to support development of sign language human-computer interaction: Building tools for effective information access and inclusion of the deaf. In Proceedings of the 5th International Conference on Research Challenges in Information Science. IEEE, 1–12.Google ScholarCross Ref
- Giuseppe Ateniese, Luigi V. Mancini, Angelo Spognardi, Antonio Villani, Domenico Vitali, and Giovanni Felici. 2015. Hacking smart machines with smarter ones: How to extract meaningful data from machine learning classifiers. Int. J. Secur. Netw. 10, 3 (Sept. 2015), 137–150. DOI:https://doi.org/10.1504/IJSN.2015.071829Google ScholarDigital Library
- R. A. Augustus, E. Ritchie, and S. Stecker. 2013. The Official American Sign Language Writing Textbook. ASLized, Los Angeles, CA.Google Scholar
- Britta Bauer and Hermann Hienz. 2000. Relevant features for video-based continuous sign language recognition. In Proceedings of the 4th IEEE International Conference on Automatic Face and Gesture Recognition. IEEE, 440–445.Google ScholarCross Ref
- Britta Bauer and Karl-Friedrich Kraiss. 2002. Towards an automatic sign language recognition system using subunits. In Gesture and Sign Language in Human-computer Interaction, Ipke Wachsmuth and Timo Sowa (Eds.). Lecture Notes in Computer Science, Vol. 2298. Springer Berlin, 123–173.Google Scholar
- B. Bauer, S. Nießen, and H. Hienz. 1999. Towards an automatic sign language translation system. In Proceedings of the International Workshop on Physicality and Tangibility in Interaction: Towards New Paradigms for Interaction Beyond the Desktop.Google Scholar
- Larwan Berke, Sushant Kafle, and Matt Huenerfauth. 2018. Methods for evaluation of imperfect captioning tools by deaf or hard-of-hearing users at different reading literacy levels. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’18). Association for Computing Machinery, New York, NY, 1–12. DOI:https://doi.org/10.1145/3173574.3173665Google ScholarDigital Library
- T. Bluche, H. Ney, J. Louradour, and C. Kermorvant. 2015. Framewise and CTC training of neural networks for handwriting recognition. In Proceedings of the International Conference on Document Analysis and Recognition (ICDAR’15). 81–85. DOI:https://doi.org/10.1109/ICDAR.2015.7333730Google Scholar
- M. Borg and K. P. Camilleri. 2019. Sign language detection “in the wild” with recurrent neural networks. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP’19). 1637–1641. DOI:https://doi.org/10.1109/ICASSP.2019.8683257.Google Scholar
- Mark Borg and Kenneth P. Camilleri. 2020. Phonologically-meaningful subunits for deep learning-based sign language recognition. In Proceedings of the European Conference on Computer Vision Workshops (ECCVW’20).Google Scholar
- Harry Bornstein, Karen Luczak Saulnier, Lillian B. Hamilton, and Ralph R. Miller. 1983. The Comprehensive Signed English Dictionary. Gallaudet University Press.Google Scholar
- Rachel Botsman. 2017. Big data meets big brother as China moves to rate its citizens. Wired UK 21 (2017), 1–11.Google Scholar
- Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. 2016. Domain separation networks. In Advances in Neural Information Processing Systems 29, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (Eds.). Curran Associates, Inc., 343–351.Google Scholar
- Daren C. Brabham. 2013. Crowdsourcing. The MIT Press.Google Scholar
- Danielle Bragg, Oscar Koller, Mary Bellard, Larwan Berke, Patrick Boudreault, Annelies Braffort, Naomi Caselli, Matt Huenerfauth, Hernisa Kacorri, Tessa Verhoef, Christian Vogler, and Meredith Ringel Morris. 2019. Sign language recognition, generation, and translation: An interdisciplinary perspective. In Proceedings of the 21st International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS’19). Association for Computing Machinery, New York, NY, 16–31. DOI:https://doi.org/10.1145/3308561.3353774.Google ScholarDigital Library
- Danielle Bragg, Oscar Koller, Naomi Caselli, and William Thies. 2020. Exploring collection of sign language datasets: Privacy, participation, and model performance. In Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility. 16–31.Google ScholarDigital Library
- Danielle Bragg, Kyle Rector, and Richard E. Ladner. 2015. A user-powered American sign language dictionary. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing. 1837–1848.Google Scholar
- Ben Braithwaite. 2019. Sign language endangerment and linguistic diversity. Language 95, 1 (2019), e161–e187.Google ScholarCross Ref
- Ben Braithwaite. 2020. Ideologies of linguistic research on small sign languages in the global south: A Caribbean perspective. Lang. Commun. 74 (2020), 182–194.Google ScholarCross Ref
- M. Brand, N. Oliver, and A. Pentland. 1997. Coupled hidden Markov models for complex action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’97). 994–999. DOI:https://doi.org/10.1109/CVPR.1997.609450.Google ScholarCross Ref
- Constituição BRASIL. 2009. Decreto nº 6.949, de 25 de agosto de 2009. Promulga a Convenção Internacional sobre os Direitos das Pessoas com Deficiência e seu Protocolo Facultativo, assinados em Nova York, em 30 de março de 2007. Diário Oficial da União163 (2009).Google Scholar
- Chamber of Deputies BRAZIL. 2015. Law n 13,146, of July 6, 2015. institutes the Brazilian law for inclusion of persons with disabilities (statute of persons with disabilities ê ncia). Official J. Union (2015), 43.Google Scholar
- Diane Brentari. 1996. Trilled movement: Phonetic realization and formal representation. Lingua 98, 1 (Mar. 1996), 43–71. DOI:https://doi.org/10.1016/0024-3841(95)00032-1.Google ScholarCross Ref
- Robert V. Bruce. 2020. Bell: Alexander Graham Bell and the conquest of solitude. Plunkett Lake Press.Google Scholar
- Jeremy L. Brunson. 2008. Your case will now be heard: Sign language interpreters as problematic accommodations in legal interactions. J. Deaf Stud. Deaf Educ. 13, 1 (2008), 77–91.Google ScholarCross Ref
- Cihan Camgoz, Simon Hadfield, Oscar Koller, Hermann Ney, and Richard Bowden. 2018. Neural sign language translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’18). 7784–7793.Google ScholarCross Ref
- Necati Cihan Camgoz, Simon Hadfield, Oscar Koller, and Richard Bowden. 2016. Using convolutional 3D neural networks for user-independent continuous gesture recognition. In Proceedings of the International Conference on Pattern Recognition Workshops (ICPRW’16). 49–54.Google ScholarCross Ref
- Necati Cihan Camgoz, Simon Hadfield, Oscar Koller, and Richard Bowden. 2017. SubUNets: End-to-end hand shape and continuous sign language recognition. In Proceedings of the IEEE International Conference on Computer Vision (ICCV’17). 22–27.Google ScholarCross Ref
- Necati Cihan Camgoz, Oscar Koller, Simon Hadfield, and Richard Bowden. 2020. Sign language transformers: Joint end-to-end sign language recognition and translation. arXiv:2003.13830 [cs] (Mar. 2020).Google Scholar
- Lee W. Campbell, David A. Becker, Ali Azarbayejani, Aaron F. Bobick, and Alex Pentland. 1996. Invariant features for 3-D gesture recognition. In Proceedings of the 2nd International Conference on Automatic Face and Gesture Recognition. IEEE, 157–162.Google ScholarCross Ref
- Xiujuan Chai, Hanjie Wang, and Xilin Chen. 2014. The DEVISIGN Large Vocabulary of Chinese Sign Language Database and Baseline Evaluations. Technical Report. Key Lab of Intelligent Information Processing of Chinese Academy of Sciences.Google Scholar
- James I. Charlton. 2000. Nothing about Us Without Us: Disability Oppression and Empowerment. University of California Press.Google Scholar
- Ka Leong Cheng, Zhaoyang Yang, Qifeng Chen, and Yu-Wing Tai. 2020. Fully convolutional networks for continuous sign language recognition. arXiv:2007.12402 [cs] (July 2020).Google Scholar
- Nicholas Confessore, Michael LaForgia, and Gabriel J. X. Dance. 2018. Facebook’s data sharing and privacy rules: 5 takeaways from our investigation. Retrieved from https://www.nytimes.com/2018/12/18/us/politics/facebook-data-sharing-deals.html.Google Scholar
- H. Cooper, R. Bowden, and S. CVSSP. 2007. Sign language recognition using boosted volumetric features. In Proceedings of the IAPR Conference on Machine Vision Applications. 359–362.Google Scholar
- R. Orin Cornett. 1967. Cued speech. American Annals of the Deaf 112, 1 (Jan. 1967), 3–13. https://www.jstor.org/action/doBasicSearch?Query=cued+speech+cornett&filter=jid%3A10.2307%2Fj50013925.Google Scholar
- Onno Crasborn. 2010. What does “informed consent” mean in the internet age? Publishing sign language corpora as open content. Sign Lang. Stud. 10, 2 (2010), 276–290.Google ScholarCross Ref
- Onno Crasborn and Inge Zwitserlood. 2008. The corpus NGT: An online corpus for professionals and laymen. In LREC Workshop on the Representation and Processing of Sign Languages. 44–49.Google Scholar
- Runpeng Cui, Hu Liu, and Changshui Zhang. 2017. Recurrent convolutional neural networks for continuous sign language recognition by staged optimization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’17). 7361–7369.Google ScholarCross Ref
- R. Cui, H. Liu, and C. Zhang. 2019. A deep neural framework for continuous sign language recognition by iterative training. IEEE Trans. Multimedia (2019). DOI:https://doi.org/10.1109/TMM.2018.2889563.Google Scholar
- Maartje De Meulder. 2015. The legal recognition of sign languages. Sign Lang. Stud. 15, 4 (2015), 498–506.Google ScholarCross Ref
- Ronice Müller de Quadros, Kathryn Davidson, Diane Lillo-Martin, and Karen Emmorey. 2020. Code-blending with depicting signs. Ling. Approach. Biling. 10, 2 (2020), 290–308.Google ScholarCross Ref
- Dorothy Elizabeth Robling Denning. 1982. Cryptography and Data Security. Vol. 112. Addison-Wesley Reading.Google Scholar
- Cynthia Dwork. 2008. Differential privacy: A survey of results. In Proceedings of the International Conference on Theory and Applications of Models of Computation. Springer, 1–19.Google ScholarCross Ref
- Cynthia Dwork, Adam Smith, Thomas Steinke, and Jonathan Ullman. 2017. Exposed! A Survey of Attacks on Private Data. Ann. Rev. Statist. Applic. 4, 1 (2017), 61–84. DOI:https://doi.org/10.1146/annurev-statistics-060116-054123.Google ScholarCross Ref
- Ralph Elliott, Javier Bueno, Richard Kennaway, and John Glauert. 2010. Towards the integration of synthetic SL animation with avatars into corpus annotation tools. In Proceedings of the 4th Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies.Google Scholar
- Michael Erard. 2017. Why Sign-Language Gloves Don’t Help Deaf People. Retrieved from https://www.theatlantic.com/technology/archive/2017/11/why-sign-language-gloves-dont-help-deaf-people/545441/.Google Scholar
- European Sign Language Center. 2018. Spreadthesign. Retrieved from https://www.spreadthesign.com.Google Scholar
- Shelly Fan. 2016. This Smart Vest Lets the Deaf “Hear” with Their Skin. Retrieved from https://singularityhub.com/2016/10/09/this-smart-vest-lets-the-deaf-hear-with-their-skin/.Google Scholar
- Gaolin Fang, Wen Gao, and Debin Zhao. 2007. Large-vocabulary continuous sign language recognition based on transition-movement models. IEEE Trans. Syst., Man, Cyber. 37, 1 (Jan. 2007), 1–9. DOI:https://doi.org/10.1109/TSMCA.2006.886347.Google ScholarDigital Library
- Lindsay Ferrara and Gabrielle Hodge. 2018. Language as description, indication, and depiction. Front. Psychol. 9 (2018), 716.Google ScholarCross Ref
- Jami N. Fisher, Julie Hochgesang, and Meredith Tamminga. 2016. Examining variation in the absence of a “main” ASL corpus: The case of the Philadelphia signs project. In Proceedings of the 7th Workshop on the Representation and Processing of Sign Languages: Corpus Mining. 75–80.Google Scholar
- Jami N. Fisher, Meredith Tamminga, and Julie A. Hochgesang. 2018. The historical and social context of the Philadelphia ASL community. Sign Lang. Stud. 18, 3 (2018), 429–460.Google ScholarCross Ref
- Center for Data Innovation. 2019. When Is It Okay to Use Data for AI? Retrieved from https://www.datainnovation.org/2019/10/when-is-it-okay-to-use-data-for-ai/.Google Scholar
- Jens Forster, Oscar Koller, Christian Oberdörfer, Yannick Gweth, and Hermann Ney. 2013. Improving continuous sign language recognition: Speech recognition techniques and system design. In Proceedings of the Workshop on Speech and Language Processing for Assistive Technologies. 41–46.Google Scholar
- Jens Forster, Christoph Schmidt, Oscar Koller, Martin Bellgardt, and Hermann Ney. 2014. Extensions of the sign language recognition and translation corpus RWTH-PHOENIX-weather. In Proceedings of the International Conference on Language Resources and Evaluation (LREC’14). 1911–1916.Google Scholar
- Nelly Furman, David Goldberg, and Natalia Lusin. 2010. Enrollments in languages other than English in United States institutions of higher education, Fall 2009. In Modern Language Association. ERIC.Google Scholar
- Carrie Lou Garberoglio, Stephanie Cawthon, and Mark Bond. 2016. Deaf People and Employment in the United States. Washington, DC: US Department of Education, Office of Special Education Programs, National Deaf Center on Postsecondary Outcomes. https://scholar.google.com/scholar?hl=en&as_sdt=0%2C47&q=deaf+people+and+employment+in+the+united+states&btnG.Google Scholar
- Ann E. Geers, Christine M. Mitchell, Andrea Warner-Czyz, Nae-Yuh Wang, Laurie S. Eisenberg, CDaCI Investigative Team et al. 2017. Early sign language exposure and cochlear implantation benefits. Pediatrics 140, 1 (2017).Google Scholar
- Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. 2006. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the International Conference on Machine Learning (ICML’06). ACM, New York, NY, 369–376. DOI:https://doi.org/10.1145/1143844.1143891.Google ScholarDigital Library
- K. Grobel and M. Assan. 1997. Isolated sign language recognition using hidden Markov models. In Proceedingsof the IEEE International Conference on Systems, Man, and Cybernetics (SMC’97), Vol. 1. 162–167. DOI:https://doi.org/10.1109/ICSMC.1997.625742.Google Scholar
- Nicola Grove and Margaret Walker. 1990. The Makaton vocabulary: Using manual signs and graphic symbols to develop interpersonal communication. Augment. Altern. Commun. 6, 1 (1990), 15–28.Google ScholarCross Ref
- Yannick Gweth, Christian Plahl, and Hermann Ney. 2012. Enhanced continuous sign language recognition using PCA and neural network features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW’12), 55–60.Google ScholarCross Ref
- Wyatte C. Hall. 2017. What you don’t know can hurt you: The risk of language deprivation by impairing sign language development in deaf children. Matern. Child Health J. 21, 5 (2017), 961–965.Google ScholarCross Ref
- Wyatte C. Hall, Leonard L. Levin, and Melissa L. Anderson. 2017. Language deprivation syndrome: A possible neurodevelopmental disorder with sociocultural origins. Soc. PsychiatryPsychiatric Epidem. 52, 6 (June 2017), 761–776. DOI:https://doi.org/10.1007/s00127-017-1351-7.Google ScholarCross Ref
- Thomas Hanke. 2004. HamNoSys-representing sign language data in language resources and language processing contexts. In Proceedings of the International Conference on Language Resources and Evaluation, Vol. 4. 1–6.Google Scholar
- Raychelle Harris, Heidi M. Holmes, and Donna M. Mertens. 2009. Research ethics in sign language communities. Sign Lang. Stud. 9, 2 (2009), 104–131.Google ScholarCross Ref
- Peter C. Hauser. 2000. American sign language and cued English. Biling. Ident. Deaf Commun. 6 (2000), 43.Google Scholar
- Ellen S. Hibbard and Deb I. Fels. 2011. The Vlogging phenomena: A deaf perspective. In Proceedings of the 13th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS’11). Association for Computing Machinery, New York, NY, 59–66. DOI:https://doi.org/10.1145/2049536.2049549.Google Scholar
- Joseph Hill. 2020. Do deaf communities actually want sign language gloves?Nat. Electron. (2020). DOI:https://doi.org/10.1038/s41928-020-0451-7.Google Scholar
- Joseph Hill, Carolyn McCaskill, Ceil Lucas, and Robert Bayley. 2009. Signing outside the box: The size of signing space in Black ASL. New Ways Anal. Variat. 38 (2009).Google Scholar
- Joseph C. Hill. 2017. The importance of the sociohistorical context in sociolinguistics: The case of Black ASL. Sign Lang. Stud. 18, 1 (2017), 41–57.Google ScholarCross Ref
- Jonathan Hook, Sanne Verbaan, Abigail Durrant, Patrick Olivier, and Peter Wright. 2014. A study of the challenges related to DIY assistive technology in the context of children with disabilities. In Proceedings of the Conference on Designing Interactive Systems. 597–606.Google ScholarDigital Library
- Jie Huang, Wengang Zhou, Qilin Zhang, Houqiang Li, and Weiping Li. 2018. Video-based sign language recognition without temporal segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, 2257–2264.Google ScholarCross Ref
- Matt Huenerfauth, Elaine Gale, Brian Penly, Sree Pillutla, Mackenzie Willard, and Dhananjai Hariharan. 2017. Evaluation of language feedback methods for student videos of American sign language. ACM Trans. Access. Comput. 10, 1 (Apr. 2017). DOI:https://doi.org/10.1145/3046788.Google ScholarDigital Library
- Tom Humphries. 1975. Audism: The making of a word. Unpublished essay. https://scholar.google.com/scholar?hl=en&as_sdt=0%2C47&q=humphries+audism+the+making+of+a+word&btnG=.Google Scholar
- Tom Humphries, Raja Kushalnagar, Gaurav Mathur, Donna Jo Napoli, Carol Padden, Christian Rathmann, and Scott Smith. 2013. The right to language. J. Law, Medic. Ethics 41, 4 (2013), 872–884.Google ScholarCross Ref
- KinTrans Inc. 2013. Project KinTrans Hands Can Talk. Retrieved from https://www.kintrans.com/.Google Scholar
- MotionSavvy Inc.2013. UNI. Retrieved from http://motionsavvy.com.Google Scholar
- SignAll Technologies Inc. 2016. SignAll 1.0. Retrieved from https://www.signall.us/.Google Scholar
- Meredith Minkler and Nina Wallerstein. 2011. Community-Based Participatory Research for Health: From Process to Outcomes. John Wiley & Sons.Google Scholar
- Robert E. Johnson. 2006. Cultural constructs that impede discussions about variability in speech-based educational models for deaf children with cochlear implants. Perspectiva 24, 3 (2006), 29–80.Google Scholar
- Trevor Johnston. 2006. W(h)ither the Deaf community? Population, genetics, and the future of Australian sign language. Sign Lang. Stud. 6, 2 (2006), 137–173.Google ScholarCross Ref
- Hamid Reza Vaezi Joze and Oscar Koller. 2018. MS-ASL: A large-scale data set and benchmark for understanding American sign language. arXiv:1812.01053 [cs] (Dec. 2018).Google Scholar
- Hernisa Kacorri and Matt Huenerfauth. 2016. Continuous profile models in ASL syntactic facial expression synthesis. In Proceedings of the 54th Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2084–2093. DOI:https://doi.org/10.18653/v1/P16-1196.Google ScholarCross Ref
- Sushant Kafle, Abraham Glasser, Sedeeq Al-khazraji, Larwan Berke, Matthew Seita, and Matt Huenerfauth. 2020. Artificial intelligence fairness in the context of accessibility research on intelligent systems for people who are deaf or hard of hearing. SIGACCESS Access. Comput.125 (Mar. 2020). DOI:https://doi.org/10.1145/3386296.3386300.Google Scholar
- Sushant Kafle and Matt Huenerfauth. 2019. Predicting the understandability of imperfect English captions for people who are deaf or hard of hearing. ACM Trans. Access. Comput. 12, 2 (June 2019). DOI:https://doi.org/10.1145/3325862.Google ScholarDigital Library
- Avi C. Kak. 2002. Purdue RVL-SLLL ASL database for automatic recognition of American sign language. In Proceedings of the IEEE International Conference on Multimodal Interfaces (ICMI’02), 167–172. DOI:https://doi.org/10.1109/ICMI.2002.1166987.Google ScholarDigital Library
- Lori M. Kaufman. 2009. Data security in the world of cloud computing. IEEE Secur. Priv. 7, 4 (2009), 61–64.Google ScholarDigital Library
- Michael Kipp. 2017. Anvil. Retrieved from https://www.anvil-software.org/.Google Scholar
- Sang-Ki Ko, Chang Jo Kim, Hyedong Jung, and Choongsang Cho. 2019. Neural sign language translation based on human keypoint estimation. Appl. Sci. 9, 13 (Jan. 2019), 2683. DOI:https://doi.org/10.3390/app9132683.Google ScholarCross Ref
- Oscar Koller. 2020. Quantitative survey of the state of the art in sign language recognition. arXiv:2008.09918 [cs] (Aug. 2020).Google Scholar
- Oscar Koller, Necati Cihan Camgoz, Hermann Ney, and Richard Bowden. 2019. Weakly supervised learning with multi-stream CNN-LSTM-HMMs to discover sequential parallelism in sign language videos. IEEE Trans. Pattern Anal. Mach. Intell. accepted for publication (2019), 15.Google Scholar
- Oscar Koller, Jens Forster, and Hermann Ney. 2015. Continuous sign language recognition: Towards large vocabulary statistical recognition systems handling multiple signers. Comput. Vis. Image Underst. 141 (Dec. 2015), 108–125. DOI:https://doi.org/10.1016/j.cviu.2015.09.013.Google Scholar
- Oscar Koller, Hermann Ney, and Richard Bowden. 2013. May the force be with you: Force-aligned signwriting for automatic subunit annotation of corpora. In Proceedings of the International Conference on Automatic Face and Gesture Recognition (FG’13), 1–6.Google ScholarCross Ref
- Oscar Koller, Hermann Ney, and Richard Bowden. 2014. Read my lips: Continuous signer independent weakly supervised viseme recognition. In Proceedings of the European Conference on Computer Vision (ECCV’14). 281–296.Google ScholarCross Ref
- Oscar Koller, Hermann Ney, and Richard Bowden. 2015. Deep learning of mouth shapes for sign language. In Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCVW’15). 477–483.Google ScholarDigital Library
- Oscar Koller, Hermann Ney, and Richard Bowden. 2016. Automatic alignment of HamNoSys subunits for continuous sign language recognition. In Proceedings of the LREC Workshop on the Representation and Processing of Sign Languages. 121–128.Google Scholar
- Oscar Koller, Hermann Ney, and Richard Bowden. 2016. Deep hand: How to train a CNN on 1 million hand images when your data is continuous and weakly labelled. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’16), 3793–3802.Google ScholarCross Ref
- Oscar Koller, Sepehr Zargaran, and Hermann Ney. 2017. Re-sign: Re-aligned end-to-end sequence modelling with deep recurrent CNN-HMMs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’17), 4297–4305.Google ScholarCross Ref
- Oscar Koller, Sepehr Zargaran, Hermann Ney, and Richard Bowden. 2016. Deep sign: Hybrid CNN-HMM for continuous sign language recognition. In Proceedings of the British Machine Vision Conference (BMVC’16), 1–12. DOI:https://doi.org/10.5244/C.30.136.Google ScholarCross Ref
- Oscar Koller, Sepehr Zargaran, Hermann Ney, and Richard Bowden. 2018. Deep sign: Enabling robust statistical continuous sign language recognition via hybrid CNN-HMMs. Int. J. Comput. Vis. 126, 12 (Dec. 2018), 1311–1325. DOI:https://doi.org/10.1007/s11263-018-1121-3.Google ScholarDigital Library
- Steven Komarov and Krzysztof Z. Gajos. 2014. Organic peer assessment. In Proceedings of the CHI Learning Innovation at Scale Workshop.Google Scholar
- Reiner Konrad, Thomas Hanke, Gabriele Langer, Dolly Blanck, Julian Bleicken, Ilona Hofmann, Olga Jeziorski, Lutz König, Susanne König, Rie Nishio, Anja Regen, Uta Salden, Sven Wagner, and Satu Worseck. 2019. MY DGS – Annotated. Public Corpus of German Sign Language, 2nd Release. DOI:https://doi.org/10.25592/dgs-corpus-2.0.Google Scholar
- Robert Kraut, Michael Patterson, Vicki Lundmark, Sara Kiesler, Tridas Mukophadhyay, and William Scherlis. 1998. Internet paradox: A social technology that reduces social involvement and psychological well-being?Amer. Psychol. 53, 9 (1998), 1017.Google Scholar
- Jette H. Kristoffersen, Thomas Troelsgård, Anne Skov Hardell, Bo Hardell, Janne Boye Niemelä, Jørgen Sandholt, and Maja Toft. 2016. Ordbog over Dansk Tegnsprog 2008–2016. Retrieved from http://www.tegnsprog.dk/.Google Scholar
- Alexa Kuenburg, Paul Fellinger, and Johannes Fellinger. 2015. Health care access among deaf people. J. Deaf Stud. Deaf Educ. 21, 1 (09 2015), 1–10. DOI:https://doi.org/10.1093/deafed/env042.Google Scholar
- Raja Kushalnagar. 2018. Legibility of videos with ASL signers. In Proceedings of the ICT Accessibility Testing Symposium: Mobile Testing, 508 Revision, and Beyond (ICT’18).Google Scholar
- William Labov. 1972. Some principles of linguistic methodology. Language in Society 1, 1 (Apr. 1972), 97–120. https://www.jstor.org/action/doBasicSearch?Query=labov+%22some+principles+of+linguistic+methodology%22.Google ScholarCross Ref
- Emil Ladner. 1931. Silent talkies. American Annals of the Deaf 76, 3 (May 1931), 321–325. https://www.jstor.org/action/doBasicSearch?Query=silent+talkies+ladner&filter=jid%3A10.2307%2Fj50013925.Google Scholar
- Harlan Lane. 1989. When the Mind Hears: A History of the Deaf. Vintage.Google Scholar
- Harlan L. Lane, Robert Hoffmeister, and Benjamin J. Bahan. 1996. A Journey into the DEAF-WORLD.DawnSign Press.Google Scholar
- Harry G. Lang et al. 2000. A Phone of Our Own: The Deaf Insurrection against Ma Bell. Gallaudet University Press.Google Scholar
- Hai-Son Le, Ngoc-Quan Pham, and Duc-Dung Nguyen. 2015. Neural networks with hidden Markov models in skeleton-based gesture recognition. In Knowledge and Systems Engineering, Viet-Ha Nguyen, Anh-Cuong Le, and Van-Nam Huynh (Eds.). Advances in Intelligent Systems and Computing, Vol. 326. Springer International Publishing, 299–311. DOI:https://doi.org/10.1007/978-3-319-11680-8_24.Google Scholar
- Boris Lenseigne and Patrice Dalle. 2005. Using signing space as a representation for sign language processing. In Proceedings of the International Gesture Workshop. Springer, 25–36.Google Scholar
- Clayton Lewis. 2020. Implications of developments in machine learning for people with cognitive disabilities. SIGACCESS Access. Comput.124 (Mar. 2020). DOI:https://doi.org/10.1145/3386308.3386309.Google Scholar
- Talila A. Lewis. 2014. Police Brutality and Deaf People. Retrieved from https://www.aclu.org/blog/national-security/police-brutality-and-deaf-people.Google Scholar
- Dongxu Li, Cristian Rodriguez Opazo, Xin Yu, and Hongdong Li. 2020. Word-level deep sign language recognition from video: A new large-scale dataset and methods comparison. arXiv:1910.11006 [cs] (Jan. 2020).Google Scholar
- J. F. Lichtenauer, E. A. Hendriks, and M. J. T. Reinders. 2008. Sign language recognition by combining statistical DTW and independent classification. IEEE Trans. Pattern Anal. Mach. Intell. 30, 11 (Nov. 2008), 2040–2046. DOI:https://doi.org/10.1109/TPAMI.2008.123.Google ScholarDigital Library
- Scott K. Liddell and Robert E. Johnson. 1989. American sign language: The phonological base.Sign Lang. Stud. 1 (1989), 64:195–277.Google Scholar
- Jessica Litman. 1990. The public domain. Emory Law J. 39 (1990), 965.
- Paul A. Lombardo. 2008. Three Generations, No Imbeciles: Eugenics, the Supreme Court, and Buck v. Bell. JHU Press.
- David Loshin. 2002. Knowledge Integrity: Data Ownership. Retrieved from http://www.datawarehouse.com/article/?articleid=3052.
- Ceil Lucas. 2001. The Sociolinguistics of Sign Languages. Cambridge University Press.
- Ceil Lucas. 2014. The Sociolinguistics of the Deaf Community. Elsevier.
- Emily Lund. 2016. Vocabulary knowledge of children with cochlear implants: A meta-analysis. J. Deaf Stud. Deaf Educ. 21, 2 (2016), 107–121.
- Yongsen Ma, Gang Zhou, Shuangquan Wang, Hongyang Zhao, and Woosub Jung. 2018. SignFi: Sign language recognition using WiFi. Proc. ACM Interact. Mob. Wear. Ubiq. Technol. 2, 1 (Mar. 2018), 23:1–23:21. DOI:https://doi.org/10.1145/3191755.
- Maartje De Meulder. 2020. Acadeafic. Retrieved from https://acadeafic.org/.
- Kelly Mack, Danielle Bragg, Meredith Ringel Morris, Maarten W. Bos, Isabelle Albi, and Andrés Monroy-Hernández. 2020. Social app accessibility for deaf signers. Proc. ACM Hum.-comput. Interact. 4, CSCW2 (2020), 1–31.
- Rachel I. Mayberry and Robert Kluender. 2018. Rethinking the critical period for language: New insights into an old question from American sign language. Biling.: Lang. Cog. 21, 5 (2018), 886–905.
- Carolyn McCaskill, Ceil Lucas, Robert Bayley, and Joseph Hill. 2011. The Hidden Treasure of Black ASL: Its History and Structure. Gallaudet University Press, Washington, DC.
- David McKee, Rachel McKee, Sara Pivac Alexander, and Lynette Pivac. 2015. The Online Dictionary of New Zealand Sign Language. Retrieved from http://nzsl.vuw.ac.nz/.
- Michael McKee, Deirdre Schlehofer, and Denise Thew. 2013. Ethical issues in conducting research with deaf populations. Amer. J. Pub. Health 103, 12 (2013), 2174–2178.
- Michael McKee, Denise Thew, Matthew Starr, Poorna Kushalnagar, John T. Reid, Patrick Graybill, Julia Velasquez, and Thomas Pearson. 2012. Engaging the deaf American sign language community: Lessons from a community-based participatory research center. Prog. Commun. Health Partner.: Res., Educ. Act. 6, 3 (2012), 321.
- Irit Meir, Wendy Sandler, Carol Padden, Mark Aronoff et al. 2010. Emerging sign languages. Oxford Handb. Deaf Stud., Lang. Educ. 2 (2010), 267–280.
- Ross E. Mitchell, Travas A. Young, Bellamie Bachleda, and Michael A. Karchmer. 2006. How many people use ASL in the United States? Why estimates need updating. Sign Lang. Stud. 6, 3 (2006), 306–335. DOI:https://doi.org/10.1353/sls.2006.0019.
- Rhonda J. Moore, Ross Smith, and Qi Liu. 2020. Using computational ethnography to enhance the curation of real-world data (RWD) for chronic pain and invisible disability use cases. SIGACCESS Access. Comput. 127 (July 2020). DOI:https://doi.org/10.1145/3412836.3412840.
- Donald F. Moores. 2010. The history of language and communication issues in deaf education. Oxford Handb. Deaf Stud., Lang. Educ. 2 (2010), 17–30.
- Meredith Ringel Morris. 2020. AI and accessibility. Commun. ACM 63, 6 (May 2020), 35–37. DOI:https://doi.org/10.1145/3356727.
- Michael J. Muller and Sarah Kuhn. 1993. Participatory design. Commun. ACM 36, 6 (1993), 24–28.
- Milad Nasr, Reza Shokri, and Amir Houmansadr. 2019. Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning. In Proceedings of the IEEE Symposium on Security and Privacy (SP’19). 739–753. DOI:https://doi.org/10.1109/SP.2019.00065.
- Brenda Nicodemus and Karen Emmorey. 2015. Directionality in ASL-English interpreting: Accuracy and articulation quality in L1 and L2. Interpreting 17, 2 (2015), 145–166.
- Kim E. Nielsen. 2012. A Disability History of the United States. Vol. 2. Beacon Press.
- U.S. Department of Justice Civil Rights Division. 2014. Effective Communication. Retrieved from https://www.ada.gov/effective-comm.htm.
- Eng-Jon Ong, Oscar Koller, Nicolas Pugeault, and Richard Bowden. 2014. Sign spotting using hierarchical sequential patterns with temporal intervals. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’14). 1931–1938.
- Sylvie C. W. Ong and Surendra Ranganath. 2005. Automatic sign language analysis: A survey and the future beyond lexical meaning. IEEE Trans. Pattern Anal. Mach. Intell. 27, 6 (2005), 873–891.
- Eleni Orfanidou, Bencie Woll, and Gary Morgan. 2014. Research Methods in Sign Language Studies: A Practical Guide. John Wiley & Sons.
- Carol Padden and Jacqueline Humphries. 2020. Who goes first? Deaf people and CRISPR germline editing. Perspect. Biol. Med. 63, 1 (2020), 54–65.
- European Parliament and Council of the European Union. 2016. Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). OJ L 119, 1–88.
- David M. Perlmutter. 1992. Sonority and syllable structure in American sign language. Linguistic Inquiry 23, 3 (Summer 1992), 407–442. https://www.jstor.org/action/doBasicSearch?Query=perlmutter+%22Sonority+and+syllable+structure+in+American+sign+language%22.
- Roland Pfau, Markus Steinbach, and Bencie Woll. 2012. Sign Language. De Gruyter Mouton, Berlin, Boston. DOI:https://doi.org/10.1515/9783110261325.
- Rob Picheta. 2020. High-tech glove translates sign language into speech in real time. Retrieved from https://www.cnn.com/2020/06/30/health/sign-language-glove-ucla-scn-scli-intl/index.html.
- Chen Pichler, Julie Hochgesang, Doreen Simons, and Diane Lillo-Martin. [n.d.]. Community input on re-consenting for data sharing. In Proceedings of the Language Resources and Evaluation Conference.
- Lionel Pigou, Sander Dieleman, Pieter-Jan Kindermans, and Benjamin Schrauwen. 2014. Sign language recognition using convolutional neural networks. In Proceedings of the European Conference on Computer Vision Workshops (ECCVW’14), Lourdes Agapito, Michael M. Bronstein, and Carsten Rother (Eds.), Vol. I. Springer International Publishing, 572–578.
- V. Pitsikalis, S. Theodorakis, and P. Maragos. 2010. Data-driven sub-units and modeling structure for continuous sign language recognition with multiple cues. In Proceedings of the LREC Workshop on the Representation and Processing of Sign Languages. 196–203.
- V. Pitsikalis, S. Theodorakis, C. Vogler, and P. Maragos. 2011. Advances in phonetics-based sub-unit modeling for transcription alignment and sign language recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW’11). 1–6. DOI:https://doi.org/10.1109/CVPRW.2011.5981681.
- Soraia Silva Prietch, Polianna dos Santos Paim, Ivan Olmos-Pineda, Josefina Guerrero García, and Juan Manuel Gonzalez Calleros. 2019. The human and the context components in the design of automatic sign language recognition systems. In Proceedings of the Iberoamerican Workshop on Human-computer Interaction. Springer, 369–380.
- Soraia Silva Prietch, Ivan Olmos Pineda, Polianna dos S. Paim, J. M. G. Calleros, J. G. García, and R. Resmin. 2019. Discussion on image processing for sign language recognition: An overview of the problem complexity. United Academic Journals (2019), 112–127.
- Renée Punch. 2016. Employment and adults who are deaf or hard of hearing: Current status and experiences of barriers, accommodations, and stress in the workplace. Amer. Ann. Deaf 161, 3 (2016), 384–397.
- Ronice Müller Quadros, Deonísio Schmitt, Juliana Lohn, and Tarcísio de Arantes Leite. [n.d.]. Corpus de Libras. Retrieved from http://corpuslibras.ufsc.br/.
- Lawrence R. Rabiner. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings IEEE 77, 2 (1989), 257–286.
- Qasim Mahmood Rajpoot and Christian Damsgaard Jensen. 2015. Video surveillance: Privacy issues and legal compliance. In Promoting Social Change and Democracy Through Information Technology. IGI Global, 69–92.
- Octavian Robinson and Jonathan Henner. 2018. Authentic voices, authentic encounters: Cripping the university through American sign language. Disab. Stud. Quart. 38, 4 (2018).
- Richard A. Rogers. 2006. From cultural exchange to transculturation: A review and reconceptualization of cultural appropriation. Commun. Theor. 16, 4 (2006), 474–503.
- David Rybach. 2006. Appearance-based Features for Automatic Continuous Sign Language Recognition. Ph.D. Dissertation. Human Language Technology and Pattern Recognition Group, RWTH Aachen University, Aachen, Germany.
- Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, and Klaus-Robert Müller. 2019. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Vol. 11700. Springer Nature.
- Adam Schembri, Kearsy Cormier, and Jordan Fenlon. 2018. Indicating verbs as typologically unique constructions: Reconsidering verb “agreement” in sign languages. Glossa: J. Gen. Ling. 3, 1 (2018).
- Adam Schwartz. 2012. Chicago’s video surveillance cameras: A pervasive and poorly regulated threat to our privacy. Nw. J. Tech. & Intell. Prop. 11 (2012), ix.
- Ann Senghas, Sotaro Kita, and Asli Özyürek. 2004. Children creating core properties of language: Evidence from an emerging sign language in Nicaragua. Science 305, 5691 (2004), 1779–1782.
- Dylan A. Simon, Andrew S. Gordon, Lisa Steiger, and Rick O. Gilmore. 2015. Databrary: Enabling sharing and reuse of research video. In Proceedings of the 15th ACM/IEEE-CS Joint Conference on Digital Libraries. 279–280.
- T. Simonite. 2019. The best algorithms struggle to recognize black faces equally. WIRED.
- Kristin Snoddon and Maartje De Meulder. 2020. Introduction: Ideologies in sign language vitality and revitalization. Lang. Commun. 74 (2020), 154–163.
- Anthony Spadafora. 2019. Microsoft’s new “Data Dignity” team aims to give users more control over their data. Retrieved from https://www.techradar.com/news/microsofts-new-data-dignity-team-aims-to-give-users-more-control-over-their-data.
- Rose Stamp, Adam Schembri, Jordan Fenlon, Ramas Rentelis, Bencie Woll, and Kearsy Cormier. 2014. Lexical variation and change in British sign language. PLoS One 9, 4 (2014), e94053.
- T. Starner and A. Pentland. 1995. Real-time American sign language recognition from video using hidden Markov models. In Proceedings of the International Symposium on Computer Vision. 265–270.
- Thad Starner, Joshua Weaver, and Alex Pentland. 1998. Real-time American sign language recognition using desk and wearable computer based video. IEEE Trans. Pattern Anal. Mach. Intell. 20, 12 (Dec. 1998), 1371–1375.
- William C. Stokoe. 1960. Sign language structure: An outline of the visual communication systems of the American deaf. Stud. Ling.: Occas. Papers 8, 8 (1960).
- William C. Stokoe, Dorothy C. Casterline, and Carl G. Croneberg. 1965 (reissued 1976). A Dictionary of American Sign Language on Linguistic Principles. Linstok Press.
- Ted Supalla and Patricia Clark. 2014. Sign Language Archaeology: Understanding the Historical Roots of American Sign Language. Gallaudet University Press.
- Valerie Sutton. 1995. Lessons in SignWriting. SignWriting.
- Nazif Can Tamer and Murat Saraçlar. 2020. Keyword search for sign language. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP’20). 8184–8188. DOI:https://doi.org/10.1109/ICASSP40776.2020.9054678.
- Joshua G. Tanenbaum, Amanda M. Williams, Audrey Desjardins, and Karen Tanenbaum. 2013. Democratizing technology: Pleasure, utility and expressiveness in DIY and maker practice. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2603–2612.
- Sarah F. Taub. 2001. Language from the Body: Iconicity and Metaphor in American Sign Language. Cambridge University Press.
- Liz Jackson. 2019. Disability Dongle: A well intended elegant, yet useless solution to a problem we never knew we had. Disability Dongles are most often conceived of and created in design schools and at IDEO. Retrieved from https://twitter.com/elizejackson/status/1110629818234818570?s=20.
- Jelle Ten Kate, Gerwin Smit, and Paul Breedveld. 2017. 3D-printed upper limb prostheses: A review. Disab. Rehab.: Assist. Technol. 12, 3 (2017), 300–314.
- Bernard T. Tervoort. 1953. Structurele Analyse van Visueel Taalgebruik binnen een Groep Dove Kinderen [Structural Analysis of Visual Language Use Within a Group of Deaf Children]. Deel 2: Materiaal, Registers, Enz. North-Holland Publishing Company.
- The Language Archive, Max Planck Institute for Psycholinguistics. 2018. ELAN. Retrieved from https://tla.mpi.nl/tools/tla-tools/elan/elan-description/.
- The Office of Research Integrity (ORI), U.S. Department of Health & Human Services. 2020. Responsible Conduct in Data Management: Data Ownership. Retrieved from https://ori.hhs.gov/education/products/n_illinois_u/datamanagement/dotopic.html.
- Emily A. Tobey, Donna Thal, John K. Niparko, Laurie S. Eisenberg, Alexandra L. Quittner, Nae-Yuh Wang, and CDaCI Investigative Team. 2013. Influence of implantation age on school-age language performance in pediatric cochlear implant users. Int. J. Audiol. 52, 4 (2013), 219–229.
- Andrea Toliver-Smith and Betholyn Gentry. 2017. Investigating Black ASL: A systematic review. Amer. Ann. Deaf 161, 5 (2017), 560–570.
- Shari Trewin. 2018. AI fairness for people with disabilities: Point of view. ArXiv abs/1811.10670 (2018).
- Shari Trewin, Sara Basson, Michael Muller, Stacy Branham, Jutta Treviranus, Daniel Gruen, Daniel Hebert, Natalia Lyckowski, and Erich Manser. 2019. Considerations for AI fairness for people with disabilities. AI Matters 5, 3 (Dec. 2019), 40–63. DOI:https://doi.org/10.1145/3362077.3362086.
- Shari Trewin, Meredith Ringel Morris, Stacy Branham, Walter S. Lasecki, Shiri Azenkot, Nicole Bleuel, Shiri Azenkot, Phill Jenkins, and Jeffrey P. Bigham. 2020. Workshop on AI fairness for people with disabilities. SIGACCESS Access. Comput. 125 (Mar. 2020). DOI:https://doi.org/10.1145/3386296.3386297.
- Mieke Van Herreweghe, Myriam Vermeerbergen, Eline Demey, Hannes De Durpel, Hilde Nyffels, and Sam Verstraete. 2015. Het Corpus VGT. Een Digitaal Open Access Corpus van Video's en Annotaties van Vlaamse Gebarentaal, Ontwikkeld aan de Universiteit Gent i.s.m. KU Leuven [The VGT Corpus: A digital open-access corpus of videos and annotations of Flemish Sign Language, developed at Ghent University in collaboration with KU Leuven]. Retrieved from www.corpusvgt.be.
- Jaipreet Virdi. 2020. Hearing Happiness: Deafness Cures in History. University of Chicago Press.
- C. Vogler and D. Metaxas. 1997. Adapting hidden Markov models for ASL recognition by using three-dimensional computer vision methods. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC’97). Orlando, 156–161.
- Christian Vogler and Dimitris Metaxas. 1999. Parallel hidden Markov models for American sign language recognition. In Proceedings of the IEEE International Conference on Computer Vision (ICCV’99), Vol. 1. 116–122.
- Christian Vogler and Dimitris Metaxas. 1999. Toward scalability in ASL recognition: Breaking down signs into phonemes. In Gesture-based Communication in Human-computer Interaction (Lecture Notes in Computer Science). Springer, Berlin, 211–224. DOI:https://doi.org/10.1007/3-540-46616-9_19.
- Christian Vogler and Dimitris Metaxas. 2001. A framework for recognizing the simultaneous aspects of American sign language. Comput. Vis. Image Underst. 81, 3 (2001), 358–384.
- Christian Vogler and Dimitris Metaxas. 2004. Handshapes and movements: Multiple-channel American sign language recognition. In Gesture-based Communication in Human-computer Interaction (Lecture Notes in Computer Science), Antonio Camurri and Gualtiero Volpe (Eds.), Vol. 2915. Springer, Berlin, 247–258. DOI:https://doi.org/10.1007/978-3-540-24598-8_23.
- Ulrich von Agris, Moritz Knorr, and K.-F. Kraiss. 2008. The significance of facial features for automatic sign language recognition. In Proceedings of the International Conference on Automatic Face and Gesture Recognition (FG’08). 1–6.
- Matthew J. Vowels, Necati Cihan Camgoz, and Richard Bowden. 2020. NestedVAE: Isolating common factors via weak supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 9202–9212.
- World Wide Web Consortium (W3C). 2018. Web Content Accessibility Guidelines (WCAG) 2.1. Retrieved from https://www.w3.org/TR/2018/REC-WCAG21-20180605/.
- Elyse Wanshel. 2016. Students Invented Gloves That Can Translate Sign Language into Speech and Text. Retrieved from https://www.huffpost.com/entry/navid-azodi-and-thomas-pryor-signaloud-gloves-translate-american-sign-language-into-speech-text_n_571fb38ae4b0f309baeee06d.
- R. Yang, S. Sarkar, and B. Loeding. 2010. Handling movement epenthesis and hand segmentation ambiguities in continuous sign language recognition using nested dynamic programming. IEEE Trans. Pattern Anal. Mach. Intell. 32, 3 (Mar. 2010), 462–477. DOI:https://doi.org/10.1109/TPAMI.2009.26.
- Zhaoyang Yang, Zhenmei Shi, Xiaoyong Shen, and Yu-Wing Tai. 2019. SF-Net: Structured feature network for continuous sign language recognition. arXiv:1908.01341 [cs] (Aug. 2019).
- YouTube. [n.d.]. YouTube.com Results, Query: “ASL homework”. Retrieved from https://www.youtube.com/results?search_query=ASL+homework.
- YouTube, LLC. 2005. YouTube. Retrieved from https://www.youtube.com/.
- Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. 2017. Understanding deep learning requires rethinking generalization. In Proceedings of the International Conference on Learning Representations (ICLR’17).
- Hao Zhou, Wengang Zhou, Yun Zhou, and Houqiang Li. 2020. Spatial-temporal multi-cue network for continuous sign language recognition. arXiv:2002.03187 [cs] (Feb. 2020).
- Zhihao Zhou, Kyle Chen, Xiaoshi Li, Songlin Zhang, Yufen Wu, Yihao Zhou, Keyu Meng, Chenchen Sun, Qiang He, Wenjing Fan et al. 2020. Sign-to-speech translation using machine-learning assisted stretchable sensor arrays. Nat. Electron. (2020). DOI:https://doi.org/10.1038/s41928-020-0428-6.
Index Terms
- The FATE Landscape of Sign Language AI Datasets: An Interdisciplinary Perspective