DOI: 10.1145/3308561.3353774
Research Article · Public Access · Best Paper

Sign Language Recognition, Generation, and Translation: An Interdisciplinary Perspective

Published: 24 October 2019

ABSTRACT

Developing successful sign language recognition, generation, and translation systems requires expertise in a wide range of fields, including computer vision, computer graphics, natural language processing, human-computer interaction, linguistics, and Deaf culture. Despite the need for deep interdisciplinary knowledge, existing research occurs in separate disciplinary silos, and tackles separate portions of the sign language processing pipeline. This leads to three key questions: 1) What does an interdisciplinary view of the current landscape reveal? 2) What are the biggest challenges facing the field? and 3) What are the calls to action for people working in the field? To help answer these questions, we brought together a diverse group of experts for a two-day workshop. This paper presents the results of that interdisciplinary workshop, providing key background that is often overlooked by computer scientists, a review of the state-of-the-art, a set of pressing challenges, and a call to action for the research community.


Published in

ASSETS '19: Proceedings of the 21st International ACM SIGACCESS Conference on Computers and Accessibility
October 2019, 730 pages
ISBN: 9781450366762
DOI: 10.1145/3308561

                  Copyright © 2019 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

                  Publisher

                  Association for Computing Machinery

                  New York, NY, United States

                  Publication History

                  • Published: 24 October 2019


                  Acceptance Rates

ASSETS '19 Paper Acceptance Rate: 41 of 158 submissions (26%). Overall Acceptance Rate: 436 of 1,556 submissions (28%).
