
Using eye movements to determine referents in a spoken dialogue system

Published: 15 November 2001

ABSTRACT

Most computational spoken dialogue systems take a "literary" approach to reference resolution. With this type of approach, entities that are mentioned by a human interactor are unified with elements in the world state based on the same principles that guide the process during text interpretation. In human-to-human interaction, however, referring is a much more collaborative process. Participants often underspecify their referents, relying on their discourse partners for feedback if more information is needed to uniquely identify a particular referent. By monitoring eye movements during this interaction, it is possible to improve the performance of a spoken dialogue system on referring expressions that are underspecified according to the literary model. This paper describes a system currently under development that employs such a strategy.
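
To make the strategy concrete, the sketch below illustrates one way gaze data could break ties that a purely literal resolver cannot. This is a minimal illustration under stated assumptions, not the authors' implementation: the names (Entity, Fixation, resolve_referent), the property-set representation of descriptions, and the 1.5-second gaze window are all hypothetical, and it assumes the system can supply (a) the world entities matching an expression's literal constraints and (b) time-stamped fixations on a clock shared with the speech recognizer.

```python
from dataclasses import dataclass
from typing import List, Optional, Set

@dataclass
class Entity:
    """An object in the world state (hypothetical representation)."""
    name: str                    # e.g. "valve_2"
    properties: Set[str]         # e.g. {"valve", "red"}

@dataclass
class Fixation:
    """One gaze fixation reported by the eye tracker (hypothetical)."""
    entity: Entity               # on-screen object the gaze landed on
    end_time: float              # seconds, shared clock with the recognizer

def literal_candidates(description: Set[str],
                       world: List[Entity]) -> List[Entity]:
    """'Literary' resolution: every entity whose properties include
    all the properties in the spoken description."""
    return [e for e in world if description <= e.properties]

def resolve_referent(description: Set[str],
                     utterance_time: float,
                     world: List[Entity],
                     fixations: List[Fixation],
                     window: float = 1.5) -> Optional[Entity]:
    """Resolve a referring expression, falling back on gaze when the
    description is underspecified (the 1.5 s window is an assumption)."""
    candidates = literal_candidates(description, world)
    if len(candidates) == 1:
        return candidates[0]          # the literary model suffices
    if not candidates:
        return None                   # nothing matches; ask the user
    # Underspecified: prefer the candidate the user fixated most
    # recently within `window` seconds before the referring expression.
    recent = [f for f in fixations
              if f.entity in candidates
              and utterance_time - window <= f.end_time <= utterance_time]
    if recent:
        return max(recent, key=lambda f: f.end_time).entity
    return None                       # still ambiguous; clarify with user
```

Under this hypothetical scheme, if the user says "the red valve" while two red valves are on screen, the literal candidate set has two members and the most recent qualifying fixation picks one out; if no recent fixation helps either, the system asks for clarification, mirroring the collaborative feedback loop the abstract describes.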


Published in

PUI '01: Proceedings of the 2001 workshop on Perceptive user interfaces
November 2001, 241 pages
ISBN: 9781450374736
DOI: 10.1145/971478 (proceedings); 10.1145/971478.971489 (article)

Copyright © 2001 ACM


Publisher

Association for Computing Machinery, New York, NY, United States
