Research article
DOI: 10.1145/2750858.2806060

Exploring feedback strategies to improve public speaking: an interactive virtual audience framework

Published: 7 September 2015

ABSTRACT

Good public speaking skills are the basis of strong and effective communication, which is critical in many professions and in everyday life. Speaking well in public requires considerable training and practice. Recent technological developments enable new approaches to public speaking training that let users practice in a safe and engaging environment. We explore feedback strategies for public speaking training based on an interactive virtual audience paradigm. We investigate three study conditions: (1) a non-interactive virtual audience (control condition), (2) direct visual feedback, and (3) nonverbal feedback from an interactive virtual audience. We perform a threefold evaluation based on self-assessment questionnaires, expert assessments, and two objectively annotated measures: eye contact and avoidance of pause fillers. Our experiments show that the interactive virtual audience brings together the best of both worlds: increased engagement and challenge, as well as improved public speaking skills as judged by experts.


Published in

UbiComp '15: Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing
September 2015, 1302 pages
ISBN: 9781450335744
DOI: 10.1145/2750858

Copyright © 2015 ACM


      Publisher

      Association for Computing Machinery

      New York, NY, United States


Acceptance Rates

UbiComp '15 paper acceptance rate: 101 of 394 submissions (26%). Overall acceptance rate: 764 of 2,912 submissions (26%).
