Research article · UbiComp Conference Proceedings · DOI: 10.1145/2493432.2493502

MACH: my automated conversation coach

Published: 08 September 2013

ABSTRACT

MACH--My Automated Conversation coacH--is a novel system that provides ubiquitous access to social skills training. The system includes a virtual agent that reads facial expressions, speech, and prosody and responds with verbal and nonverbal behaviors in real time. This paper presents an application of MACH in the context of training for job interviews. During the training, MACH asks interview questions, automatically mimics certain behaviors of the user, and exhibits appropriate nonverbal behaviors. Following the interaction, MACH provides visual feedback on the user's performance. The development of this application draws on data from 28 interview sessions involving employment-seeking students and career counselors. The effectiveness of MACH was assessed through a weeklong trial with 90 MIT undergraduates. Students who interacted with MACH were rated by human experts to have improved in overall interview performance, while the ratings of students in the control groups did not improve. Post-experiment interviews indicate that participants found the experience informative about their own behaviors and expressed interest in using MACH in the future.


Supplemental Material

ubi1548.mp4 (mp4, 34.5 MB)


