DOI: 10.1145/3313831.3376625
Research Article · CHI Conference Proceedings

Mirror Ritual: An Affective Interface for Emotional Self-Reflection

Published: 23 April 2020

ABSTRACT

This paper introduces a new form of real-time affective interface that engages the user in a process of conceptualisation of their emotional state. Inspired by Barrett's Theory of Constructed Emotion, 'Mirror Ritual' aims to expand upon the user's accessible emotion concepts, and to ultimately provoke emotional reflection and regulation. The interface uses classified emotions – obtained through facial expression recognition – as a basis for dynamically generating poetry. The perceived emotion is used to seed a poetry generation system based on OpenAI's GPT-2 model, fine-tuned on a specially curated corpus. We evaluate the device's ability to foster a personalised, meaningful experience for individual users over a sustained period. A qualitative analysis revealed that participants were able to affectively engage with the mirror, with each participant developing a unique interpretation of its poetry in the context of their own emotional landscape.
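The abstract describes a two-stage pipeline: a facial expression classifier produces a perceived-emotion label, which then seeds a GPT-2 language model that generates poetry. The sketch below is a minimal, hypothetical illustration of that flow, not the authors' implementation: it assumes an OpenCV Haar-cascade face detector, a placeholder classify_emotion() function standing in for the expression-recognition CNN of Arriaga et al., and the stock "gpt2" checkpoint loaded through the Hugging Face transformers library rather than the paper's fine-tuned, poetry-corpus model.

```python
# Hypothetical sketch of the capture -> classify -> generate pipeline described
# in the abstract. The emotion classifier is a placeholder and the language
# model is the stock GPT-2 checkpoint, not the paper's fine-tuned poetry model.
import cv2
from transformers import GPT2LMHeadModel, GPT2Tokenizer

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]


def classify_emotion(face_img) -> str:
    """Placeholder for a facial-expression CNN; returns one of EMOTIONS."""
    return "neutral"  # stand-in: a real classifier would run inference here


def capture_face():
    """Grab one webcam frame and return the first detected face crop (or None)."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return None
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return gray[y:y + h, x:x + w]


def generate_poem(emotion: str, max_length: int = 60) -> str:
    """Seed GPT-2 with the perceived emotion label and sample a short text."""
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    prompt = f"A poem about feeling {emotion}:\n"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    output = model.generate(
        input_ids,
        max_length=max_length,
        do_sample=True,          # sampling rather than greedy decoding
        top_p=0.9,
        temperature=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)


if __name__ == "__main__":
    face = capture_face()
    emotion = classify_emotion(face) if face is not None else "neutral"
    print(generate_poem(emotion))
```

A real deployment would run this capture, classify, and generate loop continuously against the mirror display; the sketch collapses it into a single pass for clarity.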


Supplemental Material

paper498pv.mov (mov, 45.5 MB)
a498-rajcic-presentation.mp4 (mp4, 90.7 MB)

References

1. Cristina Albu. 2016. Mirror Affect: Seeing Self, Observing Others in Contemporary Art. University of Minnesota Press, Minneapolis, MN.
2. Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul N. Bennett, Kori Inkpen, Jaime Teevan, Ruth Kikin-Gil, and Eric Horvitz. 2019. Guidelines for Human-AI Interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY. DOI: https://doi.org/10.1145/3290605.3300233
3. Octavio Arriaga, Matias Valdenegro-Toro, and Paul Plöger. 2017. Real-time convolutional neural networks for emotion and gender classification. arXiv preprint arXiv:1710.07557 (2017).
4. Lisa Feldman Barrett. 2017. The theory of constructed emotion: an active inference account of interoception and categorization. Social Cognitive and Affective Neuroscience 12, 1 (Jan 2017), 1--23. DOI: https://doi.org/10.1093/scan/nsw154
5. Lisa Feldman Barrett and Ajay Bhaskar Satpute. 2013. Large-scale brain networks in affective and social neuroscience: towards an integrative functional architecture of the brain. Current Opinion in Neurobiology 23, 3 (2013), 361--372.
6. Daniel Besserer, Johannes Bäurle, Alexander Nikic, Frank Honold, Felix Schüssel, and Michael Weber. 2016. FitMirror: A Smart Mirror for Positive Affect in Everyday User Morning Routines. In MA3HMI '16: Proceedings of the Workshop on Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction. ACM, New York, NY, 48--55. DOI: https://doi.org/10.1145/3011263.3011265
7. Mark Blythe and Andrew Monk (Eds.). 2018. Funology 2. Springer International Publishing. DOI: https://doi.org/10.1007/978-3-319-68213-6
8. Margaret A. Boden. 1998. Creativity and artificial intelligence. Artificial Intelligence 103, 1--2 (August 1998), 347--356.
9. Margaret A. Boden. 2010. Creativity and Art: Three Roads to Surprise. Oxford University Press.
10. Kirsten Boehner, Rogério DePaula, Paul Dourish, and Phoebe Sengers. 2005. Affect: from information to interaction. In Proceedings of the 4th Decennial Conference on Critical Computing: Between Sense and Sensibility. ACM, 59--68.
11. J. David Bolter and Diane Gromala. 2003. Windows and Mirrors: Interaction Design, Digital Art, and the Myth of Transparency. MIT Press, Cambridge, MA.
12. Oliver Bown and Jon McCormack. 2009. Creative Agency: A Clearer Goal for Artificial Life in the Arts. In ECAL (2) (Lecture Notes in Computer Science), George Kampis, István Karsai, and Eörs Szathmáry (Eds.), Vol. 5778. Springer, 254--261.
13. G. Bradski. 2000. The OpenCV Library. Dr. Dobb's Journal of Software Tools (2000).
14. Rob Brezsney. 2018. Free Will Astrology. https://freewillastrology.com. Accessed: 2018-12-12.
15. Joseph Bullington. 2005. Affective computing and emotion recognition systems: the future of biometric surveillance? In Proceedings of the 2nd Annual Conference on Information Security Curriculum Development. ACM, 95--99.
16. Rafael A. Calvo, Karthik Dinakar, Rosalind Picard, and Pattie Maes. 2016. Computing in mental health. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM, 3438--3445.
17. Antonio R. Damasio. 1994. Descartes' Error: Emotion, Reason, and the Human Brain. G.P. Putnam, New York.
18. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. CoRR abs/1810.04805 (2018). http://arxiv.org/abs/1810.04805
19. Paul Dourish. 2001. Where the Action Is: The Foundations of Embodied Interaction. MIT Press, Cambridge, MA.
20. Cameron M. Doyle and Kristen A. Lindquist. 2017. Language and Emotion. The Science of Facial Expression (2017), 415.
21. Paul Ekman. 1999. Basic emotions. Handbook of Cognition and Emotion (1999), 45--60.
22. Maria Gendron and Lisa Feldman Barrett. 2018. Emotion perception as conceptual synchrony. Emotion Review 10, 2 (2018), 101--110.
23. Maria Gendron, Kristen A. Lindquist, Lawrence Barsalou, and Lisa Feldman Barrett. 2012. Emotion words shape emotion percepts. Emotion 12, 2 (2012), 314.
24. Helena Goscilo. 2010. Vision, Vanitas, and Veritas: The Mirror in Art. Studies in 20th & 21st Century Literature 34, 2 (2010), 282--319.
25. Marc Hassenzahl, Kai Eckoldt, Sarah Diefenbach, Matthias Laschke, Eva Lenz, and Joonhwan Kim. 2013. Designing Moments of Meaning and Pleasure. Experience Design and Happiness. International Journal of Design 7, 3 (2013), 21--31.
26. Akpa Akpro Elder Hippocrate, Edith Talina Luhanga, Takata Masashi, Ko Watanabe, and Keiichi Yasumoto. 2017. Smart gyms need smart mirrors: design of a smart gym concept through contextual inquiry. In UbiComp '17: Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2017 ACM International Symposium on Wearable Computers. ACM, New York, NY, 658--661.
27. Kristina Höök. 2004. User-centred design and evaluation of affective interfaces. In From Brows to Trust. Springer, 127--160.
28. Kristina Höök. 2009. Affective loop experiences: designing for interactional embodiment. Philosophical Transactions of the Royal Society B: Biological Sciences 364, 1535 (2009), 3585--3595.
29. Sue Jackson, Bob Eklund, and Andrew Martin. 2010. The FLOW Manual: The Manual for the Flow Scales. Mind Garden, Inc., Brisbane, Queensland.
30. Rachel Jacobs, Holger Schnädelbach, Nils Jäger, Silvia Leal, Robin Shackford, Steve Benford, and Roma Patel. 2019. The Performative Mirror Space. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). ACM, New York, NY, Article 400, 14 pages. DOI: https://doi.org/10.1145/3290605.3300630
31. Pegah Karimi, Mary Lou Maher, Nicholas Davis, and Kazjon Grace. 2019. Deep Learning in a Computational Model for Conceptual Shifts in a Co-Creative Design System. In Proceedings of the 10th International Conference on Computational Creativity, Kazjon Grace, Michael Cook, Dan Ventura, and Mary Lou Maher (Eds.). Association for Computational Creativity, UNC Charlotte, North Carolina, 17--24.
32. Karim S. Kassam and Wendy Berry Mendes. 2013. The Effects of Measuring Emotion: Physiological Reactions to Emotional Situations Depend on whether Someone Is Asking. PLOS ONE 8, 6 (June 2013), 1--8. DOI: https://doi.org/10.1371/journal.pone.0064959
33. Janin Koch, Andrés Lucero, Lena Hegemann, and Antti Oulasvirta. 2019. May AI?: Design Ideation with Cooperative Contextual Bandits. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19), Anind Dey and Shengdong Zhao (Eds.). ACM, New York, NY, Paper 633.
34. Zoltán Kövecses. 2003. Metaphor and Emotion: Language, Culture, and Body in Human Feeling. Cambridge University Press.
35. Myron W. Krueger, Thomas Gionfriddo, and Katrin Hinrichsen. 1985. VIDEOPLACE – An Artificial Reality. In ACM CHI '85 Proceedings. ACM, New York, 35--40.
36. Lucian Leahu and Phoebe Sengers. 2014. Freaky: Performing Hybrid Human-Machine Emotion. In Proceedings of the 2014 Conference on Designing Interactive Systems (DIS '14). ACM, New York, NY, 607--616. DOI: https://doi.org/10.1145/2598510.2600879
37. Kristen A. Lindquist and Lisa Feldman Barrett. 2008. Constructing emotion: the experience of fear as a conceptual act. Psychological Science 19, 9 (Sep 2008), 898--903. DOI: https://doi.org/10.1111/j.1467-9280.2008.02174.x
38. Kristen A. Lindquist, Ajay B. Satpute, and Maria Gendron. 2015. Does language do more than communicate emotion? Current Directions in Psychological Science 24, 2 (2015), 99--108.
39. Kristen A. Lindquist, Tor D. Wager, Hedy Kober, Eliza Bliss-Moreau, and Lisa Feldman Barrett. 2012. The brain basis of emotion: a meta-analytic review. Behavioral and Brain Sciences 35, 3 (Jun 2012), 121--143. DOI: https://doi.org/10.1017/S0140525X11000446
40. Rafael Lozano-Hemmer. 2015. "Redundant Assembly" (video documentation). https://vimeo.com/178619691
41. Jon McCormack, Toby Gifford, Patrick Hutchings, Maria Teresa Llano Rodriguez, Matthew Yee-King, and Mark d'Inverno. 2019. In a Silent Way: Communication Between AI and Improvising Musicians Beyond Sound. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19), Anind Dey and Shengdong Zhao (Eds.). ACM, New York, NY, Paper 38. DOI: https://doi.org/10.1145/3290605.3300268
42. Elisa D. Mekler and Kasper Hornbæk. 2019. A Framework for the Experience of Meaning in Human-Computer Interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). ACM, New York, NY, Article 225, 15 pages. DOI: https://doi.org/10.1145/3290605.3300455
43. Sabine Melchior-Bonnet. 2002. The Mirror: A History (English edition; originally published in French in 1994 as Histoire du miroir). Routledge, New York, NY.
44. S. Melchior-Bonnet, K.H. Jewett, and J. Delumeau. 2001. The Mirror: A History. Routledge. https://books.google.com.au/books?id=P-c1g6QIPHIC
45. Donald A. Norman. 1990. The Design of Everyday Things (1st Doubleday/Currency ed.). Doubleday, New York.
46. Mark Pendergrast. 2003. Mirror Mirror: A History of the Human Love Affair with Reflection. Basic Books, New York, NY.
47. Rosalind W. Picard. 1997. Affective Computing. MIT Press, Cambridge, MA.
48. Rosalind W. Picard. 2009. Future affective technology for autism and emotion communication. Philosophical Transactions of the Royal Society B: Biological Sciences 364, 1535 (2009), 3575--3584.
49. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog (2019). https://openai.com/blog/better-language-models
50. A. S. M. Mahfujur Rahman, Thomas T. Tran, Sk Alamgir Hossain, and Abdulmotaleb El Saddik. 2010. Augmented Rendering of Makeup Features in a Smart Interactive Mirror System for Decision Support in Cosmetic Products Selection. In DS-RT '10: Proceedings of the 2010 IEEE/ACM 14th International Symposium on Distributed Simulation and Real Time Applications. IEEE Computer Society Press, Washington, D.C., 203--206.
51. Hiranmayi Ranganathan, Shayok Chakraborty, and Sethuraman Panchanathan. 2016. Multimodal emotion recognition using deep learning architectures. In 2016 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 1--9.
52. Phoebe Sengers, Kirsten Boehner, Simeon Warner, and Tom Jenkins. 2005. Evaluating affector: co-interpreting what 'works'. In CHI 2005 Workshop on Innovative Approaches to Evaluating Affective Interfaces.
53. Robert J. Sternberg, B. E. Conway, J. L. Ketron, and M. Bernstein. 1981. People's conceptions of intelligence. Journal of Personality and Social Psychology 41, 1 (July 1981), 37--55.
54. Jared B. Torre and Matthew D. Lieberman. 2018. Putting feelings into words: Affect labeling as implicit emotion regulation. Emotion Review 10, 2 (2018), 116--124.
55. Camille Utterback. 1999. Text Rain (Romy Achituv & Camille Utterback). (1999). http://camilleutterback.com/projects/text-rain/
56. Michel Valstar, Björn Schuller, Kirsty Smith, Florian Eyben, Bihan Jiang, Sanjay Bilakhia, Sebastian Schnieder, Roddy Cowie, and Maja Pantic. 2013. AVEC 2013: the continuous audio/visual emotion and depression recognition challenge. In Proceedings of the 3rd ACM International Workshop on Audio/Visual Emotion Challenge. ACM, 3--10.
57. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. 5998--6008.
58. Alessandro Vinciarelli, Maja Pantic, Hervé Bourlard, and Alex Pentland. 2008. Social signal processing: state-of-the-art and future perspectives of an emerging domain. In Proceedings of the 16th ACM International Conference on Multimedia (MM '08). ACM Press, 1061--1070. DOI: https://doi.org/10.1145/1459359.1459573
59. Paul Viola and Michael J. Jones. 2004. Robust Real-Time Face Detection. International Journal of Computer Vision 57, 2 (May 2004), 137--154. DOI: https://doi.org/10.1023/B:VISI.0000013087.49260.fb
60. Jayne Wallace, Jon Rogers, Michael Shorter, Pete Thomas, Martin Skelly, and Richard Cook. 2018. The SelfReflector: Design, IoT and the High Street. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, Article 423, 12 pages. DOI: https://doi.org/10.1145/3173574.3173997
61. Peter Wright and John McCarthy. 2004. Technology as Experience. MIT Press, Cambridge, MA.

Published in

CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
April 2020, 10,688 pages
ISBN: 978-1-4503-6708-0
DOI: 10.1145/3313831

Copyright © 2020 ACM

            Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 23 April 2020


            Qualifiers

            • research-article

            Acceptance Rates

Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%
