Research article
DOI: 10.1145/3399715.3399928

Emotions on the Go: Mobile Emotion Assessment in Real-Time using Facial Expressions

Published: 02 October 2020

ABSTRACT

Exploiting emotions for user interface evaluation has become an increasingly important research objective in Human-Computer Interaction. Emotions are usually assessed through surveys, which do not allow information to be collected in real time. In our work, we suggest the use of smartphones for mobile emotion assessment, using the front-facing camera as a tool for emotion detection based on facial expressions. Such information can be used to reflect on emotional states or to drive emotion-aware user interface adaptation. We collected facial expressions along with app usage data in a two-week field study consisting of a one-week training phase and a one-week testing phase. We built and evaluated a person-dependent classifier, yielding an average classification improvement of 33% compared to classifying facial expressions alone. Furthermore, we correlated the estimated emotions with concurrent app usage to draw insights into changes in mood. Our work is complemented by a discussion of the feasibility of probing emotions on the go and of potential use cases for future emotion-aware applications.
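To illustrate how such a person-dependent classifier could be assembled, the sketch below (not the authors' pipeline) trains a per-user model on facial-expression features, for example OpenFace-style action-unit intensities, combined with the category of the foreground app, using the first study week for training and the second for testing. The column names, label set, and random-forest model are illustrative assumptions.

  # Hypothetical sketch of a person-dependent emotion classifier combining
  # facial-expression features with concurrent app usage. Column names,
  # labels, and the model choice are assumptions, not the paper's method.
  import pandas as pd
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.metrics import accuracy_score

  def train_and_evaluate(samples: pd.DataFrame) -> float:
      """samples: one row per captured frame for a single user, with
      action-unit intensity columns (e.g. AU01_r ... AU45_r), an
      app_category column, week in {"train", "test"}, and an emotion label."""
      # One-hot encode app categories before splitting so both weeks share columns.
      features = pd.get_dummies(
          samples.drop(columns=["emotion", "week"]),
          columns=["app_category"],
      )
      train = samples["week"] == "train"
      model = RandomForestClassifier(n_estimators=100, random_state=0)
      model.fit(features[train], samples.loc[train, "emotion"])
      predictions = model.predict(features[~train])
      return accuracy_score(samples.loc[~train, "emotion"], predictions)

Training one such model per participant, rather than a single global model, is what makes the classifier person-dependent; the resulting test-week accuracy can then be compared against a baseline that uses facial-expression features only.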


Published in

AVI '20: Proceedings of the International Conference on Advanced Visual Interfaces
September 2020, 613 pages
ISBN: 9781450375351
DOI: 10.1145/3399715

          Copyright © 2020 ACM


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 2 October 2020

          Acceptance Rates

AVI '20 Paper Acceptance Rate: 36 of 123 submissions, 29%. Overall Acceptance Rate: 107 of 408 submissions, 26%.
