DOI: 10.1145/2993148.2997633
Short Paper

Group happiness assessment using geometric features and dataset balancing

Published: 31 October 2016

ABSTRACT

This paper presents the techniques employed in our team's submissions to the 2016 Emotion Recognition in the Wild (EmotiW) challenge, for the sub-challenge of group-level emotion recognition. The objective of this sub-challenge is to estimate the happiness intensity of groups of people in consumer photos. We follow a predominantly bottom-up approach, in which the happiness level of each face is estimated separately. The proposed technique is based on geometric features derived from 49 facial points. These features are used to train a model on a subset of the HAPPEI dataset, balanced across expression and head pose, using Partial Least Squares regression. The trained model exhibits competitive performance for a range of non-frontal poses, while also offering a semantic interpretation of the facial distances that may contribute positively or negatively to group-level happiness. Various techniques for combining these per-face estimates into a group-level prediction are explored, including the distribution of expressions, the significance of each face relative to the whole group, and mean estimation. Our best submission achieves an RMSE of 0.8316 on the competition test set, which compares favorably to the baseline RMSE of 1.30.
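To make the bottom-up pipeline concrete, the sketch below (not the authors' code) illustrates the two stages the abstract describes: a Partial Least Squares regressor trained on geometric features from 49 facial points, followed by aggregation of per-face estimates into a group score. The pairwise-distance feature set, the eye-center landmark indices, the number of latent components, the face-area weighting, and the synthetic training data are all illustrative assumptions.

    import numpy as np
    from scipy.spatial.distance import pdist
    from sklearn.cross_decomposition import PLSRegression

    def geometric_features(landmarks):
        """landmarks: (49, 2) array of facial points for one face.
        Returns all pairwise inter-point distances, scale-normalized by the
        distance between two hypothetical eye-center points (indices 19, 28)."""
        iod = np.linalg.norm(landmarks[19] - landmarks[28])
        return pdist(landmarks) / iod  # 49 * 48 / 2 = 1176 distances

    # Synthetic stand-in for a pose/expression-balanced training subset
    # (in the paper this role is played by a balanced subset of HAPPEI).
    rng = np.random.default_rng(0)
    train_landmarks = rng.normal(size=(200, 49, 2))   # 200 faces
    train_intensity = rng.uniform(0, 5, size=200)     # happiness level in [0, 5]

    X = np.stack([geometric_features(lm) for lm in train_landmarks])
    pls = PLSRegression(n_components=15)  # latent dimension is a free parameter
    pls.fit(X, train_intensity)

    def group_happiness(faces):
        """faces: list of (landmarks, face_area) pairs for one photo.
        Bottom-up aggregation: predict each face separately, then take a
        face-area-weighted mean as one proxy for per-face significance."""
        feats = np.stack([geometric_features(lm) for lm, _ in faces])
        scores = pls.predict(feats).ravel()
        weights = np.array([area for _, area in faces], dtype=float)
        return float(np.average(scores, weights=weights))

    photo = [(rng.normal(size=(49, 2)), 90.0), (rng.normal(size=(49, 2)), 40.0)]
    print(group_happiness(photo))

The other aggregation strategies the abstract mentions, such as using the distribution of expressions across the group, would replace the weighted mean inside group_happiness.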


Published in

ICMI '16: Proceedings of the 18th ACM International Conference on Multimodal Interaction
October 2016, 605 pages
ISBN: 9781450345569
DOI: 10.1145/2993148

      Copyright © 2016 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States

Acceptance Rates

Overall acceptance rate: 453 of 1,080 submissions, 42%
