DOI: 10.1145/3209978.3210093

A Co-Memory Network for Multimodal Sentiment Analysis

Published: 27 June 2018

ABSTRACT

With the rapid growth in the diversity and modality of user-generated content, sentiment analysis, a core area of social media analytics, has moved beyond traditional text-based analysis, and multimodal sentiment analysis has become an important research topic in recent years. Most existing work on multimodal sentiment analysis extracts features from image and text separately and directly combines them to train a classifier. However, visual and textual information in multimodal data can mutually reinforce and complement each other when analyzing people's sentiment, and previous research ignores this mutual influence between image and text. To fill this gap, we consider the interrelation of visual and textual information and propose a novel co-memory network that iteratively models the interactions between visual content and textual words for multimodal sentiment analysis. Experimental results on two public multimodal sentiment datasets demonstrate the effectiveness of the proposed model compared with state-of-the-art methods.
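The core idea described above — two memories (one textual, one visual) that repeatedly guide attention over each other before the joint feature is classified — can be illustrated with a minimal sketch. This is not the authors' model: the full paper learns attention parameters end-to-end, whereas the plain dot-product attention, mean-pooled initial summaries, and the function names below are illustrative assumptions only.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def co_memory(text_mem, vis_mem, hops=2):
    """Sketch of iterative co-attention between a textual memory
    (n_words x d word vectors) and a visual memory (m_regions x d
    region features). Each hop re-attends over one modality guided
    by the other's current summary, so the two modalities mutually
    influence each other instead of being fused once."""
    t = text_mem.mean(axis=0)  # initial textual summary
    v = vis_mem.mean(axis=0)   # initial visual summary
    for _ in range(hops):
        # attend over words, guided by the current visual summary
        a_t = softmax(text_mem @ v)
        t = a_t @ text_mem
        # attend over regions, guided by the updated textual summary
        a_v = softmax(vis_mem @ t)
        v = a_v @ vis_mem
    # joint multimodal feature, fed to a sentiment classifier
    return np.concatenate([t, v])
```

In this toy form the only "mutual influence" is the alternating dot-product attention; the paper's contribution is precisely that this interaction is modeled iteratively rather than by a single concatenation of independently extracted features.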



Published in
          SIGIR '18: The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval
          June 2018
          1509 pages
ISBN: 9781450356572
DOI: 10.1145/3209978

          Copyright © 2018 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

          Publisher

          Association for Computing Machinery

          New York, NY, United States



          Qualifiers

• Short paper

          Acceptance Rates

SIGIR '18 paper acceptance rate: 86 of 409 submissions (21%). Overall acceptance rate: 792 of 3,983 submissions (20%).
