
Enhancing Top-N Recommendation Using Stacked Autoencoder in Context-Aware Recommender System


Abstract

Context-aware recommender systems (CARS) are a vital module of many corporations, especially within the online commerce domain, where consumers are provided with recommendations about products potentially relevant to them. A traditional CARS that utilizes deep learning models assumes that a user's preferences can be predicted from ratings, reviews, demographics, etc. However, the feedback given by users is often conflicting when the rating score is compared with the sentiment behind the reviews. Therefore, a model that utilizes either ratings or reviews alone for predicting items for top-N recommendation may generate unsatisfactory recommendations in many cases. To address this problem, this paper proposes an effective context-specific sentiment based stacked autoencoder (CSSAE) that learns the concrete preference of the user by merging the rating and the reviews for a context-specific item into a stacked autoencoder. Hence, the user's preferences are consistently predicted, enhancing top-N recommendation quality by adapting the recommended list to the exact context in which an active user is operating. Experiments on four Amazon (5-core) datasets demonstrate that the proposed CSSAE model consistently outperforms state-of-the-art top-N recommendation methods on various effectiveness metrics.



Author information


Corresponding author

Correspondence to S. Abinaya.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Appendix

This section describes the basic architectural components of the autoencoder and details the changes introduced to adapt the proposed Context-specific Sentiment based Stacked Autoencoder (CSSAE) to the recommendation problem, illustrated with an example and a table listing the parameters and input shapes of each layer.

1.1 An autoencoder consists of three components

  • Encoder: a feedforward, fully connected neural network that compresses, i.e. encodes, the input into a latent-space representation.

  • Code: This part of the network contains the reduced representation of the input that is fed into the decoder.

  • Decoder: a feedforward network with a structure mirroring the encoder; it reconstructs the input back to its original dimensions from the code. A minimal sketch of these three components is given below.
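The following is a minimal sketch of the encoder-code-decoder structure, assuming a Keras-style implementation; the layer widths, activations, and optimizer are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal encoder-code-decoder sketch (illustrative sizes, not the paper's settings).
import tensorflow as tf
from tensorflow.keras import layers, Model

n_items = 19366      # input dimension: number of context-based items (from Table 8)
latent_dim = 128     # size of the code (assumed for illustration)

inputs = layers.Input(shape=(n_items,), name="item_vector")

# Encoder: fully connected layers compressing the input into the code
h = layers.Dense(512, activation="sigmoid", name="encoder_1")(inputs)
code = layers.Dense(latent_dim, activation="sigmoid", name="LatentSpace")(h)

# Decoder: mirrors the encoder and reconstructs the input from the code
h_dec = layers.Dense(512, activation="sigmoid", name="decoder_1")(code)
outputs = layers.Dense(n_items, activation="sigmoid", name="reconstruction")(h_dec)

autoencoder = Model(inputs, outputs, name="basic_autoencoder")
autoencoder.compile(optimizer="adam", loss="mse")
```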

1.2 Changes made to the CSSAE model to fit the recommendation problem

However, to accommodate the multiple-feedback scenario, each context-based item carries multiple feedback signals that are required as input to the network. Therefore, as shown in Fig. 2, the traditional stacked autoencoder was expanded by integrating an additional layer that holds two input layers, one for the rating data and one for the sentiment data. This additional layer acts as the input layer of the context-specific sentiment based stacked autoencoder, and Table 8 provides the parameters of each layer and the layer input shapes.

Table 8  Parameters of each layer and the layer input shapes

Note: the shape of the input layer equals the total number of context-based items in the training set, which is 19,366.

The first layer acts as a dual input layer, i.e. two input layers (R_Score[0][0] and S_Score[0][0]), each linked to its respective custom layer in which element-wise weights are trained for each node. Hence, a given node in the custom layer elementwise_weights_14[0][0], which is connected to R_Score, holds the value \(r_{ij} \times w_{j}^{1}\), and the corresponding node in elementwise_weights_15[0][0], which is connected to S_Score, holds the value \(s_{ij} \times w_{j}^{2}\); both are then fed into the lambda layer, which merges the two inputs and holds \(r_{ij} \times w_{j}^{1} + s_{ij} \times w_{j}^{2}\).

Hence, the corresponding item nodes in the following intermediate layer (lambda_7) hold the value given in Eq. (3):

$$ r_{ij}^{I} = r_{ij} \times w_{j}^{1} + s_{ij} \times w_{j}^{2} $$

where \(r_{ij}^{I}\) is the combined rating value of item j rated by user i, obtained from both \(r_{ij}\), the rating, and \(s_{ij}\), the sentiment score derived from the review given by user i to item j.
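The per-node weighting and merge described above can be sketched as follows, again assuming a Keras-style implementation. The layer names R_Score, S_Score, and lambda_7 mirror the summary above; the custom ElementwiseWeights class, its initializer, and the merge written as a Lambda layer are assumptions made for illustration.

```python
# Sketch of the dual-input merge implementing Eq. (3): r'_ij = r_ij*w1_j + s_ij*w2_j.
from tensorflow.keras import layers

n_items = 19366  # number of context-based items in the training set

class ElementwiseWeights(layers.Layer):
    """Trains one weight per input node and multiplies the input element-wise."""
    def build(self, input_shape):
        self.w = self.add_weight(shape=(input_shape[-1],),
                                 initializer="ones",
                                 trainable=True,
                                 name="elementwise_weights")
    def call(self, x):
        return x * self.w

r_score = layers.Input(shape=(n_items,), name="R_Score")  # rating vector r_ij
s_score = layers.Input(shape=(n_items,), name="S_Score")  # sentiment vector s_ij

weighted_r = ElementwiseWeights(name="elementwise_weights_r")(r_score)  # r_ij * w1_j
weighted_s = ElementwiseWeights(name="elementwise_weights_s")(s_score)  # s_ij * w2_j

# Lambda layer merging the two weighted inputs, as in Eq. (3)
merged = layers.Lambda(lambda t: t[0] + t[1], name="lambda_7")([weighted_r, weighted_s])
```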

This intermediate layer (lambda_7) is in turn linked to M consecutive encoding layers, each concatenated with User_sideInfm, which are employed to learn the hidden representation of the items. The final encoding layer (LatentSpace) is linked to M successive decoding layers that decode the hidden factors acquired from the corresponding encoders. The final decoding layer (UserScorePred) acts as the output, where the actual concrete ratings of the items are predicted.
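Continuing the sketch above (reusing layers, merged, r_score, s_score, and n_items from the previous block), the encoder-decoder stack with user side information might look as follows; the number of layers M, the layer widths, the side-information dimension, and the activations are assumptions, not the paper's reported settings.

```python
# Continuation of the sketch: stacked encoder concatenated with user side
# information, followed by a mirrored decoder (sizes are illustrative).
import tensorflow as tf

side_dim = 32
user_side = layers.Input(shape=(side_dim,), name="User_sideInfm")

h = merged
for i, units in enumerate([512, 256]):            # M = 2 encoding layers (assumed)
    h = layers.Concatenate(name=f"concat_side_{i}")([h, user_side])
    h = layers.Dense(units, activation="sigmoid", name=f"encoder_{i}")(h)

latent = layers.Dense(128, activation="sigmoid", name="LatentSpace")(h)

d = latent
for i, units in enumerate([256, 512]):            # M mirrored decoding layers
    d = layers.Dense(units, activation="sigmoid", name=f"decoder_{i}")(d)

user_score_pred = layers.Dense(n_items, name="UserScorePred")(d)  # predicted ratings

cssae = tf.keras.Model(inputs=[r_score, s_score, user_side],
                       outputs=user_score_pred, name="cssae_sketch")
cssae.compile(optimizer="adam", loss="mse")
```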

We have selected the intermediate layer (lambda_7) and its successive set of layers as the autoencoder because the nodes in this intermediate layer hold the merged values of both the rating and sentiment scores, as given in Eq. (3):

$$ r_{ij}^{I} = r_{ij} \times w_{j}^{1} + s_{ij} \times w_{j}^{2} $$

Hence, this intermediate layer acts as the input to the autoencoder, which encodes it to learn the hidden representation of the items. Therefore, the latent feature vector (code) in the stacked autoencoder fits the ratings and the reviews simultaneously. The final decoding layer (UserScorePred) acts as the output, where the actual concrete ratings of the context-based items are predicted.

Note

In the proposed methodology, sentiment analysis is performed using TextBlob on users' reviews of context-based items in order to produce a numerical sentiment score. This numerical sentiment score is given as input to the stacked autoencoder, a fully connected neural network that was expanded by integrating an additional layer holding two input layers, one for the rating data and one for the sentiment data, and training takes place end-to-end.
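A minimal sketch of extracting a numerical sentiment score with TextBlob is shown below; TextBlob's polarity attribute is a real value in [-1, 1], and the rescaling onto a 1-5 rating scale is an assumption made for illustration, not necessarily the paper's exact mapping.

```python
# Sketch: numerical sentiment score from a review text using TextBlob.
from textblob import TextBlob

def sentiment_score(review_text: str) -> float:
    polarity = TextBlob(review_text).sentiment.polarity  # value in [-1, 1]
    return 1.0 + 2.0 * (polarity + 1.0)                  # rescaled to [1, 5] (assumed)

print(sentiment_score("Great battery life, but the screen scratches easily."))
```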


About this article


Cite this article

Abinaya, S., Devi, M.K.K. Enhancing Top-N Recommendation Using Stacked Autoencoder in Context-Aware Recommender System. Neural Process Lett 53, 1865–1888 (2021). https://doi.org/10.1007/s11063-021-10475-0
