Tag Recommendation in Social Networks with the Help of Optimized Text Summarization and Fuzzification
Subject Areas: Computer Engineering
Mahsa Rahimi 1, Homayun Motameni 2, Ebrahim Akbari 3, Hossein Nematzadeh 4
1 - Islamic Azad University, Sari
2 - Department of Computer Engineering, Islamic Azad University, Sari, Mazandaran
3 - Department of Computer Engineering, Islamic Azad University, Sari, Mazandaran
4 - Department of Computer Engineering, Islamic Azad University, Sari, Mazandaran
Keywords: Text summarization, Fuzzy, Neural network, Hashtag recommendation
Abstract:
Hashtags, i.e., terms prefixed by a # symbol, are widely used on social media platforms such as Twitter and Instagram. Hashtags carry rich sentiment information about users' favorite topics and can make a text more accessible and popular. This paper proposes a model for the hashtag recommendation problem based on an automatic summarizer that combines a deep neural network with a fuzzy logic system, together with several semantic text-mining models. The summary is first generated with a Restricted Boltzmann Machine (RBM); an Extreme Learning Machine (ELM) then improves the training data, and finally a fuzzy rule-based method is applied to the sentences to build the result. Experiments on two public data sets showed that the proposed model outperforms related approaches and is more efficient than previous methods.
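To illustrate the final stage the abstract describes, the sketch below shows how a fuzzy rule-based method might score sentences for an extractive summary. This is a minimal illustration, not the authors' implementation: the features (position, length, keyword overlap), the triangular membership functions, and the rule set are all assumptions made for the example.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzy_score(position, length, overlap):
    """Score a sentence from three normalized features in [0, 1]."""
    early = tri(position, -0.5, 0.0, 0.5)        # near the start of the text
    medium_len = tri(length, 0.1, 0.5, 0.9)      # mid-length sentences
    high_overlap = tri(overlap, 0.3, 1.0, 1.7)   # strong keyword overlap
    # Mamdani-style rules: AND = min, aggregate rule outputs with max.
    rules = [
        min(early, high_overlap),        # early and on-topic -> important
        min(medium_len, high_overlap),   # readable and on-topic -> important
        early * 0.5,                     # position alone counts a little
    ]
    return max(rules)

def summarize(sentences, keywords, k=2):
    """Return the k highest-scoring sentences, kept in document order.

    Tokenization is naive (whitespace split, lowercase); a real system
    would normalize punctuation and use richer features.
    """
    n = len(sentences)
    scored = []
    for i, s in enumerate(sentences):
        words = set(s.lower().split())
        pos = i / max(n - 1, 1)
        length = min(len(words) / 20.0, 1.0)
        overlap = len(words & keywords) / max(len(keywords), 1)
        scored.append((fuzzy_score(pos, length, overlap), i, s))
    top = sorted(scored, reverse=True)[:k]
    return [s for _, _, s in sorted(top, key=lambda t: t[1])]
```

For example, `summarize(["Hashtags mark topics on social media.", "They make posts easier to find.", "Unrelated filler sentence here."], {"hashtags", "topics", "social"}, k=1)` keeps the first, keyword-rich sentence. The paper's pipeline additionally uses an RBM and an ELM to learn the sentence representations that feed such a scoring step.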