An Algorithm to Measure the Attention Rate of Mobile Application Users using App Reviews
Subject Areas: Multimedia Processing, Communications Systems, Intelligent Systems
Mehrdad Razavi Dehkordi 1, Hamid Rastegari 2 *, Akbar Nabiollahi Najafabadi 3, Taghi Javdani Gandomani 4
1 - Department of Computer Engineering, Najafabad Branch, Islamic Azad University, Najafabad, Iran
2 - Assistant Professor, Department of Computer Engineering, Najafabad Branch, Islamic Azad University, Najafabad, Iran
3 - Assistant Professor, Department of Computer Engineering, Najafabad Branch, Islamic Azad University, Najafabad, Iran
4 - Assistant Professor, Department of Computer Science, Shahrekord University, Shahrekord, Iran
Keywords: Mobile application, user attention rate, Google Play Store analysis, user reviews
Abstract:
Introduction: Since the emergence of mobile apps, user reviews have been of great importance to app developers because they contain users' sentiment, bug reports, and feature requests. Due to the large number of reviews, analyzing them manually is a difficult, time-consuming, and error-prone task. Many studies have classified app reviews into categories, but none of them indicates which category users paid more attention to after classification.
Method: In this article, an algorithm called UAR (User Attention Rate) is presented, based on the dates and number of reviews, to measure how much attention users pay to the bug-report and feature-request categories. To evaluate the proposed algorithm, reviews of the WhatsApp, NextCloud, Mozilla Firefox, and VLC Media Player apps were extracted by a web crawler over 30 days and, after classification, given to four software experts. The experts analyzed the reviews and estimated the users' level of attention to each category. The same attention rate was then measured using the proposed method and compared with the experts' estimates.
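The abstract does not give the exact UAR formula; the sketch below only illustrates how a date- and count-based attention rate over classified reviews might be computed in Python. The linear recency weighting, the 30-day window, and the function and variable names are assumptions for illustration, not the paper's definition of UAR.

from datetime import date, timedelta
from typing import List, Optional, Tuple

# Each review is (post_date, category); categories follow the paper's two
# classes: "bug_report" or "feature_request". The recency weighting is an
# illustrative assumption -- the abstract only states that UAR is based on
# the dates and number of reviews.
Review = Tuple[date, str]

def user_attention_rate(reviews: List[Review], category: str,
                        window_days: int = 30,
                        today: Optional[date] = None) -> float:
    """Return an attention rate in [0, 1] for one category.

    Newer reviews get a larger weight (linear decay over the window);
    the result is the category's share of the total weighted reviews.
    """
    today = today or date.today()
    cutoff = today - timedelta(days=window_days)

    def weight(d: date) -> float:
        age = (today - d).days
        return max(0.0, 1.0 - age / window_days)  # linear recency decay

    total = sum(weight(d) for d, _ in reviews if d >= cutoff)
    if total == 0:
        return 0.0
    in_category = sum(weight(d) for d, c in reviews
                      if d >= cutoff and c == category)
    return in_category / total

# Example: which category did users pay more attention to this month?
sample = [
    (date(2023, 9, 15), "bug_report"),
    (date(2023, 9, 16), "bug_report"),
    (date(2023, 9, 1), "feature_request"),
]
print(user_attention_rate(sample, "bug_report", today=date(2023, 9, 18)))
print(user_attention_rate(sample, "feature_request", today=date(2023, 9, 18)))

In this toy example the two recent bug reports outweigh the older feature request, so the bug-report category receives the higher attention rate.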
Results: The proposed algorithm produced acceptable results in calculating users' attention to reviews compared with the values estimated by the experts: the difference between the attention rate calculated by the experts and that calculated by the proposed algorithm was 2.73%.
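The abstract does not specify how the 2.73% figure was aggregated; one plausible reading is a mean absolute difference, in percentage points, between expert-estimated and algorithm-computed attention rates across the studied apps. The snippet below shows that calculation with placeholder values only, not the paper's data.

# Hypothetical expert vs. algorithm attention rates (percent) per app;
# the numbers are placeholders for illustration, not the study's results.
expert = {"WhatsApp": 62.0, "NextCloud": 55.0, "Firefox": 48.0, "VLC": 70.0}
algorithm = {"WhatsApp": 60.0, "NextCloud": 58.0, "Firefox": 50.5, "VLC": 67.5}

diffs = [abs(expert[app] - algorithm[app]) for app in expert]
mean_abs_diff = sum(diffs) / len(diffs)
print(f"mean absolute difference: {mean_abs_diff:.2f} percentage points")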
Discussion: Because of the large number of reviews posted for apps, reviewing them manually is difficult and time-consuming for the development team. A method for measuring the User Attention Rate can save the development team's time and help it implement new features, fix bugs, and make the app successful. Many methods have been proposed for automated review classification, but most of them focus on outdated features or ignore new ones. A new algorithm is presented to calculate the users' attention rate for the feature-request and bug-report categories. Using this algorithm, the development team can tell whether users mainly want existing bugs fixed or new features added to the application.