List of articles (by subject) Natural Language Processing


    • Open Access Article

      1 - Comparison of Pre-Trained Models in Extractive Text Summarization of Mobile User Reviews
      Mehrdad Razavi Dehkordi, Hamid Rastegari, Akbar Nabiollahi Najafabadi, Taghi Javdani Gandomani
      Since the inception of mobile apps, user feedback has been extremely valuable to app developers, as it conveys users' sentiments, bug reports, and new requirements. Because of the large volume of reviews, summarizing them manually is difficult and error-prone. Much prior work addresses extractive summarization of user reviews; however, most of it relies on older machine learning or natural language processing methods, and where a transformer-based summarization model has been trained, it has not been established whether that model is actually useful for summarizing mobile user reviews. In other words, such models are presented as general-purpose summarizers, with no investigation of their suitability for this special-purpose task. In this article, 1000 reviews were first randomly selected from a Kaggle dataset of user reviews and then given to four pre-trained models, bart_large_cnn, bart_large_xsum, mT5_multilingual_XLSum, and Falconsai Text_Summarization, for summarization. The ROUGE-1, ROUGE-2, and ROUGE-L metrics were computed separately for each model, and the pre-trained Falconsai model, with scores of 0.6464 on ROUGE-1, 0.6140 on ROUGE-2, and 0.6346 on ROUGE-L, was found to be the best model for summarizing Play Store user reviews.
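      The evaluation described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' code; the Hugging Face model identifiers, the generation settings, and the use of the `transformers` and `rouge_score` packages are assumptions made for the example.

      ```python
      # Hypothetical sketch: summarize reviews with the pre-trained models named in
      # the abstract and average ROUGE-1/2/L F1 scores against reference summaries.
      from transformers import pipeline
      from rouge_score import rouge_scorer

      # Assumed Hugging Face identifiers for the four models mentioned in the abstract.
      MODELS = [
          "facebook/bart-large-cnn",
          "facebook/bart-large-xsum",
          "csebuetnlp/mT5_multilingual_XLSum",
          "Falconsai/text_summarization",
      ]

      scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

      def evaluate(model_name, reviews, references):
          """Summarize each review and return mean ROUGE-1/2/L F1 against references."""
          summarizer = pipeline("summarization", model=model_name)
          totals = {"rouge1": 0.0, "rouge2": 0.0, "rougeL": 0.0}
          for review, reference in zip(reviews, references):
              summary = summarizer(review, max_length=60, min_length=10,
                                    truncation=True)[0]["summary_text"]
              scores = scorer.score(reference, summary)
              for key in totals:
                  totals[key] += scores[key].fmeasure
          return {key: value / len(reviews) for key, value in totals.items()}

      # Usage (placeholder data; the paper uses 1000 Kaggle Play Store reviews):
      # for name in MODELS:
      #     print(name, evaluate(name, reviews, references))
      ```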