Assessing the Performance Quality of Google Translate in Translating English and Persian Newspaper Texts Based on the MQM-DQF Model
Subjects: Journal of Language and Translation
Zahra Foradi 1, Jalilollah Faroughi 2, Mohammad Reza Rezaeian Delouei 3
1 - Department of English Language and Literature, University of Birjand, Birjand, Iran
2 - Department of English Language, University of Birjand, Birjand, Iran
3 - Department of English Language and Literature, University of Birjand, Birjand, Iran
Keywords: Accuracy, Fluency, Translation Quality Assessment (TQA), Machine Translation (MT), MQM-DQF Model
Abstract:
The use of machine translation to communicate and to access information has become increasingly common. Various translation systems and services are available on the Internet to enable interlingual communication, and the speed and low cost of machine translation have further contributed to its popularity. Even though the quality of machine translation is generally lower than that of human translation, it remains significant in many respects. The MQM-DQF model provides a standardized error typology for the objective, quantitative assessment of translation quality; in this model, two criteria, accuracy and fluency, are used to assess machine translation quality. In this study, the MQM-DQF model was used to assess the quality of Google Translate's performance in translating English and Persian newspaper texts. Five texts from Persian newspapers and five from English newspapers were randomly selected and translated by Google Translate, both sentence by sentence and as whole texts. The translated texts were assessed on the basis of the MQM-DQF model: translation errors were identified and coded at three severity levels (critical, major, and minor), and by counting and weighting the errors, the percentage scores for accuracy and fluency were calculated for each translated text. The results showed that Google Translate performs better when translating from Persian into English, and that sentence-level translation outperforms translation of the whole text. Moreover, translations of different types of newspaper texts (economic, cultural, sports, political, and scientific) were not of the same quality.
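For readers unfamiliar with how such error counts become percentage scores, the sketch below illustrates one common MQM-style calculation: each annotated error carries a severity weight, the weighted penalties are summed per quality dimension, and the total is normalized by text length. The severity weights (minor = 1, major = 5, critical = 10), the per-word normalization, and the function names are illustrative assumptions for demonstration only, not the exact scheme used in this study.

```python
# Minimal sketch of an MQM-style weighted quality score (illustrative only).
# Assumed severity weights: minor = 1, major = 5, critical = 10; the study's
# own weighting and normalization may differ.

SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def mqm_score(errors, word_count):
    """Return a quality score in [0, 100] from a list of (dimension, severity) errors."""
    penalty = sum(SEVERITY_WEIGHTS[severity] for _, severity in errors)
    return max(0.0, 100.0 * (1 - penalty / word_count))

def dimension_scores(errors, word_count):
    """Score each quality dimension (here: accuracy and fluency) separately."""
    return {
        dim: mqm_score([e for e in errors if e[0] == dim], word_count)
        for dim in ("accuracy", "fluency")
    }

# Example: a 120-word machine-translated paragraph with three annotated errors.
annotated = [("accuracy", "critical"), ("accuracy", "minor"), ("fluency", "major")]
print(dimension_scores(annotated, word_count=120))
# -> {'accuracy': 90.83..., 'fluency': 95.83...}
```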
Aabedi, F. (2017). Quality estimation and generated content of machine translation programs: Babylon 10 Premium Pro review and Google Translator (Master's thesis, Faculty of Literature and Human Science, Imam Reza International University, Mashhad, Iran). Retrieved May 8, 2021, from https://ganj.irandoc.ac.ir/#/articles/8fb842d6931a485d5874ea36c95e49cb
Aiken, M., & Balan, S. (2011). An analysis of Google Translate accuracy. Translation Journal, 16(2). Retrieved July 12, 2021, from https://translationjournal.net/journal/56google.htm
Aiken, M. (2019). An updated evaluation of Google Translate accuracy. Studies in Linguistics and Literature, 3(3), 253. https://doi.org/10.22158/sll.v3n3p253
Andishe Borujeni, F. (2020). The quality of translation of cultural elements translated by human and machine in the novel Natour Dasht based on Waddington's model (Master's thesis, Faculty of Foreign Languages, Sheikh Baha'i University). Retrieved April 25, 2021, from https://ganj.irandoc.ac.ir/#/articles/24d46684b88c4d7010f7c2de74fb27ac
Benjamin, M. (2019). When & how to use Google Translate. Retrieved March 27, 2021, from https://www.teachyoubackwards.com/how-to-use-google-translate/
Collins English Dictionary online. (2021). Retrieved June 9, 2021, from https://www.collinsdictionary.com/dictionary/english/tick-all-the-boxes
Doherty, S. (2017). Issues in human and automatic translation quality assessment. In D. Kenny (Ed.), Human issues in translation technology (pp. 131–148). London, UK: Routledge.
Dorr, B., Olive, J., McCary, J., & Christianson, C. (2011). Machine translation evaluation and optimization. In J. Olive, C. Christianson, & J. McCary (Eds.), Handbook of natural language processing and machine translation (p. 745). Springer. https://doi.org/10.1007/978-1-4419-7713-7_5
Ghasemi, H., & Hashemian, M. (2016). A comparative study of Google Translate translations: An error analysis of English-to-Persian and Persian-to-English translations. English Language Teaching, 9(3), 13. https://doi.org/10.5539/elt.v9n3p13
Hakiminejad, A., & Alaeddini, M. (2016). A contrastive analysis of machine translation (Google Translate) and human translation: Efficacy in translating verb tense from English to Persian. Mediterranean Journal of Social Sciences, 7(4S2), 40. https://doi.org/10.5901/mjss.2016.v7n4S2p40
Karimnia, A. (2011). Waddington's model of translation quality assessment: A critical inquiry. Elixir Ling. & Trans., 40, 5219–5224.
Lommel, A., & Melby, A. (2018). MQM-DQF: A good marriage (translation quality for the 21st century). Paper presented at the 13th Conference of the Association for Machine Translation in the Americas. Retrieved December 5, 2020, from www.conference.amtaweb.org
Lommel, A., et al. (2018). Harmonised metric (QT21 Deliverable 3.1, p. 23). Retrieved March 13, 2021, from http://www.qt21.eu/
Maučec, M., & Donaj, G. (2019). Machine translation and the evaluation of its quality. IntechOpen. https://doi.org/10.5772/intechopen.89063. Retrieved July 28, 2021, from https://www.intechopen.com/chapters/68953
Moorkens, J., et al. (2018). Translation quality assessment: From principles to practice. Berlin, Germany: Springer.
Moradi, S. (2015). Comparison of translation of scientific-technical texts by two web-based translation machines based on Eurometrics (Master's thesis, Faculty of Literature and Humanities, Shahid Bahonar University, Kerman, Iran). Retrieved February 22, 2021, from https://ganj.irandoc.ac.ir/#/articles/4182d0197e7b33873b141664483f5fd3
O'Brien, S. (2012). Towards a dynamic quality evaluation model for translation. The Journal of Specialised Translation, 17, 55–77. Retrieved April 18, 2021, from https://www.jostrans.org/issue17/art_obrien.pdf
Pajhooheshnia, M. (2015). Manual and automatic comparative evaluation in online machine translation systems; case study: Persian-English translation of technical texts (Master's thesis, Sheikhbahaei University, Esfahan, Iran). Retrieved April 25, 2021, from https://ganj.irandoc.ac.ir/#/articles/41b29375cf566c1ae9ff607ea8029cfe
Saldanha, G., & O'Brien, S. (2013). Research methodologies in translation studies. London & New York: St. Jerome Publishing.
Sharifiyan, L. (2018). Evaluating cohesion and comprehensibility in Persian-English machine translated texts (Master's thesis, Faculty of Persian Literature and Foreign Languages, Allameh Tabataba'i University, Tehran, Iran). Retrieved May 2, 2021, from https://ganj.irandoc.ac.ir/#/articles/e8bb80d702dc3792ab08e01f4c2e36ad
Torkaman, E. (2013). A comparative study of quality assessment of machine (Google Translate) and human translation of proverbs from English to Persian (Master's thesis, Faculty of Foreign Languages, Department of English, Islamic Azad University, Central Tehran Branch, Tehran, Iran). Retrieved July 21, 2021, from https://ganj.irandoc.ac.ir/#/articles/51d717a038e26c50aee9df9beb143c0f
Turovsky, B. (2016). Found in translation: More accurate, fluent sentences in Google Translate. Retrieved April 27, 2021, from https://blog.google/products/translate/found-translation-more-accurate-fluent-sentences-google-translate/
Vahedi Kakhki, A. (2018). Quality evaluation of Persian crowdsourcing translations of Wikipedia articles by MQM (Master's thesis, Faculty of Literature and Humanities, Department of Foreign Languages, Shahid Bahonar University, Kerman, Iran). Retrieved February 11, 2021, from https://ganj.irandoc.ac.ir/#/articles/1f803e5d33f98f3de54a959d92bbb2d2
Waddington, C. (2001). Different methods of evaluating student translations: The question of validity. Meta: Translators' Journal, 46(2), 311–325. Retrieved April 18, 2021, from https://www.erudit.org/fr/revues/meta/2001-v46-n2-meta159/004583ar.pdf