AI vs. Human Feedback in EFL Writing: Iranian Learners’ Perceptions
Elham Sharifzadeh¹, Ghasem Tayyebi²*, Leila Akbarpour³
1 - PhD Candidate in TEFL, Department of Foreign Languages, Shi.C., Islamic Azad University, Shiraz, Iran
2 - Department of English, Kaz. C., Islamic Azad University, Kazerun, Iran
3 - Islamic Azad University, Shiraz Branch, Shiraz, Iran; Shiraz University, Shiraz, Iran
Abstract:
This study investigates Iranian EFL learners' perceptions of AI-driven corrective feedback, delivered through Grammarly, compared with traditional human feedback for improving writing skills. Using an embedded mixed-methods design, the study collected quantitative survey data from 120 participants and qualitative interview data from 15 students in each feedback group. A questionnaire by Huang and Renandya (2020) assessed attitudes toward AI feedback, focusing on effectiveness, usability, and emotional responses; a questionnaire adapted from Rasool et al. (2023) gathered students' perceptions of written corrective feedback. Quantitative results showed a significant improvement in learners' attitudes toward AI feedback (Z = –5.72, p < .001), with mean perception scores rising from 34.19 to 42.40 (approximate effect size r = .76). In the human feedback group, perceptions of metalinguistic feedback improved (Z = –5.58, p < .001), while negative perceptions of feedback decreased (Z = –3.82, p < .001). Overall, AI feedback was valued for its immediacy and motivational effect, while human feedback remained preferred for depth and clarity.
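The reported effect size follows the standard conversion for a Wilcoxon signed-rank test, r = |Z| / √N. As a worked check, assuming the 120 participants were divided evenly between the two feedback groups (N = 60 per group, an assumption not stated above), r = 5.72 / √60 ≈ .74, broadly consistent with the reported r ≈ .76; small differences can arise from ties or the exact N used in the calculation.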
References
Chen, C. F. E., & Cheng, W. Y. E. (2008). Beyond the design of automated writing evaluation: Pedagogical practices and perceived learning effectiveness in EFL writing classes. Language Learning & Technology, 12(2), 94–112.
Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). Sage.
Grimes, D., & Warschauer, M. (2010). Utility in a fallible tool: A multi-site case study of automated writing evaluation. Journal of Technology, Learning, and Assessment, 8(6), 1–43.
Ellis, R. (2010). Epilogue: A framework for investigating oral and written corrective feedback. Studies in Second Language Acquisition, 32(2), 335–349. https://doi.org/10.1017/S0272263109990544
Fornell, C., & Larcker, D. F. (1981). Structural equation models with unobservable variables and measurement error: Algebra and statistics. Journal of Marketing Research, 18(3), 382–388. https://doi.org/10.1177/002224378101800313
Guo, Q. (2015). The effectiveness of written CF for L2 development: A mixed-method study of written CF types, error categories and proficiency levels (Doctoral dissertation, Auckland University of Technology). Retrieved from https://openrepository.aut.ac.nz/bitstream/handle/10292/9628/GuoQ.pdf?sequence=6&isAllowed.
Hair, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2016). A primer on partial least squares structural equation modeling. Sage.
Huang, S., & Renandya, W. A. (2020). Exploring the integration of automated feedback among lower-proficiency EFL learners. Innovation in Language Learning and Teaching, 14(1), 15–26. https://doi.org/10.1080/17501229.2018.1471083
Lai, Y.-H. (2010). Which do students prefer to evaluate their essays: Peers or computer program. British Journal of Educational Technology, 41(3), 432–454.
Li, J., Link, S., & Hegelheimer, V. (2015). Rethinking the role of automated writing evaluation (AWE) feedback in ESL writing instruction. Journal of Second Language Writing, 27, 1–18.
Marrs, S., Zumbrunn, S., McBride, C., & Stringer, J. K. (2016). Exploring elementary student perceptions of writing feedback. Journal of Educational Psychology, 10(1), 16–28. https://eric.ed.gov/?id=EJ1131811
Mohsen, M. A., & Abdulaziz, A. (2019). Effectiveness of using a hybrid mode of automated writing evaluation system on EFL students’ writing. Teaching English with Technology, 19(1), 118–131.
Nichols, P. D. (2004, April). Evidence for the interpretation and use of scores from an automated essay scorer. Paper presented at the Annual Meeting of the American Educational Research Association (AERA), San Diego, CA.
Purpura, J. E. (2004). Assessing grammar. Cambridge University Press.
Rowe, A., & Wood, L. (2009). Student perceptions and preferences for feedback. Asian Social Science, 4(3), 78–88. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.668.6101
Salami, F. A., & Khadawardi, H. A. (2022). Written corrective feedback in online writing classrooms: EFL students' perceptions and preferences. International Journal of English Language Teaching, 10(2), 12–35. https://tudr.org/id/eprint/335
Sayad Deghatkar, V., Khodareza, M. R., & Valipour, V. (2022). The impact of dynamic written corrective feedback on the accuracy of English passive voice usage in foreign language narrative writing. Biannual Journal of Education Experiences, 5(1), 173–189.
Truscott, J. (2004). Dialogue: Evidence and conjecture on the effects of correction. Journal of Second Language Writing, 13(4), 337–343.
Rasool, U. (2023). School of Foreign Languages, Zhengzhou University, Zhengzhou, Henan, China. Email: ushba.rasool@gmail.com
Wright, B. (1977). Solving measurement problems with the Rasch model. Journal of Educational Measurement, 14(2), 97–116.
Zheng, B., Warschauer, M., & Farkas, G. (2013). Digital writing and diversity: The effects of school laptop programs on literacy processes and outcomes. Journal of Educational Computing Research, 48(3), 267–299.