Designing and Validating a Self-Assessment Tool for Improving the Speaking Skill of Iranian EFL Learners
Subject Area: Applied Linguistics
Ali Fathi Karizak 1, Shahram Afraz 2, Fazlolah Samimi 3
1 - Ph.D. Candidate, Department of English, Qeshm Branch, Islamic Azad University, Qeshm, Iran
2 - Ph.D. in TEFL, Department of English Language, Qeshm Branch, Islamic Azad University, Qeshm, Iran
3 - Faculty Member, University of Hormozgan, Iran
Keywords: Iranian EFL Learners, MAXQDA, Self-Assessment, Speaking Skill
Abstract:
This study aimed to examine the components of the speaking self-assessment instrument and to identify the most and least influential factors in the self-assessment instrument affecting Iranian EFL learners' speaking skill. Finally, using a grounded theory approach, a model of the self-assessment instrument affecting Iranian EFL learners' speaking skill was designed and validated. Accordingly, 20 language experts and 350 Iranian EFL learners were selected through convenience sampling. The data were gathered via interviews and a questionnaire and analyzed with MAXQDA. The findings revealed two broad categories of the most influential factors: classroom practice, which concerns EFL teachers' actions in EFL speaking classrooms, and conceptions, which refer to teachers' beliefs about assessment in general and self-assessment in particular. In addition to teacher assessment, participants reported that peer assessment can be influential; they explained that commenting on their peers' speaking productions can sometimes be conducive to improvement. Regarding assessment techniques, the participants provided little information about assessment procedures, as no reference was made to any particular assessment techniques or tasks. As for the least influential factors, teachers' formative feedback and assessment criteria were mentioned, implying that these have no significant impact on the self-assessment instrument affecting Iranian EFL learners' speaking skill. Consequently, the model of the self-assessment instrument affecting Iranian EFL learners' speaking skill that emerged includes characteristics and advantages as well as disadvantages and challenges.