A Gender-Based Investigation into the Relationship between Test Method and Iranian EFL Test-Takers’ Grammar Performance
Subject Areas: Journal of Studies in Learning and Teaching English
Shokouh Rashvand Semiyari 1, Amir Reza Aliakbar 2
1 - Department of English Language Teaching, Islamic Azad University, East Tehran Branch, Tehran, Iran
2 - Department of English Language Teaching, Islamic Azad University, West Tehran Branch, Tehran, Iran
Keywords: Test Method, Grammar Performance, Gender
Abstract:
In the domain of educational assessment, understanding the factors that shape test-takers' achievement is of considerable importance. This study investigated how test method and gender might affect grammar performance. To this end, 274 intermediate EFL learners aged 18 to 30, studying at Qotb Ravandi Institute in Tehran, took a grammar test in four different formats targeting comparatives, superlatives, and the present perfect tense. The correlation analysis revealed positive correlations between the total score (grammar performance) and the error correction, word changing, word order, and completion scores. The regression analysis further indicated that gender was a significant predictor of grammar performance: there was a statistically significant negative correlation between gender and grammar performance, indicating that male students tended to score lower than their female counterparts. Furthermore, grammar performance, as the predictor variable, correctly classified 63.6% of females and 31.3% of males into their respective groups, and the overall classification accuracy of the regression model was 50%. It can therefore be argued that there is a statistically significant relationship between test-takers' gender and their grammar performance. Implications and suggestions for further research are also highlighted.
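For readers unfamiliar with how such classification figures are typically obtained, the sketch below illustrates the general procedure with a binary logistic regression in Python (scikit-learn). It is a minimal illustration only: the simulated scores, score distributions, and the assumed split of 140 female and 134 male test-takers are invented for demonstration and do not reproduce the study's dataset, model, or reported results.

```python
# Hypothetical sketch of how group classification rates and overall accuracy
# are computed from a binary logistic regression (gender predicted from
# grammar scores). All data below are simulated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated grammar totals for an assumed split of 140 females and 134 males,
# with females scoring slightly higher on average, as the abstract suggests.
female_scores = rng.normal(loc=26, scale=5, size=140)
male_scores = rng.normal(loc=24, scale=5, size=134)

X = np.concatenate([female_scores, male_scores]).reshape(-1, 1)
y = np.array([0] * 140 + [1] * 134)  # 0 = female, 1 = male

model = LogisticRegression().fit(X, y)
predicted = model.predict(X)

# Percentage of each group classified into its own group, plus overall accuracy.
female_correct = np.mean(predicted[y == 0] == 0) * 100
male_correct = np.mean(predicted[y == 1] == 1) * 100
overall = np.mean(predicted == y) * 100
print(f"Females correctly classified: {female_correct:.1f}%")
print(f"Males correctly classified:   {male_correct:.1f}%")
print(f"Overall classification accuracy: {overall:.1f}%")
```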
Ackermann, N., & Siegfried, C. (2019). Does a balanced test form regarding selected-response and constructed-response items overcome gender gap in test scores? An analysis of the format-gender relation in the test of economic-civic competence. Citizenship, Social and Economics Education, 18(3), 158-176. https://doi.org/10.1177/2047173419892531
Akhavan Masoumi, G., & Sadeghi, K. (2020). Impact of test format on vocabulary test performance of EFL learners: the role of gender. Language Testing in Asia, 10(1), 1-13. https://doi.org/10.1186/s40468-020-00099-x
Aslan, O. (2009). The role of gender and language learning strategies in learning English. MA Thesis, Middle East Technical University.
https://hdl.handle.net/11511/18929
Azizmohammadi, F., & Barjesteh, H. (2020). On the relationship between EFL learners' grammar learning strategy use and their grammar performance: Learners' gender in focus. Journal of Language Teaching and Research, 11(4), 583-592. http://dx.doi.org/10.17507/jltr.1104.08
Bachman, L. (1990). Fundamental considerations in language testing. Oxford University Press.
Beller, M., & Gafni, N. (2000). Can item format (multiple choice vs. open-ended) account for gender differences in mathematics achievement? Sex Roles, 42(1), 1-21. https://doi.org/10.1023/A: 1007051109754
Bensoussan, M. (1984). A comparison of cloze and multiple- choice reading Comprehension tests of English as a Foreign Language. Language Testing, 1(1), 101-104. https://doi.org/10.1177/026553228400100109
Birenbaum, M., & Tatsuoka, K. K. (1987). Open-ended versus multiple-choice response formats—it does make a difference for diagnostic purposes. Applied Psychological Measurement, 11(4), 385-395. https://doi.org/10.1177/014662168701100404
Bleske‐Rechek, A., Zeug, N., & Webb, R. M. (2007). Discrepant performance on multiple‐ choice and short answer assessments and the relation of performance to general scholastic aptitude. Assessment & Evaluation in Higher Education, 32(2), 89-105. https://doi.org/10.1080/02602930600800763
Bridgeman, B. (1992). A comparison of quantitative questions in open‐ended and multiple‐choice formats. Journal of Educational Measurement, 29(3), 253-271. https://doi.org/10.1111/j.1745-3984.1992.tb00377.x
Brown, H. D., & Abeywickrama, P. (2018). Language assessment: Principles and classroom practices (3rd Ed.). Pearson Education. https://doi.org/10.3390/socsci7110227
Bush, M. (2001). A multiple choice test that rewards partial knowledge. Journal of Further and Higher Education, 25(2). 157-163. https://doi.org/10.1080/03098770123674
Cesur, K. (2008). Students' and teachers' perceptions of the test techniques used to assess language skills at university level (Unpublished master's thesis). Çanakkale Onsekiz Mart University, Institute of Social Sciences, Department of English Language Teaching.
https://doi.org/10.13140/RG.2.2.16501.63202
Chambers, J. K., & Schilling, N. (2013). The handbook of language variation and change. Wiley-Blackwell. https://doi.org/10.1002/9781118335598
Cheng, H. F. (2004). A comparison of multiple‐choice and open‐ended response formats for the assessment of listening proficiency in English. Foreign Language Annals, 37(4), 544-553. https://doi.org/10.1111/j.1944-9720.2004.tb02421.x
Close, R.A. (1982). English as a foreign language. George Allen and Unwin. ISBN 10: 0044250258. ISBN 13: 9780044250258
Cordeiro, P. F. (2021). Accountability evaluation in systems-of-information systems based on systems thinking. Doctoral Thesis, PPGI/UNIRIO. https://doi.org/10.13140/RG.2.2.35828.22402.
Currie, M. & Chiramanee, T. (2010). The effect of the multiple-choice item format on the measurement of knowledge of language structure. Language Testing, 27(4), 471–491.
https://doi.org/10.1177/0265532209356790
Debata, P. K. (2013). The importance of grammar in English language teaching: A reassessment. Language in India, 13(5), 482-486. ISSN 1930-2940
DeKeyser, R. M. (1993). The effect of error correction on L2 grammar knowledge and oral proficiency. The Modern Language Journal, 77(4), 501-514. https://doi.org/10.1111/j.1540-4781.1993.tb01999.x
Dikmen, M. (2023). Test anxiety in online exams: scale development and validity. Current Psychology, 42(1), 30210–30222. https://doi.org/10.1007/s12144-022-04072-0
Dolan, R. P., Goodman, J., Strain-Seymour, E., Adams, J., & Sethuraman, S. (2011). Cognitive lab evaluation of innovative items in mathematics and English language arts assessment of elementary, middle, and high school students. Pearson. https://doi.org/10.13140/RG.2.2.21857.02407
Douglas, D. (2010). Understanding language-testing (pp.2). Routledge. ISBN 9780340983430
Famularo, L. (2007). The effect of response format and test taking strategies on item difficulty: a comparison of stem-equivalent multiple-choice and constructed-response test items. Boston College ProQuest Dissertations Publishing.
Haynie, W. J. (1994). Effects of multiple-choice and short-answer tests on delayed retention learning. Journal of Technology Education, 6(1), 32-44. https://doi.org/10.21061/jte.v6i1.a.3
Hijazi, S. T., & Naqvi, S. M. M. (2006). Factors affecting students' performance: A case of private colleges. Bangladesh e-Journal of Sociology, 3(1), 1-10.
https://api.semanticscholar.org/CorpusID:17496544
Holmes, J. (2007). Social constructionism, postmodernism and feminist sociolinguistics. Gender & Language, 1(1), 51-65. https://doi.org/10.1558/genl.2007.1.1.51
Hughes, R., & McCarthy, M. (1998). From sentence to discourse: Discourse grammar and English language teaching. TESOL Quarterly, 32(2), 263. https://doi.org/10.2307/3587584
Hyde, J. S., & Linn, M. C. (1988). Gender differences in verbal ability: A meta-analysis.
Psychological Bulletin, 104(1), 53-69. https://doi.org/ 10.1037/0033-2909.104.1.53
In'nami, Y., & Koizumi, R. (2009). A meta-analysis of test format effects on reading and listening test performance: Focus on multiple-choice and open-ended formats. Language
Testing, 26(2), 219-244. https://doi.org/10.1177/0265532208101006
Izadpanah, J., Sadighi, F., & Akbarpour, L. (2023). The effect of explicit corrective feedback on EFL learners’ retention of grammar: Does the medium of feedback matter? Journal of Studies in Learning and Teaching English, 12(1), 99-122. 20.1001.1.22518541.2023.12.1.6.4
Kang, S. H., McDermott, K. B., & Roediger III, H. L. (2007). Test format and corrective feedback modify the effect of testing on long-term retention. European Journal of Cognitive Psychology, 19(4-5), 528-558. https://doi.org/10.1080/09541440601056620
Kitao, S. K., & Kitao, K. (1996). Testing grammar. The Internet TESL Journal. Retrieved from http://iteslj.org/Articles/Kitao-TestingGrammar.html
Kuechler, W. L., & Simkin, M. G. (2010). Why is performance on multiple‐choice tests and constructed‐response tests not more closely related? Theory and an empirical test. Decision Sciences Journal of Innovative Education, 8(1), 55-73. https://doi.org/10.1111/j.1540-4609.2009.00243.x
Livingston, S. A. (2009). Constructed-response test questions: Why we use them; how we score them. R&D Connections, (11). Educational Testing Service. http://www.ets.org/Media/Research/pdf/RD_Connections11.pdf
Madsen, H. S. (1983). Techniques in testing. Oxford University Press.
https://doi.org/10.1177/026553228500200109
Mao, A.M. (2022) Literature review of language testing theories and approaches. Open Access Library Journal, 9(5), 1-5. https://doi.org/10.4236/oalib.1108741.
Mauldin, R. K. (2009). Gendered perceptions of learning and fairness when choice between exam types is offered. Active Learning in Higher Education, 10(3), 253-264. https://doi.org/10.1177/1469787409343191
McNamara, F. (2000). Language testing. Oxford University Press. ISBN-10, 0194372227.
McNamara, D. S. (2010). Strategies to read and learn: Overcoming learning by consumption. Medical education, 44(4), 340-346. https://doi.org/10.1111/j.1365-2923.2009.03550.x
Mozaffari, F., Alavi, S. M., & Rezaee, A. (2017). Investigating the impact of response format on the performance of Grammar tests: Selected and constructed. Teaching English as a Second Language (Formerly Journal of Teaching Language Skills), 36(2), 103-128. https://doi.org/10.22099/jtls.2017.23918.2154
Ölmezer-Öztürk, E., & Aydin, B. (2018). Toward measuring language teachers’ assessment knowledge: development and validation of Language Assessment Knowledge Scale (LAKS). Language Testing in Asia, 8(20), 1-15. https://doi.org/10.1186/s40468-018-0075-2
Onaiba, A., & Jannat, F. (2019). Test method effect and test-takers' scores: a critical review of the pertinent literature. Scientific Journal of Faculty of Education, Misurata University-Libya, 1(14), 3-22. http://mdr.misuratau.edu.ly/handle/123456789/1084
Otayf, Y. A. (2019). The role of gender in language learning strategies among male and female students at Jazan secondary schools. مجلة کلية التربية (أسيوط), 35(9.2), 1-28. https://doi.org/10.21608/mfes.2019.102838
Ozan, C., & Kincal, R.Y. (2018). The effects of formative assessment on academic achievement, attitudes toward the lesson, and self-regulation skills. Educational Sciences: Theory & Practice, 18(1), 85-118. https://doi.org/10.12738/estp.2018.1.0216
Paudel, P. (2018). Use of test-teach-test method in English as a foreign language classes. Journal of NELTA Surkhet, 5(15), 15–27. https://doi.org/10.3126/jns.v5i0.19482
Pienemann, M., Johnston, M., & Brindley, G. (1989). Constructing an acquisition-based procedure for second language assessment. Studies in Second Language Acquisition, 10(2), 217–243. https://doi.org/10.1017/S0272263100007324
Pope, G. A., Wentzel, C., Braden, B., & Anderson, J. (2006). Relationships between gender and Alberta achievement test scores during a four-year period. Alberta Journal of Educational Research, 52(1), 4-15. https://api.semanticscholar.org/CorpusID:201024355
Pomplun, M., & Capps, L. (1999). Gender differences for constructed-response mathematics items. Educational and Psychological Measurement, 59(4), 597-614. https://doi.org/10.1177/00131649921970044
Rea-Dickins, P., & Germaine, K. (1992). Evaluation. Oxford University Press. ISBN 0194371387
Roediger, H., Putnam, A., & Sumeracki, M. (2011). Ten benefits of testing and their applications to educational practice. Psychology of Learning & Motivation: Cognition in Education, 55(1), 1-36. https://doi.org/10.1016/B978-0-12-387691-1.00001-6
Schmitt, N., & Carter, R. (2004). Formulaic sequences in action. Formulaic sequences:
Acquisition, processing and use. John Benjamins. https://doi.org/10.1075/lllt.9.02sch
Shohamy, E. (1984). Does the testing method make a difference? The case of reading comprehension. Language Testing, 1(2), 147-170. https://sid.ir/paper/599667/en
Simkin, M. G., & Kuechler, W. L. (2005). Multiple‐choice tests and student understanding: What is the connection? Decision Sciences Journal of Innovative Education, 3(1), 73-97. http://dx.doi.org/10.1111/j.1540-4609.2005.00053.x
Sireci, G. S., & Zenisky, L. A. (2016). Computerized innovative item formats: Achievement and credentialing. In S. Lane, M. R. Raymond & T. M. Haladyna (Eds.), Handbook of test development (2nd ed., pp. 313-334). Routledge. ISBN 9780415626026
Sumarni, S. & Rachmawaty, N. (2019). Gender differences in language learning strategies. Ethical Lingua: Journal of Language Teaching and Literature. 6(1), 13-22. 10.30605/ethicallingua.v6i1.1169.
Weaver, A. J., & Raptis, H. (2001). Gender differences in introductory atmospheric and oceanic science exams: multiple choice versus constructed response questions. Journal of Science Education and Technology, 10(2), 115-126. https://doi.org/10.1023/A:1009412929239
Zhang, J. (2009). Necessity of grammar teaching. International Education Studies, 2(2), 78–81. https://doi.org/10.5539/ies.v2n2p184
Zoghi, M., Kazemi, S. A., & Kalani, A. (2013). The effect of gender on language learning. Journal of Novel Applied Sciences, 2(4), 1124-1128. https://api.semanticscholar.org/CorpusID:5866518