References
Afflerbach, P. (2017) Understanding and using reading assessment. Alexandria, Virginia, USA: ASCD
Ahmed, Y., Francis, D. J., York, M., Fletcher, J. M., Barnes, M., & Kulesz, P. (2016). Validation of the direct and inferential mediation (DIME) model of reading comprehension in grades 7 through 12. Contemporary Educational Psychology, 44, 68-82.
Alderson, J. C. (2000). Assessing reading. Cambridge: Cambridge University Press.
Anderson, R. C., & Pearson, P. D. (1984). A schema-theoretic view of basic processes in reading comprehension. In P. D. Pearson (Ed.), Handbook of reading research (pp. 255-292). White Plains, NY: Longman.
Bachman, L. F. (1990). Fundamental considerations in language testing. Oxford: Oxford University Press.
Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice: Designing and developing useful language tests (Vol. 1). Oxford University Press.
Bachman, L., & Palmer, A. S. (2010). Language testing in practice. Oxford: Oxford University Press.
Baker, L., & Brown, A. L. (1984). Metacognitive skills and reading. In P. D. Pearson, R. Barr, M. L. Kamil, & P. Mosenthal (Eds.) Handbook of research in reading (pp. 353-394). White Plains, NY: Longman.
Bernhardt, E. B. (1993). Reading development in a second language: Theoretical, empirical, & classroom perspectives. Norwood, NJ: Ablex Publishing Corporation.
Bernhardt, E. B. (1991). A psycholinguistic perspective on second language literacy. In J. H. Hulstijn & J. F. Matter (Eds.), Reading in two languages, AILA Review 8 (pp. 31-44). Amsterdam.
Bernhardt, E., & Kamil, M. (1995). Interpreting relationships between L1 and L2 reading: Consolidating the linguistic threshold and the linguistic interdependence hypotheses. Applied Linguistics, 16, 15-34.
Birch, B. M. (2002). English L2 reading: Getting to the bottom. London: Routledge.
Brimo, D., Apel, K., & Fountain, T. (2017). Examining the contributions of syntactic awareness and syntactic knowledge to reading comprehension. Journal of Research in Reading, 40(1), 57-74.
Buck, G., Tatsuoka, K. K., & Kostin, I. (1997). The sub-skills of reading: Rule-space analysis of a multiple-choice test of second language reading comprehension. Language Learning, 47, 423-466.
Canale, M., & Swain, M. (1980). Theoretical bases of communicative approaches to second language teaching and testing. Applied linguistics, 1(1), 1-47.
Carrell, P. L. (1983). Three components of background knowledge in reading comprehension 1. Language learning, 33(2), 183-203.
Carroll, J. B. (1993). Human cognitive abilities. Cambridge: Cambridge University Press.
Cheng, L. (2005). Changing language teaching through language testing: A washback study (Vol. 21). Cambridge University Press.
Clarke, M. (1979). Reading in Spanish and English: Evidence from adult ESL students. Language Learning, 29, 121-150.
Clarke, M. (1980). The short circuit hypothesis of ESL reading: Or when language competence interferes with reading performance. Modern Language Journal, 64, 203-209.
Clymer, T. (1968). What is reading? Some current concepts. Innovation and Change in Reading Instruction. National Society for the Study of Education, Chicago.
Council of Europe, Council for Cultural Co-operation, Education Committee, Modern Languages Division. (2001). Common European Framework of Reference for Languages: Learning, teaching, assessment. Cambridge University Press.
Cummins, J. (1979). Linguistic interdependence and the development of bilingual children. Review of Educational Research, 49, 222-251.
Coady, J. (1979). A psycholinguistic model for the ESL reader. In R. MacKay, B. Barkman, & R. R. Jordan (Eds.), Reading in a second language: Hypothesis, organization and practice (pp. 5- 12). Rowley, MA: Newbury House.
Cervetti, G. N., Hiebert, E. H., Pearson, P. D., & McClung, N. A. (2015). Factors that influence the difficulty of science words. Journal of Literacy Research, 47(2), 153-185.
Danuwijaya, A. A. (2018). Item Analysis Of Reading Comprehension Test For Post-Graduate Students. English Review: Journal of English Education, 7(1), 29-40.
Davis, A. (1968). Language testing symposium: A psycholinguistic approach. London: Oxford University Press.
Davidson, F. (2008). The straightjacket and the blessing of the canon. Language Assessment Quarterly, 5(3), 267-274.
Davidson, F., & Lynch, B. K. (2002). Testcraft: A teacher’s guide to writing and using language test specifications. New Haven, CT: Yale University Press.
Day, R. R., & Bamford, J. (1998). Reading in the second language classroom. Cambridge: Cambridge University Press.
Day, R. R., & Park, J. S. (2005). Developing Reading Comprehension Questions. Reading in a foreign language, 17(1), 60-73.
Earl, L. M. (2012). Assessment as learning: Using classroom assessment to maximize student learning. Corwin Press.
Ellis, R., Tanaka, Y., & Yamazaki, A. (1994). Classroom interaction, comprehension, and the acquisition of L2 word meanings. Language learning, 44(3), 449-491.
Estaji, M., & Zhaleh, K. (2020). Does Field of Study Matter in Academic Performance: Differential Item Functioning Analysis of a High-Stakes Test Using One-Parameter and Two-Parameter Item Response Theory Models. Iranian Journal of English for Academic Purposes, 9(3), 14-31.
Farhady, H., & Hessamy, G. R. (2005). Construct validity of L2 reading comprehension skills. Iranian Journal of Applied Linguistics, 8(2), 29-53.
Fletcher, J. M. (2006). Measuring reading comprehension. Scientific Studies of Reading, 10(3), 323–330.
Garson, G. D. (2016). Partial least squares: Regression and structural equation models. Asheboro, NC: Statistical Associates Publishers.
Gernsbacher, M.A. (1985) Surface information loss in comprehension. Cognitive Psychology, 17, 324–363.
Grabe, W. (2009). Reading in a second language: Moving from theory to practice. Cambridge: Cambridge University Press.
Grabe, W. (1991). Current developments in second language reading research. TESOL Quarterly, 25(3), 375-406.
Grabe, W. & Stoller, F. L. (2002). Teaching and researching reading. London: Longman.
Gray, W. S. (1960). The major aspects of reading. In H. Robinson (Ed.), Sequential development of reading abilities (Vol. 90, pp. 8-24). Chicago: Chicago University Press.
Gottardo, A., Mirza, A., Koh, P. W., Ferreira, A., & Javier, C. (2018). Unpacking listening comprehension: The role of vocabulary, morphological awareness, and syntactic knowledge in reading comprehension. Reading and Writing, 31(8), 1741-1764.
Hair Jr, J. F., Hult, G. T. M., Ringle, C., & Sarstedt, M. (2017). A primer on partial least squares structural equation modeling (PLS-SEM). Sage publications.
Hemmati, S. J., Baghaei, P., & Bemani, M. (2016). Cognitive diagnostic modeling of L2 reading comprehension ability: Providing feedback on the reading performance of Iranian candidates for the university entrance examination. International Journal of Language Testing, 6(2), 92-100.
Hoover, W. A., & Tunmer, W. E. (1993). The components of reading. In G. G. Thompson, W. E. Tunmer, & T. Nicholson (Eds.), Reading acquisition processes (pp. 1-19). Clevedon: Multilingual Matters Ltd.
Hu, M., & Nation, I. S. P. (2000). Unknown vocabulary density and reading comprehension. Reading in a Foreign Language, 13(1), 403–430.
Jiang, X. (2011). The role of first language literacy and second language proficiency in second language reading comprehension. The Reading Matrix, 11(2), 177-190.
Jiang, X., & Grabe, W. (2011). Skills and strategies in foreign language reading. La lectura en lengua extranjera, 2-31.
Kintsch, W. (1974). The representation of meaning in memory. Hillsdale, NJ: Erlbaum.
Kent State University. (2020, August, 11). Three Level Comprehension Guide for Active Reading. https://www-s3-live.kent.edu/s3fs-root/s3fs public/file/Three%20Level%20Comprehension%20Guide%20for%20Active%20Reading.pdf
Koda, K. (2005). Insights into second language reading. New York: Cambridge University Press.
Koda, K. (2007). Reading and language learning: Crosslinguistic constraints on second language reading development. In K. Koda (Ed.), Reading and language learning (pp. 1-44). Special issue of Language Learning Supplement, 57, 1-44.
Laufer, B. (1989). What percentage of text-lexis is essential for comprehension? In C. Lauren & M. Nordman (Eds.), Special language: From humans to thinking machines (pp. 316–323). Clevedon, England: Multilingual Matters.
Lumley, T. (1993). The notion of sub-skills in reading comprehension tests: An EAP example. Language Testing, 10(3), 211–234.
Lunzer, E., Waite, M., & Dolan, T. (1979). Comprehension and comprehension tests. In E. Lunzer & K. Gardner (Eds.), The effective use of reading (pp. 37-71). London: Heinemann Educational Books Ltd.
Moeini Asl, H. R. (2002). Construct validation of reading comprehension tests. Unpublished MA thesis, University for Teacher Education, Tehran.
Munby, J. (1978). Communicative syllabus design. Cambridge: Cambridge University Press.
OECD (2019), PISA 2018 results (Volume I): What students know and can do, PISA, OECD Publishing, Paris, https://doi.org/10.1787/5f07c754-en.
Paris, S. G., & Hamilton, E. E. (2009). The development of children’s reading comprehension. In S. E. Israel & G. G. Duffy (Eds.), Handbook of research on reading comprehension (pp. 32- 53). New York: Routledge.
Pearson, P. D., & Cervetti, G. N. (2015). Fifty years of reading comprehension theory and practice. Research-based practices for teaching Common Core literacy, 1-24.
Pearson, P. D., & Johnson, D. D. (1978). Teaching reading comprehension. New York: Rinehart and Winston.
Perfetti, C. (1985). Reading ability. New York: Oxford University Press.
Perfetti, C. (1992). The representation problem in reading acquisition. In P. Gough, L. Ehri, & R. Treiman (Eds.), Reading acquisition. Hillsdale, NJ: Lawrence Erlbaum.
Perfetti, C. (2007). Reading ability: Lexical quality to comprehension. Scientific Studies of Reading, 11(4), 357-383.
Perfetti, C., & Hart, L. (2001). The lexical basis of comprehension skill. In D. Gorfien (Ed.), On the consequences of meaning selection (pp. 67-86). Washington, DC: American Psychological Association.
Praveen, S. D., & Rajan, P. (2013). Using Graphic Organizers to Improve Reading Comprehension Skills for the Middle School ESL Students. English Language Teaching, 6(2), 155-170.
Ramezaney, M. (2014). The wash back effects of university entrance exam on Iranian EFL teachers’ curricular planning and instruction techniques. Procedia-Social and Behavioral Sciences, 98, 1508-1517.
Ranjbaran, F., & Alavi, S. M. (2017). Developing a reading comprehension test for cognitive diagnostic assessment: A RUM analysis. Studies in Educational Evaluation, 55, 167-179.
Rosenblatt, L. M. (1938, 1968). Literature as exploration. New York: Noble and Noble, Publishers.
Rumelhart, D. E. (1985). Towards an interactive model of reading. In H. Singer & R.B. Ruddell (Eds.), Theoretical models and processes of reading. Newark, Delaware: International Reading Association.
Rumelhart, D. E. (1977). Understanding and summarizing brief stories. In D. LaBerge & S. J. Samuels (Eds.), Basic processes in reading: Perception and comprehension (pp. 265-303). Hillsdale, NJ: Lawrence Erlbaum.
Saville, N. (2012). Quality management in test production and administration. In G. Fulcher & F. Davidson (Eds.), The Routledge handbook of language testing (pp. 395-412). London: Routledge.
Schmitt, N., Jiang, X., & Grabe, W. (2011). The percentage of words known in a text and reading comprehension. The Modern Language Journal, 95(1), 26-43.
Shahmirzadi, N., Siyyari, M., Marashi, H., & Geramipour, M. (2020). Selecting the Best Fit Model in Cognitive Diagnostic Assessment: Differential Item Functioning Detection in the Reading Comprehension of the PhD Nationwide Admission Test. Journal of Language and Translation, 10(3), 1-15.
Stein, N. L., & Glenn, C. G. (1979). An analysis of story comprehension in elementary school children. New Directions in Discourse Processing, 2, 53-120.
Shiotsu, T., & Weir, C. J. (2007). The relative significance of syntactic knowledge and vocabulary breadth in the prediction of reading comprehension test performance. Language Testing, 24(1), 99-128.
Stanovich, K. E. (2000). Progress in understanding reading: Scientific foundations and new frontiers. New York: Guilford Press.
Tiwari, P. R. (2021). Reading comprehension of grade 8 students: A glimpse of item piloting. Educational Assessment, 81, 80-96.
Urquhart, A. H., & Weir, C. J. (1998). Reading in a second language: Process, product, and practice. New York: Longman.
Vandergrift, L., & Goh, C. C. M. (2012). Teaching and learning second language listening: Metacognition in action. New York: Routledge.
Walter, C. (2007). First‐to second‐language reading comprehension: not transfer, but access. International Journal of Applied Linguistics, 17(1), 14-37.
Weir, C., Huizhong, Y., & Yan, J. (2000). An empirical investigation of the componentiality of L2 reading in English for academic purposes (Vol. 12). Cambridge University Press.
Weir, C. J. (2005). Language testing and validation. Hampshire: Palgrave Macmillan.
Williams, E., & Moran, C. (1989). Reading in a foreign language at intermediate and advanced levels with particular reference to English. Language Teaching, 22(4), 217-228.
Yamasaki, B. L., & Prat, C. S. (2021). Predictors and consequences of individual differences in cross-linguistic interactions: A model of second language reading skill. Bilingualism: Language and Cognition, 24(1), 154-166.
Zandi, H., Kaivanpanah, S., & Alavi, S. M. (2014). The Effect of Test Specifications Review on Improving the Quality of a Test. Iranian Journal of Language Teaching Research, 2(1), 1-14.
Zhang, L. (2018). Metacognitive and cognitive strategy use in reading comprehension: A structural equation modelling approach. Singapore: Springer.
Zwaan, R., & Rapp, D. (2006). Discourse comprehension. In M. A. Traxler & M. A. Gernsbacher (Eds.), Handbook of psycholinguistics (2nd ed., pp. 725-764). Burlington, MA: Academic Press.
Research Paper
Second Language Reading Comprehension: A Reading Issue or a Language Problem--A Partial Least Square Modeling Analysis
Roshanak Rezaei1, Faramarz Aziz Malayeri2*, Abbas Bayat3, Hossein Ahmadi4
1Ph.D. Candidate, Malayer Branch, Islamic Azad University, Malayer, Iran
rezaei.roshanak@gmail.com
2Assistant Professor*, Malayer Branch, Islamic Azad University, Malayer, Iran
faramarzazizmalayerie@gmail.com
3Assistant Professor, Malayer Branch, Islamic Azad University, Malayer, Iran
abbasbayat_305@yahoo.com
4Assistant Professor, Malayer Branch, Islamic Azad University, Malayer, Iran
h.ahmadi@malayeriau.ac.ir
Received: June 17, 2024 Accepted: July 25, 2024
Abstract
The application of comprehension skills as a general cognitive ability has long been discussed in the literature on second language reading comprehension. To trace second language reading comprehension difficulties back to text or reader attributes, the present study investigates the optimum load of linguistic and non-linguistic components of second language reading ability. The study followed a quantitative method of research, and data were collected on the performance of 164 Iranian foreign language learners with different educational backgrounds. A group of experts in Teaching English as a Foreign Language (TEFL) provided feedback on the specifications of the sample test items based on the Comprehensive Taxonomy of Reading Sub-skills (CTRS), derived from major taxonomies in the literature. Applying exploratory factor analysis and correlational computations, the results revealed that although the items all converged on the same latent construct, items aimed at measuring general cognitive comprehension skills contributed more to the overall test scores. Teachers at both language institutes and the Ministry of Education, test-developing organizations, and students who seek greater success in reading comprehension examinations can benefit from the feedback of this study.
Keywords: General Cognitive Abilities, Linguistic Competence, Reading Comprehension Sub-skills
Introduction
Significant numbers of adolescents and young adults do not adequately understand academic texts in their first language (Ahmed, Francis, York, Fletcher, Barnes, & Kulesz, 2016) or second language (Hemmati, Baghaei, & Bemani, 2016). This impedes their future academic success, as reading is one of the most significant contributors to literacy. Besides, most materials presented to students seeking entry into educational institutes are in the form of texts (Ellis, Tanaka, & Yamazaki, 1994). Reading can also foster the learning of other language sub-skills (Cheng, 2005). As Ramezaney (2014) asserted, this significance alone justifies efforts to enhance the reading ability of language learners. Since English continues to spread around the world as the language of science and research in multilingual settings, most people need to read at a relatively high level of English proficiency to achieve their substantial goals (Grabe & Stoller, 2002). Therefore, investigating the source of the difficulties second and even first-language learners face in comprehending texts seems unavoidable.
Having considered the difficulties most second language learners have in reading academic texts, a second language teacher needs to investigate the areas of strength and weakness of their students to help them read more efficiently. Furthermore, national and international studies have revealed that more than half of the students taking part in high-stakes international examinations do not adequately answer the reading comprehension section of the English sub-test. This is despite the fact that the Ministry of Education in Iran spends a large amount of money on teacher training every year to prepare students for spontaneous reading and a better life. What, then, is the source of this difficulty: L2 language proficiency or the lack of general comprehension skills supposed to have been acquired during L1 literacy development?
Since most summative assessments are product-oriented and hence divisible into a set of component skills, analyzing each task in a test in terms of the form and number of sub-skills it includes can help teachers and language practitioners meet the needs of their students more precisely.
Based on the product-oriented perspective (Urquhart & Weir, 1998), the theoretical components of reading ability can be separated. These distinguishable underlying components (Hoover & Tunmer, 1993) can be operationalized across test items. Furthermore, the pedagogical rationale behind the study of these sub-skills (Farhady & Hessamy, 2005) has made them an important issue in the language testing literature. Following the communicative approach proposed by Canale and Swain (1980), discrete-point tests may be more effective than integrative tests. In addition to helping learners address their weaknesses in separate components (Canale & Swain, 1980), discrete-point tests break the reading process down into its components or sub-skills in a way that allows them to be taught systematically (Farhady & Hessamy, 2005). That is why the present study concentrated on the multiple-choice test format to determine the load of linguistic and non-linguistic components of second language reading ability.
In order to develop an L2 reading comprehension test, teachers can design item specifications (Bachman & Palmer, 2010; Saville, 2012) and predetermine the kinds and number of sub-skills to include in the test (Zandi, Kaivanpanah, & Alavi, 2014). This is because, as Davidson and Lynch (2002) pointed out, many equivalent items can be adapted from these specifications for further use and purposes. Item specifications can also be the subject of successive revisions informed by feedback from test-takers. Mostly, specifications end up providing a satisfactory interpretation of the construct under assessment. Zandi et al. (2014) demonstrated the efficiency of specifications review as an a priori form of test validation in small-scale assessments and highlighted its potential for detecting problems with test items. Here, we made use of a standard summative test and interpreted it formatively.
As a classroom teacher needs to assist students in the context of assessment as learning (AAL), the present study seeks to investigate the dual nature of the L2 reading ability construct based on a sample test developed from the English Proficiency Test (EPT). The EPT is a national proficiency test administered by Islamic Azad University in Iran to meet the graduation requirements of its Ph.D. candidates. More specifically, the study tries to investigate the importance of the readers' ability to read alongside their knowledge of the target language. While the latter is relevant to the construct of second language proficiency, the former is less relevant (Alderson, 2000). The absence of background knowledge, as one of the readers' attributes, should not inhibit the test taker's performance; nevertheless, it may lead to underestimation of the construct (Alderson, 2000). The findings can be used to increase the quality of reading comprehension multiple-choice tests and to support a validity argument for tests in informal classroom settings.
Moreover, as most reported work in language testing has focused on large-scale assessment, including G theory, Item Response Theory (IRT), and Cognitive Diagnostic Assessment (CDA) in the contexts of assessment of learning (AOL) and assessment for learning (AFL), the results are not applicable to small-scale, informal classroom tests (Davidson, 2008). This is despite the fact that the majority of educational tests are prepared and used in the context of assessment as learning (AAL), in small language classrooms, for washback effects, and by language teachers (Zandi et al., 2014). Therefore, determining which aspect of L2 reading ability is to be tested in small-scale assessments can improve the construct validity of the tests and help language teachers and instructors decide efficiently on future tasks and activities for the class.
Literature Review
Reading Comprehension in L1 and L2
Reading comprehension has been defined as the process of receiving and interpreting information (Urquhart & Weir, 1998); as extracting and integrating various kinds of information from the text and combining it with what is already known (Koda, 2005); and as the interaction of the author, the content of the text, and the abilities and purpose of the reader in a particular setting (Paris & Hamilton, 2009). These brief perspectives on reading comprehension reveal two dominant, separate aspects of reading comprehension in both L1 and L2: the text and the reader.
Generally speaking, throughout the research, reading comprehension has been perceived both as a constructive process (Zhang, 2018) and as the final product of various components of a latent trait (Hoover & Tunmer, 1993). The process-oriented perspective explores the mind of the reader during reading (Farhady & Hessamy, 2005), whereas the product-oriented view examines readers’ performance for the underlying latent abilities. The process view of reading combines deciphering the written marks with deciding what they mean by relating them to each other (Alderson, 2000). These processes are formed through constant thinking while reading. As every process is potentially dynamic and variable even across the same readers, understanding and assessing reading processes is very difficult. The alternative view of reading is interested in the product of reading. The rationale behind it is that, whatever the process, the understanding readers end up with is similar (Alderson, 2000).
The relationship between L1 literacy and L2 reading development has been discussed in the literature under two main positions: the Linguistic Interdependence Hypothesis (Cummins, 1979) and the Linguistic Threshold Hypothesis (Clarke, 1979, 1980), both consolidated by Bernhardt and Kamil (1995). While the first perspective advocates fundamental similarities between L1 and L2 reading, the second requires a certain level of L2 language proficiency before L1 reading skills and strategies can be transferred to improve L2 reading comprehension.
Grabe (2009) enumerated many differences in reading comprehension in L1 and L2: linguistic, processing, developmental, and sociocultural. Despite these differences, many L1 research findings are applicable in the L2 reading field (Buck, Tatsuoka, & Kostin, 1997). The linguistic difference is that executive resources and processes in L2 readers are not the same as those in L1. While L1 readers with a word stock of 5000 to 8000 words are considered elementary readers, L2 readers with the same supply of words are regarded as advanced readers. Therefore, L2 readers have to develop their linguistic resources and reading skills simultaneously (Grabe, 2009). Furthermore, regarding general cognitive abilities, L2 reading ability is more prone to transfer effects between L1 and L2. Thus, it is cross-linguistic and intrinsically more cumbersome than L1 reading (Koda, 2007).
L2 readers also read more slowly owing to slower and less accurate word recognition processing (Perfetti, 1992, 2007). However, thanks to their age and well-developed conceptual sense of the world, they are better inference makers (Zhang, 2018).
Other studies have explored the effects of different components of background knowledge (e.g., familiar vs. novel content, context vs. no context, and transparent vs. opaque lexicon) on the overall comprehension of both L1 and L2 readers. Carrell (1983) examined the issue in both groups and concluded that, unlike for L1 readers, for whom all components of background knowledge play a significant role, not all components of background knowledge have a significant effect for L2 readers.
In summary, L2 readers’ experience with their native language and the world gives them a head start over L1 readers to compensate for their insufficient linguistic resources (Zhang, 2018). Accordingly, examining the differences and similarities between L1 and L2 reading can inform L2 reading classroom instruction. Moreover, the application of L1 reading research findings to L2 reading research and instruction should be carefully examined beforehand (Grabe, 2009).
Componential Models of Reading Comprehension
Multi-componential taxonomies of reading sub-skills have so far been utilized for both teaching and testing. The areas of skill or knowledge involved in the process of reading have been examined by componential models of reading comprehension. Instead of describing the process of comprehension, componential models describe reading ability; that is, they isolate components of reading ability and perceive them as distinct areas of a latent reading ability (Hoover & Tunmer, 1993). Although a componential approach to reading ability has valuable implications for teaching and testing practices, research regarding the separately identifiable components of the reading construct is inconclusive (Farhady & Hessamy, 2005). Researchers have provided different taxonomies of reading comprehension skills following their own experimental studies. Some posited more than four components of reading skill (e.g., Davis, 1968; Lunzer, Waite, & Dolan, 1979; Munby, 1978; Grabe, 1991; Carroll, 1993; Moeini, 2002), while others postulated fewer variables for reading comprehension ability (e.g., Weir, Huizhong, & Yan, 2000; Coady, 1979; Bernhardt, 1991, 1993; Hoover & Tunmer, 1993; Perfetti, 1985).
Different terminologies have been used by a vast number of researchers to describe the sub-skills of reading ability. As an example, Davis (1968) identified five components of reading comprehension ability: identifying word meaning, drawing inferences, identifying the writer’s technique, recognizing the mood of the passage, and finding answers to questions. In some other studies, the components were arranged in ascending order, from the lowest-level component, word meaning, to the highest level, forming judgments (Lunzer et al., 1979).
The variables affecting the nature of reading are a combination of reader and text variables (Alderson, 2000). Formal schemata, i.e., knowledge of language, along with content schemata, enable readers to approach and distinguish different levels of understanding of a text. The effect of background knowledge and knowledge of the world on comprehending a text is undeniable (Rumelhart, 1985; Stanovich, 2000; Zwaan & Rapp, 2006). Thus, it can be concluded that “much of reading is a general cognitive, problem-solving ability” (Alderson, 2000, p. 48).
In summary, as pointed out by Zhang (2018), the vital component skills of reading comprehension, based on multi-componential models of reading comprehension, include word recognition, syntactic knowledge, discourse structure, background knowledge, metacognitive knowledge, and strategy use. The most specific component of linguistic knowledge for reading comprehension ability is word recognition. To read fluently, readers need to recognize words quickly and automatically. According to Perfetti and Hart (2001), automaticity, accurate recognition of words, and well-developed lexical entries are critical in word recognition. Syntactic knowledge and discourse structure are two other linguistic variables influencing reading comprehension, as stated by Zhang (2018). World knowledge is needed for using metacognitive knowledge and reading strategies, which are widely recognized as critical components of skilled reading (Vandergrift & Goh, 2012).
Assessing Reading Comprehension
Three decisive questions should be answered before designing any test to assess reading comprehension: why, what, and how do we assess reading? (Afflerbach, 2017). A teacher may assess students for different formative (AFL) and summative (AOL) purposes. To understand the nature of students’ reading, the classroom teacher is more likely to use a process-oriented inventory to open a window onto the strategies and skills the students need or use. In contrast, a summative assessment is formed from items describing the students’ vocabulary and whole-text comprehension. Here, the focus is on the product of reading. The result would indicate the percentage of students who meet the curriculum reading benchmarks. A clear definition of reading would help us construct an accurate instrument to measure and judge students’ reading ability. Besides, we need to assess our assessments to ensure that they measure what they are intended to measure.
There are two approaches to test design: the classical approach (Afflerbach, 2017) and the target language use situation approach (Bachman, 1990). The classical approach prescribes writing test specifications based on a theory of reading; in this way, we can develop the test’s construct. We realize the specifications through the kinds of texts and tasks we include in the test and the inferences we make from students’ understandings, typically reflected in their scores. The alternative approach seeks to duplicate the features of real-world reading in the assessment procedures, following Bachman and Palmer’s (1996) framework (Alderson, 2000).
L1 reading comprehension development is believed to proceed through different stages. Two popular taxonomies are used for teaching and assessing L1 reading comprehension ability: Bloom’s taxonomy and Barrett’s taxonomy of reading comprehension (as cited by Clymer, 1968). Since Barrett’s taxonomy was originally written specifically for reading comprehension, it is better suited for analyzing students’ L2 reading comprehension than Bloom’s taxonomy, which was not originally coined to mirror reading comprehension (Tiwari, 2021).
The five categories of reading comprehension in Barrett’s taxonomy are literal comprehension, reorganization, inferential comprehension, evaluation, and appreciation. The first two categories correspond to Kent State University’s (2020) description of reading the lines. Along the same lines, the Common European Framework of Reference (CEFR) reception strategies (2018) for the mid-band intermediate level consist of a literal understanding of written materials along with metacognitive strategies to guess the meaning of unknown words, find the main idea, and search for explicitly stated details. Inferencing is limited to following the titles, headings, and sub-headings to predict what will be stated next.
To achieve good inferential understanding, readers should read between the lines and be equipped with the more general cognitive ability of imagination. Learners’ ability to reason and make judgments via predetermined criteria is categorized under the term evaluation, while responding to the text emotionally and personally is termed appreciation in Barrett’s taxonomy (Day & Park, 2005).
International assessment organizations such as the Program for International Student Assessment (PISA) and the Progress in International Reading Literacy Study (PIRLS) have defined reading literacy following the Organization for Economic Co-operation and Development (OECD, 2019). “Understanding,” a well-accepted element of reading, refers to reading comprehension in this definition. According to PISA, in the first stage of the reading process, readers focus on words, phrases, and sentences to construct meaning. They may also retrieve pieces of information from different parts of the text. Accordingly, at this stage, they can answer questions based on explicitly stated information and make word-based inferences. Interpreting and integrating ideas happen at the third stage of understanding. Afterward, readers evaluate and integrate content and textual elements. At the first and second stages of the reading process, readers are able to sequence events, identify main ideas, search for facts and specific details, and understand the relationships between characters. Readers enter the third phase of understanding when they not only infer implicit information but also bring their own perspective to decoding the text.
Finally, at the last stage of understanding, evaluation and judgment of the text itself take place: readers move from constructing meaning to judging the clarity of the text’s information. PISA’s (2018) framework for reading comprehension assessment identifies four cognitive processes that readers need to activate when they read: locating information, understanding, evaluating and reflecting, and reading fluently. To assess these processes, PISA allocates 25% of the tasks to locating information, 45% to understanding, and 30% to evaluating and reflecting.
Studies on Reading Comprehension Assessment
A review of the literature on product-oriented reading comprehension assessment reveals some controversial issues. Williams and Moran (1989), cited in Urquhart and Weir (1998), stated that there is no consensus on the number and kind of sub-skills used in popular international proficiency tests. International test organizations claim to use componential taxonomies for designing test items; however, the terminology most of them use is not the same. Sometimes reading components overlap under different labels and cause confusion (Farhady & Hessamy, 2005). Besides, some emphasize more general areas of knowledge instead of specific language skills (Grabe, 1991). Alderson (2000) questioned the plausibility of characterizing each individual test item as measuring a specific reading sub-component, arguing that it is not possible to separate which sub-components are operationalized by which items of the test.
Many empirical investigations have sought test fairness and validity in high-stakes tests. To name a few, Estaji and Zhaleh (2020) investigated the connection between field of study and the reading section of the English subtest of the Iranian University Entrance Examination (IUEE) for MA candidates in English majors. Using Item Response Theory models, the researchers reported various sources of potential differential item functioning (DIF) in terms of academic background knowledge. They recommended the use of IRT models with different levels of precision to produce test items free of any bias in this regard. Additionally, Shahmirzadi, Siyyari, Marashi, and Geramipour (2020) scrutinized reading comprehension test items of the Ph.D. national admission test in Iran under Cognitive Diagnostic Assessment (CDA). Employing the G-DINA model, the researchers reported that some test items were suspected of DIF against females. In 2018, Danuwijaya ran an item analysis based on Classical Test Theory on one hundred multiple-choice questions as part of a test development process in Indonesia. Analyzing the responses of 50 postgraduate students with the psychometric software Lertap revealed that although the item difficulty level was average for most items, more than half of the items were marginal and required modification.
Researchers have also examined the quality of teaching practices for reading comprehension in Iran at the national level. Hemmati, Baghaei, and Bemani (2016) investigated the reading attributes underlying items of the reading comprehension section of the National University Entrance Examination in Iran, applying the generalized DINA (G-DINA) cognitive diagnostic model. The outcome of the study showed problematic areas in reading comprehension at the national level: more than half of the candidates had not mastered any of the required attributes. This showed that the goals of reading instruction in English as a foreign language in Iranian high schools were not adequately reflected in the item specifications of the INUEE. This may be due to the fact that our conceptualization of EFL reading achievement at high schools does not match that of the national standards reflected in the national EFL reading assessment.
Other studies have focused on the role of the first language in the comprehension of second-language texts. Walter (2007) challenged the metaphor of transfer as an account of the effect of readers’ first language on the comprehension of L2 texts. The researcher used Gernsbacher’s (1985) Structure Building Framework (SBF) to argue that literate L2 readers already have comprehension skills; they just need to access these skills from the L2. She strongly believed that the source of difficulty for lower-intermediate L2 readers is L2-based working memory capacity. Despite Walter’s (2007) assertion, Jiang’s (2011) findings demonstrated a moderate correlation of L1 literacy with L2 language proficiency and, consequently, with L2 reading comprehension.
Yamasaki and Prat (2021) confirmed that managing cross-linguistic interaction (CLI) conflict results in stronger L2 reading skills. They supported the notion that non-linguistic conflict management and L1 dominance are reliably correlated with CLI, and that both cognitive skills and language experience create such interactions.
Zandi, Kaivanpanah, and Alavi (2014) reviewed diagnostic reading comprehension specifications in the context of assessment for learning (AFL). The findings revealed that test specs were endowed with the potential to improve test construct validity in small-scale assessments where conducting statistical analysis is not feasible.
Purpose of the Study
The present study investigates the optimum load of linguistic and non-linguistic components of the reading ability to trace second language reading comprehension difficulties back to the text or to the reader attributes.
To do so, a sample test based on the EPT reading comprehension section was designed to answer the following research questions:
RQ1: Which linguistic and non-linguistic components of the reading ability construct are responsible for the score obtained in the measurement of reading competence based on the gathered data?
RQ2: Which linguistic and non-linguistic components have the strongest correlation with the score obtained in the measurement of reading competence?
Method
Participants
The participants of the study were 164 non-English-major female high school graduates aged 18 to 22 (mean age = 19.5, SD = 1.51) from three fields of study (mathematical sciences, experimental sciences, and humanities) in the city of Malayer, Iran. The participants were almost equally distributed across fields of study (mathematical sciences = 32%, experimental sciences = 37%, humanities = 31%). These participants agreed to cooperate with the researcher at a popular language institute in Malayer. They had studied English for at least six years at public state schools and were classified as intermediate L2 learners based on their Oxford Placement Test scores (51 to 70 out of 120 on the CEFR scale).
Instrumentation
The material used in this study consisted of 17 reading passages adapted from the standard national English Proficiency Test (EPT) examinations. The EPT is an official English proficiency exam designed by Islamic Azad University for the candidates and students of its Ph.D. programs, and it is considered an alternative to international English exams such as IELTS and TOEFL.
These passages were randomly chosen from 24 EPT examinations administered in recent years. Each EPT test contains 100 items: 25 on vocabulary, 40 on structure, and 35 on reading comprehension. The reading comprehension section itself consists of four reading passages, each followed by five items, and a fifteen-slot cloze passage. The study focused on the first part of the reading comprehension section of the test to minimize the effect of the test method on the findings.
Afterward, the passages went through the process of item specification. Seven university professors with Ph.D.s in Teaching English as a Foreign Language (TEFL) evaluated the passages to confirm that they were at the CEFR intermediate level of difficulty. The corresponding items were then analyzed according to the language components they had been identified to measure. For example, items focusing on the ability to find the referents of pronouns were labeled as "using context" and classified as linguistic components of reading ability, while those requiring other cognitive abilities such as inferencing, summarizing, or synthesizing were coded as non-linguistic components of reading ability. It should be mentioned that the passages were also examined with the CEFR Text Analyzer software, and all passages below or above the CEFR intermediate level of proficiency were omitted.
The Text Analyzer software is a tool for deciding on the level of reading comprehension materials. The software, which is based on the Common European Framework of Reference (CEFR), is used online to analyze texts and determine the band level of reading materials. Word count per unit (free and bound morphemes), sentence length, and the frequency of the lexicon employed in each text were the program's basic measurements. The software assigned levels to the texts based on these analytical statistics.
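Although the exact metrics of the CEFR Text Analyzer are internal to that tool, a minimal Python sketch of the kind of surface features it reports (word counts, sentence length, and lexical frequency) is given below; the high-frequency word list used here is a hypothetical stand-in, not the analyzer's own frequency bands.

```python
import re

def surface_features(text, frequent_words):
    """Compute simple surface statistics of the kind used to estimate text level.

    `frequent_words` is a set of high-frequency word forms (e.g., the most common
    2,000 words of English); the real Text Analyzer uses its own frequency bands,
    so this set is only a stand-in.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    word_count = len(words)
    avg_sentence_length = word_count / max(len(sentences), 1)
    # Share of tokens outside the high-frequency band: a rough proxy for lexical difficulty.
    off_list_ratio = sum(1 for w in words if w not in frequent_words) / max(word_count, 1)
    return {
        "word_count": word_count,
        "avg_sentence_length": avg_sentence_length,
        "off_list_ratio": off_list_ratio,
    }

# Toy usage with a tiny stand-in frequency list
features = surface_features(
    "Reading is a complex skill. It draws on language knowledge and reasoning.",
    frequent_words={"reading", "is", "a", "it", "and", "on", "skill", "language"},
)
print(features)
```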
The final test consisted of 85 (17*5) items. There were 16 item types corresponding to two sets of eight linguistic and eight non-linguistic components to be considered for investigation.
Data Collection Procedure
This study was performed in three stages. In the first stage, a group of seven university language professors reviewed twenty-four series of EPT examinations held monthly in Iran and selected intermediate-level texts. To confirm the selection, the texts were also checked with the Text Analyzer software, and 17 out of 96 texts were selected and subjected to content analysis. Each question was scrutinized by the experts and, depending on the purpose it had been identified to serve, placed under one of two broad categories: text-based and reader-based components. This classification was based on the Communicative Language Ability Model proposed by Bachman and Palmer (1996): knowledge of the world versus language competence. The text-based component was grouped into the sub-components of understanding vocabulary, using context, finding the main idea of the text, identifying details, getting facts, distinguishing facts from opinions, summarizing concepts, and sequencing events. The other, reader-based, component was divided into the sub-components of making inferences, understanding implicit cause and effect, understanding figurative language, understanding the author's point of view, visualizing ideas, understanding the author's purpose, drawing conclusions, and the need for prior knowledge. In the next stage, a group of 164 female examinees with intermediate language levels from the three disciplines of experimental sciences, mathematics, and humanities responded to the items in eleven ten-minute sessions. The testees' answers to each question were finally examined and analyzed.
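The coding scheme described above can be summarized as a simple lookup table. The sketch below is only an illustration of that classification; the identifier names are ours and mirror the sub-skill labels used later in Tables 1-3.

```python
# Lookup table mapping each sub-skill label to the broad component it was coded
# under (text-based = linguistic, reader-based = non-linguistic), as described above.
SUBSKILL_COMPONENT = {
    # Text-based (linguistic) sub-components
    "understanding_vocabulary": "linguistic",
    "using_context": "linguistic",
    "discovering_main_idea": "linguistic",
    "identifying_details": "linguistic",
    "getting_facts": "linguistic",
    "facts_and_opinions": "linguistic",
    "summarizing_concepts": "linguistic",
    "sequencing_events": "linguistic",
    # Reader-based (non-linguistic) sub-components
    "identifying_inferences": "non-linguistic",
    "understanding_cause_and_effects": "non-linguistic",
    "identifying_figurative_language": "non-linguistic",
    "understanding_point_of_view": "non-linguistic",
    "visualizing_ideas": "non-linguistic",
    "determining_authors_purpose": "non-linguistic",
    "drawing_conclusions": "non-linguistic",
    "using_prior_knowledge": "non-linguistic",
}

def component_of(subskill: str) -> str:
    """Return the broad component a test item measures, given its sub-skill label."""
    return SUBSKILL_COMPONENT[subskill]
```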
Results
The results of the study are as follows:
Exploring the First Research Question
Table 1 displays the item-total correlations for the linguistic components of the test. Based on these results, it can be concluded that "understanding vocabulary," with an item-total correlation of .70, made the highest contribution to the linguistic component of the reading test. It was followed by "using context" (.698), "summarizing concepts" (.605), "getting facts" (.584), and "identifying details" (.522). The interpretation of item-total correlations follows the same criteria as the Pearson correlation; i.e., values below .30 (Pallant, 2016; Field, 2018) indicate that the item has a low contribution to the total score. As displayed in Table 1, only one of the linguistic components of reading ability, "distinguishing between facts and opinions" (.234), showed a weak contribution to the linguistic component of reading ability. All other constructs had moderate to large (i.e., >= .30) contributions to the linguistic component of reading ability.
Table 1
Item-Total Statistics; Linguistic Component of Reading Ability
| Component | Corrected Item-Total Correlation |
| Facts and Opinions | .234 |
| Sequencing Events | .395 |
| Discovering Main Idea | .459 |
| Getting Facts | .584 |
| Identifying Details | .522 |
| Summarizing Concepts | .605 |
| Understanding Vocabulary | .700 |
| Using Context | .698 |
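For readers who wish to reproduce statistics of this kind, a minimal pandas sketch of a corrected item-total correlation (each sub-skill score correlated with the total of the remaining sub-skills) is shown below; the column names and toy scores are hypothetical, not the study data.

```python
import pandas as pd

def corrected_item_total(scores: pd.DataFrame) -> pd.Series:
    """Correlate each column with the sum of all remaining columns.

    `scores` has one row per test taker and one column per sub-skill score.
    """
    corrs = {}
    for col in scores.columns:
        rest_total = scores.drop(columns=col).sum(axis=1)
        corrs[col] = scores[col].corr(rest_total)  # Pearson correlation by default
    return pd.Series(corrs, name="corrected_item_total")

# Toy example with hypothetical sub-skill scores for six examinees
df = pd.DataFrame({
    "understanding_vocabulary": [4, 3, 5, 2, 4, 1],
    "using_context":            [5, 3, 4, 2, 4, 2],
    "facts_and_opinions":       [2, 4, 3, 3, 1, 2],
})
print(corrected_item_total(df))
```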
Table 2 displays the item-total correlations for the non-linguistic components of reading ability. Based on these results, it can be concluded that "drawing conclusions" (.719) made the highest contribution to the non-linguistic components of reading ability, followed by "identifying inferences" (.699), "determining the author's purpose" (.606), and "understanding cause and effects" (.593). "Visualizing ideas" (.156) was the only non-linguistic component of reading ability with a weak contribution to the total score. All other variables had moderate to large (i.e., >= .30) contributions to the non-linguistic components of reading ability.
Table 2
Item-Total Statistics; Non-Linguistic Component of Reading Ability
| Component | Corrected Item-Total Correlation |
| Understanding Point of View | .410 |
| Visualizing Ideas | .156 |
| Determining Author's Purpose | .606 |
| Drawing Conclusions | .719 |
| Identifying Figurative Language | .547 |
| Identifying Inferences | .699 |
| Understanding Cause and Effects | .593 |
| Using Prior Knowledge | .561 |
Unlike Tables 1 and 2, which display the contributions of the linguistic and non-linguistic components to their own total scores, Table 3 displays the contributions of both sets of components to overall reading ability. On average, the non-linguistic components had a higher item-total correlation than the linguistic components, i.e., .564 for the non-linguistic and .551 for the linguistic components.
The linguistic components with the highest and lowest item-total correlations were “understanding vocabulary” (.771) and “distinguishing between facts and opinions” (.267). The non-linguistic components with the highest and lowest item-total correlations were “identifying inferences” (.721) and “visualizing ideas” (.160).
Table 3
Item-Total Statistics of Linguistic and Non-Linguistic Components to Overall Reading Ability
| Component | Corrected Item-Total Correlation |
| Facts and Opinions | .267 |
| Sequencing Events | .429 |
| Discovering Main Idea | .499 |
| Getting Facts | .523 |
| Identifying Details | .501 |
| Summarizing Concepts | .695 |
| Understanding Vocabulary | .771 |
| Using Context | .726 |
| Average Linguistic | .551 |
| Understanding Point of View | .391 |
| Visualizing Ideas | .160 |
| Determining Author's Purpose | .648 |
| Drawing Conclusions | .711 |
| Identifying Figurative Language | .605 |
| Identifying Inferences | .721 |
| Understanding Cause and Effects | .688 |
| Using Prior Knowledge | .595 |
| Average Non-Linguistic | .564 |
Based on the results discussed above, the first research question can be answered. All components of linguistic and non-linguistic constructs had significant contributions to reading ability, except for the non-linguistic component of visualizing ideas, which had a weak and non-significant contribution to reading ability.
Exploring the Second Research Question
The Pearson correlations between the linguistic and non-linguistic components of reading ability are shown in Table 4. The correlation matrix is divided into three areas shaded in different colors: gray, blue, and pink. The gray and blue areas show the Pearson correlations among the linguistic and non-linguistic components of reading ability, respectively, while the pink area shows the correlations between the two sets of components. The heterotrait-monotrait (HTMT) ratio assumes that if the linguistic component of reading ability enjoys discriminant validity, the correlations among its components should be stronger than their correlations with the non-linguistic components.
The basic idea behind the HTMT ratio is that the average of the shared (pink) correlations should be lower than the square root of the product of the average within-component correlations, i.e., the gray and blue areas. The average Pearson correlations for the gray, blue, and pink areas were .328, .336, and .341, respectively. The HTMT ratio is computed as:
.341 / Sqrt (.325 * .336) = 1.02
As noted by Garson (2016) and Hair, Hult, Ringle, and Sarstedt (2017), an HTMT ratio higher than .90 indicates that the test does not enjoy discriminant validity. In other words, the shared correlations between the two sets of components are stronger than the correlations within each set. Based on these results, it can be concluded that the correlations among the linguistic components and among the non-linguistic components are weaker than their shared correlations.
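A minimal sketch of the HTMT computation paraphrased above, assuming a full correlation matrix over the sub-skill scores with the linguistic components listed first, might look as follows; this follows the geometric-mean form described by Garson (2016) and Hair et al. (2017) as summarized here, not the SmartPLS implementation itself.

```python
import numpy as np

def htmt(R: np.ndarray, n_first_block: int) -> float:
    """Heterotrait-monotrait ratio for two blocks of indicators.

    `R` is a symmetric correlation matrix whose first `n_first_block` rows/columns
    belong to the first block (here, the linguistic sub-skills) and the remaining
    rows/columns to the second block (the non-linguistic sub-skills).
    """
    k = n_first_block
    within_a, within_b, between = R[:k, :k], R[k:, k:], R[:k, k:]
    # Average within-block correlations, excluding the diagonal of ones.
    upper_mean = lambda M: M[np.triu_indices_from(M, k=1)].mean()
    mono_a, mono_b = upper_mean(within_a), upper_mean(within_b)
    hetero = between.mean()  # average correlation across the two blocks
    return hetero / np.sqrt(mono_a * mono_b)

# With the block averages reported above (.328/.325, .336, and .341),
# the ratio comes out at roughly 1.02-1.03, i.e., above the .90 threshold.
```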
Exploratory Factor Analysis on Components of Reading Ability
To explore the underlying constructs of the 16 components of the reading ability test, an exploratory factor analysis (EFA) using principal axis factoring and the varimax rotation method was run. Varimax rotation maximizes the dispersion of factor loadings within factors, so that a smaller number of variables load highly on each factor, resulting in more interpretable clusters of factors.
SPSS extracted four factors, which together accounted for 49.50 percent of the total variance. In other words, the four extracted constructs explained 49.50 percent of the variance in the components of reading ability.
Table 5
Rotated Factor Matrix (a); Components of Reading Ability

| Component | Factor 1 | Factor 2 | Factor 3 | Factor 4 |
| Using Context | .839 | .372 | | |
| Getting Facts | .634 | | | |
| Understanding Cause and Effects | .580 | | .391 | |
| Identifying Figurative Language | .572 | | .462 | |
| Understanding Vocabulary | .565 | .464 | .346 | |
| Identifying Details | .561 | | | |
| Determining Author's Purpose | .544 | .307 | | |
| Identifying Inferences | | .738 | .336 | |
| Understanding Point of View | | .654 | | |
| Drawing Conclusions | .386 | .626 | | |
| Sequencing Events | | .517 | | |
| Summarizing Concepts | .338 | .476 | .458 | |
| Using Prior Knowledge | .369 | .375 | | |
| Distinguishing between Facts and Opinions | | | .463 | |
| Visualizing Ideas | | | | |
| Discovering Main Idea | | .329 | | .627 |

Extraction Method: Principal Axis Factoring. Rotation Method: Varimax with Kaiser Normalization.
a. Rotation converged in 5 iterations.
Table 5 displays the rotated factor matrix of the linguistic and non-linguistic components of reading ability. The first factor included four linguistic components, i.e., Using Context, Getting Facts, Understanding Vocabulary, and Identifying Details, and three non-linguistic components, i.e., Understanding Cause and Effects, Identifying Figurative Language, and Determining Author's Purpose.
The second factor included two linguistic components, i.e., Sequencing Events and Summarizing Concepts, and four non-linguistic components, i.e., Identifying Inferences, Understanding Point of View, Drawing Conclusions, and Using Prior Knowledge. The results also showed that Distinguishing between Facts and Opinions and Discovering Main Idea loaded under the third and fourth factors, respectively. Visualizing Ideas did not have any meaningful (≥ .30) loading on any factor, and a considerable number of components loaded on more than one factor. From the loadings on the first factor, it can be inferred that scanning is one of the processes that leads to successful performance. The fourth factor, discovering the main idea, points to the importance of skimming, which results from literal and reorganizational processing in comprehending intermediate-level second language texts.
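For readers who wish to replicate this step outside SPSS, the sketch below shows one way to run a comparable analysis in Python. It assumes the third-party factor_analyzer package and a hypothetical file of sub-skill scores; its "principal" extraction is only an approximation of SPSS's principal axis factoring, so the loadings would not be expected to match Table 5 exactly.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer  # third-party package, assumed installed

# Hypothetical data: one column per reading sub-skill (16 columns),
# one row per test taker.
scores = pd.read_csv("subskill_scores.csv")  # hypothetical file name

# Four-factor extraction with varimax rotation, mirroring the analysis above.
fa = FactorAnalyzer(n_factors=4, rotation="varimax", method="principal")
fa.fit(scores)

# Rotated loadings (analogue of Table 5); hide loadings below .30 for readability.
loadings = pd.DataFrame(
    fa.loadings_,
    index=scores.columns,
    columns=[f"Factor {i + 1}" for i in range(4)],
)
print(loadings.where(loadings.abs() >= 0.30).round(3))

# Cumulative proportion of variance explained by the four factors
# (reported above as 49.50 percent).
print(fa.get_factor_variance()[2])
```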
Partial Least Square Model
A partial least squares model was run to address two questions: first, to what extent the components of the linguistic and non-linguistic constructs contribute to their respective constructs, and second, to what extent the overall linguistic and non-linguistic constructs contribute to reading ability. Figure 1 displays the standardized regression weights between each component and the overall linguistic and non-linguistic constructs, as well as the latter two constructs' contributions to reading ability. Before discussing the results, it should be noted that the standardized regression weights (b) can be interpreted using the same criteria as Pearson correlations, i.e., .1 and below = weak, .3 = moderate, and .5 and above = large. The t-values and their probabilities indicate the statistical significance of the standardized regression weights, and the 95% bootstrap confidence intervals (bounded by the 2.5th and 97.5th percentiles) show the ranges within which the estimates from 1,000 bootstrap resamples may fluctuate. If the lower bound of a confidence interval is negative or zero, the corresponding standardized regression weight cannot be distinguished from zero; i.e., it may have been obtained by chance.
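The bootstrap logic behind these confidence intervals can be illustrated with a small simulation. The sketch below does not reproduce the PLS estimation itself; on simulated data, it simply shows how a standardized weight and its 2.5th and 97.5th percentile bootstrap bounds are obtained, which is the criterion applied to each path in Tables 6 and 7.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a component score (x) and a construct score (y)
# for 164 test takers, matching the sample size of the study.
n = 164
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)

def std_beta(x, y):
    # Standardized simple-regression weight = Pearson r between the two variables.
    return np.corrcoef(x, y)[0, 1]

# 1,000 bootstrap resamples, as in the model reported above.
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)          # resample cases with replacement
    boot.append(std_beta(x[idx], y[idx]))

lower, upper = np.percentile(boot, [2.5, 97.5])
print(f"b = {std_beta(x, y):.3f}, 95% bootstrap CI [{lower:.3f}, {upper:.3f}]")
# If the lower bound is at or below zero, the weight cannot be distinguished
# from zero; this is the criterion that flagged Visualizing Ideas.
```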
Figure 1
Relationships between components of constructs (standardized regression weights)
Based on these results and the results displayed in Table 6 and Figure 2, it can be concluded that all linguistic components had significant contributions to their construct: none of the lower confidence bounds was negative or zero, and all standardized regression weights showed moderate to large effect sizes, i.e., ≥ .30.
Similarly, all non-linguistic components had significant contributions to their construct, except for Visualizing Ideas, which made only a weak contribution to the non-linguistic construct and whose lower confidence bound was negative and close to zero, i.e., -.005.
Table 6 Standardized Regression Weights; Contributions of Linguistic and Non-Linguistic Components to their Constructs

| Path | B | Mean | SD | t-value | p-value | 2.5% | 97.5% |
|---|---|---|---|---|---|---|---|
Cause <- Non-Linguistic | 0.741 | 0.739 | 0.038 | 19.561 | 0.000 | 0.654 | 0.802 |
Concept <- Linguistic | 0.744 | 0.745 | 0.035 | 21.458 | 0.000 | 0.667 | 0.804 |
Conclude <- Non-Linguistic | 0.811 | 0.810 | 0.030 | 27.248 | 0.000 | 0.746 | 0.861 |
Context <- Linguistic | 0.810 | 0.812 | 0.030 | 26.812 | 0.000 | 0.747 | 0.863 |
Detail <- Linguistic | 0.638 | 0.638 | 0.050 | 12.810 | 0.000 | 0.529 | 0.721 |
Facts <- Linguistic | 0.322 | 0.316 | 0.083 | 3.889 | 0.000 | 0.129 | 0.463 |
Figurative <- Non-Linguistic | 0.669 | 0.668 | 0.052 | 12.932 | 0.000 | 0.559 | 0.761 |
Get-Facts <- Linguistic | 0.678 | 0.677 | 0.045 | 14.967 | 0.000 | 0.573 | 0.753 |
Inference <- Non-Linguistic | 0.815 | 0.813 | 0.029 | 28.450 | 0.000 | 0.753 | 0.863 |
Knowledge <- Non-Linguistic | 0.685 | 0.686 | 0.045 | 15.390 | 0.000 | 0.590 | 0.764 |
Main-Idea <- Linguistic | 0.594 | 0.593 | 0.065 | 9.132 | 0.000 | 0.445 | 0.708 |
Point-of-view <- Non-Linguistic | 0.459 | 0.460 | 0.072 | 6.354 | 0.000 | 0.306 | 0.588 |
Purpose <- Non-Linguistic | 0.730 | 0.730 | 0.039 | 18.525 | 0.000 | 0.644 | 0.795 |
Sequence <- Linguistic | 0.493 | 0.494 | 0.066 | 7.477 | 0.000 | 0.347 | 0.608 |
Visualize <- Non-Linguistic | 0.213 | 0.212 | 0.106 | 2.001 | 0.046 | -0.005 | 0.407 |
Vocab <- Linguistic | 0.816 | 0.815 | 0.023 | 34.749 | 0.000 | 0.766 | 0.858 |
Figure 2
Relationships between components of the construct (t-values)
Based on the results displayed in Table 7, it can be concluded that:
- Overall, the non-linguistic construct has a significant and large contribution to reading ability (b = .562, t = 38.37, p = .000, 95% CI [.533, .589]).
- The overall linguistic construct has a significant but moderate contribution to reading ability (b = .472, t = 30.36, p = .000, 95% CI [.442, .504]).
Table 7 Standardized Regression Weights; Contributions of Linguistic and Non-Linguistic Components to Reading Ability | |||||||
| Path | B | Mean | SD | t-value | p-value | 2.5% | 97.5% |
|---|---|---|---|---|---|---|---|
Linguistic <- Reading-Ability | 0.472 | 0.473 | 0.016 | 30.361 | 0.000 | 0.442 | 0.504 |
Non-Linguistic <- Reading-Ability | 0.562 | 0.560 | 0.015 | 38.366 | 0.000 | 0.533 | 0.589 |
Based on these results the second research question can be answered. The non-linguistic component had a higher contribution to reading ability.
Discussion
Throughout the last fifty years of research, reading has been conceived of as an interaction between the text, the reader, and the context, with different eras treating one of the three as the dominant factor (Pearson & Cervetti, 2015). Theoretically, these three factors are assumed to affect reading comprehension to roughly the same degree (Cervetti, Hiebert, Pearson, & McClung, 2015). The text ruled the comprehension process in the behaviorism-dominated period, and the term close reading implied the readers' dependency on texts to generate understanding (Pearson & Johnson, 1978); accordingly, literal comprehension received the most emphasis. The first factor, which highlights the importance of scanning as well as a reorganizational understanding of the text, is in agreement with this behaviorism-dominated view. During the dominance of cognitive psychology, the "inside out" knowledge of readers received more emphasis, and reading was defined as the semantic product of prior knowledge (Rosenblatt, 1938, 1968). Readers' story schemata (Kintsch, 1974; Rumelhart, 1977; Stein & Glenn, 1979), schema theory (Anderson & Pearson, 1984), and readers' metacognitive knowledge (Baker & Brown, 1984) emerged from this era, in which the non-linguistic knowledge of readers was held to predict comprehension success and the linguistic knowledge of the text was perceived as a lower-order resource for text comprehension. A glance at the fourth factor of the matrix supports the claims of cognitive psychology: knowing the subject and extracting the main idea of a text can, by itself, help second language readers overcome even their language proficiency weaknesses.
The present study aimed at determining the load of text-based linguistic and reader-based non-linguistic components of an EPT-based sample multiple-choice test on the basis of an a priori, product-oriented content analysis of the items. Correlational item analysis revealed that, among the text-based components of the reading construct, understanding vocabulary had the highest contribution to text understanding (0.7), while distinguishing between facts and opinions showed a weak contribution (0.2) to the linguistic components of reading ability. Several researchers have confirmed the importance of vocabulary knowledge for better understanding academic L2 reading texts (Laufer, 1989; Hu & Nation, 2000; Schmitt, Jiang, & Grabe, 2011). On the other hand, Praveen and Rajan (2013) suggested that raising students' awareness of a reading text would help them distinguish facts from opinions; the weak contribution of this sub-component in our data at least hints that a lack of such awareness-raising may be the reason.
Another promising finding was that using context played an important role in understanding the written materials. Other researchers, such as Brimo, Apel, and Fountain (2017), have shown that syntactic knowledge and syntactic awareness have a significant relationship with reading comprehension among adolescent students. By contrast, Shiotsu and Weir (2007) asserted the relative superiority of syntactic knowledge even over vocabulary knowledge. Additionally, the findings of Gottardo, Mirza, Koh, Ferreira, and Javier (2018) emphasized the impact of syntactic knowledge, alongside vocabulary knowledge, on Spanish-speaking adolescents' L2 reading comprehension.
From the results of the present study, it is clear that discovering main ideas as well as identifying details and facts contribute to effective L2 reading, but only after sufficient knowledge of grammar, vocabulary, and discourse has been acquired (Birch, 2002). This result ties in well with previous studies in which skimming involves quickly understanding the propositional meaning of the text (Urquhart & Weir, 1998) and is hence a less challenging metacognitive strategy. Metacognitive strategies are believed to monitor breakdowns in reading comprehension and to enhance overall understanding of texts (Cervetti et al., 2015).
Among the reader-based components of reading comprehension ability, the slight superiority of drawing conclusions over identifying inferences is worth discussing. This result is consistent with Ranjbaran and Alavi's (2017) finding that synthesizing in order to draw a conclusion seems to be a more challenging reading attribute than drawing inferences. A similar conclusion was reached by Fletcher (2006). Likewise, understanding cause and effect, which depends on identifying implicit information, was found to have a higher correlation with overall reading comprehension of the text. This is directly in line with the earlier findings of Lumley (1993). The explanation could be that inferencing involves more complex cognitive processing than other reading comprehension strategies (Ranjbaran & Alavi, 2017).
In addition, according to Barrett's taxonomy of reading comprehension (as cited in Clymer, 1968), the lowest cognitive process is literal comprehension combined with reorganization, which corresponds to the A2 and B1 levels of reading proficiency in the CEFR (2018). Reading between the lines (Gray, 1960), which sits at the same level as inference in Barrett's taxonomy, and reading beyond the lines are referred to as higher-level processing, whereas lower-level processing covers information stated explicitly in the text. From the results, it is clear that determining the author's purpose, understanding point of view, and identifying figurative language require more inferential comprehension; a plausible explanation is that they involve higher-level cognitive processing in terms of Barrett's taxonomy.
Conclusion
As the results of the present study indicate, text-based linguistic items contribute moderately to the overall test score, while reader-based, top-down processing items contribute strongly to the reading ability construct. As discussed earlier, L2 reading comprehension differs from L1 reading comprehension in that L2 readers need more time and concentration in the word recognition phase of the reading process, especially when their first language uses a different alphabetic system; as a result, they may lag behind first language readers at the same intermediate level. Considering this difference between L1 and L2 reading, it is reasonable to place L2 intermediate readers somewhere between the second and third phases of reading comprehension development. Furthermore, since L2 reading comprehension is both a matter of reading ability and a matter of L2 language proficiency, students' L1 reading ability can help them overcome their problems in L2 reading comprehension.
The present study attempted to address this controversial issue in the literature by determining the loads of different text-based and reader-based components of L2 reading ability on the basis of a standard componential test design. The results showed that the reader-based components of L2 reading ability contributed even more strongly to the students' overall performance. This speaks to the question of whether helping students improve their general cognitive comprehension abilities can be effective in deriving more information from L2 written materials.
We speculate that the superiority of the reader-based components of L2 reading ability might be due to poor conflict management skills. As discussed previously, poor L2 readers may be less proficient in their L2 compared to their L1, or they may have problems with the executive attention that supports conflict management.
The study has useful implications for L2 classes in which L1-literate learners are beginning to work with L2 reading texts. Since lower-intermediate learners' problems with L2 comprehension may reflect problems in their L1 reading ability, it is advisable to spend a portion of class time teaching more general higher-level thinking abilities such as inferencing, synthesizing, and evaluating. Activities that promote such reasoning, even in the L1, may resolve part of these learners' struggles with L2 texts. Time limits arising from L2 readers' slower language processing can also create difficulties; thus, L2-based sentence parsing practice that eases the load on working memory by promoting automatization may compensate for these time-related problems.
Extensive reading has also been suggested by several researchers (e.g., Day & Bamford, 1998) as one of the most beneficial activities for L2 intermediate readers seeking to enhance their reading ability. Texts of around 100 words may help lower-intermediate readers bootstrap their way to better performance in L2 reading comprehension (Walter, 2007).
In summary, the possibility of applying the taxonomies of reading ability to teaching, testing, and material development practices has made it a promising task for researchers to investigate their cognitive and psychological contributions to second language reading development. Despite all the limitations that potentially threaten the reliability and validity of research on the nature of reading ability, validated models can help educational practitioners to make informed choices on the content they can include in textbooks, language classes, and language tests.
References
Afflerbach, P. (2017). Understanding and using reading assessment. Alexandria, Virginia, USA: ASCD
Ahmed, Y., Francis, D. J., York, M., Fletcher, J. M., Barnes, M., & Kulesz, P. (2016). Validation of the direct and inferential mediation (DIME) model of reading comprehension in grades 7 through 12. Contemporary Educational Psychology, 44, 68-82.
Alderson, J. C. (2000). Assessing reading. Cambridge: Cambridge University Press.
Anderson, P. C. & Pearson, P. D. (1984). A schematic-theoretic view of basic processes in reading. In P. D. Pearson (Ed.), Handbook of reading research (pp.255-292). White Plains, NY: Longman.
Bachman, L. F. (1990). Fundamental considerations in language testing. Oxford university press.
Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice: Designing and developing useful language tests (Vol. 1). Oxford University Press.
Bachman, L., & Palmer, A. S. (2010). Language testing in practice. Oxford: Oxford University Press.
Baker, L., & Brown, A. L. (1984). Metacognitive skills and reading. In P. D. Pearson, R. Barr, M. L. Kamil, & P. Mosenthal (Eds.) Handbook of research in reading (pp. 353-394). White Plains, NY: Longman.
Bernhardt, E. B. (1993). Reading development in a second language: Theoretical, empirical, & classroom perspectives. Norwood, NJ: Ablex Publishing Corporation.
Bernhardt, E. B. (1991). A psycholinguistic perspective on second language literacy. In J. H. Hulstijn & J. F. Matter (Eds.), Reading in two languages, AILA Review 8 (pp. 31-44). Amsterdam.
Bernhardt, E., & Kamil, M. (1995). Interpreting relationships between L1 and L2 reading: Consolidating the linguistic threshold and the linguistic interdependence hypotheses. Applied Linguistics, 16, 15-34.
Birch, B. M. (2002). English L2 reading: Getting to the bottom. London: Routledge.
Brimo, D., Apel, K., & Fountain, T. (2017). Examining the contributions of syntactic awareness and syntactic knowledge to reading comprehension. Journal of Research in Reading, 40(1), 57-74.
Buck, G., Tatsuoka, K. K., & Kostin, I. (1997). The sub-skills of reading: Rule-space analysis of a multiple-choice test of second language reading comprehension. Language Learning, 47, 423-466.
Canale, M., & Swain, M. (1980). Theoretical bases of communicative approaches to second language teaching and testing. Applied linguistics, 1(1), 1-47.
Carrell, P. L. (1983). Three components of background knowledge in reading comprehension 1. Language learning, 33(2), 183-203.
Carroll, J. B. (1993). Human cognitive abilities. Cambridge: Cambridge University Press.
Cheng, L. (2005). Changing language teaching through language testing: A washback study (Vol. 21). Cambridge University Press.
Clarke, M. (1979). Reading in Spanish and English: Evidence from adult ESL students. Language Learning, 29, 121-150.
Clarke, M. (1980). The short circuit hypothesis of ESL reading: Or when language competence interferes with reading performance. Modern Language Journal, 64, 203-209.
Clymer, T. (1968). What is reading? Some current concepts. Innovation and Change in Reading Instruction. National Society for the Study of Education, Chicago.
Council of Europe. Council for Cultural Co-operation. Education Committee. Modern Languages Division. (2001). Common European Framework of Reference for Languages: learning, teaching, assessment. Cambridge University Press.
Cummins, J. (1979). Linguistic interdependence and the development of bilingual children. Review of Educational Research, 49, 222-251.
Coady, J. (1979). A psycholinguistic model for the ESL reader. In R. MacKay, B. Barkman, & R. R. Jordan (Eds.), Reading in a second language: Hypothesis, organization and practice (pp. 5- 12). Rowley, MA: Newbury House.
Cervetti, G. N., Hiebert, E. H., Pearson, P. D., & McClung, N. A. (2015). Factors that influence the difficulty of science words. Journal of Literacy Research, 47(2), 153-185.
Danuwijaya, A. A. (2018). Item Analysis Of Reading Comprehension Test For Post-Graduate Students. English Review: Journal of English Education, 7(1), 29-40.
Davis, A. (1968). Language testing symposium: A psycholinguistic approach. London: Oxford University Press.
Davidson, F. (2008). The straightjacket and the blessing of the canon. Language Assessment Quarterly, 5(3), 267-274.
Davidson, F., & Lynch, B. K. (2002). Testcraft: A teacher’s guide to writing and using language test specifications. New Heaven. CT: Yale University Press.
Day, R. R., & Bamford, J. (1998). Reading in the second language classroom. Cambridge: Cambridge University Press.
Day, R. R., & Park, J. S. (2005). Developing Reading Comprehension Questions. Reading in a foreign language, 17(1), 60-73.
Earl, L. M. (2012). Assessment as learning: Using classroom assessment to maximize student learning. Corwin Press.
Ellis, R., Tanaka, Y., & Yamazaki, A. (1994). Classroom interaction, comprehension, and the acquisition of L2 word meanings. Language learning, 44(3), 449-491.
Estaji, M., & Zhaleh, K. (2020). Does Field of Study Matter in Academic Performance: Differential Item Functioning Analysis of a High-Stakes Test Using One-Parameter and Two-Parameter Item Response Theory Models. Iranian Journal of English for Academic Purposes, 9(3), 14-31.
Farhady, H., & Hessamy, G. R. (2005). Construct validity of L2 reading comprehension skills. Iranian Journal of Applied Linguistics, 8(2), 29-53.
Fletcher, J. M. (2006). Measuring reading comprehension. Scientific Studies of Reading, 10(3), 323–330.
Garson, G. D. (2016). Partial least squares: Regression and structural equation models. Asheboro, NC: Statistical Associates Publishers.
Gernsbacher, M.A. (1985) Surface information loss in comprehension. Cognitive Psychology, 17, 324–363.
Grabe, W. (2009). Reading in a second language: Moving from theory to practice. Cambridge: Cambridge University Press.
Grabe, W. (1991). Current developments in second language reading research. TESOL Quarterly, 25(3), 375-406.
Grabe, W. & Stoller, F. L. (2002). Teaching and researching reading. London: Longman.
Gray, W. S. (1960). The major aspects of reading. In H. Robinson (Ed.), sequential development of reading abilities (Vol. 90, pp.8-24). Chicago: Chicago University Press.
Gottardo, A., Mirza, A., Koh, P. W., Ferreira, A., & Javier, C. (2018). Unpacking listening comprehension: The role of vocabulary, morphological awareness, and syntactic knowledge in reading comprehension. Reading and Writing, 31(8), 1741-1764.
Hair Jr, J. F., Hult, G. T. M., Ringle, C., & Sarstedt, M. (2017). A primer on partial least squares structural equation modeling (PLS-SEM). Sage publications.
Hemmati, S. J., Baghaei, P., & Bemani, M. (2016). Cognitive diagnostic modeling of L2 reading comprehension ability: Providing feedback on the reading performance of Iranian candidates for the university entrance examination. International Journal of Language Testing, 6(2), 92-100.
Hoover, W. A., & Tunmer, W. E. (1993). The components of reading. In G. G., Thompson, W. E. Tunmer, & T. Nicholson (Eds.), Reading acquisition processes (pp. 1-19). Clevedon: Multilingual Matters Ltd.
Hu, M., & Nation, I. S. P. (2000). Vocabulary density and reading comprehension. Reading in a Foreign Language, 23, 403–430.
Jiang, X. (2011). The role of first language literacy and second language proficiency in second language reading comprehension. The Reading Matrix, 11(2), 177-190.
Jiang, X., & Grabe, W. (2011). Skills and strategies in foreign language reading. La lectura en lengua extranjera, 2-31.
Kintsch, W. (1974). The representation of meaning in memory. Hillsdale, NJ: Erlbaum.
Kent State University. (2020, August 11). Three level comprehension guide for active reading. https://www-s3-live.kent.edu/s3fs-root/s3fspublic/file/Three%20Level%20Comprehension%20Guide%20for%20Active%20Reading.pdf
Koda, K. (2005). Insights into second language reading. New York: Cambridge University Press.
Koda, K. (2007). Reading and language learning: Crosslinguistic constraints on second language reading development. In K. Koda (Ed.), Reading and language learning (pp. 1-44). Special issue of Language Learning Supplement, 57, 1-44.
Laufer, B. (1989). What percentage of text-lexis is essential for comprehension? In C. Lauren & M. Nordman (Eds.), Special language: From humans to thinking machines (pp. 316–323). Clevedon, England: Multilingual Matters.
Lumley, T. (1993). The notion of sub-skills in reading comprehension test: An EAP example. Language Testing, 10(3), 211–234.
Lunzer, E., Waite, M., & Doltan, T. (1979). Comprehension and comprehension test. In E. Lunzer & K. Gardner (Eds.), The effective use of reading (pp. 37-71). London: Heinemann Educational Books Ltd.
Moeini Asl, H. R. (2002). Construct validation of reading comprehension tests. Unpublished MA thesis, University for Teacher Education, Tehran.
Munby, J. (1978). Communicative syllabus design. Cambridge: Cambridge University Press.
OECD (2019), PISA 2018 results (Volume I): What students know and can do, PISA, OECD Publishing, Paris, https://doi.org/10.1787/5f07c754-en.
Paris, S. G., & Hamilton, E. E. (2009). The development of children’s reading comprehension. In S. E. Israel & G. G. Duffy (Eds.), Handbook of research on reading comprehension (pp. 32- 53). New York: Routledge.
Pearson, P. D., & Cervetti, G. N. (2015). Fifty years of reading comprehension theory and practice. Research-based practices for teaching Common Core literacy, 1-24.
Pearson, P. D., & Johnson, D. D. (1978). Teaching reading comprehension. New York: Rinehart and Winston.
Perfetti, C. (1985). Reading ability. New York: Oxford University Press.
Perfetti, C. (1992). The representation problem in reading acquisition. In P. Gough, L. Ehri, & R. Treiman (Eds.), Reading acquisition. Hillsdale, NJ: Lawrence Erlbaum.
Perfetti, C. (2007). Reading ability to comprehension. Scientific Studies of Reading, 8, 357-383.
Perfetti, C., & Hart, L. (2001). The lexical basis of comprehension skill. In D. Gorfien (Ed.), On the consequences of meaning selection (pp. 67-86). Washington, DC: American Psychological Association.
Praveen, S. D., & Rajan, P. (2013). Using Graphic Organizers to Improve Reading Comprehension Skills for the Middle School ESL Students. English Language Teaching, 6(2), 155-170.
Ramezaney, M. (2014). The washback effects of university entrance exam on Iranian EFL teachers’ curricular planning and instruction techniques. Procedia-Social and Behavioral Sciences, 98, 1508-1517.
Ranjbaran, F., & Alavi, S. M. (2017). Developing a reading comprehension test for cognitive diagnostic assessment: A RUM analysis. Studies in Educational Evaluation, 55, 167-179.
Rosenblatt, L. M. (1938, 1968). Literature as exploration. New York: Noble and Noble, Publishers.
Rumelhart, D. E. (1985). Towards an interactive model of reading. In H. Singer & R.B. Ruddell (Eds.), Theoretical models and processes of reading. Newark, Delaware: International Reading Association.
Rumelhart, D. E. (1977). Understanding the summarizing stories. In D. LaBerge & S. J. Samuels (Eds.) Basic processes in reading perception and comprehension (pp. 265-303). Hillsdale, NJ: Lawrence Erlbaum.
Saville, N. (2012). Quality management in test production and administration. In G. Fulcher and F. Davidson: Routledge handbook of language testing (pp. 395-412). London: Routledge.
Schmitt, N., Jiang, X., & Grabe, W. (2011). The percentage of words known in a text and reading comprehension. The Modern Language Journal, 95(1), 26-43.
Shahmirzadi, N., Siyyari, M., Marashi, H., & Geramipour, M. (2020). Selecting the Best Fit Model in Cognitive Diagnostic Assessment: Differential Item Functioning Detection in the Reading Comprehension of the PhD Nationwide Admission Test. Journal of Language and Translation, 10(3), 1-15.
Stein, N. L., & Glenn, C. G. (1979). An analysis of story comprehension in elementary school children. New Directions in Discourse Processing, 2, 53-120.
Shiotsu, T., & Weir, C. J. (2007). The relative significance of syntactic knowledge and vocabulary breadth in the prediction of reading comprehension test performance. Language Testing, 24(1), 99-128.
Stanovich, K. E. (2000). Progress in understanding reading: Scientific foundations and new frontiers. New York: Guilford Press.
Tiwari, P. R. (2021). Reading Comprehension of Grade 8 Students: A Glimpse of Item Piloting. Educational Assessment, 81.80-96.
Urquhart, A. H., Weir, C. J. (1998). Reading in a second language: process, product, and practice. New York: Longman.
Vandergrift, L., & Goh, C. C. M. (2012). Teaching and learning second language listening: Metacognition in action. New York: Routledge.
Walter, C. (2007). First‐to second‐language reading comprehension: not transfer, but access. International Journal of Applied Linguistics, 17(1), 14-37.
Weir, C., Huizhong, Y., & Yan, J. (2000). An empirical investigation of the componentiality of L2 reading in English for academic purposes (Vol. 12). Cambridge University Press.
Weir, C. J. (2005). Language testing and validation. Hampshire: Palgrave McMillan.
Williams, E. & Moran, C. (1989). Reading in a foreign language at intermediate and advanced levels with particular reference to English. Language Teaching, 22 (4), 217-228.
Yamasaki, B. L., & Prat, C. S. (2021). Predictors and consequences of individual differences in cross-linguistic interactions: A model of second language reading skill. Bilingualism: Language and Cognition, 24(1), 154-166.
Zandi, H., Kaivanpanah, S., & Alavi, S. M. (2014). The Effect of Test Specifications Review on Improving the Quality of a Test. Iranian Journal of Language Teaching Research, 2(1), 1-14.
Zhang, L. (2018). Metacognitive and cognitive strategy use in reading comprehension: A structural equation modelling approach. Singapore: Springer.
Zwaan, R., & Rapp, D. (2006). Discourse comprehension. In M. A. Traxler & M. A. Gernsbacher (Eds.), Handbook of psycholinguistics (2nd ed., pp. 725-764). Burlington, MA: Academic Press.
Biodata
Roshanak Rezaei is a Ph.D. candidate in TEFL at Islamic Azad University, Malayer Branch. She has taught English at the Ministry of Education for more than thirty years. Currently, she is an English teacher and translator at PNU of Hamedan and Malayer and in other education systems in Iran. Her major areas of research are language learning and language testing.
Email: rezaei.roshanak@gmail.com
Faramarz Aziz Malayeri is an Associate Professor of English Language Teaching and a faculty member in the Department of English Language, Malayer Branch, IAU, Malayer, Iran. He currently teaches graduate and postgraduate courses, and his main areas of research interest include foreign language reading comprehension and language assessment. He has published articles in these areas in various journals.
Email: faramarzazizmalayerie@gmail.com
Abbas Bayat is an Associate Professor in TEFL (Teaching English as a Foreign Language) and a faculty member in the Department of English Language, Malayer Branch, IAU, Malayer, Iran. He has published articles in his areas of research interest, most importantly task-based language teaching (TBLT) and its assessment procedures.
Email: abbasbayat_305@yahoo.com
Hossein Ahmadi is an Assistant Professor of English Language Teaching and a faculty member in the department of English language, Malayer branch, IAU, Malayer, Iran. He has published books and articles on English Language Teaching. His current research areas of interest are interlanguage pragmatics, teaching language skills and sub-skills, and task-based language teaching.
Email: h.ahmadi@malayeriau.ac.ir
© 2024 by the authors. Licensee International Journal of Foreign Language Teaching and Research, Najafabad, Iran. This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license (http://creativecommons.org/licenses/by-nc/4.0/).