Language Assessment Literacy Instruction: Inquiry-Based vs. Expository Approaches to Enhance Student Teachers’ Conceptions of Assessment
Mohammad Khojaste Mehr, ELT Department, West Tehran Branch, Islamic Azad University, Tehran, Iran
Mojtaba Mohammadi*, ELT Department, West Tehran Branch, Islamic Azad University, Tehran, Iran
Hessameddin Ghanbar, English Department, Fereshtegaan International Branch, Islamic Azad University, Tehran, Iran
ABSTRACT
This convergent mixed-methods study explored the effectiveness of inquiry-based and expository instruction in shaping student teachers' conceptions of assessment. Forty-three BA-level TEFL sophomores were assigned to either an experimental or control group, receiving instruction through inquiry-based or expository approaches, respectively. The pre- and post-intervention phases involved administering the Students' Conceptions of Assessment (SCoA) questionnaire (revalidated for Iranian EFL student teachers) and a metaphor analysis survey in both groups. The study yielded a key finding: a notable divergence between the quantitative and qualitative results. While the metaphor analysis indicated a significant shift in assessment conceptions post-intervention, the quantitative data (SCoA questionnaire) revealed no statistically significant differences in assessment conceptions across instruction types. Despite the lack of statistically significant differences, it is nonetheless possible that student teachers gained valuable insights into their own assessment perceptions within the learning and teaching process. These findings hold pedagogical implications for EFL learners and teachers alike.
Keywords: Conceptions of Assessment, Expository approach, Inquiry-based approach, Language assessment literacy instruction
INTRODUCTION
Assessment significantly impacts the academic lives of English as a Foreign Language (EFL) learners: it shapes their learning processes and informs future decisions on a variety of matters, including careers (Chen & Brown, 2018). Given that assessment carries consequences, students are likely to modify their beliefs about assessment in response to the repercussions they encounter (Brown et al., 2009). Brown et al. argued that assessment can exert both beneficial and detrimental influences on students' education. Assessment serves as an adaptive tool when students use assessment evidence to identify their educational needs and take appropriate corrective action on deficiencies; it becomes maladaptive when learners perceive assessments as something to be disregarded (Brown et al., 2009).
In recent times, the concept of assessment literacy has gained significant attention within the field, alongside other "literacies" (Taylor, 2013). In addition to possessing content and pedagogical knowledge, teachers are expected to be proficient in assessment. A variety of conceptual frameworks have been suggested to represent language assessment literacy, including the professional development program model by Brindley (2001), the three competency-related questions by Inbar-Lourie (2008a), the spider web model by Taylor (2013), and, most recently, the work of Kremmel and Harding (2020). These frameworks have compiled, both theoretically and empirically, a comprehensive inventory of competencies catering not only to classroom instructors but also to university administrators, professional language test/assessment researchers, and language test/assessment developers. Instructors can potentially enhance their assessment skills through its implementation, while language learners may develop a significantly altered perception of assessment, becoming more self-reliant, cognizant, and accountable for their accomplishments (Brown, 2008). Sevimel-Sahin (2021) enumerated the qualities essential for a language-assessment-literate instructor: the ethical application of assessment; the selection, development, and analysis of tests and assessments; the ability to differentiate between sound and unsound assessments; the function of tests in relation to instructional approaches; the influence of tests on learning and teaching; and the ramifications of tests for institutions and society as a whole. These are the fundamental competencies every educator must possess in order to select a suitable form of assessment in the classroom and to plan, manage, and empower instruction (DeLuca et al., 2016; Gotch & French, 2014; Scarino, 2013; Siegel & Wissehr, 2011). Teacher education programs have the capacity to foster this consciousness (DeLuca & Klinger, 2010; Lam, 2015). Notwithstanding this noted importance, there is a scarcity of empirical research examining how instructional methodologies designed to impart assessment literacy affect students' perceptions of assessment (DeLuca et al., 2016). By implementing inquiry-based and expository instructional approaches, the researchers of this study sought to illuminate the efficacy of these methodologies in altering pre-service English teachers' perceptions of assessment.
LITERATURE REVIEW
Language Assessment Literacy (LAL)
The concept of "assessment literacy" was introduced to the field of language education shortly after Stiggins (1991) coined the term. It was first mentioned by Brindley (2001), who emphasized the dearth of research in teacher education programs regarding the assessment practices of educators. According to Vogt and Tsagari (2014), language assessment literacy (LAL) can be described as "the capacity to monitor, evaluate, grade, and score assessments using theoretical knowledge, in addition to designing, developing, and critically assessing tests and other assessment procedures" (p. 377). Davies (2008) proposed three competencies for language instructors in his model for LAL: competencies (including statistics and item writing), knowledge (including a comprehensive understanding of measurement concepts), and principles (including the proper utilization of assessments and associated concerns such as fairness and ethics). The shortcomings of earlier conceptualizations included the exclusive emphasis on the classroom teacher as the sole stakeholder in LAL literature and the expectation that all stakeholders possess a certain level of literacy. Taylor (2013) and, more recently, Kremmel and Harding (2020) proposed frameworks that dealt with this issue by taking into account the proportions of each competency that other stakeholders (such as professional test developers, test administrators, researchers, etc.) ought to possess. Consistent with the LAL conceptualization, it was deemed necessary to educate key stakeholders on LAL. Learners were included on the list of stakeholders of LAL due to the increasing influence of teachers as the primary decision-makers in curriculum development, on the one hand, and the importance of high-stakes assessments for various objectives including immigration, vocational certifications, and employment, on the other. This raises the level of LAL proficiency among teachers even further, necessitating a greater focus on the instructional strategies that are employed when instructing LAL.
Inquiry-based Instruction
In the realm of pedagogy as well as in everyday life occurrences, inquiry entails the pursuit of knowledge and explanations (Chiappetta & Adams, 2004). Learners assume ownership of their learning experiences and accountability for their learning processes through inquiry-based learning (IBL), which entails posing preliminary inquiries regarding problems, endeavoring to comprehend them, and identifying suitable resolutions (Caswell & LaBrie, 2017). In this context, learning responsibility signifies that the desire and readiness of the learners are the source of the learning process, as opposed to external influences (Spronken-Smith & Walker, 2010). According to Bell et al. (2005), the crux of inquiry lies in educators motivating students to actively seek solutions through the analysis and dissemination of data. Problem-solving and the resolution of issues are the focal points of IBL, which is a teaching methodology (Maxwell et al., 2015). Guido (2017) distinguished between inquiry-based learning and inquiry-based teaching by stating that the former involves problem-solving, whereas the latter establishes an environment conducive to analysis and comprehension, which go beyond mere curiosity.
IBL is grounded in constructivism and posits that individuals generate knowledge and significance by drawing from their own experiences (Tamim & Grant, 2013). According to John Dewey (1998, as cited in Mapes, 2009), learning can be understood as IBL in which meaning and knowledge are constructed through the examination of evidence. It was his firm conviction that students must cultivate their problem-solving abilities (Daigre et al., 2017).
Implementing IBL involves several types of inquiry. Mackenzie (2016) identifies four distinct types of student inquiry: structured, controlled, guided, and free inquiry. He also asserted that instructors need not rigidly adhere to these types in sequence; rather, the choice is contingent upon the teaching environment. An abbreviated summary of these types is as follows:
1. Structured Inquiry: the instructor presents a single inquiry and guides the students in its execution.
2. Controlled Inquiry: the instructor presents inquiries and the students are required to utilize the data in order to resolve the inquiries.
3. Guided Inquiry: the learners design the output while the instructor selects inquiries.
4. Free Inquiry: the students choose their own inquiries without regard to anticipated results.
The inquiry-based approach incorporates questioning and answering as fundamental components, addressing both the cognitive and affective aspects of learning and facilitating the development of higher-order thinking. Thus, the instructor does not directly explain all unit components; rather, students engage cooperatively and actively in class, and a higher level of learning ensues. Activated learning procedures are characterized by their capacity to generate knowledge in the form of tangible products (such as summaries of prior research, proposals for new investigations, survey instruments, and presentations of results) and to foster the formation of new mental structures. Through inquiry-based learning, pupils establish tangible connections between classroom content and real-world challenges. This approach establishes a direct link between learning and research (Reinmann, 2019); attending classes or perusing books is insufficient to fulfill this requirement.
Expository-based instruction
The constructivist learning approach emphasizes the central role that students play as the primary actors in a learning context (Woolfolk & Margetts, 2012). According to this view, promoting self-regulation among students takes precedence over mere participation and observation in learning environments (Snowman & McCown, 2015). This implies that students ought to have the authority to independently investigate questions, concepts, or beliefs (Cruickshank et al., 1999). In contrast, in expository-based instruction, learners acquire knowledge through the transmission of information by the instructor (Ormrod, 2022) and remain reliant on the sources the instructor provides (Heryadi & Sundari, 2020).
Advocates of the expository method contend that learners' learning is most significantly influenced by instructors delivering lectures in person in front of the class (Johnson & Morris, 2010). Ulit et al. (2004) assert that in teaching strategies such as exposition, the instructor is the sole authority conveying the subject matter. Instructors teach the course material and subsequently administer an assessment to gauge the extent to which it has been retained. Stated differently, the teacher's role in this approach is limited to imparting information. Expository instruction centers on students' prior knowledge and aims to facilitate meaningful verbal learning (Ausubel, 1961) by establishing connections between newly acquired information and learners' existing understanding (Johnson & Morris, 2010). It is predicated on the notion that meaningful knowledge acquisition can occur through the effective use of directed verbal engagement (lecture), provided that the material presented is premeditated and connected to students' prior knowledge (Johnson & Morris, 2010).
The instructional dimension of language assessment literacy has received insufficient attention in research, despite its recent conceptualizations (Mohammadi & Sanavi, 2021). Furthermore, language instructors are not well versed in the standards and competencies associated with language assessment literacy (Jalilzadeh et al., 2022), and tertiary teacher education programs contain few modules or credit units that cover the essential competencies of LAL, as documented by studies cited in the literature (Fulcher, 2012; Lam, 2015; Mendoza & Arandia, 2009; Yan, 2010). Additionally, the Cambridge Teaching Knowledge Test (TKT) and other teacher professional development programs (CELTA and CertTESOL, offered by Cambridge and Trinity College London, respectively) have given little or no consideration to the teaching of LAL. Further examination of teaching approaches, techniques, and strategies is imperative to rectify this state of affairs, in which minimal effort has been made to date. Inquiry-based and expository approaches have been implemented to teach a wide range of subjects in education as a whole and in language education in particular; however, there is a scarcity of literature on their use in training preservice teachers in LAL.
To address this knowledge deficit, the purpose of this research is to determine how inquiry-based and expository-based instructional approaches influence student teachers' development of LAL and their conceptions of assessment. To produce more reliable results, the data were additionally triangulated via metaphor analysis, a qualitative method.
RQ. Do inquiry-based and expository-based instructional approaches have a differential impact on student teachers' development of language assessment literacy (LAL) and their conceptions of assessment?
METHODOLOGY
Participants
A total of 43 university students enrolled in BA-level TEFL programs at various universities in Iran participated in this convergent mixed-methods study. The sample comprised both male (n = 16) and female (n = 27) students. The TEFL program aims to provide a four-year professional development curriculum for aspiring English language instructors in Iranian schools and private language institutes. The participants' ages ranged from 18 to 22 years old. Their self-reported English proficiency upon program registration indicated an intermediate level. Notably, all participants lacked prior experience in teaching English.
Instrumentation
Students’ Conceptions of Assessment Inventory
The Students' Conceptions of Assessment Inventory (SCoA-V) served as the primary instrument to assess student teachers' awareness and conceptions of assessment. Developed by Brown et al. (2009), the SCoA-V was subsequently re-validated within the Iranian context by Khojaste Mehr et al. (in press). This inventory utilizes 29 items that tap into four interrelated constructs: Improvement (perceptions of assessment as beneficial for student and teacher learning), Affect (assessment's influence on the social and emotional aspects of learning), Student Accountability (assessment as a measure of student learning quality), and Teacher Accountability (assessment as a measure of school quality). Participants responded to positively-worded statements using a response scale adapted from Lam & Klockars (1982). This scale offers two negative options ("strongly disagree," "disagree") alongside four positive options ("slightly agree," "moderately agree," "mostly agree," and "strongly agree"). The rationale for employing a positively-phrased scale aligns with Brown's (2004) observation that individuals tend to exhibit positive response bias in instruments measuring beliefs and attitudes. Positively-worded scales, compared to balanced formats like the Likert scale, are believed to elicit greater variance within participant responses.
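To make the downstream analysis concrete, the sketch below shows one way the four factor scores could be computed by averaging item responses. The item-to-factor mapping and column names are hypothetical, since the exact assignment of the 29 items to factors is not reproduced here; Python with pandas is used purely for illustration.

```python
import pandas as pd

# Hypothetical mapping of the 29 SCoA-V items to the four factors
# (the actual item assignment is not listed in this section).
# Responses are coded 1 (strongly disagree) to 6 (strongly agree)
# on the positively packed scale described above.
FACTORS = {
    "Improvement": [f"item{i}" for i in range(1, 10)],
    "Affect": [f"item{i}" for i in range(10, 18)],
    "StudentsAccountability": [f"item{i}" for i in range(18, 26)],
    "TeacherAccountability": [f"item{i}" for i in range(26, 30)],
}

def score_scoa(responses: pd.DataFrame) -> pd.DataFrame:
    """Return one mean score per factor for each respondent."""
    return pd.DataFrame(
        {factor: responses[items].mean(axis=1) for factor, items in FACTORS.items()}
    )
```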
Metaphor Analysis
Grounded in Lakoff and Johnson's conceptual metaphor theory (1980, 1999), metaphor analysis offers a framework for "describing everyday cognitive structures using linguistic models, thereby revealing both individual and collective patterns of thought and action" (Lakoff & Johnson, 1980, p. 353). Metaphors function as a lens into human cognition, uncovering beliefs and emotions through the use of analogies (Saban et al., 2007; Zheng & Song, 2010). Within applied linguistics, this qualitative approach has been utilized to explore various aspects of EFL learners' experiences, including beliefs about language learning and speaking proficiency (Dincer & Yesilyurt, 2017), perceptions of writing (Erdogan & Erdogan, 2013), and self-perceptions as international students (Yayci, 2017). Likewise, studies have examined learners' beliefs about teacher roles (e.g., Villamil & de Guerrero, 2005; Saban et al., 2007).
In this vein, the present study employed metaphor analysis as a qualitative tool to elicit student teachers' underlying beliefs and conceptions regarding assessment. The instrument comprised two sections. The first section provided a concise explanation of metaphors with illustrative examples. The second section prompted participants to generate sentences containing at least three metaphors related to "assessment," "test," and "evaluation." They were offered the choice to respond in English or Persian. Data collection occurred at two time points: pre- and post-intervention.
Procedure
A recruitment call for participation was disseminated among BA-level TEFL students at various universities. Students expressed their interest and consent by completing a participation form. While 145 students initially expressed interest, only 92 commenced the course. Inclusion criteria for data analysis were limited absences (maximum two sessions) and completion of both pre- and post-intervention instruments. This resulted in a final sample of 43 participants.
Prior to the course, participants completed the Students' Conceptions of Assessment Inventory (SCoA-V) online via Google Forms. Additionally, they participated in a metaphor analysis survey through the same platform. Participants were randomly assigned to one of two groups receiving instruction based on either inquiry-based learning (IBL) or expository teaching methods.
Inquiry-Based Learning Group: This group experienced a question-and-answer, discovery-based intervention designed to engage students and develop cognitive and metacognitive learning skills (Chan et al., 2016). The researcher facilitated sessions by posing questions and prompting participants to explore solutions and deepen their understanding of presented content. Two levels of inquiry from Banchi and Bell (2008) were implemented: confirmation inquiry and structured inquiry. Confirmation inquiry provided the question, procedure, and solutions, while structured inquiry provided the question and method, requiring participants to generate findings and analyze results within a 10-minute timeframe. These levels were chosen due to the assumed limited experience of Iranian EFL students with independent inquiry processes.
Expository Teaching Group: This group received explicit presentations and lectures from the researcher, who holds a Ph.D. in TEFL and extensive teaching experience. While incorporating elements of inquiry, the expository approach focused on clear and concise information delivery, facilitating connections between concepts. The researcher followed the processes outlined by Bell et al. (2010) for expository teaching: creating questions, providing supporting materials and evidence (outcomes provided only in confirmation inquiry), explaining evidence, connecting explanations to obtained knowledge, and creating justifications. Participants were encouraged to actively engage throughout sessions.
The free, eight-week course consisted of two 90-minute sessions per week for both groups. The researcher, with 28 years of teaching experience and 17 years of teacher training, delivered the course based on content from three sources: the SCoA questionnaire components, the Handbook of Assessment for Language Teachers (Tsagari et al., 2018), and Language Assessment: Principles and Classroom Practices (Brown & Abeywickrama, 2018). Topics covered included assessment basics, assessing reading, listening, writing, and speaking, feedback provision, assessment alternatives, and assessment consequences. Participants were encouraged to actively explore content before, during, and after sessions. The facilitator in the IBL group provided resources such as relevant e-books and internet access to enhance learning effectiveness.
Following the course, participants completed the SCoA-V and metaphor survey again. Quantitative data from the SCoA-V were analyzed using MANOVA to examine relationships between instructional approaches and assessment components. Qualitative data from the metaphor survey underwent content analysis involving careful reading, coding responses, and extracting positive and negative conceptions of assessment.
RESULTS
To investigate the research question regarding potential differences between expository and inquiry-based instruction on student assessment conceptions, a two-group multivariate analysis of variance (MANOVA) was employed. This analysis focused on the effects of instruction type on four latent composite factors derived from the Students' Conceptions of Assessment Inventory (SCoA-V): improvement, affect, student accountability, and teacher accountability. These factors served as the main dependent variables (DVs) in the MANOVA.
Given the latent nature of the SCoA-V factors, participant responses were averaged for each factor and used in the analysis. To assess learning gains, pre- and post-intervention questionnaire scores were utilized. Gain scores were calculated by subtracting pre-intervention scores from post-intervention scores for each factor within each group (expository and inquiry-based). These gain scores were then entered into the MANOVA to compare the two groups regarding their progress across the four assessment conception factors (refer to Tables 1 and 2 for detailed information on mean scores at each testing time and Table 3 for mean score gains by instruction type).
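A minimal sketch of this gain-score computation and the two-group MANOVA, using statsmodels on synthetic stand-in data (the real dataset is not public, so the group sizes are the only values taken from the study), might look like this:

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
factors = ["Improvement", "Affect", "StudentsAccountability", "TeacherAccountability"]

# Synthetic stand-in: 23 expository + 20 inquiry participants with
# pre (1) and post (2) factor means on the 1-6 response scale.
df = pd.DataFrame({"Group": ["Expository"] * 23 + ["Inquiry"] * 20})
for f in factors:
    df[f"{f}1"] = rng.uniform(2, 6, len(df))
    df[f"{f}2"] = rng.uniform(2, 6, len(df))
    df[f"Gain_{f}"] = df[f"{f}2"] - df[f"{f}1"]  # post minus pre

# Four gain scores as DVs, instruction type as the grouping variable.
mv = MANOVA.from_formula(
    "Gain_Improvement + Gain_Affect + Gain_StudentsAccountability"
    " + Gain_TeacherAccountability ~ Group",
    data=df,
)
print(mv.mv_test())  # reports Pillai's trace, Wilks' lambda, etc.
```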
While Box's test indicated a potential violation of the homogeneity of covariance matrices assumption (see Table 4), Pillai's trace was chosen over Wilks' lambda as the MANOVA test statistic. This decision aligns with recommendations that Pillai's trace is more robust to violations of this assumption (Tabachnick & Fidell, 2007). Levene's test for homogeneity of variance confirmed that this assumption was met for all DVs (see Table 5). Additionally, no evidence of non-normality was observed, as the skewness values for all factors fell within two standard errors of skewness.
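Continuing with the illustrative df, factors, and np from the sketch above, the assumption checks described here (Levene's test per DV and a skewness screen; Box's M is available in third-party packages such as pingouin) could be sketched as:

```python
from scipy import stats

gain_cols = [f"Gain_{f}" for f in factors]
expo = df[df["Group"] == "Expository"]
inq = df[df["Group"] == "Inquiry"]

# Homogeneity of variance for each DV (cf. Table 5).
for col in gain_cols:
    f_stat, p = stats.levene(expo[col], inq[col])
    print(f"{col}: Levene F = {f_stat:.2f}, p = {p:.3f}")

# Normality screen: flag any factor whose skewness exceeds twice its
# standard error; SE of skewness is approximated here as sqrt(6 / N).
for col in gain_cols:
    for name, grp in (("Expository", expo), ("Inquiry", inq)):
        se_skew = np.sqrt(6 / len(grp))
        print(f"{col} ({name}): skew = {stats.skew(grp[col]):.2f}, "
              f"2*SE = {2 * se_skew:.2f}")
```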
Table 1
Descriptives of Different DVs in the Expository Group over two Testing Times (N=23)
| Factor | Minimum | Maximum | Mean | Std. Deviation | Skewness | Skewness SE |
|---|---|---|---|---|---|---|
| Improvement1 | 1.56 | 6.00 | 4.73 | 1.12 | -1.70 | .48 |
| Improvement2 | 1.80 | 5.80 | 4.76 | 0.89 | -1.72 | .48 |
| Affect1 | 1.75 | 6.00 | 4.28 | 1.00 | -0.39 | .48 |
| Affect2 | 2.25 | 6.00 | 4.61 | 0.98 | -0.91 | .48 |
| StudentsAccountability1 | 2.25 | 5.88 | 4.27 | 0.95 | -0.34 | .48 |
| StudentsAccountability2 | 1.13 | 5.88 | 4.27 | 1.06 | -1.26 | .48 |
| TeacherAccountability1 | 2.00 | 6.00 | 4.59 | 0.96 | -1.36 | .48 |
| TeacherAccountability2 | 2.33 | 6.00 | 4.71 | 0.91 | -0.53 | .48 |
Table 2
Descriptives of Different DVs in the Inquiry-based Group over two Testing Times (N=20)
| Factor | Minimum | Maximum | Mean | Std. Deviation | Skewness | Skewness SE |
|---|---|---|---|---|---|---|
| Improvement1 | 2.80 | 6.00 | 4.85 | 0.81 | -.79 | .51 |
| Improvement2 | 4.20 | 6.00 | 5.25 | 0.53 | -.54 | .51 |
| Affect1 | 1.25 | 6.00 | 4.43 | 1.11 | -1.33 | .51 |
| Affect2 | 3.38 | 6.00 | 4.96 | 0.73 | -.53 | .51 |
| StudentsAccountability1 | 1.50 | 5.63 | 4.47 | 1.03 | -1.33 | .51 |
| StudentsAccountability2 | 3.13 | 6.00 | 4.63 | 0.85 | -.23 | .51 |
| TeacherAccountability1 | 3.67 | 5.67 | 4.72 | 0.60 | -.18 | .51 |
| TeacherAccountability2 | 3.67 | 5.67 | 4.97 | 0.58 | -.62 | .51 |
Table 3
Descriptives of Mean of Gains in Different Factors over Two Testing Times
| Gain | Group | Mean | Std. Deviation | N |
|---|---|---|---|---|
| Gain I | Expository | .03 | 1.43 | 23 |
| Gain I | Inquiry | .40 | 0.80 | 20 |
| Gain I | Total | .20 | 1.18 | 43 |
| Gain A | Expository | .34 | 1.35 | 23 |
| Gain A | Inquiry | .53 | 1.35 | 20 |
| Gain A | Total | .43 | 1.34 | 43 |
| Gain SA | Expository | .00 | 1.34 | 23 |
| Gain SA | Inquiry | .16 | 1.33 | 20 |
| Gain SA | Total | .07 | 1.32 | 43 |
| Gain TA | Expository | .12 | 1.35 | 23 |
| Gain TA | Inquiry | .25 | 0.81 | 20 |
| Gain TA | Total | .18 | 1.12 | 43 |

Note: Gain I = Gain in Improvement; Gain A = Gain in Affect; Gain SA = Gain in Students Accountability; Gain TA = Gain in Teachers Accountability.
Table 4
Box's Test of Equality of Covariance Matrices
| Statistic | Value |
|---|---|
| Box's M | 26.30 |
| F | 2.34 |
| df1 | 10.00 |
| df2 | 7662.41 |
| Sig. | 0.01 |
Table 5
Levene's Test of Equality of Error Variances for each Factor
| Factor | F | df1 | df2 | Sig. |
|---|---|---|---|---|
| Gain I | 3.70 | 1 | 41 | .061 |
| Gain A | 0.01 | 1 | 41 | .899 |
| Gain SA | 0.09 | 1 | 41 | .760 |
| Gain TA | 4.09 | 1 | 41 | .060 |
The two-group MANOVA yielded a non-significant overall effect for instruction type (expository vs. inquiry-based) on the four SCoA-V factors (improvement, affect, student accountability, and teacher accountability), F(4, 38) = .41, p = .80, η² = .04 (see Table 6). This effect size, classified as trivial, suggests no statistically significant difference between the instructional approaches regarding their impact on students' assessment conceptions over time (pre- to post-intervention). Notably, both groups exhibited positive gains across most factors (see Table 3), with the exception of student accountability in the expository group, which showed no improvement.
Table 6
Multivariate Tests for Investigating the Holistic Effect of Instruction on the Gains in Factors of Students’ Conceptions of Assessment Questionnaire
| Effect | Test | Value | F | Hypothesis df | Error df | Sig. | Partial Eta Squared |
|---|---|---|---|---|---|---|---|
| Intercept | Pillai's Trace | .18 | 2.08 | 4.00 | 38.00 | .10 | .18 |
| Intercept | Wilks' Lambda | .82 | 2.08 | 4.00 | 38.00 | .10 | .18 |
| Intercept | Hotelling's Trace | .22 | 2.08 | 4.00 | 38.00 | .10 | .18 |
| Intercept | Roy's Largest Root | .22 | 2.08 | 4.00 | 38.00 | .10 | .18 |
| Group | Pillai's Trace | .04 | .41 | 4.00 | 38.00 | .80 | .04 |
| Group | Wilks' Lambda | .96 | .41 | 4.00 | 38.00 | .80 | .04 |
| Group | Hotelling's Trace | .04 | .41 | 4.00 | 38.00 | .80 | .04 |
| Group | Roy's Largest Root | .04 | .41 | 4.00 | 38.00 | .80 | .04 |
To further investigate the MANOVA results, paired-samples t-tests were conducted within each group to examine the statistical significance of mean score gains across the four factors over time. As shown in Table 7, none of the gains in the expository group reached statistical significance, despite being positive. In contrast, the inquiry-based group exhibited a statistically significant positive gain for the "improvement" factor only (see Table 8). No other significant gains were observed in this group (refer to Table 3 for detailed mean score gains).
Table 7
Paired-samples t-tests of Gains in Factors of the Questionnaire in the Expository Group over Time
| Pair | Mean | Std. D | Std. Error Mean | 95% CI Lower | 95% CI Upper | t | df | p value |
|---|---|---|---|---|---|---|---|---|
| Improvement1 - Improvement2 | .03 | 1.43 | 0.30 | -0.59 | 0.65 | .10 | 22 | .92 |
| Affect1 - Affect2 | .34 | 1.35 | 0.28 | -0.25 | 0.92 | 1.19 | 22 | .24 |
| StudentsAccountability1 - StudentsAccountability2 | .00 | 1.34 | 0.28 | -0.58 | 0.58 | .00 | 22 | 1.00 |
| TeacherAccountability1 - TeacherAccountability2 | .12 | 1.35 | 0.28 | -0.47 | 0.70 | .42 | 22 | .68 |
Table 8
Paired-samples t-tests of Gains in Factors of the Questionnaire in the Inquiry-based Group over Time
| Pair | Mean | Std. D | Std. Error Mean | 95% CI Lower | 95% CI Upper | t | df | p value |
|---|---|---|---|---|---|---|---|---|
| Improvement1 - Improvement2 | .40 | .80 | .18 | .02 | .77 | 2.19 | 19 | .04 |
| Affect1 - Affect2 | .53 | 1.35 | .30 | -.10 | 1.16 | 1.75 | 19 | .10 |
| StudentsAccountability1 - StudentsAccountability2 | .16 | 1.33 | .30 | -.46 | .78 | .54 | 19 | .59 |
| TeacherAccountability1 - TeacherAccountability2 | .25 | .81 | .18 | -.13 | .63 | 1.38 | 19 | .18 |
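The within-group comparisons in Tables 7 and 8 correspond to paired-samples t-tests on the pre/post factor means; reusing the illustrative df and factors from the sketches above, they could be reproduced with SciPy as follows.

```python
from scipy import stats

for group_name in ("Expository", "Inquiry"):
    grp = df[df["Group"] == group_name]
    for f in factors:
        t, p = stats.ttest_rel(grp[f"{f}2"], grp[f"{f}1"])  # post vs. pre
        print(f"{group_name} {f}: t({len(grp) - 1}) = {t:.2f}, p = {p:.3f}")
```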
Qualitative Data Analysis
To triangulate the quantitative data, a qualitative analysis of student teachers' metaphors concerning assessment, tests, and evaluation was conducted. Participants generated metaphors in English or Persian. Persian metaphors were translated into equivalent English metaphors. However, some responses deviated from the intended data collection (e.g., comments on the instructor, teaching methods, course schedule). These responses were excluded from the analysis. Cultural considerations were prioritized when interpreting metaphors that were ambiguous due to Persian-English language differences. Additionally, ambiguous metaphors (not easily classified as positive/neutral or negative) were categorized based on the student's justification for the metaphor. To facilitate data presentation and comparison with the quantitative findings, a dichotomous classification system (positive/neutral vs. negative connotations) was employed for the metaphors. Similar metaphors from different participants were not redundantly presented in the tables (Tables 9 and 10). Instead, the tables use numbers to represent the frequency of each metaphor within the expository-based instruction (EBI) and inquiry-based instruction (IBI) groups (pre-intervention).
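As a sketch of the frequency step in this content analysis, the tally below counts positive/neutral versus negative codes per group and phase. The valence labels would come from the researchers' manual coding, so the entries shown are placeholders, not the study's actual data.

```python
from collections import Counter

# Each tuple: (group, phase, valence) assigned during manual coding.
# Placeholder entries for illustration only.
coded_metaphors = [
    ("EBI", "pre", "positive/neutral"),
    ("EBI", "pre", "negative"),
    ("IBI", "post", "positive/neutral"),
    ("IBI", "post", "positive/neutral"),
]

counts = Counter(coded_metaphors)
for (group, phase, valence), n in sorted(counts.items()):
    print(f"{group} {phase} {valence}: {n}")
```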
Table 9
Metaphors Before the Expository-based Instruction
| Positive/Neutral | Negative |
|---|---|
| Evaluation stage of training! | The test is a torment. |
| An examination is a means to an end (3) | In the evaluation, I was in a cleft stick. |
| An examination is hard rain that we have to bravely go through. | Exams weigh heavily on my shoulders (2) |
| The test is a one-sided glow that illuminates from only one dimension. | The exam is coffee that takes away the sleep from your eyes! |
| Evaluation is the foothill of success! | The exam is a rock climbing that has no end. (2) |
| | Don't try to read a book! You have to sit for a test then! |
Table 10
Metaphors Before the Inquiry-based Instruction
| Positive/Neutral | Negative |
|---|---|
| The exam is a means of assessing your performance during the semester. (2) | The test is torment! |
| Exam, a tool to face real literacy. | The test is like a giant creature! |
| Exam is like cooking. | A test is a burden to me. (3) |
| Evaluation is a value judgment process about the product or knowledge of students. | Assessment is not a good way to know yourself. |
| The exam is like a competition. (3) | Exam stress is tougher than anything in the world. (2) |
| Assessment is the best caution. | Exams make my heart heavy. |
| | Exams are a waste of time. |
An examination of Tables 9 and 10 reveals minimal differences in student teachers' assessment conceptions between the EBI and IBI groups, based on both the frequency and quality of the metaphors used. The frequencies of positive/neutral and negative metaphors related to assessment were comparable: nine and ten for the IBI group, and seven and eight for the EBI group, respectively. Prior to the intervention, metaphors employed by both groups to represent positive aspects of assessment included "means," "tool," "value," and "competition." Conversely, negative connotations were conveyed through metaphors like "burden," "tough issue," "heavy burden," and "rock climbing." Tables 11 and 12 present the metaphors generated after the intervention by the expository-based and inquiry-based groups, respectively.
Table 11
Metaphors after the Expository-based Instruction
| Positive/Neutral | Negative |
|---|---|
| Assessment is a bridge to learning (2) | Tests (such as performance evaluation, and the ability to memorize content) are very difficult (3) |
| Assessment is an ocean full of strange things (2) | The test is walking on a rope; if not prepared, you fall. |
| The test is a stepping stone for talents to be seen | Assessment is like a breathtaking competition (3) |
| Assessment is a chart to show progress. | The test gives the bitter taste of espresso. |
| Assessment is water that leads to a clear and calm river. (2) | The assessment is the Day of Judgment (teachers score (judge) everything, but not too much, not too much) (2) |
| Assessment is an interesting work of art. (3) | |
Table 12
Metaphors after the Inquiry-based Instruction
| Positive/Neutral | Negative |
|---|---|
| The exam is a sweet competition. | The exam is a giant creature |
| The exam is a challenge; you have to approach it with motivation and self-confidence so that you can pass it and achieve success. | The test is a divine test |
| The exam is a laboratory; you have to pay attention to all the details to get an accurate and favorable result. | Assessment is as necessary as a defect |
| Evaluation is a gem and a diamond. | The test was divine punishment |
| Evaluation is a river that is constantly moving, which undergoes ebb and flow (its level and degree changes during the semester) and you, as a skilled swimmer, must be in motion with the river in any condition. | The burden of exams is heavy on my shoulders |
| An examination is a piece of cake (very easy) | Assessment is a heavy tree log. |
| Evaluation is like a step from a stage with a low score to a stage with a higher score. It can be a piece of cake | |
| Assessment is winning a difficult race. | |
| Assessment is sweet sugar. | |
| The test is bitter on the outside but beautiful on the inside. | |
| Assessment can be a means to test for self-knowledge (2) | |
| Evaluation can sometimes help us. | |
| Examination is the best teacher. | |
| Assessment is like preparing a meal | |
| Correct preparation makes the student jump from point one to ten. | |
Tables 11 and 12 reveal a notable difference between the groups following the intervention. The EBI group exhibited minimal change in metaphor frequency (11 positive/neutral, 10 negative) and quality, although the metaphors it generated post-intervention differed from those used pre-intervention. In contrast, the IBI group displayed a marked shift in metaphor usage, reflecting a positive change in their assessment conceptions: positive/neutral metaphors increased to 16, while negative metaphors dropped to just six. Interestingly, the metaphors generated by the IBI group post-intervention demonstrated greater variety and less repetition compared with both their pre-intervention metaphors and those of the EBI group. A key finding emerges from comparing the quantitative and qualitative results. While the questionnaire data did not reveal a statistically significant difference between the instruction types, the metaphor analysis suggests that the language assessment literacy intervention, whether expository or inquiry-based, may have had a discernible impact on how student teachers conceptualize assessment and its related components. This divergence highlights the potential for metaphor analysis to capture nuanced changes in student thinking that might be missed by quantitative methods alone.
DISCUSSION
A key finding of this study is the lack of statistically significant differences in assessment conceptions between the expository and inquiry-based instruction groups. This suggests that, at least as measured by the SCoA-V, participation in the instructional courses did not lead to significant changes in the participants' assessment beliefs.
Several factors may explain this result. Conceptions are often considered entrenched components of an individual's knowledge system, making them resistant to change (Vandeyar & Killen, 2007). While previous research has explored the influence of conceptions on instructional practices (Opre, 2015), the literature lacks studies examining the potential for instruction to directly modify assessment conceptions. Vogt et al. (2020) highlight the complex interplay of institutional, educational, and policy factors shaping teacher beliefs, suggesting these beliefs are not easily altered through instruction alone. Similarly, Ferretti et al. (2021) reported minimal changes in teacher conceptions during the pandemic, suggesting a persistent state of confusion among educators regarding assessment practices.
The current study aligns with Firoozi et al. (2019) in recognizing the need for change in Iranian student teachers' assessment conceptions. The dominance of traditional testing culture in Iran, as argued by Firoozi et al., likely contributes to the participants' initial negative views on assessment, reflected in their pre-intervention metaphors. While some positive shifts in metaphors were observed, these changes warrant further investigation using more comprehensive approaches. Assessment conceptions are shaped by cultural values and societal beliefs (Boud & Falchikov, 2007). When a society prioritizes strict educational selection processes, student conceptions may not readily align with assessment practices. Fulmer et al. (2015) illustrate this in the context of China's higher education system, where only a small fraction of students gain admission through rigorous assessments. Conversely, Iranian students face a relatively easier path to higher education, potentially fostering a less critical view of assessment practices, as suggested by Farangi and Rashidi (2022). Their research also identified a correlation between Iranian students' positive assessment conceptions and their high self-efficacy related to these conceptions.
Dixon and Haigh (2009) attempted to modify student assessment conceptions in New Zealand by fostering knowledge and awareness about teaching and learning practices. Their findings point to the potential effectiveness of professional learning activities in enhancing teachers' critical thinking and engagement, which aligns with the present study's results.
CONCLUSION
The emergence of the sociocultural paradigm underscores assessment as a fundamental element of any educational curriculum. Consequently, teacher education programs should prioritize the development of effective assessment practices among student teachers. This study's application of convergent mixed methods yielded a key finding: a disconnect between the quantitative and qualitative results regarding the impact of instructional approaches on assessment conceptions. Notably, neither expository nor inquiry-based instruction resulted in significant changes in student assessment beliefs.
One possible explanation lies in the nature of assessment conceptions. These can be viewed as higher-order beliefs resistant to short-term interventions (Vandeyar & Killen, 2007). It is possible that longer instructional periods may be necessary to achieve lasting change. Additionally, the participants in this study were sophomores in their BA program, potentially limiting their existing knowledge and experience in assessment practices. Perhaps junior or senior students with a broader foundation would engage more critically with the instruction. The dearth of prior research examining the effects of instruction on assessment conceptions highlights the need for further studies in this area. Future investigations could explore the potential benefits of extended instructional programs and target student teachers at later stages in their academic careers.
References
Ausubel, D. P. (1961). Learning by discovery: Rationale and mystique. The Bulletin of the National Association of Secondary School Principals, 45(269), 18-58.
Banchi H., & Bell R. (2008). The many levels of inquiry. Science and Children, 46(2), 26-29.
Bell, R. L., Smetana, L., & Binns, I. (2005). Simplifying inquiry instruction. The Science Teacher, 72(7), 30-33.
Bell, T., Urhahne, D., Schanze, S., & Ploetzner, R. (2010). Collaborative inquiry learning: Models, tools, and challenges. International Journal of Science Education, 32(3), 349-377.
Boud, D., & Falchikov, N. (2007). Developing assessment for informing judgment. In D. Boud & N. Falchikov (Eds.), Rethinking assessment in higher education: Learning for the longer term (pp. 181-197). Routledge.
Brindley, G. (2001). Language assessment and professional development. In C. Elder, A. Brown, K. Hill, N. Iwashita, T. Lumley, T. McNamara & K. O’Loughlin (Eds.), Experimenting with uncertainty: Essays in honor of Alan Davies (pp. 126–136). Cambridge University Press.
Brown, G. T. L. (2004). Measuring attitude with positively packed self-report ratings: Comparison of agreement and frequency scales. Psychological Reports, 94, 1015-1024.
Brown, G. T. L. (2008). Students’ conceptions of assessment inventory (SCoA Version VI) [Measurement instrument]. Auckland: University of Auckland. https://doi.org/10.17608/k6.auckland.4596820.v1.
Brown, H. D., & Abeywickrama, P. (2018). Language assessment: Principles and classroom practices (3rd ed.). Pearson Education.
Brown, G. T., Irving, S. E., Peterson, E. R., & Hirschfeld, G. H. (2009). Use of interactive–informal assessment practices: New Zealand secondary students' conceptions of assessment. Learning and Instruction, 19(2), 97-111.
Caswell, C. J., & LaBrie, D. J. (2017). Inquiry-based learning from the learner’s point of view: A teacher candidate’s success story. Journal of Humanistic Mathematics, 7(2), 161-186.
Chan, Y. F., Sidhu, G. K., Suthagar, N., Lee, L. F., & Yap, B. W. (2016). Relationship of inquiry-based instruction on active learning in higher education. Pertanika Journal of Social Science and Humanities, 24, 55-72.
Chen, J., & Brown, G. T. L. (2018). Chinese secondary school students' conceptions of assessment and achievement emotions: Endorsed purposes lead to positive and negative feelings. Asia Pacific Journal of Education, 38(1), 91-109. https://doi.org/10.1080/02188791.2018.1423951
Chiappetta, E. L., & Adams, A. D. (2004). Inquiry-based instruction. The Science Teacher, 71(2), 46.
Cruickshank, F., Bainer, T. D., & Metcalf, C. H. (1999). The effect of self-regulated learning strategies on academic achievement. Dissertation Abstracts International-A, 61(12), 46-56.
Daigre, J., Berlet, G., Van Dyke, B., Peterson, K. S., & Santrock, R. (2017). Accuracy and reproducibility using patient-specific instrumentation in total ankle arthroplasty. Foot & Ankle International, 38(4), 412-418.
Davies, A. (2008). Textbook trends in teaching language testing. Language Testing, 25(3), 327–347.
DeLuca, C., & Klinger, D. A. (2010). Assessment literacy development: Identifying gaps in teacher candidates’ learning. Assessment in Education: Principles, Policy & Practice, 17, 419–438.
DeLuca, C., LaPointe, D., & Luhanga, U. (2016). Teacher assessment literacy: A review of international standards and measures. Educational Assessment, Evaluation and Accountability, 28(3), 251–272. https://doi.org/10.1007/s11092-015-9233-6
Dincer, A., & Yesilyurt, S. (2017). Motivation to speak English: A self-determination theory perspective. PASAA: Journal of Language Teaching and Learning in Thailand, 53, 1-25.
Dixon, H., & Haigh, M. (2009). Changing mathematics teachers’ conceptions of assessment and feedback. Teacher Development, 13(2), 173-186.
Farangi, M. R., & Rashidi, N. (2022). The Relationship Between Iranian EFL Teachers’ Conceptions of Assessment and Their Self-efficacy. International Journal of Language Testing, 12(2), 59-75. doi: 10.22034/ijlt.2022.157125
Ferretti, F., Santi, G. R. P., Del Zozzo, A., Garzetti, M., & Bolondi, G. (2021). Assessment practices and beliefs: Teachers’ perspectives on assessment during long-distance learning. Education Sciences, 11(6), Article 264.
Firoozi, T., Razavipour, K., & Ahmadi, A. (2019). The language assessment literacy needs of Iranian EFL teachers with a focus on reformed assessment policies. Language Testing in Asia, 9(1), 1–14.
Fulcher, G. (2012). Assessment literacy for the language classroom. Language Assessment Quarterly, 9(2), 113–132.
Fulmer, G. W., Lee, I. C., & Tan, K. H. (2015). Multi-level model of contextual factors and teachers’ assessment practices: An integrative review of research. Assessment in Education: Principles, Policy & Practice, 22(4), 475–494.
Gotch, C. M., & French, B. F. (2014). A systematic review of assessment literacy measures. Educational Measurement: Issues and Practice, 33(2), 14–18.
Guido, M. (2017). Inquiry-based learning definition, benefits & strategies. Retrieved October 26, 2017, from https://www.prodigygame.com/blog/inquiry-based-learning-definition-benefits-strategies/
Heryadi, D., & Sundari, R. S. (2020). Expository learning model. International Journal of Education and Research, 8(1), 207–216.
Inbar-Lourie, O. (2008a). Constructing an assessment knowledge base: A focus on language assessment courses. Language Testing, 25(3), 385–402.
Jalilzadeh, K., Alavi, S. M., & Siyyari, M. (2022). Comparing language assessment literacy and challenges of Iranian EFL teachers: TEFL vs non-TEFL background. Journal of Language and Translation, 12(4), 177–196.
Johnson, L., & Morris, P. (2010). Towards a framework for critical citizenship education. The Curriculum Journal, 21(1), 77–96.
Kremmel, B., & Harding, L. (2020). Towards a comprehensive, empirical model of language assessment literacy across stakeholder groups: Developing the language assessment literacy survey. Language Assessment Quarterly, 17(1), 100–120.
Lam, R. (2015). Language assessment training in Hong Kong: Implications for language assessment literacy. Language Testing, 32(2), 169–197.
Lam, T. C., & Klockars, A. J. (1982). Anchor point effects on the equivalence of questionnaire items. Journal of Educational Measurement, 19(4), 317–322.
Mackenzie, T. (2016). Dive into inquiry: Amplify learning and empower student voice. EdTechTeam Press.
Mapes, J. (2009). Pedaling revolution. Oregon State University Press.
Maxwell, D. O., Lambeth, D. T., & Cox, J. T. (2015). Effects of using inquiry-based learning on science achievement for fifth-grade students. Asia-Pacific Forum on Science Learning & Teaching, 16(1), 1–31.
McGrath, I. (2006a). Teachers’ and learners’ images for coursebooks. ELT Journal, 60(2), 171–180. https://doi.org/10.1093/elt/cci104
Mendoza, A. A. L., & Arandia, R. B. (2009). Language testing in Colombia: A call for more teacher education and teacher training in language assessment. Profile, 11(2), 55–70.
Mohammadi, M., & Sanavi, R. V. (2021). Language assessment literacy: Ontogenetic and phylogenetic perspectives. In S. Hidri (Ed.), Perspectives on language assessment literacy (pp. 52–65). Routledge.
Opre, D. (2015). Teachers’ conceptions of assessment. Procedia-Social and Behavioral Sciences, 209, 229–233.
Ormrod, J. E. (2022). I went to class every day, so all that stuff must be in my head somewhere. In M. M. Buehl & J. S. Vogler (Eds.), Teaching learning for effective instruction (pp. 121–144). Information Age Publishing.
Reinmann, G. (2019). Assessment and inquiry-based learning. In H. A. Mieg (Ed.), Inquiry-based learning – Undergraduate research (pp. 91–105). Springer.
Saban, A., Kocbeker, B. N., & Saban, A. (2007). Prospective teachers’ conceptions of teaching and learning revealed through metaphor analysis. Learning and Instruction, 17(2), 123–139.
Scarino, A. (2013). Language assessment literacy as self-awareness: Understanding the role of interpretation in assessment and in teacher learning. Language Testing, 30(3), 309–327.
Schmitt, R. (2005). Systematic metaphor analysis as a method of qualitative research. The Qualitative Report, 10(2), 358–394.
Sevimel-Sahin, A. (2021). Language assessment literacy of novice EFL teachers: Perceptions, experiences, and training. In S. Hidri (Ed.), Perspectives on language assessment literacy (pp. 135–158). Routledge.
Siegel, M. A., & Wissehr, C. (2011). Preparing for the plunge: Preservice teachers’ assessment literacy. Journal of Science Teacher Education, 22(4), 371–391.
Snowman, J., & McCown, R. (2015). Psychology applied to teaching (14th ed.). Cengage Learning.
Spronken-Smith, R., & Walker, R. (2010). Can inquiry-based learning strengthen the links between teaching and disciplinary research? Studies in Higher Education, 35(6), 723–740.
Stiggins, R. J. (1991). Assessment literacy. Phi Delta Kappan, 72(7), 534–539.
Tabachnick, B. G., & Fidell, L. S. (2007). Experimental designs using ANOVA. Thomson/Brooks/Cole.
Tamim, S. R., & Grant, M. M. (2013). Definitions and uses: Case study of teachers implementing project-based learning. Interdisciplinary Journal of Problem-Based Learning, 7(2), Article 3.
Taylor, L. (2013). Communicating the theory, practice and principles of language testing to test stakeholders: Some reflections. Language Testing, 30(3), 403–412.
Tsagari, D., Vogt, K., Froelich, V., Csépes, I., Fekete, A., Green, A., Hamp-Lyons, L., Sifakis, N., & Kordia, S. (2018). Handbook of assessment for language teachers. http://taleproject.eu/mod/page/view.php?id=1200
Ulit, E. V., Salazar, E. S., Ferrer, L. M., Cruz, P. D., Espiritu, C. C., Sanchez, J. R., & Dacanay, A. G. (2004). Teaching the elementary school subjects. Rex Book Store.
Vandeyar, S., & Killen, R. (2007). Educators’ conceptions and practice of classroom assessments in post-apartheid South Africa. South African Journal of Education, 27(1), 101–115.
Villamil, O. S., & de Guerrero, M. C. (2005). Constructing theoretical notions of L2 writing through metaphor conceptualization. In N. Bartels (Ed.), Applied linguistics and language teacher education (pp. 79–90). Springer.
Vogt, K., & Tsagari, D. (2014). Assessment literacy of foreign language teachers: Findings of a European study. Language Assessment Quarterly, 11(4), 374–402.
Vogt, K., Tsagari, D., & Spanoudis, G. (2020). What do teachers think they want? A comparative study of in-service language teachers’ beliefs on LAL training needs. Language Assessment Quarterly, 17(4), 386–409.
Woolfolk, A., & Margetts, K. (2012). Educational psychology (Australian ed.). Pearson Higher Education AU.
Yan, J. (2010). The place of language testing and assessment in the professional preparation of foreign language testers in China. Language Testing, 27(4), 555–584.
Yayci, L. (2017). University students’ perception of being an international student: A metaphor analysis study. European Journal of Education Studies, 3(10), 379–403.
Zheng, H. B., & Song, W. J. (2010). Metaphor analysis in the educational discourse: A critical review. US-China Foreign Language, 8(9), 42–49.
Biodata