Development and Validation of an Integrated Knowledge Questionnaire: A Quantitative Investigation
Subject Areas: All areas of language and translation
Farzaneh Rahimzadeh 1, Ahmad Mohseni 2*, Ghafour Rezaie 3
1 Department of English, Science and Research Branch, Islamic Azad University, Tehran, Iran
2 Department of English, South Tehran Branch, Islamic Azad University, Tehran, Iran
3 Department of English, Garmsar Branch, Islamic Azad University, Garmsar, Iran
Keywords: Assessment, Confirmatory Factor Analysis, Integrated Knowledge Questionnaire, Quantitative Study, Teacher’s Knowledge
REFERENCES
Ahmadian, M. J., Ketabi, S., & Brown, C. M. (2020). The language assessment knowledge questionnaire (LAKQ): Assessing language teachers’ assessment literacy. Language Testing in Asia, 10(1), 1-21.
Angeli, C., & Valanides, N. (2009). Epistemological and methodological issues for conceptualizing, developing, and assessing ICT-TPACK: Advances in technological pedagogical content knowledge (TPACK). Computers & Education, 52(1), 154-168. https://doi.org/10.1016/j.compedu.2008.07.006
Aydın, E. & Mıhladız Turhan, G. (2023). Exploring primary school teachers’ pedagogical content knowledge in science classes based on PCK model. Journal of Pedagogical Research, 7(3), 70-99. https://doi.org/10.33902/JPR.202318964
Blömeke, S., Jentsch, A., Ross, N., Kaiser, G., & König, J. (2022). Opening up the black box: Teacher competence, instructional quality, and students’ learning progress. Learning and Instruction, 79, 101600. https://doi.org/10.1016/j.learninstruc.2022.101600
Ball, D. L., & Bass, H. (2000). Interweaving content and pedagogy in teaching and learning to teach: Knowing and using mathematics. In J. Boaler (Ed.), Multiple perspectives on teaching and learning mathematics (pp. 83–104). Westport, CT: Ablex.
Ball, D.L., Thames, M.H., & Phelps, G. (2008). Content knowledge for teaching: What makes it special? Journal of Teacher Education, 59(5), 389-407.
Bae, J., & Bachman, L. F. (2010). An investigation of four writing traits and two tasks across two languages. Language Testing, 27(2), 213-234. https://doi.org/10.1177/0265532209349470
Baran, E., & Uygun, E. (2016). Putting technological, pedagogical, and content (TPACK) in action: An integrated TPACK-design-based learning (DBL) approach. Australasian Journal of Educational Technology, 32(2), 47-63. https://doi.org/10.14742/ajet.2551
Blaschke, L. M. (2012). Heutagogy and lifelong learning: A review of heutagogical practice and self-determined learning. The International Review of Research in Open and Distance Learning, 13(1), 56-71. https://doi.org/10.19173/irrodl.v13i1.1076
Baser, D., Kopcha, T. J., & Ozden, M. Y. (2016). Developing a technological pedagogical content knowledge (TPACK) assessment for preservice teachers learning to teach English as a foreign language. Computer Assisted Language Learning, 29 (4), 749-764. https://doi.org/10.1080/09588221.2015.1047456
Bostancıoğlu, A., & Handley, Z. (2018). Developing and validating a questionnaire for evaluating the EFL ‘Total PACKage’: Technological Pedagogical Content Knowledge (TPACK) for English as a Foreign Language (EFL). Computer Assisted Language Learning, 31(5-6), 572-598. https://doi.org/10.1080/09588221.2017.1422524
Brown, E., & Davis, L. (2018). Understanding the influence of context on teachers’ beliefs and practices. Teaching and Teacher Education, 73, 145-155.
Brown, E., & Johnson, S. (2016). The impact of teacher actions on student learning. Journal of Educational Psychology, 108(3), 345-360.
Carrier, S. I., & Moulds, L. D. (2003). Pedagogy, andragogy, and cybergogy: Exploring best-practice paradigm for online teaching and learning. Sloan-C 9th International Conference on Asynchronous Learning Networks (ALN), USA.
Chadha, N. K. (2009). Applied Psychometry. Sage Publications.
Chapnick, S., & Meloy, J. (2005). From andragogy to heutagogy: Renaissance e-learning: Creating dramatic and unconventional learning experiences. Essential resources for training and HR professionals. John Wiley and Sons.
Chen, P. S., & Hsu, Y. C. (2020). Developing and validating a technology pedagogical and content knowledge questionnaire for language teachers. Assessing Writing, 45, 1-17.
Duong, P. N., Voordeckers, W., Huybrechts, J., & Lambrechts, F. (2022). On external knowledge sources and innovation performance: Family versus non-family firms. Technovation, 114, 102448. https://doi.org/10.1016/j.technovation.2021.102448
Faramarzi, S., Heidari Tabrizi, H., & Chalak, A. (2019). Telegram: An instant messaging application to assist distance language learning. International Journal of Instruction, 12(1), 1263-1280. https://doi.org/10.29333/iji.2019.12181a
Fennema, E., & Franke, M. (1992). Teachers’ knowledge and its impact. In D. A. Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp.147-164). Simon & Schuster Macmillan.
George, D., & Mallery, P. (2020). IBM SPSS statistics 26 step by step: A simple guide and reference. Routledge.
Goldberg, M. (2012). Arts integration: Teaching subject matter through the arts in multicultural settings (4th ed.). Pearson.
Graham, C. R., Burgoyne, N., Cantrell, P., Smith, L., St. Clair, L., & Harris, R. (2009). TPACK development in science teaching: Measuring the TPACK confidence of in-service science teachers. TechTrends, 53(5), 70-79.
Guerriero, S. (2014). Teachers’ pedagogical knowledge and the teaching profession: Background report and project Objectives. OECD.
Hase, S., & Kenyon, C. (2013). Self-determined learning: Heutagogy in action. Bloomsbury.
Hassani, V., Khatib, M., & Yazdani Moghaddam, M. (2019). An investigation of teachers’ perceptions of KARDS in an EFL context. International Journal of Foreign Language Teaching and Research, 7(28), 135-153.
Hill, H. C., Ball, D. L., & Schilling, S. G. (2008). Unpacking pedagogical content knowledge: Conceptualizing and measuring teachers’ topic-specific knowledge of students. Journal for Research in Mathematics Education, 39(4), 372-400.
Hofer, M. & Harris, J. (2010). Differentiating TPACK development: Using learning activity types with in-service and preservice teachers. In C. D. Maddux, D. Gibson, & B. Dodge (Eds.). Research highlights in technology and teacher education (pp. 295-302). Chesapeake, VA: Society for Information Technology and Teacher Education (SITE)
Hudson, B., & Zgaga, P. (2017). History, context, and overview: Implications for teacher education policy, practice, and future research. In B. Hudson (Ed.), Overcoming fragmentation in teacher education policy and practice (pp. 1–26). Cambridge University Press.
Jones, B., & Brown, C. (2019). Exploring the relationship between critical analysis and teacher effectiveness. Teaching and Learning Studies, 18(3), 245-260.
Johnson, S., & Smith, M. (2017). Contextual factors in instructional decision-making: Comparing urban and rural teachers. Urban Education, 52(8), 983-1009.
Kaiser, G., & König, J. (2019). Evaluating the outcome of teacher education at different points in time. Journal of Teacher Education, 70(4), 393-406.
Kazutoshi, T., & Ever, M. B. (1999). Ergonagy: Its Relation to Andragogy. Paper presented at the Annual Meeting of the Comparative and International Education Society. Toronto: Canada (April 14-18). (ERIC Document Reproduction Service No. ED438464).
Kirschner, F., Paas, F., & Kester, L. (2018). Thematic review of studies in the field of teachers' practical knowledge. Educational Research Review, 23, 75-89.
König, J., Hanke, P., Glutsch, N., et al. (2022). Teachers’ professional knowledge for teaching early literacy: Conceptualization, measurement, and validation. Educational Assessment, Evaluation and Accountability, 34, 483-507. https://doi.org/10.1007/s11092-022-09393-z
Korkmaz, S., Goksuluk, D., & Zararsiz, G. (2019). MVN: An R package for assessing multivariate normality. Department of Biostatistics, Hacettepe University, Ankara, Turkey.
Kumaravadivelu, B. (2012). Language teacher education for a global society. Taylor & Francis.
Kurt, G., & Atay, D. (2021). Development and validation of a Content and Language Integrated Learning (CLIL) knowledge questionnaire for language teachers. Language Teaching Research, 25(2), 249-273.
Kyriazos, T. A., & Stalikas, A. (2018). Applied psychometrics: The steps of scale development and standardization process. Psychology, 9, 2531-2560. https://doi.org/10.4236/psych.2018.911145
Lehmann, T. (2020). International perspectives on knowledge integration: Theory, research, and good practice in preservice teacher and higher education. Brill Sense. https://doi.org/10.1163/9789004429499.
Lim, P. S., Din, W. A., Nik Mohamed, N. Z., & Swanto, S. (2022). Development and validation of a survey questionnaire assessing technological pedagogical content knowledge and e-learning acceptance for Malaysian English teachers. International Journal of Education, Psychology and Counseling, 7(48), 206-220. https://doi.org/10.35631/IJEPC.748015
Liu, S. Y., & Shulman, L. S. (2007). Validating measures of teachers’ knowledge of statistics: The use of and interpretation of reliability coefficients. Journal of Educational Measurement, 44(1), 47-62.
Lee, S., Kim, H., & Park, S. (2017). Evaluating teachers’ integrated pedagogical content knowledge: Development and validation of a questionnaire. Journal of Education and Training Studies, 5(1), 47-65.
Lee, J., & Smith, M. K. (2018). Teacher responsiveness and student engagement: A multilevel analysis. Journal of Educational Research, 111(4), 501-514.
Lee, J., & Turner, J. E. (2017). Extensive knowledge integration strategies in preservice teachers: The role of perceived instrumentality, motivation, and self-regulation. Educational Studies, 5(44), 505-520. https://doi.org/10.1080/03055698.2017.1382327
Lewin, A. Y., Massini, S., & Peeters, C. (2011). Microfoundations of internal and external absorptive capacity routines. Organization Science, 22(1), 1343-1371.
Li, S., Liu, Y., & Su, Y. (2022). Differential analysis of teachers’ technological pedagogical content knowledge (TPACK) abilities according to teaching stages and educational levels. Sustainability, 14(12), 1-15. https://doi.org/10.3390/su14127176
Lieberei, T., Welter, V. D. E., Großmann, L., & Krell, M. (2023). Findings from the expert-novice paradigm on differential response behavior among multiple-choice items of a pedagogical content knowledge test: Implications for test development. Frontiers in Psychology, 14, 1-17. https://doi.org/10.3389/fpsyg.2023.1240120
Nafiyan, A. A. (2020). Language teachers’ perceptions of continuing professional development (CPD) opportunities: Development and validation of a questionnaire. Teaching English with Technology, 20(1), 85-103.
Nguyen, L. A. T., & Habók, A. (2023). Tools for assessing teacher digital literacy: A review. Journal of Computers in Education, 1-42.
Onyefulu, C., & Abayomi, W. O. (2023). Student teachers’ knowledge and conceptions of classroom assessment at a university in Jamaica. Open Access Library Journal, 10, 1-19. https://doi.org/10.4236/oalib.1110640
Park, S., Choe, Y., & Johnson, J. F. (2020). Teach For America teachers’ pathway to common instructional language for literacy teaching. Teachers College Record, 122(6), 1-43.
Rehhali, M., Mazouak, A., & Belaaouad, S. (2022). The Digital assessment of learning: Current situation and perspectives: Case of teachers of life and earth sciences. Journal of Information Technology Management, 14(3), 65-78. doi: 10.22059/jitm.2022.87534
Sahin, I. (2011). Development of survey of technological pedagogical and content knowledge (TPACK). Turkish Online Journal of Educational Technology,10(1), 97-105.
Schmidt, D. A., Baran, E., Thompson, A. D., Mishra, P., Koehler, M. J., & Shin, T. S. (2009). Technological pedagogical content knowledge (TPACK): The development and validation of an assessment instrument for preservice teachers. Journal of Research on Technology in Education, 42(2), 123-149.
Sherin, M. G., & van Es, E. A. (2002). Learning to notice: Scaffolding new teachers’ interpretations of classroom interactions. Journal of Technology and Teacher Education, 10(4), 571-596.
Sherin, M. G. (2007). The development of teachers’ professional vision in video clubs. In Video research in the learning sciences (pp. 383-395). Erlbaum.
Shulman, L. S. (1986). Paradigms and research programs in the study of teaching: A contemporary perspective. In M. Wittrock (Ed.), Handbook of research on teaching (3rd ed.). Macmillan.
Shulman, L. S. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57(1), 1-22.
Smith, A. (2018). The impact of reflective practices on teacher development. Journal of Education, 42(2), 123-145.
Steele, M. D., & Hillen, A. F. (2012). The content-focused methods course: A model for integrating pedagogy and mathematics content. Mathematics Teacher Educator, 1(1), 53-70. https://doi.org/10.5951/mathteaceduc.1.1.0053
Stoten, D.W. (2022). Navigating heutagogic learning: mapping the learning journey in management education through the OEPA model. Journal of Research in Innovative Teaching & Learning, 15 (1), 83-97. https://doi.org/10.1108/JRIT-07-2020-0038
Timur, B., & Tasar, M. F. (2011). In-service science teachers’ technological pedagogical content knowledge confidences and views about technology-rich environments. CEPS Journal, 1(4), 11-25. https://doi.org/10.25656/01:6057
Van Driel, J. H., & Berry, A. (2012). Teacher professional development focusing on pedagogical content knowledge. Educational Researcher, 41(1), 26-28. https://doi.org/10.3102/0013189X11431010
Vongkulluksn, K., Xie, K., & Bowman, M. (2018). Revisit, redefine, and embrace the technology pedagogical content knowledge (TPACK) framework: Examining theoretical and empirical advancements. Australasian Journal of Educational Technology, 34(1), 1-14.
Yılmaz, E. (2020). Integrated knowledge: A comprehensive framework for understanding and enhancing the quality of education. Educational Sciences: Theory and Practice, 20(4), 1-19.
You, J. (2020). Technological pedagogical content knowledge for computer-assisted language learning: Validating a questionnaire for English as a foreign language teacher. Computers & Education, 157, 1-15.
Zhu, X., Raquel, M., & Aryadoust, V. (2019). Structural equation modeling to predict performance in English proficiency tests. In Quantitative Data Analysis for Language Assessment Volume II (pp. 101-126). Routledge.
Development and Validation of an Integrated Knowledge Questionnaire: A Quantitative Investigation
Farzaneh Rahimzadeh1, Ahmad Mohseni 2*, Ghafour Rezaei Golandouz3
1Department of English, Science and Research Branch, Islamic Azad University, Tehran, Iran
2Department of English, South Tehran Branch, Islamic Azad University, Tehran, Iran
3Department of English, Garmsar Branch, Islamic Azad University, Garmsar, Iran
Received: 2024-02-06; Accepted: 2024-06-02
Abstract
Integrating technology into instructional design is a growing topic in higher education. This study sheds light on the integration of technology and KARDS factors. The researchers designed an integrated version of the teachers’ knowledge questionnaire to provide a more comprehensive understanding of teachers’ knowledge. The study aimed to develop and validate a Teachers’ Integrated Knowledge Questionnaire (TIKQ) through a quantitative investigation and to assess individuals’ integrated knowledge in a specific domain. The research involved multiple stages: item generation, pilot testing, and statistical analysis. A pool of potential items covering various aspects of integrated knowledge within the chosen domain was generated for the TIKQ. The questionnaire, whose second segment included 58 items, was piloted and administered to 461 teachers. To determine whether the trait structure of the teachers’ integrated knowledge questionnaire enjoys a good fit, statistical techniques such as exploratory factor analysis and confirmatory factor analysis (CFA) were conducted. Cronbach’s alpha and structural equation modeling were used to measure the internal consistency reliability of the TIKQ. The new version of the integrated knowledge scale will assist teacher trainers in assessing teachers’ knowledge. The findings from this study contribute to an effective tool for measuring integrated knowledge in a specific domain, usable in various contexts such as educational settings, research studies, and professional assessments.
Keywords: Assessment, Confirmatory Factor Analysis, Integrated Knowledge Questionnaire, Quantitative Study, Teacher’s Knowledge
INTRODUCTION
Currently, technology has made a valuable contribution to the knowledge domains of teachers’ assessment (Rehhali et al., 2022). One way technology has contributed to teacher assessment is through the development of digital assessment tools and online platforms (Nguyen & Habók, 2023). Several studies have developed and validated questionnaires to assess teachers’ TPACK alongside other approaches to assessing teachers’ knowledge (Hill et al., 2008; Liu & Shulman, 2007; Kirschner et al., 2018; Park et al., 2020; Schmidt et al., 2009; Van Driel & Berry, 2012; Vongkulluksn et al., 2018). These studies highlight the importance of developing and validating specialized questionnaires to assess teachers’ knowledge in specific domains and contexts.
The underlying issue of education quality is the teacher’s knowledge (Blömeke et al., 2022; König et al., 2022; Lieberei et al., 2023). In the quest for enhancing education, educators have long realized the significance of effectively assessing teachers’ knowledge. The accurate evaluation of their knowledge base not only empowers educators but also plays a pivotal role in shaping the future of the learners (Onyefulu & Abayomi, 2023). Teachers’ knowledge assessment has been a challenge in the education sector. Traditional assessment methods often focus on isolated areas of knowledge, failing to capture the intricacies of a teacher’s holistic understanding. This limitation hampers educators’ ability to effectively address students’ diverse needs and impedes educational growth. Recognizing this issue, our study aimed to develop an assessment tool that overcomes these shortcomings.
The significance of this study lies in its potential to bridge the gap between traditional assessments and the dynamic demands of modern education. An integrated knowledge scale captures the multidimensional nature of teacher knowledge, providing a comprehensive understanding of teachers’ capabilities. This comprehensive assessment tool aims to empower educators and foster continuous professional growth. The purpose of the study is to develop an integrated scale for teacher knowledge assessment in Iran. Scale development involves selecting and arranging suitable items to form questions (Chadha, 2009).
Knowledge integration is vital for understanding the development of strategy and capabilities within organizations (Lehmann, 2020). Researchers interested in exploring the micro-foundations of approaches and competencies have recently been drawn to this concept (Lewin et al., 2011). Concerning the disputes mentioned earlier, numerous studies have explored KARDS and TPACK (Angeli & Valanides, 2009; Hassani et al., 2019).
One way technology contributes to pedagogical content knowledge (PCK) is by providing teachers with access to a multitude of digital resources and tools that can enhance their understanding and delivery of subject matter (Li et al., 2022). Pedagogical content knowledge is a fundamental aspect of teachers’ professional expertise, encompassing an understanding of learners’ conceptual ideas and effective instructional strategies (Lieberei et al., 2023). There are existing scales in the literature that assess aspects of pedagogical content knowledge (PCK) (Aydın & Turhan, 2023; Guerriero, 2014; Shulman, 1987). Even though there is a lack of extensive research on knowledge integration, diverse theories and approaches can still be used to assess it (Lehmann, 2020).
However, there seems to be a gap in the literature when it comes to scales specifically addressing the integration of cybergogy knowledge, heutagogy knowledge, and KARDS pedagogy. During the last two decades, a growing number of empirical studies have directly assessed teacher knowledge. These studies provide evidence that teachers’ knowledge and skills are crucial factors in their students’ achievement (e.g., König et al., 2022). Studies on the effectiveness of teacher education (Blömeke et al., 2022) have highlighted the significance of assessing teacher knowledge as an outcome at different points in teacher education (Kaiser & König, 2019). These studies underscore the importance of assessing teacher knowledge as a crucial aspect of effective teacher education. By assessing teacher knowledge at different points in teacher education, researchers and educators can gain insights into the effectiveness of teacher preparation programs and identify areas for improvement.
Reviews of the literature indicate that there are presently no criteria for technology integration with KARDS that could be utilized as a guide in designing instruction and knowledge, specifically teachers’ knowledge. Moreover, few studies have explored combining the two instructional designs to develop an integrated knowledge questionnaire. Integrating technology into the modules is pivotal to increasing the effectiveness of the KARDS curriculum. Although quantitative studies on this topic are few, such an integration would benefit teachers.
One line of the study focused on KARDS pedagogy as the pillar for integrating technology into teachers’ knowledge. The model consists of five sections, each accountable for a distinct stage of instructional design; these sections comprise the knowing, analyzing, recognizing, doing, and seeing modules. Another aspect of the study centered on TPACK-XL, which encompasses supplementary knowledge in technology, pedagogy, and context; also called ICT-TPCK, it was proposed by Angeli and Valanides (2009). Accordingly, a new survey has been developed to assess teachers’ expertise in various subjects.
The contemporary inquiry is designed to operationalize the constructs for instructors and to produce reliable, measurable investigations of them. The study addresses the following research questions:
RQ1. Do the teachers’ integrated knowledge questionnaire and its 13 components enjoy appropriate reliability indices?
RQ2. Does the trait structure of the teachers’ integrated knowledge questionnaire enjoy a good fit?
RQ3. Are there any significant correlations among the components of the teachers’ integrated knowledge questionnaire?
The development of an integrated knowledge scale offers a pioneering approach to teacher assessment, addressing the limitations of traditional evaluation methods. By recognizing the multifaceted nature of teacher knowledge, this innovative tool empowers educators and promotes professional growth.
LITERATURE REVIEW
Innovation is based on combining new concepts; therefore, organizations need external knowledge to innovate. Innovation occurs when new and existing knowledge is integrated, resulting in a discovery. Knowledge is recognized as an essential administrative attribute for nurturing innovation (Duong et al., 2022). Knowledge integration involves purposefully applying knowledge from various domains to act and teach, encompassing a range of knowledge types (Lehmann, 2020). Instructors must use integrative teaching methods. These procedures facilitate knowledge integration (Hudson & Zgaga, 2017).
Pedagogy is a method of education that existed during the Industrial Revolutions 1.0 and 2.0. Pedagogy is the knowledge and art of teaching that denotes a particular instruction theory. Modern pedagogy is influenced by the development of the industrial revolution, with the hope that education needs to be systematic in creating an understanding based on scientific philosophy to integrate existing knowledge with new knowledge (Carrier & Moulds, 2003). Andragogy was likewise introduced as a method of education during the Industrial Revolutions 1.0 and 2.0. Andragogy views how adults go through the learning process, i.e., implementing educational experience for adults (Hase & Kenyon, 2013). An added value to pedagogy, andragogy, synergogy, and cybergogy is heutagogy. Heutagogy provides a distinct emphasis on learning, how to learn, and how to create chances of universal progress, not a linear process, in which the students determine the direction of their progress (Blaschke, 2012).
Theoretical Framework: KARDS
Kumaravadivelu (2012) declares that, from a post-transmission viewpoint, the focus shifts from information-driven to inquiry-driven approaches; he also describes that, in the particular setting of L2 teaching and teacher education, transcending the confines of transmission models means going further than the notion of method. The post-transmission viewpoint pursues the transformation of an information-oriented teacher education model into an inquiry-driven one. In the post method, prospective instructors would be conscious intellectuals, instructors, and investigators.
One of the module-driven models for language instructor training is KARDS. The triangular modules consist of five facets: knowing, analyzing, recognizing, doing, and seeing. Accordingly, acquiring personal knowledge, professional knowledge, and practical knowledge are the building blocks of the "knowing" section. Next, inspecting learners’ needs, wants, and situations is a subsection of the analyzing facet. Self, peer, and educator assessments are props of the recognizing issue. Doing includes executing (a) microteaching, (b) team-teaching, and (c) self-teaching. Comprehending divergences between learner, teacher, and researcher standpoints of instruction performances is involved in the "seeing" module (Kumaravadivelu, 2012).
Although there was some research about TPACK in Iran, little research has been done on integrating KARDS pedagogy with TPACK. Teachers’ knowledge based on KARDS and TPACK needs more investigation in Iran. Research in interdisciplinary teacher education is needed in the Iranian context. An integrated paradigm shift is needed in teacher tutelage in the Iranian teaching context.
Teachers’ knowledge is not constant. It is an extensive, integrated, functioning system whose parts are hard to isolate (Fennema & Franke, 1992). Accordingly, some have studied knowledge as an integrated construct, while others have yet to confirm this view. One line of teacher knowledge research centers on categorizing the diverse sorts of teacher knowledge used in the teaching procedure (Ball et al., 2008).
Several studies have investigated knowledge integration (e.g., Ball et al., 2008; Steele & Hillen, 2012). However, few of them developed an integrated knowledge questionnaire. Numerous assessment studies have been done on technological pedagogical content knowledge in this field (Baran & Uygun, 2016; Hofer & Harris, 2010; Sahin, 2011; Timur & Tasar, 2011). Some explorations have been done on the scales of TPACK (Graham et al., 2009; Sahin, 2011; Timur & Tasar, 2011).
Researchers have developed questionnaires to assess language teachers’ knowledge of language assessment principles and practices. These questionnaires cover areas such as test development, test analysis, and formative assessment. Ahmadian et al. (2020) developed the language assessment knowledge questionnaire (LAKQ) to assess language teachers’ assessment literacy.
Studies have focused on developing questionnaires to assess language teachers’ knowledge of different teaching approaches and methodologies. These questionnaires can cover various approaches such as communicative language teaching, task-based learning, or content and language integrated learning. Kurt and Atay (2021) developed a questionnaire to assess language teachers’ knowledge of content and language integrated learning (CLIL).
Researchers have developed questionnaires to assess language teachers’ knowledge of professional development opportunities and their effectiveness. These questionnaires capture teachers’ understanding and perceptions of continuing professional development (CPD) programs in the field of language education (Nafiyan, 2020).
Several studies have focused on developing TPACK questionnaires specifically designed for language teachers. These questionnaires measure language teachers’ knowledge and beliefs about integrating technology into language instruction. Examples include the TPACK-LTQ by Chen and Hsu (2020) and the TPACK-CALL by You (2020).
This study has focused on developing an integrated knowledge questionnaire specifically designed for language teachers. The questionnaire measures teachers’ knowledge domains of KARDS and technology factors in pedagogical assessment. There are some studies related to the development of integrated knowledge questionnaires specifically designed for language teachers’ assessment (Lee et al., 2017; Lim et al., 2022; Yılmaz, 2020). This tool aims to measure various domains of knowledge such as KARDS, as well as integrated factors such as cybergogy, heutagogy, and post-method issues.
METHODOLOGY
Design of the Study
In the first phase, exploratory factor analysis (EFA) was run on 76 items. After omitting the 18 items that did not load under their respective factors, a second EFA was run. Varimax rotation and the principal axis factoring method were used to probe the underlying constructs of the 58 items of the TIKQ. The LISREL software was used for the confirmatory phase. The analysis indicated that the present sample size was sufficient for running confirmatory factor analysis, and the results addressed the three research questions.
Participants
The participants consisted of 464 EFL teachers, both male and female. They had between two and 20 years of teaching experience and came from various age groups ranging from 19 to 50. They held different university degrees ranging from BA and MA to PhD candidacy.
Figure 1
Teachers’ degree
Statistically, the teachers held different university degrees: 51.2% had MA degrees, 31.3% had BA degrees, and 17.5% were PhD candidates.
Figure 2
Teachers’ experience
Regarding teaching experience, 32.4% had less than five years of experience, 31.9% had between five and ten years, 28% had between ten and twenty years, and 7.7% had more than 20 years of teaching experience.
Figure 3
Teachers’ workplace
Instruments
This study utilized the KARDS components and the technology framework components to develop a questionnaire. The questionnaire development phase is the focus of this survey.
Google Forms
The researcher used Google Forms to develop questions in the form of a Likert scale. A Google Form is a free online tool that allows users to create forms, surveys, or quizzes.
Telegram
Telegram is a free application that can be used for online language learning programs and possesses major advantages for facilitating the learning process (Faramarzi et al., 2019). Telegram was used to distribute the questionnaire among the participants.
Procedure
The questionnaire was developed following steps proposed by various researchers (Baser, Kopcha, & Ozden, 2016; Bostancıoğlu & Handley, 2018).
Item Development
In the preliminary steps for developing items, the writers of the contemporary research reviewed the literature on instructors’ integrated knowledge to check for any available instruments and to find a theoretical background for the questionnaire. In a corresponding move, the in-depth analysis of the works was complemented by content sampling to produce comprehensive and representative content for improving the scale items. Hence, the content assortment of the scale varied from the preceding instructors’ knowledge questionnaires. Self-initiated item generation was the only route for making item pools, so the researcher created an efficient item pool based on integrated disciplines in different fields. The primary item pool involved 100 statements. A preliminary review and consultation with domain experts reduced the items to 76. The specialists commented on the items’ properness, applicability, accuracy, and phrasing, as well as on the omission, rephrasing, and similarity of items. Factor analysis was then run on the 76 items, and some items were omitted. The remaining 58 items were written in a Google document layout following the standard route for questionnaire development. A six-point Likert scale (Strongly Agree, Agree, Slightly Agree, Slightly Disagree, Disagree, and Strongly Disagree) was used. Ultimately, the questionnaire was administered to 464 English language teachers. A short sketch of how such Likert responses can be scored is given below.
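For illustration, a minimal R sketch (not the authors’ code) of converting six-point Likert responses exported from Google Forms into numeric scores; the data frame `responses` and its column `answer` are hypothetical placeholders.

```r
# Hypothetical sketch: mapping six-point Likert labels to scores 1-6.
likert_levels <- c("Strongly Disagree", "Disagree", "Slightly Disagree",
                   "Slightly Agree", "Agree", "Strongly Agree")

responses <- data.frame(
  answer = c("Agree", "Strongly Agree", "Slightly Disagree")  # toy data
)

# factor() fixes the ordering of the labels; as.integer() maps them to 1..6
responses$score <- as.integer(factor(responses$answer, levels = likert_levels))
print(responses)
```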
Item Analysis
The researcher did not find any context-based, quantitatively appropriate instrument for measuring teachers’ integrated knowledge, so a questionnaire instrument was developed based on comprehensive investigations of research findings. The questionnaire contained three sections. The first was a demographic section that gathered information on the participants’ gender, teaching background, age, and university degree. The second section, the central part of the questionnaire, included closed-ended items on a six-point Likert-type scale asking respondents to read each statement and check the box that best signified their ideas. This section operationalized the components of teachers’ knowledge integration based on the mentioned frameworks through the closed-ended items. The questionnaire was piloted with 76 items; 58 remained in the revised questionnaire. The revised questionnaire was distributed to 461 Iranian ELT instructors. Convenience sampling was used to collect the data, and the survey was distributed via the WhatsApp and Telegram apps. The outcomes of the inspection were satisfactory. Accordingly, to respond to the first research question, Cronbach’s alpha reliability indices and structural equation modeling were employed.
Thirteen components were measured through 58 items to respond to the second research question. The assumptions of univariate and multivariate normality were retained: the skewness and kurtosis indices were within the range of ±2, supporting the univariate normality of the data. The Mardia test for multivariate normality was used; the index was computed using the R package MVN developed by Korkmaz et al. (2019).
Consequently, the assumption of multivariate normality was retained. The Cronbach’s alpha reliability indices for the TIKQ and its 13 components indicated suitable reliability. The LISREL software was used for confirmatory factor analysis on the TIKQ. Standardized regression coefficients (beta values) were computed for the one-headed and two-headed arrows, and the relationships between the higher-order latent variables were analyzed. All the indicators made significant contributions to their latent variables. Model fit indices, the chi-square statistic, and the root mean square error of approximation were used to answer the second research question.
Accordingly, the model was supported by the results. The probability of close fit, the standardized root mean square residual, and the goodness of fit index all supported the fit of the model, as did all the incremental fit indices, the comparative fit index, and the normed fit index. As a result, the critical N value indicated that the current sample size was adequate for running confirmatory factor analysis. In the following part, the components of the higher-level variables are discussed in detail.
RESULTS
Exploratory Factor Analysis
Before discussing the results, the univariate and multivariate normality of the data and the reliability indices should be reported. As shown in Table 1, the skewness and kurtosis indices of normality were all lower than ±2, indicating that the assumption of univariate normality was retained.
Table 1 Univariate and Multivariate Test of Normality | |||||||||
Items | Skew | Kurtosis | Items | Skew | Kurtosis | Items | Skew | Kurtosis | |
Q1 | -0.28 | -0.78 | Q21 | -0.06 | -0.98 | Q41 | -0.19 | -0.82 | |
Q2 | -0.06 | -0.76 | Q22 | -0.20 | -0.78 | Q42 | -0.15 | -0.82 | |
Q3 | -0.11 | -0.84 | Q23 | -0.17 | -0.78 | Q43 | -0.07 | -0.91 | |
Q4 | -0.07 | -0.87 | Q24 | -0.16 | -0.89 | Q44 | -0.04 | -0.90 | |
Q5 | -0.18 | -0.78 | Q25 | -0.17 | -0.89 | Q45 | -0.21 | -0.67 | |
Q6 | -0.17 | -0.70 | Q26 | -0.22 | -0.85 | Q46 | -0.13 | -0.95 | |
Q7 | -0.11 | -0.67 | Q27 | -0.18 | -0.92 | Q47 | -0.18 | -0.83 | |
Q8 | -0.18 | -0.77 | Q28 | -0.11 | -0.71 | Q48 | -0.01 | -0.93 | |
Q9 | -0.27 | -0.74 | Q29 | -0.25 | -0.83 | Q49 | -0.21 | -0.78 | |
Q10 | -0.22 | -0.79 | Q30 | -0.05 | -0.82 | Q50 | -0.07 | -0.85 | |
Q11 | -0.18 | -0.83 | Q31 | -0.11 | -0.94 | Q51 | -0.12 | -0.74 | |
Q12 | -0.16 | -0.89 | Q32 | -0.09 | -0.85 | Q52 | -0.22 | -0.74 | |
Q13 | -0.32 | -0.64 | Q33 | -0.19 | -0.87 | Q53 | 0.06 | -0.75 | |
Q14 | -0.04 | -0.95 | Q34 | -0.25 | -0.65 | Q54 | -0.17 | -0.85 | |
Q15 | -0.18 | -0.90 | Q35 | -0.06 | -1.01 | Q55 | -0.28 | -0.83 | |
Q16 | -0.28 | -0.78 | Q36 | -0.11 | -0.89 | Q56 | -0.08 | -0.84 | |
Q17 | -0.12 | -0.82 | Q37 | -0.08 | -0.77 | Q57 | -0.25 | -0.73 | |
Q18 | -0.18 | -0.79 | Q38 | -0.15 | -0.76 | Q58 | -0.04 | -0.88 | |
Q19 | -0.09 | -0.80 | Q39 | -0.10 | -0.92 | Mardia | .985 | ||
Q20 | -0.08 | -0.93 | Q40 | -0.16 | -0.80 |
Table 1 also shows the results of Mardia’s test of multivariate normality. The index was computed using the R package MVN developed by Korkmaz et al. (2019). Mardia’s index of multivariate normality of .985 was lower than the criterion of ±3 (Bae & Bachman, 2010). It was concluded that the assumption of multivariate normality was retained.
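As an illustration of this normality screen, the R sketch below combines psych::describe() for the univariate skewness/kurtosis check (criterion ±2) and MVN::mvn() for Mardia’s index, in line with Korkmaz et al.’s package; the data frame `tikq` holding the 58 item responses is a placeholder, not the study data.

```r
library(psych)
library(MVN)

# univariate screen: flag any item outside the +/-2 skew/kurtosis criterion
desc <- describe(tikq)
which(abs(desc$skew) > 2 | abs(desc$kurtosis) > 2)

# multivariate screen: Mardia's test as implemented in the MVN package
mardia_res <- mvn(tikq, mvnTest = "mardia")
mardia_res$multivariateNormality
```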
The Cronbach’s alpha reliability indices for the TIKQ questionnaire and its 13 components show that the overall questionnaire enjoyed a reliability index of .851. The reliability indices for the 13 components were as follows: Pedagogical knowledge (α = .805), Pedagogical content knowledge (α = .878), Post method (α = .882), Cybergogy (α = .822), Technological level (α = .856), Self-regulation (α = .833), Self-determined learning (α = .844), Analyzing (α = .846), Reflection (α = .841), Recognizing (α = .852), Doing (α = .841), Context (α = .877), and Seeing (α = .864).
Based on these criteria, it can be concluded that the overall TIKQ, and its 13 components enjoyed “good” reliability indices; i.e. they were equal to or higher than .80.
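A minimal R sketch of this reliability step, using psych::alpha(), is shown below; the component-to-item groupings are illustrative (following the mapping reported later in the paper), and `tikq` is a placeholder data frame.

```r
library(psych)

# illustrative item groupings; the remaining components follow the same pattern
components <- list(
  pedagogical_knowledge = c("Q49", "Q50", "Q51", "Q52"),
  cybergogy             = c("Q8", "Q13", "Q15", "Q33")
)

# Cronbach's alpha per component, then for the full 58-item scale
sapply(components, function(cols) alpha(tikq[, cols])$total$raw_alpha)
alpha(tikq)$total$raw_alpha
```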
The EFA was run using varimax rotation because the elements inside the Component Correlation Matrix were all lower than ±.32. The decision on the number of factors to be extracted was based on two types of Parallel Analysis: computational and graphical. The computational Parallel Analysis suggested 13 factors to be extracted as the underlying constructs of the 58 items of the TIKQ.
As shown in Figure 4, the graphical Parallel Analysis also suggested 13 factors to be extracted as the underlying constructs of the 58 items of the TIKQ.
Figure 4
Graphical Parallel Analysis
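Both the computational and the graphical Parallel Analysis can be reproduced in one call with psych::fa.parallel(), as in the hedged sketch below; the factoring method is set to principal axis to match the extraction reported above, and `tikq` is again a placeholder.

```r
library(psych)

# prints the suggested number of factors and draws the scree/parallel plot
pa <- fa.parallel(tikq, fm = "pa", fa = "fa")
pa$nfact  # number of factors suggested (13 in this study)
```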
After justifying the number of factors extracted and the rotation method, the results of the first EFA are discussed. Table 2 shows the KMO index of sampling adequacy and Bartlett’s test of sphericity. The KMO index of .682 was higher than the minimum required criterion of .60 (Pallant, 2016; Field, 2018). The significant result of Bartlett’s test (χ2 (1653) = 6134.89, p < .05) indicated that the correlation matrix (Appendix I) was factorable. It should be noted that for EFA to render meaningful factors, items related to a factor should have high correlations with each other and low correlations with items loading under other factors. The significant result of Bartlett’s test showed that there were meaningful correlations among the items.
Table 2 KMO and Bartlett's Test | ||||
Kaiser-Meyer-Olkin Measure of Sampling Adequacy. | .682 | |||
Bartlett's Test of Sphericity | Approx. Chi-Square | 6134.895 | ||
df | 1653 | |||
Sig. | .000 |
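A minimal sketch of these factorability checks, assuming the psych package and the placeholder data frame `tikq`:

```r
library(psych)

KMO(tikq)$MSA  # overall Kaiser-Meyer-Olkin sampling adequacy (.682 reported)

# Bartlett's test of sphericity on the item correlation matrix
cortest.bartlett(cor(tikq), n = nrow(tikq))  # chi-square, df, p-value
```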
Regarding the number of factors extracted and the total variance explained, the SPSS software extracted 13 factors that accounted for 56.89 percent of the total variance.
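The extraction itself can be sketched as follows, mirroring the reported settings (13 factors, principal axis factoring, varimax rotation); `tikq` is a placeholder for the 58-item data.

```r
library(psych)

efa <- fa(tikq, nfactors = 13, rotate = "varimax", fm = "pa")
print(efa$loadings, cutoff = .40)        # rotated factor matrix
sum(efa$Vaccounted["Proportion Var", ])  # total variance explained (~.57 reported)
```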
Table 3 shows the factor loadings of the 58 items under the 13 extracted factors.
Based on the results shown in Table 3, it can be concluded that:
Items 14, 56, 59, 60, 61, and 64 loaded under the first factor, which can be labeled as "Post method". The composite reliability (CR), i.e., the construct reliability, for the first factor was .877. Its average variance extracted (AVE), an index of convergent validity, was .736. All factor loadings enjoyed large effect sizes; i.e., => .50.
Items 19, 50, 51, 52, and 53 loaded under the second factor, which can be labeled as "Context". The CR and AVE indices were .870 and .756, respectively. All factor loadings enjoyed large effect sizes; i.e., => .50.
Table 3 Rotated Factor Matrix (factor loadings of the 58 items under the 13 extracted factors) |
Factor | CR | AVE | Items (loadings) |
1. Post Method | .877 | .736 | Q59 (.752), Q61 (.748), Q64 (.744), Q56 (.735), Q14 (.730), Q60 (.709) |
2. Context | .870 | .756 | Q51 (.780), Q53 (.779), Q52 (.753), Q19 (.752), Q50 (.718) |
3. Technological Level | .848 | .726 | Q29 (.771), Q26 (.740), Q27 (.725), Q40 (.708), Q28 (.687) |
4. Recognizing | .848 | .725 | Q31 (.768), Q11 (.761), Q13 (.728), Q34 (.695), Q12 (.675) |
5. Reflection | .834 | .708 | Q72 (.724), Q73 (.714), Q74 (.703), Q75 (.700), Q76 (.699) |
6. Pedagogical Content Knowledge | .872 | .794 | Q25 (.813), Q43 (.803), Q41 (.798), Q24 (.760) |
7. Seeing | .849 | .764 | Q21 (.785), Q8 (.770), Q9 (.770), Q23 (.731) |
8. Analyzing | .840 | .753 | Q48 (.784), Q35 (.774), Q32 (.755), Q33 (.697) |
9. Self-Determined Learning | .837 | .750 | Q36 (.788), Q37 (.745), Q3 (.734), Q2 (.732) |
10. Doing | .835 | .747 | Q7 (.782), Q1 (.753), Q20 (.752), Q71 (.700) |
11. Self-Regulated | .827 | .738 | Q47 (.770), Q17 (.739), Q45 (.729), Q4 (.714) |
12. Cybergogy | .810 | .718 | Q39 (.757), Q18 (.757), Q16 (.683), Q10 (.676) |
13. Pedagogical Knowledge | .791 | .697 | Q69 (.710), Q66 (.703), Q70 (.691), Q68 (.685) |
Items 26, 27, 28, 29, and 40 loaded under the third factor, labeled as "Technological level". The CR and AVE indices were .848 and .726, respectively.
Items 11, 12, 13, 31, and 34 loaded under the fourth factor, labeled as "Recognizing". The CR and AVE indices were .848 and .725, respectively.
Items 72, 73, 74, 75, and 76 loaded under the fifth factor, labeled as "Reflection". The CR and AVE indices were .834 and .708, respectively.
Items 24, 25, 41, and 43 loaded under the sixth factor, labeled as "Pedagogical Content Knowledge". The CR and AVE indices were .872 and .794, respectively.
Items 8, 9, 21, and 23 loaded under the seventh factor, labeled as "Seeing". The CR and AVE indices were .849 and .764, respectively.
Items 32, 33, 35, and 48 loaded under the eighth factor, labeled as "Analyzing". The CR and AVE indices were .840 and .753, respectively.
Items 2, 3, 36, and 37 loaded under the ninth factor, labeled as "Self-Determined Learning". The CR and AVE indices were .837 and .750, respectively.
Items 1, 7, 20, and 71 loaded under the tenth factor, labeled as "Doing". The CR and AVE indices were .835 and .747, respectively.
Items 4, 17, 45, and 47 loaded under the eleventh factor, labeled as "Self-Regulation". The CR and AVE indices were .827 and .738, respectively.
Items 10, 16, 18, and 39 loaded under the twelfth factor, labeled as "Cybergogy". The CR and AVE indices were .810 and .718, respectively.
Finally, items 66, 68, 69, and 70 loaded under the thirteenth factor, labeled as "Pedagogical Knowledge". The CR and AVE indices were .791 and .697, respectively. Accordingly, all factor loadings enjoyed large effect sizes; i.e., => .50.
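For transparency, the sketch below applies the conventional Fornell-Larcker formulas for composite reliability (CR) and average variance extracted (AVE) to the standardized loadings of the first factor in Table 3; CR reproduces the reported .877, while AVE values can differ depending on the exact formula used.

```r
# CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
# AVE = mean of squared loadings (conventional definition)
cr_ave <- function(l) {
  cr  <- sum(l)^2 / (sum(l)^2 + sum(1 - l^2))
  ave <- mean(l^2)
  c(CR = cr, AVE = ave)
}

post_method <- c(.752, .748, .744, .735, .730, .709)  # Q59, Q61, Q64, Q56, Q14, Q60
round(cr_ave(post_method), 3)  # CR = .877
```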
Reliability and Validity of the Questionnaire
The focal goal of the contemporary research has been to explore the reliability and construct validity of the teachers’ integrated knowledge questionnaire (TIKQ) through Cronbach’s alpha reliability indices and structural equation modeling. It should be noted that the TIKQ had thirteen components measured through 58 items, as follows:
Pedagogical knowledge (items 49, 50, 51, and 52),
Pedagogical content knowledge (items 20, 21, 35, and 36),
Post method (items 12, 44, 45, 46, 47, and 48),
Cybergogy (items 8, 13, 15, and 33),
Technological level (items 22, 23, 24, 25, and 34),
Self-regulation (items 4, 14, 37, and 38),
Self-determined learning (items 2, 3, 31, and 32),
Analyzing (items 27, 28, 30, and 39),
Reflection (items 54, 55, 56, 57, and 58),
Recognizing (items 9, 10, 11, 26, and 29),
Doing (items 1, 5, 17, and 53),
Context (items 16, 40, 41, 42, and 43), and
Seeing (items 6, 7, 18, and 19).
Accordingly, it must be mentioned that both univariate and multivariate normality were retained.
Cronbach’s Alpha Reliability Indices
Based on the results of Table 4, Cronbach's alpha reliability indices for the TIKQ questionnaire and its 13 components are discussed. Accordingly, the whole questionnaire enjoyed a reliability index of .960. The reliability indices for the 13 components were as follows; Pedagogical knowledge (α = .804), Pedagogical content knowledge (α = .854), Post method (α = .869), Cybergogy (α = .806), Technological level (α = .849), Self-regulation (α = .822), Self-determined learning (α = .829), Analyzing (α = .813), Reflection (α = .832), Recognizing (α = .826), Doing (α = .822), Context (α = .849), and Seeing (α = .850).
Table 4 Cronbach’s Alpha Reliability Statistics | ||
Components | Cronbach's Alpha | N of Items |
Pedagogical knowledge | .804 | 4 |
Pedagogical content knowledge | .854 | 4 |
Post method | .869 | 6 |
Cybergogy | .806 | 4 |
Technological level | .849 | 5 |
Self-regulation | .822 | 4 |
Self-determined learning | .829 | 4 |
Analyzing | .813 | 4 |
Reflection | .832 | 5 |
Recognizing | .826 | 5 |
Doing | .822 | 4 |
Context | .849 | 5 |
Seeing | .850 | 4 |
TIKQ | .960 | 58 |
It can be concluded that the total reliability index for the TIKQ (.960) was "excellent," and the reliability indices for the 13 components were "good"; i.e., =>.80. Thus, the first research question was answered: the TIKQ and its 13 components enjoyed appropriate reliability indices.
Confirmatory Factor Analysis on TIKQ
Figure 5
Conceptual Diagram of Teachers’ Integrated Knowledge Questionnaire
Figure 5 displays the conceptual model of the TIKQ. The model included 58 indicators, items, or observed variables (blue squares), which measured 13 latent variables (yellow ovals), which in turn measured six higher-level latent variables (green ovals).
Figure 6
Teachers’ Integrated Knowledge Questionnaire (Standardized Regression Weights)
Figure 6 shows the same model; however, standardized regression coefficients (beta values) were computed for the one-headed and two-headed arrows. These standardized regression coefficients range between ±1 and can be interpreted similarly to Pearson correlations. That is to say, standardized regression coefficients equal to or lower than .10 are considered weak, coefficients higher than .30 and less than .50 are labeled as moderate, and finally, values equal to or higher than .50 are large. The LISREL software automatically prints non-significant paths in red.
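The model was fitted in LISREL; as an open-source illustration only, the lavaan sketch below specifies one branch of the higher-order structure (the "Knowing" factor and its three first-order components, using the item mapping listed in the previous subsection). The data frame `tikq` is a placeholder.

```r
library(lavaan)

model <- '
  # first-order latent variables
  PK  =~ Q49 + Q50 + Q51 + Q52               # Pedagogical knowledge
  PCK =~ Q20 + Q21 + Q35 + Q36               # Pedagogical content knowledge
  PM  =~ Q12 + Q44 + Q45 + Q46 + Q47 + Q48   # Post method

  # higher-order latent variable
  Knowing =~ PK + PCK + PM
'

fit <- cfa(model, data = tikq, std.lv = TRUE)
summary(fit, standardized = TRUE)  # standardized betas and t-values per path
```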
Table 5 Contributions of Latent Variables to Higher Order Latent Variables | |||
Latent Variables | Higher Level Latent Variables | Beta | t-value |
Pedagogical knowledge | Knowing | .81 | 13.82 |
Pedagogical content knowledge | .79 | 14.76 | |
Post method | .74 | 13.75 | |
Cybergogy | TPACK | .77 | 13.56 |
Technological level | .76 | 14.23 | |
Self-regulation | Heutagogy | .80 | 14.76 |
Self-determined learning | .79 | 14.49 | |
Analyzing | Reflective | .82 | 15.18 |
Reflection | .84 | 15.22 | |
Recognizing | Recognizing/Doing | .79 | 14.35 |
Doing | .76 | 16.03 | |
Context | Contextual/Seeing | .76 | 14.07 |
Seeing | .76 | 14.80 |
Table 5 shows the contributions of the 13 latent variables to their higher-level latent variables. From these outcomes, it can be concluded that:
A: Pedagogical knowledge (beta = .81, t = 13.82), Pedagogical content knowledge (beta = .79, t = 14.76), and Post method (beta = .74, t = 13.75) had large and significant contributions to the higher-level latent variable of "Knowing."
B: Cybergogy (beta = .77, t = 13.56) and Technological level (beta = .76, t = 14.23) had large and significant contributions to the higher-level latent variable of "TPACK."
C: Self-regulation (beta = .80, t = 14.76) and Self-determined learning (beta = .79, t = 14.49) had large and significant contributions to the higher-level latent variable of "Heutagogy."
D: Analyzing (beta = .82, t = 15.18) and reflection (beta = .84, t = 15.22) had large and significant contributions to the higher-level latent variable of "Reflective."
E: Recognizing (beta = .79, t = 14.35) and Doing (beta = .76, t = 16.03) had large and significant contributions to the higher-level latent variable of "Recognizing/Doing"; and finally,
F: Context (beta = .76, t = 14.07) and Seeing (beta = .76, t = 14.80) had large and significant contributions to the higher-level latent variable of "Contextual/Seeing."
Figure 7
Teachers’ Integrated Knowledge Questionnaire (t-values)
Figure 7 shows the model in t-values.
Table 6 Relationships between Higher Order Latent Variables (beta, with t-values in parentheses) |
 | Knowing | TPACK | Heutagogy | Reflective | Recognize/Doing |
TPACK | .75 (17.48) | | | | |
Heutagogy | .69 (16.33) | .99 (26.77) | | | |
Reflective | .67 (15.85) | .96 (26.87) | .82 (21.61) | | |
Recognize/Doing | .73 (17.17) | 1.05 (27.23) | .90 (22.96) | 1.04 (32.54) | |
Context/Seeing | .69 (15.36) | 1.02 (26.01) | .86 (21.02) | 1.01 (30.68) | .85 (19.75) |
Table 6 shows the beta and t-values for the two-headed arrows estimating relationships between the higher-order latent variables. Since all relationships were large (i.e., =>.50) and significant (i.e., t-value > 1.96), the null hypothesis that "there were not any significant correlations among the components of the teachers’ integrated knowledge questionnaire" was rejected. Regarding the individual correlations among the latent variables, it can be concluded that:
A: Knowing had large and significant relationships with TPACK (beta = .75, t = 17.48), Heutagogy (beta = .69, t = 16.33), Reflective (beta = .67, t = 15.85), Recognizing/Doing (beta = .73, t = 17.17), and Context/Seeing (beta = .69, t = 15.36).
B: TPACK had large and significant relationships with Heutagogy (beta = .99, t = 26.77), Reflective (beta = .96, t = 26.87), Recognizing/Doing (beta = 1.05, t = 27.23), and Context/Seeing (beta = 1.02, t = 26.01).
C: Heutagogy had large and significant relationships with Reflective (beta = .82, t = 21.61), Recognizing/Doing (beta = .90, t = 22.96), and Context/Seeing (beta = .86, t = 21.02).
D: Reflective had large and significant relationships with Recognizing/Doing (beta = 1.04, t = 32.54), and Context/Seeing (beta = 1.01, t = 30.68); and finally,
E: Recognizing/Doing had large and significant relationships with Context/Seeing (beta = .85, t = 19.75).
Table 7 Model Fit Indices of Teachers’ Integrated Knowledge Questionnaire |
Fit Indices | Labels | Statistic | DF | P-Value | Criterion | Conclusion |
Absolute | Χ2 | 1379.43 | 1567 | .999 | >.05 | Good Fit |
 | Χ2 Ratio | .88 | | | <=3 | Good Fit |
 | SRMR | .026 | | | <=.10 | Good Fit |
 | RMSEA | .000 | | | <=.05 | Good Fit |
 | 90% CI for RMSEA | [.000, .080] | | | <=.05 | Good Fit |
 | PCLOSE | 1.00 | | | =>.05 | Good Fit |
 | GFI | .91 | | | =>.90 | Good Fit |
Incremental | RFI | .98 | | | =>.90 | Good Fit |
 | CFI | 1.00 | | | =>.90 | Good Fit |
 | NFI | .98 | | | =>.90 | Good Fit |
 | IFI | 1.00 | | | =>.90 | Good Fit |
Sampling Adequacy | Critical N | 556.10 | | | =>200 | Adequate |
Table 7 shows the model fit indices employed to investigate the second research question, on the fit of the trait structure. The results are described below.
The chi-square badness-of-fit statistic of 1379.43 at 1567 degrees of freedom was non-significant, i.e., p > .05. Its ratio over the degrees of freedom, i.e., 1379.43/1567 = .88, was lower than 3. These results reinforced the fit of the model. Next, the root mean square error of approximation (RMSEA) value of .000 and its 90% CI were lower than .05. These results further supported the model.
The probability of close fit (PCLOSE) of 1.00 was higher than .05, the standardized root mean square residual (SRMR) of .026 was lower than .05, and the goodness of fit index (GFI) value of .91 was higher than .90. These results all supported the fit of the model. All of the incremental fit indices were higher than the criterion of .90; i.e., the relative fit index (RFI = .98), comparative fit index (CFI = 1.00), normed fit index (NFI = .98), and incremental fit index (IFI = 1.00) all reinforced the fit of the model. Lastly, the critical N value of 556.10 was higher than 200, indicating that the present sample size was sufficient for running confirmatory factor analysis.
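Given a fitted lavaan model such as `fit` from the earlier sketch, most of the indices in Table 7 can be extracted in one call; note this is an illustration, since the study reports LISREL output (lavaan labels PCLOSE as rmsea.pvalue and the critical N at the .05 level as cn_05).

```r
library(lavaan)

fitMeasures(fit, c("chisq", "df", "pvalue",
                   "rmsea", "rmsea.ci.lower", "rmsea.ci.upper", "rmsea.pvalue",
                   "srmr", "gfi", "cfi", "nfi", "ifi", "rfi", "cn_05"))
```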
DISCUSSION
In the current study, an effective tool was developed to assess EFL teachers’ knowledge and to fill the corresponding gaps in the research literature. The researchers intended to propose a model of teachers’ integrated knowledge, to test its fit by constructing a questionnaire, and finally to propose a new definition of teachers’ knowledge.
The endeavor resulted in an integrated knowledge questionnaire comprising 13 elements: pedagogical knowledge (items 49, 50, 51, and 52), content knowledge (items 20, 21, 35, and 36), post-method principles (items 12, 44, 45, 46, 47, and 48), cybergogy (items 8, 13, 15, and 33), technological level (items 22, 23, 24, 25, and 34), self-regulation (items 4, 14, 37, and 38), self-determined learning (items 2, 3, 31, and 32), analyzing ability (items 27, 28, 30, and 39), reflection (items 54, 55, 56, 57, and 58), recognizing ability (items 9, 10, 11, 26, and 29), doing (items 1, 5, 17, and 53), context (items 16, 40, 41, 42, and 43), and seeing (items 6, 7, 18, and 19). No other study has developed a comparable instrument, which makes the current study a pioneer in this respect.
The first higher-order latent variable, the knowledge module, emerged from clustering items from three latent variables: pedagogical knowledge, pedagogical content knowledge, and post-method knowledge. The second higher-order latent variable, TPACK, emerged from clustering cybergogy and technology-level items. The third higher-order latent variable is heutagogy, with self-regulation and self-determined learning as its underlying latent variables. Reflection and analyzing are the latent variables of the fourth higher-order latent variable, reflective analyzing, while doing and recognizing are the latent variables of the fifth, recognizing doing. The sixth, contextual seeing, emerged from the integration of context items in TPACK-XL and seeing items in KARDS. The items extracted from the context component of TPACK-XL correlated with the seeing items in KARDS; the extracted reflection items correlated with analyzing in KARDS; the recognizing and doing items correlated within the KARDS model; and the higher-order heutagogy factor correlated with self-regulation in the KARDS model. Therefore, the developed scale is an effective tool for assessing the different facets of knowledge among EFL teachers.
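For readers who wish to replicate this trait structure, the sketch below specifies the higher-order CFA in Python with the semopy library. The item-to-factor assignments follow the lists reported above; the data file name, the item column labels (item1 to item58), and the choice of semopy (rather than the software used in the study) are assumptions made for illustration.

```python
# A minimal higher-order CFA sketch with semopy, assuming a DataFrame
# whose columns item1..item58 hold the questionnaire responses.
# Item-to-factor assignments follow the paper; other names are illustrative.
import pandas as pd
import semopy

DESC = """
PedagogicalKnowledge =~ item49 + item50 + item51 + item52
ContentKnowledge =~ item20 + item21 + item35 + item36
PostMethod =~ item12 + item44 + item45 + item46 + item47 + item48
Cybergogy =~ item8 + item13 + item15 + item33
TechnologicalLevel =~ item22 + item23 + item24 + item25 + item34
SelfRegulation =~ item4 + item14 + item37 + item38
SelfDetermined =~ item2 + item3 + item31 + item32
Analyzing =~ item27 + item28 + item30 + item39
Reflection =~ item54 + item55 + item56 + item57 + item58
Recognizing =~ item9 + item10 + item11 + item26 + item29
Doing =~ item1 + item5 + item17 + item53
Context =~ item16 + item40 + item41 + item42 + item43
Seeing =~ item6 + item7 + item18 + item19

Knowing =~ PedagogicalKnowledge + ContentKnowledge + PostMethod
TPACK =~ Cybergogy + TechnologicalLevel
Heutagogy =~ SelfRegulation + SelfDetermined
ReflectiveAnalyzing =~ Reflection + Analyzing
RecognizingDoing =~ Recognizing + Doing
ContextualSeeing =~ Context + Seeing
"""

df = pd.read_csv("tikq_responses.csv")  # hypothetical response file
model = semopy.Model(DESC)
model.fit(df)
print(semopy.calc_stats(model).T)  # chi-square, RMSEA, CFI, GFI, NFI, ...
```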
The researcher aimed to address existing gaps in the literature by proposing a model of teachers’ integrated knowledge and constructing a questionnaire to test its adequacy. Additionally, the study sought to redefine teachers’ knowledge.
Discussion on Factor One: Knowing Pedagogies
The first higher-order latent variable, referred to as the knowledge module, was derived from clustering items related to pedagogical knowledge, pedagogical content knowledge, and post-method knowledge. This suggests that these three categories of knowledge are interconnected and form a cohesive unit within the overall construct. The factor highlights the significance of integrating different forms of knowledge to improve teaching effectiveness and efficiency, and it implies that educators should consider how different types of knowledge can be combined to enhance their pedagogical practices. It assesses teachers’ understanding and application of effective teaching strategies and methods. In line with these results, Shulman (1986) highlighted the importance of pedagogical content knowledge in effective teaching.
Discussion on Factor Two: Knowing TPACK
The second higher-order latent variable, TPACK, emerged from clustering items related to cybergogy and technology-level knowledge. This indicates that the two aspects are closely related and can be considered part of a broader understanding of technological knowledge integration in teaching practices. The TPACK factor emphasizes that teachers need both cybergogy knowledge and technology-level knowledge to integrate technology effectively into their pedagogical practices. In line with these results, Schmidt et al. (2009) developed and validated an assessment instrument to measure preservice teachers’ TPACK.
Discussion on Factor Three: Knowing Heutagogy
The third higher-order latent variable, heutagogy, is composed of the underlying latent variables of self-regulation and self-determined learning, suggesting that these two aspects play a significant role in shaping heutagogical practices in teaching and learning contexts. Heutagogy is learner-centric, self-determined learning grounded in humanistic theory and supported by technology-centered learning design (Blaschke, 2012).
By measuring these factors, the TIKQ provides valuable insights into teachers’ understanding of heutagogy and their proficiency in nurturing self-regulation and self-determined learning within their teaching practices. This knowledge can guide professional development programs and support teachers in developing the skills and strategies needed to promote heutagogical approaches effectively. The results are in line with other studies that have highlighted the positive impact of self-regulation on various aspects of education, including academic achievement and motivation (Zimmerman, 2008).
Discussion on Factor Four: Reflective Analyzing
The fourth higher-order latent variable, reflective analyzing, is composed of latent variables related to reflection and analysis. This indicates that reflective practices and the ability to critically analyze teaching experiences are intertwined and represent important elements of professional growth. Dialogic reflection is essential to analyzing, given the synergy between reflection and the analyzing module in KARDS. Dialogic reflection occurs when instructors contemplate their practices through conversation with others (Mann & Walsh, 2017); it also denotes a more intensive approach that embraces dialogue with the self to explore a given circumstance or event. Analyzing consists of scrutinizing students’ (a) needs, (b) wants, and (c) situations (Kumaravadivelu, 2012). The new construct frames the analyzing module through dialogic reflection.
The inclusion of the reflective analyzing factor in the TIKQ highlights the importance of reflection and critical analysis in the professional growth of educators. By incorporating this factor, educators and researchers can gain valuable information about teachers’ engagement in reflective practices and their ability to critically analyze teaching experiences. The results of the study are in line with previous research that has highlighted the positive impact of reflective practices and critical analysis on teachers’ professional growth (Jones & Brown, 2019; Smith, 2018).
Discussion on Factor Five: Recognizing and Doing
The fifth latent variable, recognizing doing, consists of latent variables related to recognition and action, suggesting that recognizing potential opportunities and taking appropriate actions are interconnected and relevant for effective teaching practices. Recognizing and doing were considered jointly, following Kumaravadivelu (2012), and the inclusion of this factor in the TIKQ acknowledges the interconnection between recognition and action in teaching.
The results of the study conducted using the TIKQ questionnaire align with existing research that emphasizes the importance of recognizing and doing in effective teaching. Previous studies have shown that teachers who possess the ability to recognize teaching opportunities and take appropriate actions exhibit increased instructional effectiveness and student engagement (Brown & Johnson, 2016; Lee & Smith, 2018).
Discussion on Factor Six: Contextual Seeing
The sixth latent variable, contextual seeing, emerged from combining context items in TPACK-XL with seeing items in KARDS. The correlation between the context and seeing items underscores the importance of understanding contextual factors and their influence on teaching practices. The seeing module is viewed from the researchers’ perspective (Kumaravadivelu, 2012), while context is represented as X in the TPACK-XL model; the two elements displayed similar backgrounds in the study. The results are in line with previous research that has highlighted the importance of understanding contextual factors and their influence on teaching practices, and several studies have emphasized the significance of considering context in effective teaching (Brown & Davis, 2018; Johnson & Smith, 2017).
Overall, these components shed light on the complex nature of teachers’ knowledge, highlighting the interrelationships between its different aspects and their role in shaping effective teaching practices. The mainstream teacher education literature holds that knowledge of technology is one of the characteristics of professional teachers. These results reinforce the idea that traditional content knowledge, originally defined by Shulman (1987) to account for teachers’ knowledge of a particular subject matter, should be revisited when applied to teacher educators.
The foregoing describes an effort to devise a questionnaire that statistically measures EFL teachers’ integrated knowledge. A new model of teachers’ integrated knowledge in a novel context is expected to broaden academics’ understanding of this construct, its main components, and its effects on the academic community. The absence of any comparable questionnaire makes this a pioneering effort in the area.
CONCLUSION
This study first explored the reliability indices of the 13 components of the proposed questionnaire; secondly, it examined the fit of the questionnaire’s trait structure; and thirdly, it explored the correlations among the questionnaire’s components. Based on the contributions of the 13 latent variables to their higher-order latent variables, it was concluded that pedagogical knowledge, pedagogical content knowledge, and post-method knowledge made large and significant contributions to the higher-order latent variable of "Knowing"; cybergogy and technological level contributed significantly to the higher-order latent variable of TPACK; and self-regulation and self-determined learning contributed significantly to the higher-order latent variable of Heutagogy.
Further, analyzing and reflecting made large and significant contributions to the higher-order latent variable of reflection; recognizing and doing made large and significant contributions to the higher-order latent variable of "Recognizing Doing"; and, finally, context and seeing made large and significant contributions to the higher-order latent variable of "Contextual Seeing."
All relationships were large and significant. Regarding the individual correlations among the latent variables, it can be concluded that Knowing had large and significant relationships with TPACK, Heutagogy, Reflection, Recognizing/Doing, and Context/Seeing; TPACK with Heutagogy, Reflection, Recognizing/Doing, and Context/Seeing; Heutagogy with Reflection, Recognizing/Doing, and Context/Seeing; Reflection with Recognizing/Doing and Context/Seeing; and, finally, Recognizing/Doing with Context/Seeing.
The study can be extended to explore variables such as gender, local context, and educational degree within the demographic information, and it can be replicated to scrutinize the same variables in settings other than EFL. Further research is required to analyze the degree of knowledge assessment and the quality of integration among EFL instructors. It would also be of great interest for future researchers to investigate ESP teachers’ perceptions of the proposed questionnaire and compare the results to determine similarities and differences.
The findings recommend that language education policymakers propose more technology-integrated designs of language teaching to syllabus designers and educational organizations to build integrated knowledge among teachers and learners, which could raise their awareness of language teaching in general and of pedagogical content knowledge in particular. The implications of the current research might assist EFL teachers in Iran in applying standards of integrative knowledge and in moving from pedagogy toward heutagogy and cybergogy for better knowledge assessment. Within the scope of this research, the relatively small sample size, gender balance, participants’ experience, and time frame were regarded as limitations. By conducting further research in this area, researchers can contribute to the ongoing development of knowledge in teaching and learning and provide valuable insights for practitioners and policymakers.
REFERENCES
Ahmadian, M. J., Ketabi, S., & Brown, C. M. (2020). The language assessment knowledge questionnaire (LAKQ): Assessing language teachers’ assessment literacy. Language Testing in Asia, 10(1), 1-21.
Angeli, C., & Valanides, N. (2009). Epistemological and methodological issues for conceptualizing, developing, and assessing ICT-TPACK: Advances in technological pedagogical content knowledge (TPACK). Computer & Education, 52(1), 154-168. https://doi.org/10.1016/j.compedu.2008.07.006
Aydın, E. & Mıhladız Turhan, G. (2023). Exploring primary school teachers’ pedagogical content knowledge in science classes based on PCK model. Journal of Pedagogical Research, 7(3), 70-99. https://doi.org/10.33902/JPR.202318964
Blömeke, S., Jentsch, A., Ross, N., Kaiser, G., & König, J. (2022). Opening up the black box: Teacher competence, instructional quality, and students’ learning progress. Learning and Instruction, 79(1), 101600. https://doi.org/10.1016/j.learninstruc.2022.101600
Ball, D. L., & Bass, H. (2000). Interweaving content and pedagogy in teaching and learning to teach: Knowing and using mathematics. In J. Boaler (Ed.), Multiple perspectives on teaching and learning mathematics (pp. 83–104). Westport, CT: Ablex.
Ball, D.L., Thames, M.H., & Phelps, G. (2008). Content knowledge for teaching: What makes it special? Journal of Teacher Education, 59(5), 389-407.
Bae, J., & Bachman, L. F. (2010). An investigation of four writing traits and two tasks across two languages. Language Testing, 27(2), 213-234. https://doi.org/10.1177/0265532209349470
Baran, E., & Uygun, E. (2016). Putting technological, pedagogical, and content (TPACK) in action: An integrated TPACK-design-based learning (DBL) approach. Australasian Journal of Educational Technology, 32(2), 47-63. https://doi.org/10.14742/ajet.2551
Blaschke, L. M. (2012). Heutagogy and lifelong learning: A review of heutagogical practice and self-determined learning. The International Review of Research in Open and Distance Learning, 13(1), 56-71. https://doi.org/10.19173/irrodl.v13i1.1076
Baser, D., Kopcha, T. J., & Ozden, M. Y. (2016). Developing a technological pedagogical content knowledge (TPACK) assessment for preservice teachers learning to teach English as a foreign language. Computer Assisted Language Learning, 29 (4), 749-764. https://doi.org/10.1080/09588221.2015.1047456
Bostancıoğlu, A., & Handley, Z. (2018). Developing and validating a questionnaire for evaluating the EFL' Total PACKage': Technological Pedagogical Content Knowledge (TPACK) for English as a Foreign Language (EFL). Computer Assisted Language Learning, 31 (5-6), 572-598. https://doi.org/10.1080/09588221.2017.1422524
Brown, E., & Davis, L. (2018). Understanding the influence of context on teachers’ beliefs and practices. Teaching and Teacher Education, 73, 145-155.
Brown, E., & Johnson, S. (2016). The impact of teacher actions on student learning. Journal of Educational Psychology, 108(3), 345-360.
Carrier, S. I., & Moulds, L. D. (2003). Pedagogy, andragogy, and cybergogy: Exploring best-practice paradigm for online teaching and learning. Sloan-C 9th International Conference on Asynchronous Learning Networks (ALN), USA.
Chadha, N. K. (2009). Applied Psychometry. Sage Publications.
Chapnick, S., & Meloy, J. (2005). From andragogy to heutagogy: Renaissance e-learning: Creating dramatic and unconventional learning experiences. Essential resources for training and HR professionals. John Wiley and Sons.
Chen, P. S., & Hsu, Y. C. (2020). Developing and validating a technology pedagogical and content knowledge questionnaire for language teachers. Assessing Writing, 45, 1-17
Duong, P. N., Voordeckers, W., Huybrechts, J., & Lambrechts, F. (2022). On external knowledge sources and innovation performance: Family versus non-family firms. Technovation, 114, 102448. https://doi.org/10.1016/j.technovation.2021.10244
Faramarzi, S., Heidari Tabrizi, H., & Chalak, A. (2019). Telegram: an instant messaging application to assist distance language learning. Teaching English with Technology, 12(1), 1263-1280. https://doi.org/10.29333/iji.2019.12181a
Fennema, E., & Franke, M. (1992). Teachers’ knowledge and its impact. In D. A. Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp.147-164). Simon & Schuster Macmillan.
George, D., & Mallery, P. (2020). IBM SPSS statistics 26 step by step: A simple guide and reference. Routledge.
Goldberg, M. (2012). Arts integration: Teaching subject matter through the arts in multicultural settings (4th ed.). Pearson.
Graham, C. R., Burgoyne, N., Cantrell, P., Smith, L., St. Clair, L., & Harris, R. (2009). TPACK development in science teaching: Measuring the TPACK confidence of in-service science teachers. TechTrends, 53(5), 70-79.
Guerriero, S. (2014). Teachers’ pedagogical knowledge and the teaching profession: Background report and project Objectives. OECD.
Hase, S., & Kanyon, C. (2013). Self-determined learning: Heutagogy in action. Bloomsbury.
Hassani, V., Khatib, M., & Yazdani Moghaddam, M. (2019). An investigation of teachers’ perceptions of KARDS in an EFL context. International Journal of Foreign Language Teaching and Research, 7(28), 135-153.
Hill, H. C., Ball, D. L., & Schilling, S. G. (2008). Unpacking pedagogical content knowledge: Conceptualizing and measuring teachers’ topic-specific knowledge of students. Journal for Research in Mathematics Education, 39(4), 372-400.
Hofer, M. & Harris, J. (2010). Differentiating TPACK development: Using learning activity types with in-service and preservice teachers. In C. D. Maddux, D. Gibson, & B. Dodge (Eds.). Research highlights in technology and teacher education (pp. 295-302). Chesapeake, VA: Society for Information Technology and Teacher Education (SITE)
Hudson, B., & Zgaga, P. (2017). History, context, and overview: Implications for teacher education policy, practice, and future research. In B. Hudson (Ed.), Overcoming fragmentation in teacher education policy and practice (pp. 1–26). Cambridge University Press.
Jones, B., & Brown, C. (2019). Exploring the relationship between critical analysis and teacher effectiveness. Teaching and Learning Studies, 18(3), 245-260.
Johnson, S., & Smith, M. (2017). Contextual factors in instructional decision-making: Comparing urban and rural teachers. Urban Education, 52(8), 983-1009.
Kaiser, G., & König, J. (2019). Evaluating the outcome of teacher education at different points in time. Journal of Teacher Education, 70(4), 393-406.
Kazutoshi, T., & Ever, M. B. (1999). Ergonagy: Its Relation to Andragogy. Paper presented at the Annual Meeting of the Comparative and International Education Society. Toronto: Canada (April 14-18). (ERIC Document Reproduction Service No. ED438464).
Kirschner, F., Paas, F., & Kester, L. (2018). Thematic review of studies in the field of teachers' practical knowledge. Educational Research Review, 23, 75-89.
König, J., Hanke, P., Glutsch, N., et al. (2022). Teachers’ professional knowledge for teaching early literacy: Conceptualization, measurement, and validation. Educational Assessment, Evaluation and Accountability, 34, 483-507. https://doi.org/10.1007/s11092-022-09393-z
Korkmaz, S., Goksuluk, D., & Zararsiz, G. (2019). MVN: An R package for assessing multivariate normality. Department of Biostatistics, Hacettepe University, Ankara, Turkey.
Kumaravadivelu, B. (2012). Language teacher education for a global society. Taylor & Francis.
Kurt, G., & Atay, D. (2021). Development and validation of a Content and Language Integrated Learning (CLIL) knowledge questionnaire for language teachers. Language Teaching Research, 25(2), 249-273.
Kyriazos, T. A., & Stalikas, A. (2018). Applied psychometrics: The steps of scale development and standardization process. Psychology, 9, 2531-2560. https://doi.org/10.4236/psych.2018.911145
Lehmann, T. (2020). International perspectives on knowledge integration: Theory, research, and good practice in preservice teacher and higher education. Brill Sense. https://doi.org/10.1163/9789004429499.
Lim, P. S., Din. W. A., Nik Mohamed, N. Z., & Swanto, S. (2022). Development and validation of a survey questionnaire assessing technological pedagogical content knowledge and E-Learning acceptance for Malaysian English teachers. International Journal of Education, Psychology and Counseling, 7 (48), 206-220. doi: 10.35631/IJEPC.748015
Liu, S. Y., & Shulman, L. S. (2007). Validating measures of teachers’ knowledge of statistics: The use of and interpretation of reliability coefficients. Journal of Educational Measurement, 44(1), 47-62.
Lee, S., Kim, H., & Park, S. (2017). Evaluating teachers’ integrated pedagogical content knowledge: Development and validation of a questionnaire. Journal of Education and Training Studies, 5(1), 47-65.
Lee, J., & Smith, M. K. (2018). Teacher responsiveness and student engagement: A multilevel analysis. Journal of Educational Research, 111(4), 501-514.
Lee, J., & Turner, J. E. (2017). Extensive knowledge integration strategies in preservice teachers: The role of perceived instrumentality, motivation, and self-regulation. Educational Studies, 5(44), 505-520. https://doi.org/10.1080/03055698.2017.1382327
Lewin, A. Y., Massini, S., & Peeters, C. (2011). Microfoundations of internal and external absorptive capacity routines. Organization Science, 22(1), 1343-1371.
Li, S., Liu, Y., & Su, Y. (2022). Differential analysis of teachers’ technological pedagogical content knowledge (TPACK) abilities according to teaching stages and educational levels. Sustainability, 14(12), 1-15. https://doi.org/10.3390/su14127176
Lieberei, T., Welter, VDE., Großmann, L., & Krell, M. (2023). Findings from the expert-novice paradigm on differential response behavior among multiple-choice items of a pedagogical content knowledge test – implications for test development. Front. Psychol, 14, 1-17. doi: 10.3389/fpsyg.2023.1240120
Nafiyan, A. A. (2020). Language teachers’ perceptions of continuing professional development (CPD) opportunities: Development and validation of a questionnaire. Teaching English with Technology, 20(1), 85-103.
Nguyen, L. A. T., & Habók, A. (2023). Tools for assessing teacher digital literacy: A review. Journal of Computers in Education, 1-42.
Onyefulu, C. and Abayomi, W.O. (2023). Student teachers’ knowledge and conceptions of classroom assessment at a university in Jamaica. Open Access Library Journal, 10, 1-19. doi: 10.4236/oalib.1110640.
Park, S., Choe, Y., & Johnson, J. F. (2020). Teach for America teachers’ pathway to common instructional language for literacy teaching. Teachers College Record, 122(6), 1-43
Rehhali, M., Mazouak, A., & Belaaouad, S. (2022). The Digital assessment of learning: Current situation and perspectives: Case of teachers of life and earth sciences. Journal of Information Technology Management, 14(3), 65-78. doi: 10.22059/jitm.2022.87534
Sahin, I. (2011). Development of survey of technological pedagogical and content knowledge (TPACK). Turkish Online Journal of Educational Technology,10(1), 97-105.
Schmidt, D. A., Baran, E., Thompson, A. D., Mishra, P., Koehler, M. J., & Shin, T. S. (2009). Technological pedagogical content knowledge (TPACK): The development and validation of an assessment instrument for preservice teachers. Journal of Research on Technology in Education, 42(2), 123-149.
Sherin, M. G., & van Es, E. A. (2002). Learning to notice: Scaffolding new teachers’ interpretations of classroom interactions. Journal of Technology and Teacher Education, 10(4), 571-596.
Sherin, M. G. (2007). The development of teachers’ professional vision in video clubs. In Video research in the learning sciences (pp. 383-395). Erlbaum.
Shulman, L. S. (1986). Paradigms and research programs in the study of teaching: A contemporary perspective. In M. Wittrock (Ed.), Handbook of research on teaching (3rd ed.). Macmillan.
Shulman, L. S. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57(1), 1-22.
Smith, A. (2018). The impact of reflective practices on teacher development. Journal of Education, 42(2), 123-145.
Steele, M. D., & Hillen, A. F. (2012). The content-focused methods course: A model for integrating pedagogy and mathematics content. Mathematics Teacher Educator, 1(1), 53-70. https://doi.org/10.5951/mathteaceduc.1.1.0053
Stoten, D.W. (2022). Navigating heutagogic learning: mapping the learning journey in management education through the OEPA model. Journal of Research in Innovative Teaching & Learning, 15 (1), 83-97. https://doi.org/10.1108/JRIT-07-2020-0038
Timur, B., & Tasar, M. F. (2011). In-service science teachers’ technological pedagogical content knowledge confidences and views about technology-rich environments. CEPS Journal, 1(4), 11-25. doi: 10.25656/01:6057
Van Driel, J. H., & Berry, A. (2012). Teacher professional development focusing on pedagogical content knowledge. Educational Researcher, 41(1), 26-28. https://doi.org/10.3102/0013189X11431010
Vongkulluksn, K., Xie, K., & Bowman, M. (2018). Revisit, redefine, and embrace the technology pedagogical content knowledge (TPACK) framework: Examining theoretical and empirical advancements. Australasian Journal of Educational Technology, 34(1), 1-14.
Yılmaz, E. (2020). Integrated knowledge: A comprehensive framework for understanding and enhancing the quality of education. Educational Sciences: Theory and Practice, 20(4), 1-19.
You, J. (2020). Technological pedagogical content knowledge for computer-assisted language learning: Validating a questionnaire for English as a foreign language teacher. Computers & Education, 157, 1-15.
Zhu, X., Raquel, M., & Aryadoust, V. (2019). Structural equation modeling to predict performance in English proficiency tests. In Quantitative Data Analysis for Language Assessment Volume II (pp. 101-126). Routledge.
Biodata
Farzaneh Rahimzadeh is a Ph.D. candidate in the field of TEFL (Teaching English as a Foreign Language) at the Islamic Azad University, Science and Research Branch of Tehran, Iran. She has been teaching general English courses at different universities in Iran. Her main areas of interest include teacher education and assessment.
Email: rahimzadeh2017farzaneh@gmail.com
Ahmad Mohseni is an associate professor at the Islamic Azad University, South Tehran Branch. He has been teaching TEFL/TESL for 42 years at the undergraduate and postgraduate levels. He has carried out a number of research projects, is the author of six books, and has published several scholarly essays in national and international academic journals. He has also participated in a number of national and international conferences and seminars. He is interested in teaching courses such as methods of writing research papers, teaching language skills, essay writing, and ESP at the BA, MA, and Ph.D. levels. He has been an invited professor at American Global University, College of Education, in the state of Wyoming, USA. Currently, he is the dean of the Faculty of Persian Literature and Foreign Languages, IAU, South Tehran Branch.
Email: amohseny1328@gmail.com
Ghafour Rezaei Golandouz is assistant professor of Teaching English as a Foreign Language (TEFL) at Islamic Azad University, Garmsar Branch, Garmsar, Iran. His main research interests include cognitive aspects of second language acquisition, research in applied linguistics, and language assessment. He has published a number of research articles.
Email: rezaie434@gmail.com