Development and Validation of the EFL Teachers’ Stroke Questionnaire
Subject Areas: Journal of Language, Culture, and Translation
Shabnam Kadkhodaei 1, Mohammad Reza Talebinejad 2, Mohsen Shahrokhi 3*
1 - Ph.D. Candidate, Department of English Language, Shahreza Branch, Islamic Azad University, Shahreza, Iran
2 - Associate Professor, Department of English Language, Shahreza Branch, Islamic Azad University, Shahreza, Iran
3 - Associate Professor, Department of English Language, Shahreza Branch, Islamic Azad University, Shahreza, Iran
Keywords: teacher stroke, scale development, construct validity, EFL teachers
Abstract:
Classroom interpersonal recognition, commonly termed “stroke”, is theorized to shape student motivation, engagement, and well-being, yet existing research has relied predominantly on learner-reported measures and lacks a rigorously developed teacher self-report instrument. To address this gap, the present study developed and validated an EFL Teachers’ Stroke Questionnaire. The study's objectives were to generate a theory-driven item pool, establish content validity through expert review, identify the instrument’s latent structure via exploratory factor analysis (EFA), confirm that structure with confirmatory factor analysis (CFA) in an independent sample, and evaluate reliability and convergent/discriminant validity. Using a multi-stage scale-development design, items were drafted from Transactional Analysis and the stroke literature, reviewed by subject-matter experts, cognitively piloted with practicing EFL teachers, and administered to stratified samples for EFA and CFA (N = 124). EFA suggested a coherent four-factor solution accounting for roughly 59% of the variance; CFA yielded acceptable-to-good fit (CFI = .955, TLI = .947, RMSEA = .046, SRMR = .044) with standardized loadings of .45–.82. Subscales demonstrated satisfactory internal consistency (αs = .79–.90; ωs = .80–.91), and convergent and discriminant validity were supported by AVE/CR indices, cross-loadings, the Fornell–Larcker criterion, and HTMT ratios. The Teachers’ Stroke Questionnaire thus provides a psychometrically sound, multidimensional, teacher-centered measure of stroking behaviors that can support research, teacher professional development, and intervention evaluation in EFL contexts.
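The abstract reports a four-factor EFA solution, and the reference list points to Horn's (1965) parallel analysis as a common factor-retention criterion. The following is a minimal, purely illustrative sketch of how such a retention check might be run in Python with numpy/pandas; it is not the authors' analysis code, and the variable name `items` is a hypothetical placeholder for the item-response data.

```python
# Illustrative parallel analysis (Horn, 1965) for deciding how many factors to retain.
# Assumes `items` is a pandas DataFrame of Likert-type item responses
# (rows = teachers, columns = questionnaire items); the name is hypothetical.
import numpy as np
import pandas as pd

def parallel_analysis(items: pd.DataFrame, n_iter: int = 1000,
                      percentile: float = 95, seed: int = 42) -> int:
    """Count the factors whose observed eigenvalues exceed the chosen
    percentile of eigenvalues obtained from random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, k = items.shape

    # Eigenvalues of the observed item correlation matrix (descending order).
    obs_eig = np.linalg.eigvalsh(np.corrcoef(items.to_numpy(), rowvar=False))[::-1]

    # Eigenvalues of correlation matrices computed from random normal data.
    rand_eig = np.empty((n_iter, k))
    for i in range(n_iter):
        sim = rng.standard_normal((n, k))
        rand_eig[i] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    threshold = np.percentile(rand_eig, percentile, axis=0)

    # Retain factors whose observed eigenvalue beats the random benchmark.
    return int(np.sum(obs_eig > threshold))

# Example: n_factors = parallel_analysis(items)  # would return 4 for a four-factor structure
```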
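The convergent and discriminant validity evidence cited in the abstract (composite reliability, AVE, the Fornell–Larcker criterion, and HTMT ratios) rests on standard formulas computed from standardized loadings and item correlations. The sketch below shows one way these indices might be computed, under the assumption of standardized CFA loadings; the loading values and function names are illustrative, not taken from the study.

```python
# Illustrative computation of CR, AVE, Fornell–Larcker, and HTMT (Henseler et al., 2015)
# from standardized loadings and an item correlation matrix. Values are placeholders.
import numpy as np
import pandas as pd

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) for the standardized loadings of one factor."""
    s = loadings.sum()
    return float(s**2 / (s**2 + (1 - loadings**2).sum()))

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: mean of the squared standardized loadings."""
    return float((loadings**2).mean())

def fornell_larcker_ok(ave_i: float, ave_j: float, phi_ij: float) -> bool:
    """Fornell–Larcker: √AVE of each construct should exceed their correlation."""
    return bool(np.sqrt(ave_i) > abs(phi_ij) and np.sqrt(ave_j) > abs(phi_ij))

def htmt(item_corr: pd.DataFrame, items_i: list, items_j: list) -> float:
    """HTMT: mean heterotrait correlation divided by the geometric mean
    of the two constructs' mean monotrait (within-construct) correlations."""
    hetero = item_corr.loc[items_i, items_j].to_numpy().mean()
    def mono(items):
        block = item_corr.loc[items, items].to_numpy()
        upper = block[np.triu_indices_from(block, k=1)]
        return upper.mean()
    return float(hetero / np.sqrt(mono(items_i) * mono(items_j)))

# Hypothetical loadings for one subscale; real values would come from the fitted CFA.
lam = np.array([0.62, 0.71, 0.78, 0.55, 0.82])
print(f"CR = {composite_reliability(lam):.2f}, AVE = {ave(lam):.2f}")
```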
References:
Berne, E. (2011). Games people play: The basic handbook of transactional analysis. Tantor eBooks.
Boateng, G. O., Neilands, T. B., Frongillo, E. A., Melgar-Quiñonez, H. R., & Young, S. L. (2018). Best practices for developing and validating scales for health, social, and behavioral research: A primer. Frontiers in Public Health, 6, 149. https://doi.org/10.3389/fpubh.2018.00149
Brown, T. A. (2015). Confirmatory factor analysis for applied research (2nd ed.). Guilford Press.
Cheung, G. W. (2024). Reporting reliability, convergent and discriminant validity: Best practices and tools. Journal of Management Studies.
Cheung, G. W., Cooper-Thomas, H. D., Lau, R. S., & Wang, L. C. (2023). Reporting reliability, convergent and discriminant validity with structural equation modeling: A review and best-practice recommendations. Asia Pacific Journal of Management, 41(2), 745–783. https://doi.org/10.1007/s10490-023-09871-y
Chin, W. W. (1998). The partial least squares approach to structural equation modeling. In G. A. Marcoulides (Ed.), Modern methods for business research (pp. 295–336). Lawrence Erlbaum.
Costello, A. B., & Osborne, J. W. (2005). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research, and Evaluation, 10(1), Article 7. https://doi.org/10.7275/jyj1-4868
de Ayala, R. J. (2009). The theory and practice of item response theory. Guilford Press.
DeVellis, R. F., & Thorpe, C. T. (2021). Scale development: Theory and applications. Sage.
Epskamp, S., Borsboom, D., & Fried, E. I. (2018). Estimating psychological networks and their accuracy: A tutorial paper. Behavior Research Methods, 50(1), 195–212. https://doi.org/10.3758/s13428-017-0862-1
Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4(3), 272–299. https://doi.org/10.1037/1082-989X.4.3.272
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50. https://doi.org/10.2307/3151312
Gao, Y. (2021). Toward the role of language teacher confirmation and stroke in EFL/ESL students’ motivation and academic engagement: A theoretical review. Frontiers in Psychology, 12, Article 723432. https://doi.org/10.3389/fpsyg.2021.723432
Gefen, D., & Straub, D. W. (2005). A practical guide to factorial validity using PLS-Graph: Tutorial and annotated example. Communications of the Association for Information Systems, 16(1), 91–109. https://doi.org/10.17705/1CAIS.01605
Goretzko, D., Siemund, K., & Sterner, P. (2024). Evaluating model fit of measurement models in confirmatory factor analysis. Educational and Psychological Measurement. https://doi.org/10.1177/00131644231163813
Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis (7th ed.). Pearson.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487
Hayton, J. C., Allen, D. G., & Scarpello, V. (2004). Factor retention decisions in exploratory factor analysis: A tutorial on parallel analysis. Organizational Research Methods, 7(2), 191–205. https://doi.org/10.1177/1094428104263675
Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115–135. https://doi.org/10.1007/s11747-014-0403-8
Holland, P. W., & Thayer, D. T. (1988). Differential item performance and the Mantel-Haenszel procedure. In H. Wainer & H. Braun (Eds.), Test validity (pp. 129–145). Erlbaum.
Horn, J. L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30(2), 179–185. https://doi.org/10.1007/BF02289447
Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55. https://doi.org/10.1080/10705519909540118
Koo, T. K., & Li, M. Y. (2016). A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of Chiropractic Medicine, 15(2), 155–163. https://doi.org/10.1016/j.jcm.2016.02.012
Li, C.-H. (2016). Confirmatory factor analysis with ordinal data: Comparing robust maximum likelihood and diagonally weighted least squares. Behavior Research Methods, 48(3), 936–949. https://doi.org/10.3758/s13428-015-0619-7
Little, R. J. A. (1988). A test of missing completely at random for multivariate data with missing values. Journal of the American Statistical Association, 83(404), 1198–1202. https://doi.org/10.1080/01621459.1988.10478722
Little, R. J. A., & Rubin, D. B. (1987). Multiple imputation for nonresponse in surveys. Wiley. https://doi.org/10.1002/9780470316696
Lynn, M. R. (1986). Determination and quantification of content validity. Nursing Research, 35(6), 382–385.
MacCallum, R. C., Widaman, K. F., Zhang, S., & Hong, S. (1999). Sample size in factor analysis. Psychological Methods, 4(1), 84–99. https://doi.org/10.1037/1082-989X.4.1.84
McNeish, D. (2018). Thanks coefficient alpha, we’ll take it from here. Psychological Methods, 23(3), 412–433. https://doi.org/10.1037/met0000144
Mundfrom, D. J., Shaw, D. G., & Ke, T. L. (2005). Minimum sample size recommendations for conducting factor analyses. International Journal of Testing, 5(2), 159–168. https://doi.org/10.1207/s15327574ijt0502_4
O’Connor, B. P. (2000). SPSS and SAS programs for determining the number of components using parallel analysis and Velicer’s MAP test. Behavior Research Methods, Instruments, & Computers, 32(3), 396–402. https://doi.org/10.3758/BF03200807
Pishghadam, R., & Khajavy, G. H. (2014). Development and validation of the Student Stroke Scale and examining its relation with academic motivation. Studies in Educational Evaluation, 43, 109–114. https://doi.org/10.1016/j.stueduc.2014.03.004
Pishghadam, R., Derakhshan, A., Jajarmi, H., Tabatabaee Farani, S., & Shayesteh, S. (2021). Examining the role of teachers’ stroking behaviors in EFL learners’ active/passive motivation and teacher success. Frontiers in Psychology, 12, Article 707314. https://doi.org/10.3389/fpsyg.2021.707314
Polit, D. F., & Beck, C. T. (2006). The content validity index: Are you sure you know what's being reported? Research in Nursing & Health, 29(5), 489–497. https://doi.org/10.1002/nur.20147
Putnick, D. L., & Bornstein, M. H. (2016). Measurement invariance conventions and reporting: The state of the art and future directions for psychological research. Developmental Review, 41, 71–90. https://doi.org/10.1016/j.dr.2016.06.004
Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2), 1–36. https://doi.org/10.18637/jss.v048.i02
Samejima, F. (1969). Estimation of latent ability using a response pattern of graded scores (Psychometrika Monograph No. 17). Psychometric Society.
Satorra, A., & Bentler, P. M. (1994). Corrections to test statistics and standard errors in covariance structure analysis. In A. von Eye & C. C. Clogg (Eds.), Latent variables analysis: Applications for developmental research (pp. 399–419). Sage.
Singh, R. K., Neuert, C. E., & Raykov, T. (2024). Assessing conceptual comparability of single-item survey instruments with a mixed-methods approach. Quality & Quantity, 58, 3303–3329. https://doi.org/10.1007/s11135-023-01801-w
Song, Z. (2021). Teacher stroke as a positive interpersonal behavior on EFL learners’ success and enthusiasm: A review. Frontiers in Psychology, 12, Article 761658. https://doi.org/10.3389/fpsyg.2021.761658
Stewart, I., & Joines, V. (1987). TA today: A new introduction to transactional analysis. Lifespace.
Tabachnick, B. G., & Fidell, L. S. (2013). Using multivariate statistics (6th ed.). Pearson.
Willis, G. B. (2004). Cognitive interviewing: A tool for improving questionnaire design. Sage.
Worthington, R. L., & Whittaker, T. A. (2006). Scale development research: A content analysis and recommendations for best practices. The Counseling Psychologist, 34(6), 806–838. https://doi.org/10.1177/0011000006288127
Xia, Y., & Yang, Y. (2019). RMSEA, CFI, and TLI in structural equation modeling with ordered categorical data: The story they tell depends on the estimation methods. Behavior Research Methods, 51(1), 409–428. https://doi.org/10.3758/s13428-018-1055-2
Yuan, L. (2022). Enhancing Chinese EFL students’ grit: The impact of teacher stroke and teacher–student rapport. Frontiers in Psychology, 12, Article 823280. https://doi.org/10.3389/fpsyg.2021.823280