REFERENCES
Al Ruheili, H., & Al-Saidi, A. (2015). Students’ perceptions on the effectiveness of using Edmodo in EFL classes. Journal of Global Academic Institute, 1(1), 23-33.
Alhawiti, M. M. F. (2017). The effect of virtual classes on the student’s English achievement in Tabuk Community College. International Journal of Learning, Teaching and Educational Research, 16(5), 17-25.
Al-Qahtani, M. H. (2019). Teachers’ and students’ perceptions of virtual classes and the effectiveness of virtual classes in enhancing communication skills. Arab World English Journal, Special Issue: The Dynamics of EFL in Saudi Arabia, 7, 223-240.
Al-Said, K. (2015). Students’ perceptions of Edmodo and mobile learning and their real barriers towards them. The Turkish Online Journal of Educational Technology, 14(2), 167-180.
Andyani, H., Setyosari, P., Wiyono, B. B., & Djatmika, E. T. (2020). Does technological pedagogical content knowledge impact on the use of ICT in pedagogy? International Journal of Emerging Technologies in Learning (iJET), 15(3), 126-139.
Atkinson, D. (2011). Alternative approaches to second language acquisition. Taylor & Francis.
Atkinson, J. M., & Heritage, J. (1984). Structures of social action. Cambridge University Press.
Babbie, E. R. (2010). The practice of social research. (12th ed.). Belmont, CA: Wadsworth Cengage.
Baker, D. A., Burns, D. M., & Reynolds-Kueny, C. (2020). Just sit back and watch: Large disparities between video and face-to-face interview observers in applicant ratings. International Journal of Human–Computer Interaction, 36(4), 1–12.
Balasubramanian, K., Jaykumar V., & Fukey L. N. (2014). A study on “student preference towards the use of Edmodo as a learning platform to create responsible learning environment.” Procedia - Social and Behavioral Sciences, 144(2014), 416-422.
Baltes, B. B., Dickson, M. W., Sherman, M. P., Bauer, C. C., & LaGanke, J. S. (2002). Computer-mediated communication and group decision making: A meta-analysis. Organizational Behavior and Human Decision Processes, 87(1), 156–179.
Baralt, M. (2013). The impact of cognitive complexity on feedback efficacy during online versus face-to-face interactive tasks. Studies in Second Language Acquisition, 35(4), 689-725.
Bataineh, R. F., & Mayyas, M. B. (2017). The utility of blended learning in EFL reading and grammar: A case for Moodle. Teaching English with Technology, 17(3), 35-49
Bicen, H. (2014). The role of social learning networks in mobile-assisted language learning: Edmodo as a case study. Journal of Universal Computer Science, 21(10), 1297-1306.
Binder, J. F., Cebula, K., Metwally, S., Vernon, M., Atkin, C., & Mitra, S. (2019). Conversational engagement and mobile technology use. Computers in Human Behavior, 99, 66–75.
Bøhn, H. (2015). Assessing spoken EFL without a common rating scale: Norwegian EFL teachers’ conceptions of construct. SAGE Open, 5(4), 27-36.
Borger, L. (2019). Assessing interactional skills in a paired speaking test: Raters’ interpretation of the construct. Apples - Journal of Applied Language Studies, 13(1), 151-174.
Brooks, L. (2009). Interacting in pairs in a test of oral proficiency: Co-constructing a better performance. Language Testing, 26, 341-366.
Brouwer, C. E., & Wagner, J. (2004). Developmental issues in second language conversation. Journal of Applied Linguistics, 1(1), 29-47.
Creswell, J. W. (2009). Educational research: Planning, conducting, and evaluating quantitative and qualitative research. Upper Saddle River, NJ: Pearson.
Dayag, J. (2018). Reaching out: Facilitating EFL learning through Edmodo. International Journal of Advanced Multidisciplinary Scientific Research, 1(2), 1-7.
Dimitrova-Galaczi, E. (2004). Peer-peer interaction in a paired speaking test: The case of the First Certificate in English. (Doctoral dissertation), Columbia University.
Ducasse, A. M., & Brown, A. (2009). Assessing paired orals: Raters' orientation to interaction. Language Testing, 26(3), 423-443.
Duncan, S. (1972). Some signals and rules for taking speaking turns in conversation. Journal of Personality and Social Psychology, 23(2), 283–292.
Emrani, F., & Hooshmand, M. (2019). A conversation analysis of self-initiated self-repair structures in advanced Iranian EFL learners. International Journal of Language Studies, 13(1), 57-76.
Galaczi, E. D., & Taylor, L. B. (2020). Measuring interactional competence. In P. Winke & T. Brunfaut (Eds.), The Routledge handbook of second language acquisition and language testing (pp. 338-348). Routledge.
Galaczi, E., & Taylor, L. (2018). Interactional competence: Conceptualisations, operationalisations, and outstanding questions. Language Assessment Quarterly, 15(3), 219-236.
Garrison, D. R. (2017). E-learning in the 21st century: A community of inquiry framework for research and practice. Taylor & Francis.
Garrod, S., & Pickering, M. J. (2004). Why is conversation so easy? Trends in Cognitive Sciences, 8(1), 8-11.
Geiger, I. (2020). From letter to Twitter: A systematic review of communication media in negotiation. Group Decision and Negotiation, 29(3), 207–250.
Hamouda, A. (2020). The effect of virtual classes on Saudi EFL students’ speaking skills. International Journal of Linguistics, Literature and Translation, 3(4), 175–204
Herrera Mosquera, L. (2017). Impact of implementing a virtual learning environment (VLE) in the EFL classroom. Íkala, revista de lenguaje y cultura, 22(3), 479-498.
Jungheim, N. O. (2001). The unspoken element of communicative competence: Evaluating language learners’ nonverbal behavior. In T. Hudson & J. D. Brown (Eds.), A focus on language test development: Expanding the language proficiency construct across a variety of tests (pp. 1-34). University of Hawai'i, Second Language Teaching and Curriculum Center.
Kear, K. (2007). Communication aspects of virtual learning environments: Perspectives of early adopters. Retrieved from: http://oro.open.ac.uk/8623/1/KearCommunicationAspects2.pdf
Kern, R., Ware, P., & Warschauer, M. (2004). Crossing frontiers: New directions in online pedagogy and research. Annual Review of Applied Linguistics, 24, 243–260.
Kim, Y. (2009). The effects of task complexity on learner-learner interaction. System, 37(2), 254-268.
Knapp, M. L., & Hall, J. A. (2002). Nonverbal communication in human interaction. Crawfordsville, IN: Thomson Learning.
Kukulska-Hulme, A., Pettit, J., Bradley, L., Carvalho, A. A., Herrington, A., Kennedy, D. M., & Walker, A. (2011). Mature students using mobile devices in life and learning. International Journal of Mobile and Blended Learning, 3(1), 18-52.
Lee, J., & Choi, H. (2017). What affects learner’s higher-order thinking in technology-enhanced learning environments? The effects of learner factors. Computers & Education, 115, 143–152.
Levinson, S. C., & Holler, J. (2014). The origin of human multi-modal communication. Philosophical Transactions of the Royal Society, 369(1651), 20130302.
Liu, L. (2022). Interactional features of Chinese EFL learners' discourse in a paired speaking test: Implications for L2 teaching and testing (Doctoral dissertation), Macquarie University.
Mackey, A., & Gass, S. M. (2006). Second language research: Methodology and design. Routledge.
Martin, F., Stamper, B., & Flowers, C. (2020). Examining student perception of their readiness for online learning: Importance and confidence. Online Learning, 24(2), 38-58.
Mathew, N. G., Sreehari, P., & Al-Rubaat, A. M. (2019). Challenges and implications of virtual e-learning platform in EFL context: Perceptions of teachers. International Journal of English Language Teaching, 7(2), 100–116.
May, L. (2011). Interactional competence in a paired speaking test: Features salient to raters. Language Assessment Quarterly, 8(2), 127–145.
McCarthy, M. (2002). Good listenership made plain: British and American non-minimal response tokens in everyday conversation. In R. Reppen, S.M. Fitzmaurice, & D. Biber (Eds.), Using corpora to explore linguistic variation (pp. 49–72). Philadelphia: John Benjamins.
Mensah, R. O., Quansah, C., Oteng, B., & Nii Akai Nettey, J. (2023). Assessing the effect of information and communication technology usage on high school student’s academic performance in a developing country. Cogent Education, 10(1), 1–16.
Meredith, J. (2020). Conversation analysis, cyberpsychology and online interaction. Social and Personality Psychology Compass, 14, Article e12529.
Milis, K., Wessa, P., Poelmans, S., Doom, C., & Bloemen, E. (2008). The impact of gender on the acceptance of virtual learning environments. KU Leuven Association.
Mondada, L. (2016). Challenges of multimodality: Language and the body in social interaction. Journal of Sociolinguistics, 20(3), 336–366.
Morales, M. C., & Lee, J. S. (2015). Stories of assessment: Spanish-English bilingual children's agency and interactional competence in oral language assessments. Linguistics and Education, 29, 32-45.
Muijs, D. (2010). Doing quantitative research in education with SPSS. (2nd ed.). London, England: Sage.
Mukasheva, M., Kornilov, I., Beisembayev, G., Soroko, N., Sarsimbayeva, S., & Omirzakova, A. (2023). Contextual structure as an approach to the study of virtual reality learning environment. Cogent Education, 10(1), 1–22.
Nayak, D. (2021). E-learning during COVID-19 and student’s satisfaction. Education, 4, 287–299.
Nisa, L. Z., Prameswari, T. N., & Alawiyah, Y. I. (2021). The effect of using small group discussions through zoom breakout room to increase the frequency of individual speaking participation in the speaking courses. Journal of Digital Learning and Education, 1(3), 109-117.
Nordquist, R. (2019). The cooperative principle in conversation. ThoughtCo.
O’Leary, R. (2004). Virtual learning environment. Retrieved from: https://www.alt.ac.uk/sites/default/files/assets_editor_uploads/documents/eln002pdf.
Pekarek Doehler, S., & Pochon-Berger, E. (2015). The development of L2 interactional competence: Evidence from turn-taking organization, sequence organization, repair organization and preference organization. In T. Cadierno & W. S. Eskildsen (Eds.), Usage-based perspectives on second language learning (pp. 233–268). De Gruyter.
Piccoli, G., Ahmad, R., & Ives, B. (2001). Web-Based Virtual Learning Environment: A research framework and a preliminary assessment of effectiveness in basic IT skills training. MIS Quarterly, 25(4), 401–426.
Raman, A., & Rathakrishnan, M. (2018). FROG VLE: Teachers’ technology acceptance using utaut model. International Journal of Mechanical Engineering and Technology, 9(3), 529–538.
Ramirez, A., Jr., & Burgoon, J. K. (2004). The effect of interactivity on initial interactions: The influence of information valence and modality and information richness on computer-mediated interaction. Communication Monographs, 71(4), 422–447.
Rashid, A. H. A., Shukor, N. A., Tasir, Z., & Na, K. S. (2021). Teachers’ perceptions and readiness toward the implementation of virtual learning environment. International Journal of Evaluation and Research in Education (IJERE), 10(1), 209–214
Rusk, F., & Pörn, M. (2019). Delay in L2 interaction in video-mediated environments in the context of virtual tandem language learning. Linguistics and Education, 50, 56–70.
Schegloff, E. A. (1986). The routine as achievement. Human Studies, 9(2-3), 111-151.
Schegloff, E. A., Jefferson, G., & Sacks, H. (1977). The preference for self-correction in the organization of repair in conversation. Language, 53(2), 361-382.
Seuren, L. M., Wherton, J., Greenhalgh, T., & Shaw, S. E. (2021). Whose turn is it anyway? Latency and the organization of turn-taking in video-mediated interaction. Journal of Pragmatics, 172(4), 63–78.
Syukur, H. (2016). Building up students’ speaking achievement through jigsaw technique. Jurnal Adabiyah, 16(2), 122-137.
Tuzlukova, V., Al Busaidi, S., Coombe, C. & Stojkovic N. (2016). Research on technology-based language education in the Sultanate of Oman: Perspectives for student skills’ enhancement. Journal of Teaching English for Specific and Academic Purposes, 4(1), 1-8.
Varonis, E. M., & Gass, S. M. (1985). Miscommunication in native/nonnative conversation. Language in society, 14(3), 327-343.
Wahjono, H., Wiyono, B. B., Maisyaroh, & Mustiningsih. (2021). Development of blended-learning-based semester credit system implementation model to improve learning service. Information, 12(12), 511. https://doi.org/10.3390/info12120511
Wang, L. (2015). Assessing interactional competence in second language paired speaking tasks. (Doctoral Dissertation), Northern Arizona University.
Waring, H. Z. (2018). Teaching L2 interactional competence: Problems and possibilities. Classroom Discourse, 9(1), 57-67.
Warner, R. (2013). Personal and professional skills of TESOL practitioners of the future. In P. Davidson, M. Al Hamly, C. Coombe, S. Troudi & C. Gunn (Eds.), Proceedings of the 18th TESOL Arabia conference: Achieving Excellence through Life Skills Education (pp. 22-28). Dubai: TESOL Arabia.
Wells, G. (2002). Learning and teaching for understanding: The key role of collaborative knowledge building. Advances in Research on Teaching, 9, 1-42.
Wilson, M., & Wilson, T. P. (2005). An oscillator model of the timing of turn-taking. Psychonomic Bulletin & Review, 12(6), 957–968.
Yildiz, L. M. (2011). English VG1 level oral examinations: how are they designed, conducted and assessed? (Master's thesis), Universitetet i Oslo.
Zhu, M. X., Yan, X. L., & Yuan, Q. J. (2018). A review of researches based on media richness theory in MIS discipline. Journal of Modern Information, 38(9), 146–154.
Research Paper
Journal of Language and Translation, Volume 15, Number 1, 2025, pp. 47-69
Iraqi EFL Learners’ Interactional Competence in Opinion Exchange and Jigsaw Tasks: Face-to-Face Versus Virtual Learning Contexts
Karwan Othman Azeez Zanganah1, Elahe Sadeghi Barzani2*, Parween Shawkat Kawther Qader3, Fatemeh Karimi4
1Ph. D. Candidate, Department of English Language, Isfahan (Khorasgan) Branch, Islamic Azad University, Isfahan, Iran
2Assistant Professor, Department of English Language, Isfahan (Khorasgan) Branch, Islamic Azad University, Isfahan, Iran
3Assistant Professor, Department of English Language, Salahaddin University, College of Education, Erbil, Kurdistan Region of Iraq
Parween.Kawther@su.edu.krd
4Assistant Professor, Department of English Language, Isfahan (Khorasgan) Branch, Islamic Azad University, Isfahan, Iran
Received: September 10, 2024 Accepted: November 16, 2024
Abstract
This study assessed EFL learners’ interactional competence in face-to-face and virtual learning environments using opinion exchange and jigsaw tasks. To this end, 40 elementary Iraqi EFL learners were selected through convenience sampling and assigned to two learning-environment groups (face-to-face vs. virtual) and two task groups (opinion exchange vs. jigsaw). Data from the Oxford Quick Placement Test, a storytelling jigsaw task, and an opinion-exchange task were analyzed using ANOVA and MANOVA. In the face-to-face environment, topic management and interactive listening were the most frequently used components in the opinion exchange task, and topic management was the most frequently used component in the jigsaw task; the jigsaw task elicited higher turn and topic management scores, whereas the opinion exchange task elicited more interactive listening. In the virtual environment, topic management was the most influential component in both tasks, and the opinion exchange task yielded higher scores in topic management, turn management, and interactive listening than the jigsaw task. Comparing the two environments, the virtual learning environment promoted topic management and interactive listening in the opinion exchange task, while the face-to-face environment promoted turn management in the opinion exchange task and topic management, turn management, and interactive listening in the jigsaw task.
Keywords: Interactional competence, virtual learning environment, face-to-face learning environment, opinion exchange, jigsaw
INTRODUCTION
The term interaction typically refers to the spoken interaction that takes place between two or more interlocutors, such as informal conversations and formal interviews. Interaction may be face-to-face when the participants share the same physical environment, or it may be mediated by technology in telephone or online contexts. Whatever the medium, such interaction is dynamic and co-constructed by those involved, not necessarily in a linear or predictable manner. It is reciprocal, and participants are both pro-active and re-active, simultaneously processing input as listeners and constructing their own output as speakers while drawing upon a wide range of linguistic and paralinguistic knowledge and skills (Galaczi & Taylor, 2020). Interaction is thus strongly shaped by diverse cognitive and contextual factors. From a cognitive psychology perspective, human beings seem “designed for dialogue rather than monologue” (Garrod & Pickering, 2004, p. 8), and from a sociological perspective, interaction can be seen as the “primordial site of sociality” (Schegloff, 1986, p. 112). Not surprisingly, acquiring competence in spoken interaction, that is, interactional competence (IC), whether as a native (L1) or second/foreign language (L2) speaker, is a complex and lengthy process.
IC in L2 language education has assumed greater importance in recent decades as applied linguists’ understanding of the complex nature of speaking ability has developed. This has been reflected in communicative approaches to language teaching and learning. Speaking tests have consequently evolved to elicit and measure aspects of IC, such as in the introduction of paired/group formats beyond just the individual test interview and the use of video conferencing (Galaczi & Taylor, 2020).
In second language acquisition, a focus on interactional competence is worth pursuing for a number of reasons. First, it puts the notion of language use at the center of the language learning agenda. Brouwer and Wagner (2004) highlighted this alternative view, noting that an account of language learning cannot attend only to formal linguistic items, as in psycholinguistically oriented SLA work (Atkinson, 2011). We must recognize interactional skills and interactional resources at all points of the learning experience and study how L2 speakers construct their actions and make sense of their world as they participate in their L2 discourse community. Moreover, learners are no longer viewed as handicapped language users (Varonis & Gass, 1985) or as permanently inferior to their native speaker counterparts. On this view of language learning, learning to participate in an interactionally competent manner is not limited to second language speakers; it includes any speakers entering a new discursive practice of which they are not yet members. A discursive practice can be defined at a professional level, at a community level, or as any specialized form of talk.
LITERATURE REVIEW
Interactional Competence
Interactional competence is defined as the knowledge and ability constructed as a result of “interactional processes during interactive tasks such as negotiation of meaning, feedback, and production of modified output” (Kim, 2009, p. 255). It is “co-constructed by all participants in a discursive practice; participants recognize and respond to expectations of what to say and how to say it by drawing on various identity, linguistic, and interactional resources that they bring to the interaction" (Morales & Lee, 2015, p. 34).
Galaczi and Taylor (2018) defined the construct of IC as the ability to co-construct interaction in a purposeful and meaningful way. This ability can be supported by nonverbal or visual behaviors, such as eye contact, facial expression, laughter, and posture. However, the authors acknowledged that there should be space for more nonverbal micro-features to be added to nonverbal behavior. Thus, they called for empirical research to uncover more features underlying nonverbal communication. In addition to nonverbal features, Galaczi and Taylor also defined the construct of IC in terms of four verbal macro-level groups: turn management, topic management, breakdown repair, and interactive listening.
Turn management consists of starting, maintaining, pausing/latching/interrupting, and ending turns. Topic management is composed of initiating, extending, shifting, and closing topics. Breakdown repair includes self/other repair, recasts, and joint utterance creation. Interactive listening consists of back-channeling, comprehension checks, and continuers. Turn management is “a way of organizing conversation, where participants alternate, and one speaker speaks at a time” (Galaczi & Taylor, 2020, p. 340). Speakers use linguistic and non-linguistic cues to create turns that relate to preceding turns and to distribute turns to other speakers; silence between turns and overlapping talk are normally avoided.
Topic management entails knowing how to participate in conversations properly and being able to smoothly initiate, shift, and terminate a topic. Even for proficient speakers, these skills do not always come easily. Participants can use a variety of methods to signal to one another when topics are being initiated, shifted, or closed. Atkinson and Heritage (1984, p. 165) stated that “topic may well prove to be among the most complex conversational phenomena to be investigated and, correspondingly, the most recalcitrant to systematic analysis.” Yildiz (2011) found that teachers mentioned the ability to communicate and the ability to reflect on and discuss the topic independently as some of the most important criteria when assessing oral English exams. Similarly, Borger (2019) found topic development to be one of the three most salient criteria attended to by teachers in relation to the national speaking test mandatory for all Swedish students of English. Essentially, the ability to manage topics in interaction is required to display competence in both communication and content.
Repair is defined by Nordquist (2019, p. 1) as “the process by which a speaker recognizes a speech error and repeats what has been said with some sort of correction.” A linguistic repair is sometimes viewed as a type of dysfluency because it is characterized by hesitation and an editing term (e.g., “I mean”). In conversation, repair addresses recurrent problems in hearing, understanding, and speaking (Schegloff et al., 1977, p. 361). Repair is thus a linguistic phenomenon that is necessary for maintaining smooth and accurate communication. When the speaker and/or the recipient notices an error, one of them takes the initiative to repair it (Emrani & Hooshmand, 2019). Accordingly, repair can be classified as either self-repair or other-repair; that is, the speaker corrects himself or herself versus having someone else do it (Schegloff et al., 1977). In a Norwegian educational context, Bøhn (2015) found that teachers paid some attention to the use of compensatory strategies and less attention to the ability to repair when assessing L2 oral exams in upper secondary school. Making use of various strategies to identify potential sources of interactional trouble and knowing how to repair problems before they lead to a breakdown in communication is essential in managing interaction and can be indicative of a more advanced L2 speaker (Pekarek Doehler & Pochon-Berger, 2015).
In terms of interactive listening, listeners use verbal and non-verbal means to indicate that they are following the interaction. Verbal means include comprehension checks (e.g., “Exactly!”) and back-channeling (e.g., “Yeah”); non-verbal cues include gaze and nodding (Galaczi & Taylor, 2020). McCarthy (2002) coined the term listenership to describe the feedback provided by listeners. Although various terms are currently used to describe this process, ‘back-channel’ is the one most commonly used in the literature, particularly in conversation analysis. As observed by May (2011), raters tended to comment negatively when a listener responded minimally or irrelevantly to the interlocutor who was speaking. Features of interactive listening fall into two categories: signaling comprehension and supportive listening.
Finally, nonverbal behavior is defined as communication that is produced by some means other than words (Knapp & Hall, 2002). As a natural part of oral communication, nonverbal interaction features accompany and support the verbal interaction process (Ducasse & Brown, 2009). Ducasse and Brown’s (2009) nonverbal category included only gaze and hand gestures. Empirical research on features of nonverbal interaction related to language learning, however, has mainly focused on three features: head nods, gaze direction, and gesture (Jungheim, 2001).
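For ease of reference, the verbal macro-components and micro-features reviewed above, together with the nonverbal channel, can be collected into a simple data structure. The grouping below follows Galaczi and Taylor (2018) as summarized in this section; the Python representation itself is only an illustrative convenience and not part of the original framework.

```python
# Galaczi and Taylor's (2018) IC components as reviewed above, expressed as a
# plain dictionary mapping each macro-component to its micro-features.
IC_FRAMEWORK = {
    "turn_management": ["starting", "maintaining", "pausing/latching/interrupting", "ending"],
    "topic_management": ["initiating", "extending", "shifting", "closing"],
    "breakdown_repair": ["self/other repair", "recasts", "joint utterance creation"],
    "interactive_listening": ["back-channeling", "comprehension checks", "continuers"],
    "non_verbal_behaviour": ["eye contact", "facial expression", "laughter", "posture"],
}
```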
Wang (2015) examined interaction features in 35 paired performances of the four speaking tasks: spot-the-difference, story-completion, decision-making, and free discussion. The features in the hypothesized models of interaction features in speaking tasks were developed based on the two main categories of interaction from Ducasse and Brown’s (2009) study: interactive listening (IL) and interaction management (IM). IL comprises features of signaling comprehension (filling a silence, making comments, agreeing/disagreeing, and correcting a mistake) and features of signaling support (back-channeling and prompting). IM consists of features of topic management (initiation, development, and connection), turn-taking management (number of turns, turn interruption, and turn overlapping), and using questions (agreement, confirmation, opinion, information, and floor-offer). Wang found that when considering all of the four tasks together, test takers in the study used more features of IM (turn management, topic management, and using questions), but they had difficulties in using IL features (signaling comprehension and signaling support) to respond to their partner appropriately. Among the IM features, test takers used topic management and turn-taking management the most frequently. Test takers were able to initiate and develop topics more than they could connect topics. The discourse of test taker performances also suggested that while test takers used agreement questions and floor offer questions, they did not use questions to request information.
Virtual Learning Environment (VLE)
VLE has become a significant part of the 21st-century academic landscape. It is a trend that continues to gain popularity in academia, which is increasingly shaped by technology, blended learning, and students’ growing tendency to use their smartphones or tablets wherever they are and whenever they like (Kukulska-Hulme et al., 2011; Tuzlukova et al., 2016). VLE refers to a particular form of e-learning technology that uses networked computers to provide a range of functions to tutors, students, and other users (O’Leary, 2004).
Learning environments are typically defined with respect to three critical dimensions: time, place, and space. In terms of time, VLEs break free from traditional scheduling constraints; in terms of place, they transcend geographical limitations; and in terms of space, they grant access to a diverse range of resources (Mukasheva et al., 2023; Piccoli et al., 2001). VLEs have revolutionized education by enabling various teaching and learning strategies. They offer a form of learning that empowers teachers to engage learners from diverse backgrounds and provide interactive, personalized learning resources that support individual learning outcomes and knowledge management, regardless of time and space constraints (Martin et al., 2020). The integration of technology in education also brings added benefits, such as the ability to track student progress and to raise teachers’ and parents’ awareness of learning outcomes (Wahjono et al., 2021).
By integrating VLEs into education, the traditional learning process evolves from an individual effort into a dynamic, many-to-many interaction involving both learners and teachers (Rashid et al., 2021). This integration brings several valuable advantages, chief among them a significant improvement in communication efficiency within the educational ecosystem. VLEs facilitate more effective interactions between teachers and students and foster collaboration among students themselves (Milis et al., 2008; Raman & Rathakrishnan, 2018). VLEs have also proven their potential to deliver considerable learning outcomes by empowering learners with higher-order thinking and reasoning abilities, enabling them to address complicated and realistic problems (Lee & Choi, 2017; Nayak, 2021). In the long run, this can contribute to improved academic performance (Mensah et al., 2023). Additionally, the use of technology tends to improve learning efficacy, accountability, and transparency in the educational process (Andyani et al., 2020).
Studies suggest that VLE is perceived positively by students and that it has a positive effect on their educational experience (Al Ruheili & Al-Saidi, 2015; Dayag, 2018). VLE fosters interaction and collaboration among students (Kear, 2007; Dayag, 2018), as it offers a number of communication tools that facilitate effective communication and collaboration among the primary stakeholders of the academic community, students and educators alike (Warner, 2013). In several settings, VLE is perceived to have a positive impact on students’ communication, collaboration, and participation in the classroom (Balasubramanian et al., 2014).
In the context of EFL, studies suggest that VLE is perceived by students as a helpful tool that offers a safe learning environment that allows them to deepen their knowledge and enhance their communication skills beyond the confines of their classrooms (Bicen, 2014; Al-Said, 2015). VLE is also deemed useful to students in terms of providing timely opportunities for learning and providing supplementary instructional materials that enrich students’ learning experiences (Bataineh & Mayyas, 2017; Dayag, 2018).
Hamouda (2020) contends that virtual learning facilitates connections with native speakers beyond traditional learning methods. Additionally, VLE has played a crucial role in enhancing students’ perceived usefulness of English skills, including listening, speaking, reading, and writing. Alhawiti (2017) argues that VLE enables students to practice their language skills with ease. Furthermore, Hamouda (2020) justifies the use of VLE in EFL classrooms, as it has been shown to improve students’ speaking scores compared to traditional classrooms, especially with regard to grammar, vocabulary, pronunciation, comprehension, and fluency. Studies reported by Al-Qahtani (2019) and Mathew et al. (2019) likewise found that EFL teachers agreed that voice and text chat tools effectively encourage students to improve their communication skills. Similarly, Kern et al. (2004) support the idea that video chatting enhances and fosters more sophisticated output. In the same vein, Herrera Mosquera (2017) claims that VLE has brought about positive learning experiences by providing access to various tools and applications.
Considering the points discussed above about interactional competence, and given that its manifestation in virtual learning environments remains under-researched, the present study sought to answer the following questions.
RQ1. Which interactional competence components (topic management, turn management, non-verbal behavior, breakdown repair, and interactive listening) do Iraqi elementary EFL learners more frequently use in opinion exchange and jigsaw tasks in the face-to-face learning environment?
RQ2. Which interactional competence components (topic management, turn management, non-verbal behavior, breakdown repair, and interactive listening) do Iraqi elementary EFL learners more frequently use in opinion exchange and jigsaw tasks in the virtual learning environment?
RQ3. Is there any difference between the competence components employed by Iraqi elementary EFL learners in opinion exchange and jigsaw tasks used in face-to-face and virtual learning environments?
METHODOLOGY
Design of the Study
A quantitative descriptive and comparative design was adopted for this study. Muijs (2010, p. 1) defined quantitative research as “explaining phenomena by collecting numerical data that are analyzed using mathematically based methods, in particular statistics.” Quantitative research focuses on gathering numerical data and generalizing it across groups of people or to explain a particular phenomenon (Babbie, 2010; Muijs, 2010). Creswell (2009) stated that quantitative research “employs strategies of inquiry such as experiments and surveys and collects data on predetermined instruments that yield statistical data” (p. 18).
Participants
The participants comprised 40 elementary Iraqi EFL learners studying English at Kirkuk University, Iraq, whose ages ranged from 19 to 40. They were selected through convenience sampling, which allowed the researcher to draw on the participants who were available at the time of the study and to save time (Mackey & Gass, 2006). The participants were assigned to 20 pairs for the opinion exchange and jigsaw tasks. They were all native speakers of Arabic, and none had lived or studied in an English-speaking country.
Ten pairs (N=20) of learners were included in the face-to-face group and ten pairs (N=20) in the virtual learning environment group. In terms of task type, five pairs in each learning environment (face-to-face vs. virtual) completed each task (jigsaw vs. opinion exchange).
Instruments
Oxford Quick Placement Test (OQPT)
The OQPT was administered to select elementary and advanced EFL learners. It is a flexible measure of English language proficiency consisting of 60 multiple-choice items on vocabulary (30 items) and grammar (30 items). Learners scoring 0 to 10 are considered beginners, 11 to 17 breakthrough, 18 to 29 elementary, 30 to 39 pre-intermediate, 40 to 47 intermediate, 48 to 54 advanced, and 55 to 60 proficient (see Appendix A). The reliability of the test, estimated by Cronbach’s alpha, was .70, and its validity was confirmed by two TEFL university professors.
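The placement bands described above amount to a simple lookup from raw score to proficiency level. The sketch below is purely illustrative and assumes only the band boundaries quoted in the text; the function and variable names are hypothetical.

```python
# Illustrative mapping of OQPT raw scores (0-60) to the bands quoted above.
# Only learners falling in the 18-29 "elementary" band took part in this study.
OQPT_BANDS = [
    (0, 10, "beginner"),
    (11, 17, "breakthrough"),
    (18, 29, "elementary"),
    (30, 39, "pre-intermediate"),
    (40, 47, "intermediate"),
    (48, 54, "advanced"),
    (55, 60, "proficient"),
]

def oqpt_level(score: int) -> str:
    """Return the proficiency band for a raw OQPT score."""
    for low, high, label in OQPT_BANDS:
        if low <= score <= high:
            return label
    raise ValueError(f"OQPT score out of range: {score}")

assert oqpt_level(25) == "elementary"
```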
Storytelling Jigsaw Task
A picture story was presented to the participants in both the face-to-face and virtual learning environments. The story was about a man who sat on a freshly painted bench; because the paint was still wet, his coat got dirty, and he took it to the dry cleaner’s. The task was taken from Wang (2015) and has a closed outcome with convergent negotiation.
Opinion-Exchange Task
The participants in both the face-to-face and virtual learning environments discussed the pros and cons of using social networks and exchanged ideas on the topic. They did not need to reach an agreement.
Procedures
After administering the OQPT and selecting the participants, the researcher explained everything the learners needed to know before the tasks were administered, including the purpose of the tasks. In the face-to-face learning environment, each student was then asked to sit facing a partner. At the start of the voice recording, students were asked to state their names and the task they were doing, and the interlocutors were also told how much time they had before they needed to close the conversation.
For the storytelling task, half of the pictures were given to the first interlocutor and the other half to the second. The pair needed to unscramble the pictures and reconstruct the complete story within a maximum of ten minutes. For the opinion-exchange task, the same procedure was adopted: as the pairs discussed the topic, their voices were recorded for analysis, and the task again had to be finished within ten minutes. As mentioned in the Participants section, the participants were divided into two groups, i.e., face-to-face and virtual learning environments.
The participants in the face-to-face learning environment did the tasks in the classroom while sitting face-to-face, whereas the participants in the virtual learning environment group studied English on the Moodle platform. A Zoom meeting was added to Moodle (using the “add an activity or resource” option) to facilitate two-way interaction similar to that of a face-to-face environment. The task and the details of what to do were then added in the “description” section, which resembles a Microsoft Word environment for typing. The “participant video” and “audio options (VoIP only)” options were selected, the time limit was set with the “duration” option, and the researcher’s video was activated by turning on the “host video” option. While two students were interacting, the remaining learners were kept in the “waiting room” list. At the end of the time limit, the host selected the “save and return to course” option, and the list of saved interactions appeared lower on the same page. The same procedure was repeated for all pairs.
After data collection, the researcher transcribed all the conversations and extracted the features that the interlocutors used to maintain the conversation. Each interaction was transcribed verbatim, although nonverbal sounds and body language were not included. Mistakes such as mispronounced words or misused phrases were transcribed as produced. Following Galaczi and Taylor’s (2018) conceptualization of interactional competence, topic management was assessed through topic initiation, shift, extension, and closure, since maintaining a conversation requires interlocutors to know how to start, shift, and develop a topic as well as when to close it. Turn management was assessed in relation to its components: starting, maintaining, pausing, and ending. The analysis of breakdown repair checked whether the interlocutors could modify what they intended to say using repair strategies (e.g., recast and joint creation) when communication problems arose. Of the four sub-components of non-verbal behavior, only laughter was assessed, since the data consisted of audio recordings. Finally, interactive listening was assessed through back-channeling (e.g., “Yeah, that’s right”), comprehension checks (e.g., “You mean he?”), and continuers (e.g., “aha, uhum”). The data obtained from the tasks were analyzed by ANOVA and MANOVA.
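As a rough illustration of how the coded transcripts could be tallied, the sketch below counts occurrences of each interactional competence component in one pair’s transcript. The inline tag scheme (e.g., <TOPIC:init>, <IL:backchannel>) is hypothetical and not the authors’ actual coding format; the component categories mirror those listed above.

```python
# Hypothetical tally of IC features from a coded transcript. Coded turns are
# assumed to carry inline tags such as <TOPIC:init> or <IL:backchannel>; the
# tag vocabulary mirrors the sub-features named in the Procedures section.
import re
from collections import Counter

COMPONENT_OF = {
    "TOPIC": "topic_management",      # initiation, shift, extension, closure
    "TURN": "turn_management",        # starting, maintaining, pausing, ending
    "REPAIR": "breakdown_repair",     # recast, joint creation, self/other repair
    "NONVERBAL": "non_verbal",        # laughter only, since the data are audio
    "IL": "interactive_listening",    # back-channel, comprehension check, continuer
}

def score_pair(transcript: str) -> Counter:
    """Count how often each IC component is coded in one pair's transcript."""
    counts = Counter()
    for tag in re.findall(r"<([A-Z]+):[a-z_]+>", transcript):
        if tag in COMPONENT_OF:
            counts[COMPONENT_OF[tag]] += 1
    return counts

sample = "A: I think <TOPIC:init> social networks help us. B: yeah <IL:backchannel>"
print(score_pair(sample))  # Counter({'topic_management': 1, 'interactive_listening': 1})
```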
RESULTS
The first research question of the study sought to find the interactional competence components (topic management, turn management, non-verbal behavior, breakdown repair, and interactive listening) that Iraqi elementary EFL learners more frequently used in opinion exchange and jigsaw tasks in the face-to-face learning environment. Analysis of variance (ANOVA) was run to compare each group on the interactional competence components, and MANOVA was run to compare Iraqi elementary EFL learners’ performance in opinion exchange and jigsaw tasks.
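As a minimal sketch of how comparable analyses could be run, assuming a hypothetical data file with one row per pair and the column names used below, the two steps might look like this in Python. This is an approximation of the analysis plan, not the authors’ actual software pipeline; the within-group step is shown as a univariate repeated-measures ANOVA, whereas the tables that follow report multivariate statistics.

```python
# Sketch of the two analyses described above: (1) comparing the five IC
# components within one task group, and (2) comparing the two tasks on all
# five components at once (MANOVA with Pillai's trace, Wilks' lambda, etc.).
import pandas as pd
from statsmodels.multivariate.manova import MANOVA
from statsmodels.stats.anova import AnovaRM

# Hypothetical file: one row per pair with columns
# pair, task, topic, turn, nonverbal, repair, listening
df = pd.read_csv("ic_scores_face_to_face.csv")

# (1) Within the opinion-exchange group: repeated measures over components.
long = df.melt(id_vars=["pair", "task"],
               value_vars=["topic", "turn", "nonverbal", "repair", "listening"],
               var_name="component", value_name="score")
opinion = long[long["task"] == "opinion_exchange"]
print(AnovaRM(opinion, depvar="score", subject="pair",
              within=["component"]).fit())

# (2) Between tasks: one-way MANOVA on the five components.
mv = MANOVA.from_formula(
    "topic + turn + nonverbal + repair + listening ~ task", data=df)
print(mv.mv_test())
```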
Table 1
Descriptive Statistics of Interactional Competence for Elementary Level Opinion Exchange Task (Face-to-Face)
| Mean | SD | N |
Topic management | 25.2 | 5.57 | 10 |
Turn management | 9.5 | 3.77 | 10 |
Non-verbal behavior | .2 | .42 | 10 |
Breakdown repair | .6 | 1.07 | 10 |
Interactive listening | 20.1 | 5.42 | 10 |
As the above table shows, Iraqi elementary EFL learners obtained higher scores in topic management and interactive listening, respectively, while doing the opinion exchange task in the face-to-face learning environment.
Table 2
Multivariate Test of Interactional Competence for Elementary Level Opinion Exchange Task (Face-to-Face)
Effect | Value | F | Hypothesis df | Error df | Sig. | |
| Pillai’s Trace | .98 | 121.14 | 4 | 6 | .00 |
Wilks' Lambda | .01 | 121.14 | 4 | 6 | .00 | |
Hotelling’s Trace | 80.76 | 121.14 | 4 | 6 | .00 | |
Roy’s Largest Root | 80.76 | 121.14 | 4 | 6 | .00 |
The result of Wilks’ Lambda, F(4, 6) = 121.14, p = .00, indicates a statistically significant difference among the scores of the interactional competence components (Table 2). The pairwise comparison results (Table 3) show the components whose differences were significant.
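As a hedged consistency check (assuming the standard single-factor repeated-measures multivariate test, in which the four statistics reported above are equivalent), Wilks’ Lambda and the F value in Table 2 follow directly from Hotelling’s trace T = 80.76, with p = 4 component contrasts and N = 10 pairs:

```latex
\Lambda = \frac{1}{1+T} = \frac{1}{1+80.76} \approx .01, \qquad
F(p,\, N-p) = \frac{N-p}{p}\,T = \frac{6}{4}\times 80.76 \approx 121.14
```

With the appropriate degrees of freedom, the same identity reproduces the F values reported in the other multivariate tables as well, which suggests the reported statistics are internally consistent.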
Table 3
Pairwise Comparisons of Interactional Competence for Elementary Level Opinion Exchange Task (Face-to-Face)
(I) factor | (J) factor | Mean Difference (I-J) | Std. Error | Sig. |
Topic management | Turn management | 15.7* | 2.2 | .00 |
Non-verbal behavior | 25* | 1.7 | .00 | |
Breakdown repair | 24.6* | 1.51 | .00 |
*. The mean difference is significant at the .05 level.
The pairwise comparisons table reveals that the difference between the mean scores of interactional components was significant for the elementary group (p< .05). In other words, topic management and interactive listening were the two most influential components for Iraqi elementary EFL learners when doing the opinion exchange task in the face-to-face learning environment. The same analysis was repeated with the jigsaw task in the face-to-face learning environment.
Table 4
Descriptive Statistics of Interactional Competence for Elementary Level Jigsaw Task (Face-to-Face)
| Mean | SD | N |
Topic management | 31.3 | 3.4 | 10 |
Turn management | 19.4 | 3.5 | 10 |
Non-verbal behavior | .4 | .69 | 10 |
Breakdown repair | 1.3 | 1.82 | 10 |
Interactive listening | 7.9 | 3.63 | 10 |
Table 4 shows that Iraqi elementary EFL learners obtained higher scores in topic management and turn management, respectively, while doing the jigsaw task in the face-to-face learning environment.
Table 5
Multivariate Test of Interactional Competence for Elementary Level Jigsaw Task (Face-to-Face)
Effect | Value | F | Hypothesis df | Error df | Sig. | |
| Pillai’s Trace | .99 | 340.89 | 4 | 6 | .00 |
Wilks' Lambda | .004 | 340.89 | 4 | 6 | .00 | |
Hotelling’s Trace | 227.21 | 340.89 | 4 | 6 | .00 | |
Roy’s Largest Root | 227.21 | 340.89 | 4 | 6 | .00 |
The result of Wilks’ Lambda, F(4, 6) = 340.89, p = .00, indicates a statistically significant difference among the scores of the interactional competence components (Table 5). The pairwise comparison results (Table 6) show the components whose differences were significant.
Table 6
Pairwise Comparisons of Interactional Competence for Elementary Level Jigsaw Task (Face-to-Face)
(I) factor | (J) factor | Mean Difference (I-J) | Std. Error | Sig. |
Topic management | Turn management | 11.9* | .73 | .00 |
Non-verbal behavior | 30.9* | 1.14 | .00 | |
Breakdown repair | 30* | 1.39 | .00 | |
Interactive listening | 23.4* | 2.02 | .00 |
*. The mean difference is significant at the .05 level.
The pairwise comparisons table reveals that the difference between the mean scores of interactional components was significant for the elementary group in the jigsaw task (p< .05). In other words, topic management was the most influential component for Iraqi elementary EFL learners when doing the jigsaw task in the face-to-face learning environment. As stated above, MANOVA was run to compare Iraqi elementary EFL learners’ performance in opinion exchange and jigsaw tasks in the face-to-face learning environment.
Table 7
Multivariate Test of Interactional Competence for Elementary Level in both Tasks (Face-to-Face)
Effect | Value | F | Hypothesis df | Error df | Sig. | |
| Pillai’s Trace | .82 | 12.9 | 5 | 14 | .00 |
Wilks’ Lambda | .17 | 12.9 | 5 | 14 | .00 | |
Hotelling’s Trace | 4.61 | 12.9 | 5 | 14 | .00 | |
Roy’s Largest Root | 4.61 | 12.9 | 5 | 14 | .00 |
The result of Wilks’ Lambda, F(5, 14) = 12.9, p = .00, indicates a statistically significant difference among the scores of the interactional competence components (Table 7). The pairwise comparison results (Table 8) show the components whose differences were significant.
Table 8
Pairwise Comparisons of Interactional Competence for Elementary Level in both Tasks (Face-to-Face)
Dependent Variable | (I) group | (J) group | Mean Difference (I-J) | Std. Error | Sig. |
Topic management | jigsaw task | opinion-exchange task | 6.1* | 2.06 | .00 |
Turn management | jigsaw task | opinion-exchange task | 9.9* | 1.62 | .00 |
Interactive listening | opinion-exchange task | jigsaw task | 12.2* | 2.06 | .00 |
*. The mean difference is significant at the .05 level.
As shown in Table 8, the jigsaw task performance resulted in higher scores in turn and topic management components of interactional competence, while the opinion-exchange task elicited more interactive listening (p< .05).
The second research question of the present study was “Which interactional competence components (topic management, turn management, non-verbal behavior, breakdown repair, and interactive listening) do Iraqi elementary EFL learners more frequently use in opinion exchange and jigsaw tasks in the virtual learning environment?” Similar to the first research question, analysis of variance (ANOVA) was run to compare each group on the interactional competence components, and MANOVA was run to compare Iraqi elementary EFL learners’ performance in opinion exchange and jigsaw tasks.
Table 9
Descriptive Statistics of Interactional Competence for Elementary Level Opinion Exchange Task (Virtual)
| Mean | SD | N |
Topic management | 34.4 | 2.59 | 10 |
Turn management | 5.7 | 3.19 | 10 |
Non-verbal behavior | .2 | .42 | 10 |
Breakdown repair | 1.6 | 1.83 | 10 |
Interactive listening | 29.6 | 2.11 | 10 |
Table 9 shows that Iraqi elementary EFL learners obtained higher scores in topic management and interactive listening, respectively, while doing the opinion exchange task in the virtual learning environment.
Table 10
Multivariate Test of Interactional Competence for Elementary Level Opinion Exchange Task (Virtual)
Effect | Value | F | Hypothesis df | Error df | Sig. | |
| Pillai’s Trace | .99 | 1025.38 | 4 | 6 | .00 |
Wilks' Lambda | .01 | 1025.38 | 4 | 6 | .00 | |
Hotelling’s Trace | 683.59 | 1025.38 | 4 | 6 | .00 | |
Roy’s Largest Root | 683.59 | 1025.38 | 4 | 6 | .00 |
The result of Wilks’ Lambda, F(4, 6) = 1025.38, p = .00, indicates a statistically significant difference among the scores of the interactional competence components (Table 10). The pairwise comparison results (Table 11) show the components whose differences were significant.
Table 11
Pairwise Comparisons of Interactional Competence for Elementary Level Opinion Exchange Task (Virtual)
(I) factor | (J) factor | Mean Difference (I-J) | Std. Error | Sig. |
Topic management | Turn management | 28.7* | 1.09 | .00 |
Non-verbal behavior | 34.2* | .87 | .00 | |
Breakdown repair | 32.8* | 1.10 | .00 | |
Interactive listening | 4.8* | 1.22 | .00 |
*. The mean difference is significant at the .05 level.
The pairwise comparisons table reveals that the difference between the mean scores of interactional components was significant for the elementary group (p< .05). In other words, topic management was the most influential component for Iraqi elementary EFL learners when doing the opinion exchange task in the virtual learning environment. The same analysis was conducted with the jigsaw task in the virtual learning environment, whose findings are presented below.
Table 12
Descriptive Statistics of Interactional Competence for Elementary Level Jigsaw Task (Virtual)
| Mean | SD | N |
Topic management | 13.7 | 4.05 | 10 |
Turn management | 1.9 | 1.28 | 10 |
Non-verbal behavior | .3 | .48 | 10 |
Breakdown repair | .6 | .84 | 10 |
Interactive listening | 3.4 | 1.64 | 10 |
As shown in the above table, Iraqi elementary EFL learners obtained higher scores in topic management and interactive listening, respectively, while doing the jigsaw task in the virtual learning environment.
Table 13
Multivariate Test of Interactional Competence for Elementary Level Jigsaw Task (Virtual)
Effect | Value | F | Hypothesis df | Error df | Sig. | |
| Pillai’s Trace | .92 | 19.65 | 4 | 6 | .00 |
Wilks’ Lambda | .07 | 19.65 | 4 | 6 | .00 | |
Hotelling’s Trace | 13.1 | 19.65 | 4 | 6 | .00 | |
Roy’s Largest Root | 13.1 | 19.65 | 4 | 6 | .00 |
The result of Wilks’ Lambda, F(4, 6) = 19.65, p = .00, indicates a statistically significant difference among the scores of the interactional competence components (Table 13). The pairwise comparison results (Table 14) show the components whose differences were significant.
Table 14
Pairwise Comparisons of Interactional Competence for Elementary Level Jigsaw Task (Virtual)
(I) factor | (J) factor | Mean Difference (I-J) | Std. Error | Sig. |
Topic management | Turn management | 11.8* | 1.32 | .00 |
Non-verbal behavior | 13.4* | 1.36 | .00 | |
Breakdown repair | 13.1* | 1.37 | .00 | |
Interactive listening | 10.3* | 1.41 | .00 |
*. The mean difference is significant at the .05 level.
The pairwise comparisons table reveals that the difference between the mean scores of interactional components was significant for the elementary group in the jigsaw task (p< .05). In other words, topic management was the most influential component for Iraqi elementary EFL learners when doing the jigsaw task in the virtual learning environment. MANOVA was run to compare Iraqi elementary EFL learners’ performance in opinion exchange and jigsaw tasks in the virtual learning environment, the findings of which are presented below.
Table 15
Multivariate Test of Interactional Competence for Elementary Level in both Tasks (Virtual)
Effect | Value | F | Hypothesis df | Error df | Sig. | |
| Pillai’s Trace | .94 | 49.64 | 5 | 14 | .00 |
Wilks’ Lambda | .05 | 49.64 | 5 | 14 | .00 | |
Hotelling’s Trace | 17.73 | 49.64 | 5 | 14 | .00 | |
Roy’s Largest Root | 17.73 | 49.64 | 5 | 14 | .00 |
The result of Wilks’ Lambda, F(5, 14) = 49.64, p = .00, indicates a statistically significant difference among the scores of the interactional competence components (Table 15). The pairwise comparison results (Table 16) show the components whose differences were significant.
Table 16
Pairwise Comparisons of Interactional Competence for Elementary Level in both Tasks (Virtual)
Dependent Variable | (I) group | (J) group | Mean Difference (I-J) | Std. Error | Sig. |
Topic management | opinion-exchange task | Jigsaw task | 20.7* | 1.52 | .00 |
Turn management | opinion-exchange task | Jigsaw task | 3.8* | 1.09 | .00 |
Interactive listening | opinion-exchange task | Jigsaw task | 26.2* | .84 | .00 |
*. The mean difference is significant at the .05 level.
Table 16 reveals that the opinion-exchange task resulted in higher scores in topic management, turn management, and interactive listening for Iraqi elementary EFL learners in the virtual learning environment (p < .05).
The third research question was posed to find whether there was any difference between the competence components employed by Iraqi elementary EFL learners in opinion exchange and jigsaw tasks used in face-to-face and virtual learning environments. In so doing, MANOVA was run to compare each task type in two learning environments (face-to-face vs. virtual).
Table 17
Descriptive Statistics of Interactional Competence in Opinion Exchange Task of Elementary Level in Both Learning Environments
| Group | Mean | SD | N |
Topic management | face-to-face | 25.2 | 5.57 | 10 |
Virtual | 34.4 | 2.59 | 10 | |
Total | 29.8 | 6.33 | 20 | |
Turn management | face-to-face | 9.5 | 3.77 | 10 |
Virtual | 5.7 | 3.19 | 10 | |
Total | 7.6 | 3.92 | 20 | |
Non-verbal behavior | face-to-face | .2 | .42 | 10 |
Virtual | .2 | .42 | 10 | |
Total | .2 | .41 | 20 | |
Breakdown repair | face-to-face | .6 | 1.07 | 10 |
Virtual | 1.6 | 1.83 | 10 | |
Total | 1.1 | 1.55 | 20 | |
Interactive listening | face-to-face | 20.1 | 5.42 | 10 |
Virtual | 29.6 | 2.11 | 10 | |
Total | 24.85 | 6.31 | 20 |
The above table indicates that the participants in the virtual learning environment obtained higher scores in topic management, breakdown repair, and interactive listening, while their face-to-face counterparts had higher scores in turn management.
Table 18
Multivariate Test of Interactional Competence in Opinion Exchange Task of Elementary Level in Both Learning Environments
Effect | Value | F | Hypothesis df | Error df | Sig. | |
| Pillai’s Trace | .84 | 15.04 | 5 | 14 | .00 |
Wilks’ Lambda | .15 | 15.04 | 5 | 14 | .00 | |
Hotelling’s Trace | 5.37 | 15.04 | 5 | 14 | .00 | |
Roy’s Largest Root | 5.37 | 15.04 | 5 | 14 | .00 |
The result of Wilks’ Lambda, F(5, 14) = 15.04, p = .00, indicates a statistically significant difference among the scores of the interactional competence components (Table 18). The pairwise comparison results (Table 19) show the components whose differences were significant.
Table 19
Pairwise Comparisons of Interactional Competence in Opinion Exchange Task of Elementary Level in Both Learning Environments
Dependent Variable | (I) group | (J) group | Mean Difference (I-J) | Std. Error | Sig. |
Topic management | virtual | face-to-face | 9.2* | 1.94 | .00 |
Turn management | face-to-face | virtual | 3.8* | 1.56 | .02 |
Interactive listening | virtual | face-to-face | 9.5* | 1.84 | .00 |
*. The mean difference is significant at the .05 level.
Table 19 reveals that the virtual learning environment prompted more topic management and interactive listening competences in the opinion exchange task, while the face-to-face learning environment promoted turn management (p < .05).
Regarding the jigsaw task, the performance of the two groups (virtual and face-to-face) was compared to see whether the application of interactional competence components differed by learning environment.
Table 20
Descriptive Statistics of Interactional Competence in Jigsaw Task of Elementary Level in Both Learning Environments
| group | Mean | SD | N |
Topic management | face-to-face | 31.3 | 3.4 | 10 |
virtual | 13.7 | 4.05 | 10 | |
Total | 22.5 | 9.73 | 20 | |
Turn management | face-to-face | 19.4 | 3.5 | 10 |
virtual | 1.9 | 1.28 | 10 | |
Total | 10.65 | 9.33 | 20 | |
Non-verbal behavior | face-to-face | .4 | .69 | 10 |
virtual | .3 | .48 | 10 | |
Total | .35 | .58 | 20 | |
Breakdown repair | face-to-face | 1.3 | 1.82 | 10 |
virtual | .6 | .84 | 10 | |
Total | .95 | 1.43 | 20 | |
Interactive listening | face-to-face | 7.9 | 3.63 | 10 |
virtual | 3.4 | 1.64 | 10 | |
Total | 5.65 | 3.58 | 20 |
Table 20 indicates that the participants in the face-to-face learning environment obtained higher scores in topic management, turn management, breakdown repair, and interactive listening.
Table 21
Multivariate Test of Interactional Competence in Jigsaw Task of Elementary Level in Both Learning Environments
Effect | Value | F | Hypothesis df | Error df | Sig.
Pillai’s Trace | .97 | 118.45 | 5 | 14 | .00
Wilks’ Lambda | .02 | 118.45 | 5 | 14 | .00
Hotelling’s Trace | 42.3 | 118.45 | 5 | 14 | .00
Roy’s Largest Root | 42.3 | 118.45 | 5 | 14 | .00
The Wilks’ Lambda result, F(5, 14) = 118.45, p < .05, indicates a statistically significant difference between the two learning environments across the interactional competence components (Table 21). The pairwise comparison results (Table 22) show the components for which the difference was significant.
Table 22
Pairwise Comparisons of Interactional Competence in Jigsaw Task of Elementary Level in Both Learning Environments
Dependent Variable | (I) group | (J) group | Mean Difference (I-J) | Std. Error | Sig. |
Topic management | face-to-face | virtual | 17.6* | 1.67 | .00 |
Turn management | face-to-face | virtual | 17.5* | 1.18 | .00 |
Interactive listening | face-to-face | virtual | 4.5* | 1.26 | .00 |
*. The mean difference is significant at the .05 level.
Table 22 reveals that, in the jigsaw task, the face-to-face learning environment prompted greater use of topic management, turn management, and interactive listening than the virtual one (p < .05).
DISCUSSION
This study intended to assess the interactional competence of EFL learners in face-to-face and virtual learning environments using opinion exchange and jigsaw tasks. Three research questions were posed for the present study, the findings of which are discussed below.
The first research question of the study sought to find the interactional competence components (topic management, turn management, non-verbal behavior, breakdown repair, and interactive listening) that Iraqi elementary EFL learners more frequently used in opinion exchange and jigsaw tasks in the face-to-face learning environment. The findings revealed that topic management and interactive listening were the two most influential components for Iraqi elementary EFL learners when doing the opinion exchange task in the face-to-face learning environment. Additionally, topic management was the most influential component for Iraqi elementary EFL learners when doing the jigsaw task in the face-to-face learning environment. Finally, the jigsaw task performance resulted in higher scores in turn and topic management components of interactional competence, while the opinion exchange task elicited more interactive listening.
These results might be explained as follows. First, given the pressure imposed by the study tasks, the participants may have felt obliged to complete them. Consequently, they tended to invest more effort in constructing their part of the conversation through interactional management features (e.g., managing turns and topics and asking questions) and in responding to their interlocutor as listeners. Therefore, topic management, turn management, and interactive listening were the most frequently used interactional competence components.
Second, as elementary learners, they may have spent more time practicing interactional management features such as topic and turn management, which are more readily noticed by learners. At this proficiency level, it can be challenging to attend to other components, such as non-verbal behavior or breakdown repair, as an integral part of conducting effective conversations (Wang, 2015).
Topic management was the most frequently used component in both tasks. This might imply that the learners allocated more attention to enriching the task content by creating more topics and providing supporting examples or details. It might also indicate that the participants strove to complete their tasks within the given time by exchanging turns frequently to keep the conversation moving forward (Wang, 2015).
Comparing the two tasks, the jigsaw task (story completion) elicited more topic and turn management. In the story-completion task, which required more negotiation and had only one correct solution (i.e., deciding the right order of the six pictures), participants had to agree or disagree with the order proposed by their interlocutor and sustain an engaged conversation, with more turn interruptions and overlapping. The opinion-exchange task, by contrast, was open-ended and did not require interlocutors to reach a final agreement on the prompt; interlocutors’ personal opinions were therefore heavily involved, and participants had to weigh their alternatives and make decisions. This task naturally elicited more moves of prompting, topic connection, agreement questions, and opinion questions. In this more open-ended, free-discussion setting, participants might feel less stressed and have more time to comment on each other’s ideas and to provide back-channel signals (a feature of interactive listening that was more frequent in the opinion-exchange task).
The second research question of the study sought to find the interactional competence components (topic management, turn management, non-verbal behavior, breakdown repair, and interactive listening) that Iraqi elementary EFL learners more frequently used in opinion exchange and jigsaw tasks in the virtual learning environment. The findings indicated that topic management was the most influential component for Iraqi elementary EFL learners when doing both the opinion exchange and the jigsaw tasks in the virtual learning environment. Besides, in the virtual learning environment, the opinion exchange task resulted in higher scores in topic management, turn management, and interactive listening.
Topic management is important for any speech exchange system, but it is particularly vital for institutional talk, where the interaction is goal-oriented and task-related and success depends, to a large extent, on the accomplishment of the talk (Liu, 2022). Although no directly comparable study of virtual learning environments and interactional competence was found, the finding about topic management might be partly explained by Nisa et al. (2021), who observed and surveyed students learning speaking via small group discussions in Zoom breakout rooms. Their findings revealed that exchanging ideas and opinions in small group discussions helped students become more confident in their speaking skills. In breakout rooms, students not only exchanged views and opinions but also shared the knowledge, experiences, and talents each student brought to the small group. This pattern reflects the online community of inquiry (Garrison, 2017), in which participants share concepts related to the topic at hand and interact through written chat and question-and-answer exchanges. These observations might explain why topic management was the most frequently used component.
Besides, looking at topic management from the perspective of the tasks, in opinion exchange, managing topics helps learners express their views coherently and build on each other’s contributions. In jigsaw tasks, effective topic management facilitates collaboration and helps learners connect separate pieces of information, enhancing the learning experience (Wells, 2002). In jigsaw tasks, participants often need to piece together different parts of information to form a complete picture (Syukur, 2016). Topic management ensures that each participant’s contributions are relevant and that the conversation progresses logically.
Additionally, elementary EFL learners might not yet possess well-developed interactional skills, making topic management more critical in their exchanges. Their developmental stage may mean they rely more on the scaffolding provided by clear topic boundaries to guide their contributions and responses.
In terms of virtual learning, online platforms may provide less immediate feedback than face-to-face interaction, making clear topic management even more essential (Baralt, 2013). In online environments, managing the topic becomes crucial because responses can be delayed; participants need to steer the conversation to maintain active engagement and coherence. Online communication also often lacks the non-verbal cues present in face-to-face interaction, which makes it even more important to manage topics explicitly so that all participants are aligned and understand each other’s contributions.
In summary, justifying topic management as the most frequent interactional competence component in online jigsaw and opinion-exchange tasks hinges on its role in facilitating effective communication, enhancing engagement, and maintaining clarity amidst the complexities of online interactions. By framing these conversations around well-managed topics, participants can collaborate effectively, promote understanding, and achieve desired learning outcomes.
The third research question of the study intended to find whether there was any difference between the competence components employed by Iraqi elementary EFL learners in opinion exchange and jigsaw tasks in face-to-face and virtual learning environments. The findings indicated that, in the opinion exchange task, the virtual learning environment prompted more topic management and interactive listening while the face-to-face learning environment promoted turn management; in the jigsaw task, the face-to-face learning environment prompted more topic management, turn management, and interactive listening than the virtual one.
Compared with face-to-face communication, individuals engaging in online communication show lower liking and a reduced ability to evaluate each other (Baker et al., 2020), and they take more time to communicate, with lower efficiency (Baltes et al., 2002). In the jigsaw task, participants are required to agree on the arrangement of the pictures and arrive at a single response; therefore, the interlocutors must check each other’s comprehension and give or receive feedback to ensure that the conversation has been understood (features of interactive listening). This might explain why interactive listening was used more frequently in the jigsaw task in the face-to-face learning environment.
As the findings revealed, turn management was used more frequently in the face-to-face than in the virtual learning environment. The rapid turn-taking between interlocutors in face-to-face conversation may rely on an automatic, neural oscillatory mechanism that synchronizes interlocutors at the syllable rate (Wilson & Wilson, 2005). There may also be differences in the fluency of turn-taking between online and face-to-face communication. One reason may lie in the differing ability of media to convey cues. According to media richness theory, different media convey different cues (Meredith, 2020; Zhu et al., 2018), and face-to-face communication is generally regarded as the richest medium (Geiger, 2020). When communicating face to face, interlocutors can usually not only hear verbal cues but also see non-verbal cues, such as gestures, gaze, and body movements (Binder et al., 2019; Ramirez & Burgoon, 2004). Although interlocutors can obtain many turn-taking cues (e.g., lexico-syntactic and prosodic information) in online audio communication, some non-verbal cues, such as body information, are lacking (Wang et al., 2018).
Given that these non-verbal cues are important in the process of turn-taking (Duncan, 1972), this means that different media may have different fluencies of turn-taking. In addition, network latency may also affect turn-taking fluency. In face-to-face communication, interlocutors share the same time and space and speak in real time (Seuren et al., 2021), which means that conversation will naturally occur and continue. In online communication, network latency is an inevitable problem (Rusk & Pörn, 2019), and the latency duration may range from tens to hundreds of milliseconds (Seuren et al., 2021). This means that online communication is not simultaneous, such that the timing of turn-taking is generally longer online than in face-to-face communication (Seuren et al., 2021).
Finally, previous studies have suggested that face-to-face communication be regarded as a multimodal phenomenon that is largely multi-channel and nonverbal (Levinson & Holler, 2014; Mondada, 2016). This means that interlocutors engaged in face-to-face communication not only hear the voice of the other side but also see the non-verbal information they transmit, such as facial expressions and body movements.
CONCLUSION
This study provided evidence to enhance our understanding of the complex nature of interactional competence in the context of English-speaking assessment. In general, most of the selected interaction features were helpful in describing interactions of different communication functions. The findings also suggest that task type differences could possibly impose a certain influence on the distribution pattern of interaction features, which can be considered a potential variable in measuring interaction performance.
Though the findings provide crucial information about Iraqi EFL learners’ interactional competence, they should be interpreted cautiously, and the limitations of the study should be acknowledged. The virtual learning environment data were audio-only, so non-verbal behavior, such as facial expression, body posture, and direction of gaze, was not recorded or analyzed, even though such non-linguistic behavior can provide important information about the discourse. Furthermore, the data were collected from EFL learners in one city and country, and the sample size was limited. As such, the generalizability of the findings to EFL learners in other contexts may be limited.
The current study included two task types. Future studies can draw on the literature on task-based language teaching and second language assessment to seek alternative perspectives, such as a cognitive approach to classifying paired speaking tasks. Future studies can also explore the effects of various task characteristics and conditions on participants’ interaction performance.
Last but not least, the findings of this study offer useful suggestions for instructional practice. First, successful communication is a two-way street: learners should be aware that an efficient listener often fills silences for their partner, comments on what the partner has said, and provides back-channel signals to maintain the conversation (Ducasse & Brown, 2009). Second, it is recommended that L2 learners use more questions in conversation to increase the level of co-construction (e.g., Brooks, 2009; Dimitrova-Galaczi, 2004).
References
Al Ruheili, H., & Al-Saidi A. (2015). Students’ perceptions on the effectiveness of using Edmodo in EFL Classes. Journal of Global Academic Institute, 1(1), 23-33.
Alhawiti, M. M. F. (2017). The effect of virtual classes on the student’s English achievement in Tabuk Community College. International Journal of Learning, Teaching and Educational Research, 16(5), 17-25.
Al-Qahtani, M. H. (2019). Teachers’ and students’ perceptions of virtual classes and the effectiveness of virtual classes in enhancing communication skills. Arab World English Journal, Special Issue: The Dynamics of EFL in Saudi Arabia, 7, 223-240.
Al-Said, K. (2015). Students’ perceptions of Edmodo and mobile learning and their real barriers towards them. The Turkish Online Journal of Educational Technology, 14(2), 167-180.
Andyani, H., Setyosari, P., Wiyono, B. B., & Djatmika, E. T. (2020). Does technological pedagogical content knowledge impact on the use of ICT in pedagogy? International Journal of Emerging Technologies in Learning (iJET), 15(3), 126-139.
Atkinson, D. (2011). Alternative approaches to second language acquisition. Taylor & Francis.
Atkinson, J. M., & Heritage, J. (1984). Structures of social action. Cambridge University Press.
Babbie, E. R. (2010). The practice of social research. (12th ed.). Belmont, CA: Wadsworth Cengage.
Baker, D. A., Burns, D. M., & Reynolds-Kueny, C. (2020). Just sit back and watch: Large disparities between video and face-to-face interview observers in applicant ratings. International Journal of Human–Computer Interaction, 36(4), 1–12.
Balasubramanian, K., Jaykumar V., & Fukey L. N. (2014). A study on “student preference towards the use of Edmodo as a learning platform to create responsible learning environment.” Procedia - Social and Behavioral Sciences, 144(2014), 416-422.
Baltes, B. B., Dickson, M. W., Sherman, M. P., Bauer, C. C., & LaGanke, J. S. (2002). Computer-mediated communication and group decision making: A meta-analysis. Organizational Behavior and Human Decision Processes, 87(1), 156–179.
Baralt, M. (2013). The impact of cognitive complexity on feedback efficacy during online versus face-to-face interactive tasks. Studies in Second Language Acquisition, 35(4), 689-725.
Bataineh, R. F., & Mayyas, M. B. (2017). The utility of blended learning in EFL reading and grammar: A case for Moodle. Teaching English with Technology, 17(3), 35-49
Bicen, H. (2014). The role of social learning networks in mobile-assisted language learning: Edmodo as a case study. Journal of Universal Computer Science, 21(10), 1297-1306.
Binder, J. F., Cebula, K., Metwally, S., Vernon, M., Atkin, C., & Mitra, S. (2019). Conversational engagement and mobile technology use. Computers in Human Behavior, 99, 66–75.
Bøhn, H. (2015). Assessing spoken EFL without a common rating scale: Norwegian EFL teachers’ conceptions of construct. SAGE Open, 5(4), 27-36.
Borger, L. (2019). Assessing interactional skills in a paired speaking test: Raters’ interpretation of the construct. Apples – Journal of Applied Language Studies, 13(1), 151-174.
Brooks, L. (2009). Interacting in pairs in a test of oral proficiency: Co-constructing a better performance. Language Testing, 26, 341-366.
Brouwer, C. E., & Wagner, J. (2004). Developmental issues in second language conversation. Journal of Applied Linguistics, 1(1), 29-47.
Creswell, J. W. (2009). Educational research: Planning, conducting, and evaluating quantitative and qualitative research. Upper Saddle River, NJ: Pearson.
Dayag, J. (2018). Reaching out: Facilitating EFL learning through Edmodo. International Journal of Advanced Multidisciplinary Scientific Research, 1(2), 1-7.
Dimitrova-Galaczi, E. (2004). Peer-peer interaction in a paired speaking test: The case of the First Certificate in English. (Doctoral dissertation), Columbia University.
Ducasse, A. M., & Brown, A. (2009). Assessing paired orals: Raters' orientation to interaction. Language Testing, 26(3), 423-443.
Duncan, S. (1972). Some signals and rules for taking speaking turns in conversation. Journal of Personality and Social Psychology, 23(2), 283–292.
Emrani, F., & Hooshmand, M. (2019). A conversation analysis of self-initiated self-repair structures in advanced Iranian EFL learners. International Journal of Language Studies, 13(1), 57-76.
Galaczi, E. D., & Taylor, L. B. (2020). Measuring interactional competence. In P. Winke & T. Brunfaut (Eds.), The Routledge handbook of second language acquisition and language testing (pp. 338-348). Routledge.
Galaczi, E., & Taylor, L. (2018). Interactional competence: Conceptualisations, operationalisations, and outstanding questions. Language Assessment Quarterly, 15(3), 219-236.
Garrison, D. R. (2017). E-learning in the 21st century: A community of inquiry framework for research and practice. Taylor & Francis.
Garrod, S., & Pickering, M. J. (2004). Why is conversation so easy? Trends in Cognitive Sciences, 8(1), 8-11.
Geiger, I. (2020). From letter to Twitter: A systematic review of communication media in negotiation. Group Decision and Negotiation, 29(3), 207–250.
Hamouda, A. (2020). The effect of virtual classes on Saudi EFL students’ speaking skills. International Journal of Linguistics, Literature and Translation, 3(4), 175–204
Herrera Mosquera, L. (2017). Impact of implementing a virtual learning environment (VLE) in the EFL classroom. Íkala, revista de lenguaje y cultura, 22(3), 479-498.
Jungheim, N. O. (2001). The unspoken element of communicative competence: Evaluating language learners’ nonverbal behavior. In T. Hudson & J. D. Brown (Eds.), A focus on language test development: Expanding the language proficiency construct across a variety of tests (pp. 1-34). University of Hawai'i, Second Language Teaching and Curriculum Center.
Kear, K. (2007). Communication aspects of virtual learning environments: Perspectives of early adopters. Retrieved from: http://oro.open.ac.uk/8623/1/KearCommunicationAspects2.pdf
Kern, R., Ware, P., & Warschauer, M. (2004). 11. Crossing frontiers: New directions in online pedagogy and research. Annual Review of Applied Linguistics, 24, 243–260.
Kim, Y. (2009). The effects of task complexity on learner-learner interaction. System, 37(2), 254-268.
Knapp, M. L., & Hall, J. A. (2002). Nonverbal communication in human interaction. Crawfordsville, IN: Thomson Learning.
Kukulska-Hulme, A., Pettit, J., Bradley, L., Carvalho, A. A., Herrington, A., Kennedy, D. M., & Walker, A. (2011). Mature students using mobile devices in life and learning. International Journal of Mobile and Blended Learning, 3(1), 18-52.
Lee, J., & Choi, H. (2017). What affects learner’s higher-order thinking in technology-enhanced learning environments? The effects of learner factors. Computers & Education, 115, 143–152.
Levinson, S. C., & Holler, J. (2014). The origin of human multi-modal communication. Philosophical Transactions of the Royal Society, 369(1651), 20130302.
Liu, L. (2022). Interactional features of Chinese EFL learners’ discourse in a paired speaking test: Implications for L2 teaching and testing (Doctoral dissertation), Macquarie University.
MacKey, A., & Gass, S. (2006). Second language research: Methodology and design. Routledge.
Martin, F., Stamper, B., & Flowers, C. (2020). Examining student perception of their readiness for online learning: Importance and confidence. Online Learning, 24(2), 38-58.
Mathew, N. G., Sreehari, P., & Al-Rubaat, A. M. (2019). Challenges and implications of virtual e-learning platform in EFL context: Perceptions of teachers. International Journal of English Language Teaching, 7(2), 100–116.
May, L. (2011). Interactional competence in a paired speaking test: Features salient to raters. Language Assessment Quarterly, 8(2), 127–145.
McCarthy, M. (2002). Good listenership made plain: British and American non-minimal response tokens in everyday conversation. In R. Reppen, S.M. Fitzmaurice, & D. Biber (Eds.), Using corpora to explore linguistic variation (pp. 49–72). Philadelphia: John Benjamins.
Mensah, R. O., Quansah, C., Oteng, B., & Nii Akai Nettey, J. (2023). Assessing the effect of information and communication technology usage on high school student’s academic performance in a developing country. Cogent Education, 10(1), 1–16.
Meredith, J. (2020). Conversation analysis, cyberpsychology and online interaction. Social and Personality Psychology Compass, 14, Article e12529.
Milis, K., Wessa, P., Poelmans, S., Doom, C., & Bloemen, E. (2008). The impact of gender on the acceptance of virtual learning environments. KU Leuven Association.
Mondada, L. (2016). Challenges of multimodality: Language and the body in social interaction. Journal of Sociolinguistics, 20(3), 336–366.
Morales, M. C., & Lee, J. S. (2015). Stories of assessment: Spanish-English bilingual children's agency and interactional competence in oral language assessments. Linguistics and Education, 29, 32-45.
Muijs, D. (2010). Doing quantitative research in education with SPSS. (2nd ed.). London, England: Sage.
Mukasheva, M., Kornilov, I., Beisembayev, G., Soroko, N., Sarsimbayeva, S., & Omirzakova, A. (2023). Contextual structure as an approach to the study of virtual reality learning environment. Cogent Education, 10(1), 1–22.
Nayak, D. (2021). E-learning during COVID-19 and students’ satisfaction. Education, 4, 287–299.
Nisa, L. Z., Prameswari, T. N., & Alawiyah, Y. I. (2021). The effect of using small group discussions through zoom breakout room to increase the frequency of individual speaking participation in the speaking courses. Journal of Digital Learning and Education, 1(3), 109-117.
Nordquist, R. (2019). The cooperative principle in conversation. ThoughtCo.
O’Leary, R. (2004). Virtual learning environment. Retrieved from: https://www.alt.ac.uk/sites/default/files/assets_editor_uploads/documents/eln002pdf.
Pekarek Doehler, S., & Pochon-Berger, E. (2015). The development of L2 interactional competence: Evidence from turn-taking organization, sequence organization, repair organization and preference organization. In T. Cadierno & W. S. Eskildsen (Eds.), Usage-based perspectives on second language learning (pp. 233–268). De Gruyter.
Piccoli, G., Ahmad, R., & Ives, B. (2001). Web-Based Virtual Learning Environment: A research framework and a preliminary assessment of effectiveness in basic IT skills training. MIS Quarterly, 25(4), 401–426.
Raman, A., & Rathakrishnan, M. (2018). FROG VLE: Teachers’ technology acceptance using utaut model. International Journal of Mechanical Engineering and Technology, 9(3), 529–538.
Ramirez, A., Jr., & Burgoon, J. K. (2004). The effect of interactivity on initial interactions: The influence of information valence and modality and information richness on computer-mediated interaction. Communication Monographs, 71(4), 422–447.
Rashid, A. H. A., Shukor, N. A., Tasir, Z., & Na, K. S. (2021). Teachers’ perceptions and readiness toward the implementation of virtual learning environment. International Journal of Evaluation and Research in Education (IJERE), 10(1), 209–214
Rusk, F., & Pörn, M. (2019). Delay in L2 interaction in video-mediated environments in the context of virtual tandem language learning. Linguistics and Education, 50, 56–70.
Schegloff, E. A. (1986). The routine as achievement. Human Studies, 9(2-3), 111-151.
Schegloff, E. A., Jefferson, G., & Sacks, H. (1977). The preference for self-correction in the organization of repair in conversation. Language, 53(2), 361-382.
Seuren, L. M., Wherton, J., Greenhalgh, T., & Shaw, S. E. (2021). Whose turn is it anyway? Latency and the organization of turn-taking in video-mediated interaction. Journal of Pragmatics, 172(4), 63–78.
Syukur, H. (2016). Building up students’ speaking achievement through jigsaw technique. Jurnal Adabiyah, 16(2), 122-137.
Tuzlukova, V., Al Busaidi, S., Coombe, C. & Stojkovic N. (2016). Research on technology-based language education in the Sultanate of Oman: Perspectives for student skills’ enhancement. Journal of Teaching English for Specific and Academic Purposes, 4(1), 1-8.
Varonis, E. M., & Gass, S. M. (1985). Miscommunication in native/nonnative conversation. Language in Society, 14(3), 327-343.
Wahjono, H., Wiyono, B. B., Maisyaroh, & Mustiningsih. (2021). Development of blended-learning-based semester credit system implementation model to improve learning service. Information, 12(12), 511. https://doi.org/10.3390/info12120511
Wang, L. (2015). Assessing interactional competence in second language paired speaking tasks. (Doctoral Dissertation), Northern Arizona University.
Waring, H. Z. (2018). Teaching L2 interactional competence: Problems and possibilities. Classroom Discourse, 9(1), 57-67.
Warner, R. (2013). Personal and professional skills of TESOL practitioners of the future. In P. Davidson, M. Al Hamly, C. Coombe, S. Troudi & C. Gunn (Eds.), Proceedings of the 18th TESOL Arabia conference: Achieving Excellence through Life Skills Education (pp. 22-28). Dubai: TESOL Arabia.
Wells, G. (2002). Learning and teaching for understanding: The key role of collaborative knowledge building. Advances in Research on Teaching, 9, 1-42.
Wilson, M., & Wilson, T. P. (2005). An oscillator model of the timing of turn-taking. Psychonomic Bulletin & Review, 12(6), 957–968.
Yildiz, L. M. (2011). English VG1 level oral examinations: how are they designed, conducted and assessed? (Master's thesis), Universitetet i Oslo.
Zhu, M. X., Yan, X. L., & Yuan, Q. J. (2018). A review of researches based on media richness theory in MIS discipline. Journal of Modern Information, 38(09), 146–154.
Biodata
Karwan Othman Azeez Zanganah is an assistant lecturer at Kirkuk University. He completed his bachelor’s degree in English language at the College of Education, University of Kirkuk, and in August 2019 he completed his master’s degree in methods of teaching English language. He began lecturing in October 2019 and has taught at several private and governmental universities and institutions, covering general English, methods of teaching, media texts, and translation in various departments. His main interests are second language acquisition, assessment, technology, and translation.
E-mail: Karwanothman@uokirkuk.edu.iq
Elahe Sadeghi-Barzani, an assistant professor at Islamic Azad University, Khorasgan Branch, began her teaching career at the age of 22. During the COVID-19 pandemic in 2020, she served as the head of her department for two years. She has published articles on TEFL and translation issues, with a strong interest in applied linguistics, psycholinguistics, and sociolinguistics. Elahe has supervised numerous M.A. and Ph.D. students in TEFL and translation, resulting in many dedicated teachers and translators who share their passion for English with joy.
E-mail: elahesadeghi20@yahoo.com
Parween Shawkat Kawther Qader, an assistant professor at Salahaddin University, began her teaching career in 1990 at Tuz secondary school and has lectured since 2006 in the English Department, College of Education, Salahaddin University. She has supervised many master’s and PhD students, served as an examiner and evaluator of many academic studies, and headed the assessment center at the English Department for many years. She has written many research papers and is currently working on others.
E-mail: Parween.Kawther@su.edu.krd
Fatemeh Karimi is a faculty member of Islamic Azad University, Isfahan Branch. She received her M.A. degree in TEFL from Tarbiat Moallem University of Tabriz in 2006 and her PhD from Islamic Azad University, Isfahan Branch, in 2018. She has been the head of the English Department at Islamic Azad University, Isfahan Branch, since 2021. Her research interests are language testing and research.
E-mail: Fatinaz.karimi@yahoo.com