Identifying Factors Affecting the Success of Digital Formative Assessment in E-Learning Based on Meta-Synthesis Method
Mitra Omidvar 1 (PhD Student in Educational Management, Faculty of Humanities, Islamic Azad University of Qom, Qom, Iran)
Seyed Mohammadbagher Jafari 2 (Associate Professor, Department of Industrial and Technological Management, Faculty of Management and Accounting, College of Farabi, University of Tehran, Qom, Iran)
Gholamreza Sharifirad 3 (Professor, Department of Educational Management, Faculty of Humanities, Islamic Azad University of Qom, Qom, Iran)
Hosein Karimian 4 (Associate Professor, Department of Educational Management, Faculty of Humanities, Islamic Azad University of Qom, Qom, Iran)
Keywords: Electronic assessment, formative assessment, digital formative assessment, learning assessment, e-learning
Abstract:
Introduction
Like other systems, the education system has its own fundamental principles and distinctive characteristics. One of its essential and inseparable elements is the assessment and evaluation of students' learning processes. Web-based instruction is a form of learning that has become increasingly prominent. One of the major challenges of e-learning systems is assessing learners’ levels of understanding (Alizadeh, 2016). The lack of proper assessment mechanisms in e-learning environments has led administrators to adopt temporary and costly interventions (Rahimi, 2017). This issue often stems from the failure to properly identify key evaluation components in online education. Accordingly, the present study aims to identify the factors contributing to the success of digital formative assessment in e-learning.
Materials and Methods
To address the research problem and develop a comprehensive model, a qualitative meta-synthesis approach was employed by systematically reviewing the literature. Meta-synthesis involves the search, appraisal, integration, and interpretation of quantitative or qualitative studies within a specific domain (Catalano, 2013). This study followed the seven-step method proposed by Sandelowski and Barroso (2006). The research population included articles published between 2000 and 2022, selected through a multi-stage screening process using reputable academic databases. Thematic analysis was used for data analysis, involving iterative comparison between raw data, coded summaries, and analytical themes.
To determine the weight and priority of the extracted and coded factors, the Shannon entropy method (Shannon, 1948) was applied. Each indicator's weight was calculated from the degree of emphasis it received across the sources; the weights within each category were then summed, and final importance coefficients were determined accordingly.
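The entropy-weighting step described above can be sketched as follows. The frequency matrix and the function name `entropy_weights` are illustrative assumptions for this sketch, not the study's actual coded data.

```python
import math

def entropy_weights(freq):
    """Shannon entropy-weight method over a source-by-indicator frequency
    matrix: rows are reviewed sources, columns are coded indicators."""
    m, n = len(freq), len(freq[0])
    diversification = []
    for j in range(n):
        col = [freq[i][j] for i in range(m)]
        total = sum(col)
        # normalize the column to a probability distribution
        p = [x / total for x in col]
        # Shannon entropy, scaled by 1/ln(m) so it lies in [0, 1]
        e = -sum(x * math.log(x) for x in p if x > 0) / math.log(m)
        # low entropy = emphasis concentrated in few sources = more informative
        diversification.append(1 - e)
    s = sum(diversification)
    return [d / s for d in diversification]

# Illustrative: 3 sources x 2 indicators; indicator 2's emphasis is uneven
# across sources, so it receives the larger weight.
weights = entropy_weights([[1, 5], [1, 1], [1, 0]])
```

An indicator cited uniformly by every source carries no discriminating information (entropy 1, weight 0), while unevenly emphasized indicators are weighted up; category weights follow by summing the weights of their member indicators.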
The validity of the meta-synthesis process was verified using the CASP checklist. To assess reliability, five coded articles were randomly selected and independently recoded by an expert, and inter-rater reliability was measured with Cohen's kappa coefficient.
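As a minimal sketch of this reliability check, Cohen's kappa compares the two coders' observed agreement against the agreement expected by chance from their marginal label distributions. The label sequences below are invented for illustration; they are not the study's actual codings.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical codes to the
    same items: (p_o - p_e) / (1 - p_e)."""
    n = len(rater_a)
    # observed proportion of agreement
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement from each rater's marginal label distribution
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[label] * cb[label] for label in ca) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Illustrative codings of ten text segments by the researcher and the expert:
# "T" = teacher-related code, "S" = student-related code
researcher = ["T", "T", "S", "S", "T", "T", "S", "T", "T", "T"]
expert     = ["T", "T", "S", "S", "T", "T", "S", "T", "S", "T"]
kappa = cohens_kappa(researcher, expert)  # ≈ 0.78
```

By common convention, values above roughly 0.6 to 0.7 are read as substantial agreement between coders.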
Given the research objective—to identify factors influencing the success of digital formative assessment in e-learning—the study is categorized as applied research. Furthermore, since the data were naturally gathered without manipulation, the study falls under the category of descriptive (non-experimental) research.
Results and Findings
Using a qualitative meta-synthesis approach and content analysis of 76 selected articles, the study identified two main categories, five dimensions, and 80 specific indicators that contribute to the success of formative assessment in digital learning contexts. Findings revealed that previous studies lacked such a comprehensive, systematic, and integrative approach. Most prior research focused on limited aspects of formative assessment, whereas the current study examined its diverse dimensions in a dynamic and structured manner.
The identified factors were classified into two main categories: human and technical. The human category included teacher- and student-related dimensions, while the technical category encompassed instructional content, digital infrastructure, and learning environment.
In the quantitative phase, Shannon entropy was used to measure the distribution of emphasis across sources for each indicator. This enabled the prioritization of the 80 indicators based on their importance and influence.
Specifically, 20 indicators were related to the teacher dimension, 21 to the student, 9 to instructional content, 11 to digital infrastructure, and 19 to the learning environment. Among all indicators, the most frequently cited one was "providing appropriate feedback by the teacher to the student", appearing 26 times across the reviewed studies. This indicator received the highest priority based on the entropy calculations.
Additionally, when comparing the total frequencies of each dimension—teacher (80), student (84), digital infrastructure (82), environment (80), and instructional content (38)—it was evident that the student element received the most emphasis. However, there was a general balance among most dimensions, except for instructional content, which appeared with about half the frequency of others.
Discussion and Conclusion
Among the key success factors of digital formative assessment, the highest emphasis was on providing effective feedback, especially real-time and task-specific feedback, in line with findings from Vogelzang (2017), Reynolds (2020), Reis (2011), Mayer (2016), Hattie (2007), Cowie (2013), and Wiliam (2016). The results suggest that designing and implementing supportive structures and processes in virtual learning environments, aligned with the identified priorities, can play a significant role in achieving educational goals.
Another notable finding was the limited attention paid in previous studies to individual learner-related factors, such as low self-confidence and high stress (Taghavinia, 2017), reduced psychological pressure (Klimova, 2019), educational equity (Taghizadeh, 2018), and the avoidance of learner comparisons (McMillan, 2010). Based on the indicator rankings presented in this study, it is recommended that future e-learning models place greater emphasis on personal learner variables. This approach may enhance the quality and effectiveness of formative assessments in digital environments and create fertile ground for further research.
Abasi, H., et al. (2023). Solutions to improve formative assessment in e-learning environments. New Approaches in Educational Administration. doi:10.30495/JEDU.2023.28524.5723
Aldon, G., et al. (2015). Which support technology can give to mathematics formative assessment? The FaSMEd project in Italy and France. Quaderni di Ricerca in Didattica (Mathematics), 25, 631-641.
Alizadeh, S., et al. (2016). Analyzing the quality of classroom assessment of teachers: A mixed research study. Quarterly Journal of Research in School and Virtual Learning, 17(5), 63-84. [In Persian].
Baker, R., et al. (2011). Detecting learning moment-by-moment. International Journal of Artificial Intelligence in Education, 21(1-2), 5-25.
Beesley, A., et al. (2018). Enhancing formative assessment practice and encouraging middle school mathematics engagement and persistence. School Science & Mathematics, 118(1), 4-16.
Bennett, R. (2011). Formative assessment: A critical review. Assessment in Education: Principles, Policy and Practice, 18(1), 5-25.
Bicer, A., et al. (2017). Integrated STEM assessment model. EURASIA Journal of Mathematics, Science and Technology Education, 13(7), 3959-3968.
Black, P. W. (2004). Teachers developing assessment for learning: Impact on student achievement. Assessment in Education, 11(1), 49-65.
Black, P. W. (2018). Classroom assessment and pedagogy. Assessment in Education: Principles, Policy & Practice, 25(6), 551-575.
Burkhardt, H. S. (2019). Formative assessment in mathematics. In H. B. Andrade (Ed.), Handbook of formative assessment in the disciplines (pp. 35-67). New York: Routledge.
Burns, M., et al. (2010). The effects of technology-enhanced formative evaluation on student performance on state accountability math tests. Psychology in the Schools, 47(6), 582-591.
Catalano, A. (2013). Patterns of graduate students' information seeking behavior: A meta-synthesis of the literature. Journal of Documentation, 69(2), 243-274.
Cherner, T. S. (2017). Reconceptualizing TPACK to meet the needs of twenty-first-century education. The New Educator, 13(4), 329-349.
Clark, R. K. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2).
Cowie, B., et al. (2013). Expanding notions of assessment for learning inside science and technology primary classrooms. Rotterdam, The Netherlands: Sense.
Crompton, H., et al. (2019). Mobile learning and student cognition: A systematic review of PK-12 research using Bloom's Taxonomy. British Journal of Educational Technology, 50(2), 684-701.
Decristan, J., et al. (2015). Embedded formative assessment and classroom process quality: How do they interact in promoting science understanding? American Educational Research Journal, 52(6), 1133-1159.
Dolin, J., et al. (2018). Exploring relations between formative and summative assessment. In J. Dolin & R. Evans (Eds.), Transforming assessment: Through an interplay between practice, research and policy (Contributions from Science Education Research) (pp. 53-80). Cham, Switzerland: Springer.
Dolin, J., & Evans, R. (Eds.). (2018). Transforming assessment: Through an interplay between practice, research and policy (Contributions from Science Education Research). Cham, Switzerland: Springer.
Dukuzumuremyi, S. S. (2018). Interactions between pupils and their teachers in collaborative and technology-enhanced learning settings in the inclusive classroom. Teaching and Teacher Education, 76, 165-174.
Dunn, K. M. (2009). A critical review of research on formative assessment: The limited scientific evidence of the impact of formative assessment in education. Practical Assessment, Research, & Evaluation, 14(7), 1-11.
Duschl, R. (2019). Learning progressions: framing and designing coherent sequences for STEM Education. Disciplinary and Interdisciplinary Science Education Research, 1(4). Retrieved February 4, 2020.
European Commission. (2016). FaSMEd summary report. Retrieved February 4, 2020, from https://cordis.europa.eu/docs/results/612/612337/final1-finalfasmed- summary-report-final.
Faber, J., et al. (2017). The effects of a digital formative assessment tool on mathematics achievement and student motivation: Results of a randomized experiment. Computers & Education, 106, 83-96.
Feldman, A., & Capobianco, B. M. (2008). Teacher learning of technology enhanced formative assessment. Journal of Science Education Technology, 17, 82-99.
Finlayson, O. M. (2017). Building teacher confidence in inquiry and assessment: Experiences from a Pan- European Collaboration. In A companion to research in teacher education (M. Peters, B. Cowie, I. Menter ed., pp. 825-838). Singapore: Springer.
Gail Morreim, J. (2016). How Digital Formative Assessment Increases Student Achievement and Motivation. Saint Paul, Minnesota: Hamline University.
Geer, R., et al. (2017). Emerging pedagogies for the use of iPads in schools. British Journal of Educational Technology, 48(2), 490-498.
Griffin, P. (2015). Assessment and teaching of 21st century skills: Methods and approaches. (E. Care, Ed.) Dordrecht: Springer.
Griffin, P. M. (2012). Assessment and teaching of 21st century skills. (E. Care, Ed.) Dordrecht: Springer.
Harris, J., et al. (2017). TPCK/TPACK research and development: Past, present, and future directions. Australasian Journal of Educational Technology, 33(3), i-viii.
Haßler, B., et al. (2016). Tablet use in schools: A critical review of the evidence for learning outcomes. Journal of Computer Assisted Learning, 32(2), 139-156.
Hattie, J. T. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112.
Hayward, L. (2015). Assessment is learning: the preposition vanishes. Assessment in Education: principles, policy & practice, 22(1), 27-43.
Hickey, D., et al. (n.d.). Assessment as learning: Enhancing discourse, understanding, and achievement in innovative science curricula. Journal of Research in Science Teaching, 49(10), 1240-1270.
Hmelo-Silver, C. (2004). Problem-based learning: What and how do students learn? Educational Psychology Review, 16(3), 235-266.
Hondrich, A., et al. (2018). Formative assessment and intrinsic motivation: The mediating role of perceived competence. Zeitschrift für Erziehungswissenschaft, 21, 717-734.
Hooker, T. (2017). Transforming teachers' formative assessment practices through ePortfolios. Teaching and Teacher Education, 67, 440-453.
Jennings, L. (2010). Inquiry-based learning. CA, Encyclopedia of Educational Reform and Dissent: SAGE.
Jönsson, A. (2020). Definitions of formative assessment need to make a distinction between a psychometric understanding of assessment and “evaluate judgements.”. Frontiers In Education, 5(2), 1-4.
Khanifar, H., et al. (2015). Designing an entrepreneurial process model in Iran's food industry. Entrepreneurship Development, 9, 219-237. [In Persian].
Khodadahosseini, S. (2013). Designing an entrepreneurial branding process model in small and medium businesses in the food industry. Brand Management Quarterly, 1, 13-45. [In Persian].
Kimbell, R. (2012). Evolving project e-scape for national assessment. International Journal of Technology and Design Education, 22(2), 135-155.
Kimbell, R. a. (2007). E-scape portfolio assessment: Phase 2 report. Goldsmiths. University of London.
Kingston, N. N. (2011). Formative assessment: A meta-analysis and call for research. Educational Measurement: Issues and Practice, 30(4), 28-37.
Kippers, W., et al. (2018). Teachers' views on the use of assessment for learning and data-based decision making in classroom practice. Teaching and Teacher Education, 75, 199-213.
Kirschner, P. D. (2017). The myths of the digital natives and the multitasker. Teaching and Teacher Education, 67, 135-142.
Knowles, T. K. (2016). A conceptual framework for integrated STEM education. International Journal of STEM Education, 3(11), 1-11.
Koehler, M. J. (2009). What is technological pedagogical content knowledge? Contemporary Issues in Technology and Teacher Education, 9(1), 60-70.
Laal, M. (2013). Collaborative learning: Elements. Procedia - Social and Behavioral Sciences, 83, 814-818.
Lai, C. (2019). Trends of mobile learning: A review of the top 100 highly cited papers. British Journal of Educational Technology.
Larson, K. T. (2008). Continuous Feedback Pedagogical Patterns. In PLoP’08 Proceedings of the 15th Conference on Pattern Languages of Programs. New York: ACM.
Lashkarblouki, et al. (2011). Designing a sustainable strategy process model using a hybrid approach. Strategic Management Thought, 6, 121-151. [In Persian].
Lee, H. F. (2012). Factors that affect science and mathematics teachers’ initial implementation of technology-enhanced formative assessment using a classroom response system. Journal of Science Education Technology, 21, 523-539.
Looney, J. (2019). Digital formative assessment: A review of the literature.
Lysaght, Z. O. (2017). Scaling up, writ small: using an assessment for learning audit instrument to stimulate site-based professional development, one school at a time. Assessment in Education: Principles, Policy & Practice, 24(2), 271-289.
Maier, W. W. (2016). Effects of a computer-assisted formative assessment intervention based on multiple-tier diagnostic items and different feedback types. Computers & Education, 95, 85-98.
Faber, M. (2020). Effect of digital formative assessment tools on teaching quality and student achievement. PhD Thesis, University of Twente.
Martin, M., et al. (2017). TIMSS 2019 assessment design. In I. V. S. Mullis & M. O. Martin (Eds.), TIMSS 2019 Assessment Frameworks (pp. 79-91).
McMillan, J., & Cauley, K. (2010). Formative assessment techniques to support student motivation and achievement. The Clearing House: A Journal of Educational Strategies, Issues and Ideas, 83(1), 1-6.
Mishra, P. K. (2006). Technological pedagogical content knowledge: A framework for integrating technology in teacher knowledge. Teachers College Record, 108(6), 1017-1054.
Molenaar, I., et al. (2019). What can moment-by-moment learning curves tell about students' self-regulated learning? Learning and Instruction. Retrieved February 4, 2020.
Ng, W. (2012). Empowering scientific literacy through digital literacy and multiliteracies. Hauppauge, NY: Nova Science Publishers.
Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.
Nikou, S., & Economides, A. (2018). Mobile-based assessment: A literature review of publications in major refereed journals from 2009 to 2018. Computers & Education, 125, 101-119.
Nikou, S., & Economides, A. (2019). Factors that influence behavioral intention to use mobile-based assessment: A STEM teacher's perspective. British Journal of Educational Technology, 50(2), 587-600.
O'Leary, M., et al. (2018). The state of the art in digital technology based assessment. European Journal of Education, 53(2), 160-175.
Qaltash, A., et al. (2014). Pathology of the descriptive evaluation model in order to provide a suitable model in the elementary school. Quarterly Journal of Research in Educational and Virtual Learning, 10(3), 7-16. [In Persian].
Rahimi, S. (2017). Review of computer-based assessment for learning in elementary and secondary education. Journal of Computer Assisted Learning, 33, 1-19.
Reis, S. (2011). The effects of differentiated instruction and enrichment pedagogy on reading achievement in five elementary schools. American Educational Research Journal, 48(2), 462-501.
Reynolds, K., et al. (2020). Digital Formative Assessment of Transversal Skills in STEM. Dublin City University. ISBN: 978-1-911669-05-0.
Sandelowski, M., & Barroso, J. (2006). Handbook for Synthesizing Qualitative Research: Springer Publishing Company.
Shannon, C. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27(4), 623-656.
Shavelson, R., et al. (2008). On the impact of curriculum-embedded formative assessment on learning: A collaboration between curriculum and assessment developers. Applied Measurement in Education, 21(4), 295-314.
Shute, V. L. (2016). Advances in the science of assessment. Educational Assessment, 21(1), 34-59.
Spector, J. M., et al. (2016). Technology enhanced formative assessment for 21st century learning. Educational Technology and Society, 19(3), 58-71.
Stobart, G. (2006). The validity of formative assessment (Assessment and Learning ed.). (J. Garder, Ed.) CA: SAGE Publications Ltd.
Szendey, O., et al. (2020). Virtual Learning Environments and Digital Tools Implementing Formative Assessment and Transversal Skills in STEM. Dublin City University. ISBN: 978-1-911669-14-2.
Taghizadeh, A., et al. (2018). Identifying capabilities of formative assessment in virtual learning environments. Quarterly Journal of Research in School and Virtual Learning, 1(6), 43-62. [In Persian].
Van De Ven, A. (1992). Suggestions for Studying Strategy Process: A Research Note. Strategic Management Journal, 13(S1), 169-188.
Vogelzang, J. A. (2017). Classroom action research on formative assessment in a context-based chemistry course. Educational Action Research, 25(1), 155-166.
Wiliam, D. (2016). The secret of effective feedback. Educational Leadership, 73(7), 10-15.
Wiliam, D. (2019). Why formative assessment is always both domain-general and domain-specific and what matters is the balance between the two. In R. B. H. Andrade, Handbook of formative assessment in the disciplines (pp. 243-264).
Wiliam, D., & Thompson, M. (2007). Integrating assessment with learning: What will it take to make it work? In C. A. Dwyer (Ed.), The future of assessment: Shaping teaching and learning. New York: Erlbaum.
Zenisky, A. S. (2006). Innovative item formats in computer-based testing: In pursuit of improved construct representation. In S. Downing, & T. Haladyna (Ed.), Handbook of test development (pp. 329-348). New York, NY: Routledge.