A Review of Internet-Centered Language Assessment: Origins, Challenges, and Perspectives
Journal: Journal of Teaching English Language Studies
Authors: Adel ESMAEELI 1, Khalil MIRZAEI 2, Marjan EBRAHIMI 3
1 - M.A. Candidate, English Dept., Islamic Azad University, Takestan Branch, Iran
2 - PhD, Agri. Extension and Education Dept., Bu-Ali Sina University, Hamedan, Iran
3 - M.S., Extension and Education Dept., Bu-Ali Sina University, Hamedan, Iran
Keywords: internet-centered language assessment (ICLA), computer-based tests (CBTs), computer-adaptive tests (CATs)
Abstract:
This article describes the origins of internet-centered language assessment (ICLA), how ICLAs differ from traditional computer-oriented tests, and what uses and functions ICLAs serve in different taxonomies of language testing. After a brief review of computer-oriented testing, ICLAs are defined and classified into low-tech and high-tech categories. Because low-tech tests are the more feasible and practical of the two, they are the main focus of this article. Item types of low-tech ICLAs are then described, and validation concerns specific to ICLAs are discussed. The general advantages of ICLAs, as well as their design and implementation issues, are then considered before examining the role that testing consequences play in deciding whether an ICLA is an appropriate assessment instrument. It is argued that ICLAs are most appropriate in low-stakes testing situations; with proper supervision they can also be used in medium-stakes situations, but they are not generally recommended for high-stakes situations.