Does the Rasch Model work for Equating?
Subject Areas: Psychotherapy
Rassoul Sadeghi 1, Jim Tognolini 2
1 - PhD student, University of New South Wales
2 - PhD, ACER General Manager, Sydney Office
Keywords: Rasch model, equating, item-response theory, psychometrics
Abstract:
The advent of modern psychometric theory, Item Response Theory (IRT), has made it possible to compare performance over time and across academic year levels where different tests (different items assessing the same construct) have been administered to different student groups on different occasions. For this to occur, the tests must be equated. Once they are equated, students’ performances can be represented on the same scale and therefore compared directly (e.g., comparing Year 3, 5 and 7 students in a subject), and performances can also be compared against predetermined cut-scores. Test equating of this type is currently used widely in Australia to identify the percentage of students deemed ‘at risk’ (below benchmark). This study compares the results of two equating procedures (relative anchoring and concurrent equating) used with Rasch (1960) measurement models as fit to the model becomes progressively worse. The research question is: what happens to students’ marks as fit to the model varies? Data in this study were generated from the one-parameter logistic model using the Simulation Program for Rasch Data (RUMMSims). The findings indicate that when the data fit the Rasch model, there is no significant difference between the results produced by the different equating procedures. However, as fit to the model becomes progressively worse, the different equating procedures generate significantly different results.
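The data-generation step described above can be illustrated with a minimal sketch. This is not the RUMMSims program used in the study; it is a hypothetical Python illustration of how dichotomous responses are simulated under the one-parameter logistic (Rasch) model, where the probability of a correct response depends only on the difference between person ability (theta) and item difficulty (b).

```python
import math
import random

def rasch_probability(theta, b):
    """P(correct) under the Rasch (1PL) model: exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def simulate_responses(abilities, difficulties, seed=0):
    """Generate a persons-by-items matrix of dichotomous (0/1) responses."""
    rng = random.Random(seed)
    return [
        [1 if rng.random() < rasch_probability(theta, b) else 0
         for b in difficulties]
        for theta in abilities
    ]

# Illustrative values only: 5 persons, 4 items (not the study's parameters)
abilities = [-1.0, -0.5, 0.0, 0.5, 1.0]
difficulties = [-0.8, -0.2, 0.3, 0.9]
data = simulate_responses(abilities, difficulties)
```

A simulation study of this kind would then degrade fit (for example, by generating some items from a model with varying discrimination) and compare the equating results across procedures.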