Abstract
Item Response Theory (IRT) provides the accepted framework for examining student responses to individual test items in order to assess their quality. IRT is especially valuable both for improving items intended for reuse in future tests and for eliminating ambiguous or misleading ones. However, to ensure that all IRT parameters are correctly estimated, every item must be tested on a large number of examinees. As a result, IRT tends to be avoided by teaching staff who have access to only a relatively small number of students. Nevertheless, the accuracy of parameter estimates matters less in assessment calibration, where items whose parameters exceed a threshold value are simply flagged for revision. This study uses simulated data sets under various simulation conditions and introduces two new quality indices, together with their respective IRT goodness-of-fit tests, as a means to explore the feasibility of applying IRT-based assessment calibration to small sample sizes.
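The abstract does not specify the paper's quality indices or threshold values, but the underlying threshold-flagging idea can be sketched under a standard two-parameter logistic (2PL) IRT model. In the sketch below, the item parameters, cut-off values, and function names are illustrative assumptions, not the paper's actual method:

```python
import math
import random

def p_correct(theta, a, b):
    """2PL item response function: probability that an examinee of
    ability theta answers an item with discrimination a and
    difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def simulate_responses(items, n_examinees, seed=0):
    """Simulate a binary response matrix (examinees x items) under the
    2PL model, drawing abilities from a standard normal distribution."""
    rng = random.Random(seed)
    thetas = [rng.gauss(0.0, 1.0) for _ in range(n_examinees)]
    return [[1 if rng.random() < p_correct(t, a, b) else 0
             for (a, b) in items]
            for t in thetas]

def flag_items(items, a_min=0.5, b_range=(-3.0, 3.0)):
    """Flag items for revision when their parameters fall outside
    illustrative calibration thresholds: discrimination below a_min,
    or difficulty outside b_range."""
    return [i for i, (a, b) in enumerate(items)
            if a < a_min or not (b_range[0] <= b <= b_range[1])]

# Example: three items as (discrimination, difficulty) pairs.
items = [(1.2, 0.0),   # well-behaved item
         (0.2, 0.5),   # weak discrimination -> flagged
         (1.0, 4.0)]   # extreme difficulty -> flagged
responses = simulate_responses(items, n_examinees=50)
print(flag_items(items))  # indices of items needing revision
```

Note that this sketch only flags items from *known* parameters; the paper's contribution concerns whether such parameters can be estimated reliably enough for flagging when the number of examinees is small.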
Original language | English |
---|---|
Title of host publication | Proceedings of the 5th International Multi-conference on Computing in the Global Information Technology ICCGI 2010 |
Place of Publication | Valencia, Spain |
Publisher | IEEE |
Pages | 214-219 |
Number of pages | 6 |
ISBN (Print) | 9781424480685 |
DOIs | |
Publication status | Published - 20 Sept 2010 |
Event | Proceedings of the 5th International Multi-conference on Computing in the Global Information Technology ICCGI 2010 - Valencia, Spain, 2010 Duration: 20 Sept 2010 → … |
Keywords
- psychometrics
- assessment calibration
- goodness-of-fit test
- item response theory
- calibration
- data models
- estimation
- indexes
- computational modeling
- accuracy