Polytomous item explanatory IRT models with random item effects: Concepts and an application
Introduction
The primary role of educational measurement and assessment is to provide information that helps facilitate teachers’ instruction and students’ learning [6]. This role places a premium on the capability and quality of informative measurement and assessment. Explanatory measurement [15] draws various explanatory inferences from assessments, strengthening the feedback that can be given to teachers and students as well as to test developers and educational researchers. In item response theory (IRT), explanatory item response models (EIRM; [15]) aim to explain the person and/or item side of the assessment data in order to enrich inferential information and enhance feedback. Among the person explanatory, item explanatory, and doubly explanatory models of the EIRM approach, this paper focuses on item explanatory models, in which item properties are incorporated to explain and predict item effects. In measurement and assessment practice, item explanatory models have various methodological advantages: extracting essential and meaningful elementary components, testing constructs hypothesized in item design and item generation, and predicting the difficulties of newly developed items, as well as measuring the effects of testing conditions such as item presentation position, item exposure time, and testing occasion [13], [19], [40], [58]. The item explanatory approach is also useful for examining the effects of item properties such as item design variables, item response format, content-specific learning, task characteristics, and cognitive operations in various assessment contexts [31], [40]. Thus, item explanatory models can provide useful and practical information for improving item design, item generation, and test development in educational measurement and assessment.
A typical approach to item explanatory modeling is the Linear Logistic Test Model (LLTM; [21]), which decomposes the difficulties of specific items into linear combinations of elementary components related to item properties or features [18]. The original LLTM assumes that predictors based on the observed item properties account for item difficulties perfectly. “Perfect” explanation, however, is hardly possible, because the substantive theories behind the measurement model may not be flawless and/or the item difficulty parameter may be a random variable by nature [16], [35]. Considering the uncertainty in explanation and/or the random nature of item parameters, it is reasonable, as in an ordinary regression model, to add a random error or residual term to the item regression component of item explanatory models. This approach is the Linear Logistic Test Model with item error (LLTM + ε; [35], [50]), which improves prediction of the item difficulty parameters of a Rasch model by allowing for residual variation [16], [31]. That is, the item error term in the LLTM + ε compensates for the discrepancy between the item difficulty freely estimated in the Rasch model and the item difficulty predicted from the estimated item property effects in the LLTM.
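In common EIRM notation (the symbols here are generic rather than taken from any one of the cited sources), the progression from the Rasch model to the LLTM and the LLTM + ε can be sketched as:

```latex
% Rasch model: person ability theta_p, freely estimated item difficulty beta_i
\operatorname{logit} P(Y_{pi} = 1) = \theta_p - \beta_i

% LLTM: item difficulty fully explained by K item properties X_{ik}
\beta_i = \sum_{k=1}^{K} \eta_k X_{ik}

% LLTM + e: the same item regression plus a residual item error term
\beta_i = \sum_{k=1}^{K} \eta_k X_{ik} + \varepsilon_i,
\qquad \varepsilon_i \sim N(0, \sigma_{\varepsilon}^{2})
```

The error variance then quantifies how much item difficulty variation is left unexplained by the item properties.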
Although item explanatory models have these methodological advantages and diverse potential uses, most of their applications have investigated dichotomous data (e.g., [16], [17], [31], [34], [40], [58]). In particular, the LLTM and the LLTM + ε are the established, widely used item explanatory approaches to dichotomous items. In a wide range of educational, psychological, and sociological measurement and assessment contexts, however, it is common to have ordered-category responses, which are regarded as polytomous data [15]. For example, an educational assessment is often developed under a learning progression framework, which defines distinct, ordered levels of student achievement. In measurement practice, partial credit items and rating scale items are frequently used item types that are typically scored as ordered-category responses. Therefore, extensions and applications of item explanatory models to polytomous data, referred to as polytomous item explanatory models, need to be investigated further.
To develop polytomous item explanatory models, it is reasonable and appealing to extend the LLTM + ε approach to polytomous data, considering the uncertainty in explanation and/or the random nature of item parameters. Polytomous extensions of the LLTM + ε require two steps of item explanatory modeling: (1) item explanatory extensions of polytomous item response models using the LLTM approach—polytomous item explanatory models; and (2) conceptualizing polytomous random item effects and adding them as item error terms to the item regression component of those models using the LLTM + ε approach—polytomous item explanatory models with random item effects.
For the first modeling step, the polytomous item parameters in polytomous item response models must be reparameterized by incorporating item properties, which is not straightforward. Nevertheless, a few studies have investigated polytomous extensions of the LLTM approach. Glas and Verhelst [29] imposed linear restrictions on the item parameters of a polytomous item response model, but their reparameterization requires a complicated translation to interpret the estimated item parameters. Linacre [43] decomposed polytomous item parameters into a linear combination of facet effects, but the facet effects models have not been used for item explanatory modeling, and continuous item predictors cannot be incorporated into them. Fischer and Parzer [22] and Fischer and Ponocny [23] extended the LLTM approach to polytomous item response models using a normalization constant and basic parameters for item parameterization; however, these item parameters are difficult to interpret, and incorporating item properties into the models is complicated. Building on these studies, Kim [39] recently investigated item explanatory extensions of polytomous item response models under a general statistical modeling framework. Two polytomous item explanatory models were proposed using different item explanatory approaches to polytomous data, and the two models showed methodological and practical differences in how item properties are incorporated and how their effects are interpreted. In this paper, these two polytomous item explanatory models serve as the first step of the incremental extensions.
The second modeling step is our main concern for polytomous data. Adding item error terms may seem straightforward, but it is difficult enough that polytomous extensions of the LLTM + ε approach and their applications have hardly been investigated. The difficulty stems mainly from two issues regarding polytomous random item effects: a conceptual issue and an application issue. In IRT, the LLTM + ε is a type of random item effects model in which items are treated as random, item difficulties are regarded as random effects, and hence the item difficulty parameter is a random variable [16]. The LLTM assumes that item difficulties are perfectly predicted by the fixed item property effects, whereas the LLTM + ε relaxes this assumption by allowing for random variation across items. Random item effects models are a rather new area in educational and psychological measurement research [16]. In particular, the concepts of random item effects involve a random error interpretation, for the uncertainty in explanation, and a random sampling interpretation, for the random nature of item parameters [35]. Since these two interpretations are two sides of the same coin, polytomous random item effects and their distributional assumptions must be investigated before item error terms can be added to the polytomous item explanatory models. However, random item effects for polytomous items have rarely been examined or conceptualized. A few studies have touched on them (e.g., [36], [55], [56]), but they mainly investigated item selection techniques or item family calibration methods for polytomous items rather than the underlying distributions of polytomous random item effects.
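The equivalence of the two interpretations can be made explicit in generic notation: adding a normal residual to the item regression (random error interpretation) is the same as assuming the item difficulties are sampled from a normal distribution centered at the values predicted by the item properties (random sampling interpretation):

```latex
\beta_i = \mathbf{x}_i^{\top}\boldsymbol{\eta} + \varepsilon_i,
\quad \varepsilon_i \sim N(0, \sigma_{\varepsilon}^{2})
\qquad \Longleftrightarrow \qquad
\beta_i \sim N\!\left(\mathbf{x}_i^{\top}\boldsymbol{\eta},\; \sigma_{\varepsilon}^{2}\right)
```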
For applications of polytomous random item effects models in practice, it is necessary to distinguish the different types of polytomous item explanatory models with random item errors and to select among them. An overarching framework that summarizes these models would facilitate their understanding and application, but such a framework has not yet been developed. Furthermore, treating both items and persons as random yields crossed random effects [16], [35]. Estimating random item effects models with crossed random effects is demanding because of the complexity and difficulty of the numerical integration involved [9], [69]. For polytomous data this estimation becomes even more difficult in practice, owing to the scarcity of statistical software that can estimate polytomous random item effects models.
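To make the crossed structure concrete, the following minimal simulation sketch (not the estimation procedure itself; the variance values are arbitrary assumptions) generates dichotomous Rasch responses with both persons and items treated as random. Because every person is crossed with every item, the marginal likelihood involves integration over both sets of random effects, which is what makes estimation demanding:

```python
import numpy as np

rng = np.random.default_rng(7)
n_persons, n_items = 316, 24  # sizes matching the Verbal Aggression data

# Crossed random effects: every person responds to every item
theta = rng.normal(0.0, 1.0, size=n_persons)   # person abilities (random)
beta = rng.normal(0.0, 0.8, size=n_items)      # item difficulties (random)

# Rasch model probability for each person-item pair
logits = theta[:, None] - beta[None, :]
prob = 1.0 / (1.0 + np.exp(-logits))
y = rng.binomial(1, prob)                      # 316 x 24 dichotomous responses

print(y.shape)  # (316, 24)
```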
This research aims to develop and apply polytomous item explanatory models with random item errors by extending the LLTM + ε approach to polytomous data, considering the uncertainty in explanation and/or the random nature of item parameters. To specify the models, the two modeling steps of incremental extensions are discussed in the following sections. For the first step, we review the existing models for polytomous item explanatory extensions. Building on polytomous item response models, the two polytomous item explanatory models that Kim [39] suggested are described. For the second step, we examine the concepts and types of polytomous random item effects in terms of a random sampling interpretation, which makes it possible to identify the underlying distributions of random item errors on the polytomous item parameters. We then add those random item errors to the polytomous item explanatory models in terms of a random error interpretation. Next, in addition to summarizing an overarching framework for the polytomous item explanatory models with random item errors, we discuss estimation methods for these models. Lastly, we demonstrate an empirical application of the proposed models to the Verbal Aggression data to show their practical implications, interpretations, and methodological advantages.
Section snippets
Polytomous item response models
This section reviews existing polytomous item response models for the first modeling step of polytomous item explanatory extensions. Given a context of ordered-category responses regarded as polytomous data, adjacent-categories logits are employed for polytomous item response models and their item explanatory extensions. Since most of the ordered-category responses in educational assessment or cognitive development contexts are subjectively assigned scores between categories,
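For reference, the adjacent-categories logit underlying such models compares each category with the category just below it (generic notation, with θ<sub>p</sub> the person parameter and δ<sub>im</sub> the step difficulty for category m of item i):

```latex
\log \frac{P(Y_{pi} = m)}{P(Y_{pi} = m - 1)} = \theta_p - \delta_{im},
\qquad m = 1, \dots, M_i
```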
Polytomous random item effects
For the second modeling step, the concepts and types of random item effects for polytomous items should be investigated to incorporate random item errors into the two polytomous item explanatory models. The concepts of random item effects are related to the random nature of item parameters in terms of a random sampling interpretation as well as to the uncertainty in explanation in terms of a random error interpretation [35]. Since random item effects parameters are the same random variables
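One plausible formalization, offered here only as an illustrative sketch (the specific distributional assumption is ours, not a quotation of the models developed later in the paper), places a residual on each step difficulty and lets the residuals of an item be correlated across steps:

```latex
\delta_{im} = \mathbf{x}_i^{\top}\boldsymbol{\eta}_m + \varepsilon_{im},
\qquad
\boldsymbol{\varepsilon}_i = (\varepsilon_{i1}, \dots, \varepsilon_{iM})^{\top}
\sim N(\mathbf{0}, \boldsymbol{\Sigma}_{\varepsilon})
```

Under a multivariate normal assumption of this kind, the random sampling interpretation (step difficulties drawn around their predicted values) and the random error interpretation (residuals added to the item regression) again coincide.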
Data
For an empirical example, we used the Verbal Aggression data set [70], which is publicly available at the BEAR Center website (see [15]; the data set can be downloaded from http://bearcenter.berkeley.edu/EIRM/). The data were collected from first-year psychology students at a Belgian university, who were asked to answer behavioral questions about verbally aggressive reactions to frustrating situations. In total, there are 7,584 observations from 316 persons responding to the 24 Verbal Aggression items.
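The 24 items follow a fully crossed design (4 frustrating situations × 2 behavior modes, wanting vs. actually doing, × 3 verbally aggressive behaviors: cursing, scolding, shouting), which is what makes the data set well suited to item explanatory modeling. A small sketch of that design (the situation labels are shorthand, not the original item wording):

```python
from itertools import product

# Shorthand labels for the four frustrating situations
# (assumed labels, not the original item wording)
situations = ["bus", "train", "store", "call"]
modes = ["want", "do"]                 # wanting vs. actually doing
behaviors = ["curse", "scold", "shout"]

# Fully crossed 4 x 2 x 3 design: one tuple per item
items = list(product(situations, modes, behaviors))

print(len(items))        # 24 items
print(316 * len(items))  # 7584 observations with 316 persons
```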
Conclusion and discussion
This paper has investigated how to extend the LLTM + ε approach to polytomous data. Considering the uncertainty in explanation and/or the random nature of item parameters, the concepts and types of polytomous random item effects were examined and then they were incorporated into the existing polytomous item explanatory models, the item location explanatory MFRM and the step difficulty explanatory LPCM. Through the two modeling steps of polytomous extensions of the LLTM + ε, the three polytomous
Acknowledgments
The authors would like to thank Sophia Rabe-Hesketh for her careful comments on model specification and thank anonymous reviewers for their helpful comments on an earlier draft.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
Development of Stan codes for estimating the proposed models in this paper was supported in part by Grant R305D140059 from the Institute of Education Sciences (IES), U.S. Department of Education. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the IES.
References (75)
- Alternating imputation posterior estimation of models with crossed random effects, Comput. Stat. Data Anal. (2011)
- The linear logistic test model as an instrument in educational research, Acta Psychol. (1973)
- Computer adaptive testing improved accuracy and precision of scores over random item selection in a physical functioning item bank, J. Clin. Epidemiol. (2006)
- Automatic item generation of probability word problems, Stud. Educ. Eval. (2009)
- Generating random correlation matrices based on vines and extended onion method, J. Multivariate Anal. (2009)
- R.J. Adams, M. Wu, M. Wilson, ConQuest 3.0 [computer program], ACER, Hawthorn, Australia, ...
- Using SAS PROC MCMC for item response theory models, Educ. Psychol. Measur. (2015)
- Regression and ordered categorical variables (with discussion), J. R. Stat. Soc. (1984)
- A rating formulation for ordered response categories, Psychometrika (1978)
- A pairwise likelihood approach to generalized linear models with crossed random effects, Stat. Modell. (2005)
- Assessment and classroom learning, Assess. Educ.: Principles, Policy, Pract.
- Generalizability in item response modeling, J. Educ. Meas.
- Stan: A probabilistic programming language, J. Stat. Softw.
- Additive multilevel item structure models with random residuals: Item modeling for explanation and item generation, Psychometrika
- Parameter estimation of multiple item response profiles model, Br. J. Math. Stat. Psychol.
- Advances in combining Generalizability Theory and Item Response Theory (unpublished doctoral dissertation)
- Assessing change with the extended logistic model, Br. J. Math. Stat. Psychol.
- BUGS code for item response theory, J. Stat. Softw.
- Random item IRT models, Psychometrika
- The estimation of item response models with the lmer function from the lme4 package in R, J. Stat. Softw.
- Item response theory for psychologists
- Improving construct validity with cognitive psychology principles, J. Educ. Meas.
- Mixture models
- An extension of the rating scale model with an application to the measurement of change, Psychometrika
- An extension of the partial credit model with an application to the measurement of change, Psychometrika
- Bayesian item response modeling: Theory and applications
- RIM: A random item mixture model to detect differential item functioning, J. Educ. Meas.
- Bayesian and frequentist cross-validation methods for explanatory item response models (unpublished doctoral dissertation)
- Inference from iterative simulation using multiple sequences, Stat. Sci.
- Computerized adaptive testing with item cloning, Appl. Psychol. Meas.
- Extensions of the partial credit model, Psychometrika
- An application of explanatory item response modeling for model-based proficiency scaling, Educ. Psychol. Measur.
- Multinomial logit random effects models, Stat. Modell.
- Introduction to domain-referenced testing, Educ. Technol.
- Models with item and item group predictors
- Calibration of polytomous item families using Bayesian hierarchical modeling, Appl. Psychol. Meas.
Graduate School of Education, University of California, Berkeley, 4415 Berkeley Way Building, Berkeley, CA 94720, USA.