Dear all,
For the final Quant SIG Seminar meeting this term (Monday, 5th March), we are excited to have the seminar convener Professor Herb Marsh (Department of Education, Oxford University) presenting on the following topic (via video/Skype):
The Use of Item-Parcels in CFAs to Camouflage Misfit at the Item Level: Do Two Wrongs Make a Right?
This promises to be a really excellent talk, and a great final presentation in this series of the Quant SIG. As usual the Quant SIG will meet in Seminar Room J from 12:15pm-2pm, Department of Education, 28 Norham Gardens, Oxford OX2 7PY. If you do not have access to the building, please contact Patrick Alexander ([log in to unmask]) to arrange access.
Best wishes,
Patrick Alexander
**
ABSTRACT:
In SEM studies, it is typically ill-advised to:
(a) retain an independent clusters confirmatory factor analysis (ICM-CFA) model when its assumption of unidimensionality (i.e., no cross-loadings or correlated uniquenesses) is violated; and
(b) use item parcels when an ICM-CFA model does not fit the data.
However, the combined use of (a) and (b) often provides such misleadingly good fit indexes that applied researchers believe that misspecification problems are resolved—that two wrongs really do make a right.
In three studies based on real (self-esteem and big-five personality) and simulated data, we show that the use of item parcels can – and typically does:
• substantially inflate the apparent goodness of fit
• bias substantive interpretations of the results
Purposes
The present investigation has a dual purpose in relation to critical measurement issues that face applied SEM researchers. The first is to explore potentially serious limitations in the use and misuse of item parcels in factor analysis. Item parcels are the sum or mean of responses to several indicators designed to measure the same construct, thus resulting in a smaller number of parcels rather than a larger number of items. Yang, Nay and Hoyle (2010) argue that item parceling is the prevailing approach for including scales with many items in factor analysis and SEM models. The second purpose is to compare the use of exploratory structural equation modeling (ESEM) and the traditional independent clusters confirmatory factor analysis model (ICM-CFA; with no cross-loadings, secondary factors or correlated uniquenesses). We argue that the two issues are closely related in that it is precisely the situation in which ESEM outperforms ICM-CFA that the use of item parcels is most fraught with potential problems.
Perspectives
More Is Never Too Much. For real data, unidimensionality and pure indicators are an ideal to strive towards (i.e., a convenient fiction), but rarely if ever achieved. As noted by MacCallum (2003, p. 134): "Studies based on the assumption that models are correct in the population are of limited value to substantive researchers who wish to use the models in empirical research." For simulated data based on a population generating model that approximates the unidimensionality assumption of ICM-CFA, it might be reasonable to have only a few indicators per factor (e.g., enough to model and control measurement error). However, real data rarely if ever have these ideal properties, so it is better to have more indicators than the 3 or 4 typical in SEM studies.
Historically, in order to enhance the generalizability of constructs, it was common to have 10-15+ items per scale for the most widely used psychological tests, and standardized achievement tests typically have considerably more than 15 items. At least implicitly, tests are typically constructed under the assumption that the available indicators are a subset of a potentially very large number of indicators of the same construct (McDonald, 2010). Analogous to concerns about the number of persons, in a perfect (simulated) data world, it might be possible to find "truth" based on only a few participants, but with real data this would seriously undermine the generalizability of the findings. The same argument applies to making generalizations based on a single or only a few indicators of most constructs, leading Marsh et al. (1998) to conclude that more is never too much in relation to persons and items.
Use of Item Parcels. It is better to have more indicators per factor, but applied researchers are reluctant to incorporate large numbers of indicators into complex models. One widely employed compromise (see Marsh et al., 1988) is to collect many items, but to use item parcels in the analyses. In a recent review of the use of parceling strategies, Sterba and MacCallum (2010; also see Bagozzi & Edwards, 1998; Bandalos & Finney, 2001; Little et al., 2002; Sass & Smith, 2006; Marsh & O’Neill, 1994; Marsh et al., 1998; Williams & O’Boyle, 2008) were generally positive about parceling under appropriate conditions when:
• the focus is on relations between constructs (i.e., factor correlations or path coefficients) rather than scale development and the evaluation of item characteristics, and
• there is good a priori information to support the posited unidimensional factor structure: each item loads on one and only one factor (i.e., there are no cross-loadings), with no correlated uniquenesses and no secondary factors (i.e., an ICM-CFA model fits the data at the item level).
However, Williams and O’Boyle (2008) indicated that in practice, applied researchers frequently do not explicitly test the dimensionality of their constructs, and some use parceling even though assumptions of unidimensionality are violated.
Following Bandalos (2008) and others, we distinguish between homogeneous parceling and distributed parceling strategies. In homogeneous parceling strategies, closely related items that share systematic variation are placed in the same parcel. In distributed parceling strategies, items that share a source of systematic variation are distributed across different parcels either randomly or systematically. Little et al. (2002; also see Kishton & Widaman, 1994) recommended against homogeneous strategies, which can result in problems (e.g., unstable solutions or unacceptable parameter estimates), whereas distributed strategies are less prone to these problems. However, Coffman and MacCallum (2005) concluded that “how the parcels are constructed is less important than the fact that they are used” (p. 253). Indeed, if the constructs are truly unidimensional—a prerequisite for the appropriate use of parcels—then each of these strategies should be similar. However, the use of item parceling is not justified when the unidimensionality assumption is violated.
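The two parceling strategies can be made concrete with a minimal numpy sketch. The item counts, parcel sizes, and the assumption that adjacent items share systematic variance are hypothetical choices for illustration only, not taken from any of the studies described here:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: 12 items for one construct, n = 200 respondents.
# For illustration, assume consecutive items share a source of
# systematic variance (e.g., a common wording effect).
n_items, n_obs = 12, 200
items = rng.normal(size=(n_obs, n_items))

# Homogeneous parceling: items assumed to share systematic variance
# go into the SAME parcel (here, parcels of 3 consecutive items).
homogeneous = [items[:, i:i + 3].mean(axis=1) for i in range(0, n_items, 3)]

# Distributed parceling: related items are SPREAD across parcels,
# e.g., round-robin assignment of item k to parcel k mod 4.
distributed = [items[:, k::4].mean(axis=1) for k in range(4)]

print(len(homogeneous), len(distributed))  # 4 parcels under each strategy
```

If the items really were unidimensional, the two strategies would yield statistically equivalent parcels; the differences discussed above only emerge when items share systematic variance beyond the common factor.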
Methods
The present investigation is based on three studies that each show problems with the use of item parcels when the traditional ICM-CFA model does not fit responses to individual items, and that compare ICM-CFA and ESEM solutions. Study 1 is based on the responses to the 10-item Rosenberg Self-Esteem instrument collected on four occasions (see Marsh, Scalas & Nagengast, 2010). Study 2 is based on responses to two 12-item scales designed to measure Extraversion and Neuroticism (see Marsh, et al., 2010). Study 3 is based on simulated data for two 12-item scales in which the population structure is purely unidimensional (i.e., ICM-CFA), a good simple structure, or a moderate simple structure. Statistical analyses were done with Mplus version 6.1, using robust maximum likelihood estimation for CFA, ESEM, and SEM solutions (see Asparouhov & Muthén, 2009; Marsh, Lüdtke, et al., 2010; Marsh, Muthén et al., 2009).
Results: Summary and conclusions
Rosenberg Self-Esteem Instrument. The Rosenberg self-esteem instrument is one of the most widely used psychological instruments, but ICM-CFAs are complicated by the problem of positively and negatively worded items. Marsh, et al. (2010) apparently resolved this issue, demonstrating that the best model includes one substantive self-esteem factor and two separate method factors associated with positively and negatively worded items. Failure to include the method factors resulted in systematically diminished test-retest correlations over four occasions. A variety of parceling strategies all led to apparent support for a one-factor model that completely ignored misspecification associated with these method effects.
Factor Structure of Personality Measure (Neuroticism and Extraversion). Big-five personality factors have dominated recent personality research but the failure of ICM-CFA models to provide an acceptable fit has been a serious limitation. Marsh, Lüdtke, et al. (2010) demonstrated that the problem – at least in part – is reliance on the ICM-CFA model that requires each item to load on one and only one factor (i.e., to be purely unidimensional) and showed that ESEM apparently resolved this problem. Of particular substantive importance to the applied researcher, failure to account for cross-loadings in the ICM-CFA model led to systematic biases in the estimated factor correlations: .15 for the ESEM model with cross-loadings; .49 for the ICM-CFA model that did not allow cross-loadings; between .47 and .52 for the two-factor solutions based on parcel solutions that ignored the cross-loading misspecification; and an implicit 1.0 for the one-factor solution based on the distributive parcel strategy. In this sense the use of parcels merely camouflaged the misspecification (in relation to goodness of fit), but still resulted in biased estimates of the factor correlation.
Study 3: CFA, ESEM, and the Use of Item Parcels with Simulated Data. For simulated data based on known population generating models, it was possible to demonstrate empirically the substantial positive bias in the ICM-CFA factor solutions based on items and each of the parceling strategies. For simulated data that perfectly met the unidimensional assumption underlying the application of the ICM-CFA model and the use of parcels, both the ICM-CFA and ESEM solutions resulted in good fits to the data and accurate parameter estimates. However, even for the good approximation to simple structure in which cross-loadings were smaller than those likely to be found in applied research, the ICM-CFA item and parcel solutions resulted in substantially biased estimates of the known population correlations even though the fits of parcel solutions were good. Without reference to the ESEM solution, the applied researcher might well conclude that the data were sufficiently close to being unidimensional to justify the use of parcels that resulted in biased parameter estimates.
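The mechanism behind this bias can be illustrated with a minimal numpy simulation. The loadings, cross-loadings, factor correlation, and error variances below are hypothetical values chosen for illustration; they are not the generating models actually used in Study 3, and summing items into unit-weighted scale scores is only a rough stand-in for a parcel-based solution that ignores cross-loadings:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
rho = 0.30                 # assumed population factor correlation
main, cross = 0.70, 0.20   # assumed main loading and small cross-loading

# Two correlated latent factors
cov = np.array([[1.0, rho], [rho, 1.0]])
F = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# Two 12-item scales; every item has a small cross-loading
# on the non-target factor (a violation of unidimensionality).
e1 = rng.normal(scale=0.6, size=(n, 12))
e2 = rng.normal(scale=0.6, size=(n, 12))
X1 = main * F[:, [0]] + cross * F[:, [1]] + e1
X2 = cross * F[:, [0]] + main * F[:, [1]] + e2

# Unit-weighted scale scores: akin to collapsing items into parcels
# while ignoring the cross-loadings.
r_obs = np.corrcoef(X1.mean(axis=1), X2.mean(axis=1))[0, 1]
print(r_obs)  # substantially above the true factor correlation of .30
```

Even these modest cross-loadings push the observed between-scale correlation far above the true factor correlation, which is the same direction of bias reported for the parcel solutions above.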
Scientific Significance: Recommendations For The Use Of Item Parcels
1. Avoid using parcels to camouflage method effects, cross-loadings, and other sources of misspecification at the item level, even if they seem to be trivial and substantively unimportant. As demonstrated here, it is better to systematically model the misspecification at the item level; it might turn out to be substantively important (as in Study 1) and failure to do so is likely to bias parameter estimates (as in all three studies).
2. The use of item parcels is only justified when there is good support for the unidimensionality of all the constructs at the item level. Tests of this assumption should be conducted for the complete model at the item level as the evaluation of each construct separately ignores many forms of misspecification (e.g., cross-loadings, method effects that are common to different constructs). As demonstrated here, a useful test of this requirement is the comparison of the a priori ICM-CFA model and the corresponding ESEM model. Item parcels are only justified when goodness of fit and parameter estimates based on the more parsimonious ICM-CFA model are good, and similar to those based on the corresponding ESEM. If neither ICM-CFA nor ESEM models fit the data, explore alternative (ex-post facto) solutions at the item level with appropriate caution.
In conclusion, a priori use of item parcels is never justified without clear empirical support for the unidimensionality assumption. ESEM provides a potentially useful comparison to ICM-CFA solutions for testing this assumption, and a more accurate representation of the data when the assumption is not met.
You may leave the list at any time by sending the command
SIGNOFF allstat
to [log in to unmask], leaving the subject line blank.