ALLSTAT Archives

allstat@JISCMAIL.AC.UK

ALLSTAT March 2012

Subject:

University of Oxford Quant SIG meeting for March 5th: PROFESSOR HERB MARSH

From:

Patrick Alexander <[log in to unmask]>

Reply-To:

Patrick Alexander <[log in to unmask]>

Date:

Thu, 1 Mar 2012 10:48:54 +0000

Content-Type:

text/plain

Parts/Attachments:

text/plain (52 lines)

Dear all,

For the final Quant SIG Seminar meeting this term (Monday, 5th March), we are excited to have the seminar convener Professor Herb Marsh (Department of Education, Oxford University) presenting on the following topic (via video/Skype):

The Use of Item-Parcels in CFAs to Camouflage Misfit at the Item Level: Do Two Wrongs Make a Right?

This promises to be a really excellent talk, and a great final presentation in this series of the Quant SIG. As usual, the Quant SIG will meet in Seminar Room J from 12:15pm to 2pm, Department of Education, 28 Norham Gardens, Oxford OX2 7PY. If you do not have access to the building, please contact Patrick Alexander ([log in to unmask]) to arrange access.


Best wishes,

Patrick Alexander


**
ABSTRACT:

In SEM studies, it is typically ill-advised to:
(a) retain an independent clusters confirmatory factor analysis (ICM-CFA) model when its assumption of unidimensionality (i.e., no cross-loadings or correlated uniquenesses) is violated; and
(b) use item parcels when an ICM-CFA model does not fit the data.
However, the combined use of (a) and (b) often produces such misleadingly good fit indexes that applied researchers believe that misspecification problems are resolved: that two wrongs really do make a right.
In three studies based on real (self-esteem and big-five personality) and simulated data, we show that the use of item parcels can, and typically does:
• substantially inflate the apparent goodness of fit
• bias substantive interpretations of the results

Purposes
The present investigation has a dual purpose in relation to critical measurement issues that face applied SEM researchers. The first is to explore potentially serious limitations in the use and misuse of item parcels in factor analysis. Item parcels are the sums or means of responses to several indicators designed to measure the same construct, resulting in a smaller number of parcels rather than a larger number of items. Yang, Nay, and Hoyle (2010) argue that item parceling is the prevailing approach for including scales with many items in factor analysis and SEM models. The second purpose is to compare the use of exploratory structural equation modeling (ESEM) and the traditional independent clusters confirmatory factor analysis model (ICM-CFA; with no cross-loadings, secondary factors, or correlated uniquenesses). We argue that the two issues are closely related, in that it is precisely the situation in which ESEM outperforms ICM-CFA that the use of item parcels is most fraught with potential problems.
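To make the parceling operation concrete, here is a minimal Python sketch using numpy and pandas (the respondent count, item names, and parcel assignments are invented for illustration, not taken from the studies discussed below):

import numpy as np
import pandas as pd

# Hypothetical responses: 500 respondents x 12 Likert items of one scale
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(500, 12)),
                     columns=[f"item{i}" for i in range(1, 13)])

# Each parcel is the mean of a disjoint subset of the items
parcel_map = {"p1": ["item1", "item2", "item3", "item4"],
              "p2": ["item5", "item6", "item7", "item8"],
              "p3": ["item9", "item10", "item11", "item12"]}
parcels = pd.DataFrame({name: items[cols].mean(axis=1)
                        for name, cols in parcel_map.items()})
print(parcels.head())   # 3 parcel scores now stand in for 12 item scores

The factor model is then fitted to the three parcels rather than to the twelve items.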
Perspectives
More Is Never Too Much. For real data, unidimensionality and pure indicators are an ideal to strive towards (i.e., a convenient fiction), but are rarely if ever achieved. As noted by MacCallum (2003, p. 134): "Studies based on the assumption that models are correct in the population are of limited value to substantive researchers who wish to use the models in empirical research." For simulated data based on a population generating model that approximates the unidimensionality assumption of ICM-CFA, it might be reasonable to have only a few indicators per factor (e.g., enough to model and control measurement error). However, real data rarely if ever have these ideal properties, so it is better to have more indicators than the 3 or 4 typical in SEM studies.
Historically, in order to enhance the generalizability of constructs, it was common to have 10-15+ items per scale for the most widely used psychological tests, and standardized achievement tests typically have considerably more than 15 items. At least implicitly, tests are typically constructed under the assumption that the available indicators are a subset of a potentially very large number of indicators of the same construct (McDonald, 2010). Analogous to concerns about the number of persons, in a perfect (simulated) data world it might be possible to find "truth" based on only a few participants, but with real data this would seriously undermine the generalizability of the findings. The same argument applies to making generalizations based on a single or only a few indicators of most constructs, leading Marsh et al. (1998) to conclude that more is never too much in relation to persons and items.
Use of Item Parcels. It is better to have more indicators per factor, but applied researchers are reluctant to incorporate large numbers of indicators into complex models. One widely employed compromise (see Marsh et al., 1988) is to collect many items, but to use item parcels in the analyses. In a recent review of parceling strategies, Sterba and MacCallum (2010; also see Bagozzi & Edwards, 1998; Bandalos & Finney, 2001; Little et al., 2002; Sass & Smith, 2006; Marsh & O'Neill, 1994; Marsh et al., 1998; Williams & O'Boyle, 2008) were generally positive about parceling under appropriate conditions, when:
• the focus is on relations between constructs (i.e., factor correlations or path coefficients) rather than scale development and the evaluation of item characteristics, and
• there is good a priori information to support the posited unidimensional factor structure: that each item loads on one and only one factor (i.e., there are no cross-loadings), with no correlated uniquenesses and no secondary factors (i.e., an ICM-CFA model fits the data at the item level).
However, Williams and O’Boyle (2008) indicated that in practice, applied researchers frequently do not explicitly test the dimensionality of their constructs, and some use parceling even though assumptions of unidimensionality are violated.
Following Bandalos (2008) and others, we distinguish between homogeneous and distributed parceling strategies. In homogeneous parceling strategies, closely related items that share systematic variation are placed in the same parcel. In distributed parceling strategies, items that share a source of systematic variation are distributed across different parcels, either randomly or systematically. Little et al. (2002; also see Kishton & Widaman, 1994) recommended against homogeneous strategies, which can result in problems (e.g., unstable solutions or unacceptable parameter estimates), whereas the distributed strategy was less prone to these problems. However, Coffman and MacCallum (2005) concluded that "how the parcels are constructed is less important than the fact that they are used" (p. 253). Indeed, if the constructs are truly unidimensional (a prerequisite for the appropriate use of parcels), then the different strategies should give similar results. However, the use of item parceling is not justified when the unidimensionality assumption is violated.
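To illustrate the distinction, a small Python sketch under the invented assumption that items 1-6 are positively worded and items 7-12 negatively worded, so that wording is the shared source of systematic variation:

# Items 1-6 share one method effect (positive wording);
# items 7-12 share another (negative wording).
pos = [f"item{i}" for i in range(1, 7)]
neg = [f"item{i}" for i in range(7, 13)]

# Homogeneous parceling: items sharing a method effect go in the SAME parcel
homogeneous = {"p1": pos[:3], "p2": pos[3:], "p3": neg[:3], "p4": neg[3:]}

# Distributed parceling: items sharing a method effect are SPREAD across parcels
distributed = {f"p{k+1}": [pos[2*k], pos[2*k+1], neg[2*k], neg[2*k+1]]
               for k in range(3)}
print(distributed)   # each parcel mixes positively and negatively worded items

Under either strategy the parcel scores are then computed as before; the strategies differ only in which items are averaged together.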
Methods
The present investigation is based on three studies, each of which demonstrates problems with the use of item parcels when the traditional ICM-CFA model does not fit responses to the individual items, and each of which compares ICM-CFA and ESEM solutions. Study 1 is based on responses to the 10-item Rosenberg Self-Esteem instrument collected on four occasions (see Marsh, Scalas & Nagengast, 2010). Study 2 is based on responses to two 12-item scales designed to measure Extraversion and Neuroticism (see Marsh et al., 2010). Study 3 is based on simulated data for two 12-item scales in which the population structure is purely unidimensional (i.e., ICM-CFA), a good approximation to simple structure, or a moderate approximation to simple structure. Statistical analyses were done with Mplus version 6.1, using robust maximum likelihood estimation for CFA, ESEM, and SEM solutions (see Asparouhov & Muthén, 2009; Marsh, Lüdtke, et al., 2010; Marsh, Muthén et al., 2009).
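As a rough illustration of the Study 3 setup, the following numpy sketch simulates item responses from a two-factor generating model with modest cross-loadings (the loading values, factor correlation, and sample size are invented for illustration; they are not the values used in the study):

import numpy as np

rng = np.random.default_rng(1)
n, true_r = 2000, 0.15                    # sample size, true factor correlation
main, cross, p = 0.70, 0.20, 12           # target/cross-loadings, items per factor

L = np.zeros((2 * p, 2))
L[:p, 0], L[p:, 1] = main, main           # target loadings
L[:p, 1], L[p:, 0] = cross, cross         # cross-loadings (approximate simple structure)
Phi = np.array([[1.0, true_r], [true_r, 1.0]])

F = rng.multivariate_normal([0, 0], Phi, size=n)          # factor scores
uniq_sd = np.sqrt(1 - (L**2).sum(1) - 2 * true_r * L[:, 0] * L[:, 1])
X = F @ L.T + rng.normal(size=(n, 2 * p)) * uniq_sd       # standardized item responses

Setting cross = 0 gives the purely unidimensional (ICM-CFA) population; small versus larger nonzero values give the good and moderate approximations to simple structure.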
Results: Summary and conclusions
Rosenberg Self-Esteem Instrument. The Rosenberg self-esteem instrument is one of the most widely used psychological instruments, but ICM-CFAs are complicated by the problem of positively and negatively worded items. Marsh et al. (2010) apparently resolved this issue, demonstrating that the best model includes one substantive self-esteem factor and two separate method factors associated with positively and negatively worded items. Failure to include the method factors resulted in systematically diminished test-retest correlations over the four occasions. A variety of parceling strategies all led to apparent support for a one-factor model that completely ignored the misspecification associated with these method effects.
Factor Structure of Personality Measure (Neuroticism and Extraversion). Big-five personality factors have dominated recent personality research, but the failure of ICM-CFA models to provide an acceptable fit has been a serious limitation. Marsh, Lüdtke, et al. (2010) demonstrated that the problem, at least in part, is reliance on the ICM-CFA model that requires each item to load on one and only one factor (i.e., to be purely unidimensional), and showed that ESEM apparently resolved this problem. Of particular substantive importance to the applied researcher, failure to account for cross-loadings in the ICM-CFA model led to systematic biases in the estimated factor correlations: .15 for the ESEM model with cross-loadings; .49 for the ICM-CFA model that did not allow cross-loadings; between .47 and .52 for the two-factor solutions based on parcels that ignored the cross-loading misspecification; and an implicit 1.0 for the one-factor solution based on the distributed parcel strategy. In this sense the use of parcels merely camouflaged the misspecification (in relation to goodness of fit), but still resulted in biased estimates of the factor correlation.
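The direction of this bias can be checked with a short calculation: under a generating model with cross-loadings, the model-implied correlation between the two unit-weighted scale composites (essentially what an ICM-CFA or parcel model reports) lands far above the true factor correlation. The loading values below are invented for illustration, not the paper's:

import numpy as np

main, cross, true_r, p = 0.70, 0.20, 0.15, 12
L = np.zeros((2 * p, 2))
L[:p, 0], L[p:, 1] = main, main
L[:p, 1], L[p:, 0] = cross, cross
Phi = np.array([[1.0, true_r], [true_r, 1.0]])

uniq = 1 - np.einsum('ij,jk,ik->i', L, Phi, L)       # unit item variances
Sigma = L @ Phi @ L.T + np.diag(uniq)                # implied item covariance matrix

a = np.r_[np.ones(p), np.zeros(p)]                   # composite weights, scale 1
b = np.r_[np.zeros(p), np.ones(p)]                   # composite weights, scale 2
r_comp = (a @ Sigma @ b) / np.sqrt((a @ Sigma @ a) * (b @ Sigma @ b))
print(round(r_comp, 2))                              # ~0.59, versus true_r = 0.15

With cross = 0 the composite correlation converges on true_r (attenuated only slightly by measurement error), which is why the bias disappears exactly when the unidimensionality assumption holds.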
Study 3: CFA, ESEM, and the Use of Item Parcels with Simulated Data. For simulated data based on known population generating models, it was possible to demonstrate empirically the substantial positive bias in the factor correlations from ICM-CFA solutions based on items and on each of the parceling strategies. For simulated data that perfectly met the unidimensionality assumption underlying the ICM-CFA model and the use of parcels, both the ICM-CFA and ESEM solutions resulted in good fits to the data and accurate parameter estimates. However, even for the good approximation to simple structure, in which the cross-loadings were smaller than those likely to be found in applied research, the ICM-CFA item and parcel solutions resulted in substantially biased estimates of the known population correlations, even though the fits of the parcel solutions were good. Without reference to the ESEM solution, the applied researcher might well conclude that the data were sufficiently close to unidimensional to justify the use of parcels, which resulted in biased parameter estimates.
Scientific Significance: Recommendations For The Use Of Item Parcels
1. Avoid using parcels to camouflage method effects, cross-loadings, and other sources of misspecification at the item level, even if they seem to be trivial and substantively unimportant. As demonstrated here, it is better to systematically model the misspecification at the item level; it might turn out to be substantively important (as in Study 1) and failure to do so is likely to bias parameter estimates (as in all three studies).
2. The use of item parcels is justified only when there is good support for the unidimensionality of all the constructs at the item level. Tests of this assumption should be conducted for the complete model at the item level, as the evaluation of each construct separately ignores many forms of misspecification (e.g., cross-loadings, or method effects that are common to different constructs). As demonstrated here, a useful test of this requirement is the comparison of the a priori ICM-CFA model and the corresponding ESEM model. Item parcels are justified only when goodness of fit and parameter estimates based on the more parsimonious ICM-CFA model are good and similar to those based on the corresponding ESEM. If neither the ICM-CFA nor the ESEM model fits the data, explore alternative (ex post facto) solutions at the item level with appropriate caution.
In conclusion, the a priori use of item parcels is never justified without clear empirical support for the unidimensionality assumption. ESEM provides a potentially useful comparison to ICM-CFA solutions for testing this assumption, and a more accurate representation of the data when the assumption is not met.
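For readers working in Python rather than Mplus, a rough sketch of this kind of check, assuming the semopy and factor_analyzer packages are installed (neither implements ESEM proper, so an oblique EFA serves here only as a crude proxy for inspecting the cross-loadings that the ICM-CFA forbids). It reuses the simulated data X from the Methods sketch above:

import pandas as pd
import semopy
from factor_analyzer import FactorAnalyzer

df = pd.DataFrame(X, columns=[f"x{i}" for i in range(1, 25)])

# ICM-CFA: each item loads on one and only one factor
desc = ("F1 =~ " + " + ".join(df.columns[:12]) + "\n" +
        "F2 =~ " + " + ".join(df.columns[12:]))
cfa = semopy.Model(desc)
cfa.fit(df)
print(semopy.calc_stats(cfa))            # inspect chi2, CFI, RMSEA, etc.

# Oblique EFA (rough proxy for ESEM): all cross-loadings freely estimated
efa = FactorAnalyzer(n_factors=2, rotation="oblimin")
efa.fit(df.values)
print(efa.loadings_.round(2))            # nonzero cross-loadings recovered
print(efa.phi_.round(2))                 # factor correlation close to true_r

If the ICM-CFA fits well and its factor correlation matches the oblique EFA's, parceling is less likely to mislead; if not, the item-level misspecification should be modelled directly rather than parceled away.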

You may leave the list at any time by sending the command

SIGNOFF allstat

to [log in to unmask], leaving the subject line blank.
