This is a very interesting point. Recently I wrote a paper with some 
colleagues about the factor structure of a test, in which we rejected a 
structural model because, even though it had good fit indexes, it had 
nonsignificant factor loadings.

This, together with other observations, led me to think that the choice 
of cutoffs for goodness-of-fit indexes is a genuinely questionable issue.

As references for my cutoffs, I always use two papers:


Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in 
covariance structure analysis: Conventional criteria versus new 
alternatives. Structural Equation Modeling, 6, 1–55.

Schermelleh-Engel, K., Moosbrugger, H., & Müller, H. (2003). Evaluating 
the fit of structural equation models: Tests of significance and 
descriptive goodness-of-fit measures. Methods of Psychological Research 
Online, 8, 23–74.
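
(For concreteness, the cutoffs most often taken from these papers are, 
approximately, SRMR at or below .08 in combination with RMSEA at or 
below .06 and CFI at or above .95, per Hu and Bentler, 1999.)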


However, I am starting to think that more rigorous procedures for 
determining cutoff levels, different from the Monte Carlo procedure, may 
be necessary. Since I began using Mplus, I have found it more difficult 
to obtain models with good fit indexes. Is this only my impression?
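
To make this concrete, below is a minimal Python sketch of the kind of 
computation involved (my own illustration, not the procedure from any of 
the papers cited here; the number of indicators, the loading values, and 
the size of the omitted residual covariance are all assumed). It fits a 
one-factor model to two population covariance matrices that contain 
exactly the same misspecification, an omitted residual covariance of 
0.1, differing only in the size of the standardized loadings (0.40 vs. 
0.90), and reports the population RMSEA for each:

import numpy as np
from scipy.optimize import minimize

# Population covariance of a standardized one-factor model with equal
# loadings, plus one omitted residual covariance as the misspecification.
def population_cov(p, loading, resid_cov):
    lam = np.full(p, loading)
    sigma = np.outer(lam, lam) + np.diag(1.0 - lam**2)
    sigma[0, 1] += resid_cov  # the fitted model will ignore this term
    sigma[1, 0] = sigma[0, 1]
    return sigma

# ML fit function F = log|Sigma| + tr(S Sigma^-1) - log|S| - p for a
# one-factor model (p loadings + p residual variances, factor variance = 1).
def discrepancy(params, S, p):
    lam, theta = params[:p], params[p:]
    sigma = np.outer(lam, lam) + np.diag(theta)
    return (np.linalg.slogdet(sigma)[1] + np.trace(S @ np.linalg.inv(sigma))
            - np.linalg.slogdet(S)[1] - p)

def population_rmsea(S, p):
    df = p * (p + 1) // 2 - 2 * p   # unique moments minus free parameters
    start = np.repeat(0.5, 2 * p)
    bounds = [(None, None)] * p + [(1e-4, None)] * p  # keep variances positive
    fit = minimize(discrepancy, start, args=(S, p), method="L-BFGS-B",
                   bounds=bounds)
    return np.sqrt(max(fit.fun, 0.0) / df)  # population RMSEA = sqrt(F0/df)

p = 6  # assumed number of indicators
for loading in (0.4, 0.9):
    S = population_cov(p, loading, resid_cov=0.1)
    print(f"loadings = {loading:.1f}: population RMSEA = "
          f"{population_rmsea(S, p):.3f}")

Because the ML fit function weights the residuals by the inverse of the 
implied covariance matrix, the same omitted covariance is penalized far 
more heavily when the unique variances are small, that is, when 
measurement quality is high. This is exactly the dependence on 
measurement quality that the article quoted below describes.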


Best Regards,

Marco Tommasi


On 08/06/2017 23:44, Paul Barrett wrote:
>
> Well, here is an article that is going to change practice worldwide, as 
> Hu and Bentler’s 1999 article did .. to say nothing of how reviewers and 
> editors will adjudge SEM/CFA articles for publication in future.
>
> Just out ..
>
> McNeish, D., An, J., & Hancock, G. R. (2017). The thorny relation 
> between measurement quality and fit index cutoffs in latent variable 
> models. /Journal of Personality Assessment/, in press, 1–11. 
> (http://www.tandfonline.com/doi/abs/10.1080/00223891.2017.1281286)
>
> *Abstract*
>
> Latent variable modeling is a popular and flexible statistical 
> framework. Concomitant with fitting latent variable models is 
> assessment of how well the theoretical model fits the observed data. 
> Although firm cutoffs for these fit indexes are often cited, recent 
> statistical proofs and simulations have shown that these fit indexes 
> are highly susceptible to measurement quality. For instance, a root 
> mean square error of approximation (RMSEA) value of 0.06 
> (conventionally thought to indicate good fit) can actually indicate 
> poor fit with poor measurement quality (e.g., standardized factor 
> loadings of around 0.40). Conversely, an RMSEA value of 0.20 
> (conventionally thought to indicate very poor fit) can indicate 
> acceptable fit with very high measurement quality (standardized factor 
> loadings around 0.90). Despite the wide-ranging effect on applications 
> of latent variable models, the high level of technical detail involved 
> with this phenomenon has curtailed the exposure of these important 
> findings to empirical researchers who are employing these methods. 
> This article briefly reviews these methodological studies in minimal 
> technical detail and provides a demonstration to easily quantify the 
> large influence measurement quality has on fit index values and how 
> greatly the cutoffs would change if they were derived under an 
> alternative level of measurement quality. Recommendations for best 
> practice are also discussed.
>
> From the final paragraph of the article:
>
> "As a final note to put the implications of these findings into 
> perspective, consider again the two sets of AFIs /[Approximate Fit 
> Indices]/ mentioned near the beginning of the article. As a reminder, 
> in Model A, RMSEA = 0.040, SRMR = 0.040, and CFI = 0.975; and in Model 
> B, RMSEA = 0.20, SRMR = 0.14, and CFI = 0.775. Under current practice 
> where the HB /[Hu and Bentler]/ criteria have become common reference 
> points, Model A would be universally seen as fitting the data better 
> than Model B, which would likely be desk-rejected at many reputable 
> journals. However, if one does not somehow condition on measurement 
> quality, this assertion can be highly erroneous. If the factor 
> loadings in Model A had standardized values of 0.40 and the factor 
> loadings in Model B had standardized values of 0.90, Model B actually 
> indicates better data–model fit and has higher power to detect the 
> same moderate misspecification in the same model based on the results 
> of our illustrative simulation study (assuming multivariate 
> normality). Reverting back to Table 1, about 25% of moderately 
> misspecified models produced SRMR below 0.04, about 5% of models 
> resulted in CFI values below 0.975, and nearly 95% of models produced 
> an RMSEA value below 0.04 with poor measurement quality. Conversely, 
> with excellent measurement quality, essentially none of the 
> misspecified models produced an SRMR value less than 0.14, an RMSEA 
> value less than 0.20, or a CFI value less than 0.775. Even though the 
> AFI values of Model B appear quite poor on first glance, under certain 
> conditions, even these seemingly unsatisfactory values could indicate 
> acceptable fit with possibly only trivial misspecifications present in 
> the model. More important, the seemingly poor Model B AFI values 
> better classify models with excellent measurement quality compared to 
> the seemingly pristine Model A AFI values when measurement quality is 
> poor. *To put the thesis of this article into a single sentence, 
> information about the quality of the measurement must be reported 
> along with AFIs for the values to have any interpretative value*."
>
> And that bit in bold is what I’ll be demanding in future from every 
> article author who uses SEM in their analyses, whether as reviewer or 
> associate editor.
>
> Adjudging model fit has moved from the application of simple ‘golden 
> rules’ to a trickier, more thoughtful analytical overview.
>
> Regards .. Paul
>
> /Chief Research Scientist/
>
> *Cognadev.com*
>
> *W*: www.cognadev.com
>
> *W*: www.pbarrett.net
>
> *E*: [log in to unmask]
>
> *M*: +64-(0)21-415625
>

-- 
Dott. Marco Tommasi, Ph.D.
Dipartimento di Scienze Psicologiche, della Salute e del Territorio
Università degli Studi di Chieti-Pescara

Department of Psychological, Health and Territorial Sciences
University of Chieti-Pescara

Via dei Vestini 31
66100 Chieti
Italy

tel.: +39 0871 355 5890
e-mail: [log in to unmask]


