Would anyone like to contribute thoughts on this problem? I'm
critiquing a study in which adolescents treated for behavior problems
at a public facility are compared to normal high school kids at Time
1 and after 1 year. There are four continuous measures of behavior
problems. The mean scores for the treated group decrease over the
year, but the mean scores for the controls do not. However, the
initial levels are very different:
                    Measure 1  Measure 2  Measure 3  Measure 4
Time 1
  Tx group:           .12        .12        .47        .37
  Controls:           .01        .03        .05        .04
1-year follow-up
  Tx group:           .06        .06        .23        .23
  Controls:           .01        .04        .02        .04
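
Averaging across the four measures, just to make the pattern
explicit: the Tx group falls from a mean of .27 at Time 1 to .145 at
follow-up, a drop of .125, while the controls go from .0325 to .0275,
a drop of only .005. So whatever interaction the ANOVA finds is
carried almost entirely by the treated group's decline from its much
higher starting point.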
One school of thought holds that these problems resolve
spontaneously; however, the authors claim a treatment effect is
demonstrated, pointing to the significant Group x Time interaction in
their repeated-measures ANOVA as evidence. Intuitively I want to say
that the controls are already at a baseline level that is relatively
normal for their age, so no further decrease should be expected of
them; since the scores are bounded below near zero, the controls have
no room to show an absolute decline comparable to the treated
group's, and the interaction therefore proves nothing. Is there a
more mathematical way to critique this study? Is the linear model
violated in some way by there being a 'floor' to the phenomenon?
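
For what it's worth, here is a minimal simulation sketch of my worry,
in Python (all parameter values below are my own invented
assumptions, not taken from the study): if both groups' scores decay
by the same proportion toward a floor at zero, with no treatment
effect anywhere, the Group x Time interaction still comes out
"significant" simply because the treated group starts higher.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 100  # hypothetical group size

    # Invented baselines: treated group starts high, controls near the floor.
    tx_t1 = np.clip(rng.normal(0.30, 0.10, n), 0.0, None)
    ct_t1 = np.clip(rng.normal(0.03, 0.02, n), 0.0, None)

    # Identical spontaneous resolution in both groups: scores halve over
    # the year, plus noise, censored at the floor of zero. No treatment
    # effect enters anywhere.
    def follow_up(t1):
        return np.clip(0.5 * t1 + rng.normal(0.0, 0.03, len(t1)), 0.0, None)

    tx_t2 = follow_up(tx_t1)
    ct_t2 = follow_up(ct_t1)

    # In a 2 (group) x 2 (time) design, the Group x Time interaction F is
    # the square of the two-sample t on the change scores, so test those.
    t, p = stats.ttest_ind(tx_t1 - tx_t2, ct_t1 - ct_t2)
    print("interaction t = %.2f, p = %.2g" % (t, p))

The "significant" interaction here reflects nothing but the baseline
difference plus the floor: the controls cannot show an absolute
decline comparable to the treated group's, even though the generating
process is identical for both.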
David Klein