> From [log in to unmask] Wed Apr 26 14:39:59 2000
> Date: Wed, 26 Apr 2000 09:33:42 -0400
> Reply-To: [log in to unmask]
> Organization: CIP PET Centre
> X-Accept-Language: en
> Subject: power estimates
Dear Doug,
> One of the major issues in the use of any analysis technique is its
> "real-life" ability to detect change. We wanted to get an idea of the
> degree of change (i.e. % change) between two groups that can be
> detected using SPM and the usual number of subjects.
>
> To achieve this we took [18F]-setoperone scans (measuring cortical
> 5-HT2 receptors) of 18 normal subjects. We randomly assigned them to
> two groups. We observed that SPM did not report any group differences
> at baseline. We then added increases of 5%, 15%, 35% and 50% in one of
> the groups, in a bilateral frontal region, in each original image of
> that group's members. We compared the two groups using a single-subject
> design with no covariates of interest and age as a nuisance variable,
> since [18F]-setoperone binding declines strongly with age. Scans were
> normalized with a ligand-specific template and smoothed at 12 mm. Since
> these were parametric images, no global scaling was used.
>
> Much to our surprise, SPM did not detect these changes at its
> conventional levels of significance. Only the 50% increase reached
> p<0.01 at the corrected cluster level. Please see the attachment.
>
> This raises some interesting issues regarding the power of SPM to
> detect such changes. Looking at the result outputs, however, it appears
> that the activation starts to emerge in the uncorrected maps well
> ahead of the false positives. Could it be that the conventional
> criteria (corrected values for cluster extent K and voxel height) are
> far too conservative for this application?
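The simulation procedure described above can be sketched in a few lines. This is a hedged toy version, not the actual setoperone analysis: the group size, voxel count, ROI extent, noise level and Bonferroni correction (standing in for SPM's random-field correction) are all illustrative assumptions.

```python
# Toy version of the simulation: two random groups of 9 "images", a
# fractional signal increase added to an ROI in one group, and a
# voxel-wise two-sample t-test. All numbers are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group, n_voxels = 9, 1000
roi = slice(0, 50)                       # voxels carrying the added signal

def simulate(pct_increase):
    """Return the minimum ROI p-value for one simulated comparison."""
    g1 = rng.normal(100.0, 20.0, (n_per_group, n_voxels))
    g2 = rng.normal(100.0, 20.0, (n_per_group, n_voxels))
    g2[:, roi] *= 1.0 + pct_increase / 100.0   # add the % change
    t, p = stats.ttest_ind(g2, g1, axis=0)
    return p[roi].min()

for pct in (5, 15, 35, 50):
    p_min = min(simulate(pct) for _ in range(20))
    # Bonferroni stands in for SPM's random-field correction here
    print(f"{pct:2d}% increase: min ROI p (uncorrected) = {p_min:.2e}, "
          f"corrected = {min(p_min * n_voxels, 1.0):.2e}")
```

With these assumed noise levels the smaller increases survive only uncorrected, mirroring the behaviour reported above.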
The power of any analysis (including SPM) depends on (i) the efficiency
of the design and estimation, (ii) the error variance of the observation
model and (iii) the specification of the alternative hypothesis (i.e.
what you want the analysis to be sensitive to). Your simulations are
very nice and suggest that the error variance in your data is large
relative to 10% changes.
You are right that corrected p values are very conservative (i.e.
engender low power) but if you knew where to look you would use
uncorrected or small-volume-corrected p values. I agree with the
previous advice that normalization would be appropriate: Even if these
are parametric data you are still interested in regionally-specific
effects (Global effects on binding can be assessed with a t test on the
global variates themselves). It is likely that error variance will be
reduced after global normalization. Increasing the smoothing will also
increase sensitivity, as will increasing the number of subjects.
The critical thing here is that your power analysis is specific to your
data and does not generalize to other data types or acquisition
parameters. Your analysis is very useful for you because you can now
explore the parameter space of analysis factors (e.g. smoothing FWHM,
subject numbers, etc.) to find the most sensitive analysis.
I hope this helps - Karl
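The suggested exploration of the parameter space can itself be done by Monte Carlo. A minimal sketch, for one voxel and one factor (group size), assuming an illustrative 15% change and a 20% between-subject standard deviation — neither figure comes from the actual data:

```python
# Hedged sketch of exploring one analysis factor (subjects per group):
# Monte Carlo power of a two-sample t-test at an assumed 15% change and
# an assumed between-subject SD of 20% of the mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def power(n_per_group, pct_change, sd=20.0, alpha=0.05, n_sim=2000):
    """Fraction of simulated two-sample t-tests reaching alpha."""
    hits = 0
    for _ in range(n_sim):
        g1 = rng.normal(100.0, sd, n_per_group)
        g2 = rng.normal(100.0 * (1 + pct_change / 100.0), sd, n_per_group)
        if stats.ttest_ind(g2, g1).pvalue < alpha:
            hits += 1
    return hits / n_sim

for n in (9, 18, 36):
    print(f"n = {n:2d} per group: power = {power(n, 15):.2f}")
```

The same loop can be wrapped around smoothing FWHM or any other factor to find the most sensitive analysis for a given data set.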