Hello to the SPM community
As I was reading Andreas's question, I felt concerned by the same problem.
The original study consists of 250 scans * 12 subjects * 2 sessions (6000 scans).
As I previously thought that SPM99b might have the same limitation as winSPM (~800 scans), I first performed a "data reduction", namely making 1 mean image out of every 10 after realignment and before performing the rest of the analysis (~500 scans remaining).
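For what it is worth, the reduction amounts to something like the following (a minimal NumPy sketch, not the actual SPM code I used; the array shapes and the block size of 10 are only illustrative):

  import numpy as np

  # Hypothetical stack of realigned scans: one row per scan, one column per voxel.
  n_scans, n_voxels, block = 6000, 1000, 10
  scans = np.random.rand(n_scans, n_voxels)   # stand-in for the realigned data

  # Average each consecutive block of 10 scans into one mean image, so the
  # statistical analysis sees n_scans / 10 images instead of n_scans.
  n_blocks = n_scans // block
  mean_images = scans[:n_blocks * block].reshape(n_blocks, block, n_voxels).mean(axis=1)
  print(mean_images.shape)   # (600, 1000)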
By the way, does anybody have a clear idea of the upper limit on the number of scans? (4800 was too high in Richard's mail http://www.mailbase.ac.uk/lists/spm/1999-07/0129.html)
Unfortunately, as there is a large number of covariates, there is a tremendous loss of degrees of freedom (d.f.), with some influence on the level of statistical inference (~250 residual d.f.).
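To make the book-keeping explicit (this is only my understanding of how the residual d.f. are counted; the design-matrix rank below is simply what the arithmetic implies from our numbers, not an exact figure):

  # Residual d.f. = number of images entering the analysis
  #                 minus the rank of the design matrix (conditions + covariates).
  n_images = 500       # mean images entering the analysis (approximate)
  rank_design = 250    # assumed approximate rank of our design matrix
  residual_df = n_images - rank_design
  print(residual_df)   # ~250 residual d.f.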
We have interesting results that look very stable even at high uncorrected thresholds (less statistically significant). Unfortunately the thresholds used remain weak (alpha < 0.001 uncorrected for the main subtraction, alpha < 0.025 for the factorial design - gulp!).
Surely the right way would be to perform a true random-effects study, but some of the interesting results of the factorial study may not survive this further loss of d.f.
This will probably sound absurd to statisticians, but I was wondering whether it would be possible to reset the d.f. to 10 * the number of scans minus the number of covariates.
As it should only be used when computing the {T} statistic, it might require limited work to change it?
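To give an idea of what such a reset would change on the uncorrected threshold side, here is a small sketch (assuming scipy is available; the d.f. values are only the approximate ones above, with 10 * 500 - 250 = 4750 standing in for the hypothetical "reset" value):

  from scipy.stats import t

  alpha = 0.001
  for df in (250, 4750):   # current residual d.f. vs. the hypothetical reset value
      # one-sided critical t value for an uncorrected threshold of alpha
      print(df, t.ppf(1 - alpha, df))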
This may sound like tinkering, but is it really that absurd?
Thanks for any comment.
Best regards
Jack