Dear FSlers,

in the preprocessing for (time-concat) group PICA, one of the steps is a normalisation that scales the mean value of each subject's whole time series to a fixed value (I think 10,000), with the variance scaled proportionally. Since mean percentage signal changes are calculated later anyway, is there a specific reason for this step?
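To make sure I have understood the step correctly, here is a minimal sketch of what I mean by this grand-mean scaling (the target of 10,000 is my assumption; the function name and shapes are hypothetical, not MELODIC's actual code):

```python
import numpy as np

def grand_mean_scale(data, target=10000.0):
    """Scale a subject's data (voxels x timepoints) so its global
    mean intensity equals `target`. Since every value is multiplied
    by the same factor, the variance scales with the factor squared,
    i.e. proportionally in the sense described above."""
    factor = target / data.mean()
    return data * factor

# toy subject with an arbitrary baseline intensity around 800
rng = np.random.default_rng(0)
subject = rng.normal(loc=800.0, scale=20.0, size=(500, 100))

scaled = grand_mean_scale(subject)
print(round(scaled.mean(), 2))  # global mean is now 10000.0
```

If this is all the step does, then later conversion to percent signal change would indeed remove the scaling again, which is the core of my question.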

A second, totally unrelated question regards the now more commonly recommended high-model-order group PICA approach, intended to avoid overlooking signal sources in the data.
From studying a number of papers (e.g. Kiviniemi et al., HBM 2009), I wonder how the "overfitting" problem is generally viewed.

We clearly extract more components at a model order of 70 (about 40 of which seem to be neuronal, as suggested), and these also seem to carry disease effects in the respective comparisons in clinical samples. But how does this fit with the automated estimation of, e.g., 19 components in our case?

To put it more simply: how does the estimated number of components relate to high-order PICA, in which this estimation is overridden?
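To illustrate the apparent mismatch I mean, here is a toy simulation (my own sketch with assumed numbers; the eigenvalue-threshold rule below is deliberately crude and is not MELODIC's actual Laplace/Minka estimator). Data with a known number of strong sources yields an eigenspectrum-based estimate near that number, regardless of the fixed high model order one might choose:

```python
import numpy as np

# Simulate timepoints-x-voxels data with 19 strong sources plus noise,
# loosely mirroring the "19 estimated components" case described above.
rng = np.random.default_rng(1)
n_sources, n_timepoints, n_voxels = 19, 150, 2000
mixing = rng.normal(size=(n_timepoints, n_sources))
sources = rng.normal(size=(n_sources, n_voxels))
data = mixing @ sources + 0.5 * rng.normal(size=(n_timepoints, n_voxels))

# Eigenspectrum of the temporal covariance, largest first.
eigvals = np.linalg.eigvalsh(np.cov(data))[::-1]

# Crude dimensionality estimate: count eigenvalues that clearly
# exceed the noise floor (taken here as the median eigenvalue).
noise_floor = np.median(eigvals)
estimated = int(np.sum(eigvals > 3 * noise_floor))

fixed_order = 70
print(estimated, fixed_order)  # the estimate recovers roughly the 19
                               # simulated sources; a fixed order of 70
                               # would split signal and noise further
```

So in such a simulation the extra components at order 70 would largely subdivide signal or absorb noise, which is exactly the behaviour I am unsure how to interpret in our clinical data.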

Thanks a lot for any opinions here,
Philipp