I don’t know why you would split the dataset if your main goal is to remove noise from your data for further analysis, rather than to retrain and test FIX.

That being said, if you are happy with FIX’s current performance, you wouldn’t need to train it again.  
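
If you want to put a number on that performance rather than just eyeballing a few subjects, one option is to hand-label the components for a handful of subjects and run FIX's built-in accuracy testing against those labels. If I remember the interface correctly it is along the lines of the call below, but do check the usage text printed by running fix with no arguments; the paths here are placeholders, and each listed FEAT/MELODIC directory would need to contain your hand classification (a hand_labels_noise.txt file, as far as I recall):

    fix -C /path/to/fix/training_files/Standard.RData accuracy_report subj01.feat subj02.feat subj03.feat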

Peace,

Matt.

From: Andreas Werner <[log in to unmask]>
Reply-To: FSL - FMRIB's Software Library <[log in to unmask]>
Date: Tuesday, March 25, 2014 at 12:43 PM
To: <[log in to unmask]>
Subject: [FSL] Question on FIX usage

Dear FSL Experts,
Thank you for developing FIX, it's a very useful tool!
I have used it on a standard resting-state dataset with good success, using the supplied training file Standard.RData.
I checked the FIX results on a few subjects, and FIX does indeed do a very good job of identifying bad components.
The acquisition parameters of our dataset and those of the dataset used for Standard.RData (the latter given in parentheses, as reported on the FIX wiki page) are quite similar:
TR: 2 s (3 s)
Resolution: 3 x 3 x 3.5 mm (3.5 x 3.5 x 3.5 mm)
Length: 6 min (6 min)
TE: 30 ms (?)
Preprocessing: 6 mm smoothing and 100 s high-pass filter (default FEAT preprocessing, i.e. 100 s high-pass filter and 5 mm smoothing, I assume).
 
I understand that ideally one would train FIX on hand-labelled data from any new protocol - but I do not have an independent dataset for training FIX (and I hesitate to split the study data itself into training and test sets).
Given the similarity in acquisition parameters and FIX's good performance on the data at hand, would it be an acceptable approach to rely on the supplied Standard.RData training file?
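For reference, the call I have in mind is essentially the standard one from the FIX documentation, just pointed at the supplied training file; the path, the threshold of 20 and the motion-cleanup options below are only placeholders for illustration, not values I have settled on:

    fix <subject>.feat /path/to/fix/training_files/Standard.RData 20 -m -h 100

i.e. feature extraction, classification against Standard.RData at the given threshold, and cleanup that also regresses out the motion confounds, with -h set to match the 100 s high-pass filter used in preprocessing.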
 
Thank you for any advice!
Best regards,
AW