Hey all,
I was trying to run an SVM classification scheme on ICA-decomposed single-trial estimates for MRI data, similar to this study (https://www.ncbi.nlm.nih.gov/pubmed/22227050). Using the Haxby 2001 data, the SVM could predict when people are looking at faces vs. houses.
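For anyone following along, here is a minimal sketch of the kind of pipeline I mean (ICA for dimensionality reduction, then a linear SVM). The data here are synthetic stand-ins for single-trial voxel estimates, and the component count and class shift are arbitrary choices for illustration, not the values from my actual analysis:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.RandomState(0)

# Hypothetical stand-in for single-trial estimates: 80 trials x 500 voxels
X = rng.randn(80, 500)
y = np.repeat([0, 1], 40)   # 0 = face trials, 1 = house trials
X[y == 1] += 0.3            # inject a weak class difference so there is signal

# ICA down to a handful of components, then a linear SVM on the mixing weights
clf = make_pipeline(
    FastICA(n_components=10, whiten="unit-variance", random_state=0),
    SVC(kernel="linear"),
)

scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```

With the injected signal this scores well above chance; the question is what happens to the null distribution under the same pipeline.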
The weird thing was that when I ran a permutation test, training the SVM on data with randomly assigned labels and testing on the truly labeled data, the SVM did significantly worse than chance. If you take a look here (https://github.com/jbdenniso/Dim_Reduction/blob/master/Decomposition_try.ipynb), four dimensionality reduction schemes (PCA, ICA, Factor Analysis, and Dictionary Learning) were run on the data.
Each graph shows an analysis for each patient on differently reduced data. The green line is the cross-validated accuracy, the black line is what should be chance, and the blue bars are a histogram of accuracy scores from models trained on randomly assigned labels.
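To be concrete about what the blue histogram is built from, here is a sketch of the permutation scheme as I described it: shuffle only the training labels inside each CV fold, fit the SVM, and score against the true held-out labels. Again, the data, fold count, and permutation count here are illustrative placeholders, not my real settings:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

rng = np.random.RandomState(0)

# Hypothetical reduced data: 80 trials x 20 components, labels carry no signal
X = rng.randn(80, 20)
y = np.repeat([0, 1], 40)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

null_scores = []
for _ in range(100):
    fold_scores = []
    for train, test in cv.split(X, y):
        y_perm = rng.permutation(y[train])          # shuffle training labels only
        clf = SVC(kernel="linear").fit(X[train], y_perm)
        fold_scores.append(clf.score(X[test], y[test]))  # score on TRUE labels
    null_scores.append(np.mean(fold_scores))

# On label-independent data like this, the null mean typically sits near 0.5;
# my histograms instead sit systematically below the chance line.
print(np.mean(null_scores))
```

On data like this, where the labels carry no information, the null distribution centers on chance; what I'm seeing on the real reduced data is the whole histogram shifted below it.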
Does anyone know what's going on here? I don't understand how it's systematically doing worse than a coin flip. Does ICA just make the algorithm choose 'bad' support vectors? Is this affecting the accuracy on the real data?
Best,
Jeff
########################################################################
To unsubscribe from the FSL list, click the following link:
https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=FSL&A=1