And at the 0.01 level as well.
But it will disappear at the 0.001 level.


miao wei <[log in to unmask]> wrote on Wednesday, October 8, 2014:
And this is at the 0.05 level in FSLView.

miao wei <[log in to unmask]> wrote on Wednesday, October 8, 2014:
Good news! I tried viewing the uncorrected FSL VBM results and found that before any correction they only show significance in the parietal lobe, which replicates our SBM results in the parietal lobe!


miao wei <[log in to unmask]> wrote on Wednesday, October 8, 2014:
"use fslstats or fslmeants to calculate the volumes of GM in each of these 22 ROIs, and see if these are significant after correcting for the 22. Then you can show voxelwise results for the whole brain, and also the regional results"
So I should extract each ROI's volume in FSL and then correct across the 22 p-values?

But all my significant results are in SBM; what if the FSL ROIs don't give me significance? They are still two different methods, so it is possible that they give different information.

2014-10-08 13:06 GMT-07:00 miao wei <[log in to unmask]>:
Why this? ("but if any of the 22 ROIs isn't in the cortex (e.g., subcortical structures), then SBM isn't an option")
So VBM first, and then what?

2014-10-08 12:50 GMT-07:00 Anderson M. Winkler <[log in to unmask]>:

PS: Not sure this needs to be mentioned, but if any of the 22 ROIs isn't in the cortex (e.g., subcortical structures), then SBM isn't an option, and you need to use VBM (or FIRST, but it's a different thing then).


On 8 October 2014 20:36, Anderson M. Winkler <[log in to unmask]> wrote:
Hi Miao,

It looks like this is becoming messy... First thing is that when working with volumes on surfaces (as well as areas) interpolation methods become a key issue, and I'm not sure how each software package is currently handling this. This is a bit more complicated than for thickness, so it needs extra attention. If you go for SBM, my suggestion is to really embrace it, doing thickness and area separately, and forgetting all about VBM. You can then use, e.g., FreeSurfer, although other packages I believe can do lots of interesting things (have a look also at BrainVISA, Caret, SUMA, CIVET, etc). This means essentially re-doing all your study from scratch (!).

However, now that you say that the ROIs were in fact used to compute a summary volume in your SBM (as opposed to being masks inside which the analysis continued to be vertexwise or voxelwise, which from the older emails I believe is what you had originally done with the VBM analysis in randomise), then maybe you can go back to the VBM data (that wasn't significant across the whole brain) and use fslstats or fslmeants to calculate the volumes of GM in each of these 22 ROIs, and see if these are significant after correcting for the 22. Then you can show voxelwise results for the whole brain, and also the regional results (which have less localising power, but have higher statistical power if the effect roughly matches or is larger than each of the respective ROIs). This all would go in the discussion of the limitations of the paper.
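Just to illustrate what that could look like on the command line (only a sketch: the ROI mask filenames are placeholders, and the masks are assumed to be binary and already in the same space as the 4D modulated GM image):

for roi in roi_01 roi_02 roi_03 ; do    # ...and so on, up to roi_22
    # mean modulated GM within the ROI, one value per subject (one row per volume)
    fslmeants -i GM_mod_merg_s3.nii.gz -m ${roi}.nii.gz -o GM_${roi}.txt
done

Each output text file then contains one number per subject, which can be tested in any stats package, correcting across the 22 ROIs (e.g., Bonferroni, 0.05/22).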

My choice would be SBM with area and thickness. It's a lot more work, but the results are also more interesting. Nonetheless, maybe the already preprocessed VBM data could be a low-hanging fruit with interesting results for the 22 regions, and that may be worth reporting.

All the best,

Anderson



On 8 October 2014 19:38, miao wei <[log in to unmask]> wrote:
Hi Anderson,
Thank you for the quick reply.
I think I am stuck between VBM and SBM.
FSL provides VBM volumes and a method for multiple comparison correction, but the corrected results showed no significance.

BrainSuite (SBM) has two ways to do this:
1. the whole-brain mode can do only thickness analysis and uses FDR for the correction (no significance after correction);
2. the ROI approach (thickness, cortical surface area, GM volume, WM volume and total volume), where GM, WM and total volumes showed significance after FDR correction (corrected over all selected ROIs). Looking back, I think we got these significant results because we were correcting not over the entire brain but only over some regions.

"I'd try to do a whole-brain analysis using SBM (there are software that does this), and then use the regions to mask areas of interest from the literature. "

So you mean I should do the volume SBM? Which software should I use to do whole-brain corrections on volumes? BrainSuite's final product for volumes is a nii.gz file.

"In fact, I'd drop volumes altogether, and use separately thickness and surface area. "
I tried thickness in BrainSuite and did the correction, but nothing survived. Surface area is the same as volumes: they don't have tools for corrections.

To sum up, at this point I should:
1. find software to do the correction for BrainSuite volumes,
2. expect the results to more likely be negative,
3. then select ROIs based on the paper, and
4. report the significant ROIs after FDR.

Does that make more sense?
Really appreciate it!!
Miao 



2014-10-08 10:06 GMT-07:00 Anderson M. Winkler <[log in to unmask]>:

Hi Miao,

Presenting VBM, then ROI results with SBM (volumes) seems a twisted way of showing results. I don't think a reviewer would be satisfied, and even less the readers of the paper later. I'd try to do a whole-brain analysis using SBM (there is software that does this), and then use the regions to mask areas of interest from the literature. Although VBM has a good relationship with SBM volumes, the correlation isn't perfect, and presence or absence of an effect in one cannot guarantee the same result in the other.

In fact, I'd drop volumes altogether, and use separately thickness and surface area. Given you have SBM, you have this great bonus over what VBM could provide.

All the best,

Anderson


On 8 October 2014 17:41, Miao Wei <[log in to unmask]> wrote:
This makes sense. But we used BrainSuite and it doesn't have a whole-brain analysis tool for volumes. It does for thickness though.
The reviewers kept bringing up the issue that we didn't provide a whole-brain result, so we need to take care of this somehow before resubmitting.

So would VBM first and then ROI analysis work to satisfy the reviewers?
Miao

Sent from my iPhone 

On Oct 8, 2014, at 9:23 AM, "Anderson M. Winkler" <[log in to unmask]> wrote:

Hi Miao,

I don't see where VBM would be needed here. You can do a whole-brain analysis using SBM, report it along with the results for the regions that are important to your hypothesis, and that's it. No need for VBM.

Btw, could you tell me what kind of SBM you used, i.e., what software, etc.? FreeSurfer, for instance, has tools to provide statistics using surface models (vertexwise), including cluster-level inference, etc.

All the best,

Anderson



On 8 October 2014 16:20, Miao Wei <[log in to unmask]> wrote:
We had a hypothesis based on a theory paper, pre-selected 22 ROIs, did the SBM, and found significance after FDR correction. We were working on a paper, but it was rejected because a reviewer thought we needed a whole-brain analysis first.
So we did VBM and found no significance.
Since we need to submit elsewhere, we want to report the uncorrected VBM results first (mentioning that they didn't pass correction) and then report the SBM results.
Does that sound valid? Any suggestions? Should I just use cluster directly? What would the command line look like?
By the way, we used volume in the SBM.
Thanks!
Miao
Sent from my iPhone 

On Oct 8, 2014, at 12:53 AM, "Anderson M. Winkler" <[log in to unmask]> wrote:

Hi Miao,

Please, see below:

On 8 October 2014 00:20, miao wei <[log in to unmask]> wrote:
Dear Anderson,
As a continuation of the data analysis of this dataset: even though we found no significance after whole-brain voxel-based correction (randomise), we have found significant ROI results using another method (surface-based analysis, corrected for multiple comparisons).

So the original analysis used VBM, and now with SBM, are you using thickness? If so, thickness doesn't compare well with gray matter volume, and most of the variance of the volume can be explained instead by the surface area of the cortex. This needs to be considered when analysing and interpreting the results, which will then have a different meaning.
 
We now just want to use the uncorrected whole-brain results as a reference to provide some direction, and then report the ROI-based results we obtained with the other method.

Sounds like you'd like a mask from VBM to constrain the SBM? Could you explain this better? It may not be valid depending on what's measured in the SBM analysis. Would you give more details?
 
The question is how do we report the uncorrected whole-brain results? I understand that "cluster" does some correction and will output the coordinates of significant clusters. In our case, we don't need the correction part. How can we get all the clusters that are significant, together with their coordinates and positions?

The command cluster doesn't do correction if a smoothness estimate isn't provided, and you can just use the peak coordinates of the statistic (or of the 1-p, or compute a -log(p)). As for randomise, it doesn't provide uncorrected cluster-level p-values, only corrected.
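For example (just a sketch, using the uncorrected 1-p file you mentioned in an earlier email; the filenames are placeholders), something like

cluster --in=stats_tfce_p_tstat1 --thresh=0.95 --mm > cluster_table.txt

lists all clusters of voxels with uncorrected p < 0.05 (remembering the image stores 1-p, hence the 0.95 threshold) together with their peak coordinates in mm; no correction is applied because no smoothness (--dlh) or volume is given. If you prefer -log10(p) for display, fslmaths can compute it from the 1-p image, e.g. fslmaths stats_tfce_p_tstat1 -mul -1 -add 1 -log -mul -0.4343 stats_tfce_log10p_tstat1 (the last factor converts the natural log to base 10; this assumes no voxel has 1-p exactly equal to 1).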

All the best,

Anderson




2014-09-15 10:15 GMT-07:00 miao wei <[log in to unmask]>:

Dear Anderson,
Sorry to bother you again, but I posted on the email list and no one replied. This is a continuation of my previous questions. Basically, I am doing a structural analysis with FSL-VBM.
I have finished the 3 steps as described on the wiki and tried randomise to do the correction, but no results are left after the correction (I also tried FDR, still no luck).

Do the following steps seem to be a valid alternative to randomise for dealing with the multiple comparison correction?
1. smoothest **_tfce_p_tstat1.nii.gz -m mymask  (I can't find a wiki page for smoothest)
2. then run cluster to get clusters above the 5% threshold:
cluster -i myResult -t 2.3 -p 0.05 -d DLH --volume=**
I am not sure what to put for -t to replace the arbitrary 2.3 value.

3. then report the results from cluster as the (GRF-)corrected results.

Does that make sense?
Thank you.

2014-08-11 16:08 GMT-07:00 Anderson M. Winkler <[log in to unmask]>:

Hi Miao,

As I understand it, if I have plenty of results before correction and none after either permutation or FSL's fdr, the reason is that the effect in my dataset is probably small.
 
Indeed, the effect may be small, looking "significant" in the uncorrected, but unable to survive correction.
 
I tried to give it a smaller mask (only HG), but still no results.

Sounds like another reason to suspect there's no actual effect.

Just wondering , are there any other ways that I could try? Or do you have other suggestions for the data processing procedure of mine?

It seems you ran fdr. A suggestion is to make sure you use the option --othresh, which produces an image already thresholded, or, if you use the option -a, remember that it saves p-values (as opposed to 1-p), even if the option --oneminusp is used (yes, the help text is incorrect, and we'll certainly fix this in the next patch).
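As a sketch of what the full call could look like (the input filename is a placeholder; randomise's p images are stored as 1-p, hence the extra flag):

fdr -i myrawp_tstat1 -m mymask -q 0.05 --oneminusp --othresh=myrawp_tstat1_fdrthresh

which prints the probability threshold to the terminal and writes out a thresholded image; adding -a myrawp_tstat1_fdradj would additionally save the FDR-adjusted values (stored as p, per the caveat above).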

Another suggestion, always useful, is to make sure you use a generous number of permutations (say, 5000 or 10000). More permutations won't increase power, but it makes the p-values more exact, so you can be more confident about the results.

All the best,

Anderson

 

2014-08-09 11:16 GMT-07:00 Anderson M. Winkler <[log in to unmask]>:

Hi Miao,
I don't think the mask needs to explicitly be binarised (it should be binarised internally), although it won't hurt in doing so beforehand.
All the best,
Anderson



On 7 August 2014 17:58, Miao Wei <[log in to unmask]> wrote:
I found nothing significant even though I only have one region of interest (HG). I didn't binarise the mask. Could this be the reason?
Thanks.
Miao

Sent from my iPhone 

On Aug 7, 2014, at 1:19 AM, "Anderson M. Winkler" <[log in to unmask]> wrote:

Use fslmaths to add masks together and binarise them.
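For example (the mask names here are just placeholders):

fslmaths HG_left_mask -add HG_right_mask -bin combined_mask

i.e., sum however many region masks you have, then -bin turns every non-zero voxel into 1.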


On 6 August 2014 23:25, Miao Wei <[log in to unmask]> wrote:
And how do I add multiple regions into the mask in FSLView?

Sent from my iPhone 

On Aug 6, 2014, at 12:33 PM, "Anderson M. Winkler" <[log in to unmask]> wrote:

Hi Miao,

Nice to hear that the masking is solved.

Regarding the correction, the procedure in randomise isn't strict at all -- the idea that the correction would be stringent is a myth. The correction produces exact p-values, that is, the amount of errors is right at the nominal level of the test (i.e., the "alpha" of the test), neither strict, conservative, nor anything else it might be called, even if the data are smooth and/or have a high degree of spatial interdependence between voxels.

That said, it is easier to detect significant effects if fewer tests are performed, so a smaller mask to constrain the search region could be useful. However, such a smaller mask must come from independent studies, or from a well-defined a priori hypothesis supported by the literature. It is not correct to take surviving clusters from one analysis and reuse them to further mask regions in the same study.

All the best,

Anderson



On 6 August 2014 19:30, miao wei <[log in to unmask]> wrote:
Hi Anderson,
Thanks for the reply.
I got the mask right and now I don't have results outside the brain. Yeah!!
But after randomise there are no significant results left, and I tried fdr from FSL, nothing left either. Before correction, though, there are plenty of results. So I am thinking this is because the corrections tend to be very strict in FSL.
As I think about solutions, can I somehow add an extent limitation?
So: 1. get my uncorrected p-map and then apply a 125-voxel spatial limitation, so that all clusters have to be at least 125 voxels big to be entered into the next step;
2. take the results from step 1 and make a mask of those clusters;
3. use this mask in either randomise or FDR, to only correct within those clusters. This way we reduce the number of voxels we are testing and increase our power.
Does this seem right? And how can I do it technically? Thanks a lot for any help.


Miao


2014-08-06 3:48 GMT-07:00 Anderson M. Winkler <[log in to unmask]>:

Hi Miao,

Check if the mask is appropriate and tight around the cortex. Also make sure you are looking at the corrected p-values, and note that the files are saved as 1-p, rather than p itself.
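So, for instance, in the corrp images the voxels significant at p < 0.05 are those with values of 0.95 or above; a quick way to keep only these (just a sketch, using one of the randomise output names) is:

fslmaths stats_tfce_corrp_tstat1 -thr 0.95 -bin sig_tfce_tstat1

or simply set the display range minimum to 0.95 in FSLView.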

The manual for FSLView is here: http://fsl.fmrib.ox.ac.uk/fsl/fslview/

All the best,

Anderson



On 4 August 2014 18:13, miao wei <[log in to unmask]> wrote:
Hi Anderson,
Thanks for the continuous help.
I got the results from randomise. However, why are some of the results outside the brain?
I checked my template_4D_GM.nii.gz and GM_mod_merg_s3.nii.gz and they look fine in the movie mode.
What could cause this, and how can I correct it to get meaningful results?
Also, I found that sometimes when I overlay my TFCE results onto the MNI template they are red, and sometimes blue (the same file), which is really weird.

Could you please suggest?
Thanks a bunch!!
Miao 


2014-08-01 17:31 GMT-07:00 Anderson M. Winkler <[log in to unmask]>:

Hi Miao,

The TFCE is more interesting, more powerful, and more spatially specific than cluster extent, and I'd focus my attention on it. The tstat1 and tstat2 are the results for contrasts C1 and C2 respectively. You should also have files tstat3 and tstat4, for contrasts C3 and C4. If these weren't produced, this means that these contrasts actually don't exist in your contrast file (i.e., the file that you give to randomise with the option -t). You may want to double check that.

About FSLview, if the orientation labels disappear, this means that the information in the header of your NIFTI files wasn't sufficient to determine exactly the orientation for display. There are various ways to fix this, but if you know which side is which (or pay attention to small asymmetries), the labels may not be all that necessary, and you can figure out what the correct sides are.

All the best,

Anderson


On 1 August 2014 18:54, miao wei <[log in to unmask]> wrote:
Dear Anderson,
I was able to make it run by adding in ~ as you suggested!
Now I have 
stats_clustere_corrp_tstat1.nii
stats_clustere_corrp_tstat2.nii
stats_tfce_corrp_tstat1.nii
stats_tfce_corrp_tstat2.nii

Which one should I pay more attention to? I understand the clustere ones are the cluster-extent based ones.
And why are there a 1 and a 2? Recall that my contrasts were
C1 1000
C2-1000
C3  0001
C4  000-1
Which contrasts do they represent?

One odd thing is that after I select the standard MNI152 template and overlay these results on it, the left/right labels that tell me which hemisphere shows the effect disappeared.

Forgive me for so many questions.
Really appreciated.
Miao



2014-08-01 3:32 GMT-07:00 Anderson M. Winkler <[log in to unmask]>:

Hi Miao,

Yes, the added EV and the two new contrasts are fine (for some reason the numbers came out glued together in your email, but I believe they were entered correctly in the GLM GUI).

About the error, are you sure there is a directory called "/Downloads", and even if it exists, that you have write permissions to it? Try instead putting a tilde symbol (~) in front of the output string, like this:

randomise -i GM_mod_merg_s3.nii.gz -o ~/Downloads/fsl_vbm/E_assembled -d fsl_vbm.mat -t fsl_vbm.con.txt -n 10000 -D -T

It's probably not the cause of the error, but in general there's no .txt extension on these design files. It'd be good to make sure that they look OK inside (i.e., weren't screwed up by some text editor).
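If ~/Downloads/fsl_vbm doesn't exist yet, creating it beforehand (mkdir -p ~/Downloads/fsl_vbm) also avoids randomise failing only at the very end, when it tries to save the outputs.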

All the best,

Anderson



On 1 August 2014 06:32, miao wei <[log in to unmask]> wrote:
Hi Anderson, 
Thank you for getting back to me.
I just constructed the .mat and .con files as described above, and I added another regressor, which is one of my cognitive scores.
So I entered all the values into the .mat file, and the contrasts were
C1 1000
C2-1000
C3  0001
C4  000-1
I am interested in testing the correlation of the brain volume at each voxel with the first and last variables (1. learning rate, 2. age, 3. IQ, 4. how fast the subject can read words), while the other variables of no interest are controlled for. Am I right?

Then I ran 
randomise -i GM_mod_merg_s3.nii.gz -o /Downloads/fsl_vbm/E_assembled -d fsl_vbm.mat -t fsl_vbm.con.txt -n 10000 -D -T

After running for hours, it now gave me the error message:

Starting permutation 10000

Error: failed to open file /Downloads/fsl_vbm/E_assembled_tstat1.nii.gz

Image Exception : #22 :: ERROR: Could not open image /Downloads/fsl_vbm/E_assembled_tstat1

ERROR: Program failed


An exception has been thrown

ERROR: Could not open image /Downloads/fsl_vbm/E_assembled_tstat1Trace: save_basic_volume4D.



Exiting


I am not sure what went wrong. When I ls the folder, it only has fsl_vbm.con.txt, not fsl_vbm.con.

Or is the problem that I should have used the -c option? I want clusters larger than 125 voxels to show on the image. How should I specify that? But most importantly, why did this fail?


Please suggest. Thanks a lot.

Miao 




2014-07-31 19:41 GMT-07:00 Anderson M. Winkler <[log in to unmask]>:

Hi,

Please see below:


On 31 July 2014 20:05, Miao Wei <[log in to unmask]> wrote:
Dear FSL experts,
I am doing a structural analysis using FSL and now I am at the step of
fslvbm_3_proc
It requires me to construct a design.mat and a design.con file.
I want to look at the correlation between brain volume and a continuous variable (learning rate of a foreign language),
while partialling out the effect of age and IQ scores on brain volumes.

I understand that by "brain volume" you mean the volume of gray matter on each voxel, as assessed through VBM, as opposed to an overall measurement of brain volume as a whole, is this right? I'll take from here that this is a VBM study.

 
1. I understand that I need to use the GLM GUI to get the .mat and .con files. How many EVs should I set up? 3?

Do I just put a column of values for each regressor that I am interested in into the GLM GUI? EV1 for learning rate, EV2 for age, and EV3 for IQ.

Yes, exactly.
 

2. How about the contrast and F test tab?

It sounds like age and IQ are nuisance variables, and aren't to be tested. The contrasts then would be:

C1: 1 0 0
C2: -1 0 0
 
An F-test isn't needed here (but if you want, you can divide the significance level of these tests by 2, that is, consider as significant p-values equal to or below 0.05/2 = 0.025).


3. I know that there is a -D option to demean the data in the next step, randomise. Should I use -D?

Yes, you definitely want to use the -D option here, because in the model you are not including an intercept (i.e., a column full of a constant non-zero value, such as 1).
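Just to illustrate the layout (the values in the .mat below are made up; yours would be the actual per-subject scores, one row per subject), the files saved by the GLM GUI would look roughly like this:

design.mat:
/NumWaves 3
/NumPoints 20
/Matrix
0.52 23 110
0.31 25 98
... (one row per subject: learning rate, age, IQ)

design.con:
/ContrastName1 learning+
/ContrastName2 learning-
/NumWaves 3
/NumContrasts 2
/Matrix
1 0 0
-1 0 0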

All the best,

Anderson




--
Miao Wei
Graduate Student, Brain and Cognitive Science
Psychology Department
University of Southern California
3620 South McClintock Ave, SGM 501
Los Angeles, CA 90089-1061
http://lobes.usc.edu/



