
Subject: Re: randomise questions
From: Steve Smith <[log in to unmask]>
Reply-To: FSL - FMRIB's Software Library <[log in to unmask]>
Date: Wed, 6 Jun 2007 08:43:02 +0100


Hi,

On 5 Jun 2007, at 17:40, Antonios-Constantine wrote:

> Dear FSL users,
>
> I have 8 control subjects and 14 Parkinson's disease subjects, and after
> running siena and sienax I wanted to use the randomise tool to localise
> where there is a difference in atrophy between these groups.
> I have read both the randomise manual (including the example in the
> practical instructions) and the Holmes & Nichols paper on nonparametric
> permutation tests, but I still have a lot of questions about the
> interpretation of the results from randomise. I would really appreciate
> your help once more.
> My questions are the following:
>
> 1) Is it a problem if the number of controls is not the same as the
> number of patients? If so, is it recommended to reduce the number of
> patients to 8?

No, it's fine for the numbers to be different.
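
(A minimal sketch of one common way to set up such an unbalanced two-group
design for randomise - the file names and the 4D input image all_sienar.nii.gz
are hypothetical, the 8 controls are assumed to come before the 14 patients
in the 4D file, and the four contrasts correspond to the four tstat outputs
discussed below: 1 controls>patients, 2 patients>controls, 3 and 4 the two
group means:)

  # design.mat (VEST format): one row per subject, one column (EV) per group
  {
    echo "/NumWaves 2"
    echo "/NumPoints 22"
    echo "/Matrix"
    for i in $(seq 1 8);  do echo "1 0"; done   # 8 controls
    for i in $(seq 1 14); do echo "0 1"; done   # 14 patients
  } > design.mat

  # design.con: four contrasts -> tstat1..tstat4
  cat > design.con <<'EOF'
/NumWaves 2
/NumContrasts 4
/Matrix
1 -1
-1 1
1 0
0 1
EOF

  # run randomise with a cluster-forming threshold of 1, as in the example
  randomise -i all_sienar.nii.gz -o sienar -d design.mat -t design.con -c 1 -n 5000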

> 2) The outputs of randomise are 16 different images: 4 sienar_tstat,
> 4 sienar_maxc_tstat, 4 sienar_max_tstat and 4 sienar_vox_tstat. If I
> understood correctly, we first check the p-values from sienar_maxc_tstat1
> and sienar_maxc_tstat2 (displayed in fslview with the range set to
> [0.949,1] in order to see p-values < 0.05) and detect whether there is a
> group difference. In your example there are no clusters in
> sienar_maxc_tstat2 with p < 0.05, while there are some clusters in
> sienar_maxc_tstat1 with p < 0.05. So does this mean that the
> control-minus-patient contrast is positive?

Yes, within those voxels.
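
(A minimal command-line sketch of the same check, assuming, as above, that
the maxc images store 1-p, and with hypothetical output names:)

  # keep only voxels whose cluster-corrected p < 0.05 (i.e. 1-p > 0.95)
  fslmaths sienar_maxc_tstat1 -thr 0.95 -bin sig_controls_gt_patients
  fslmaths sienar_maxc_tstat2 -thr 0.95 -bin sig_patients_gt_controls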

> What would the conclusion be if there were also clusters in
> sienar_maxc_tstat2 with p < 0.05?

Patients > controls in _those_ voxels (they will not overlap with the first
set above!)

> What about sienar_maxc_tstat3 and sienar_maxc_tstat4? What kind of
> information can we extract from them?

It depends on what contrasts you selected for tstat3 and 4. If they are the
two group means, they are just asking where each group mean is different
from zero - probably not interesting in this case.

> 3) The next step is to check sienar_tstat3 and sienar_tstat4 to
> disambiguate what is going on where there is a group difference. Are
> these stats corrected for multiple comparisons?

Yes - see the randomise manual (http://fsl.fmrib.ox.ac.uk/fsl/randomise/) -
that's what the maxc and max outputs are.

> According to your example these stats give us information about what each
> group is doing separately. When I load them in fslview independently I see
> two different colours for sienar_tstat3 (red and yellow, displayed with a
> range of [0.5,3]). Why do we need to display the output in this range, and
> why do we need two colours to depict it in fslview? These intensities are
> definitely not p-values, but what exactly are they?

You can choose whatever colourmap you like. If you choose "Red" then you
will just see different brightnesses of red.
If you click on the (i), select Red, then turn on the second (negative) LUT
just below that and select Blue, and set the intensity display range to,
say, 2:5, you will see negative values < -2 in Blue and positive values > 2
in Red.

The tstat3 image is the raw t-statistic, before the randomisation is run to
convert it to p-values.
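
(A rough command-line sketch of the same split into positive and negative
values, using 2 only because it is the example display threshold above:)

  # positive t-values above 2 (the ones shown in Red)
  fslmaths sienar_tstat3 -thr 2 sienar_tstat3_pos
  # negative t-values below -2 (the ones shown in Blue), flipped to positive
  fslmaths sienar_tstat3 -mul -1 -thr 2 sienar_tstat3_neg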


> And the intensities with larger magnitude appear in yellow and those with
> lower magnitude in red. It is exactly the same for sienar_tstat4, with the
> colours blue and light blue. How can I interpret these results? What kind
> of information do these sienar_tstats give us? And what about
> sienar_tstat1 and sienar_tstat2 - what kind of information can we get
> from them?

tstat1 (group difference) shows you where one group has atrophy  
values that are greater than the other - to interpret fully you also  
need to look at 3 and 4 to find out whether each is actually positive  
or negative (you can't tell that just from the difference).
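
(An illustrative sketch of that check, reusing the hypothetical mask name
from the thresholding example above:)

  # restrict the raw group-mean t-stats to the clusters showing a group
  # difference, then check whether each group's values there are + or -
  fslmaths sienar_tstat3 -mas sig_controls_gt_patients tstat3_in_sig
  fslmaths sienar_tstat4 -mas sig_controls_gt_patients tstat4_in_sig
  fslstats tstat3_in_sig -M   # mean of the non-zero voxels
  fslstats tstat4_in_sig -M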

> 4) The cluster threshold in your example is set to 1. Why did you choose
> this threshold?

Because in that example the effect was weak. Setting the cluster-forming
threshold is arbitrary and I'm afraid there's no good answer to how to set
it - many people just set it at, say, 2.3 or 3 and leave it at that.
Probably a good idea to search the email list archives for more on this -
it's a big subject!
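
(A sketch, reusing the hypothetical design files and input image from the
first example - changing the cluster-forming threshold is just a change to
the -c option:)

  # same analysis, but with a cluster-forming threshold of 2.3 instead of 1
  randomise -i all_sienar.nii.gz -o sienar_c23 -d design.mat -t design.con -c 2.3 -n 5000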

> What is the range of values that this threshold can take? Is it a low
> threshold or a high one? Do we have to use more than one cluster threshold
> each time? I read that if we choose a low one we lose intense focal
> signals, while if we choose a high one we lose low-intensity signals.
> 5) What about the sienar_max_tstats? What kind of information can we
> extract from these stats? Do we use them only to localise the voxels of
> significance, since this is the weakness of suprathreshold tests?

Again - see the manual - these are voxelwise p-values corrected for  
multiple comparisons - if you get results there then fine - but most  
likely this won't show up anything significant, which is why we also  
use the cluster-based stats.

> As for the sienar_vox_tstats, how useful can these data be, since they
> are uncorrected for multiple comparisons, and what kind of information
> can we extract from them?

If you knew exactly where you were interested in looking then you  
wouldn't need to do multiple comparison correction - so you could  
then use these p-values.
Alternatively, you can feed these uncorrected p-values into FDR for  
an alternative method of multiple comparison correction.
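
(A sketch of that last suggestion, assuming the vox images also store 1-p;
the fdr call shown is an assumption - check the usage message of FSL's fdr
utility for the exact options:)

  # convert the uncorrected 1-p image back to ordinary p-values
  fslmaths sienar_vox_tstat1 -mul -1 -add 1 sienar_vox_p1
  # then run FDR correction on those p-values (assumed usage)
  fdr -i sienar_vox_p1 -q 0.05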

> Thanks a lot in advance, and I'm really sorry if I have tired you with
> this long e-mail, but I couldn't find the information I needed for these
> questions.

No problem - cheers, Steve.


>
> Antonios-Constantine Thanellas


---------------------------------------------------------------------------
Stephen M. Smith, Professor of Biomedical Engineering
Associate Director,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
[log in to unmask]    http://www.fmrib.ox.ac.uk/~steve
---------------------------------------------------------------------------
