Hi Keith,

> I did get FLOBS working in FSL ver 3.3, which we also had running on
> our analysis machine.  I had two conceptual questions about the
> mechanics of using a set of custom basis functions that I could not
> glean from the cited paper.
>
> First, based on the range of m1, m2, etc. one specifies in FLOBS, are
> different HRF shapes (delays, widths, etc.) fit to different voxels
> in the GLM analysis?  If so, is there a way to generate a map of
> which shape was best fit to each voxel?

The half-cosine HRF model with parameters m1, m2, etc., along with the
specified ranges on those parameters, is used to generate the basis
set. This basis set is optimised to represent well the HRF shapes that
can be created from the half-cosine HRF model with the specified
parameter ranges. It is this basis set that is then fit to the data at
each voxel, not the half-cosine HRF model itself.

In fact, what gets fit to the data is a design matrix containing what
I will refer to as basis function regressors, where each basis
function regressor is the experimental stimulus convolved with a
different HRF basis function.
For example, let's assume we have a basis set with 3 basis functions,
h_1, h_2, h_3 (typically with FLOBS these turn out to approximate the
canonical HRF (the mean HRF shape), the temporal derivative, and the
dispersion derivative), and one experimental stimulus, S. Our design
matrix, X, therefore contains three basis function regressors, X_1,
X_2, X_3:

X_1 = h_1 conv S
X_2 = h_2 conv S
X_3 = h_3 conv S

where conv denotes convolution.
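
To make this concrete, here is a minimal numpy sketch of building the
basis function regressors by convolution. The basis function shapes
and stimulus timing below are made up purely for illustration; in
practice h_1, h_2, h_3 would come from FLOBS:

import numpy as np

# Hypothetical example: 3 HRF basis functions sampled every 0.5 s.
# In a real analysis these come from FLOBS, not from these formulas.
n_timepoints = 200
t = np.arange(0, 30, 0.5)                 # 30 s of HRF support

bump = np.exp(-((t - 6.0) ** 2) / 8.0)    # crude canonical-like shape
h = [bump,
     np.gradient(bump),                   # crude temporal derivative
     np.gradient(np.gradient(bump))]      # crude dispersion-like shape

# Boxcar stimulus S: 10 samples on, repeating every 40 samples.
S = np.zeros(n_timepoints)
for onset in range(0, n_timepoints, 40):
    S[onset:onset + 10] = 1.0

# Each basis function regressor X_k is S convolved with h_k, truncated
# to the length of the data: X_1 = h_1 conv S, etc.
X = np.column_stack([np.convolve(S, h_k)[:n_timepoints] for h_k in h])
print(X.shape)                            # (200, 3)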

So Feat then fits this design matrix to the data at each voxel:

Y = XB + e

In our example B will contain 3 parameter estimates at each voxel,  
B_1, B_2 and B_3 - one for each of the basis function regressors. And  
so we can use these to reconstruct the best HRF shape fitted to each  
voxel. It is simply a case of applying these parameter estimates to  
the basis functions themselves, e.g.:

HRF shape at voxel = B_1*h_1 + B_2*h_2 + B_3*h_3
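
Continuing the sketch above with simulated data Y, the fit and
reconstruction look like this (ordinary least squares for illustration
only; Feat's actual fit uses prewhitening):

# Simulated data at one voxel: a known mix of the regressors plus noise.
rng = np.random.default_rng(0)
Y = X @ np.array([1.0, 0.3, -0.1]) + rng.normal(0.0, 0.5, n_timepoints)

# Ordinary least-squares estimate of B in Y = XB + e.
B, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Fitted HRF at this voxel: B_1*h_1 + B_2*h_2 + B_3*h_3.
hrf_at_voxel = sum(B_k * h_k for B_k, h_k in zip(B, h))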

In practice you can find the parameter estimates in the stats
directory inside the feat directory (the parameter estimate for
regressor 1 in the design matrix is pe1.nii.gz), and the HRF basis
functions are in an ASCII file called hrfbasisfns.txt in the flobs
directory, e.g. /usr/local/fsl/etc/default_flobs.flobs. So you can
load them into something like MATLAB and calculate the HRF shape at
any voxel.
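
For instance, in Python with nibabel and numpy (the flobs and feat
directory paths and the voxel coordinates below are hypothetical
placeholders, and I am assuming hrfbasisfns.txt stores one basis
function per column):

import numpy as np
import nibabel as nib

# HRF basis functions; assumed here to be one column per basis function.
h = np.loadtxt("my_basis_set.flobs/hrfbasisfns.txt")

# Parameter estimate maps, one per basis function regressor.
pes = [nib.load(f"my_analysis.feat/stats/pe{k}.nii.gz").get_fdata()
       for k in (1, 2, 3)]

# Hypothetical voxel of interest.
x, y, z = 32, 40, 20
B = np.array([pe[x, y, z] for pe in pes])

# HRF shape at voxel = B_1*h_1 + B_2*h_2 + B_3*h_3.
hrf = h @ B        # h: (n_samples, 3), B: (3,)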

To get the temporal resolution of the HRF basis functions in
hrfbasisfns.txt, check what the --res option is set to in the logfile
in the flobs directory (units are seconds). Typically this should be
0.05 s.


> Second, is using only one basis function with a range of timing (m1,
> m2, etc.) values a valid HRF?

If you create a basis set that contains a single basis function then  
this will give you an HRF equal to the mean shape that can be  
generated from the half-cosine HRF model with the specified parameter  
ranges. The shape is valid, but with only one basis function this HRF  
shape is then fixed to be the same at every voxel with no flexibility.

> If not, what is the best way to combine, in a higher-level analysis,
> the individual contrast activation maps generated for each basis
> function when several are used?  When using a custom basis set, the
> design matrix seems to appear automatically with only one contrast
> per basis function and no way to specify a different design matrix,
> such as creating a contrast that averages the basis functions.  Could
> all the COPE images for the different basis functions for all
> subjects be averaged together in a single group analysis, or is that
> statistically inappropriate for some reason?
>
> Some insight on these issues would be greatly appreciated.

The design matrix and contrasts are generated automatically, but you
can go in and change them if you desire - just change the button at
the top of the GLM contrasts setup GUI from "Original EVs" to "Real
EVs" - although there is rarely a good reason to do so.

There are three options you might consider for doing group analysis
with basis functions:

(1) Pass up the regression parameter estimates for all basis functions
into the higher-level group analysis, obtain the group average for
each basis function separately, and then perform an F-test across them
at the group level. However, it is not clear what benefit there would
be to doing this: the non-canonical basis functions, such as the
temporal and dispersion derivatives, tend to average out to zero at
the group level due to between-subject variation in HRF shape.

(2) Consequently, the approach typically taken is to pass up to the
group level only the canonical HRF regression parameter estimates.
This makes for a simple group analysis, and the benefit of including
the basis functions at the first level is still felt, in terms of
accounting for HRF variability that would otherwise cause increased
noise in the first-level analysis.

(3) Another option is to calculate a size summary statistic from the
single-session analyses (e.g. the root mean square of the basis
function regression parameter estimates) and pass that up to the group
level. However, note that this would then require non-parametric
methods (e.g. randomise) at the group level, rather than the
parametric methods generally used, as the population distribution of
such a summary statistic is likely to be non-Gaussian. A minimal
sketch of this is below.
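
Here the subject filenames are hypothetical placeholders; the
stats/pe*.nii.gz naming follows the Feat layout described above:

import numpy as np
import nibabel as nib

# First-level PE maps for one subject, one per basis function regressor.
pe_imgs = [nib.load(f"subject01.feat/stats/pe{k}.nii.gz") for k in (1, 2, 3)]
pes = np.stack([img.get_fdata() for img in pe_imgs], axis=-1)

# Root mean square across the basis function PEs at every voxel.
rms = np.sqrt(np.mean(pes ** 2, axis=-1))

# Save as input to the group level; being likely non-Gaussian across
# subjects, this should be analysed non-parametrically (e.g. randomise).
nib.save(nib.Nifti1Image(rms, pe_imgs[0].affine), "subject01_rms.nii.gz")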

Most people use option 2.

Cheers, Mark.

----
Dr Mark Woolrich
EPSRC Advanced Research Fellow, University Research Lecturer

Oxford University Centre for Functional MRI of the Brain (FMRIB),
John Radcliffe Hospital, Headington, Oxford OX3 9DU, UK.

Tel: (+44)1865-222782 Homepage: http://www.fmrib.ox.ac.uk/~woolrich

>
> Thank you,
> ~Keith
>
>
> -----Original Message-----
> From: FSL - FMRIB's Software Library [mailto:[log in to unmask]] On
> Behalf Of Steve Smith
> Sent: Thursday, October 30, 2008 5:28 AM
> To: [log in to unmask]
> Subject: Re: Error loading FLOBS directory in GLM
>
> Hi - yes, this is a bug that was reported last week - we'll announce
> the fix as soon as it's sorted.
> Cheers.
>
>
> On 29 Oct 2008, at 15:07, Keith Vogt wrote:
>
>> Hello,
>>
>> I am attempting to create a set of optimal basis functions using
>> FLOBS.  The flobs report seems to be created OK and a directory with
>> 10 files is made.  However, when I try to load this directory for
>> convolution as an "Optimal/custom basis function" in the GLM window,
>> I get an error:
>> wrong # args: should be "feat5:checkbfcustom w i"
>> More details are copied in the attached log file.  If I ignore this
>> error and try to proceed, there is an error in processing the model:
>> "F-test 1 isn't valid...", even though I haven't explicitly set an
>> F-test.  This error seems to consistently occur regardless of the
>> number of basis functions used in FLOBS, the number of EVs set up,
>> or the basic shape specified.  This is my first time using the FLOBS
>> utility, so any help is greatly appreciated.
>>
>> I'm running FSL version 4.0.3, with FEAT version 5.92 on a Linux
>> platform.  I also tried this on a Windows PC with VMware running the
>> same version of FSL and the same result occurred.
>>
>>
>> Thanks,
>> ~Keith
>> <FLOBS_in_GLM.log>
>
>
> ---------------------------------------------------------------------------
> Stephen M. Smith, Professor of Biomedical Engineering
> Associate Director, Oxford University FMRIB Centre
>
> FMRIB, JR Hospital, Headington, Oxford OX3 9DU, UK
> +44 (0) 1865 222726  (fax 222717)
> [log in to unmask]    http://www.fmrib.ox.ac.uk/~steve
> ---------------------------------------------------------------------------
>