That plot is most likely the output from an FIR model.  That is typically
how those plots are made.  Otherwise, if you use copes you'll just have
bar plots (or box plots).

I'm not sure how to take FSL's FIR model and get that sort of plot out of
it though.  I *think* you could simply plot out the beta estimates (pe's)
and that's your time course, but the exact interpretation I do not know.
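
If you do go the FIR route, plotting the pe's as a time course might look
roughly like this.  This is only a sketch: the pe values are made up, and the
assumption that each FIR bin's beta has been extracted into one number per
time bin (e.g. via fslmeants on the pe images) is mine, not FSL's documented
output format.

```python
import numpy as np

# Made-up ROI-averaged beta estimates (pe's), one per FIR time bin.
pe_values = np.array([0.1, 0.8, 1.5, 1.2, 0.6, 0.1, -0.2, -0.1])
tr = 2.0  # assumed seconds per FIR bin

# The FIR betas themselves trace out the estimated response shape,
# so plotting them against time gives the time-course plot.
time = np.arange(len(pe_values)) * tr
peak_time = time[np.argmax(pe_values)]
print(peak_time)  # time bin with the largest estimated response
```

From there it is just `plt.plot(time, pe_values)` with matplotlib, if that is
the kind of figure you are after.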

What is the goal of your analysis?  If you just want to compare the
magnitude of the response, the FIR model is not necessarily the way to go
and I would just use bar plots of the cope values (BOLD magnitude).  I find
this type of analysis more straightforward and easier to describe.  If
there's something else, like the duration of the response or delay until
onset, that you're trying to compare you'll need the FIR model.  I'm not
sure how people go about quantifying things like delay until onset, etc.  I
typically do not do this type of analysis.
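
For the simpler bar-plot route, the conversion that comes up later in this
thread (%SC = 100 * cope * height / mean_func) can be sketched like this.
The cope and mean_func values here are made up; 0.2088 and 0.0211 are the
double-gamma baseline-to-max heights for 1 s and 0.1 s reference trials
mentioned further down the thread.

```python
# Convert a cope value to percent signal change using the scaling
# formula discussed in this thread: %SC = 100 * cope * height / mean_func
def percent_signal_change(cope, mean_func, height=0.2088):
    # height is the "ruler": the baseline-to-max height of the regressor
    # for a reference trial (0.2088 ~ a 1 s event with the double-gamma HRF)
    return 100.0 * cope * height / mean_func

# Made-up example values for one ROI:
psc = percent_signal_change(cope=50.0, mean_func=10000.0)
print(round(psc, 4))  # 0.1044
```

Whatever height you pick, the key point from the thread is to use the same
one for every run and subject, and to report which reference trial it
corresponds to.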

Cheers,
Jeanette

On Tue, May 29, 2012 at 11:55 AM, Daniel Cole <[log in to unmask]> wrote:

> Hi Jeanette,
>
> Thanks for all the help!
> What that picture shows is exactly what I want.  Would it then be possible
> (as well as accurate) to convert the values given in these text files or in
> the filtered_func_data to % signal change?  FEAT gives very nice plots of
> the signal time-course but not in terms of % signal change which I would
> need to make comparisons across subjects.
>
> What I'm thinking is to-
> 1) Define seeding region in standard-space and transform to subject-space.
> 2) For each run, calculate mean activation of seeding region for each
> point in the time-series. (fslmeants -i filtered_func_data -m mask -o
> output.txt)
> 3) Convert mean activation into terms of % signal change and plot over the
> time-series.
> 4) Average across runs and subjects.
>
> Thanks again,
> Daniel
>
>
>
> On Tue, May 29, 2012 at 12:24 PM, Jeanette Mumford <
> [log in to unmask]> wrote:
>
>> Hi,
>>
>> I don't see how you could get a time course without going back to the
>> first level, as the way you've currently modeled it relies on the canonical
>> HRF.  If you simply want to work with the BOLD magnitude, then you can work
>> with the cope images.  Are you trying to get something like this:
>> http://www.radiology.northwestern.edu/system/images/BAhbBlsHOgZmSSJJMjAxMS8wMS8xNC8xMF81Nl8wOF81MDFfQk9MRF90aW1lX2NvdXJzZV9vZl9iaWxhdGVyYWxfbW90b3JfdGFzay5qcGcGOgZFVA/10_56_08_501_BOLD_time_course_of_bilateral_motor_task.jpg
>> (I just randomly grabbed that from the web)?  That comes from the first
>> level time courses.  If you simply want to plot out your measures over the
>> multiple runs, then use the copes.
>>
>> Hope that helps,
>> Jeanette
>>
>>
>>
>> On Tue, May 29, 2012 at 9:12 AM, Daniel Cole <[log in to unmask]> wrote:
>>
>>> Hello,
>>> Thank you for your great explanation.  I think I finally understand the
>>> height scaling now.
>>> Also I'm not sure what I was thinking with that equation.
>>>
>>> However I am hesitant to do this FIR basis set modeling since it would
>>> be time consuming to go back to the first level.  This does seem to simplify some
>>> things however.
>>>
>>>
>>> Is there no way then without using basis sets to obtain time-courses
>>> from an ROI in terms of % signal change?  Perhaps a way of converting the
>>> filtered_func_data file into terms of % signal change?
>>> Sorry to be a bother about this, but I want to be certain so that I
>>> don't waste time going back to 1st level if it isn't necessary.
>>> I will continue to look into this and see if maybe I can find a more
>>> explicit methods explanation of a research group that has done this before.
>>>
>>> Thanks so much for your time and for your patience,
>>> Daniel
>>>
>>>
>>> On Fri, May 25, 2012 at 3:17 PM, Jeanette Mumford <
>>> [log in to unmask]> wrote:
>>>
>>>> Hi,
>>>>
>>>> Sounds like you want an FIR model or something along those lines (basis
>>>> sets) if you want to look at the BOLD signal over a window of time around
>>>> each trial.  So, maybe look into that.  I've never used the FIR model in
>>>> FSL, so I can't say much about that and units of the estimates.
>>>>
>>>> The BOLD signal is roughly linear, so a 2s long stimulus has about
>>>> twice the BOLD activity of a 1s long stimulus.  So if I just say my
>>>> activation was 0.25, it makes a big difference if my stimulus "ruler" was
>>>> 1s, 2s, or something else, right?  So, that's why we need to say what
>>>> duration of trial was used for the calculation and why it doesn't matter if
>>>> it was really in the study.  The height you choose for your reference trial
>>>> is a ruler.  Of course if you use a different ruler, you'll get a different
>>>> number, but as long as you state what type of trial was used for the
>>>> calculation it has meaning.  If I tell you my living room is 30 long, that
>>>> doesn't mean anything.  If I say it is 30 feet long, that means something.
>>>> Sure, if I used meters, the number would be 9.144 meters, which is a
>>>> different number, but we're using a different ruler.  The lengths are
>>>> exactly the same.
>>>>
>>>> In your case it sounds like all trials have the same duration and it
>>>> sounds like you want to use an FIR model to display how the BOLD signal
>>>> progresses over time, in a time series plot.  This will just be percent
>>>> signal change for whatever your stimulus is.  You aren't using a canonical
>>>> HRF in this case and so that takes regressor height out of the problem.
>>>>
>>>> Hope that helps,
>>>> Jeanette
>>>>
>>>>
>>>>
>>>> On Fri, May 25, 2012 at 1:53 PM, Daniel Cole <[log in to unmask]> wrote:
>>>>
>>>>> Hello, thanks for taking the time to respond.  Let me see if I have a
>>>>> better grasp of this concept now.
>>>>>
>>>>>  I think we probably would want to look at each of the runs separately
>>>>> since the events don't always occur at the same time in each run.  We're
>>>>> interested in looking at how the PSC changes over the duration of the SOA
>>>>> period.  Ideally we would have a PSC value at each TR.  Please let me
>>>>> know if I'm missing something conceptually here.
>>>>>
>>>>> I've seen PSC plotted as a function of time in papers before, this is
>>>>> in general what I'm looking to find.
>>>>> Would the following be a potential way of doing that? :
>>>>>
>>>>> In one of the runs-
>>>>> %SC of TR1 = [  (100*COPE*0.0211) / filtered_func_data_1st_timepoint  ]
>>>>>
>>>>> I'm still a little confused about height selection since choosing
>>>>> different heights would give you different PSC values.  Does it not matter
>>>>> which height you choose as long as you are consistent?
>>>>>
>>>>> Thanks a lot for helping me in understanding this.
>>>>>  Daniel
>>>>>
>>>>>
>>>>> On Fri, May 25, 2012 at 2:19 PM, Jeanette Mumford <
>>>>> [log in to unmask]> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> See below...
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Fri, May 25, 2012 at 1:39 PM, Daniel Cole <
>>>>>>> [log in to unmask]> wrote:
>>>>>>>
>>>>>>>> Hello FSLers,
>>>>>>>> I have a 2 group, 8 run per-subject event-related design with cue
>>>>>>>> (0.5sec), SOA (7.2-12.8sec), and stimulus (0.1sec) periods.  I'm looking to
>>>>>>>> calculate percent signal change (PSC) at each time-point of the experiment
>>>>>>>> and average across runs and subject group.
>>>>>>>>
>>>>>> Are you looking at the 8 separate runs in any way or do you simply
>>>>>> plan on averaging things together?  If it is the latter, then I would
>>>>>> extract the values after Feat has already averaged together the 8 runs, as
>>>>>> Feat is using a better model that will downweight the runs with more
>>>>>> variability.
>>>>>>
>>>>>>>
>>>>>>>> My questions are:
>>>>>>>>
>>>>>>>> 1) Is featquery capable of giving you percent signal change for
>>>>>>>> each time-point instead of just the mean?
>>>>>>>>
>>>>>>>
>>>>>> What do you mean each time point?  For each run?  I'm unsure about
>>>>>> featquery, but I do know the code in my writeup can be used on run-specific
>>>>>> data.  What I would do is if you ran a second level analysis (within
>>>>>> subject, averaging runs within a single subject using fixed effect), use
>>>>>> the filtered_func_data from this analysis.  It is a 4d image which should
>>>>>> have your 8 first level copes concatenated (fslnvols should output the
>>>>>> value 8).  Why use this?  Already in standard space and you can get all 8
>>>>>> runs through 1 fslmeants command.
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>> 2) Should I be using each run's PPheight from the design.con file
>>>>>>>> as the height in the formula:
>>>>>>>> %SC = [  (100*COPE*Height) / mean_func  ]
>>>>>>>> Or should I be scaling with the mean PPheight from the design.mat
>>>>>>>> file for that given cope?
>>>>>>>>
>>>>>>>> Jeanette Mumford's guide at (
>>>>>>>> http://mumford.fmripower.org/perchange_guide.pdf) states:
>>>>>>>> "A really bad thing to do: Running featquery on multiple first
>>>>>>>> level designs from an event related and then combining results."
>>>>>>>> Does this mean I shouldn't be using each run's PPheight for scaling
>>>>>>>> as this leads to incomparable/non-combinable PSC values for each run? Or is
>>>>>>>> this only referring to differing designs at 1st-level?  Our designs are the
>>>>>>>> same for each run but the timing files are different, would this create
>>>>>>>> problems in comparability?
>>>>>>>>
>>>>>>>
>>>>>> Yes, this means you should *not* scale using the ppheight if the
>>>>>> ppheights differ.  If the ppheights differ at the first level it is my
>>>>>> understanding that featquery will weight the different runs/subjects
>>>>>> differently, when the value for "Height" should be the same across all runs
>>>>>> and subjects for comparability.  My writeup hopefully makes this clear with
>>>>>> the example I had there.  The Height can be for any type of isolated trial
>>>>>> even if it wasn't in your design.  This is just translating the height into
>>>>>> what the height would be for a trial of X seconds.  So, you could just use
>>>>>> 0.2088 (assuming you used the double-gamma HRF), which is the baseline/max
>>>>>> height of a 1s long stimulus.  Doesn't matter if you did or did not have a
>>>>>> 1s long stimulus, your percent signal change will reflect a 1s long
>>>>>> stimulus.  OR you could use 0.0211 and this is for a 0.1s long stimulus.
>>>>>> Just clarify what your "ruler" was.  It is akin to measuring the length of
>>>>>> a room.  You don't just say the length was 15, you also add what ruler you
>>>>>> used (feet, etc).
>>>>>>
>>>>>> Hope that helps,
>>>>>> Jeanette
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>> 3) I've ran featquery on COPE images in a 1st level feat directory
>>>>>>>> with a standard space mask and with convert to PSC checked.  When I look
>>>>>>>> into the log though something strikes me as odd, particularly this line:
>>>>>>>> "~/fsl/bin/fslmeants -i filtered_func_data -o
>>>>>>>> .../Run1/01052012.feat/featquery/max_cope1_ts.txt -c 20 16 20"
>>>>>>>> The fslmeants help tells me this means it is only extracting the
>>>>>>>> mean time series from the coordinate [20,16,20].  Why would this be if I
>>>>>>>> used a mask as my input?
>>>>>>>> The log shows that it uses different coordinates for each cope I'm
>>>>>>>> analyzing, is this intended?  Perhaps there is something that I am missing
>>>>>>>> or misunderstanding.
>>>>>>>>
>>>>>>>>
>>>>>>>> Thanks for your future advice,
>>>>>>>> Daniel
>>>>>>>>
>>>>>>>> --
>>>>>>>> Daniel Cole
>>>>>>>> University of Rochester
>>>>>>>> Brain and Cognitive Sciences
>>>>>>>> [log in to unmask]
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Daniel Cole
>>>>>>> University of Rochester
>>>>>>> Brain and Cognitive Sciences
>>>>>>> [log in to unmask]
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Daniel Cole
>>>>> University of Rochester
>>>>> Brain and Cognitive Sciences
>>>>> [log in to unmask]
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> Daniel Cole
>>> University of Rochester
>>> Brain and Cognitive Sciences
>>> [log in to unmask]
>>>
>>>
>>
>
>
> --
> Daniel Cole
> University of Rochester
> Brain and Cognitive Sciences
> [log in to unmask]
>
>