>>It's always a trade-off. With only one brain, the results can be skewed
>>towards that brain's particularities. However, if you start averaging
>>brains, the brain regions become more fuzzy as there is a smoothing
>>effect. Not to mention that your transform parameters might matter.
>The "smoothing effect" arises because brains /are/ different and
>therefore cannot be fitted together in any "affine" manner.
Agree. But I would not go so far as to say that a non-affine transform is
better than an affine one. I believe a non-affine transform is OK only
if you have a person looking at the transform parameters and the
transformed image, making a judgement on whether the transformation is
valid or not. This is because in practice, whichever transformation you
use, the computer tries to find the best match between the reference and
the transformed image and does not actually care whether the result is
valid or not. In fact, in an automated processing pipeline I would stick
to the affine transform. At least I know that things are unlikely to go
badly wrong.
>opinion, averaging implicitly takes into account that (when doing a
>group study) individual brains cannot be matched at "full resolution".
The way I see it is that if the same brain regions (across different
subjects) overlap, then averaging is not a problem. What happens when
brain regions do not overlap is that we build up a simple statistical
picture of what should happen there. Say we take 10 subjects: at
template voxel (3,3,3), 3 subjects have region A there and the other 7
have region B. Implicitly, we are saying that template voxel (3,3,3) has
probability 0.3 of being region A and 0.7 of being region B.
Now, given my theory, the best way to build a template is a two-step
process.
First, for every subject in the template, you have to identify the brain
regions.
Then, get the average image; let's just call this the template image.
Affine/non-affine transform each subject image to the average image, and
keep the parameters of the transformation.
After this, use these parameters to transform the identified brain
regions of each individual subject into the template space.
Now you will have basic statistics about the percentage of voxel (x,y,z)
belonging to regions A, B and C.
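The counting step above can be sketched in a few lines. The array shapes
and region codes below are made-up stand-ins for illustration, not any
particular atlas format:

```python
import numpy as np

# 10 subjects' label maps, already warped into template space.
# Each voxel holds a region code: 1 = region A, 2 = region B (hypothetical codes).
labels = np.full((10, 4, 4, 4), 2)   # stand-in data: everyone region B everywhere...
labels[:3, 3, 3, 3] = 1              # ...except 3 subjects with region A at voxel (3,3,3)

def region_probability(labels, region):
    """Fraction of subjects whose label at each voxel equals `region`."""
    return (labels == region).mean(axis=0)

p_a = region_probability(labels, 1)
p_b = region_probability(labels, 2)
print(p_a[3, 3, 3], p_b[3, 3, 3])   # 0.3 0.7, as in the 10-subject example above
```

The output at voxel (3,3,3) is exactly the 0.3/0.7 split described above.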
When one does an imaging study, we can then say: if your mapping to the
template is 100% accurate, these are the probabilities that voxel
(x,y,z) belongs to region A, B or C. I think this is more useful than
saying that voxel (x,y,z) is region A.
I think someone in MRC-CBU did this a long time ago (3 years ago).
>>Now, if you are worried about formaldehyde distorting the brain (which,
>>as a layman, I think it will), and I worry about averaging having a
>>detrimental effect, why don't we try using the two brains from the
>>"Visible Human Project"? At least the male subject (God bless him) is
>>as "fresh" as you can get, and the resolution is the best. Combining him
>>with her, we might have a very detailed map with good labelling. At such
>>resolution, we might be able to introduce engineering principles such as
>My point with the preservation was perhaps wrongly emphasized. What I
>was trying to say was that taking a brain out of the skull is bound to
>affect its shape. This is another confound of using it as a yardstick
>for other brains (still inside their respective skulls).
>I have to disagree on the usefulness of a "high-res standard".
My point about matching lower-resolution images to a high-res standard
is a remnant of my work in a precision-calibration laboratory for
instrument calibration. At that time, I was calibrating machines to
NIST/NPL standards. One rule in the lab is that you cannot calibrate an
instrument using a calibration standard whose uncertainty is equal to
that of the machine being calibrated, i.e., you must use a
higher-resolution standard. If we did not employ this strategy, BAE
Hawks (a lousy aircraft if you ask me, no pun intended) from the RMAF
could not even take off.
See later for an example of why I think a high-res standard helps.
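The lab rule can be written as a one-line check. The 4:1 test-uncertainty
ratio used here is the common rule of thumb in calibration work, my
addition rather than something stated above:

```python
def can_calibrate(standard_uncertainty, device_uncertainty, min_ratio=4.0):
    """The reference standard must be substantially better than the device
    under calibration; equal uncertainty is never acceptable.
    min_ratio=4.0 is the common 4:1 rule of thumb (an assumption here)."""
    return device_uncertainty / standard_uncertainty >= min_ratio

print(can_calibrate(0.01, 0.1))   # True: 10:1 ratio, the standard is good enough
print(can_calibrate(0.1, 0.1))    # False: equal uncertainty violates the rule
```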
>problem remains: my brain, for example, will never "fit" that high-res
>brain without nonlinear warping.
Not necessarily. There is nothing that says a non-linear warp performs
better than a linear one. Some exercise of judgement is required.
>Even with warping, how can I be sure
>that the crown of a certain gyrus of mine will align with the
>"corresponding" (whatever that means) crown on the high-res?
You cannot, but you stand a better chance.
Imagine your high-res standard has about 4x your existing resolution.
Taking a planar image to simplify the discussion, this means that,
ideally, one of your pixels should match 16 pixels on your standard.
Now, if the image is imperfect, the warp to the high-res standard will
not hit all 16 pixels. However, instead of the binary hit/miss you would
get with a same-res standard, I can say that a pixel hit <n> out of 16
of the high-res pixels. This can give some rough measurement of the
goodness of the match.
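The hit-counting idea can be sketched as follows. This is a toy planar
example with a hypothetical 4x standard, not any existing registration
tool:

```python
import numpy as np

SCALE = 4  # high-res standard has 4x the resolution: 1 pixel -> 16 pixels

def goodness(warped_mask, template_mask):
    """Per low-res pixel, the fraction of its 16 high-res template pixels
    actually hit by the warped image, instead of a binary hit/miss."""
    hits = warped_mask & template_mask
    h, w = template_mask.shape
    blocks = hits.reshape(h // SCALE, SCALE, w // SCALE, SCALE)
    return blocks.sum(axis=(1, 3)) / SCALE**2

# Toy 8x8 standard, i.e. a 2x2 low-res image.
template = np.ones((8, 8), dtype=bool)
warped = np.zeros((8, 8), dtype=bool)
warped[:4, :4] = True     # first low-res pixel lands perfectly (16/16)
warped[0:2, 4:8] = True   # second hits only 8 of its 16 high-res pixels
print(goodness(warped, template))
# first pixel scores 1.0, second 0.5, the remaining two 0.0
```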
>impression is that there is too much individual variation. As for
>labelling, I believe the only acceptable way is to use probabilistic
>(multi-subject) templates, of which there exist more or less official
>examples. These give any point a probability of belonging to a certain
>structure, with the corresponding uncertainty estimate.
>>The biggest problem, as I see it, with any atlas is the accuracy. I know
>>of people looking at areas 3mm across, but in imaging accepting anything
>>6mm around the area of interest to be spot on the area. I even got into
>>a bitter discussion with somebody over it (I say 6mm is spot on; (s)he
>>says no, it's 3mm away and therefore in the wrong region). I think what
>>we need is a measure of confidence, say 3mm +/- 1mm at a 90% confidence
>>interval. That, together with good labelling on the template, will give
>>me more confidence in my result.
>First there is the physiological issue of co-localization of neuronal
>firing and the BOLD signal, then partial volume effects.
Well, if we start introducing these variables, things are going to get
complicated. Hence, I was restricting myself to the matter of
measurement/computation error introduced in transforming the images to
the templates. I assumed that the activated voxels are exact for the
sake of argument.
>smoothing to sensitise statistics, you're certainly not in the 1mm
>range. That is, you have a parametric map coregistered to the
>individual's anatomy, but there's still significant uncertainty as to
>where the blob actually is. In group studies there is the added layer of
>individual differences (the "fuzzyness" in the average). This is
>regardless of any templates or atlases, right?
Agreed. I have also achieved my objective here of bringing the argument
full circle. If the fuzziness in the actual location is bad, say 3mm,
does the actual distortion in the templates matter?
What I think we need is not a "fingerprint" standard ("his fingerprint
is on the scene") but a "DNA" standard ("there is a 99% probability I'm
not the father of the child").
>>So with transformation the other way round. The thing is, the more
>>transformation you do, the less accurate your result. Transforming from
>>subject space -> MNI ->Talairach is going to introduce more uncertainty,
>>than a direct subject space -> Talairach alone.
>Are you saying you do subject space -> "true" Talairach directly?
In a sense, yes. What I'm trying to say is that the more transformations
you do, the less accurate the result. Hence, if you can, try skipping
the intermediate transformation.
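Assuming the registration errors of the two steps are independent, the
chained uncertainty combines in quadrature, which is why the two-step
route is worse. A back-of-envelope sketch; the 2.0mm and 1.5mm figures
are invented for illustration:

```python
import math

def chained_error(*step_errors_mm):
    """Root-sum-square combination of independent registration errors (mm)."""
    return math.sqrt(sum(e**2 for e in step_errors_mm))

direct = chained_error(2.0)          # subject space -> Talairach in one step
two_step = chained_error(2.0, 1.5)   # subject space -> MNI -> Talairach
print(f"direct: {direct:.2f}mm, two-step: {two_step:.2f}mm")
# the two-step route is 2.50mm against 2.00mm for the direct one
```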
>Starting with AC-PC-line etc.? If you do it automatically, where did you
>find a Talairach-MR?
Not really sure how John Suckling did it, but over here our
transformation goes directly to Talairach. Having said that, I know he
is experimenting with MNI.
>>>Why are we so bent on giving coordinates relative to the AC of a single
>>>old woman? Because we want to use the atlas to give names to our blobs,
>>Conversely, why so bent on giving coordinates relative to a smooth (read
>>fuzzy) average brain image of 152 subjects? When transforming to a
>>single brain, although the brain is distorted (with or without
>>formaldehyde), it might be better if the actual brain coordinate can be
>>determined accurately. Ever consider this?
>I'm not sure I follow here.
Well, if I read you correctly, your original statement says that it is
not a good idea to use a single person's brain as a standard. I am
turning the argument around and asking why you think using a fuzzy
average brain is any better.
>Any single coordinates in either of the two
>"spaces" are just as "accurate".
>Using an average reference, we can
>state the coordinate and leave it there.
I disagree. You are ignoring the fuzziness of an average reference. What
I would prefer is a confidence measure to go with the coordinate.
>Names matter to us, so we might
>try to give a name to the location on the basis of the "fuzzy" average
>(with appropriate reservations as to how accurately we are able to do
>it), or we might look at the results on individual brains in the same
>coordinate system as the results; see if we can more easily identify a
>particular sulcus/gyrus/whatever in them. If we're really lucky, we
>might have a probabilistic label atlas in the same space and get a 90%
>CI of being where we think we are (in a population-wise average sense).
>If we're extremely lucky, we may have an experienced neuroradiologist
>nearby to give us the answer!
Well, I for one do not have that luxury. ;(
Everything we do is a trade-off. As an engineer, I'm always thinking
about automation. Automation is the quasi-science field where engineers
try to reduce a fuzzy set of measurements to a yes/no answer.
It's like the FDA software standard. It requires a yes/no test (no
maybes) for a machine. But, to qualify for the standard, the engineer
must argue the case for why a fuzzy set of values can be interpreted as
the yes/no answer he is giving. The idea is that technicians are less
well trained for this kind of decision. Hence, it is necessary for
experts to argue the case out, then reduce it to a simple decision for
them.
Now, given this example, are we, the software developers, going to treat
our end users as technicians? If so, we must give them an answer: yes,
it's in region A. If not, we can give them a fuzzy answer (well, 90%
sure it is region A). I prefer the latter.
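The two ways of treating the end user can be contrasted in a couple of
lines; the 0.9 cut-off is an arbitrary choice of mine, not a number from
any standard:

```python
def technician_answer(p_region_a, threshold=0.9):
    """Reduce the fuzzy value to the yes/no a technician expects.
    The threshold is where the expert's argument gets frozen in."""
    return "yes, region A" if p_region_a >= threshold else "no"

def expert_answer(p_region_a):
    """Keep the fuzziness: report the confidence alongside the label."""
    return f"{p_region_a:.0%} sure it is region A"

print(technician_answer(0.9))   # yes, region A
print(expert_answer(0.9))       # 90% sure it is region A
```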
>I'm going to stick with my opinion that any of the above approaches is
>"better" than squeezing my brain into an arbitrary, well-labelled
>individual (Talairach) to read off names on the basis of coordinates.
>should point out that I'm all for any kind of atlas to guide the
>identification of brain structures. There are many good ones around that
>provide sufficient landmarks (in individuals) for me to be able to "find
>my way" in my own MR-brains.
>>I suggest adding an extra dimension, since you claim to be a geek:
>>Understand the errors introduced in the acquisition of data and how they
>>affect the transformation etc. This is because in the end, it comes back to
>>how well can you trust the data.
>You lost me here, I'm afraid. I don't think this relates to Talairach VS
>MNI, does it?
No, but it means you get a feel for how well-behaved your data is. I can
give you a gold standard of what a circle is, but if your data cannot
distinguish between a circle, an oval and a square, it is useless.
Moreover, it is your data that counts. If it is bad, no standard can
help you. It is always the raw data that rules the day!
>>Let's just say Talairach is the de-facto standard. One way of seeing it
>>as the "standard space" is that it is the reference to which all other
>>coordinate system can be compared to. It might or might not be the
>>lowest standard, but at least its a standard useful for comparing. As a
>>universal "standard", it is going to be much better than everyone
>>finding their own standard to compare to.
>I think I can see your point, and mostly agree. But I don't think
>Talairach coordinates can be compared across papers, unless the exact
>method by which they were obtained is reported (such as mni2tal).
I think that even with the exact method, we might not be able to. It is
true that using the exact analysis method will remove more uncertainty
about the software analysis process, but there is still a big variable
we call the scanner. That is the raw-data argument I'm making above, and
why I think you should consider the error in acquisition.
>is more than one way to fit a square into a circle (please excuse the
>There is one thing (at least) I have overlooked: If by "Talairach", one
>simply means that the origin is at AC (true for MNI and Talairach), then
>one might say such coordinates are comparable. I still think it's a bit
>confusing, though, since the names one gives to the coordinates
>certainly aren't comparable. If you take your MNI (x,y,z)mm, say in the
>inferior temporal lobe, and look it up in the Talairach atlas, you will
Purely from the point of view of designing a standard, I don't really
care where your origin is. Maybe what we need is a table that says
(x,y,z) in MNI is (a,b,c) in Talairach. I'm a software engineer, so I
don't get involved in interpreting the data. It is troubling to say that
you cannot find the correspondence in Talairach, because we are actually
comparing a human brain with a human brain. But I would agree that a
one-to-one mapping is impossible for the outermost layer of the brain.
>Despite using the same standard coregistration reference (the MNI152),
>even SPM and FSL "standard" coordinates might not be /precisely/
>comparable (very close, no doubt). This is because SPM(2) adds a
>"little" nonlinearity at the end of the fitting, whilst FSL sticks to
>the affine model. I'm not saying this bothers me too much, though.
Hopefully there is not too much difference. I would say that if there is
any difference, I would stick with the FSL one. I'm always troubled by
non-linearity because it relaxes a lot of real-world constraints.
Therefore, FSL gets my vote on the issue of "safety".
>>The question here is whether everyone else is comfortable with doing
>>this. If everyone is using "Talairach" and you are using "MNI", your
>>coordinate system might be superior, but if others have to convert it
>>back to "Talairach", that makes discussion difficult.
>What do you mean BACK to Talairach? Actually, what do you mean by
Suppose I'm not an MNI person. If you talk to me about MNI, I have to
flip through my MNI2tal conversion chart and figure out which part of
the brain you are talking about. It would be a handicap in the
discussion, because I would have an additional dimension to worry about
before I could follow you.
>Perhaps I have been too blunt, let me explain. If I read an FMRI paper,
>in which either FSL or SPM has been used, I know that standard space
>registration has been performed using the MNI152 as reference. If I then
>find that coordinates are given in "Talairach space" and no mention of
>what MNI->Talairach procedure has been used, I get suspicious.
You should, and I would. But what we need here is just a standard MNI ->
Talairach coordinate map, isn't it?
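Something like this already circulates: Matthew Brett's mni2tal
piecewise-affine approximation. A sketch with the coefficients as
commonly published; verify against the original mni2tal before relying
on it:

```python
def mni2tal(x, y, z):
    """Approximate MNI -> Talairach mapping (Brett's piecewise-affine fit):
    one affine above the AC plane, a slightly different one below it."""
    tx = 0.9900 * x
    if z >= 0:
        ty = 0.9688 * y + 0.0460 * z
        tz = -0.0485 * y + 0.9189 * z
    else:
        ty = 0.9688 * y + 0.0420 * z
        tz = -0.0395 * y + 0.8200 * z
    return tx, ty, tz

print(mni2tal(0, 0, 0))   # the AC origin maps to itself: (0.0, 0.0, 0.0)
```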
>alternatives: 1) they are actually "pure" MNI coordinates, 2) mni2tal
>(or something similar) has been used, or 3) by "Talairach", they simply
>mean that the origin is at AC.
>I maintain that, for the reasons I've stated, the MNI152 provides a more
>comparable coordinate system for FMRI activations than the Talairach.
I haven't used MNI before, so I cannot comment. Sorry.
>related matter is that of labelling. In the case of an individual, I
>would currently "manually" label my activations or perhaps use a
>surface-based template (a la FreeSurfer). In group studies, I would look
>at the group blobs on co-registered individuals and on the "fuzzy"
>average and report coordinates as MNI.
As long as there is an official MNI->Talairach conversion, I don't care
which one you are working in. But what I would like is a measure of
confidence to go with it.
>Perhaps we don't quite understand each other, Cinly?
I understand your point. I think we are looking from different
standpoints. Yours is the neuroanatomy and its interpretation; mine is
how you characterise the accuracy of your localization. For me, any
template is useless if you are considering areas smaller than 1/2 the
dimension of your acquisition data, as scientifically you cannot infer
anything smaller than 1/2 its dimension unless you do something fancy. I
once argued with the Science Editor of a reputable magazine about GPS,
precisely over different viewpoints: his, a layman's, and mine, an
engineer's, where accuracy/reliability is paramount.
Aside: Ever wonder why precision bombs seem to miss their targets?
That's because the standard for a perfect hit is hitting within a given
radius of the target 50% of the time.
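That standard is the circular error probable (CEP): the radius within
which half the shots land. A quick sketch of estimating it from observed
miss distances; the sample figures are invented:

```python
import statistics

def cep(miss_distances_m):
    """Circular error probable: the median miss distance, i.e. the
    radius containing 50% of the impacts."""
    return statistics.median(miss_distances_m)

misses = [3.0, 5.0, 8.0, 12.0, 20.0]   # made-up miss distances in metres
print(cep(misses))                     # 8.0: half the drops land within 8m
```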