Just as an update: I tried using read_hdr and write_hdr_raw to change the offset manually in MATLAB, but when I looked at the resulting image in Display it was all white, so that didn't work. Instead I used a workaround that I believe is valid (please speak up if you think otherwise): I segmented the single-subject T1 in the canonical directory and used the resulting parameters as a "transform" to bring the ROIs into template space. As far as I can tell, it worked very well.
I am using a program called MedINRIA for DTI fiber tracking, and SPM to normalize the data into MNI space prior to running analyses. I also used WFU PickAtlas to define my ROIs. I segmented the initial DTI image to obtain transformation parameters and normalized it into template space (using the default normalization parameters). I also used a script found here (http://www.cs.ucl.ac.uk/staff/gridgway/vbm/resize_img.m) to reslice the PickAtlas ROIs into the same bounding box as the normalized DTI image.

However, when I loaded the images in MedINRIA, they lined up perfectly in the 2D views, but in the 3D views the ROI sat quite a distance outside the brain. Investigating further (using spm_read_hdr), I noticed that the offset for the normalized DTI images was 352, while for the ROIs it was 0. That was the only difference I could find, and I assume it is what is causing my problem. How can I change this value in the .hdr file, and should I change the DTI image or the ROI? Thanks.
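In case it helps anyone wanting to inspect or patch that field directly: in both the ANALYZE 7.5 and NIfTI-1 header layouts, vox_offset is stored as a float32 at byte 108 of the header. Below is a minimal sketch in plain Python (standard library only, rather than SPM's MATLAB routines); the helper names `read_vox_offset` and `write_vox_offset` are my own, and it assumes a little-endian header, so treat it as an illustration, not a drop-in fix:

```python
import struct

VOX_OFFSET_POS = 108  # byte position of the float32 vox_offset field in a NIfTI-1/Analyze header


def read_vox_offset(hdr_bytes, endian="<"):
    """Return the vox_offset value from raw header bytes (default little-endian)."""
    return struct.unpack_from(endian + "f", hdr_bytes, VOX_OFFSET_POS)[0]


def write_vox_offset(hdr_bytes, value, endian="<"):
    """Return a copy of the header bytes with vox_offset set to `value`."""
    buf = bytearray(hdr_bytes)
    struct.pack_into(endian + "f", buf, VOX_OFFSET_POS, float(value))
    return bytes(buf)


# Demo on a synthetic 348-byte header (all zeros), standing in for a real .hdr file:
hdr = bytes(348)
hdr = write_vox_offset(hdr, 352.0)
print(read_vox_offset(hdr))  # 352.0
```

For a real file you would read the .hdr into bytes, patch it, and write it back; but note that whether the viewer honors a changed vox_offset at all is a separate question, which is why my header-editing attempt above may still fail.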