2009/5/15 Jane Buckle <[log in to unmask]>:
> Hi David,
>
> I've been using USEDETPOS=FALSE to make my scripts close to the output of
> the ORAC-DR pipeline, which, as Brad informed me in January:
>
> "MAKECUBE didn't have its default changed, I don't think. ORAC-DR was
> modified to explicitly set usedetpos=no maybe a month ago. "
In theory, the value used for USEDETPOS shouldn't make any difference
to the output cube. It's really just there as a sanity check, since
any significant discrepancy between cubes produced using different
USEDETPOS values indicates a bug somewhere. Exactly as it did in this
case.
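One way to run that sanity check, assuming a Starlink/SMURF environment (the file and cube names here are illustrative, not taken from the thread):

```shell
# Build the same cube twice, once with each USEDETPOS setting.
makecube in=^files.lis out=cube_detpos usedetpos=yes
makecube in=^files.lis out=cube_nodetpos usedetpos=no

# Difference the two cubes and inspect the residuals with KAPPA.
sub in1=cube_detpos in2=cube_nodetpos out=diff
stats diff
```

Residuals much larger than rounding error in the stats output would point at a bug of the kind found in this thread.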
David
> Cheers,
>
> Jane
>
> David Berry wrote:
>>
>> I think I may have got to the bottom of this...
>>
>> Our statement that makecube has never assumed 16 detectors is true,
>> but prior to 3/12/09 makecube had a built-in assumption that the group
>> of input files was internally consistent. That is, all input files
>> contained data for the same set of detectors, whatever that set may
>> be.
>>
>> So if you run just the 12-detector data through makecube you would get
>> a decent map, and if you ran just the 16 detector data through
>> makecube you also would get a decent map. But if you process both 12
>> and 16 detector data together in a single invocation of makecube, the
>> output cube is bad.
>>
>> This is caused by the fact that, pre-3/12/09, makecube calculated the
>> input pixel->focal plane position transformation for the first input
>> file, and then cached this transformation for use with all later input
>> files. After 3/12/09, makecube re-calculates the transformation each
>> time a new input file is encountered.
>>
>> A further twist is that all this only applies if you override the
>> default value for parameter USEDETPOS, as Jane was doing. The default
>> value of TRUE for USEDETPOS results in a different scheme being used
>> to calculate the transformation, which was not affected by this
>> caching problem. The problem seen by Jane disappears - even when using
>> the lehuakona release - if you retain the default value for USEDETPOS.
>>
>> David
>>
>> 2009/5/12 David Berry <[log in to unmask]>:
>>
>>>
>>> 2009/5/12 tim.jenness <[log in to unmask]>:
>>>
>>>>
>>>> On May 12, 2009, at 4:17 AM, Jane Buckle wrote:
>>>>
>>>>>
>>>>> Hi David,
>>>>>
>>>>> I've tried it again without using mfittrend. Obviously there is more
>>>>> striping in both cubes, since baselines are not removed at any stage,
>>>>> but the version with makecube on all the files (picture A) still has
>>>>> the checkerboard structure where emission is strong, while the
>>>>> makecube+wcsmosaic version (picture B) does not. So, the cause of this
>>>>> is not mfittrend.
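For reference, the two reduction paths being compared can be sketched as follows, assuming SMURF's makecube and KAPPA's wcsmosaic (names are illustrative and exact parameters may need adjusting locally):

```shell
# Path A: a single makecube call over all the raw files together.
makecube in=^all_files.lis out=cubeA

# Path B: one cube per observation, combined afterwards with wcsmosaic.
makecube in=^obs1.lis out=cube1
makecube in=^obs2.lis out=cube2
wcsmosaic in='cube1,cube2' out=cubeB
```

Path A shows the checkerboard artefact where Path B does not.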
>>>>>
>>>>> What is the problem that I see in the reduced data? It looks to me
>>>>> like makecube is not spatially gridding the data correctly if it is
>>>>> supplied with files that have different numbers of pixels along the
>>>>> spectral axis.
>>>>>
>>>>
>>>> By default the spectral extent of the output cube will be the
>>>> intersection of the input spectral ranges, not their union (see the
>>>> SPECUNION and BADMASK parameters).
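So, to retain the union of the input spectral ranges instead, something along these lines should work (a sketch based on the MAKECUBE documentation; check the parameter values against your local version):

```shell
# Output cube spans the union of the input spectral ranges; BADMASK
# controls how bad pixels are combined where the inputs overlap
# ("and" is one of the documented options).
makecube in=^files.lis out=cube specunion=yes badmask=and
```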
>>>>
>>>>>
>>>>> The other change that has been made between the two dates on which
>>>>> this data was taken is that HARP receptors that have been turned off
>>>>> are no longer written to the file. So, one set of raw data has 12
>>>>> receptors, while the other set has 16. Could this be causing a
>>>>> problem?
>>>>>
>>>>
>>>> MAKECUBE has never cared about the number of receptors.
>>>>
>>>>>
>>>>> Are you able to run the files through the released version of the
>>>>> software to try and find out?
>>>>>
>>>>
>>>> We can get hold of lehuakona and use it. From the sound of it David
>>>> has already tried.
>>>>
>>>
>>> Yep. Lehuakona failed for me in the same way Jane describes. Just
>>> before close of play today, I started to do a git bisect to pin down
>>> the revision at which the problem disappeared. I had a look through
>>> all the smurf changes that I have introduced since last October, and I
>>> only found one (61acbe78525cce8d553d69a4f20fb3a259becbb6) that seemed
>>> like a likely culprit - if that's the right word. But when I tried
>>> going back to the version just before, there was still no sign of the
>>> problem Jane describes. Which is why I'm trying a git bisect.
>>>
>>> David
>>>
>>>>>
>>>>> Do I need to stop reducing any data obtained on the GBS survey to date
>>>>> until the new starlink software collection is released?
>>>>>
>>>>
>>>> I thought that MRAO were going to rsync the JAC version (as discussed
>>>> back on March 25th in a conversation with Dave T)? That's got fully
>>>> up-to-date versions of starlink and oracdr ready for testing at any
>>>> time.
>>>>
>>>> If any one else is interested you can use:
>>>>
>>>> $ rsync starlink.jach.hawaii.edu::
>>>>
>>>> starlink.i386 Starlink software for i386 (32 bit) systems
>>>> starlink.x86_64 Starlink software for x86_64 (64 bit) systems
>>>>
>>>> so for example something like
>>>>
>>>> $ rsync -avz --delete --exclude=local
>>>> starlink.jach.hawaii.edu::starlink.x86_64/ star/
>>>>
>>>> should get the whole JAC 64-bit system (including starjava).
>>>>
>>>> With the caveats that
>>>>
>>>> 0. You have to be running a Linux that is compatible with CentOS 5
>>>> (RHEL 5).
>>>>
>>>> 1. This is bleeding edge so I can't guarantee not to have broken
>>>> something for any given rsync. If something is broken, try again a
>>>> little later and if breakage continues let us know.
>>>>
>>>> 2. The 32-bit version is not kept as up-to-date since it is not a
>>>> version that we use at JAC very much. In particular oracdr may not run
>>>> reliably as I don't always instantly update the perl distribution.
>>>>
>>>> --
>>>> Tim Jenness
>>>>
>
> --
>
> Dr Jane V. Buckle
> Astrophysics Group, Cavendish Lab, J J Thomson Avenue, Cambridge CB3 0HE
> Tel: +44 (0)1223 337298
>