Brian,

  I think I'm getting the hang of it. The code in the fortran interface is 
the culprit since it relies on HDS_PTYPE == F77_INTEGER_TYPE.

  The comment in hds1.h about the file format changing when HDS_PTYPE is 
set to long long worries me a bit, since it implies that a file created 
with HDS_PTYPE == long long won't be readable on the same computer where 
HDS was compiled with HDS_PTYPE == int.

  If that is true then HDS_PTYPE is extremely dangerous. Does HDS spot the 
problem? Does the file format say whether HDS_PTYPE is 8 bytes or 4 so 
that it can convert?

  Can a new-format file be read on a system that lacks an 8-byte INT_BIG 
(but is compiled with the current codebase)? Or does this switch to HDS64 
mean that we are now locked into 8-byte integers? Currently the test for 
INT_BIG == long long does not allow for 'long long' being unavailable. 
Should the build fall back to a 4-byte long for INT_BIG, or should it 
refuse to compile, since it won't be able to read the files anyway?

  I've had a go at building with HDS_64 defined. It fixes the signedness 
problem, since HDS_64 seems to be designed precisely for the case where 
sizeof(HDS_PTYPE) != sizeof(Fortran INTEGER). Once I'd fixed a problem in 
the fortran interface (dat_shape was not getting its return values), 
hds_test.f now dumps core when it annuls the first HDS locator:

#0  0x00a92d40 in dat1_cvt_dtype (bad=-1207970304, nval=200, imp=0x8de8908, exp=0x0, nbad=0xbfebe874) at daucnv.c:183
183                          des[n] = (_INTEGER) src[n];        /* Overflow? */
#1  0x00a96bb4 in dat1_cvt (bad=1, nval=200, imp=0x8de6d7c, exp=0x8de6d88, nbad=0xbfebe874) at dautypes.c:148
#2  0x00a95085 in dau_flush_data (data=0x8de6d40) at dauflush.c:109
#3  0x00a8f551 in datUnmap (locator_str=0xb7ffd600 "", status=0xbfebe92c) at datmap.c:532
#4  0x00aa3631 in dat_unmap_ (locator=0xbfebe930 "8mb\177\177\177\177\002", status=0xbfebe92c, locator_length=15) at fortran_interface.c:1905
#5  0x08048a58 in MAIN__ () at hds_test.f:83

I'm assuming the problem is that exp is a null pointer on entry and 
daucnv.c does not check for this error condition. 'exp' has a value in the 
calling frame (dat1_cvt) but is null by the time it is passed to 
dat1_cvt_dtype. This is well out of my depth, though.

  So to summarise:

   * Must HDS_PTYPE be identical on all systems that use HDS64 for the
     files to interoperate?

   * Should HDS_PTYPE always be a fortran int?

  I hope you have time to answer my worries.

Tim

On Wed, 7 Sep 2005, Tim Jenness wrote:

> Okay. The problem seems to be in HDS_PTYPE. Since it was a dimension I made 
> it unsigned rather than signed but that has caused big problems.
>
> Brian: any comment?
>
> I don't know why but I'll revert that change in CVS and then try to see if 
> it's an obvious problem with the fortran interface somewhere (which has a 
> signed/unsigned issue).
>
> Tim
>
> On Wed, 7 Sep 2005, Tim Jenness wrote:
>
>> 
>> Warning: I think HDS (HDS64 after merge and tweaks) broke in the past few 
>> days. I haven't worked out when but if I do an fits2ndf on
>> 
>> http://www.jach.hawaii.edu/~timj/c18o.fit
>> 
>> I get a file that is listed as 137439196672 bytes. It really did take up 
>> that much disk space, but a reboot and journal fix listed the file as 245760 
>> bytes (the correct size); it still took up 137GB until it was rm'ed.
>> 
>> I'm going to have to try to rebuild HDS from the past few days to work out 
>> exactly when it broke.
>> 
>> 
>
>

-- 
Tim Jenness
JAC software
http://www.jach.hawaii.edu/~timj