On 4 November 2014 23:21, Tim Jenness <[log in to unmask]> wrote:
> With this patch and my most recent commits, I can now run ccdexercise and
> the kappa, ccdpack, pisa and specdre benchmarks from starbench.
>
> That's the good news. Bad news is that v5 is about 20 percent slower.
Hmm - not too bad. I'd feared it might be worse than that.
> Some
> of that is from datVec slices (which I knew might be an issue and I know how
> to solve them if I think a lot) and some is from datMap not actually memory
> mapping anything. datMap is a tricky one. H5Dget_offset will get me the
> offset into the file that I can mmap() but that only works if chunking is
> disabled. Unfortunately, in order to be able to resize data arrays (happens
> a lot in places like ARY, AGI and PCS, trust me) you need to flag HDF5 data
> sets with a maximum size that is not the same as the initial allocated size
> (I mark them as unlimited but the value is irrelevant). As soon as you mark
> a dataset for growth you lose the ability to mmap() it. HDS has no way with
> datNew to allow you to indicate whether a primitive will be expected to grow
> or not so I have to allow for growth. Bit of a conundrum. Adding a new
> datNewFixedSize() API won't help as it would be a huge task to audit every
> call to datNew to find out which ones can be allocated a fixed size...
>
> I need to run a profiler: what's a good single application that does a lot
> of HDS I/O that I can attach a profiler to? SMURF makemap isn't bad but most
> of the I/O is at the start and end of the run time.
You can get makemap to do HDS I/O on every iteration by using the
diag.out or itermap config parameter.
David