On Tue, 22 Apr 2008 16:42:46 +0100, Kathy Pearson <[log in to unmask]> wrote:
>I have resolved the problem of slow smoothing by calling Linux locally at
>the command line to create a new subdirectory each time to receive the
>output files. Smoothing then takes 0.12 seconds per volume, or about one
>minute in total for 460 input volumes. It takes an additional 10 seconds
>for a Linux call to move the generated swaf* files into the same directory
>as the waf* files.
>Profiling the smoothing function reveals that much of its time is spent in
>the MATLAB exist function, so I'm not sure what a general cross-platform
>solution for faster I/O might be.
I'm not a high-level Linux guru, but I've played with Linux/Unix for some
time now, and the idea that the speed depends on how many files are in the
directory sounds very strange to me, unless there's something weird going
on with your hard disk.
I often see things proceed more quickly at the beginning of a calculation,
and then a slowdown (I haven't plotted it, but I assume it's reaching
some kind of steady-state rate asymptotically). I've always assumed it's
a virtual memory thing: something like pages getting sucked into RAM, but
then in the steady state pages have to be swept both in and out of RAM.
Again, I don't know enough to say for sure that it shouldn't depend on the
number of files in the directory, but it sounds very strange to me.
>UAB Psychology Dept.