Hi Jens,
Since we didn't have time at this morning's meeting, I'd just like to add
that I got Matt Hodges at RAL to change the STAR-IC FTS channel so that it
uses srmCopy rather than the usual 3rd-party GridFTP (urlcopy) mode. This
was done because IC-HEP were seeing poor transfer performance when using
FTS to transfer into and out of their dCache, whereas things were fine
when srmcp or PhEDEx was used.
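For reference, the change would have been made with the FTS channel admin
tools, roughly along these lines (a sketch only -- the exact option names
vary between gLite FTS releases, so check your version's CLI help; the
"-T" option shown here is my assumption, not verified):

```shell
# Inspect the current settings of the channel (real command in gLite FTS):
glite-transfer-channel-list STAR-IC

# Switch the channel's transfer type from urlcopy (3rd-party GridFTP)
# to srmcopy. NOTE: the option letter for the transfer type is an
# assumption -- consult "glite-transfer-channel-set --help" on your server.
glite-transfer-channel-set -T srmcopy STAR-IC
```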
Before the change, I was getting a transfer rate of ~138Mb/s from
Edinburgh to IC-HEP. With srmCopy enabled, the rate jumped up to ~240Mb/s
(in both cases a single stream, multiple files). There was also
significantly less inter-disk-server traffic, since in srmCopy mode the
data goes directly to the disk pool where it will reside rather than
first being routed through a GridFTP door.
However, there are a couple of issues still to be resolved:
1. The FTS server leaks file descriptors when running in srmCopy mode.
This is a known issue, but I'm not sure of the status of a fix. Maybe in
gLite 3.0...
2. The destination dCache at IC started producing errors in its logs
during the transfer, and eventually all transfers stopped. We only
noticed this yesterday, so I still need to investigate what is causing
the problem.
Also, IC are still seeing the problem of high CPU I/O wait on their disk
servers. This was seen with FTS in urlcopy mode and is still seen in
srmCopy mode, only now at the higher transfer rate. They do not see high
I/O wait when they use the dCache srmcp client. It is strange, though,
since no other GridPP dCache site sees this effect when using FTS.
I'll keep on investigating.
Cheers,
Greig
On Tue, 16 May 2006, Jensen, J (Jens) wrote:
> http://agenda.cern.ch/fullAgenda.php?ida=a062408
>
> Now uploaded.
>
> Cheers,
> --jens
>
--
=======================================================================
Dr Greig A Cowan http://www.ph.ed.ac.uk/~gcowan1
School of Physics, University of Edinburgh, James Clerk Maxwell Building
TIER-2 STORAGE SUPPORT PAGES: http://wiki.gridpp.ac.uk/wiki/Grid_Storage
=======================================================================