My statistics are low but I have a feeling that the OX->Lancs problem
might originate within Lancaster.
I have two iperf servers:
fal-pygrid-46.lancs.ac.uk port 50001
stor007.hec.lancs.ac.uk port 50001
Corresponding to our two "pool node zones", but otherwise both are
close to identical w.r.t. hardware. fal-46 is on SL5, stor007 on
Some simple 1-minute iperf test results to Oxford:
stor007 -> t2se18: 2.08 GBytes 298 Mbits/sec
fal-46 -> t2se18: 63.8 MBytes 8.88 Mbits/sec
stor007 -> t2se08: 1.17 GBytes 168 Mbits/sec
fal-46 -> t2se08: 34.1 MBytes 4.74 Mbits/sec
The difference in rates between the two targets is interesting, but
the difference between the sources is astounding.
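As a sanity check on those numbers: iperf reports the transferred volume in
binary units (GBytes = 2^30 bytes) but the rate in decimal Mbits/sec, so the
reported rate can be reproduced from the volume and test duration. A minimal
sketch, using the figures from the stor007 -> t2se18 run above:

```python
def iperf_rate_mbps(gbytes, seconds):
    """Convert iperf's transferred volume (binary GBytes) and test
    duration into the decimal Mbits/sec rate iperf reports."""
    bits = gbytes * 2**30 * 8      # GBytes are 2^30 bytes
    return bits / seconds / 1e6    # Mbits/sec uses 10^6

# stor007 -> t2se18: 2.08 GBytes over a 1-minute test
print(round(iperf_rate_mbps(2.08, 60)))  # prints 298, matching the report
```

The same arithmetic on the fal-46 runs (MBytes = 2^20 bytes) reproduces the
single-digit Mbits/sec figures, so the reported rates are internally consistent.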
I tried to send my iperf packets down to Chris's server but the
connection was refused. I'll be poking some of those involved in person
over lunch at HEPSYSMAN about this!
On 22 June 2011 17:27, Alessandra Forti <[log in to unmask]> wrote:
> I completely agree with you. I've been running tests for a week because FZK
> was so kind as to leave their server up. And if I change configuration it
> will still be great to be able to test without bothering someone to turn on the
> On 22/06/2011 17:09, [log in to unmask] wrote:
>> I said RAL was prepared to run iperf because it's probably something that
>> would be useful, just as being able to ping site servers is useful. It's all
>> well and good to start things on demand when requested, but it adds a fair
>> bit of inertia into debugging network performance problems which we have
>> seen take many months to resolve. Being able to just ping or iperf a site
>> server when you need to is useful and speeds up debugging no end.
>> There are obviously issues like security exposure and bandwidth
>> consumption to consider.
>> iperf does somewhat increase a site's exposure, as it's adding another
>> daemon that comes with its own set of risks and exposures, but it is a modest,
>> unprivileged server so the exposure is relatively low compared to some of
>> the other things we have sticking through the firewall. There are no doubt
>> ways of locking things down further if necessary.
>> There is also clearly a risk of hogging bandwidth - something to watch for
>> but 1Gb/s is hardly the end of the world for the Tier-1 anyway. I must admit
>> it took us some months to spot a CMS test flow of about 1Gb/s on the CMS
>> CASTOR instance.
>> A perfSONAR deployment will no doubt be very useful, but sometimes you just
>> want to do a few quick tests under certain special conditions.
>> On balance - although this seems to be a request from ATLAS rather than
>> WLCG it seemed worth doing and even if the payoff wasn't huge it seemed no
>> big deal. If something better comes along it will die a death and we can
>> move on to the latest and greatest.
>> -----Original Message-----
>> From: GRIDPP2: Deployment and support of SRM and local storage management
>> [mailto:[log in to unmask]] On Behalf Of Mingchao Ma
>> Sent: 22 June 2011 16:39
>> To: [log in to unmask]
>> Subject: Re: Iperf tests
>> Hi All,
>>>> "Running open iperf-servers would be nice but perhaps not sane in a
>>>> hostile environment.
>>> Of all the services that a Tier 1 has to run and have open to
>>> the internet, they're worried about iperf? Really?
>> Sorry to butt in, but I think they are taking a very sensible approach. The
>> issue here is that you introduce extra risk which you can avoid completely if
>> you have the choice of not running the service.
>> Similar to the Torque issue we just discussed: if the Torque server is not
>> open to the Internet, the exposure surface is reduced significantly.
>> It would be too late to start to worry about an online service after a
>> vulnerability was made public.
>> If you do not need it, do not install it; if you do not use it, do not run
>> it. It will save you a lot of trouble.