Hi Ian,
I wouldn't be surprised if it happens. Sometimes recommendations are made
with one use case in mind. I don't know what the effects of the
bandwidth tests may be. Perhaps they will be small enough to be ignored,
and they may differ depending on scheduling: if the bandwidth tests are
all scheduled at the same time there should be a peak every 4h, whereas
if they are randomly distributed their effect should be spread out as
well. And if, as you say, the effects of the bandwidth tests are not
small enough to be ignored or identified as peaks, there is always time
to discount the results or turn the tests off, which is easier than
rushing to install them when there is a problem.
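To put rough numbers on the scheduling point, here is a back-of-envelope sketch. The 40 s test duration is from Ian's message; the 4 h interval is from my comment above; the community size of 20 peers is purely a hypothetical assumption for illustration.

```python
# Back-of-envelope sketch (illustrative assumptions, not measurements):
# if each bandwidth test runs for ~40 s every 4 h against each of N peers,
# what fraction of the time is the host's link busy with tests?

TEST_DURATION_S = 40       # one bandwidth test (per Ian's note on ramp-up)
INTERVAL_S = 4 * 3600      # tests repeat every 4 hours
N_PEERS = 20               # hypothetical community size

# Count both the tests we initiate and the tests peers initiate against us.
busy_fraction = 2 * N_PEERS * TEST_DURATION_S / INTERVAL_S
print(f"link busy with tests ~{busy_fraction:.1%} of the time")
```

With these numbers the link is test-loaded roughly a tenth of the time, which is why randomly distributed schedules smear the effect on latency results while synchronised schedules concentrate it into visible peaks.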
I reason in terms of effects vs usefulness. Even if I had only one
machine I'd still want to set the latency tests up at my site, because
serious packet loss should be far more visible than any background a
30-40s bandwidth test might cause. Bham is currently showing 0.04% and
0.10% packet loss with two sites and none with the others. Can this be
considered background? When Manchester had packet loss due to the rack
switches not using all the links, it was in the region of 75% -
measured manually - so 0.04% or 0.10% are negligible.
This is of course my view.
cheers
alessandra
P.S. Very often even explicit recommendations need some investigation.
On 12/07/2012 11:41, [log in to unmask] wrote:
> Of course you _can_ do whatever you like.
>
> However, if you insist on going against the explicit recommendations of the test suite developers, in ways that a moments thought tells you is likely to give unreliable results, then don't be surprised if you get anomalous results.
>
> --Ian
>
> On 12 Jul 2012, at 11:37, Alessandra Forti wrote:
>
>>> The latency tests will not significantly disrupt the bandwidth tests, so from that point of view there is no problem, but they will look worse than they should, in a way that is dependent on exactly how many other tests you run. In particular you may see apparent lost packets suggesting you have a problem where you do not.
>> But then if they really do have packet loss they have nothing to measure it with. I think it is better to set these tests up and see how big the effect of the bandwidth tests really is than not to set them up at all.
>>
>> cheers
>> alessandra
>>
>> On 12/07/2012 11:19, Ian Collier wrote:
>>> The problem is that the latency tests will be tainted by the bandwidth tests, and end up telling not very much that is useful.
>>>
>>> Even though the individual bandwidth tests run only for a limited time, they add up once you are testing against a community. (And we already increased the time for each of our bandwidth tests to 40 seconds because otherwise they do not have time to ramp up properly.)
>>>
>>> The latency tests will not significantly disrupt the bandwidth tests, so from that point of view there is no problem, but they will look worse than they should, in a way that is dependent on exactly how many other tests you run. In particular you may see apparent lost packets suggesting you have a problem where you do not.
>>>
>>> --Ian
>>>
>>> On 12 Jul 2012, at 11:11, Elena Korolkova wrote:
>>>
>>>> Hi Ian and Ewan
>>>>
>>>> actually that was my understanding at the beginning, and I configured the machine for bandwidth tests only.
>>>> It would be nice if we had some kind of "official" position on what we need to install at sites with one machine.
>>>> Adding latency tests, as Alessandra suggests, is not a big deal.
>>>>
>>>> Many thanks to you and Alessandra.
>>>>
>>>> Elena
>>>>
>>>> On 12 Jul 2012, at 10:43, Ewan MacMahon wrote:
>>>>
>>>>>> -----Original Message-----
>>>>>> From: Testbed Support for GridPP member institutes [mailto:TB-
>>>>>> [log in to unmask]] On Behalf Of Ian Collier
>>>>>> Sent: 12 July 2012 10:37
>>>>>>
>>>>>> You don't want bandwidth and latency tests running on/against the same
>>>>>> machine. (The bandwidth tests running are likely to make the latency
>>>>>> results meaningless.)
>>>>>>
>>>>> Indeed, I thought the plan was for the original six sites with
>>>>> a pair of boxes to run both, and the sites with just the one
>>>>> new machine (that was going to be a gridmon node) would now be
>>>>> set up as a PerfSonar bandwidth box, and those sites just wouldn't
>>>>> have latency measurements.
>>>>>
>>>>> In other words, I think Elena's current configuration is what I'd
>>>>> expect it to be.
>>>>>
>>>>> Ewan
>>>> __________________________________________________
>>>> Dr Elena Korolkova
>>>> Email: [log in to unmask]
>>>> Tel.: +44 (0)114 2223553
>>>> Fax: +44 (0)114 2223555
>>>> Department of Physics and Astronomy
>>>> University of Sheffield
>>>> Sheffield, S3 7RH, United Kingdom
>>
>> --
>> Facts aren't facts if they come from the wrong people. (Paul Krugman)
>>
--
Facts aren't facts if they come from the wrong people. (Paul Krugman)