Dear All,
This is an appeal for guidance, both from those who have put out
tenders for CPU nodes and from those who know what makes a good
worker node for ATLAS in particular.
I know procedures vary between institutions, but I have been advised
by our Procurement department to do a "mini tender" involving the five
suppliers who have framework agreements to supply servers to UCL,
asking for the greatest possible CPU power for a fixed price.
The HEPSPEC (HEP-SPEC06) rating is the obvious measure to maximise, but not all
suppliers have the means or inclination to run a specialised benchmark
for a relatively small order, about £40k. How have others done this?
Do you restrict yourselves to the suppliers who already have
experience in dealing with GridPP and can run HEPSPEC themselves, or
do you use other benchmarks or some less direct way of comparing the
CPU rating of the products on offer?
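To make it concrete, the kind of indirect comparison I have in mind
is a single figure of merit: estimated HEPSPEC per pound, scaled from
published SPEC CPU2006 results. A rough sketch in Python (the scale
factor and all the numbers below are placeholders of my own, not
measured values):

  # Estimate total HEPSPEC per pound for a quoted configuration by
  # scaling a published SPECint_rate2006 figure with a calibration
  # factor. The factor is an assumption and would need calibrating
  # against a machine we have actually benchmarked ourselves.
  HS06_PER_SPECINT_RATE = 0.7  # placeholder, not a measured value

  def figure_of_merit(specint_rate, nodes, price_gbp):
      """Estimated total HEPSPEC per pound for one bid."""
      est_hepspec = specint_rate * HS06_PER_SPECINT_RATE * nodes
      return est_hepspec / price_gbp

  # Two hypothetical quotes against the same 40k budget:
  print(figure_of_merit(specint_rate=250, nodes=10, price_gbp=40000))
  print(figure_of_merit(specint_rate=310, nodes=8, price_gbp=40000))

The obvious worry is whether that scale factor is stable enough
across CPU generations to make the comparison fair.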
There are of course other factors affecting job throughput, including
hard disks and RAM. Is there some way of measuring the effect of
these, or would you just set a minimum requirement on both and then
maximise the HEPSPEC? If you would take the latter approach, what is a
sensible trade-off between disk performance and price? Presumably
10kRPM SAS disks will be better than 7.2kRPM SATA, but maybe a striped
(RAID 0) pair of slower disks would be an alternative? And how much disk space do
you allow per CPU core?
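For that last question, the back-of-the-envelope arithmetic I imagine
is something like this (the per-job scratch and overhead figures are
guesses on my part, not official ATLAS numbers):

  # Rough per-node disk sizing: one job slot per core, each job
  # needing some scratch space, plus OS and headroom. All figures
  # are assumptions to be checked, not requirements.
  cores_per_node = 8
  scratch_per_job_gb = 20    # assumed ATLAS scratch per job slot
  os_and_overhead_gb = 50    # OS, logs, headroom

  disk_gb = cores_per_node * scratch_per_job_gb + os_and_overhead_gb
  print(disk_gb, "GB per node =", disk_gb / cores_per_node, "GB/core")

If anyone has better numbers for the scratch requirement, I'd be glad
to hear them.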
If there is anything else you think I should consider that I haven't
asked about, please tell me that too!
Best regards,
Ben
--
Dr Ben Waugh Tel. +44 (0)20 7679 7223
Dept of Physics and Astronomy Internal: 37223
University College London
London WC1E 6BT