Jens,
   you make some very relevant points.

Also, let me add the choice of precision of the arithmetic in the problem.
If I am not wrong, an arbitrary limit of 30 MW has been proposed for an exascale machine.
By that I mean that an exascale machine is perfectly possible at the moment - it would just consume a huge amount of electrical power.
I read a very good paper a few years ago which discussed the choice of arithmetic precision for problems in terms of "picowatts per flop".
I believe software authors will have to pay more attention to the choice of precision (floating-point or integer) in their computations.
For instance, we see that machine learning workloads on GPUs quite happily use single precision.
I have lost the reference to this paper (sorry).
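To make the precision trade-off concrete - purely as a toy sketch, with made-up array sizes - the memory (and hence bandwidth, and a good part of the energy) cost of single vs. double precision can be seen with nothing more than Python's stdlib array module:

```python
from array import array

n = 1_000_000  # one million values (arbitrary illustrative size)

# 'f' is C float (4 bytes on typical platforms), 'd' is C double (8 bytes).
single = array('f', [0.0]) * n
double = array('d', [0.0]) * n

print(single.itemsize * len(single))  # 4000000 bytes
print(double.itemsize * len(double))  # 8000000 bytes
```

Halving the bytes moved per value is one reason the ML workloads mentioned above get away with single precision so cheaply; the flip side, of course, is the reduced accuracy, which is why the choice has to be made per problem.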


On another note, I believe that Formula 1 teams were considering a limit on the power budget for CFD simulations.
Of course there is much dancing on pins going on there - do you consider only the power to the rack (which you can measure easily at the PDU)?
But how about the power used for cooling?  I guess you have to consider your entire data centre as an adiabatic box and meter the overall power in.
Checking the 2017 FIA regulations, though, there is still a FLOPS cap.
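Incidentally, "meter the whole building and compare it to the power at the PDUs" is essentially what the standard PUE (Power Usage Effectiveness) metric does. A trivial sketch, with entirely hypothetical figures:

```python
# Hypothetical readings - any real numbers would come from facility metering.
it_power_kw = 250.0         # measured at the PDUs feeding the racks
total_facility_kw = 375.0   # metered at the data-centre feed, incl. cooling

# PUE = total facility power / IT equipment power; 1.0 would be "perfect".
pue = total_facility_kw / it_power_kw
print(pue)  # 1.5
```

So under a power-capped regime, a team with a PUE of 1.5 would effectively lose a third of its budget to overheads before a single cell of CFD mesh is computed.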

On 9 January 2018 at 11:58, Jensen, Jens (STFC,RAL,SC) <[log in to unmask]> wrote:
Hi all,

Following up from yesterday's workshop, where Phil invited suggestions
for thoughts on strategy, here's one of mine.

In the future, publicly funded research will get two grants: one in
money and one in CO2. Think of when you book your flight and they tell
you about the CO2 "you" will produce by travelling, to make you feel bad
and donate to their carbon offset fund, so you can forget all about it
and just travel.  Maybe in the future cloud data centres will be forced
to do the same (i.e. advertise CO2 cost and let customers offset it
against their budget or by a carbon offset levy) - we're already seeing
stories in the media about how much of the world's electricity goes into
data centres.

People have researched lots of ways to greenify computing - off the top
of my head:

  * FP7 and H2020 projects have looked into selecting "greener" cloud
    services (OPTIMIS springs to mind, it was one of four basic parameters)
  * Custom-programmed FPGAs, lower powered (and slower) devices for less
    urgent results (e.g. results which would wait for dependencies in
    the workflow anyway); tapes for cold data, and MAID for cool data.
  * Transiently powered computing (e.g. solar)
  * CMS looked at event processing by kW instead of by hour.
  * Choose a different region (far away) and let them have the CO2 ;-P
  * (or rather a region with greener power or cooling, like the Iceland
    data centres)

We saw yesterday how a cluster in the cloud could grow on demand up to a
limit but its aim for doing so was to keep the cost (£ or € or $) down,
not the carbon.

So which is the right kind of green - it can't just be a raw CO2 budget,
or can it?  But finding the roadmap to the right shade of green should
be a strategic objective, so we have a response when they come and ask.

Cheers
--jens

--
Dr Jens Jensen
Mad Scientist, Scientific Computing Department, STFC (www.stfc.ac.uk)
Rutherford Appleton Laboratory, Harwell Oxford Campus, OX11 0QX, UK
T/F +44(0)1235 446104/5945