On 11/03/11 00:14, Markus Schulz wrote:
> Hi,
Hi!
> after some local discussions concerning the usage of PROOF by WLCG
> sites I decided that it is time to cry for help.
Looking over the email, it is not clear to me whether you refer to
xrootd storage OR to having a PROOF cluster... it is true that both use
cases are built on the same software and daemon (xrootd), and you can
also have BOTH storage and computing on the same hardware (see the
short sketch below). I will answer more explicitly with regard to both
cases.
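For concreteness, a minimal sketch of the two roles of the same daemon
(host names, ports and paths below are made up, not our actual setup):

    # plain xrootd data server (the storage use case)
    all.role server
    all.manager xrootd-redirector.example.org:3121
    all.export /alice

    # the same xrootd daemon acting as a PROOF front-end (the computing
    # use case): load the xproofd protocol plugin and point it at ROOT
    xrd.protocol xproofd:1093 libXrdProofd.so
    xpd.rootsys /opt/root
    xpd.workdir /pool/proofbox
    xpd.resource static /etc/xrootd/proof.conf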
> The reason I want to learn about the current situation is that I
> need to get an idea on how important it is for WLCG storage to
> support this use-case and whether this should be handled at small and
> large sites alike. I know that ALICE uses PROOF, but I assume that
> they rely on local clusters. I found something about the sites in
> Spain, but I am not sure how I have to understand "co-locating PROOF
> farms in the T2/T1", does this mean that they share the same
> storage/computing fabric? Since we are working on DPM I am
> especially keen to hear from the T2/T3 sites what they are
> doing/planning.
>
> Could you please try comment on the following questions/topics:
>
> 0) Do you support PROOF for experiments or have concrete plans to do
> so? If you answer NO, you are done. If you answer YES, please
> continue. How concrete are your plans? Pilot in Y months, Production
> since / in XX months.
We (Institute of Space Sciences, Romania (ISS), RO-13-ISS) contribute
to ALICE only; as such we have xrootd storage and a minimal (symbolic)
gLite storage.
We have a local PROOF cluster, but it is not yet integrated with the
rest of the ALICE analysis facilities. We plan to change this and to
set up a small PROOF cluster (up to 32 cores, something like the sketch
below) for the needs of the local physics group, to be done within at
most 6 months.
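A rough idea of the static PROOF resource file we have in mind (host
names are hypothetical; one "worker" line per core on each node, so 8
lines per 8-core box and 4 boxes would give the ~32 cores):

    # /etc/xrootd/proof.conf
    master proof-master.example.ro
    worker proof-node1.example.ro
    worker proof-node1.example.ro
    worker proof-node2.example.ro
    worker proof-node2.example.ro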
> 1) I am at a T0, T1, T2, T3
T2
> 2) Do you run a dedicated cluster, or does PROOF run on top of your
> production facility? Size?
A dedicated cluster, presently at 16 cores.
I don't think it is possible to run PROOF on top of the worker nodes of
the cluster... if someone has solved the problem of balancing
__computing__ resources between Torque and xrootd, PLEASE tell me how
and share the wisdom :) (I underlined "computing" because you can share
the storage of the worker nodes for xrootd storage without many
problems, as in the sketch below, but computing is something else.)
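For the storage part, what I mean is simply running a plain xrootd data
server on each worker node and pointing it at the existing redirector.
A rough sketch, with made-up host names and paths (the same file is
used by the companion cmsd):

    # xrootd/cmsd config on a Torque worker node joining the storage cluster
    all.role server
    all.manager alice-redirector.example.ro:3121
    all.export /alice
    oss.localroot /scratch/xrootd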
> 3) Do you provide dedicated storage? Would it be very desirable to
> integrate this with your general purpose, grid storage? How do you
> move data in and out of the cluster? Is this seen as a problem?
So... we have dedicated xrootd storage for ALICE (140 TB) (the ALICE
jobs naturally use the data from xrootd) and a gLite storage (2 TB).
We report the xrootd storage details to the gLite monitoring.
Of course it would be great if the two parts (xrootd and gLite storage)
were somehow merged (e.g. having xrootd as a service in the gLite
storage).
> 4) Do you use PROOF with your WLCG-SE? What type do you run? dCache,
> BeStMan, StoRM, ... with XX TBytes.
No, we keep the two storage systems separate.
Thanks,
Adrian
----------------------------------------------
Adrian Sevcenco |
Institute of Space Sciences - ISS, Romania |
adrian.sevcenco at {cern.ch,spacescience.ro} |
----------------------------------------------
> Thanks to all those who will reply and to all who don't, because
> their answer to question 0) is negative.
>
> markus
>
> p.s. I'll try to summarize the feedback....