Hi Jens,
apologies for not replying during the meeting. The EVO window was
hanging and I couldn't click the button.
Anyway, here is my update. Manchester is going to receive the new storage
next week. We are getting 9 data servers, each with 30 (usable) + 2
(parity) + 1 (hot spare) x 2 TB disks, for 540 TB of usable space in
total. At the moment Manchester is using XFS on the 4 x 24 TB data
servers we have, with ~20 TB filesystems each. This used to work really
well on SL4, but on SL5 we have experienced the same problem everybody
else is experiencing, so I'm investigating whether it's worth moving
away from XFS.

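As a sanity check on the capacity figures above, a trivial sketch (the
variable names are just illustrative):

```python
# Sanity check of the new-storage capacity arithmetic quoted above.
servers = 9        # new data servers arriving next week
usable_disks = 30  # per server: 30 usable + 2 parity + 1 hot spare
disk_tb = 2        # 2 TB disks

per_server_tb = usable_disks * disk_tb          # usable TB per server
total_usable_tb = servers * per_server_tb       # fleet total
print(per_server_tb, total_usable_tb)           # 60 540
```

The 60 TB per server also matches the "minimum of 60 TB usable" figure
for the 36-bay units mentioned further down.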
We kept discussing XFS vs ext4 after the end of the meeting and said we
would move the discussion to the mailing list, so here are my thoughts
of the day to kick-start it (Sam also said he would send something on
SL5.5 later).

I think the sites that have experienced the SL5/XFS problem were all on
SL5.3, but this needs further confirmation. If that's true, we should
see what happens with more recent versions, as XFS kernel support was
added in SL5.4 and some related bugs were fixed in SL5.5.

ext4 allegedly supports filesystems up to 1 EB, but nobody has gone
anywhere near that because the tool to create such a big filesystem is
currently limited to 16 TiB (about 17.6 TB). On top of that, ext4
doesn't have full 64-bit support yet; that might not be a problem by
itself, but together with the tool limit it tells me it's not yet a
mature system. Michel seems happy with it (or at least they haven't
experienced any problem with ext4), but their hardware configuration is
10 x 2 TB RAID6 data servers with 14 TB filesystems, so he hasn't hit
this filesystem-size dilemma. In the UK we have all bought, or are
about to buy, these 36-bay units with a minimum of 60 TB usable space,
and the 16 TiB filesystem limit is annoying and shaves off usable
space.

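For context on where that ceiling comes from: as far as I understand,
e2fsprogs currently addresses blocks with 32-bit numbers, which at the
default 4 KiB block size gives 2^32 x 4 KiB = 16 TiB, i.e. roughly 17.6
decimal TB as disk vendors count (16 TB decimal is about 14.5 TiB, so
it's easy to get the two conversions crossed). A quick sketch of the
arithmetic:

```python
# Where the ~16 TiB mkfs ceiling comes from: 2^32 addressable blocks
# of 4 KiB each (the default ext4 block size).
blocks = 2 ** 32
block_size = 4096                    # bytes

limit_bytes = blocks * block_size
tib = limit_bytes / 2 ** 40          # binary TiB
tb = limit_bytes / 10 ** 12          # decimal TB, vendor-style
print(f"{tib:.0f} TiB = {tb:.2f} TB")  # 16 TiB = 17.59 TB
```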
There is an argument for keeping filesystems relatively small to give
DPM some flexibility when distributing data, but I still have to
understand how useful that is when the filesystems are on the same
machine, and even then I'd prefer to be able to choose the filesystem
size.

Looking at what I wrote, and without any additional numbers
demonstrating a performance gain, I'm leaning towards installing XFS on
SL5.5 and seeing if there is any improvement.

cheers
alessandra
[log in to unmask] wrote:
> Oops, this should have gone to the list, not to me!
>
>
> Minutes already uploaded! (helps that my 11 o'clock meeting was
> cancelled) Once again a very useful and productive session, I thought.
> Lots of good stuff.
>
> http://storage.esc.rl.ac.uk/weekly/20100804-minutes.txt
>
> New actions include me volunteering more experiments and Matt to work
> with Sam on testing T2K at Lancaster.
>
> So IMHO, the agenda for storage at GridPP could look like this:
> 16.00-16.45 Experiments presenting (including T2K, perhaps 10 mins each?)
> 16.45-17.05 Sam reporting on AmDamJam and IC (with input from everyone
> who went!)
> 17.05-17.20 Me talking tasks, roadmap and stuff, we can discuss content.
> 17.20-17.50 Discussion
>
> Cheers
> --jens
>
--
The most effective way to do it, is to do it. (Amelia Earhart)
Northgrid Tier2 Technical Coordinator
http://www.hep.manchester.ac.uk/computing/tier2