Yeah, the internal versus external thing is something we thought hard about.
Granted, the OS disks would be less resilient against failure in a
RAID1 pair than on the main array, and would require a considerable
amount of fiddling to get access to.
However, even a total loss of the OS disk contents wouldn't be that
dramatic; sure, the server would be down for a couple of hours whilst
we swapped the disks out and rebuilt the machine, but nothing more
daunting than that.
We already have "different" types of disks for the OS and storage
arrays on our current crop of servers, and this doesn't pose any real
problems - like most sites, I imagine, we keep a significant buffer
stock of spares ready to deploy in the event of failure.
A major benefit of mounting the OS disks internally is that we have to
buy one less disk server to achieve the same total amount of storage.
So, even after adding the cost of 2 x 2TB disks per server (36 now,
rather than 34), we're still several £k up on the deal, which can be
used elsewhere in the procurement (we plan to retro-fit an additional
HDD into each of our worker nodes to provide RAID0 performance gains).
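For anyone wanting to sanity-check that sort of trade-off, here's a back-of-envelope sketch. The per-unit prices and server count below are placeholder assumptions for illustration only, not figures from this thread:

```shell
#!/bin/sh
# Back-of-envelope check of the internal-OS-disk saving described above.
# All three figures are placeholder assumptions, NOT numbers from this thread.
SERVER_SAVED=9000    # assumed cost of the one disk server we no longer buy (GBP)
OS_DISK=120          # assumed price of one 2TB internal OS disk (GBP)
N_SERVERS=10         # assumed number of servers in the procurement

EXTRA=$((N_SERVERS * 2 * OS_DISK))   # two extra OS disks per server
NET=$((SERVER_SAVED - EXTRA))        # left over for e.g. worker-node HDDs

echo "Extra OS disks cost ${EXTRA}; net saving ${NET}"
```

With those assumed figures the extra OS disks cost well under the price of a whole disk server, which is the shape of the argument being made above.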
Cheers,
Mike
On 8 June 2010 15:23, Ewan MacMahon wrote:
>> -----Original Message-----
>> From: GRIDPP2: Deployment and support of SRM and local storage
>>
>> The proposal that's been put to us is to "mount them on an internal
>> caddy". I've had a look at the server chassis docs, and there are a
>> couple of pages describing how to mount (up to 2) internal HDDs.
>>
>> If this jiscmail list allows such things, the gory details are
>> attached
>> here...basically the disks appear to live underneath the
>> motherboard...should be nice and cosy for them down there (!)
>>
> OK; that comprehensively answers the question of how it's
> possible, but I'm far from convinced that it's a good idea.
> Compared with putting the OS on the main RAID, you've got the
> (OK, small) cost of a couple of extra disks; you've taken your
> OS from being redundant against two disk failures to redundant
> against only one; you've ensured that any OS disk failure
> requires downtime and (from looking at the diagram) a fair
> degree of fiddly work; and you've got to keep stock of two
> different kinds of disks and monitor two different RAID systems.
>
> All just to get the OS I/O off the data array. Do you really see
> your OS disks getting hammered so hard that it's worth it?
>
> Ewan
>
> PS: Also, apologies to Mike for the random misaddressed unfinished
> version of this email a moment ago.
>