We also did it this way at Glasgow, but we have an automated kickstart
+ cfengine installer - doing the same thing that Owen did.
Seemed to go okay :D
Sam
On 14 July 2010 09:14, Matt Doidge <[log in to unmask]> wrote:
> Heya,
>
> The last time we upgraded the OS on a bunch of our pools we did the
> same as Simon, disconnecting the attached storage "just in case".
> That's not really an (easy) option with the 24-bay disk servers, so as
> we only have half a dozen pools that need to be upgraded I'm planning
> on doing it "by hand", triple-checking that both myself and the
> installer agree on which volume is "sda" before hitting a big "OK"
> button.
>
> cheers,
> Matt
>
> On 13 July 2010 20:44, Simon George <[log in to unmask]> wrote:
>> We did this at RHUL when our DPM pool nodes were upgraded to SL5 and it
>> worked fine.
>>
>> I think the arrays were temporarily disconnected as a precaution.
>> The hypothetical issue is that if for some reason the o/s drive is not
>> found, your array takes its place as sda and the data is replaced with a
>> nice o/s install.
>>
>> Simon
>>
>> Owen Mcshane wrote:
>>>
>>> Quoting Ewan MacMahon <[log in to unmask]>:
>>>
>>>> Hi all,
>>>>
>>>> It strikes me that is should be possible in principle to
>>>> upgrade the OS on a DPM disk server without draining it by
>>>> simply installing the new OS over the old one and setting
>>>> up a fresh install of the DPM pool node. All the file
>>>> metadata is stored on the head node, and the files' physical
>>>> paths won't change.
>>>>
>>>> However, I'm not aware of anyone doing this. Does this
>>>> sound like it should work and has anyone actually tried?
>>>>
>>>> Or indeed does everyone do this routinely and I've just
>>>> never noticed.....
>>>>
>>>> Ewan
>>>>
>>>
>>> I've been doing this over the past day or so.
>>>
>>> Didn't "install the OS over the new one" though, kickstarted from scratch
>>> to SL5, leaving the storage array intact.
>>>
>>> Owen
>>
>
|