Hi Matt,
On Tue, 10 Jan 2017, Matt Doidge wrote:
> Aha, my expectations were all wrong here (I was treating zfs in my mind like
> a straight raid card replacement), and I used the old style /dev/sdX names -
> so I expect that's why zfs borked on reboot. I'll rebuild my zpools and my
> way of thinking about them!
>
Ah, yes. I just sent the other mail, but I think I see the problem now.
ZFS maintains information in the fs about which disks belong to the pool,
and also a cache file under /etc with information about the pool config.
Now, if you remove /dev/sda, then on reboot /dev/sdb becomes /dev/sda and
so on, which can confuse ZFS. That could be the source of the problem; a
manual import command will actually force ZFS to scan for all the disks,
ignoring the cache file and other information, which are probably marked
wrong after the first failure on reboot.
If I have the time and a system available, I'll run some tests to see what
happens to the file that caches the general zpool config information under
/etc in such a case, and under what circumstances that behaviour appears.
If you want to reimport the pool using /dev/disk/by-uuid, you don't need
to rebuild the whole pool!
You can simply export the pool and then do
zpool import -d /dev/disk/by-uuid POOL
With "zpool status" you can check that it worked.
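The whole sequence might look like this (a sketch only; "tank" is a
placeholder pool name, so substitute your own, and make sure nothing is
using the pool before exporting):

```shell
# Stop anything using the pool first, then export it.
# "tank" is a placeholder pool name - use your own.
zpool export tank

# Re-import, telling zpool to resolve devices via the persistent
# /dev/disk/by-uuid links instead of the unstable /dev/sdX names:
zpool import -d /dev/disk/by-uuid tank

# The vdevs listed here should now show the by-uuid names:
zpool status tank
```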
Cheers,
Marcus
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.