I’ve used ZFS in some form or other for a few years now, starting off with the OpenSolaris-based Nexenta and moving over to Ubuntu Server as ZFS matured on Linux.
Recently I’d added a couple of new drives to the pool, and all was good until a reboot. After logging in to the server I could see the pool hadn’t mounted, so I attempted to mount it manually.
It told me the new drives I’d added were corrupt and the pool was offline. “No problem,” I thought, as I could just restore from backup and re-create the pool. The only problem with that is that my last backup was six months old*
I then noticed that the disk assignment was wrong. My ZFS array was looking for /dev/sdb and /dev/sdc, but those disks had been renumbered and were now at /dev/sdf and /dev/sdg.
I then remembered using /dev/disk/by-id paths when I initially created the pool, precisely to avoid this issue — but when adding the new drives I never gave it a thought.
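The by-id names are persistent identifiers built from the drive’s model and serial number, so they follow the physical disk no matter which /dev/sdX slot it lands in after a reboot. You can see the mapping for yourself (the drive name shown in the comment is purely illustrative — yours will reflect your own hardware):

```shell
# List the persistent ids and see which /dev/sdX node each one currently
# points to. Each entry is a symlink, e.g. something of the form
# ata-<MODEL>_<SERIAL> -> ../../sdf  (names depend on your drives).
ls -l /dev/disk/by-id/
```

Building a pool from these symlinks instead of the raw /dev/sdX nodes is what makes it immune to the kernel re-ordering devices.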
To fix this without losing any data I did the following:
1) Rename your zpool.cache file
mv /var/lib/zfs/zpool.cache /var/lib/zfs/zpool.cache_old
2) Now import your pool again
zpool import -d /dev/disk/by-id/ poolname
It should now scan all of the disks under that path and mount the pool as normal, with each disk referenced by its id.
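To confirm the fix stuck, you can check the pool status — the vdevs should now be listed by their by-id names rather than /dev/sdX (here "poolname" is the same placeholder pool name used in the import command above):

```shell
# Verify the vdevs are now referenced by stable ids, not /dev/sdX
# names that can shuffle between boots. "poolname" is a placeholder.
zpool status poolname
```

Note that the zpool.cache location can vary by distribution; /etc/zfs/zpool.cache is also a common path, so adjust the rename step to wherever your system keeps it.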
Taadaa! Pool mounted and a full backup has now started.
*Yes, I know. How stupid of me… 😛