
Looks like Ubuntu 13 has changed the device IDs for disks!
If you use ZFS, like us, then you may be caught out by this subtle, naughty change.

Previously, disk IDs were something like this:
scsi-SATA_ST4000DM000-1CD_Z3000WGF

In Ubuntu 13, this changed to:
ata-ST4000DM000-1CD168_Z3000WGF
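
To see what your disks are now called, list the by-id directory; each entry is a symlink to the underlying kernel device (the sda target and the timestamp below are illustrative):

root@hpnas:# ls -l /dev/disk/by-id/
lrwxrwxrwx 1 root root 9 Oct 20 09:14 ata-ST4000DM000-1CD168_Z3000WGF -> ../../sda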

According to the ZFS on Linux FAQ, this *isn’t* supposed to change.

http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool

/dev/disk/by-id/: Best for small pools (less than 10 disks)
Summary: This directory contains disk identifiers with more human readable names. The disk identifier usually consists of the interface type, vendor name, model number, device serial number, and partition number. This approach is more user friendly because it simplifies identifying a specific disk.
Benefits: Nice for small systems with a single disk controller. Because the names are persistent and guaranteed not to change, it doesn't matter how the disks are attached to the system. You can take them all out, randomly mix them up on the desk, put them back anywhere in the system and your pool will still be automatically imported correctly.
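
For reference, that is how this pool was built: a four-disk raidz1 created against the by-id paths, something like the sketch below (don't run zpool create against disks that already hold data, as it destroys their contents):

root@hpnas:# zpool create nas raidz1 \
    /dev/disk/by-id/scsi-SATA_ST4000DM000-1CD_Z3000WGF \
    /dev/disk/by-id/scsi-SATA_ST4000DX000-1CL_Z1Z036ST \
    /dev/disk/by-id/scsi-SATA_ST4000DX000-1CL_Z1Z04QDM \
    /dev/disk/by-id/scsi-SATA_ST4000DX000-1CL_Z1Z05B9Y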

So… on a reboot after upgrading a client’s NAS, all the data was missing, with the nefarious pool error.
See below:


root@hpnas:# zpool status
  pool: nas
 state: UNAVAIL
status: One or more devices could not be used because the label is missing
        or invalid. There are insufficient replicas for the pool to continue
        functioning.
action: Destroy and re-create the pool from a backup source.
   see: http://zfsonlinux.org/msg/ZFS-8000-5E
  scan: none requested
config:

        NAME                                    STATE     READ WRITE CKSUM
        nas                                     UNAVAIL      0     0     0  insufficient replicas
          raidz1-0                              UNAVAIL      0     0     0  insufficient replicas
            scsi-SATA_ST4000DM000-1CD_Z3000WGF  UNAVAIL      0     0     0
            scsi-SATA_ST4000DX000-1CL_Z1Z036ST  UNAVAIL      0     0     0
            scsi-SATA_ST4000DX000-1CL_Z1Z04QDM  UNAVAIL      0     0     0
            scsi-SATA_ST4000DX000-1CL_Z1Z05B9Y  UNAVAIL      0     0     0

Don’t worry, the data’s still there. Ubuntu has just changed the disk names, so ZFS can’t find the devices it recorded and marks them UNAVAIL.
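
If you want reassurance before touching anything, zdb can dump the ZFS label straight off one of the renamed disks (the -part1 suffix is needed because ZFS on Linux partitions whole disks and stores its labels on the first partition):

root@hpnas:# zdb -l /dev/disk/by-id/ata-ST4000DM000-1CD168_Z3000WGF-part1

If the label prints cleanly, with the pool name and GUIDs intact, nothing on disk is damaged; only the path ZFS recorded has changed.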

The simple way to fix it is to export the pool, then re-import it so ZFS picks up the new names.
Our pool is named “nas” in the example below:

root@hpnas:# zpool export nas
root@hpnas:# zpool import -f -d /dev/disk/by-id nas

As you can see, our pool is now a happy chappy, and our data is back:


root@hpnas:# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
nas          5.25T  5.12T   209K  /nas
nas/storage  5.25T  5.12T  5.25T  /nas/storage

root@hpnas:/dev/disk/by-id# zpool status
  pool: nas
 state: ONLINE
  scan: none requested
config:

        NAME                                 STATE     READ WRITE CKSUM
        nas                                  ONLINE       0     0     0
          raidz1-0                           ONLINE       0     0     0
            ata-ST4000DM000-1CD168_Z3000WGF  ONLINE       0     0     0
            ata-ST4000DX000-1CL160_Z1Z036ST  ONLINE       0     0     0
            ata-ST4000DX000-1CL160_Z1Z04QDM  ONLINE       0     0     0
            ata-ST4000DX000-1CL160_Z1Z05B9Y  ONLINE       0     0     0

errors: No known data errors
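
For extra peace of mind after a scare like this, a scrub will read back and verify every block in the pool. It runs in the background, and you can check its progress with zpool status:

root@hpnas:# zpool scrub nas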

A bit naughty of Ubuntu to do that, IMHO…
