Wednesday 7 September 2011

ZFS: adding disks and changing from non-redundant pools to mirrored

Lately I switched all my (non-virtual) FreeBSD machines to ZFS for their root file systems, and it was a good choice!

Example 1:

On this example machine I initially had only a single, relatively small HDD, running the root-fs on ZFS. The output of zpool status:
#zpool status

  pool: firstpool
 state: ONLINE
  scan: none requested
config:

        NAME                STATE     READ WRITE CKSUM
        firstpool           ONLINE       0     0     0
          gpt/bootdisk-zfs  ONLINE       0     0     0

errors: No known data errors
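
By the way, the gpt/bootdisk-zfs in that output is not a raw device but a GPT partition label. Roughly, such a label would have been set when the partition was created, along these lines (the device name ada0 is just an assumption for this machine):

#gpart add -t freebsd-zfs -l bootdisk-zfs ada0
# the label then shows up as /dev/gpt/bootdisk-zfs, which is what the pool was built on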


After a while, I had the chance to upgrade the system, so I added a second 500GB HDD.
As I wanted the data on that disk separated from the data on the first disk, I created a second zpool:
#zpool create -m /mnt/data secondpool /dev/ada1

The -m flag sets the mountpoint, and the pool was mounted and ready for use immediately. Too easy, just great.
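
To double-check, something like this shows the mountpoint and lets you carve out child datasets right away (the dataset name photos is just an example):

#zfs get mountpoint secondpool
# child datasets inherit the mountpoint, so this one ends up at /mnt/data/photos
#zfs create secondpool/photos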

Some time later I got an additional 500GB HDD (yes, it still fitted into my machine), and I decided to set up a ZFS mirror for my data pool, for safety against HDD failures.
Again just one simple command, and the mirror was working:
#zpool attach secondpool /dev/ada1 /dev/ada2

Resilvering the new disk took a while, as I had already put quite some data on the pool, but it was the easiest mirrored RAID setup I have ever done.
#zpool status

  pool: secondpool
 state: ONLINE
  scan: resilvered 105G in 0h29m with 0 errors on Fri Aug 5 11:13:40 2011
config:

        NAME            STATE     READ WRITE CKSUM
        secondpool      ONLINE       0     0     0
          mirror-0      ONLINE       0     0     0
            ada1        ONLINE       0     0     0
            ada2        ONLINE       0     0     0


Example 2:

On another machine I had migrated the root-fs from UFS to ZFS, but the HDD holding the data remained UFS-formatted, as I had no space to move the data around (the data HDD is 1TB and 80% full). As soon as I got the chance to borrow a large HDD (2TB), I moved that disk from UFS to ZFS as well.

The easiest way would have been to create a new zpool on the borrowed disk, copy the contents of the UFS disk over to it, then attach the old disk to the new pool, let it resilver, and finally detach the borrowed disk from the pool, leaving only the original disk, now ZFS-formatted, in the machine.
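
In commands, that plan would have looked roughly like this (pool and device names are just placeholders here: ada1 being the old UFS data disk, ada2 the borrowed one):

#zpool create -m /mnt/data datapool /dev/ada2
# copy the data from the UFS disk into /mnt/data, then turn the pool into a mirror:
#zpool attach datapool /dev/ada2 /dev/ada1
# once zpool status reports the resilver as done, drop the borrowed disk again:
#zpool detach datapool /dev/ada2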

Well, unfortunately that only works if the disk being attached is at least as large as the one already in the pool, and my old disk is only half the size of the borrowed one. So I had to do it the long way round (roughly the commands sketched after the list):

  1. Create a new zpool with the borrowed disk
  2. Move the content from the UFS disk to the ZFS disk
  3. Create a new zpool with the old (and now empty) disk
  4. Move the contents of the borrowed disk to the old disk (now both ZFS)
  5. Destroy the no-longer-needed pool on the borrowed disk using zpool destroy
  6. Remove the borrowed disk from the machine
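
In commands, again with placeholder names (ada1 = old data disk, ada2 = borrowed disk), the whole dance looks roughly like this:

#zpool create -m /mnt/data-tmp tmppool /dev/ada2
# copy everything from the UFS disk to /mnt/data-tmp, then put ZFS on the old disk
# (zpool create may want -f here, since the disk still carries the old UFS data):
#zpool create -m /mnt/data datapool /dev/ada1
# copy everything back from /mnt/data-tmp to /mnt/data, then retire the temporary pool:
#zpool destroy tmppool
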
Obviously you need to do some planning first, so that the second zpool (the one you actually want to keep) ends up with the right name and mountpoint, rather than the first one. Of course I didn't, but luckily it is so easy to create and destroy zpools and to add or remove devices in ZFS.

Next thing to do: setting up ZFS rolling snapshots. Recommended tool:

  • sysutils/zfsnap
The following /etc/periodic.conf entries should do the trick (not tested yet):
 
hourly_zfsnap_enable="YES" 
hourly_zfsnap_recursive_fs="sting2-data sting2/usr sting2/var"
hourly_zfsnap_ttl="1m"
hourly_zfsnap_verbose="NO" 
# Don't snap on pools resilvering (-s) or scrubbing (-S)
hourly_zfsnap_flags="-s -S"                       

reboot_zfsnap_enable="YES"
reboot_zfsnap_recursive_fs="sting2-data sting2/usr sting2/var"
reboot_zfsnap_ttl="1m"
reboot_zfsnap_verbose="NO"
# Don't snap on pools resilvering (-s) or scrubbing (-S)
reboot_zfsnap_flags="-s -S"

I'm a bit confused by the zfsnap_delete directives, as I would have thought the TTL would take care of deleting snapshots once they expire.
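
If I understand the tool correctly, though, the TTL only ends up encoded in the snapshot name (something like sting2-data@2011-09-07_12.00.00--1m); ZFS itself never expires snapshots, so a separate delete pass is what actually destroys them, which you could also do by hand (the snapshot name below is just an example):

#zfs list -t snapshot -r sting2-data
#zfs destroy sting2-data@2011-09-07_12.00.00--1m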

We'll see how it goes.
