Behind the scenes we use ZFS as storage for our offsite backups.
We have backups in 2 separate physical locations plus the original data on the server(s), as data is very important!
ZFS is a rather nice storage file system that improves radically on older RAID-based solutions, offering a lot more funky options, like snapshots (where the OS can keep multiple versions of files, similar to Time Machine backups), and, more importantly, compression.
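
As a quick illustration of snapshots: they're a single command to create, browsable under the hidden .zfs directory, and one command to roll back to (the dataset names here are just examples):

zfs snapshot tank/data@before-upgrade
ls /tank/data/.zfs/snapshot/before-upgrade
zfs rollback tank/data@before-upgrade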

At some point we’ll be deploying a SAN (storage area network) using ZFS on a blade server in the data center, which involves lots of three and four letter acronyms –
ESXi for the base OS, then a VM providing ZFS storage and iSCSI targets with hardware passthrough for the other VMs, then other blades in the server doing clustering.
Right now we’re waiting on an LSI SAS card (see, more acronyms!), so we can deploy…, but I digress.

Back to ZFS.

The ZFS version we use now supports feature flags, yay!, and that means we can choose alternate compression methods.
There is a reasonably new compression algorithm called LZ4 that is now supported; it improves both read and write speeds over normal uncompressed ZFS, and has benefits over ZFS compressed with the standard algorithm(s) too.

To quote: “LZ4 is a new high-speed BSD-licensed compression algorithm written by Yann Collet that delivers very high compression and decompression performance compared to lzjb (>50% faster on compression, >80% faster on decompression and around 3x faster on compression of incompressible data)”

First up, check if your ZFS version supports it:


zpool upgrade -v
This system supports ZFS pool feature flags.

The following features are supported:

FEAT DESCRIPTION
-------------------------------------------------------------
async_destroy                         (read-only compatible)
     Destroy filesystems asynchronously.
empty_bpobj                           (read-only compatible)
     Snapshots use less space.
lz4_compress
     LZ4 compression algorithm support.

Mine does (well, duh!).

So…, I can turn on support.

As a note, lz4 is not backwards compatible, so once it’s enabled, the pool can only be used by a ZFS version that supports feature flags *and* lz4.
At the time of writing nas4free doesn’t support it; zfsonlinux does though, as do omnios, illumos and other Solaris-derived OSes.

If you aren’t sure, check first with the command above and see if there is support.
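
You can also query the feature on a specific pool directly (substitute your own pool name; mine is nas):

zpool get feature@lz4_compress nas

A value of disabled means you still need to enable it; enabled or active means you’re set.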
Next step is to turn on that feature.

My storage pools are typically called nas or tank.
Enabling lz4 compression is a 2-step process: enable the feature flag on the pool (a zpool property), then set the compression property on the dataset(s) (a zfs property).

zpool set feature@lz4_compress=enabled poolname
zfs set compression=lz4 poolname

I have nas and nas/storage, so I did:

zpool set feature@lz4_compress=enabled nas
zfs set compression=lz4 nas
zfs set compression=lz4 nas/storage

Once the flag is enabled on the pool though, you can set compression on the pool’s root dataset or on individual datasets. If you set it at the top level, new datasets underneath inherit the compression setting.
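
A quick way to see what each dataset will use is to list the compression property recursively – the SOURCE column shows whether it’s set locally or inherited:

zfs get -r compression nas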

Here are my datasets:

zfs list
NAME         USED   AVAIL  REFER  MOUNTPOINT
nas          6.09T  4.28T   209K  /nas
nas/storage  6.09T  4.28T  6.09T  /nas/storage

I’ve already set compression on (although it only applies to data written after the change – existing data stays uncompressed until it is rewritten).
We can check compression status with a zfs get all command, filtering for compress:

zfs get all | grep compress
nas          compressratio     1.00x  -
nas          compression       lz4    local
nas          refcompressratio  1.00x  -
nas/storage  compressratio     1.00x  -
nas/storage  compression       lz4    local
nas/storage  refcompressratio  1.00x  -

If I create a new dataset, you’ll see it gets created with the compression setting inherited from its parent.

zfs create nas/test
zfs get all | grep compress
nas          compressratio     1.00x  -
nas          compression       lz4    local
nas          refcompressratio  1.00x  -
nas/storage  compressratio     1.00x  -
nas/storage  compression       lz4    local
nas/storage  refcompressratio  1.00x  -
nas/test     compressratio     1.00x  -
nas/test     compression       lz4    inherited from nas
nas/test     refcompressratio  1.00x  -

I’ll copy some dummy data onto there, then recheck.

nas/test     compressratio     1.71x  -
nas/test     compression       lz4    inherited from nas
nas/test     refcompressratio  1.71x  -

Nice!

Obviously, compression ratios will depend highly on the data, but most of ours is web content, mail and similar text-heavy material, so we benefit a lot from compression.
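
If you want a rough idea of how your own data will fare before committing, one approach is to create a couple of scratch datasets with different compression settings, copy a sample in, and compare ratios. The dataset names and sample path below are just placeholders:

zfs create -o compression=lzjb nas/bench-lzjb
zfs create -o compression=lz4 nas/bench-lz4
cp -a /path/to/sample/. /nas/bench-lzjb/
cp -a /path/to/sample/. /nas/bench-lz4/
zfs get compressratio nas/bench-lzjb nas/bench-lz4
zfs destroy nas/bench-lzjb
zfs destroy nas/bench-lz4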

Once we get our SAN up and running, I’ll be looking at whether I should still be using rsync, or whether I should replicate ZFS snapshots to ZFS storage on the other servers.
That though, is a topic for another day.
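
As a teaser, the snapshot route would look roughly like this – an initial full send, then incrementals – with the host and pool names made up for illustration:

zfs snapshot nas/storage@backup1
zfs send nas/storage@backup1 | ssh backuphost zfs receive tank/storage

zfs snapshot nas/storage@backup2
zfs send -i nas/storage@backup1 nas/storage@backup2 | ssh backuphost zfs receive tank/storage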


I usually do most of my work these days on an Air. While it’s decent, it is rather dinky space-wise, and I tend to keep it empty of music so I have (barely) enough space for my dev stuff and various embedded cross-compile environments.

I still like listening to music though, so I keep that on secondary and even tertiary storage. That typically means one of the fine HP N36L / N40L NAS devices that I’m fond of buying. I have a few of these boxen in various places; some contain my music, but most contain backups, onsite and offsite, as you can never be too careful!

I know this quite intimately, as the very machine I’m typing this on had a catastrophic SSD failure only a week ago.
Luckily I didn’t lose too much work!

Enough with the background, and back to the plot.

As I was saying, I needed some tunes to listen to (and harass the staff with) at work.

I looked at a couple of solutions, and decided on forked-daapd, as that allegedly could share music to iTunes from the NAS music folder without too many headaches.

Most instructions were of the sort:

apt-get install forked-daapd

pico /etc/forked-daapd.conf

[edit music folder, save file]

/etc/init.d/forked-daapd restart
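
For reference, the edit in question is the library section of /etc/forked-daapd.conf – something along these lines, with the directory being wherever your music actually lives (mine here is a placeholder):

library {
    name = "Music on %h"
    directories = { "/mnt/nas/music" }
}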

In my case it didn’t work.
It also didn’t really give much of a clue that it wasn’t working, and their website didn’t have much to go on.
I looked at compiling from scratch, but the guy making it builds with Clang and some Java-based tooling, and it just looked like too much hassle.

So, I had to troubleshoot even though I didn’t really want to spend the time.

My initial issue was something like the below:

forked-daapd wouldn’t run successfully (but it also didn’t complain, sigh). I could see that it wasn’t listening on any of the ports specified, and checking kern.log showed it was crapping out silently.

Mar 20 21:03:09 officenas kernel: [ 3392.026612] forked-daapd[9848] trap invalid opcode ip:7f8d8772958e sp:7f8d8176bf60 error:0 in libdispatch.so.0.0.0[7f8d87722000+c000]

Running it in the foreground showed it was having issues creating the mDNS bits.

mdns: Failed to create service browser: Bad state
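
(For anyone following along: running it in the foreground with verbose logging is what surfaced this – the -f (foreground) and -d (log level) flags from the man page:

forked-daapd -f -d 5

)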

This eventually worked out to editing /etc/nsswitch.conf, changing the hosts line to the following:

hosts: files mdns4_minimal dns mdns4

then restarting avahi

/etc/init.d/avahi-daemon restart
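
To sanity-check that mDNS browsing actually works after the change, avahi-browse from the avahi-utils package is handy; it should list any DAAP shares it can resolve:

apt-get install avahi-utils
avahi-browse -rt _daap._tcp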

This got me past the bad state error, but then it was bombing out with a missing symbol avl_alloc_tree error.

I did an strace on the thing and found it was picking up libavl under /lib rather than the correct one under /usr/lib.
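
For the record, something along these lines will show which libraries it’s opening (on newer systems the syscall may be openat rather than open):

strace -f -e trace=open forked-daapd -f 2>&1 | grep -i avl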

This was also documented here – http://blog.openmediavault.org/?p=552&cpage=1#comment-8376 – although sadly I only found that after figuring it out myself.

Looks like zfsonlinux is the culprit here, as it puts its own libavl files in /lib. Tsk tsk.

I removed those libavl files – rm /lib/libavl* – as I know what I’m doing (don’t randomly erase stuff unless you’re 3000% sure!), and sure enough, forked-daapd started up, and started blatting tons of output to the logs in a happy manner.
iTunes was also finally happy, and could see my music. Yay.
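
A quick way to confirm it’s really up is to check that something is listening on the DAAP port (3689 by default):

netstat -lnp | grep 3689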

Took me about 2 hours to figure out sadly, but at least I sorted it out.

Hopefully this will save someone else the headache when they google for the error(s)!
