Support #684
Updated by Daniel Curtis about 9 years ago
{{>toc}}

This is a guide on how I compiled support for ZFS on my Raspberry Pi 2.

h2. Prepare The System

* Update the system:
<pre>
pacman -Syu
</pre>
* Install the base-devel, cmake, and linux-raspberrypi-headers packages:
<pre>
pacman -S base-devel cmake linux-raspberrypi-headers
</pre>
* Enable multiple-core support for makepkg:
<pre>
sed -i -e 's/\#MAKEFLAGS=\"-j2\"/MAKEFLAGS=\"-j5\"/' /etc/makepkg.conf
</pre>

h3. Format the USB drives

*NOTE*: I labeled the serial number for each thumb drive, one by one, as I connected them to the USB hub attached to the Raspberry Pi 2.

This guide uses USB drives for its data drives. Yes, I know this will eventually cause a huge bottleneck in I/O performance.

* Format /dev/sda:
<pre>
fdisk /dev/sda
</pre>
#* And type the following to format the USB drive as a Solaris root partition:
<pre>
g
n
1
[Enter]
[Enter]
t
46
39
w
</pre>
* Format /dev/sdb:
<pre>
fdisk /dev/sdb
</pre>
#* And type the following to format the USB drive as a Solaris root partition:
<pre>
g
n
1
[Enter]
[Enter]
t
46
39
w
</pre>

h3. Install yaourt

Yaourt isn't necessary, but it makes managing AUR packages a lot easier.

* Download the packages for yaourt:
<pre>
cd /tmp
wget https://aur.archlinux.org/cgit/aur.git/snapshot/package-query.tar.gz && wget https://aur.archlinux.org/cgit/aur.git/snapshot/yaourt.tar.gz
tar xzf package-query.tar.gz
tar xzf yaourt.tar.gz
</pre>
#* Install package-query:
<pre>
cd package-query
makepkg -csi
</pre>
#* Install yaourt:
<pre>
cd ../yaourt
makepkg -csi
</pre>

h2. Install ZFS DKMS from the AUR

* Install spl-dkms:
<pre>
yaourt spl-dkms
</pre>
* *NOTE*: Edit the +PKGBUILD+ for *spl-dkms*
#* And modify the arch parameter to match the following, adding *"armv7h"*:
<pre>
arch=("i686" "x86_64" "armv7h")
</pre>
* Install zfs-dkms:
<pre>
yaourt zfs-dkms
</pre>
* *NOTE*: Edit the +PKGBUILD+ for *zfs-dkms* and *zfs-utils*
#* And modify the arch parameter to match the following, adding *"armv7h"*:
<pre>
arch=("i686" "x86_64" "armv7h")
</pre>
* Install the zfs kernel module:
<pre>
sudo depmod -a
sudo modprobe zfs
</pre>
* Check that the zfs modules were loaded:
<pre>
lsmod
</pre>
#* _Example output:_
<pre>
zfs                  1229845  0
zunicode              322454  1 zfs
zavl                    5993  1 zfs
zcommon                43765  1 zfs
znvpair                80689  2 zfs,zcommon
spl                   165409  5 zfs,zavl,zunicode,zcommon,znvpair
</pre>

---

h2. Setting Up The Pools

This guide sets up a mirror of two USB drives, which will be shown as */dev/sda* and */dev/sdb*, respectively.

h3. Create a storage pool

* Get the IDs of the drives to add to the zpool. The ZFS on Linux developers recommend using device IDs when creating ZFS storage pools of fewer than 10 devices. To find the IDs, simply:
<pre>
ls -lah /dev/disk/by-id/
</pre>
#* _Example output:_
<pre>
lrwxrwxrwx 1 root root 10 Aug 12 16:26 usb-SanDisk_Cruzer_20015001801AE2D0432E-0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Aug 12 16:26 usb-SanDisk_Cruzer_20022213091FE2A0CC42-0:0-part1 -> ../../sdb1
</pre>
* Create a directory to mount the zpool to:
<pre>
sudo mkdir /var/usbpool
</pre>
* Create the mirrored ZFS pool:
<pre>
sudo zpool create -f -m /var/usbpool usbpool mirror /dev/disk/by-id/usb-SanDisk_Cruzer_20015001801AE2D0432E-0\:0-part1 /dev/disk/by-id/usb-SanDisk_Cruzer_20022213091FE2A0CC42-0\:0-part1
</pre>
*NOTE*: Make sure the path to the partition is used and not the path for the disk itself, or else an error will occur.
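To double-check this, each by-id path can be resolved to its underlying device node first. This is a minimal sketch using the drive serials from this guide; any by-id path works the same way:
<pre>
# Resolve each by-id symlink to its device node
readlink -f /dev/disk/by-id/usb-SanDisk_Cruzer_20015001801AE2D0432E-0:0-part1
readlink -f /dev/disk/by-id/usb-SanDisk_Cruzer_20022213091FE2A0CC42-0:0-part1
</pre>
Each command should print a partition node such as */dev/sda1*; if it prints */dev/sda* instead, the *-part1* suffix is missing from the path.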
* Check the zpool status:
<pre>
sudo zpool status
</pre>
#* _Example output:_
<pre>
  pool: usbpool
 state: ONLINE
  scan: none requested
config:

        NAME                                                   STATE     READ WRITE CKSUM
        usbpool                                                ONLINE       0     0     0
          mirror-0                                             ONLINE       0     0     0
            usb-SanDisk_Cruzer_20015001801AE2D0432E-0:0-part1  ONLINE       0     0     0
            usb-SanDisk_Cruzer_20022213091FE2A0CC42-0:0-part1  ONLINE       0     0     0

errors: No known data errors
</pre>
* Create a dataset with a mountpoint:
<pre>
sudo zfs create -o mountpoint=/home usbpool/home
</pre>
* Check the mount point status:
<pre>
sudo zfs list usbpool/home
</pre>
#* _Example output:_
<pre>
NAME           USED  AVAIL  REFER  MOUNTPOINT
usbpool/home    30K  58.6G    30K  /home
</pre>
* Automatically mount the zfs pool:
<pre>
sudo mkdir -p /etc/zfs
sudo zpool set cachefile=/etc/zfs/zpool.cache usbpool
</pre>
#* Enable the service so it is automatically started at boot time:
<pre>
sudo systemctl enable zfs.target
</pre>
#* To start it manually:
<pre>
sudo systemctl start zfs.target
</pre>

h2. Kernel Upgrades

I found that upgrading the kernel will not automatically rebuild the ZFS DKMS modules; this is to be expected. Rather than reinstalling from the AUR, the DKMS modules just need to be built again.

* Upgrade the kernel:
<pre>
sudo pacman -Syu
</pre>
* And reboot for the new kernel to take effect:
<pre>
sudo reboot
</pre>

h3. Rebuild SPL DKMS

* Rebuild the SPL DKMS module:
<pre>
sudo dkms build spl/0.6.5.2
</pre>
* Install the SPL DKMS module:
<pre>
sudo dkms install spl/0.6.5.2 -k $(uname -r)
</pre>

h3. Rebuild ZFS DKMS

* Rebuild the ZFS DKMS module:
<pre>
sudo dkms build zfs/0.6.5.2
</pre>
* Install the ZFS DKMS module:
<pre>
sudo dkms install zfs/0.6.5.2 -k $(uname -r)
</pre>
* Install the zfs kernel module:
<pre>
sudo depmod -a
sudo modprobe zfs
</pre>
* Check that the zfs modules were loaded:
<pre>
lsmod
</pre>
#* _Example output:_
<pre>
zfs                  1229845  0
zunicode              322454  1 zfs
zavl                    5993  1 zfs
zcommon                43765  1 zfs
znvpair                80689  2 zfs,zcommon
spl                   165409  5 zfs,zavl,zunicode,zcommon,znvpair
</pre>

h2. Tips

h3. Lower ARC size

* Edit the cmdline.txt:
<pre>
sudo nano /boot/cmdline.txt
</pre>
#* And add *zfs.zfs_arc_max=256000000* as a kernel parameter to set the ARC to 256MB:
<pre>
selinux=0 plymouth.enable=0 smsc95xx.turbo_mode=N dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 zfs.zfs_arc_max=256000000 elevator=noop rootwait
</pre>

h3. Lower kmem size

* Edit the cmdline.txt:
<pre>
sudo nano /boot/cmdline.txt
</pre>
#* And add *vm.kmem_size="256M" vm.kmem_size_max="256M"* as a kernel parameter to set the kmem to 256MB:
<pre>
selinux=0 plymouth.enable=0 smsc95xx.turbo_mode=N dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 zfs.zfs_arc_max=256000000 vm.kmem_size="256M" vm.kmem_size_max="256M" elevator=noop rootwait
</pre>

h3. Lower vdev cache size

* Edit the cmdline.txt:
<pre>
sudo nano /boot/cmdline.txt
</pre>
#* And add *zfs.vdev.cache.size="4M"* as a kernel parameter to set the vdev cache size to 4MB:
<pre>
selinux=0 plymouth.enable=0 smsc95xx.turbo_mode=N dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 zfs.zfs_arc_max=256000000 vm.kmem_size="256M" vm.kmem_size_max="256M" zfs.vdev.cache.size="4M" elevator=noop rootwait
</pre>

---

h2. Resources

* https://wiki.archlinux.org/index.php/ZFS
* https://aur.archlinux.org/packages/zfs-dkms/
* https://aur.archlinux.org/packages/zfs-utils/
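As a final sanity check after a reboot, the DKMS state and the running ARC limit can be read back. This is a minimal sketch assuming the standard dkms CLI and the ZFS on Linux /sys/module parameter interface; the value printed should match whatever was set in /boot/cmdline.txt above:
<pre>
# List the DKMS modules and the kernels they are built and installed for
dkms status

# Confirm the ZFS module picked up the ARC limit from the kernel command line
cat /sys/module/zfs/parameters/zfs_arc_max
</pre>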