{{>toc}} 

 This is a guide on how I compiled support for ZFS on my Raspberry Pi 2. 

 h2. Prepare The System 

 * Update the system: 
 <pre> 
 pacman -Syu 
 </pre> 

 * Install base-devel, cmake, and linux-headers packages 
 <pre> 
 pacman -S base-devel cmake linux-headers 
 </pre> 
 #* NOTE: Make sure to choose the *linux-armv7-headers* package 
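 #* Optionally, verify that the installed headers match the running kernel before building any DKMS modules (this assumes the *linux-armv7* kernel package): 
 <pre> 
 # the kernel release and the header package version should correspond 
 uname -r 
 pacman -Q linux-armv7 linux-armv7-headers 
 </pre> 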

 * Enable multiple core support for makepkg: 
 <pre> 
 sed -i 's/#MAKEFLAGS="-j2"/MAKEFLAGS="-j3"/' /etc/makepkg.conf 
 </pre> 
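 #* To confirm the change took effect and see how many cores are available, something like: 
 <pre> 
 grep '^MAKEFLAGS' /etc/makepkg.conf 
 nproc 
 </pre> 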

 h3. Format the USB drives 

 *NOTE*: I labeled each thumb drive with its serial number, one by one, as I connected them to the USB hub attached to the Raspberry Pi 2. 

 This guide uses USB drives as its data drives. Yes, I know this will eventually be a huge I/O bottleneck. 
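
 One way to match each physical stick to its device node is to list the drives together with their serial numbers, for example: 
 <pre> 
 lsblk -o NAME,SIZE,MODEL,SERIAL 
 </pre> 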

 * Format /dev/sda: 
 <pre> 
 fdisk /dev/sda 
 </pre> 
 #* And type the following to create a new GPT with a single partition of type Solaris root: 
 <pre> 
 g 
 n 
 1 
 [Enter] 
 [Enter] 
 t 
 39 
 w 
 </pre> 

 * Format /dev/sdb: 
 <pre> 
 fdisk /dev/sdb 
 </pre> 
 #* And type the following to create a new GPT with a single partition of type Solaris root: 
 <pre> 
 g 
 n 
 1 
 [Enter] 
 [Enter] 
 t 
 39 
 w 
 </pre> 
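
 * Optionally, confirm that both drives now carry a single Solaris root partition: 
 <pre> 
 fdisk -l /dev/sda /dev/sdb 
 </pre> 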

 h3. Install yaourt 

 Yaourt isn't necessary, but it makes managing AUR packages a lot easier. 

 * Download the packages for yaourt: 
 <pre> 
 cd /tmp 
 wget https://aur.archlinux.org/cgit/aur.git/snapshot/package-query.tar.gz && wget https://aur.archlinux.org/cgit/aur.git/snapshot/yaourt.tar.gz 
 tar xzf package-query.tar.gz 
 tar xzf yaourt.tar.gz 
 </pre> 
 #* Install package-query: 
 <pre> 
 cd package-query 
 makepkg -csi 
 </pre> 
 #* Install yaourt 
 <pre> 
 cd ../yaourt 
 makepkg -csi 
 </pre> 
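
 * Optionally, confirm both packages were installed: 
 <pre> 
 pacman -Q package-query yaourt 
 </pre> 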

 h2. Install ZFS DKMS from the AUR 

 * Install zfs-dkms: 
 <pre> 
 yaourt zfs-dkms 
 </pre> 
 * *NOTE*: Edit the +PKGBUILD+ for *zfs-dkms*, *spl-dkms*, and *zfs-utils* 
 #* And modify the arch parameter to match the following, adding *"armv7h"*: 
 <pre> 
 arch=("i686" "x86_64" "armv7h") 
 </pre> 
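 #* A quick way to apply this edit from inside each package's build directory (the path will vary) is a one-liner like: 
 <pre> 
 # overwrite the arch= line in the PKGBUILD in the current directory 
 sed -i 's/^arch=.*/arch=("i686" "x86_64" "armv7h")/' PKGBUILD 
 </pre> 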

 * Install zfs-utils: 
 <pre> 
 yaourt zfs-utils 
 </pre> 
 #* Force install the zfs-utils package (probably a bad idea, but the only way I could get it to install properly): 
 <pre> 
 pacman -U --force /tmp/yaourt-tmp-username/zfs-utils-0.6.5.2-1-armv7h.pkg.tar.xz 
 </pre> 
 #* Replace *username* with the user that built the zfs-utils package 
 #* Replace the version with the current version being installed 
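 #* Afterwards, the installed package versions and the DKMS state can be checked with: 
 <pre> 
 pacman -Q spl-dkms zfs-dkms zfs-utils 
 dkms status 
 </pre> 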

 * Load the zfs kernel module: 
 <pre> 
 depmod -a 
 modprobe zfs 
 </pre> 

 * Check that the zfs modules were loaded: 
 <pre> 
 lsmod 
 </pre> 
 #* _Example output:_ 
 <pre> 
 zfs                    1229845    0  
 zunicode                322454    1 zfs 
 zavl                      5993    1 zfs 
 zcommon                  43765    1 zfs 
 znvpair                  80689    2 zfs,zcommon 
 spl                     165409    5 zfs,zavl,zunicode,zcommon,znvpair 
 </pre> 
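 #* A quicker check is to filter the module list, e.g.: 
 <pre> 
 lsmod | grep zfs 
 </pre> 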

 --- 

 h2. Setting Up The Pools 

 This guide sets up a mirror of two USB drives, which will show up as */dev/sda* and */dev/sdb*, respectively. 

 h3. Create a storage pool 

 * Get the IDs of the drives to add to the zpool. The ZFS on Linux developers recommend using device IDs when creating ZFS storage pools of fewer than 10 devices. To find the IDs: 
 <pre> 
 ls -lah /dev/disk/by-id/ 
 </pre> 
 #* _Example output:_ 
 <pre> 
 lrwxrwxrwx 1 root root    9 Aug 12 16:26 usb-SanDisk_Cruzer_20015001801AE2D0432E-0:0-part1 -> ../../sda 
 lrwxrwxrwx 1 root root    9 Aug 12 16:26 usb-SanDisk_Cruzer_20022213091FE2A0CC42-0:0-part1 -> ../../sdb 
 </pre> 

 * Create the mirrored ZFS pool: 
 <pre> 
 zpool create -f -m /mnt/usbpool usbpool mirror /dev/disk/by-id/usb-SanDisk_Cruzer_20015001801AE2D0432E-0\:0-part1 /dev/disk/by-id/usb-SanDisk_Cruzer_20022213091FE2A0CC42-0\:0-part1 
 </pre> 
 *NOTE*: Make sure the path to the partition is used and not the path for the disk itself, or else an error will occur. 

 * Check the zpool status: 
 <pre> 
 zpool status 
 </pre> 
 #* _Example output:_ 
 <pre> 
   pool: usbpool 
  state: ONLINE 
   scan: none requested 
 config: 

	 NAME                                                     STATE       READ WRITE CKSUM 
	 usbpool                                                  ONLINE         0       0       0 
	   mirror-0                                               ONLINE         0       0       0 
	     usb-SanDisk_Cruzer_20015001801AE2D0432E-0:0-part1    ONLINE         0       0       0 
	     usb-SanDisk_Cruzer_20022213091FE2A0CC42-0:0-part1    ONLINE         0       0       0 

 errors: No known data errors 
 </pre> 

 * Create a dataset and set its mountpoint: 
 <pre> 
 zfs create -o mountpoint=/home usbpool/home 
 </pre> 

 * Check the mount point status: 
 <pre> 
 zfs list usbpool/home 
 </pre> 
 #* _Example output:_ 
 <pre> 
 NAME            USED    AVAIL    REFER    MOUNTPOINT 
 usbpool/home      30K    58.6G      30K    /home 
 </pre> 

 * Automatically mount the zfs pool: 
 <pre> 
 mkdir -p /etc/zfs 
 zpool set cachefile=/etc/zfs/zpool.cache usbpool 
 </pre> 
 #* Enable the service so it is automatically started at boot time: 
 <pre> 
 systemctl enable zfs.target 
 </pre> 
 #* To start the target manually without rebooting: 
 <pre> 
 systemctl start zfs.target 
 </pre> 
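
 * To verify that the cachefile was set and the target is enabled: 
 <pre> 
 zpool get cachefile usbpool 
 systemctl is-enabled zfs.target 
 </pre> 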

 h2. Kernel Upgrades 

 I found that upgrading the kernel does not automatically rebuild the ZFS DKMS modules; this is to be expected. Rather than reinstalling from the AUR, the DKMS modules just need to be built again. 

 * Upgrade the kernel: 
 <pre> 
 pacman -Syu 
 </pre> 

 * And reboot for the new kernel to take effect: 
 <pre> 
 reboot 
 </pre>  
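
 * After the reboot, check which modules DKMS has registered and whether they have been built for the new kernel (the exact versions listed will vary): 
 <pre> 
 uname -r 
 dkms status 
 </pre> 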

 h3. Rebuild SPL DKMS 

 * Rebuild SPL DKMS module: 
 <pre> 
 dkms build spl/0.6.5 
 </pre> 
 #* Replace the version with the currently installed SPL version 

 * Install SPL DKMS module: 
 <pre> 
 dkms install spl/0.6.5 
 </pre> 

 h3. Rebuild ZFS DKMS 

 * Rebuild ZFS DKMS module: 
 <pre> 
 dkms build zfs/0.6.5 
 </pre> 
 #* Replace the version with the currently installed ZFS version 

 * Install ZFS DKMS module: 
 <pre> 
 dkms install zfs/0.6.5 
 </pre> 

 * Load the zfs kernel module: 
 <pre> 
 depmod -a 
 modprobe zfs 
 </pre> 

 * Check that the zfs modules were loaded: 
 <pre> 
 lsmod 
 </pre> 
 #* _Example output:_ 
 <pre> 
 zfs                    1229845    0  
 zunicode                322454    1 zfs 
 zavl                      5993    1 zfs 
 zcommon                  43765    1 zfs 
 znvpair                  80689    2 zfs,zcommon 
 spl                     165409    5 zfs,zavl,zunicode,zcommon,znvpair 
 </pre> 

 h2. Tips 

 h3. Lower ARC size 

 * Edit the cmdline.txt: 
 <pre> 
 nano /boot/cmdline.txt 
 </pre> 
 #* And add *zfs.zfs_arc_max=64000000* as a kernel parameter to cap the ARC at roughly 64MB: 
 <pre> 
 selinux=0 plymouth.enable=0 smsc95xx.turbo_mode=N dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 zfs.zfs_arc_max=64000000 elevator=noop rootwait 
 </pre> 
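 #* After rebooting, the value the module actually picked up can be read back (this path assumes the standard ZFS on Linux module parameter layout): 
 <pre> 
 cat /sys/module/zfs/parameters/zfs_arc_max 
 </pre> 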

 h3. Lower kmem size 

 * Edit the cmdline.txt: 
 <pre> 
 nano /boot/cmdline.txt 
 </pre> 
 #* And add *vm.kmem_size="256M" vm.kmem_size_max="256M"* as a kernel parameter to set the kmem to 256MB:  
 <pre> 
 selinux=0 plymouth.enable=0 smsc95xx.turbo_mode=N dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 zfs.zfs_arc_max=64000000 vm.kmem_size="256M" vm.kmem_size_max="256M" elevator=noop rootwait 
 </pre> 

 h3. Lower vdev cache size 

 * Edit the cmdline.txt: 
 <pre> 
 nano /boot/cmdline.txt 
 </pre> 
 #* And add *zfs.vdev.cache.size="4M"* as a kernel parameter to set the vdev cache size to 4MB:  
 <pre> 
 selinux=0 plymouth.enable=0 smsc95xx.turbo_mode=N dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 zfs.zfs_arc_max=64000000 vm.kmem_size="256M" vm.kmem_size_max="256M" zfs.vdev.cache.size="4M" elevator=noop rootwait 
 </pre> 

 --- 

 h2. Resources 

 * https://wiki.archlinux.org/index.php/ZFS 
 * https://aur.archlinux.org/packages/zfs-dkms/ 
 * https://aur.archlinux.org/packages/zfs-utils/
