Support #388

Installing ZFS on Arch Linux with GRUB

Added by Daniel Curtis over 10 years ago. Updated over 8 years ago.

Status: Closed
Priority: High
Assignee:
Category: -
Target version: -
Start date: 05/06/2014
Due date:
% Done: 100%
Estimated time: 4.00 h
Spent time:

Description

ZFS is an advanced filesystem created by Sun Microsystems (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management -- zpool), Copy-on-write, snapshots, data integrity verification and automatic repair (scrubbing), RAID-Z, a maximum 16 Exabyte file size, and a maximum 256 Zettabyte volume size. ZFS is licensed under the Common Development and Distribution License (CDDL).

Described as "The last word in filesystems", ZFS is stable, fast, secure, and future-proof. Because it is licensed under the GPL-incompatible CDDL, ZFS cannot be distributed along with the Linux kernel. This restriction, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with zfsonlinux.org (ZOL).

This guide shows how to perform a root-on-ZFS installation. Begin by downloading and booting a recent Arch Linux ISO.

Adding the repository and installing ZFS

The maintainer of ZFS on Arch has a signed repository that you can add to /etc/pacman.conf.

  • Add the [demz-repo-archiso] repo:
    vi /etc/pacman.conf
    
    • And add the following to the end:

[demz-repo-archiso]
Server = http://demizerone.com/$repo/$arch

  • Now the repo key needs to be received and locally signed:
    pacman-key -r 0EE7A126
    pacman-key --lsign-key 0EE7A126
    
  • Now update the repository information:
    pacman -Sy
    
  • It's time to install ZFS:
    pacman -S zfs
    
  • Load the ZFS kernel module:
    modprobe zfs
    
  • Check to see that the module was loaded:
    lsmod | grep zfs
    

Preparing the system

  • Open cfdisk:
    cfdisk /dev/sda
    
    • Erase all partitions, create a small partition for the bootloader, then add the primary partition for ZFS
      [Delete] (all partitions)
      [New]
      primary
      512
      [Bootable] (make sure to have sda1 selected)
      (Select Free Space)
      [New]
      primary
      (Rest of the HD space)
      [Type]
      BF
      [Write]
      yes
      [Quit]
      

NOTE: If using a USB drive, then skip creating the 512MB partition and use the whole drive.

  • Format the boot partition
    mkfs.ext3 /dev/sda1
    

Setting up the ZFS filesystem

  • Create the zpool:
    zpool create zroot /dev/disk/by-id/id-to-partition
    

    WARNING: Always use id names when working with ZFS, otherwise import errors will occur (see the sketch just below this list for finding them).
  • Create necessary filesystems
    If so desired, sub-filesystem mount points such as /home and /root can be created with the following commands:
    zfs create -o mountpoint=/home zroot/home
    zfs create -o mountpoint=/root zroot/root
    

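As noted in the warning above, ZFS should be pointed at the persistent /dev/disk/by-id links rather than /dev/sdX names. A quick way to see which link corresponds to your partition (standard udev layout; the exact link names depend on your hardware):

ls -l /dev/disk/by-id/
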
NOTE: If you want to use other datasets for system directories (/var or /etc included), your system will not boot unless they are listed in /etc/fstab! This is addressed at the appropriate point later in this guide.

Swap partition

ZFS does not allow the use of swap files, but it is possible to use a ZFS volume (ZVOL) as a swap partition. It is important to set the ZVOL block size to match the system page size; for x86 and x86_64 systems that is 4k.
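
If you would rather confirm the page size than assume 4k, getconf (part of glibc) can report it:

getconf PAGESIZE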

  • Create a 2 GB (or whatever is required) ZFS volume:
    zfs create -V 2G -b 4K zroot/swap
    
  • Initialize and enable the volume as a swap partition:
    mkswap /dev/zvol/zroot/swap
    swapon /dev/zvol/zroot/swap
    
  • Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported:
    zfs umount -a
    

Configure the root filesystem

  • First, set the mount point of the root filesystem:
    zfs set mountpoint=/ zroot
    
    1. and optionally, any sub-filesystems:
      zfs set mountpoint=/home zroot/home
      zfs set mountpoint=/root zroot/root
      
    2. and if you have separate datasets for system directories (i.e. /var or /usr):
      zfs set mountpoint=legacy zroot/usr
      zfs set mountpoint=legacy zroot/var
      
    3. Then put them in /etc/fstab:
      vi /etc/fstab
      
    4. and add the following:

<file system> <dir> <type> <options> <dump> <pass>
zroot/usr /usr zfs defaults,noatime 0 0
zroot/var /var zfs defaults,noatime 0 0

  • Set the bootfs property on the pool so the boot loader knows where to find the operating system.
    zpool set bootfs=zroot zroot
    
  • Turn off swap, if enabled:
    swapoff -a
    
  • Export the pool:
    zpool export zroot
    

WARNING: Do not skip this step, otherwise you will be required to use -f when importing your pools; exporting unloads the imported pool.
NOTE: The export might fail if you added a swap volume above; turn it off first with the swapoff command.

  • Finally, re-import the pool:
    zpool import -d /dev/disk/by-id -R /mnt zroot
    

NOTE: The argument to -d is not a device id, but the /dev/disk/by-id directory containing the symbolic links.
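
If you are not sure which pools are visible at this point, running zpool import with only the directory argument lists importable pools without actually importing anything:

zpool import -d /dev/disk/by-id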

If there is an error in this step, you can export the pool to redo the command. The ZFS filesystem is now ready to use.

  • Be sure to bring the zpool.cache file into your new system. This is required later for the ZFS daemon to start.
    mkdir -p /mnt/etc/zfs
    cp /etc/zfs/zpool.cache /mnt/etc/zfs/zpool.cache
    
    1. If you don't have /etc/zfs/zpool.cache, create it:
      zpool set cachefile=/etc/zfs/zpool.cache zroot
      

Installing Arch

  • Start by mounting the boot partition
    mkdir /mnt/boot
    mount /dev/sda1 /mnt/boot
    

NOTE: If using a USB drive for the bootloader, mount it to /mnt/boot instead.

  • Now change the repository to demz-repo-core
    vi /etc/pacman.conf
    
    • And change [demz-repo-archiso] to the following

[demz-repo-core]
Server = http://demizerone.com/$repo/$arch

  • Then install the base system
    pacstrap -i /mnt base base-devel grub openssh zfs
    
  • Generate the fstab entry for the boot partition:
    genfstab -U -p /mnt | grep boot >> /mnt/etc/fstab
    
  • Edit /mnt/etc/fstab. If you chose to create datasets for system directories, keep their entries in this fstab!
    vi /mnt/etc/fstab
    
    • Comment out the lines for the /, /root, and /home mount points rather than deleting them; you may need those UUIDs later if something goes wrong. If you followed the guide's directions exactly, you can delete everything except for the swap volume and the boot/EFI partition. It is conventional to replace the swap's UUID with /dev/zvol/zroot/swap.
    • Edit /mnt/etc/fstab to ensure the swap partition is mounted at boot:
      vi /mnt/etc/fstab
      

/dev/zvol/zroot/swap none swap defaults 0 0

  • Setup the initial environment:
    arch-chroot /mnt
    
    • Set a root password
      passwd
      
    • Set a hostname
      echo "archzfs" > /etc/hostname
      
    • Set a local time
      ln -s /usr/share/zoneinfo/America/Los_Angeles /etc/localtime
      
    • Set a local language by uncommenting en_US.UTF-8 in /etc/locale.gen, then running:
      locale-gen
      
    • Set a wired network connection
      cp /etc/netctl/examples/ethernet-dhcp /etc/netctl/wired
      netctl enable wired
      
    • Set SSH to start at boot
      systemctl enable sshd.service
      
  • Install yaourt

Setup the bootloader and initial ramdisk

  • When creating the initial ramdisk, first edit /etc/mkinitcpio.conf and add zfs before filesystems. Also, move the keyboard hook before zfs so you can type in the console if something goes wrong. You may also remove fsck (if you are not using Ext3 or Ext4). Your HOOKS line should look something like this:

HOOKS="base udev autodetect modconf block keyboard zfs filesystems"

  • Regenerate the initramfs with the command:
    mkinitcpio -p linux
    

Install and configure GRUB

  • Install GRUB to the primary hard drive:
    grub-install --target=i386-pc --recheck --debug /dev/sda
    

Edit GRUB to boot off of the zroot pool

grub-mkconfig does not properly detect the ZFS filesystem, so it is necessary to edit grub.cfg manually.

  • Edit the GRUB config:
    vi /boot/grub/grub.cfg
    
    • Add or modify it similar to the following
      set timeout=2
      set default=0
      
      # (0) Arch Linux
      menuentry "Arch Linux" {
          set root=(hd0,msdos1)
          linux /vmlinuz-linux zfs=zroot rw
          initrd /initramfs-linux.img
      }
      

If you did not create a separate /boot partition, the kernel and initrd paths have to be in the following format:

/dataset/@/actual/path

Example:

linux /@/boot/vmlinuz-linux zfs=zroot rw
initrd /@/boot/initramfs-linux.img

Finish the setup process

  • Exit the chroot environment:
    exit
    
  • Unmount all ZFS mount points:
    zfs umount -a
    
  • Unmount the bootloader partition:
    umount /mnt/boot
    
  • Export the zpool:
    zpool export zroot
    
  • Reboot:
    reboot
    

After the first boot

If everything went fine up to this point, your system will boot. Once. For your system to be able to reboot without issues, you need to enable the zfs.target to auto mount the pools and set the hostid.

  • For each pool you want automatically mounted execute:
    zpool set cachefile=/etc/zfs/zpool.cache <pool>
    
  • Enable the target with systemd:
    systemctl enable zfs.target
    

When running ZFS on root, the machine's hostid will not be available at the time the root filesystem is mounted. There are two solutions to this. The first is to place the SPL hostid in the kernel parameters in your boot loader, for example by adding spl.spl_hostid=0x00bab10c (to get your number, use the hostid command).
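
With the menuentry created earlier, that would make the kernel line look roughly like the following (0x00bab10c is only the example value from this paragraph; substitute the output of hostid on your machine, and adjust the path if you used the /dataset/@/ form instead of a separate /boot partition):

linux /vmlinuz-linux zfs=zroot rw spl.spl_hostid=0x00bab10c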

  • The other, and suggested, solution is to make sure that there is a hostid in /etc/hostid, and then regenerate the initramfs image, which will copy the hostid into it. To write the hostid file safely you need to use a small C program:
    /* writehostid.c: persist the current hostid to /etc/hostid using sethostid(3). */
    #include <stdio.h>
    #include <errno.h>
    #include <unistd.h>

    int main() {
        int res;
        /* gethostid() returns the current hostid; sethostid() writes it to /etc/hostid. */
        res = sethostid(gethostid());
        if (res != 0) {
            switch (errno) {
                case EACCES:
                    fprintf(stderr, "Error! No permission to write the"
                                    " file used to store the host ID.\n"
                                    "Are you root?\n");
                    break;
                case EPERM:
                    fprintf(stderr, "Error! The calling process's effective"
                                    " user or group ID is not the same as"
                                    " its corresponding real ID.\n");
                    break;
                default:
                    fprintf(stderr, "Unknown error.\n");
            }
            return 1;
        }
        return 0;
    }
    
  • Copy it, save it as writehostid.c and compile it with:
    gcc -o writehostid writehostid.c
    
    • Finally execute it and regenerate the initramfs image:
      ./writehostid
      mkinitcpio -p linux
      

You can now delete the two files writehostid.c and writehostid. Your system should work and reboot properly now.
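
If you want to clean up right away (both filenames are the ones used above):

rm writehostid.c writehostid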

Encryption in ZFS on Linux

ZFS on Linux does not support encryption directly, but zpools can be created on dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while keeping all the advantages of ZFS such as deduplication, compression, and data robustness.

dm-crypt, possibly via LUKS, creates devices in /dev/mapper and their names are fixed, so you just need to change the zpool create commands to point to those names. The idea is to configure the system to create the /dev/mapper block devices and import the zpools from there. Since zpools can span multiple devices (RAID, mirroring, striping, ...), it is important that all the devices are encrypted, otherwise the protection might be partially lost.

  • For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:
    cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 --key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX encrypted
    zpool create zroot /dev/mapper/encrypted
    

In the case of a root filesystem pool, the mkinitcpio.conf HOOKS line will enable the keyboard for the password, create the devices, and load the pools. It will contain something like:

HOOKS="base udev autodetect modconf block keyboard encrypt zfs filesystems"

Since the /dev/mapper/encrypted name is fixed, no import errors will occur.

Creating encrypted zpools works fine, but if you need encrypted directories, for example to protect your users' home directories, ZFS loses some functionality.

ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data always has high entropy, which makes compression ineffective, and even the same input produces different output (thanks to salting), which makes deduplication impossible. To reduce the unnecessary overhead, it is possible to create a sub-filesystem for each encrypted directory and use eCryptfs on it.

  • For example to have an encrypted home:
    zfs create -o compression=off -o dedup=off -o mountpoint=/home/<username> <zpool>/<username>
    useradd -m <username>
    passwd <username>
    ecryptfs-migrate-home -u <username>
    <log in user and complete the procedure with ecryptfs-unwrap-passphrase>
    

NOTE: The two passwords, encryption and login, must be the same.

Resources

#1

Updated by Daniel Curtis over 10 years ago

  • Description updated (diff)
#2

Updated by Daniel Curtis over 10 years ago

  • Description updated (diff)
#3

Updated by Daniel Curtis over 10 years ago

  • Description updated (diff)
  • % Done changed from 40 to 60
#4

Updated by Daniel Curtis over 10 years ago

  • Description updated (diff)
#5

Updated by Daniel Curtis over 10 years ago

  • Subject changed from Installing ZFS on Arch to Installing ZFS on Arch Linux with GRUB
  • Description updated (diff)
#6

Updated by Daniel Curtis over 10 years ago

  • Description updated (diff)
#7

Updated by Daniel Curtis over 10 years ago

  • Description updated (diff)
  • Status changed from In Progress to Resolved
  • % Done changed from 60 to 90
#8

Updated by Daniel Curtis over 10 years ago

  • Description updated (diff)
  • % Done changed from 90 to 100
#9

Updated by Daniel Curtis about 10 years ago

  • Description updated (diff)
  • Status changed from Resolved to Closed
#10

Updated by Daniel Curtis over 8 years ago

  • Description updated (diff)
