Support #447

Updated by Daniel Curtis over 9 years ago

Like many developers, I like Linux; particularly Arch Linux. And like many sysadmins, I like BSD, particularly FreeBSD. This is a guide to how I set up my recent developer laptop. It includes a few goodies: 
 # ZFS 
 # BlackArch PenTesting Distro 
 # LUKS Emergency Self-Destruct 
 # USB Boot Loader 

 Additional enhancements may come later. However, getting all of this goodness onto a computer takes a little patience and understanding. 

 h2. Securely Wipe the Hard Drive 

 * Once booted into an Arch Live ISO, run the following to find the drive to erase: 
 <pre> 
 fdisk -l 
 </pre> 
 #* Now erase the primary hard drive; this guide uses /dev/sda as the primary hard drive: 
 <pre> 
 dd if=/dev/zero of=/dev/sda bs=4M 
 </pre> 
 #* Now erase the USB drive; this guide uses /dev/sdc as the USB Boot Loader drive: 
 <pre> 
 dd if=/dev/zero of=/dev/sdc bs=1M 
 </pre> 
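
 Zeroing a whole drive can take hours with no feedback. If the live ISO ships GNU coreutils 8.24 or newer, @dd@ accepts @status=progress@; a minimal variant of the wipe above: 
 <pre> 
 # Same wipe, with periodic progress output (requires GNU dd >= 8.24) 
 dd if=/dev/zero of=/dev/sda bs=4M status=progress 
 </pre> 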

 h2. Adding the repository and installing ZFS 

 The maintainer of ZFS on Arch has a signed repository that you can add to the @/etc/pacman.conf@. 

 * Add the *[demz-repo-archiso]* repo: 
 <pre> 
 vi /etc/pacman.conf 
 </pre> 
 #* Or run the following to append the [demz-repo-archiso] repo to the end of /etc/pacman.conf (single quotes keep the shell from expanding $repo and $arch): 
 <pre> 
 echo "[demz-repo-archiso]" >> /etc/pacman.conf 
 echo 'Server = http://demizerone.com/$repo/$arch' >> /etc/pacman.conf 
 </pre> 

 * Now the repo key needs to be received and locally signed: 
 <pre> 
 pacman-key -r 0EE7A126 
 pacman-key --lsign-key 0EE7A126 
 </pre> 
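
 #* To confirm the key was imported and locally signed before syncing, list it (pacman-key passes this option through to gpg): 
 <pre> 
 # Should show the 0EE7A126 key along with the local signature 
 pacman-key --list-keys 0EE7A126 
 </pre> 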

 * Now update the repository information: 
 <pre> 
 pacman -Sy 
 </pre> 

 * It's time to install ZFS: 
 <pre> 
 pacman -S zfs 
 </pre> 

 * Load the ZFS kernel module: 
 <pre> 
 modprobe zfs 
 </pre> 

 * Check to see that the module was loaded: 
 <pre> 
 lsmod | grep zfs 
 </pre> 
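
 #* @modinfo@ also shows which ZFS module version the repository shipped; a quick sanity check: 
 <pre> 
 # Print the module version and its dependencies (expect spl among them) 
 modinfo zfs | grep -E '^(version|depends)' 
 </pre> 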

 h2. Install the patched cryptsetup 

 * Install the base-devel and libutil-linux packages: 
 <pre> 
 pacman -S base-devel libutil-linux 
 </pre> 

 * Grab the patched cryptsetup from the AUR: 
 <pre> 
 mkdir ~/src && cd ~/src 
 wget https://aur.archlinux.org/packages/cr/cryptsetup-nuke-keys/cryptsetup-nuke-keys.tar.gz 
 tar xzf cryptsetup-nuke-keys.tar.gz 
 cd cryptsetup-nuke-keys 
 </pre> 
 * Build and install cryptsetup (-s pulls in build dependencies, -i installs the built package; answer y at the prompts): 
 <pre> 
 makepkg -si 
 </pre> 
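
 * Before relying on the nuke feature, confirm the patched package replaced the stock binary (package name taken from the AUR tarball above): 
 <pre> 
 # The AUR package should now own cryptsetup 
 pacman -Q cryptsetup-nuke-keys 
 cryptsetup --version 
 </pre> 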

 h2. Preparing the USB Boot Loader 

 * Find where the USB drive is by running: 
 <pre> 
 fdisk -l 
 </pre> 
 *NOTE*: Since I am using an Arch ISO from a USB drive, this guide will use /dev/sdc for the USB Boot Loader. 

 * Open cfdisk on the USB drive: 
 <pre> 
 cfdisk /dev/sdc 
 </pre> 
 #* Erase all partitions, create a small partition for the bootloader, then add a partition with the rest of the drive for storage: 
 <pre> 
 [New] 
 primary 
 512 
 [Bootable] (make sure to have sdc1 selected) 
 (Select Free Space) 
 [New] 
 primary 
 (Rest of the USB space) 
 [Write] 
 yes 
 [Quit] 
 </pre> 

 * Make an ext3 filesystem for @/boot@: 
 <pre> 
 mkfs.ext3 /dev/sdc1 
 </pre> 

 * Make a FAT filesystem for general storage on the USB drive: 
 <pre> 
 mkfs.fat /dev/sdc2 
 </pre> 
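
 * Before moving on, verify that the USB layout looks right: 
 <pre> 
 # Expect sdc1 (ext3, for /boot) and sdc2 (FAT, for storage) 
 lsblk -f /dev/sdc 
 </pre> 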

 h2. Setting up the encrypted hard drive 

 * Create a LUKS volume on /dev/sda: 
 <pre> 
 cryptsetup -i 15000 -c aes-xts-plain64 -h sha512 -y -s 512 luksFormat /dev/sda 
 </pre> 
 > Enter passphrase: 
 > Verify passphrase: 

 * Add the LUKS Emergency Self-Destruct passphrase: 
 <pre> 
 cryptsetup luksAddNuke /dev/sda 
 </pre> 
 > Enter any existing passphrase:        (existing password) 
 > Enter new passphrase for key slot:    (set the nuke password) 
 > Verify passphrase:                    (verify the nuke password) 

 * Open the LUKS volume: 
 <pre> 
 cryptsetup luksOpen /dev/sda root 
 </pre> 

 *NOTE*: This will create the mapped device */dev/mapper/root*. This is where the ZFS root will be installed. 
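
 #* You can confirm the mapping and the cipher parameters chosen during luksFormat with: 
 <pre> 
 # Shows cipher, key size, and the backing device of the opened volume 
 cryptsetup status root 
 </pre> 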

 #* (Optional) Create a backup of the LUKS header: 
 <pre> 
 cryptsetup luksHeaderBackup /dev/sda --header-backup-file /path/to/backup-luksHeader.img 
 </pre> 
 #* (Optional) Restore the LUKS header from a backup: 
 <pre> 
 cryptsetup luksHeaderRestore /dev/sda --header-backup-file /path/to/backup-luksHeader.img 
 </pre> 

 h3. Preparing the encrypted primary hard drive 

 * Open cfdisk on the primary hard drive: 
 <pre> 
 cfdisk /dev/mapper/root 
 </pre> 
 #* Add the primary partition for ZFS 
 <pre> 
 (Select Free Space) 
 [New] 
 primary 
 (All of the HD space) 
 [Type] 
 BF 
 [Write] 
 yes 
 [Quit] 
 </pre> 

 h2. Setting up the ZFS filesystem 

 * Create the zpool: 
 <pre> 
 zpool create zroot /dev/mapper/root 
 </pre> 
 *WARNING*: Always use persistent device names (e.g. the IDs under /dev/disk/by-id) when working with ZFS, otherwise import errors will occur. 

 * Sub-filesystems for mount points such as /home and /vms can be created with the following commands: 
 <pre> 
 zfs create zroot/home -o mountpoint=/home 
 zfs create zroot/vms -o mountpoint=/vms 
 </pre> 

 *NOTE*: If you want to use other datasets for system directories (/var or /etc included), your system will not boot unless they are listed in /etc/fstab! We will address this at the appropriate time in this tutorial. 
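
 * At this point you can verify the pool and its datasets: 
 <pre> 
 # List datasets with their mount points, then check pool health 
 zfs list 
 zpool status zroot 
 </pre> 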

 h2. Swap partition 

 ZFS does not allow the use of swap files, but it is possible to use a ZFS volume (ZVOL) as a swap partition. It is important to set the ZVOL block size to match the system page size; for x86 and x86_64 systems that is 4K. 

 * Create a 2 GB (or whatever is required) ZFS volume: 
 <pre> 
 zfs create -V 2G -b 4K zroot/swap 
 </pre> 

 * Initialize and enable the volume as a swap partition: 
 <pre> 
 mkswap /dev/zvol/zroot/swap 
 swapon /dev/zvol/zroot/swap 
 </pre> 
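
 #* A quick check that the ZVOL-backed swap is active: 
 <pre> 
 # The zd* device backing zroot/swap should appear in both listings 
 swapon -s 
 free -m 
 </pre> 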

 * Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported: 
 <pre> 
 zfs umount -a 
 </pre> 

 h2. Configure the ZFS root filesystem 

 * First, set the mount point of the root filesystem: 
 <pre> 
 zfs set mountpoint=/ zroot 
 </pre> 
 #* And optionally, any sub-filesystems: 
 <pre> 
 zfs set mountpoint=/home zroot/home 
 zfs set mountpoint=/vms zroot/vms 
 </pre> 

 * Set the bootfs property on the descendant root filesystem so the boot loader knows where to find the operating system. 
 <pre> 
 zpool set bootfs=zroot zroot 
 </pre> 
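
 #* Double-check the property took effect: 
 <pre> 
 # Should print: zroot  bootfs  zroot  local 
 zpool get bootfs zroot 
 </pre> 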

 * Turn off swap, if enabled: 
 <pre> 
 swapoff -a 
 </pre> 

 * Export the pool: 
 <pre> 
 zpool export zroot 
 </pre> 

 *WARNING*: Do not skip this, otherwise you will be required to use -f when importing your pools. This unloads the imported pool. 
 *NOTE*: This might fail if you added a swap partition above; turn it off first with the @swapoff@ command. 

 * Finally, re-import the pool: 
 <pre> 
 zpool import -d /dev/mapper -R /mnt zroot 
 </pre> 

 *NOTE*: @-d@ is not the actual device id, but the @/dev/mapper@ directory containing the symbolic links. 

 If there is an error in this step, you can export the pool to redo the command. The ZFS filesystem is now ready to use. 

 * Be sure to bring the @zpool.cache@ file into your new system. This is required later for the ZFS daemon to start. 
 <pre> 
 mkdir -p /mnt/etc/zfs 
 cp /etc/zfs/zpool.cache /mnt/etc/zfs/zpool.cache 
 </pre> 
 *# If you don't have /etc/zfs/zpool.cache, create it: 
 <pre> 
 zpool set cachefile=/etc/zfs/zpool.cache zroot 
 </pre> 

 h2. Installing Arch 

 * Start by mounting the boot partition 
 <pre> 
 mkdir /mnt/boot 
 mount /dev/sdc1 /mnt/boot 
 </pre> 

 * Now change the repository to *demz-repo-core* 
 <pre> 
 vi /etc/pacman.conf 
 </pre> 
 #* And change @[demz-repo-archiso]@ to the following 
 > [demz-repo-core] 
 > Server = http://demizerone.com/$repo/$arch 

 * Then install the base system 
 <pre> 
 pacstrap -i /mnt base base-devel grub openssh zfs 
 </pre> 

 * Generate fstab entries for the boot partition: 
 <pre> 
 genfstab -U -p /mnt | grep boot >> /mnt/etc/fstab 
 </pre> 

 * Edit the @/etc/fstab@. If you chose to create datasets for system directories, keep them in this fstab!  
 <pre> 
 vi /mnt/etc/fstab 
 </pre> 
 #* +Comment out the lines+ for the /, /root, and /home mount points rather than deleting them; you may need those UUIDs later if something goes wrong. If you followed this guide exactly, you can delete everything except the swap and the boot partition entries. By convention, the swap's UUID is replaced with /dev/zvol/zroot/swap. 
 #* Edit @/mnt/etc/fstab@ to ensure the swap partition is mounted at boot: 
 <pre> 
 vi /mnt/etc/fstab 
 </pre> 
 > /dev/zvol/zroot/swap none swap defaults 0 0 

 * Setup the initial environment: 
 <pre> 
 arch-chroot /mnt 
 </pre> 
 #* Set a root password 
 <pre> 
 passwd 
 </pre> 
 #* Set a hostname 
 <pre> 
 echo "archzfs" > /etc/hostname 
 </pre> 
 #* Set the local time zone 
 <pre> 
 ln -s /usr/share/zoneinfo/America/Los_Angeles /etc/localtime 
 </pre> 
 #* Set a local language by uncommenting *en_US.UTF-8* in @/etc/locale.gen@, then running: 
 <pre> 
 locale-gen 
 </pre>  
 #* Set a wired network connection 
 <pre> 
 cp /etc/netctl/examples/ethernet-dhcp /etc/netctl/wired 
 netctl enable wired 
 </pre> 
 #* Set SSH to start at boot 
 <pre> 
 systemctl enable sshd.service 
 </pre> 

 h3. LXDE 

 * Install the LXDE desktop 
 <pre> 
 pacman -S lxde xorg xorg-xinit dbus gvfs gvfs-smb 
 echo 'exec startlxde' >> ~/.xinitrc 
 startx 
 </pre> 

 h3. Add an administrative user 

 * It is generally a good idea not to run commands directly as root, but rather as an administrative user via the sudo wrapper command 
 * First install sudo: 
 <pre> 
 pacman -S sudo 
 </pre> 
 * And create a user: 
 <pre> 
 useradd -m -g users -s /bin/bash bob 
 </pre> 
 * Add bob to the sudoers file: 
 <pre> 
 visudo 
 </pre> 
 #* And add the following line: 
 > bob ALL=(ALL) ALL 
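
 * To confirm the new account can elevate, switch to it and list its sudo rights: 
 <pre> 
 su - bob 
 sudo -l    # should report: (ALL) ALL 
 exit 
 </pre> 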

 h2. Setup the bootloader and initial ramdisk 

 When creating the initial ramdisk, first edit @/etc/mkinitcpio.conf@ and add *zfs* +before+ *filesystems*. Also move the *keyboard* hook +before+ *zfs* so you can type in the console if something goes wrong, put *usb* +before+ *keyboard*, and *encrypt* +before+ *zfs*. You may also remove -fsck- if you are not using ext3 or ext4 (not the case here, since /boot is ext3). 

 * The @HOOKS@ line should look something like this: 
 > HOOKS="base udev autodetect modconf block *usb keyboard encrypt zfs* filesystems" 

 * Regenerate the initramfs with the command: 
 <pre> 
 mkinitcpio -p linux 
 </pre> 

 h2. Install and configure GRUB 

 * Install GRUB to the USB Boot Loader drive: 
 <pre> 
 grub-install --target=i386-pc --recheck --debug /dev/sdc 
 </pre> 

 h3. Edit GRUB to boot off of the zroot pool 

 grub-mkconfig does not properly detect the ZFS filesystem, so it is necessary to edit grub.cfg manually. 

 * Edit the GRUB config: 
 <pre> 
 vi /boot/grub/grub.cfg 
 </pre> 
 #* Add or modify it to look similar to the following: 
 <pre> 
 set timeout=2 
 set default=0 

 # (0) Arch Linux 
 menuentry "Arch Linux" { 
     set root=(hd0,msdos1) 
     linux /vmlinuz-linux cryptdevice=/dev/sda:root root=/dev/mapper/root zfs=zroot rw 
     initrd /initramfs-linux.img 
 } 
 </pre> 

 h3. Finish the setup process 

 * Exit the chroot environment: 
 <pre> 
 exit 
 </pre> 

 * Unmount all ZFS mount points: 
 <pre> 
 zfs umount -a 
 </pre> 

 * Unmount the bootloader partition: 
 <pre> 
 umount /mnt/boot 
 </pre> 

 * Export the zpool: 
 <pre> 
 zpool export zroot 
 </pre> 

 * Reboot: 
 <pre> 
 reboot 
 </pre> 

 h2. After the first boot 

 If everything went fine up to this point, your system will boot. Once. For your system to be able to reboot without issues, you need to enable the zfs.target to auto mount the pools and set the hostid. 

 * For each pool you want automatically mounted, execute: 
 <pre> 
 zpool set cachefile=/etc/zfs/zpool.cache <pool> 
 </pre> 

 * Enable the target with systemd: 
 <pre> 
 systemctl enable zfs.target 
 </pre> 

 When running ZFS on root, the machine's hostid will not be available at the time the root filesystem is mounted. There are two solutions to this. You can place your SPL hostid in the kernel parameters in your boot loader, for example by adding *spl.spl_hostid=0x00bab10c* (use the @hostid@ command to get your number). 

 * The other, and suggested, solution is to make sure that there is a hostid in /etc/hostid and then regenerate the initramfs image, which will copy the hostid into it. To write the hostid file safely you need to use a small C program: 
 <pre> 
 #include <stdio.h> 
 #include <errno.h> 
 #include <unistd.h> 

 int main() { 
     int res; 
     res = sethostid(gethostid()); 
     if (res != 0) { 
         switch (errno) { 
             case EACCES: 
             fprintf(stderr, "Error! No permission to write the" 
                          " file used to store the host ID.\n" 
                          "Are you root?\n"); 
             break; 
             case EPERM: 
             fprintf(stderr, "Error! The calling process's effective" 
                             " user or group ID is not the same as" 
                             " its corresponding real ID.\n"); 
             break; 
             default: 
             fprintf(stderr, "Unknown error.\n"); 
         } 
         return 1; 
     } 
     return 0; 
 } 
 </pre> 

 * Copy it, save it as @writehostid.c@ and compile it with: 
 <pre> 
 gcc -o writehostid writehostid.c 
 </pre> 
 #* Finally execute it and regenerate the initramfs image: 
 <pre> 
 ./writehostid 
 mkinitcpio -p linux 
 </pre> 

 You can now delete the two files writehostid.c and writehostid. Your system should work and reboot properly now.  

 h2. Installing BlackArch 

 h3. Add the *[multilib]* repository 

 * Make sure to uncomment the *[multilib]* repo in @/etc/pacman.conf@, similar to the following: 
 > [multilib] 
 > Include = /etc/pacman.d/mirrorlist 
 * Refresh pacman: 
 <pre> 
 pacman -Syy 
 </pre> 

 h3. Setting up as an Unofficial User Repository 

 BlackArch is compatible with normal Arch installations. It acts as an unofficial user repository. 

 # Run the strap.sh script from http://blackarch.org/strap.sh as root: 
 <pre> 
 curl -s http://blackarch.org/strap.sh | sudo sh 
 </pre> 
 # Run the following to add the BlackArch repository to @/etc/pacman.conf@: 
 <pre> 
 echo "[blackarch]" >> /etc/pacman.conf 
 echo "Server = http://mirror.team-cymru.org/blackarch/\$repo/os/\$arch" >> /etc/pacman.conf 
 </pre> 
 # Now run: 
 <pre> 
 pacman -Syyu 
 </pre> 
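
 BlackArch packages are organized into pacman groups, so you can install tools by category instead of pulling in everything at once; a sketch (group names as documented on blackarch.org): 
 <pre> 
 # List the available BlackArch groups 
 pacman -Sgg | grep blackarch | cut -d' ' -f1 | sort -u 
 # Install a single category, e.g. the scanners 
 pacman -S blackarch-scanner 
 </pre> 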

 h2. Installing other developer tools and packages 

 There are a few more packages that I use in my day-to-day tasks; for brevity I will refer to Issue #410. 

 h2. Optimizing and Tweaking 

 ZFS offers many useful features like snapshotting, replication, and dataset customization.  

 h3. Virtual Machine Optimizations 

 Since I will be running virtual machines from my developer laptop, I need a ZFS dataset on which I can enable or disable certain features, like compression, to allow VMs to run more smoothly. This is why the @zroot/vms@ dataset was created during the initial setup. 

 * Options for zfs can be displayed using the zfs command: 
 <pre> 
 sudo zfs get all zroot/vms 
 </pre> 
 #* This will return something like: 
 <pre> 
 NAME         PROPERTY                VALUE                    SOURCE 
 zroot/vms    type                    filesystem               - 
 zroot/vms    creation                Sun Aug 31 14:47 2014    - 
 zroot/vms    used                    30K                      - 
 zroot/vms    available               29.5G                    - 
 zroot/vms    referenced              30K                      - 
 zroot/vms    compressratio           1.00x                    - 
 zroot/vms    mounted                 yes                      - 
 zroot/vms    quota                   none                     default 
 zroot/vms    reservation             none                     default 
 zroot/vms    recordsize              128K                     default 
 zroot/vms    mountpoint              /vms                     local 
 zroot/vms    sharenfs                off                      default 
 zroot/vms    checksum                on                       default 
 zroot/vms    compression             on                       default 
 zroot/vms    atime                   on                       default 
 zroot/vms    devices                 on                       default 
 zroot/vms    exec                    on                       default 
 zroot/vms    setuid                  on                       default 
 zroot/vms    readonly                off                      default 
 zroot/vms    zoned                   off                      default 
 zroot/vms    snapdir                 hidden                   default 
 zroot/vms    aclinherit              restricted               default 
 zroot/vms    canmount                on                       default 
 zroot/vms    xattr                   on                       default 
 zroot/vms    copies                  1                        default 
 zroot/vms    version                 5                        - 
 zroot/vms    utf8only                off                      - 
 zroot/vms    normalization           none                     - 
 zroot/vms    casesensitivity         sensitive                - 
 zroot/vms    vscan                   off                      default 
 zroot/vms    nbmand                  off                      default 
 zroot/vms    sharesmb                off                      default 
 zroot/vms    refquota                none                     default 
 zroot/vms    refreservation          none                     default 
 zroot/vms    primarycache            all                      default 
 zroot/vms    secondarycache          all                      default 
 zroot/vms    usedbysnapshots         0                        - 
 zroot/vms    usedbydataset           30K                      - 
 zroot/vms    usedbychildren          0                        - 
 zroot/vms    usedbyrefreservation    0                        - 
 zroot/vms    logbias                 latency                  default 
 zroot/vms    dedup                   off                      default 
 zroot/vms    mlslabel                none                     default 
 zroot/vms    sync                    standard                 default 
 zroot/vms    refcompressratio        1.00x                    - 
 zroot/vms    written                 30K                      - 
 zroot/vms    logicalused             15K                      - 
 zroot/vms    logicalreferenced       15K                      - 
 zroot/vms    snapdev                 hidden                   default 
 zroot/vms    acltype                 off                      default 
 zroot/vms    context                 none                     default 
 zroot/vms    fscontext               none                     default 
 zroot/vms    defcontext              none                     default 
 zroot/vms    rootcontext             none                     default 
 zroot/vms    relatime                off                      default 
 </pre> 

 * The above output shows that compression is turned on. To disable compression for this dataset, while still keeping it active for all other datasets in the zpool, run: 
 <pre> 
 zfs set compression=off zroot/vms 
 </pre> 
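
 #* Verify that only this dataset changed; the property should now read *off* with a local source, while siblings keep the inherited value: 
 <pre> 
 zfs get compression zroot/vms zroot/home 
 </pre> 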

 h3. Snapshotting 

 The *zfs-snap-manager* package from the AUR provides a Python service that takes daily snapshots of a configurable set of ZFS datasets and cleans them out in a "grandfather-father-son" scheme. It can be configured to, e.g., keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots. 
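
 Independently of the manager, snapshots can always be taken and inspected by hand, which is useful for testing before wiring up the service: 
 <pre> 
 # Take a manual snapshot of the home dataset and list it 
 zfs snapshot zroot/home@manual-test 
 zfs list -t snapshot 
 # Destroy it when done 
 zfs destroy zroot/home@manual-test 
 </pre> 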

 * First install some dependencies: 
 *# python2-daemon 
 <pre> 
 mkdir ~/src && cd ~/src 
 wget https://aur.archlinux.org/packages/py/python2-daemon/python2-daemon.tar.gz 
 tar xzf python2-daemon.tar.gz 
 cd python2-daemon 
 makepkg -si 
 </pre> 
 *# mbuffer 
 <pre> 
 cd ~/src 
 wget https://aur.archlinux.org/packages/mb/mbuffer/mbuffer.tar.gz 
 tar xzf mbuffer.tar.gz 
 cd mbuffer 
 makepkg -si 
 </pre> 

 * Install zfs-snap-manager from the AUR 
 <pre> 
 cd ~/src 
 wget https://aur.archlinux.org/packages/zf/zfs-snap-manager/zfs-snap-manager.tar.gz 
 tar xzf zfs-snap-manager.tar.gz 
 cd zfs-snap-manager 
 makepkg -si 
 </pre> 

 * Create a simple snapshot config 
 <pre> 
 vi /etc/zfssnapshotmanager.cfg 
 </pre> 
 #* And enter in the following: 
 <pre> 
 [zroot] 
 mountpoint = / 
 time = 21:00 
 snapshot = True 
 schema = 7d3w11m5y 
 </pre> 

 * Create a simple systemd service configuration: 
 <pre> 
 vi /etc/systemd/system/zfs-snap-manager.service 
 </pre> 
 #* And enter in the following: 
 <pre> 
 [Unit] 
 Description=ZFS Snapshot Manager 
 After=syslog.target 
 [Service] 
 Type=simple 
 User=root 
 Group=root 
 WorkingDirectory=/usr/lib/zfs-snap-manager/ 
 PIDFile=/var/run/zfs-snap-manager.pid 
 ExecStart=/usr/lib/zfs-snap-manager/manager.py start 
 ExecStop=/usr/lib/zfs-snap-manager/manager.py stop 
 [Install] 
 WantedBy=multi-user.target 
 </pre> 

 * And finally enable the service at boot and start it: 
 <pre> 
 systemctl enable zfs-snap-manager.service 
 systemctl start zfs-snap-manager.service 
 </pre> 

 The package also supports configurable replication to other machines running ZFS by means of zfs send and zfs receive. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.  
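
 As a rough sketch of what such replication boils down to (the host *backuphost* and the pool *backup* are placeholders), @zfs send@ is piped over SSH into @zfs receive@: 
 <pre> 
 # Initial full send of a snapshot to a remote pool 
 zfs snapshot zroot/home@repl-1 
 zfs send zroot/home@repl-1 | ssh backuphost zfs receive backup/home 
 # Later, send only the changes since the last replicated snapshot 
 zfs snapshot zroot/home@repl-2 
 zfs send -i zroot/home@repl-1 zroot/home@repl-2 | ssh backuphost zfs receive backup/home 
 </pre> 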

 h2. Resources 

 * https://wiki.archlinux.org/index.php/ZFS 
 * https://wiki.archlinux.org/index.php/Installing_Arch_Linux_on_ZFS 
 * http://blackarch.org/download.html
