This manual describes how to install, use and extend NixOS, a Linux distribution based on the purely functional package management system Nix, that is composed using modules and packages defined in the Nixpkgs project.
Additional information regarding the Nix package manager and the Nixpkgs project can be found in the Nix manual and the Nixpkgs manual, respectively.
If you encounter problems, please report them on the Discourse, the Matrix room, or on the #nixos channel on Libera.Chat. Alternatively, consider contributing to this manual. Bugs should be reported in NixOS’ GitHub issue tracker.
Commands prefixed with # have to be run as root, either by logging in as the root user or by temporarily switching to it, for example using sudo.
This section describes how to obtain, install, and configure NixOS for first-time use.
NixOS ISO images can be downloaded from the NixOS download page. Follow the instructions in the section called “Booting from a USB flash drive” to create a bootable USB flash drive.
If you have a very old system that can’t boot from USB, you can burn the image to an empty CD. NixOS might not work very well on such systems.
As an alternative to installing NixOS yourself, you can get a running NixOS system through several other means:
Using virtual appliances in Open Virtualization Format (OVF) that can be imported into VirtualBox. These are available from the NixOS download page.
Using AMIs for Amazon’s EC2. To find one for your region, please refer to the download page.
Using NixOps, the NixOS-based cloud deployment tool, which allows you to provision VirtualBox and EC2 NixOS instances from declarative specifications. Check out the NixOps homepage for details.
To begin the installation, you have to boot your computer from the install drive.
Plug in the install drive. Then turn on or restart your computer.
Open the boot menu by pressing the appropriate key, which is usually shown on the display during early boot. Select the USB flash drive (the option usually contains the word “USB”). If you choose the incorrect drive, your computer will likely continue to boot as normal. In that case, restart your computer and pick a different drive.
The key to open the boot menu differs across computer brands and even models. It can be F12, but also F1, F9, F10, Enter, Del, Esc or another function key. If you are unsure and don’t see it on the early boot screen, you can search online for your computer’s brand and model followed by “boot from usb”. The computer might not even have that feature, in which case you have to go into the BIOS/UEFI settings to change the boot order. Again, search online for details about your specific computer model.
For Apple computers with Intel processors, press and hold the ⌥ (Option or Alt) key until you see the boot menu. On Apple silicon, press and hold the power button.
If your computer supports both BIOS and UEFI boot, choose the UEFI option. You will likely need to disable “Secure Boot” to use the UEFI option. The exact steps vary by device manufacturer, but generally “Secure Boot” will be listed under “Boot”, “Security” or “Advanced” in the BIOS/UEFI menu.
If you use a CD for the installation, the computer will probably boot from it automatically. If not, choose the option containing the word “CD” from the boot menu.
Shortly after selecting the appropriate boot drive, you should be presented with a menu with different installer options. Leave the default and wait (or press Enter to speed up).
The graphical images will start their corresponding desktop environment and the graphical installer, which can take some time. The minimal images will boot to a command line. You have to follow the instructions in the section called “Manual Installation” there.
The graphical installer is recommended for desktop users and will guide you through the installation.
In the “Welcome” screen, you can select the language of the installer and the installed system.
Leaving the language as “American English” will make it easier to search for error messages in a search engine or to report an issue.
Next you should choose your location to have the timezone set correctly. You can actually click on the map!
The installer will use an online service to guess your location based on your public IP address.
Then you can select the keyboard layout. The default keyboard model should work well with most desktop keyboards. If you have a special keyboard or notebook, your model might be in the list. Select the language you are most comfortable typing in.
On the “Users” screen, you have to type in your display name, login name and password. You can also enable an option to automatically log in to the desktop.
Then you have the option to choose a desktop environment. If you want to create a custom setup with a window manager, you can select “No desktop”.
If you don’t have a favorite desktop and don’t know which one to choose, you can stick to either GNOME or Plasma. They have quite different designs, so you should choose whichever you like better. They are both popular choices and well tested on NixOS.
You have the option to allow unfree software in the next screen.
The easiest option in the “Partitioning” screen is “Erase disk”, which will delete all data from the selected disk and install the system on it. Also select “Swap (with Hibernation)” in the dropdown below it. You have the option to encrypt the whole disk with LUKS.
At the top left you can see whether the installer was booted with BIOS or UEFI. If you know your system supports UEFI and it shows “BIOS”, reboot with the correct option.
Make sure you have selected the correct disk at the top and that no valuable data is still on the disk! It will be deleted when formatting the disk.
Check the choices you made in the “Summary” and click “Install”.
The installation takes about 15 minutes. The time varies based on the selected desktop environment, internet connection speed and disk write speed.
When the install is complete, remove the USB flash drive and reboot into your new system!
NixOS can be installed on BIOS or UEFI systems. The procedure for a UEFI installation is broadly the same as for a BIOS installation. The differences are mentioned in the following steps.
The NixOS manual is available by running nixos-help in the command line or from the application menu in the desktop environment.
To have access to the command line on the graphical images, open Terminal (GNOME) or Konsole (Plasma) from the application menu.
You are logged in automatically as nixos. The nixos user account has an empty password so you can use sudo without a password:
$ sudo -i
You can use loadkeys to switch to your preferred keyboard layout. (We even provide neo2 via loadkeys de neo!)
If the text is too small to be legible, try setfont ter-v32n to increase the font size.
To install over a serial port, connect with 115200n8 (e.g. picocom -b 115200 /dev/ttyUSB0). When the bootloader lists boot entries, select the serial console boot entry.
The boot process should have brought up networking (check ip a). Networking is necessary for the installer, since it will download lots of stuff (such as source tarballs or Nixpkgs channel binaries). It’s best if you have a DHCP server on your network. Otherwise configure networking manually using ifconfig.
On the graphical installer, you can configure the network, wifi included, through NetworkManager. Using the nmtui program, you can do so even in a non-graphical session. If you prefer to configure the network manually, disable NetworkManager with systemctl stop NetworkManager.
On the minimal installer, NetworkManager is not available, so configuration must be performed manually. To configure the wifi, first start wpa_supplicant with sudo systemctl start wpa_supplicant, then run wpa_cli. For most home networks, you need to type in the following commands:
> add_network
0
> set_network 0 ssid "myhomenetwork"
OK
> set_network 0 psk "mypassword"
OK
> enable_network 0
OK
For enterprise networks, for example eduroam, instead do:
> add_network
0
> set_network 0 ssid "eduroam"
OK
> set_network 0 identity "myname@example.com"
OK
> set_network 0 password "mypassword"
OK
> enable_network 0
OK
When successfully connected, you should see a line such as this one:
<3>CTRL-EVENT-CONNECTED - Connection to 32:85:ab:ef:24:5c completed [id=0 id_str=]
You can now leave wpa_cli by typing quit.
If you would like to continue the installation from a different machine, you can use the activated SSH daemon. You need to copy your ssh key to either /home/nixos/.ssh/authorized_keys or /root/.ssh/authorized_keys (Tip: for installers with a modifiable filesystem such as the sd-card installer image, a key can be manually placed by mounting the image on a different machine). Alternatively, you must set a password for either root or nixos with passwd to be able to log in.
The NixOS installer doesn’t do any partitioning or formatting, so you need to do that yourself.
The NixOS installer ships with multiple partitioning tools. The examples below use parted, but fdisk, gdisk, cfdisk, and cgdisk are also provided.
Use the command ‘lsblk’ to find the name of your ‘disk’ device.
The recommended partition scheme differs depending on whether the computer uses Legacy Boot or UEFI.
Here’s an example partition scheme for UEFI, using /dev/sda as the device.
You can safely ignore parted’s informational message about needing to update /etc/fstab.
Create a GPT partition table.
# parted /dev/sda -- mklabel gpt
Add the root partition. This will fill the disk except for the end part, where the swap will live, and the space left in front (512MiB) which will be used by the boot partition.
# parted /dev/sda -- mkpart root ext4 512MB -8GB
Next, add a swap partition. The size required will vary according to needs; here an 8GB one is created.
# parted /dev/sda -- mkpart swap linux-swap -8GB 100%
The swap partition size rules are no different than for other Linux distributions.
Finally, the boot partition. NixOS by default uses the ESP (EFI system partition) as its /boot partition. It uses the initially reserved 512MiB at the start of the disk.
# parted /dev/sda -- mkpart ESP fat32 1MB 512MB
# parted /dev/sda -- set 3 esp on
In case you decided not to create a swap partition, replace 3 with 2. To be sure of the id number of the ESP, run parted --list.
Once complete, you can follow with the section called “Formatting”.
Here’s an example partition scheme for Legacy Boot, using /dev/sda as the device.
You can safely ignore parted’s informational message about needing to update /etc/fstab.
Create an MBR partition table.
# parted /dev/sda -- mklabel msdos
Add the root partition. This will fill the disk except for the end part, where the swap will live.
# parted /dev/sda -- mkpart primary 1MB -8GB
Set the root partition’s boot flag to on. This allows the disk to be booted from.
# parted /dev/sda -- set 1 boot on
Finally, add a swap partition. The size required will vary according to needs; here an 8GB one is created.
# parted /dev/sda -- mkpart primary linux-swap -8GB 100%
The swap partition size rules are no different than for other Linux distributions.
Once complete, you can follow with the section called “Formatting”.
Use the following commands:
For initialising Ext4 partitions: mkfs.ext4. It is recommended that you assign a unique symbolic label to the file system using the option -L label, since this makes the file system configuration independent from device changes. For example:
# mkfs.ext4 -L nixos /dev/sda1
For creating swap partitions: mkswap. Again it’s recommended to assign a label to the swap partition: -L label. For example:
# mkswap -L swap /dev/sda2
UEFI systems
For creating boot partitions: mkfs.fat. Again it’s recommended to assign a label to the boot partition: -n label. For example:
# mkfs.fat -F 32 -n boot /dev/sda3
For creating LVM volumes, use the LVM commands, e.g., pvcreate, vgcreate, and lvcreate.
For creating software RAID devices, use mdadm.
Mount the target file system on which NixOS should be installed on /mnt, e.g.
# mount /dev/disk/by-label/nixos /mnt
UEFI systems
Mount the boot file system on /mnt/boot, e.g.
# mkdir -p /mnt/boot
# mount -o umask=077 /dev/disk/by-label/boot /mnt/boot
If your machine has a limited amount of memory, you may want to activate swap devices now (swapon device). The installer (or rather, the build actions that it may spawn) may need quite a bit of RAM, depending on your configuration.
# swapon /dev/sda2
You now need to create a file /mnt/etc/nixos/configuration.nix that specifies the intended configuration of the system. This is because NixOS has a declarative configuration model: you create or edit a description of the desired configuration of your system, and then NixOS takes care of making it happen. The syntax of the NixOS configuration file is described in Configuration Syntax, while a list of available configuration options appears in Appendix A. A minimal example is shown in Example: NixOS Configuration.
The command nixos-generate-config can generate an initial configuration file for you:
# nixos-generate-config --root /mnt
This command accepts an optional --flake option, to also generate a flake.nix file, if you want to set up a flake-based configuration.
You should then edit /mnt/etc/nixos/configuration.nix to suit your needs:
# nano /mnt/etc/nixos/configuration.nix
If you’re using the graphical ISO image, other editors may be available (such as vim). If you have network access, you can also install other editors – for instance, you can install Emacs by running nix-env -f '<nixpkgs>' -iA emacs.
You must set the option boot.loader.grub.device to specify on which disk the GRUB boot loader is to be installed. Without it, NixOS cannot boot.
If there are other operating systems running on the machine before installing NixOS, the boot.loader.grub.useOSProber option can be set to true to automatically add them to the grub menu.
You must select a boot loader, either systemd-boot or GRUB. The recommended option is systemd-boot: set the option boot.loader.systemd-boot.enable to true. nixos-generate-config should do this automatically for new configurations when booted in UEFI mode.
You may want to look at the options starting with boot.loader.efi and boot.loader.systemd-boot as well.
If you want to use GRUB, set boot.loader.grub.device to nodev and boot.loader.grub.efiSupport to true.
With systemd-boot, you should not need any special configuration to detect other installed systems. With GRUB, set boot.loader.grub.useOSProber to true, but this will only detect Windows partitions, not other Linux distributions. If you dual boot another Linux distribution, use systemd-boot instead.
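Concretely, the boot loader choice above amounts to a few lines in configuration.nix. The following is a minimal sketch of the two alternatives for a UEFI system; pick only one (boot.loader.efi.canTouchEfiVariables is commonly enabled alongside systemd-boot, although it is not mentioned above):
{
  # Alternative 1: systemd-boot (recommended)
  boot.loader.systemd-boot.enable = true;
  boot.loader.efi.canTouchEfiVariables = true;

  # Alternative 2: GRUB on UEFI (use instead of the lines above)
  # boot.loader.grub.device = "nodev";
  # boot.loader.grub.efiSupport = true;
}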
If you need to configure networking for your machine, the configuration options are described in Networking. In particular, while wifi is supported on the installation image, it is not enabled by default in the configuration generated by nixos-generate-config.
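If you want wireless networking to work right after the first boot, you can enable it in configuration.nix before installing. A minimal sketch, using either NetworkManager or plain wpa_supplicant (enable only one of the two):
{
  # Alternative 1: NetworkManager (convenient on desktops)
  networking.networkmanager.enable = true;

  # Alternative 2: declarative wpa_supplicant (use instead of the line above)
  # networking.wireless.enable = true;
}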
Another critical option is fileSystems, specifying the file systems that need to be mounted by NixOS. However, you typically don’t need to set it yourself, because nixos-generate-config sets it automatically in /mnt/etc/nixos/hardware-configuration.nix from your currently mounted file systems. (The configuration file hardware-configuration.nix is included from configuration.nix and will be overwritten by future invocations of nixos-generate-config; thus, you generally should not modify it.) Additionally, you may want to look at Hardware configuration for known-hardware at this point or after installation.
Depending on your hardware configuration or type of file system, you may need to set the option boot.initrd.kernelModules to include the kernel modules that are necessary for mounting the root file system, otherwise the installed system will not be able to boot. (If this happens, boot from the installation media again, mount the target file system on /mnt, fix /mnt/etc/nixos/configuration.nix and rerun nixos-install.) In most cases, nixos-generate-config will figure out the required modules.
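As a sketch, such an addition could look like the following; the module name is purely illustrative and depends on your hardware and file system setup:
{
  # Hypothetical example: a module needed to assemble the root file system at boot
  boot.initrd.kernelModules = [ "dm-snapshot" ];
}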
Do the installation:
# nixos-install
This will install your system based on the configuration you provided. If anything fails due to a configuration problem or any other issue (such as a network outage while downloading binaries from the NixOS binary cache), you can re-run nixos-install after fixing your configuration.nix.
If you opted for a flake-based configuration, you will need to pass the --flake option here as well and specify the name of the configuration as used in the flake.nix file. For the default generated flake, this is nixos.
# nixos-install --flake 'path/to/flake.nix#nixos'
As the last step, nixos-install will ask you to set the password for the root user, e.g.
setting root password...
New password: ***
Retype new password: ***
If you have a user account declared in your configuration.nix and plan to log in using this user, set a password before rebooting, e.g. for the alice user:
# nixos-enter --root /mnt -c 'passwd alice'
For unattended installations, it is possible to use nixos-install --no-root-passwd in order to disable the password prompt entirely.
If everything went well:
# reboot
You should now be able to boot into the installed NixOS. The GRUB boot menu shows a list of available configurations (initially just one). Every time you change the NixOS configuration (see Changing Configuration), a new item is added to the menu. This allows you to easily roll back to a previous configuration if something goes wrong.
Use your declared user account to log in. If you didn’t declare one, you should still be able to log in using the root user.
Some graphical display managers such as SDDM do not allow root login by default, so you might need to switch to a TTY. Refer to User Management for details on declaring user accounts.
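For reference, a declared user account in configuration.nix could look roughly like this; the user name and group membership are illustrative:
{
  users.users.alice = {
    isNormalUser = true;
    extraGroups = [ "wheel" ]; # allows the use of sudo
  };
}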
You may also want to install some software. This will be covered in Package Management.
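As a small preview of the declarative style covered there, a couple of system-wide packages could be added to configuration.nix like this; the package choice is only an example:
{
  environment.systemPackages = [
    pkgs.git
    pkgs.vim
  ];
}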
To summarise, Example: Commands for Installing NixOS on /dev/sda shows a typical sequence of commands for installing NixOS on an empty hard drive (here /dev/sda). Example: NixOS Configuration shows a corresponding configuration Nix expression.
/dev/sda (MBR):
# parted /dev/sda -- mklabel msdos
# parted /dev/sda -- mkpart primary 1MB -8GB
# parted /dev/sda -- mkpart primary linux-swap -8GB 100%
/dev/sda (UEFI):
# parted /dev/sda -- mklabel gpt
# parted /dev/sda -- mkpart root ext4 512MB -8GB
# parted /dev/sda -- mkpart swap linux-swap -8GB 100%
# parted /dev/sda -- mkpart ESP fat32 1MB 512MB
# parted /dev/sda -- set 3 esp on
/dev/sda, with a partitioned disk:
# mkfs.ext4 -L nixos /dev/sda1
# mkswap -L swap /dev/sda2
# swapon /dev/sda2
# mkfs.fat -F 32 -n boot /dev/sda3   # (for UEFI systems only)
# mount /dev/disk/by-label/nixos /mnt
# mkdir -p /mnt/boot                 # (for UEFI systems only)
# mount -o umask=077 /dev/disk/by-label/boot /mnt/boot   # (for UEFI systems only)
# nixos-generate-config --root /mnt
# nano /mnt/etc/nixos/configuration.nix
# nixos-install
# reboot
Example: NixOS Configuration
{ config, pkgs, ... }:
{
  imports = [
    # Include the results of the hardware scan.
    ./hardware-configuration.nix
  ];
  boot.loader.grub.device = "/dev/sda"; # (for BIOS systems only)
  boot.loader.systemd-boot.enable = true; # (for UEFI systems only)
  # Note: setting fileSystems is generally not
  # necessary, since nixos-generate-config figures them out
  # automatically in hardware-configuration.nix.
  #fileSystems."/".device = "/dev/disk/by-label/nixos";
  # Enable the OpenSSH server.
  services.sshd.enable = true;
}
The image has to be written verbatim to the USB flash drive for it to be bootable on UEFI and BIOS systems. Here are the recommended tools to do that.
Etcher is a popular and user-friendly tool. It works on Linux, Windows and macOS.
Download it from balena.io, start the program, select the downloaded NixOS ISO, then select the USB flash drive and flash it.
Etcher reports errors and usage statistics by default, which can be disabled in the settings.
An alternative is USBImager, which is very simple and does not connect to the internet. Download the version with the write-only (wo) interface for your system. Start the program, select the image, select the USB flash drive and click “Write”.
Plug in the USB flash drive.
Find the corresponding device with lsblk. You can distinguish them by their size.
Make sure all partitions on the device are properly unmounted. Replace sdX with your device (e.g. sdb).
sudo umount /dev/sdX*
Then use the dd utility to write the image to the USB flash drive.
sudo dd bs=4M conv=fsync oflag=direct status=progress if=<path-to-image> of=/dev/sdX
Plug in the USB flash drive.
Find the corresponding device with diskutil list. You can distinguish them by their size.
Make sure all partitions on the device are properly unmounted. Replace diskX with your device (e.g. disk1).
diskutil unmountDisk diskX
Then use the dd utility to write the image to the USB flash drive.
sudo dd if=<path-to-image> of=/dev/rdiskX bs=4m
After dd completes, a GUI dialog “The disk you inserted was not readable by this computer” will pop up, which can be ignored.
Using the ‘raw’ rdiskX device instead of diskX with dd completes in minutes instead of hours.
Eject the disk when it is finished.
diskutil eject /dev/diskX
Advanced users may wish to install NixOS using an existing PXE or iPXE setup.
These instructions assume that you have an existing PXE or iPXE infrastructure and want to add the NixOS installer as another option. To build the necessary files from your current version of nixpkgs, you can run:
nix-build -A netboot.x86_64-linux '<nixpkgs/nixos/release.nix>'
This will create a result directory containing:
bzImage – the Linux kernel
initrd – the initrd file
netboot.ipxe – an example ipxe script demonstrating the appropriate kernel command line arguments for this image
If you’re using plain PXE, configure your boot loader to use the bzImage and initrd files and have it provide the same kernel command line arguments found in netboot.ipxe.
If you’re using iPXE, depending on how your HTTP/FTP/etc. server is configured you may be able to use netboot.ipxe unmodified, or you may need to update the paths to the files to match your server’s directory layout.
In the future we may begin making these files available as build products from hydra, at which point we will update this documentation with instructions on how to obtain them either for placing on a dedicated TFTP server or to boot them directly over the internet.
In some cases, your system might already be booted into/preinstalled with another Linux distribution, and booting NixOS by attaching an installation image is quite a manual process.
This is particularly useful for (cloud) providers where you can’t boot a custom image, but get some Debian or Ubuntu installation.
In these cases, it might be easier to use kexec to “jump into NixOS” from the running system, which only assumes bash and kexec to be installed on the machine.
Note that kexec may not work correctly on some hardware, as devices are not fully re-initialized in the process. In practice, however, this is rarely the case.
To build the necessary files from your current version of nixpkgs, you can run:
nix-build -A kexec.x86_64-linux '<nixpkgs/nixos/release.nix>'
This will create a result directory containing the following:
bzImage (the Linux kernel)
initrd (the initrd file)
kexec-boot (a shell script invoking kexec)
These three files are meant to be copied over to the other, already running Linux distribution.
Note that the result directory consists of symlinks pointing elsewhere, so cd in and use scp * root@$destination to copy it over, rather than rsync.
Once you have finished copying, execute kexec-boot on the destination, and after some seconds, the machine should be booting into an (ephemeral) NixOS installation medium.
In case you want to describe your own system closure to kexec into, instead of the default installer image, you can build your own configuration.nix:
{ modulesPath, ... }:
{
  imports = [ (modulesPath + "/installer/netboot/netboot-minimal.nix") ];
  services.openssh.enable = true;
  users.users.root.openssh.authorizedKeys.keys = [ "my-ssh-pubkey" ];
}
nix-build '<nixpkgs/nixos>' \
  --arg configuration ./configuration.nix --attr config.system.build.kexecTree
Make sure your configuration.nix does still import netboot-minimal.nix (or netboot-base.nix).
Installing NixOS into a VirtualBox guest is convenient for users who want to try NixOS without installing it on bare metal. If you want to set up a VirtualBox guest, follow these instructions:
Add a New Machine in VirtualBox with OS Type “Linux / Other Linux”
Base Memory Size: 768 MB or higher.
New Hard Disk of 10 GB or higher.
Mount the CD-ROM with the NixOS ISO (by clicking on CD/DVD-ROM)
Click on Settings / System / Processor and enable PAE/NX
Click on Settings / System / Acceleration and enable “VT-x/AMD-V” acceleration
Click on Settings / Display / Screen and select VMSVGA as Graphics Controller
Save the settings, start the virtual machine, and continue installation like normal
There are a few modifications you should make in configuration.nix. Enable booting:
{ boot.loader.grub.device = "/dev/sda"; }
Also remove the fsck that runs at startup. It will always fail to run, stopping your boot until you press *.
{ boot.initrd.checkJournalingFS = false; }
Shared folders can be given a name and a path in the host system in the VirtualBox settings (Machine / Settings / Shared Folders, then click on the “Add” icon). Add the following to /etc/nixos/configuration.nix to auto-mount them. If you do not add "nofail", the system will not boot properly.
{ config, pkgs, ... }:
{
  fileSystems."/virtualboxshare" = {
    fsType = "vboxsf";
    device = "nameofthesharedfolder";
    options = [ "rw" "nofail" ];
  };
}
The folder will be available directly under the root directory.
Because Nix (the package manager) & Nixpkgs (the Nix packages collection) can both be installed on any (most?) Linux distributions, they can be used to install NixOS in various creative ways. You can, for instance:
Install NixOS on another partition, from your existing Linux distribution (without the use of a USB or optical device!)
Install NixOS on the same partition (in place!), from your existing non-NixOS Linux distribution using NIXOS_LUSTRATE.
Install NixOS on your hard drive from the Live CD of any Linux distribution.
The first steps to all these are the same:
Install the Nix package manager:
Short version:
$ curl -L https://nixos.org/nix/install | sh
$ . $HOME/.nix-profile/etc/profile.d/nix.sh # …or open a fresh shell
More details in the Nix manual.
Switch to the NixOS channel:
If you’ve just installed Nix on a non-NixOS distribution, you will be on the nixpkgs channel by default.
$ nix-channel --list
nixpkgs https://nixos.org/channels/nixpkgs-unstable
As that channel gets released without running the NixOS tests, it will be safer to use the nixos-* channels instead:
$ nix-channel --add https://nixos.org/channels/nixos-<version> nixpkgs
Where <version> corresponds to the latest version available on channels.nixos.org.
You may want to throw in a nix-channel --update for good measure.
Install the NixOS installation tools:
You’ll need nixos-generate-config and nixos-install, but this also makes some man pages and nixos-enter available, just in case you want to chroot into your NixOS partition. NixOS installs these by default, but you don’t have NixOS yet…
$ nix-env -f '<nixpkgs>' -iA nixos-install-tools
The following 5 steps are only for installing NixOS to another partition. For installing NixOS in place using NIXOS_LUSTRATE, skip ahead.
Prepare your target partition:
At this point it is time to prepare your target partition. Please refer to the partitioning, file-system creation, and mounting steps of Installing NixOS.
If you’re about to install NixOS in place using NIXOS_LUSTRATE, there is nothing to do for this step.
Generate your NixOS configuration:
$ sudo `which nixos-generate-config` --root /mnt
You’ll probably want to edit the configuration files. Refer to the nixos-generate-config step in Installing NixOS for more information.
Consider setting up the NixOS bootloader to give you the ability to boot on your existing Linux partition. For instance, if you’re using GRUB and your existing distribution is running Ubuntu, you may want to add something like this to your configuration.nix:
{
  boot.loader.grub.extraEntries = ''
    menuentry "Ubuntu" {
      search --set=ubuntu --fs-uuid 3cc3e652-0c1f-4800-8451-033754f68e6e
      configfile "($ubuntu)/boot/grub/grub.cfg"
    }
  '';
}
(You can find the appropriate UUID for your partition in /dev/disk/by-uuid)
Create the nixbld group and user on your original distribution:
$ sudo groupadd -g 30000 nixbld
$ sudo useradd -u 30000 -g nixbld -G nixbld nixbld
Download/build/install NixOS:
Once you complete this step, you might no longer be able to boot on existing systems without the help of a rescue USB drive or similar.
On some distributions there are separate PATHS for programs intended only for root. In order for the installation to succeed, you might have to use PATH="$PATH:/usr/sbin:/sbin" in the following command.
$ sudo PATH="$PATH" `which nixos-install` --root /mnt
Again, please refer to the nixos-install step in Installing NixOS for more information.
That should be it for installation to another partition!
Optionally, you may want to clean up your non-NixOS distribution:
$ sudo userdel nixbld
$ sudo groupdel nixbld
If you do not wish to keep the Nix package manager installed either, run something like sudo rm -rv ~/.nix-* /nix and remove the line that the Nix installer added to your ~/.profile.
The following steps are only for installing NixOS in place using NIXOS_LUSTRATE:
Generate your NixOS configuration:
$ sudo `which nixos-generate-config`
Note that this will place the generated configuration files in /etc/nixos. You’ll probably want to edit the configuration files. Refer to the nixos-generate-config step in Installing NixOS for more information.
On UEFI systems, check that your /etc/nixos/hardware-configuration.nix did the right thing with the EFI System Partition. In NixOS, by default, both systemd-boot and grub expect it to be mounted on /boot. However, the configuration generator bases its fileSystems configuration on the current mount points at the time it is run. If the current system and NixOS’s bootloader configuration don’t agree on where the EFI System Partition is to be mounted, you’ll need to manually alter the mount point in hardware-configuration.nix before building the system closure.
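For example, if NixOS should mount the EFI System Partition on /boot, the corresponding entry in hardware-configuration.nix could be adjusted roughly as follows; the UUID is illustrative only:
{
  fileSystems."/boot" = {
    device = "/dev/disk/by-uuid/1234-ABCD"; # illustrative ESP UUID
    fsType = "vfat";
  };
}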
The lustrate process will not work if the boot.initrd.systemd.enable option is set to true. If you want to use this option, wait until after the first boot into the NixOS system to enable it and rebuild.
You’ll likely want to set a root password for your first boot using the configuration files because you won’t have a chance to enter a password until after you reboot. You can initialize the root password to an empty one with the following line (and of course don’t forget to set one once you’ve rebooted, or to lock the account with sudo passwd -l root if you use sudo):
{ users.users.root.initialHashedPassword = ""; }
Build the NixOS closure and install it in the system profile:
$ nix-env -p /nix/var/nix/profiles/system -f '<nixpkgs/nixos>' -I nixos-config=/etc/nixos/configuration.nix -iA system
Change ownership of the /nix tree to root (since your Nix install was probably single user):
$ sudo chown -R 0:0 /nix
Set up the /etc/NIXOS and /etc/NIXOS_LUSTRATE files:
/etc/NIXOS officializes that this is now a NixOS partition (the bootup scripts require its presence).
/etc/NIXOS_LUSTRATE tells the NixOS bootup scripts to move everything that’s in the root partition to /old-root. This will move your existing distribution out of the way in the very early stages of the NixOS bootup. There are exceptions (we do need to keep NixOS there after all), so the NixOS lustrate process will not touch:
The /nix directory
The /boot directory
Any file or directory listed in /etc/NIXOS_LUSTRATE (one per line)
The act of “lustrating” refers to the wiping of the existing distribution. Creating /etc/NIXOS_LUSTRATE can also be used on NixOS to remove all mutable files from your root partition (anything that’s not in /nix or /boot gets “lustrated” on the next boot).
lustrate /ˈlʌstreɪt/ verb.
purify by expiatory sacrifice, ceremonial washing, or some other ritual action.
Let’s create the files:
$ sudo touch /etc/NIXOS
$ sudo touch /etc/NIXOS_LUSTRATE
Let’s also make sure the NixOS configuration files are kept once we reboot on NixOS:
$ echo etc/nixos | sudo tee -a /etc/NIXOS_LUSTRATE
Finally, install NixOS’s boot system, backing up the current boot system’s files in the process.
The details of this step can vary depending on the bootloader configuration in NixOS and the bootloader in use by the current system.
The commands below should work for:
BIOS systems.
UEFI systems where both the current system and NixOS mount the EFI System Partition on /boot. Both systemd-boot and grub expect this by default in NixOS, but other distributions vary.
Once you complete this step, your current distribution will no longer be bootable! If you didn’t get all the NixOS configuration right, especially those settings pertaining to boot loading and the root partition, NixOS may not be bootable either. Have a USB rescue device ready in case this happens.
On UEFI systems, anything on the EFI System Partition will be removed by these commands, such as other coexisting OS’s bootloaders.
$ sudo mkdir /boot.bak && sudo mv /boot/* /boot.bak &&
sudo NIXOS_INSTALL_BOOTLOADER=1 /nix/var/nix/profiles/system/bin/switch-to-configuration boot
Cross your fingers, reboot, and hopefully you should get a NixOS prompt!
In other cases, most commonly where the EFI System Partition of the current system is instead mounted on /boot/efi, the goal is to:
Make sure /boot (and the EFI System Partition, if mounted elsewhere) are mounted how the NixOS configuration would mount them.
Clear them of files related to the current system, backing them up outside of /boot. NixOS will move the backups into /old-root along with everything else when it first boots.
Instruct the NixOS closure built earlier to install its bootloader with:
sudo NIXOS_INSTALL_BOOTLOADER=1 /nix/var/nix/profiles/system/bin/switch-to-configuration boot
If for some reason you want to revert to the old distribution, you’ll need to boot on a USB rescue disk and do something along these lines:
# mkdir root
# mount /dev/sdaX root
# mkdir root/nixos-root
# mv -v root/* root/nixos-root/
# mv -v root/nixos-root/old-root/* root/
# mv -v root/boot.bak root/boot # We had renamed this by hand earlier
# umount root
# reboot
This may work as is or you might also need to reinstall the bootloader.
And of course, if you’re happy with NixOS and no longer need the old distribution:
sudo rm -rf /old-root
It’s also worth noting that this whole process can be automated. This is especially useful for cloud VMs, where providers do not provide NixOS. For instance, nixos-infect uses the lustrate process to convert Digital Ocean droplets to NixOS from other distributions automatically.
To install NixOS behind a proxy, do the following before running nixos-install.
Update the proxy configuration in /mnt/etc/nixos/configuration.nix to keep the internet accessible after reboot.
{
  networking.proxy.default = "http://user:password@proxy:port/";
  networking.proxy.noProxy = "127.0.0.1,localhost,internal.domain";
}
Set up the proxy environment variables in the shell where you are running nixos-install.
# proxy_url="http://user:password@proxy:port/"
# export http_proxy="$proxy_url"
# export HTTP_PROXY="$proxy_url"
# export https_proxy="$proxy_url"
# export HTTPS_PROXY="$proxy_url"
If you are switching networks with different proxy configurations, use the specialisation option in configuration.nix to switch proxies at runtime. Refer to Appendix A for more information.
The file /etc/nixos/configuration.nix contains the current configuration of your machine. Whenever you’ve changed something in that file, you should do
# nixos-rebuild switch
to build the new configuration, make it the default configuration for booting, and try to realise the configuration in the running system (e.g., by restarting system services).
This command doesn’t start/stop user services automatically. nixos-rebuild only runs a daemon-reload for each user with running user services.
These commands must be executed as root, so you should either run them from a root shell or prefix them with sudo -i.
You can also do
# nixos-rebuild test
to build the configuration and switch the running system to it, but without making it the boot default. So if (say) the configuration locks up your machine, you can just reboot to get back to a working configuration.
There is also
# nixos-rebuild boot
to build the configuration and make it the boot default, but not switch to it now (so it will only take effect after the next reboot).
You can make your configuration show up in a different submenu of the GRUB 2 boot screen by giving it a different profile name, e.g.
# nixos-rebuild switch -p test
which causes the new configuration (and previous ones created using -p test) to show up in the GRUB submenu “NixOS - Profile ‘test’”. This can be useful to separate test configurations from “stable” configurations.
A repl, or read-eval-print loop, is also available. You can inspect your configuration and use the Nix language with
# nixos-rebuild repl
Your configuration is loaded into the config variable. Use tab for autocompletion, use the :r command to reload the configuration files. See :? or nix repl in the Nix manual to learn more.
Finally, you can do
$ nixos-rebuild build
to build the configuration but nothing more. This is useful to see whether everything compiles cleanly.
If you have a machine that supports hardware virtualisation, you can also test the new configuration in a sandbox by building and running a QEMU virtual machine that contains the desired configuration. Just do
$ nixos-rebuild build-vm
$ ./result/bin/run-*-vm
The VM does not have any data from your host system, so your existing user accounts and home directories will not be available unless you have set mutableUsers = false. Another way is to temporarily add the following to your configuration:
{ users.users.your-user.initialHashedPassword = "test"; }
Important: delete the $hostname.qcow2 file if you have started the virtual machine at least once without the right users, otherwise the changes will not get picked up. You can forward ports on the host to the guest. For instance, the following will forward host port 2222 to guest port 22 (SSH):
$ QEMU_NET_OPTS="hostfwd=tcp:127.0.0.1:2222-:22" ./result/bin/run-*-vm
allowing you to log in via SSH (assuming you have set the appropriate passwords or SSH authorized keys):
$ ssh -p 2222 localhost
Such port forwardings connect via the VM’s virtual network interface. Thus they cannot connect to ports that are only bound to the VM’s loopback interface (127.0.0.1), and the VM’s NixOS firewall must be configured to allow these connections.
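If SSH access to the VM is what you are after, the VM's configuration also needs an SSH daemon and an open firewall port. A minimal sketch of what could be added to the configuration used for build-vm:
{
  services.openssh.enable = true;
  networking.firewall.allowedTCPPorts = [ 22 ];
}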
The best way to keep your NixOS installation up to date is to use one of the NixOS channels. A channel is a Nix mechanism for distributing Nix expressions and associated binaries. The NixOS channels are updated automatically from NixOS’s Git repository after certain tests have passed and all packages have been built. These channels are:
Stable channels, such as nixos-25.05. These only get conservative bug fixes and package upgrades. For instance, a channel update may cause the Linux kernel on your system to be upgraded from 4.19.34 to 4.19.38 (a minor bug fix), but not from 4.19.x to 4.20.x (a major change that has the potential to break things). Stable channels are generally maintained until the next stable branch is created.
The unstable channel, nixos-unstable. This corresponds to NixOS’s main development branch, and may thus see radical changes between channel updates. It’s not recommended for production systems.
Small channels, such as nixos-25.05-small or nixos-unstable-small. These are identical to the stable and unstable channels described above, except that they contain fewer binary packages. This means they get updated faster than the regular channels (for instance, when a critical security patch is committed to NixOS’s source tree), but may require more packages to be built from source than usual. They’re mostly intended for server environments and as such contain few GUI applications.
To see what channels are available, go to https://channels.nixos.org. (Note that the URIs of the various channels redirect to a directory that contains the channel’s latest version and includes ISO images and VirtualBox appliances.) Please note that during the release process, channels that are not yet released will be present here as well. See the Getting NixOS page https://nixos.org/download/ to find the newest supported stable release.
When you first install NixOS, you’re automatically subscribed to the NixOS channel that corresponds to your installation source. For instance, if you installed from a 25.05 ISO, you will be subscribed to the nixos-25.05 channel. To see which NixOS channel you’re subscribed to, run the following as root:
# nix-channel --list | grep nixos
nixos https://channels.nixos.org/nixos-unstable
To switch to a different NixOS channel, do
# nix-channel --add https://channels.nixos.org/channel-name nixos
(Be sure to include the nixos parameter at the end.) For instance, to use the NixOS 25.05 stable channel:
# nix-channel --add https://channels.nixos.org/nixos-25.05 nixos
If you have a server, you may want to use the “small” channel instead:
# nix-channel --add https://channels.nixos.org/nixos-25.05-small nixos
And if you want to live on the bleeding edge:
# nix-channel --add https://channels.nixos.org/nixos-unstable nixos
You can then upgrade NixOS to the latest version in your chosen channel by running
# nixos-rebuild switch --upgrade
which is equivalent to the more verbose nix-channel --update nixos; nixos-rebuild switch.
Channels are set per user. This means that running nix-channel --add as a non-root user (or without sudo) will not affect the configuration in /etc/nixos/configuration.nix.
It is generally safe to switch back and forth between channels. The only exception is that a newer NixOS may also have a newer Nix version, which may involve an upgrade of Nix’s database schema. This cannot be undone easily, so in that case you will not be able to go back to your original channel.
You can keep a NixOS system up-to-date automatically by adding the following to configuration.nix:
{
  system.autoUpgrade.enable = true;
  system.autoUpgrade.allowReboot = true;
}
This enables a periodically executed systemd service named nixos-upgrade.service. If the allowReboot option is false, it runs nixos-rebuild switch --upgrade to upgrade NixOS to the latest version in the current channel. (To see when the service runs, see systemctl list-timers.) If allowReboot is true, then the system will automatically reboot if the new generation contains a different kernel, initrd or kernel modules. You can also specify a channel explicitly, e.g.
{ system.autoUpgrade.channel = "https://channels.nixos.org/nixos-25.05"; }
Default live installer configurations are available inside nixos/modules/installer/cd-dvd. For building other system images, see Building Images with nixos-rebuild build-image.
You have two options:
Use any of those default configurations as is
Combine them with (any of) your host config(s)
System images, such as the live installer ones, know how to enforce configuration settings on which they immediately depend in order to work correctly.
However, if you are confident, you can opt to override those enforced values with mkForce.
To build an ISO image for the channel nixos-unstable:
$ git clone https://github.com/NixOS/nixpkgs.git
$ cd nixpkgs/nixos
$ git switch nixos-unstable
$ nix-build -A config.system.build.isoImage -I nixos-config=modules/installer/cd-dvd/installation-cd-minimal.nix default.nix
To check the content of an ISO image, mount it like so:
# mount -o loop -t iso9660 ./result/iso/nixos-image-25.05pre-git-x86_64-linux.iso /mnt/iso
If you need additional (non-distributable) drivers or firmware in the installer, you might want to extend these configurations.
For example, to build the GNOME graphical installer ISO, but with support for certain WiFi adapters present in some MacBooks, you can create the following file at modules/installer/cd-dvd/installation-cd-graphical-gnome-macbook.nix:
{ config, ... }:
{
  imports = [ ./installation-cd-graphical-gnome.nix ];
  boot.initrd.kernelModules = [ "wl" ];
  boot.kernelModules = [ "kvm-intel" "wl" ];
  boot.extraModulePackages = [ config.boot.kernelPackages.broadcom_sta ];
}
Then build it like in the example above:
$ git clone https://github.com/NixOS/nixpkgs.git
$ cd nixpkgs/nixos
$ export NIXPKGS_ALLOW_UNFREE=1
$ nix-build -A config.system.build.isoImage -I nixos-config=modules/installer/cd-dvd/installation-cd-graphical-gnome-macbook.nix default.nix
The config value enforcement is implemented via mkImageMediaOverride = mkOverride 60; and therefore takes precedence over simple value assignments, but also yields to mkForce.
This property allows image designers to implement in semantically correct ways those configuration values upon which the correct functioning of the image depends.
For example, the iso base image overrides those file systems which it needs at a minimum for correct functioning, while the installer base image overrides the entire file system layout because there can’t be any other guarantees on a live medium than those given by the live medium itself. The latter is especially true before formatting the target block device(s). On the other hand, the netboot iso only overrides its minimum dependencies since netboot images are always made-to-target.
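As a sketch of this priority interplay: a plain assignment (priority 100) yields to a value pinned by the image profile via mkImageMediaOverride (priority 60), while mkForce (priority 50) wins over it. The device path below is purely illustrative:
{ lib, ... }:
{
  # Overrides the root file system pinned by the ISO profile; a plain
  # assignment here would lose to the image's mkImageMediaOverride value.
  fileSystems."/".device = lib.mkForce "/dev/disk/by-label/my-root";
}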
Nixpkgs contains a variety of modules to build custom images for different virtualization platforms and cloud providers, such as amazon-image.nix and proxmox-lxc.nix.
While those can be imported directly, system.build.images provides an attribute set mapping variant names to image derivations. Available variants are defined (and extendable) in image.modules, an attribute set mapping variant names to NixOS modules.
All of those images can be built via both their system.build.image attribute and the nixos-rebuild build-image command.
For example, to build an Amazon image from your existing NixOS configuration, run:
$ nixos-rebuild build-image --image-variant amazon
[...]
Done. The disk image can be found in /nix/store/[hash]-nixos-image-amazon-25.05pre-git-x86_64-linux/nixos-image-amazon-25.05pre-git-x86_64-linux.vpc
To get a list of all variants available, run nixos-rebuild build-image without arguments.
The image.modules option can be used to set specific options per image variant, in a similar fashion as specialisations for generic NixOS configurations.
E.g. images for the cloud provider Linode use grub2 as a bootloader by default. If you are using systemd-boot on other platforms and want to disable it for Linode only, you could use the following options:
{
  image.modules.linode = {
    boot.loader.systemd-boot.enable = lib.mkForce false;
  };
}
You can build disk images in NixOS with the image.repart option provided by the module image/repart.nix. This module uses systemd-repart to build the images and exposes its entire interface via the repartConfig option.
An example of how to build an image:
{ config, modulesPath, ... }:
{
  imports = [ "${modulesPath}/image/repart.nix" ];

  image.repart = {
    name = "image";
    partitions = {
      "esp" = {
        contents = {
          # ...
        };
        repartConfig = {
          Type = "esp";
          # ...
        };
      };
      "root" = {
        storePaths = [ config.system.build.toplevel ];
        repartConfig = {
          Type = "root";
          Label = "nixos";
          # ...
        };
      };
    };
  };
}
You can define a partition that only contains the Nix store and then mount it under /nix/store. Because the /nix/store part of the paths is already determined by the mount point, you have to set stripNixStorePrefix = true; so that the prefix is stripped from the paths before copying them into the image.
{
  fileSystems."/nix/store".device = "/dev/disk/by-partlabel/nix-store";

  image.repart.partitions = {
    "store" = {
      storePaths = [ config.system.build.toplevel ];
      stripNixStorePrefix = true;
      repartConfig = {
        Type = "linux-generic";
        Label = "nix-store";
        # ...
      };
    };
  };
}
The image/repart.nix module can also be used to build self-contained software appliances.
The generation-based update mechanism of NixOS is not suited for appliances. Updates of appliances are usually either performed by replacing the entire image with a new one or by updating partitions via an A/B scheme. See the Chrome OS update process for an example of how to achieve this. The appliance image built in the following example does not contain a configuration.nix and thus you will not be able to call nixos-rebuild from this system. Furthermore, it uses a Unified Kernel Image.
let
  pkgs = import <nixpkgs> { };
  efiArch = pkgs.stdenv.hostPlatform.efiArch;
in
(pkgs.nixos [
  (
    { config, lib, pkgs, modulesPath, ... }:
    {
      imports = [ "${modulesPath}/image/repart.nix" ];

      boot.loader.grub.enable = false;

      fileSystems."/".device = "/dev/disk/by-label/nixos";

      image.repart = {
        name = "image";
        partitions = {
          "esp" = {
            contents = {
              "/EFI/BOOT/BOOT${lib.toUpper efiArch}.EFI".source = "${pkgs.systemd}/lib/systemd/boot/efi/systemd-boot${efiArch}.efi";
              "/EFI/Linux/${config.system.boot.loader.ukiFile}".source = "${config.system.build.uki}/${config.system.boot.loader.ukiFile}";
            };
            repartConfig = {
              Type = "esp";
              Format = "vfat";
              SizeMinBytes = "96M";
            };
          };
          "root" = {
            storePaths = [ config.system.build.toplevel ];
            repartConfig = {
              Type = "root";
              Format = "ext4";
              Label = "nixos";
              Minimize = "guess";
            };
          };
        };
      };
    }
  )
]).image
This chapter describes how to configure various aspects of a NixOS machine through the configuration file /etc/nixos/configuration.nix. As described in Changing the Configuration, changes to this file only take effect after you run nixos-rebuild.
The NixOS configuration file /etc/nixos/configuration.nix is actually a Nix expression, which is the Nix package manager’s purely functional language for describing how to build packages and configurations. This means you have all the expressive power of that language at your disposal, including the ability to abstract over common patterns, which is very useful when managing complex systems. The syntax and semantics of the Nix language are fully described in the Nix manual, but here we give a short overview of the most important constructs useful in NixOS configuration files.
The NixOS configuration file generally looks like this:
{ config, pkgs, ... }:
{
  # option definitions
}
The first line ({ config, pkgs, ... }:) denotes that this is actually a function that takes at least the two arguments config and pkgs. (These are explained later, in chapter Writing NixOS Modules.) The function returns a set of option definitions ({ ... }). These definitions have the form name = value, where name is the name of an option and value is its value. For example,
{ config, pkgs, ... }:
{
  services.httpd.enable = true;
  services.httpd.adminAddr = "alice@example.org";
  services.httpd.virtualHosts.localhost.documentRoot = "/webroot";
}
defines a configuration with three option definitions that together enable the Apache HTTP Server with /webroot as the document root.
Sets can be nested, and in fact dots in option names are shorthand for defining a set containing another set. For instance, services.httpd.enable defines a set named services that contains a set named httpd, which in turn contains an option definition named enable with value true. This means that the example above can also be written as:
{ config, pkgs, ... }:
{
  services = {
    httpd = {
      enable = true;
      adminAddr = "alice@example.org";
      virtualHosts = {
        localhost = {
          documentRoot = "/webroot";
        };
      };
    };
  };
}
which may be more convenient if you have lots of option definitions that share the same prefix (such as services.httpd).
NixOS checks your option definitions for correctness. For instance, if you try to define an option that doesn’t exist (that is, doesn’t have a corresponding option declaration), nixos-rebuild will give an error like:
The option `services.httpd.enable' defined in `/etc/nixos/configuration.nix' does not exist.
Likewise, values in option definitions must have a correct type. For instance, services.httpd.enable must be a Boolean (true or false). Trying to give it a value of another type, such as a string, will cause an error:
The option value `services.httpd.enable' in `/etc/nixos/configuration.nix' is not a boolean.
Options have various types of values. The most important are:
Strings are enclosed in double quotes, e.g.
{ networking.hostName = "dexter";}Special characters can be escaped by prefixing them with a backslash(e.g.\").
Multi-line strings can be enclosed in double single quotes, e.g.
{
  networking.extraHosts = ''
    127.0.0.2 other-localhost
    10.0.0.1 server
  '';
}
The main difference is that it strips from each line a number of spaces equal to the minimal indentation of the string as a whole (disregarding the indentation of empty lines), and that characters like " and \ are not special (making it more convenient for including things like shell code). See more info about this in the Nix manual here.
These can be true or false, e.g.
{
  networking.firewall.enable = true;
  networking.firewall.allowPing = false;
}
For example,
{
  boot.kernel.sysctl."net.ipv4.tcp_keepalive_time" = 60;
}
(Note that here the attribute name net.ipv4.tcp_keepalive_time is enclosed in quotes to prevent it from being interpreted as a set named net containing a set named ipv4, and so on. This is because it’s not a NixOS option but the literal name of a Linux kernel setting.)
Sets were introduced above. They are name/value pairs enclosed in braces, as in the option definition
{
  fileSystems."/boot" = {
    device = "/dev/sda1";
    fsType = "ext4";
    options = [ "rw" "data=ordered" "relatime" ];
  };
}
The important thing to note about lists is that list elements are separated by whitespace, like this:
{
  boot.kernelModules = [ "fuse" "kvm-intel" "coretemp" ];
}
List elements can be any other type, e.g. sets:
{
  swapDevices = [ { device = "/dev/disk/by-label/swap"; } ];
}
Usually, the packages you need are already part of the Nix Packages collection, which is a set that can be accessed through the function argument pkgs. Typical uses:
{
  environment.systemPackages = [
    pkgs.thunderbird
    pkgs.emacs
  ];
  services.postgresql.package = pkgs.postgresql_14;
}
The latter option definition changes the default PostgreSQL package used by NixOS’s PostgreSQL service to 14.x. For more information on packages, including how to add new ones, see the section called “Adding Custom Packages”.
If you find yourself repeating yourself over and over, it’s time to abstract. Take, for instance, this Apache HTTP Server configuration:
{
  services.httpd.virtualHosts = {
    "blog.example.org" = {
      documentRoot = "/webroot/blog.example.org";
      adminAddr = "alice@example.org";
      forceSSL = true;
      enableACME = true;
    };
    "wiki.example.org" = {
      documentRoot = "/webroot/wiki.example.org";
      adminAddr = "alice@example.org";
      forceSSL = true;
      enableACME = true;
    };
  };
}
It defines two virtual hosts with nearly identical configuration; the only difference is the document root directories. To prevent this duplication, we can use a let:
let
  commonConfig = {
    adminAddr = "alice@example.org";
    forceSSL = true;
    enableACME = true;
  };
in
{
  services.httpd.virtualHosts = {
    "blog.example.org" = (commonConfig // { documentRoot = "/webroot/blog.example.org"; });
    "wiki.example.org" = (commonConfig // { documentRoot = "/webroot/wiki.example.org"; });
  };
}
The let commonConfig = ... defines a variable named commonConfig. The // operator merges two attribute sets, so the configuration of the second virtual host is the set commonConfig extended with the document root option.
You can write a let wherever an expression is allowed. Thus, you also could have written:
{
  services.httpd.virtualHosts =
    let
      commonConfig = {
        # ...
      };
    in
    {
      "blog.example.org" = (
        commonConfig
        // {
          # ...
        }
      );
      "wiki.example.org" = (
        commonConfig
        // {
          # ...
        }
      );
    };
}
but not { let commonConfig = ...; in ...; } since attributes (as opposed to attribute values) are not expressions.
Functions provide another method of abstraction. For instance, suppose that we want to generate lots of different virtual hosts, all with identical configuration except for the document root. This can be done as follows:
{
  services.httpd.virtualHosts =
    let
      makeVirtualHost = webroot: {
        documentRoot = webroot;
        adminAddr = "alice@example.org";
        forceSSL = true;
        enableACME = true;
      };
    in
    {
      "example.org" = (makeVirtualHost "/webroot/example.org");
      "example.com" = (makeVirtualHost "/webroot/example.com");
      "example.gov" = (makeVirtualHost "/webroot/example.gov");
      "example.nl" = (makeVirtualHost "/webroot/example.nl");
    };
}
Here, makeVirtualHost is a function that takes a single argument webroot and returns the configuration for a virtual host. That function is then called for several names to produce the list of virtual host configurations.
The NixOS configuration mechanism is modular. If your configuration.nix becomes too big, you can split it into multiple files. Likewise, if you have multiple NixOS configurations (e.g. for different computers) with some commonality, you can move the common configuration into a shared file.
Modules have exactly the same syntax as configuration.nix. In fact, configuration.nix is itself a module. You can use other modules by including them from configuration.nix, e.g.:
{ config, pkgs, ... }:
{
  imports = [
    ./vpn.nix
    ./kde.nix
  ];
  services.httpd.enable = true;
  environment.systemPackages = [ pkgs.emacs ];
  # ...
}
Here, we include two modules from the same directory, vpn.nix and kde.nix. The latter might look like this:
{ config, pkgs, ... }:
{
  services.xserver.enable = true;
  services.displayManager.sddm.enable = true;
  services.xserver.desktopManager.plasma5.enable = true;
  environment.systemPackages = [ pkgs.vim ];
}
Note that both configuration.nix and kde.nix define the option environment.systemPackages. When multiple modules define an option, NixOS will try to merge the definitions. In the case of environment.systemPackages the lists of packages will be concatenated. The value in configuration.nix is merged last, so for list-type options, it will appear at the end of the merged list. If you want it to appear first, you can use mkBefore:
{ boot.kernelModules = mkBefore [ "kvm-intel" ]; }
This causes the kvm-intel kernel module to be loaded before any other kernel modules.
For other types of options, a merge may not be possible. For instance,if two modules defineservices.httpd.adminAddr,nixos-rebuild will give an error:
The unique option `services.httpd.adminAddr' is defined multiple times, in `/etc/nixos/httpd.nix' and `/etc/nixos/configuration.nix'. When that happens, it’s possible to force one definition to take precedence over the others:
{ services.httpd.adminAddr = pkgs.lib.mkForce "bob@example.org"; }When using multiple modules, you may need to access configuration valuesdefined in other modules. This is what theconfig function argument isfor: it contains the complete, merged system configuration. That is,config is the result of combining the configurations returned by everymodule. (If you’re wondering how it’s possible that the (indirect)resultof a function is passed as aninput to that same function: that’sbecause Nix is a “lazy” language — it only computes values whenthey are needed. This works as long as no individual configurationvalue depends on itself.)
For example, here is a module that adds some packages toenvironment.systemPackages only ifservices.xserver.enable is set totrue somewhere else:
{ config, pkgs, ... }:{ environment.systemPackages = if config.services.xserver.enable then [ pkgs.firefox pkgs.thunderbird ] else [ ];}With multiple modules, it may not be obvious what the final value of aconfiguration option is. The commandnixos-option allows you to findout:
$ nixos-option services.xserver.enabletrue$ nixos-option boot.kernelModules[ "tun" "ipv6" "loop" ... ]Interactive exploration of the configuration is possible usingnix repl, a read-eval-print loop for Nix expressions. A typical use:
$ nix repl '<nixpkgs/nixos>'nix-repl> config.networking.hostName"mandark"nix-repl> map (x: x.hostName) config.services.httpd.virtualHosts[ "example.org" "example.gov" ]While abstracting your configuration, you may find it useful to generatemodules using code, instead of writing files. The example below wouldhave the same effect as importing a file which sets those options.
{ config, pkgs, ... }:let netConfig = hostName: { networking.hostName = hostName; networking.useDHCP = false; };in{ imports = [ (netConfig "nixos.localdomain") ];}
This section describes how to add additional packages to your system.NixOS has two distinct styles of package management:
Declarative, where you declare what packages you want in yourconfiguration.nix. Every time you runnixos-rebuild, NixOS willensure that you get a consistent set of binaries corresponding toyour specification.
Ad hoc, where you install, upgrade and uninstall packages via thenix-env command. This style allows mixing packages from differentNixpkgs versions. It’s the only choice for non-root users.
With declarative package management, you specify which packages you wanton your system by setting the optionenvironment.systemPackages. For instance, adding thefollowing line toconfiguration.nix enables the Mozilla Thunderbirdemail application:
{ environment.systemPackages = [ pkgs.thunderbird ]; }The effect of this specification is that the Thunderbird package fromNixpkgs will be built or downloaded as part of the system when you runnixos-rebuild switch.
Some packages require additional global configuration such as D-Bus or systemd service registration, so adding them to environment.systemPackages might not be sufficient. You are advised to check the list of options to see whether a NixOS module exists for the package.
You can get a list of the available packages as follows:
$ nix-env -qaP '*' --descriptionnixos.firefox firefox-23.0 Mozilla Firefox - the browser, reloaded...The first column in the output is theattribute name, such asnixos.thunderbird.
Note: the nixos prefix tells us that we want to get the package from the nixos channel; it works only in CLI tools. In declarative configuration, use the pkgs prefix (a variable) instead.
To “uninstall” a package, remove it fromenvironment.systemPackages and runnixos-rebuild switch.
The Nixpkgs configuration for a NixOS system is set by thenixpkgs.config option.
{ nixpkgs.config = { allowUnfree = true; };}This only allows unfree software in the given NixOS configuration.For users invoking Nix commands such asnix-build, Nixpkgs is configured independently.See theNixpkgs manual section on global configuration for details.
Some packages in Nixpkgs have options to enable or disable optional functionality, or change other aspects of the package.
Unfortunately, Nixpkgs currently lacks a way to query available package configuration options.
For example, many packages come with extensions one might add. Examples include extensions for the pass password manager (such as pass-otp) and Python libraries that extend python3 (such as requests), both shown below.
You can use them like this:
{ environment.systemPackages = with pkgs; [ sl (pass.withExtensions ( subpkgs: with subpkgs; [ pass-audit pass-otp pass-genphrase ] )) (python3.withPackages (subpkgs: with subpkgs; [ requests ])) cowsay ];}Apart from high-level options, it’s possible to tweak a package inalmost arbitrary ways, such as changing or disabling dependencies of apackage. For instance, the Emacs package in Nixpkgs by default has adependency on GTK 2. If you want to build it against GTK 3, you canspecify that as follows:
{ environment.systemPackages = [ (pkgs.emacs.override { gtk = pkgs.gtk3; }) ]; }The functionoverride performs the call to the Nix function thatproduces Emacs, with the original arguments amended by the set ofarguments specified by you. So here the function argumentgtk gets thevaluepkgs.gtk3, causing Emacs to depend on GTK 3. (The parenthesesare necessary because in Nix, function application binds more weaklythan list construction, so without them,environment.systemPackageswould be a list with two elements.)
Even greater customisation is possible using the functionoverrideAttrs. While theoverride mechanism above overrides thearguments of a package function,overrideAttrs allows changing theattributes passed tomkDerivation. This permits changing any aspectof the package, such as the source code. For instance, if you want tooverride the source code of Emacs, you can say:
{ environment.systemPackages = [ (pkgs.emacs.overrideAttrs (oldAttrs: { name = "emacs-25.0-pre"; src = /path/to/my/emacs/tree; })) ];}Here,overrideAttrs takes the Nix derivation specified bypkgs.emacsand produces a new derivation in which the original’sname andsrcattribute have been replaced by the given values by re-callingstdenv.mkDerivation. The original attributes are accessible via thefunction argument, which is conventionally namedoldAttrs.
The overrides shown above are not global. They do not affect theoriginal package; other packages in Nixpkgs continue to depend on theoriginal rather than the customised package. This means that if anotherpackage in your system depends on the original package, you end up withtwo instances of the package. If you want to have everything depend onyour customised instance, you can apply aglobal override as follows:
{ nixpkgs.config.packageOverrides = pkgs: { emacs = pkgs.emacs.override { gtk = pkgs.gtk3; }; };}The effect of this definition is essentially equivalent to modifying theemacs attribute in the Nixpkgs source tree. Any package in Nixpkgsthat depends onemacs will be passed your customised instance.(However, the valuepkgs.emacs innixpkgs.config.packageOverridesrefers to the original rather than overridden instance, to prevent aninfinite recursion.)
It’s possible that a package you need is not available in NixOS. In thatcase, you can do two things. Either you can package it with Nix, or you can tryto use prebuilt packages from upstream. Due to the peculiarities of NixOS, itis important to note that building software from source is often easier thanusing pre-built executables.
This can be done either in-tree or out-of-tree. For an in-tree build, you canclone the Nixpkgs repository, add the package to your clone, and (optionally)submit a patch or pull request to have it accepted into the main Nixpkgsrepository. This is described in detail in theNixpkgsmanual. In short, you clone Nixpkgs:
$ git clone https://github.com/NixOS/nixpkgs$ cd nixpkgsThen you write and test the package as described in the Nixpkgs manual.Finally, you add it toenvironment.systemPackages, e.g.
{ environment.systemPackages = [ pkgs.my-package ]; }and you runnixos-rebuild, specifying your own Nixpkgs tree:
# nixos-rebuild switch -I nixpkgs=/path/to/my/nixpkgsThe second possibility is to add the package outside of the Nixpkgstree. For instance, here is how you specify a build of theGNU Hello package directly inconfiguration.nix:
{ environment.systemPackages = let my-hello = with pkgs; stdenv.mkDerivation rec { name = "hello-2.8"; src = fetchurl { url = "mirror://gnu/hello/${name}.tar.gz"; hash = "sha256-5rd/gffPfa761Kn1tl3myunD8TuM+66oy1O7XqVGDXM="; }; }; in [ my-hello ];}Of course, you can also move the definition ofmy-hello into aseparate Nix expression, e.g.
{ environment.systemPackages = [ (import ./my-hello.nix) ]; }wheremy-hello.nix contains:
with import <nixpkgs> { }; # bring all of Nixpkgs into scopestdenv.mkDerivation rec { name = "hello-2.8"; src = fetchurl { url = "mirror://gnu/hello/${name}.tar.gz"; hash = "sha256-5rd/gffPfa761Kn1tl3myunD8TuM+66oy1O7XqVGDXM="; };}This allows testing the package easily:
$ nix-build my-hello.nix$ ./result/bin/helloHello, world!Most pre-built executables will not work on NixOS. There are two notableexceptions: flatpaks and AppImages. For flatpaks see thededicatedsection. AppImages can run “as-is” on NixOS.
First you need to enable AppImage support: add to/etc/nixos/configuration.nix
{ programs.appimage.enable = true; programs.appimage.binfmt = true;}Then you can run the AppImage “as-is” or withappimage-run foo.appimage.
If there are shared libraries missing, add them with
{ programs.appimage.package = pkgs.appimage-run.override { extraPkgs = pkgs: [ # missing libraries here, e.g.: `pkgs.libepoxy` ]; };}To make other pre-built executables work on NixOS, you need to package themwith Nix and special helpers likeautoPatchelfHook orbuildFHSEnv. SeetheNixpkgs manual for details. Thisis complex and often doing a source build is easier.
With the commandnix-env, you can install and uninstall packages fromthe command line. For instance, to install Mozilla Thunderbird:
$ nix-env -iA nixos.thunderbirdIf you invoke this as root, the package is installed in the Nix profile/nix/var/nix/profiles/default and visible to all users of the system;otherwise, the package ends up in/nix/var/nix/profiles/per-user/username/profile and is not visible toother users. The-A flag specifies the package by its attribute name;without it, the package is installed by matching against its packagename (e.g.thunderbird). The latter is slower because it requiresmatching against all available Nix packages, and is ambiguous if thereare multiple matching packages.
Packages come from the NixOS channel. You typically upgrade a package byupdating to the latest version of the NixOS channel:
$ nix-channel --update nixosand then runningnix-env -i again. Other packages in the profile arenot affected; this is the crucial difference with the declarativestyle of package management, where runningnixos-rebuild switch causesall packages to be updated to their current versions in the NixOSchannel. You can however upgrade all packages for which there is a newerversion by doing:
$ nix-env -u '*'A package can be uninstalled using the-e flag:
$ nix-env -e thunderbirdFinally, you can roll back an undesirablenix-env action:
$ nix-env --rollbacknix-env has many more flags. For details, see the nix-env(1) manpage orthe Nix manual.
NixOS supports both declarative and imperative styles of usermanagement. In the declarative style, users are specified inconfiguration.nix. For instance, the following states that a useraccount namedalice shall exist:
{ users.users.alice = { isNormalUser = true; home = "/home/alice"; description = "Alice Foobar"; extraGroups = [ "wheel" "networkmanager" ]; openssh.authorizedKeys.keys = [ "ssh-dss AAAAB3Nza... alice@foobar" ]; };}Note thatalice is a member of thewheel andnetworkmanagergroups, which allows her to usesudo to execute commands asroot andto configure the network, respectively. Also note the SSH public keythat allows remote logins with the corresponding private key. Userscreated in this way do not have a password by default, so they cannotlog in via mechanisms that require a password. However, you can use thepasswd program to set a password, which is retained across invocationsofnixos-rebuild.
If you setusers.mutableUsers tofalse, then the contents of/etc/passwd and/etc/group will be congruentto your NixOS configuration. For instance, if you remove a user fromusers.users and run nixos-rebuild, the useraccount will cease to exist. Also, imperative commands for managing users andgroups, such as useradd, are no longer available. Passwords may still beassigned by setting the user’shashedPassword option. Ahashed password can be generated usingmkpasswd.
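For example, a minimal sketch of a fully declarative user with a hashed password; the hash shown is only a placeholder, generate a real one with mkpasswd:
{
  users.mutableUsers = false;
  users.users.alice = {
    isNormalUser = true;
    # Placeholder value: replace with the output of `mkpasswd`.
    hashedPassword = "$6$replace-with-real-hash";
  };
}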
A user ID (uid) is assigned automatically. You can also specify a uidmanually by adding
{ uid = 1000; }to the user specification.
Groups can be specified similarly. The following states that a groupnamedstudents shall exist:
{ users.groups.students.gid = 1000; }As with users, the group ID (gid) is optional and will be assignedautomatically if it’s missing.
In the imperative style, users and groups are managed by commands suchasuseradd,groupmod and so on. For instance, to create a useraccount namedalice:
# useradd -m aliceTo make all Nix tools available to this new user, use `su - USER`, which opens a login shell (i.e. a shell that loads the profile) for the given user. This will create the ~/.nix-defexpr symlink. So run:
# su - alice -c "true"The flag-m causes the creation of a home directory for the new user,which is generally what you want. The user does not have an initialpassword and therefore cannot log in. A password can be set using thepasswd utility:
# passwd aliceEnter new UNIX password: ***Retype new UNIX password: ***A user can be deleted usinguserdel:
# userdel -r aliceThe flag-r deletes the user’s home directory. Accounts can bemodified usingusermod. Unix groups can be managed usinggroupadd,groupmod andgroupdel.
systemd-sysusers
This is experimental.
Please consider usingUserborn over systemd-sysusers as it’smore feature complete.
Instead of using a custom Perl script to create users and groups, you can use systemd-sysusers:
{ systemd.sysusers.enable = true; }The primary benefit of this is to remove a dependency on Perl.
userborn
This is experimental.
Like systemd-sysusers, Userborn doesn’t depend on Perl but offers some moreadvantages over systemd-sysusers:
It can create “normal” users (with a GID >= 1000).
It can update some information about users. Most notably it can update theirpasswords.
It will warn when users use an insecure or unsupported password hashingscheme.
Userborn is the recommended way to manage users if you don’t want to rely onthe Perl script. It aims to eventually replace the Perl script by default.
You can enable Userborn via:
{ services.userborn.enable = true; }You can configure Userborn to store the password files(/etc/{group,passwd,shadow}) outside of/etc and symlink them from thislocation to/etc:
{ services.userborn.passwordFilesLocation = "/persistent/etc"; }This is useful when you store/etc on atmpfs or if/etc is immutable(e.g. when usingsystem.etc.overlay.mutable = false;). In the latter case theoriginal files are by default stored in/var/lib/nixos.
Userborn implements immutable users by re-mounting the password filesread-only. This means that unlike when using the Perl script, trying to add anew user (e.g. viauseradd) will fail right away.
You can define file systems using thefileSystems configurationoption. For instance, the following definition causes NixOS to mount theExt4 file system on device/dev/disk/by-label/data onto the mountpoint/data:
{ fileSystems."/data" = { device = "/dev/disk/by-label/data"; fsType = "ext4"; };}This will create an entry in/etc/fstab, which will generate acorrespondingsystemd.mountunit viasystemd-fstab-generator.The filesystem will be mounted automatically unless"noauto" ispresent inoptions."noauto"filesystems can be mounted explicitly usingsystemctl e.g.systemctl start data.mount. Mount points are created automatically if they don’talready exist. Fordevice, it’s best to use the topology-independentdevice aliases in/dev/disk/by-label and/dev/disk/by-uuid, as thesedon’t change if the topology changes (e.g. if a disk is moved to anotherIDE controller).
You can usually omit the file system type (fsType), sincemount canusually detect the type and load the necessary kernel moduleautomatically. However, if the file system is needed at early boot (inthe initial ramdisk) and is notext2,ext3 orext4, then it’s bestto specifyfsType to ensure that the kernel module is available.
System startup will fail if any of the filesystems fails to mount,dropping you to the emergency shell. You can make a mount asynchronousand non-critical by addingoptions = [ "nofail" ];.
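For instance, a sketch of a non-critical data mount (device label and mount point are just examples), combining the earlier definition with the nofail option:
{
  fileSystems."/data" = {
    device = "/dev/disk/by-label/data";
    fsType = "ext4";
    # Don't drop to the emergency shell if this file system fails to mount.
    options = [ "nofail" ];
  };
}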
NixOS supports file systems that are encrypted usingLUKS (LinuxUnified Key Setup). For example, here is how you create an encryptedExt4 file system on the device/dev/disk/by-uuid/3f6b0024-3a44-4fde-a43a-767b872abe5d:
# cryptsetup luksFormat /dev/disk/by-uuid/3f6b0024-3a44-4fde-a43a-767b872abe5dWARNING!========This will overwrite data on /dev/disk/by-uuid/3f6b0024-3a44-4fde-a43a-767b872abe5d irrevocably.Are you sure? (Type uppercase yes): YESEnter LUKS passphrase: ***Verify passphrase: ***# cryptsetup luksOpen /dev/disk/by-uuid/3f6b0024-3a44-4fde-a43a-767b872abe5d cryptedEnter passphrase for /dev/disk/by-uuid/3f6b0024-3a44-4fde-a43a-767b872abe5d: ***# mkfs.ext4 /dev/mapper/cryptedThe LUKS volume should be automatically picked up bynixos-generate-config, but you might want to verify that yourhardware-configuration.nix looks correct. To manually ensure that thesystem is automatically mounted at boot time as/, add the followingtoconfiguration.nix:
{ boot.initrd.luks.devices.crypted.device = "/dev/disk/by-uuid/3f6b0024-3a44-4fde-a43a-767b872abe5d"; fileSystems."/".device = "/dev/mapper/crypted";}If GRUB is used as the bootloader and /boot is located on an encrypted partition, it is necessary to add the following GRUB option:
{ boot.loader.grub.enableCryptodisk = true; }NixOS also supports unlocking your LUKS-encrypted file system using a FIDO2-compatible token.
In the following example, we will create a newFIDO2 credential and add it as a new key to our existing device/dev/sda2:
# export FIDO2_LABEL="/dev/sda2 @ $HOSTNAME"# fido2luks credential "$FIDO2_LABEL"f1d00200108b9d6e849a8b388da457688e3dd653b4e53770012d8f28e5d3b269865038c346802f36f3da7278b13ad6a3bb6a1452e24ebeeaa24ba40eef559b1b287d2a2f80b7# fido2luks -i add-key /dev/sda2 f1d00200108b9d6e849a8b388da457688e3dd653b4e53770012d8f28e5d3b269865038c346802f36f3da7278b13ad6a3bb6a1452e24ebeeaa24ba40eef559b1b287d2a2f80b7Password:Password (again):Old password:Old password (again):Added to key to device /dev/sda2, slot: 2To ensure that this file system is decrypted using the FIDO2 compatiblekey, add the following toconfiguration.nix:
{ boot.initrd.luks.fido2Support = true; boot.initrd.luks.devices."/dev/sda2".fido2.credential = "f1d00200108b9d6e849a8b388da457688e3dd653b4e53770012d8f28e5d3b269865038c346802f36f3da7278b13ad6a3bb6a1452e24ebeeaa24ba40eef559b1b287d2a2f80b7";}You can also use the FIDO2 passwordless setup, but for security reasons,you might want to enable it only when your device is PIN protected, suchasTrezor.
{ boot.initrd.luks.devices."/dev/sda2".fido2.passwordLess = true; }If systemd stage 1 is enabled, it handles unlocking of LUKS-encrypted volumesduring boot. The following example enables systemd stage1 and adds support forunlocking the existing LUKS2 volumeroot using any enrolled FIDO2 compatibletokens.
{ boot.initrd = { luks.devices.root = { crypttabExtraOpts = [ "fido2-device=auto" ]; device = "/dev/sda2"; }; systemd.enable = true; };}All tokens that should be used for unlocking the LUKS2-encrypted volume mustfirst be enrolled usingsystemd-cryptenroll.In the following example, a new key slot for the first discovered token isadded to the LUKS volume.
# systemd-cryptenroll --fido2-device=auto /dev/sda2Existing key slots are left intact, unless--wipe-slot= is specified. It isrecommended to add a recovery key that should be stored in a secure physicallocation and can be entered wherever a password would be entered.
# systemd-cryptenroll --recovery-key /dev/sda2SSHFS is aFUSE filesystem that allows easy access to directories on a remote machine using the SSH File Transfer Protocol (SFTP).It means that if you have SSH access to a machine, no additional setup is needed to mount a directory.
In NixOS, SSHFS is packaged as sshfs. Once installed, mounting a directory interactively is as simple as running:
$ sshfs my-user@example.com:/my-dir /mnt/my-dirLike any other FUSE file system, the directory is unmounted using:
$ fusermount -u /mnt/my-dirMounting non-interactively requires some precautions, because sshfs will run at boot and under a different user (root). For obvious reasons, you can’t input a password, so public key authentication using an unencrypted key is needed. To create a new key without a passphrase you can do:
$ ssh-keygen -t ed25519 -P '' -f example-keyGenerating public/private ed25519 key pair.Your identification has been saved in example-keyYour public key has been saved in example-key.pubThe key fingerprint is:SHA256:yjxl3UbTn31fLWeyLYTAKYJPRmzknjQZoyG8gSNEoIE my-user@workstationTo keep the key safe, change the ownership toroot:root and make sure the permissions are600:OpenSSH normally refuses to use the key if it’s not well-protected.
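Assuming the key is stored at /var/secrets/example-key (the path used in the example below), that could look like:
# chown root:root /var/secrets/example-key
# chmod 600 /var/secrets/example-key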
The file system can be configured in NixOS via the usualfileSystems option.Here’s a typical setup:
{ fileSystems."/mnt/my-dir" = { device = "my-user@example.com:/my-dir/"; fsType = "sshfs"; options = [ # Filesystem options "allow_other" # for non-root access "_netdev" # this is a network fs "x-systemd.automount" # mount on demand # SSH options "reconnect" # handle connection drops "ServerAliveInterval=15" # keep connections alive "IdentityFile=/var/secrets/example-key" ]; };}More options fromssh_config(5) can be given as well, for example you can change the default SSH port or specify a jump proxy:
{ options = [ "ProxyJump=bastion@example.com" "Port=22" ];}It’s also possible to change thessh command used by SSHFS to connect to the server.For example:
{ options = [ (builtins.replaceStrings [ " " ] [ "\\040" ] "ssh_command=${pkgs.openssh}/bin/ssh -v -L 8080:localhost:80" ) ];}The escaping of spaces is needed because every option is written to the/etc/fstab file, which is a space-separated table.
If you’re having a hard time figuring out why mounting is failing, you can add the option"debug".This enables a verbose log in SSHFS that you can access via:
$ journalctl -u $(systemd-escape -p /mnt/my-dir/).mountJun 22 11:41:18 workstation mount[87790]: SSHFS version 3.7.1Jun 22 11:41:18 workstation mount[87793]: executing <ssh> <-x> <-a> <-oClearAllForwardings=yes> <-oServerAliveInterval=15> <-oIdentityFile=/var/secrets/wrong-key> <-2> <my-user@example.com> <-s> <sftp>Jun 22 11:41:19 workstation mount[87793]: my-user@example.com: Permission denied (publickey).Jun 22 11:41:19 workstation mount[87790]: read: Connection reset by peerJun 22 11:41:19 workstation systemd[1]: mnt-my\x2ddir.mount: Mount process exited, code=exited, status=1/FAILUREJun 22 11:41:19 workstation systemd[1]: mnt-my\x2ddir.mount: Failed with result 'exit-code'.Jun 22 11:41:19 workstation systemd[1]: Failed to mount /mnt/my-dir.Jun 22 11:41:19 workstation systemd[1]: mnt-my\x2ddir.mount: Consumed 54ms CPU time, received 2.3K IP traffic, sent 2.7K IP traffic.If the mount point contains special characters it needs to be escaped usingsystemd-escape.This is due to the way systemd converts paths into unit names.
NixOS offers a convenient abstraction to create both read-only and writable overlays.
{ fileSystems = { "/writable-overlay" = { overlay = { lowerdir = [ writableOverlayLowerdir ]; upperdir = "/.rw-writable-overlay/upper"; workdir = "/.rw-writable-overlay/work"; }; # Mount the writable overlay in the initrd. neededForBoot = true; }; "/readonly-overlay".overlay.lowerdir = [ writableOverlayLowerdir writableOverlayLowerdir2 ]; };}Ifupperdir andworkdir are not null, they will be created before theoverlay is mounted.
To mount an overlay as read-only, you need to provide at least twolowerdirs.
The X Window System (X11) provides the basis of NixOS’ graphical userinterface. It can be enabled as follows:
{ services.xserver.enable = true; }The X server will automatically detect and use the appropriate videodriver from a set of X.org drivers (such asvesa andintel). You canalso specify a driver manually, e.g.
{ services.xserver.videoDrivers = [ "r128" ]; }to enable X.org’sxf86-video-r128 driver.
You also need to enable at least one desktop or window manager.Otherwise, you can only log into a plain undecoratedxterm window.Thus you should pick one or more of the following lines:
{ services.xserver.desktopManager.plasma5.enable = true; services.xserver.desktopManager.xfce.enable = true; services.xserver.desktopManager.gnome.enable = true; services.xserver.desktopManager.mate.enable = true; services.xserver.windowManager.xmonad.enable = true; services.xserver.windowManager.twm.enable = true; services.xserver.windowManager.icewm.enable = true; services.xserver.windowManager.i3.enable = true; services.xserver.windowManager.herbstluftwm.enable = true;}NixOS’s defaultdisplay manager (the program that provides a graphicallogin prompt and manages the X server) is LightDM. You can select analternative one by picking one of the following lines:
{ services.displayManager.sddm.enable = true; services.xserver.displayManager.gdm.enable = true;}You can set the keyboard layout (and optionally the layout variant):
{ services.xserver.xkb.layout = "de"; services.xserver.xkb.variant = "neo";}The X server is started automatically at boot time. If you don’t wantthis to happen, you can set:
{ services.xserver.autorun = false; }The X server can then be started manually:
# systemctl start display-manager.serviceOn 64-bit systems, if you want OpenGL for 32-bit programs such as inWine, you should also set the following:
{ hardware.graphics.enable32Bit = true; }The X11 login screen can be skipped entirely, automatically logging you into your window manager and desktop environment when you boot your computer.
This is especially helpful if you have disk encryption enabled. Sinceyou already have to provide a password to decrypt your disk, entering asecond password to login can be redundant.
To enable auto-login, you need to define your default window manager and desktop environment. If you wanted no desktop environment and i3 as your window manager, you’d define:
{ services.displayManager.defaultSession = "none+i3"; }Every display manager in NixOS supports auto-login; here is an example using lightdm for a user alice:
{ services.xserver.displayManager.lightdm.enable = true; services.displayManager.autoLogin.enable = true; services.displayManager.autoLogin.user = "alice";}It is possible to avoid a display manager entirely and start the X server manually from a virtual terminal. Add to your configuration:
{ services.xserver.displayManager.startx = { enable = true; generateScript = true; };}then you can start the X server with thestartx command.
The second option will generate a basexinitrc script that will run yourwindow manager and set up the systemd user session.You can extend the script using theextraCommandsoption, for example:
{ services.xserver.displayManager.startx = { generateScript = true; extraCommands = '' xrdb -load .Xresources xsetroot -solid '#666661' xsetroot -cursor_name left_ptr ''; };}or, alternatively, you can write your own from scratch in~/.xinitrc.
In this case, remember you’re responsible for starting the window manager, forexample:
sxhkd &bspwm &and if you have enabled some systemd user service, you will probably want toalso add these lines too:
# import required env variables from the current shell
systemctl --user import-environment DISPLAY XDG_SESSION_ID
# start all graphical user services
systemctl --user start nixos-fake-graphical-session.target
# start the user dbus daemon
dbus-daemon --session --address="unix:path=/run/user/$(id -u)/bus" &
The default and recommended driver for Intel Graphics in X.org is modesetting (included in the xorg-server package itself). This is a generic driver which uses the kernel modesetting (KMS) mechanism; it supports Glamor (2D graphics acceleration via OpenGL) and is actively maintained, though it may perform worse in some cases (such as on old chipsets).
There is a second driver,intel (provided by the xf86-video-intel package),specific to older Intel iGPUs from generation 2 to 9. It is not recommended bymost distributions: it lacks several modern features (for example, it doesn’tsupport Glamor) and the package hasn’t been officially updated since 2015.
Third generation and older iGPUs (15-20+ years old) are not supported by themodesetting driver (X will crash upon startup). Thus, theintel driver isrequired for these chipsets.Otherwise, the results vary depending on the hardware, so you may have to tryboth drivers. Use the optionservices.xserver.videoDriversto set one. The recommended configuration for modern systems is:
{ services.xserver.videoDrivers = [ "modesetting" ]; }The modesetting driver doesn’t currently provide a TearFree option (this will become available in an upcoming X.org release), so without using a compositor (for example, see services.picom.enable) you will experience screen tearing.
If you experience screen tearing no matter what, this configuration wasreported to resolve the issue:
{ services.xserver.videoDrivers = [ "intel" ]; services.xserver.deviceSection = '' Option "DRI" "2" Option "TearFree" "true" '';}Note that this will likely downgrade the performance compared tomodesetting orintel with DRI 3 (default).
NVIDIA provides a proprietary driver for its graphics cards that hasbetter 3D performance than the X.org drivers. It is not enabled bydefault because it’s not free software. You can enable it as follows:
{ services.xserver.videoDrivers = [ "nvidia" ]; }If you have an older card, you may have to use one of the legacy drivers:
{ hardware.nvidia.package = config.boot.kernelPackages.nvidiaPackages.legacy_470; hardware.nvidia.package = config.boot.kernelPackages.nvidiaPackages.legacy_390; hardware.nvidia.package = config.boot.kernelPackages.nvidiaPackages.legacy_340;}You may need to reboot after enabling this driver to prevent a clashwith other kernel modules.
Support for Synaptics touchpads (found in many laptops such as the DellLatitude series) can be enabled as follows:
{ services.libinput.enable = true; }The driver has many options (seeAppendix A).For instance, the following disables tap-to-click behavior:
{ services.libinput.touchpad.tapping = false; }Note: the use ofservices.xserver.synaptics is deprecated since NixOS17.09.
GTK themes can be installed either to user profile or system-wide (viaenvironment.systemPackages). To make Qt 5 applications look similar toGTK ones, you can use the following configuration:
{ qt.enable = true; qt.platformTheme = "gtk2"; qt.style = "gtk2";}It is possible to install custom XKB keyboard layoutsusing the optionservices.xserver.xkb.extraLayouts.
As a first example, we are going to create a layout based on the basicUS layout, with an additional layer to type some greek symbols bypressing the right-alt key.
Create a file calledus-greek with the following content (under adirectory calledsymbols; it’s an XKB peculiarity that will help withtesting):
xkb_symbols "us-greek"{ include "us(basic)" // includes the base US keys include "level3(ralt_switch)" // configures right alt as a third level switch key <LatA> { [ a, A, Greek_alpha ] }; key <LatB> { [ b, B, Greek_beta ] }; key <LatG> { [ g, G, Greek_gamma ] }; key <LatD> { [ d, D, Greek_delta ] }; key <LatZ> { [ z, Z, Greek_zeta ] };};A minimal layout specification must include the following:
{ services.xserver.xkb.extraLayouts.us-greek = { description = "US layout with alt-gr greek"; languages = [ "eng" ]; symbolsFile = /yourpath/symbols/us-greek; };}The name (afterextraLayouts.) should match the one given to thexkb_symbols block.
Applying this customization requires rebuilding several packages, and abroken XKB file can lead to the X session crashing at login. Therefore,you’re strongly advised totest your layout before applying it:
$ nix-shell -p xorg.xkbcomp$ setxkbmap -I/yourpath us-greek -print | xkbcomp -I/yourpath - $DISPLAYYou can inspect the predefined XKB files for examples:
$ echo "$(nix-build --no-out-link '<nixpkgs>' -A xorg.xkeyboardconfig)/etc/X11/xkb/"Once the configuration is applied, and you did a logout/login cycle, thelayout should be ready to use. You can try it by e.g. runningsetxkbmap us-greek and then type<alt>+a (it may not get applied inyour terminal straight away). To change the default, the usualservices.xserver.xkb.layout option can still be used.
A layout can have several other components besidesxkb_symbols, forexample we will define new keycodes for some multimedia key and bindthese to some symbol.
Use the xev utility from pkgs.xorg.xev to find the codes of the keys of interest, then create a media-key file to hold the keycode definitions:
xkb_keycodes "media"{ <volUp> = 123; <volDown> = 456;}Now use the newly defined keycodes in media-sym:
xkb_symbols "media"{ key.type = "ONE_LEVEL"; key <volUp> { [ XF86AudioRaiseVolume ] }; key <volDown> { [ XF86AudioLowerVolume ] };}As before, to install the layout do:
{ services.xserver.xkb.extraLayouts.media = { description = "Multimedia keys remapping"; languages = [ "eng" ]; symbolsFile = /path/to/media-sym; keycodesFile = /path/to/media-key; };}The function pkgs.writeText <filename> <content> can be useful if you prefer to keep the layout definitions inside the NixOS configuration.
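As a rough sketch, the us-greek symbols file from earlier could be embedded inline like this (the layout content is abbreviated here):
{
  services.xserver.xkb.extraLayouts.us-greek = {
    description = "US layout with alt-gr greek";
    languages = [ "eng" ];
    # Generate the symbols file from an inline string instead of a separate file.
    symbolsFile = pkgs.writeText "us-greek" ''
      xkb_symbols "us-greek"
      {
        include "us(basic)"
        include "level3(ralt_switch)"
        key <LatA> { [ a, A, Greek_alpha ] };
      };
    '';
  };
}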
Unfortunately, the Xorg server does not (currently) support setting akeymap directly but relies instead on XKB rules to select the matchingcomponents (keycodes, types, …) of a layout. This means thatcomponents other than symbols won’t be loaded by default. As aworkaround, you can set the keymap usingsetxkbmap at the start of thesession with:
{ services.xserver.displayManager.sessionCommands = "setxkbmap -keycodes media";}If you are manually starting the X server, you should set the argument-xkbdir /etc/X11/xkb, otherwise X won’t find your layout files. Forexample withxinit run
$ xinit -- -xkbdir /etc/X11/xkbTo learn how to write layouts take a look at the XKBdocumentation.More example layouts can also be foundhere.
While X11 (seeX Window System) is still the primary display technologyon NixOS, Wayland support is steadily improving. Where X11 separates theX Server and the window manager, on Wayland those are combined: aWayland Compositor is like an X11 window manager, but also embeds theWayland ‘Server’ functionality. This means it is sufficient to installa Wayland Compositor such as sway without separately enabling a Waylandserver:
{ programs.sway.enable = true; }This installs the sway compositor along with some essential utilities.Now you can start sway from the TTY console.
If you are using a wlroots-based compositor, like sway, and want to beable to share your screen, make sure to configure Pipewire usingservices.pipewire.enableand related options.
For more helpful tips and tricks, see thewiki page about Sway.
NixOS provides various APIs that benefit from GPU hardware acceleration,such as VA-API and VDPAU for video playback; OpenGL and Vulkan for 3Dgraphics; and OpenCL for general-purpose computing. This chapterdescribes how to set up GPU hardware acceleration (as far as this is notdone automatically) and how to verify that hardware acceleration isindeed used.
Most of the aforementioned APIs are agnostic with regards to whichdisplay server is used. Consequently, these instructions should applyboth to the X Window System and Wayland compositors.
OpenCL is a general compute API.It is used by various applications such as Blender and Darktable toaccelerate certain operations.
OpenCL applications load drivers through the Installable Client Driver (ICD) mechanism. In this mechanism, an ICD file specifies the path to the OpenCL driver for a particular GPU family. In NixOS, there are two ways to make ICD files visible to the ICD loader. The first is through the OCL_ICD_VENDORS environment variable. This variable can contain a directory which is scanned by the ICD loader for ICD files. For example:
$ export \ OCL_ICD_VENDORS=`nix-build '<nixpkgs>' --no-out-link -A rocmPackages.clr.icd`/etc/OpenCL/vendors/The second mechanism is to add the OpenCL driver package tohardware.graphics.extraPackages.This links the ICD file under/run/opengl-driver, where it will be visibleto the ICD loader.
The proper installation of OpenCL drivers can be verified through the clinfo command of the clinfo package. This command will report the number of hardware devices that are found and give detailed information for each device:
$ clinfo | head -n3Number of platforms 1Platform Name AMD Accelerated Parallel ProcessingPlatform Vendor Advanced Micro Devices, Inc.Modern AMDGraphics CoreNext (GCN) GPUs aresupported through the rocmPackages.clr.icd package. Adding this package tohardware.graphics.extraPackagesenables OpenCL support:
{ hardware.graphics.extraPackages = [ rocmPackages.clr.icd ]; }Intel Gen12 and later GPUsare supported by the Intel NEO OpenCL runtime that is provided by theintel-compute-runtime package.The previous generations (8,9 and 11), have been moved to theintel-compute-runtime-legacy1 package.The proprietary Intel OpenCL runtime, in theintel-ocl package, is an alternative for Gen7 GPUs.
Bothintel-compute-runtime packages, as well as theintel-ocl package can be added tohardware.graphics.extraPackagesto enable OpenCL support. For example, for Gen12 and later GPUs, the followingconfiguration can be used:
{ hardware.graphics.extraPackages = [ intel-compute-runtime ]; }Vulkan is a graphics and compute API for GPUs. It is used directly by games or indirectly through compatibility layers like DXVK.
By default, ifhardware.graphics.enableis enabled, Mesa is installed and provides Vulkan for supported hardware.
Similar to OpenCL, Vulkan drivers are loaded through theInstallableClient Driver (ICD) mechanism. ICD files for Vulkan are JSON files thatspecify the path to the driver library and the supported Vulkan version.All successfully loaded drivers are exposed to the application asdifferent GPUs. In NixOS, there are two ways to make ICD files visibleto Vulkan applications: an environment variable and a module option.
The first option is through theVK_ICD_FILENAMES environment variable.This variable can contain multiple JSON files, separated by:. Forexample:
$ export \ VK_ICD_FILENAMES=`nix-build '<nixpkgs>' --no-out-link -A amdvlk`/share/vulkan/icd.d/amd_icd64.jsonThe second mechanism is to add the Vulkan driver package tohardware.graphics.extraPackages.This links the ICD file under/run/opengl-driver, where it will bevisible to the ICD loader.
The proper installation of Vulkan drivers can be verified through thevulkaninfo command of the vulkan-tools package. This command willreport the hardware devices and drivers found, in this example outputamdvlk and radv:
$ vulkaninfo | grep GPU GPU id : 0 (Unknown AMD GPU) GPU id : 1 (AMD RADV NAVI10 (LLVM 9.0.1)) ...GPU0: deviceType = PHYSICAL_DEVICE_TYPE_DISCRETE_GPU deviceName = Unknown AMD GPUGPU1: deviceType = PHYSICAL_DEVICE_TYPE_DISCRETE_GPUA simple graphical application that uses Vulkan isvkcube from thevulkan-tools package.
Modern AMDGraphics CoreNext (GCN) GPUs aresupported through either radv, which is part of mesa, or the amdvlkpackage. Adding the amdvlk package tohardware.graphics.extraPackagesmakes amdvlk the default driver and hides radv and lavapipe from the device list.A specific driver can be forced as follows:
{ hardware.graphics.extraPackages = [ pkgs.amdvlk ]; # To enable Vulkan support for 32-bit applications, also add: hardware.graphics.extraPackages32 = [ pkgs.driversi686Linux.amdvlk ]; # Force radv environment.variables.AMD_VULKAN_ICD = "RADV"; # Or environment.variables.VK_ICD_FILENAMES = "/run/opengl-driver/share/vulkan/icd.d/radeon_icd.x86_64.json";}VA-API (Video Acceleration API)is an open-source library and API specification, which provides access tographics hardware acceleration capabilities for video processing.
VA-API drivers are loaded bylibva. The version in nixpkgs is built to searchthe opengl driver path, so drivers can be installed inhardware.graphics.extraPackages.
VA-API can be tested using:
$ nix-shell -p libva-utils --run vainfoModern Intel GPUs use the iHD driver, which can be installed with:
{ hardware.graphics.extraPackages = [ intel-media-driver ]; }Older Intel GPUs use the i965 driver, which can be installed with:
{ hardware.graphics.extraPackages = [ intel-vaapi-driver ]; }Except where noted explicitly, it should not be necessary to adjust userpermissions to use these acceleration APIs. In the defaultconfiguration, GPU devices have world-read/write permissions(/dev/dri/renderD*) or are tagged asuaccess (/dev/dri/card*). Theaccess control lists of devices with theuaccess tag will be updatedautomatically when a user logs in throughsystemd-logind. For example,if the useralice is logged in, the access control list should look asfollows:
$ getfacl /dev/dri/card0# file: dev/dri/card0# owner: root# group: videouser::rw-user:alice:rw-group::rw-mask::rw-other::---If you disabled (this functionality of)systemd-logind, you may needto add the user to thevideo group and log in again.
TheInstallable Client Driver (ICD) mechanism used by OpenCL andVulkan loads runtimes into its address space usingdlopen. Mixing anICD loader mechanism and runtimes from different version of nixpkgs maynot work. For example, if the ICD loader uses an older version of glibcthan the runtime, the runtime may not be loadable due to missingsymbols. Unfortunately, the loader will generally be quiet about suchissues.
If you suspect that you are running into library version mismatches between an ICD loader and a runtime, you could run an application with the LD_DEBUG variable set to get more diagnostic information. For example, OpenCL can be tested with LD_DEBUG=files clinfo, which should report missing symbols.
To enable the Xfce Desktop Environment, set
{ services.xserver.desktopManager.xfce.enable = true; services.displayManager.defaultSession = "xfce";}Optionally,picom can be enabled for nice graphical effects, someexample settings:
{ services.picom = { enable = true; fade = true; inactiveOpacity = 0.9; shadow = true; fadeDelta = 4; };}Some Xfce programs are not installed automatically. To install themmanually (system wide), put them into yourenvironment.systemPackages frompkgs.xfce.
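For example (the chosen packages are only illustrations; any attribute from pkgs.xfce can be listed the same way):
{
  # Illustrative picks from the pkgs.xfce set.
  environment.systemPackages = with pkgs.xfce; [
    xfce4-taskmanager
    xfce4-whiskermenu-plugin
  ];
}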
Thunar (the Xfce file manager) is automatically enabled when Xfce isenabled. To enable Thunar without enabling Xfce, use the configurationoptionprograms.thunar.enable instead of addingpkgs.xfce.thunar toenvironment.systemPackages.
If you’d like to add extra plugins to Thunar, add them toprograms.thunar.plugins. You shouldn’t just add them toenvironment.systemPackages.
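A sketch, assuming the archive and volume-management plugins from pkgs.xfce:
{
  programs.thunar.enable = true;
  # Plugins go here rather than into environment.systemPackages.
  programs.thunar.plugins = with pkgs.xfce; [
    thunar-archive-plugin
    thunar-volman
  ];
}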
Even after enabling udisks2, volume management might not work, or Thunar and/or the desktop may take a long time to show up. Thunar will print this kind of message on start (look at journalctl --user -b).
Thunar:2410): GVFS-RemoteVolumeMonitor-WARNING **: remote volume monitor with dbus name org.gtk.Private.UDisks2VolumeMonitor is not supportedThis is caused by some needed GNOME services not running. This is allfixed by enabling “Launch GNOME services on startup” in the Advancedtab of the Session and Startup settings panel. Alternatively, you canrun this command to do the same thing.
$ xfconf-query -c xfce4-session -p /compat/LaunchGNOME -s trueIt is necessary to log out and log in again for this to take effect.
This section describes how to configure networking componentson your NixOS machine.
To facilitate network configuration, some desktop environments useNetworkManager. You can enable NetworkManager by setting:
{ networking.networkmanager.enable = true; }Some desktop managers (e.g., GNOME) enable NetworkManager automatically for you.
All users that should have permission to change network settings mustbelong to thenetworkmanager group:
{ users.users.alice.extraGroups = [ "networkmanager" ]; }NetworkManager is controlled using eithernmcli ornmtui(curses-based terminal user interface). See their manual pages fordetails on their usage. Some desktop environments (GNOME, KDE) havetheir own configuration tools for NetworkManager. On XFCE, there is noconfiguration tool for NetworkManager by default: by enablingprograms.nm-applet.enable, the graphical applet will beinstalled and will launch automatically when the graphical session isstarted.
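For example, to get the applet on such a desktop:
{ programs.nm-applet.enable = true; }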
networking.networkmanager andnetworking.wireless (WPA Supplicant)can be used together if desired. To do this you need to instructNetworkManager to ignore those interfaces like:
{ networking.networkmanager.unmanaged = [ "*" "except:type:wwan" "except:type:gsm" ];}Refer to the option description for the exact syntax and references toexternal documentation.
Secure shell (SSH) access to your machine can be enabled by setting:
{ services.openssh.enable = true; }By default, root logins using a password are disallowed. They can bedisabled entirely by settingservices.openssh.settings.PermitRootLogin to"no".
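For example, to disable root logins over SSH entirely:
{ services.openssh.settings.PermitRootLogin = "no"; }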
You can declaratively specify authorised public keys for a useras follows:
{ users.users.alice.openssh.authorizedKeys.keys = [ "ssh-ed25519 AAAAB3NzaC1kc3MAAACBAPIkGWVEt4..." ];}By default, NixOS uses DHCP (specifically,dhcpcd) to automaticallyconfigure network interfaces. However, you can configure an interfacemanually as follows:
{ networking.interfaces.eth0.ipv4.addresses = [ { address = "192.168.1.2"; prefixLength = 24; } ];}Typically you’ll also want to set a default gateway and set of nameservers:
{ networking.defaultGateway = "192.168.1.1"; networking.nameservers = [ "8.8.8.8" ];}Statically configured interfaces are set up by the systemd serviceinterface-name-cfg.service. The default gateway and name serverconfiguration is performed bynetwork-setup.service.
The host name is set usingnetworking.hostName:
{ networking.hostName = "cartman"; }The default host name isnixos. Set it to the empty string ("") toallow the DHCP server to provide the host name.
IPv6 is enabled by default. Stateless address autoconfiguration is used to automatically assign IPv6 addresses to all interfaces, and Privacy Extensions (RFC 4941) are enabled by default. You can adjust the default for this by setting networking.tempAddresses. This option may be overridden on a per-interface basis by networking.interfaces.<name>.tempAddress. You can disable IPv6 support globally by setting:
{ networking.enableIPv6 = false; }You can disable IPv6 on a single interface using a normal sysctl (inthis example, we use interfaceeth0):
{ boot.kernel.sysctl."net.ipv6.conf.eth0.disable_ipv6" = true; }As with IPv4, network interfaces are automatically configured via DHCPv6. You can configure an interface manually:
{ networking.interfaces.eth0.ipv6.addresses = [ { address = "fe00:aa:bb:cc::2"; prefixLength = 64; } ];}For configuring a gateway, optionally with explicitly specifiedinterface:
{ networking.defaultGateway6 = { address = "fe00::1"; interface = "enp0s3"; };}Seethe section called “IPv4 Configuration” for similar examples and additional information.
NixOS has a simple stateful firewall that blocks incoming connectionsand other unexpected packets. The firewall applies to both IPv4 and IPv6traffic. It is enabled by default. It can be disabled as follows:
{ networking.firewall.enable = false; }If the firewall is enabled, you can open specific TCP ports to theoutside world:
{ networking.firewall.allowedTCPPorts = [ 80 443 ];}Note that TCP port 22 (ssh) is opened automatically if the SSH daemon isenabled (services.openssh.enable = true). UDP ports can be opened throughnetworking.firewall.allowedUDPPorts.
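For example, to open UDP port 53 (the port number here is only an illustration):
{ networking.firewall.allowedUDPPorts = [ 53 ]; }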
To open ranges of TCP ports:
{ networking.firewall.allowedTCPPortRanges = [ { from = 4000; to = 4007; } { from = 8000; to = 8010; } ];}Similarly, UDP port ranges can be opened throughnetworking.firewall.allowedUDPPortRanges.
For a desktop installation using NetworkManager (e.g., GNOME), you justhave to make sure the user is in thenetworkmanager group and you canskip the rest of this section on wireless networks.
NixOS will start wpa_supplicant for you if you enable this setting:
{ networking.wireless.enable = true; }NixOS lets you specify networks for wpa_supplicant declaratively:
{ networking.wireless.networks = { # SSID with no spaces or special characters echelon = { psk = "abcdefgh"; }; # SSID with spaces and/or special characters "echelon's AP" = { psk = "ijklmnop"; }; # Hidden SSID (attribute names must be unique, so this example uses a different SSID) "hidden-network" = { hidden = true; psk = "qrstuvwx"; }; free.wifi = { }; # Public wireless network };}Be aware that keys will be written to the nix store in plaintext! When no networks are set, it will default to using a configuration file at /etc/wpa_supplicant.conf. You should edit this file yourself to define wireless networks, WPA keys and so on (see wpa_supplicant.conf(5)).
If you are using WPA2 you can generate pskRaw key usingwpa_passphrase:
$ wpa_passphrase ESSID PSKnetwork={ ssid="echelon" #psk="abcdefgh" psk=dca6d6ed41f4ab5a984c9f55f6f66d4efdc720ebf66959810f4329bb391c5435}{ networking.wireless.networks = { echelon = { pskRaw = "dca6d6ed41f4ab5a984c9f55f6f66d4efdc720ebf66959810f4329bb391c5435"; }; };}or you can use it to directly generate thewpa_supplicant.conf:
# wpa_passphrase ESSID PSK > /etc/wpa_supplicant.confAfter you have edited thewpa_supplicant.conf, you need to restart thewpa_supplicant service.
# systemctl restart wpa_supplicant.serviceYou can usenetworking.localCommands tospecify shell commands to be run at the end ofnetwork-setup.service. Thisis useful for doing network configuration not covered by the existing NixOSmodules. For instance, to statically configure an IPv6 address:
{ networking.localCommands = '' ip -6 addr add 2001:610:685:1::1/64 dev eth0 '';}NixOS uses the udevpredictable namingscheme to assign namesto network interfaces. This means that by default cards are not giventhe traditional names likeeth0 oreth1, whose order can changeunpredictably across reboots. Instead, relying on physical locations andfirmware information, the scheme produces names likeens1,enp2s0,etc.
These names are predictable but less memorable and not necessarilystable: for example installing new hardware or changing firmwaresettings can result in anamechange.If this is undesirable, for example if you have a single ethernet card,you can revert to the traditional scheme by settingnetworking.usePredictableInterfaceNamestofalse.
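That is:
{ networking.usePredictableInterfaceNames = false; }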
In case there are multiple interfaces of the same type, it’s better to assign custom names based on the device hardware address. For example, we assign the name wan to the interface with MAC address 52:54:00:12:01:01 using a networkd link unit:
{ systemd.network.links."10-wan" = { matchConfig.PermanentMACAddress = "52:54:00:12:01:01"; linkConfig.Name = "wan"; };}Note that links are directly read by udev,not networkd, and will workeven if networkd is disabled.
Alternatively, we can use a plain old udev rule:
{ boot.initrd.services.udev.rules = '' SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", \ ATTR{address}=="52:54:00:12:01:01", KERNEL=="eth*", NAME="wan" '';}The rule must be installed in the initrd usingboot.initrd.services.udev.rules, not the usualservices.udev.extraRulesoption. This is to avoid race conditions with other programs controllingthe interface.
You can override the Linux kernel and associated packages using theoptionboot.kernelPackages. For instance, this selects the Linux 3.10kernel:
{ boot.kernelPackages = pkgs.linuxKernel.packages.linux_3_10; }Note that this not only replaces the kernel, but also packages that arespecific to the kernel version, such as the NVIDIA video drivers. Thisensures that driver packages are consistent with the kernel.
Whilepkgs.linuxKernel.packages contains all available kernel packages,you may want to use one of the unversionedpkgs.linuxPackages_* aliasessuch aspkgs.linuxPackages_latest, that are kept up to date with newversions.
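For example:
{ boot.kernelPackages = pkgs.linuxPackages_latest; }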
Please note that the current convention in NixOS is to only keep activelymaintained kernel versions on both unstable and the currently supported stablerelease(s) of NixOS. This means that a non-longterm kernel will be removed after it’sabandoned by the kernel developers, even on stable NixOS versions. If youpin your kernel onto a non-longterm version, expect your evaluation to fail assoon as the version is out of maintenance.
A kernel will be removed from nixpkgs when the first batch of stable kernelsafter the final release is published. E.g. when 6.15.11 is the final releaseof the 6.15 series and is released together with 6.16.3 and 6.12.43, it will beremoved on the release of 6.16.4 and 6.12.44. Custom kernel variants suchas linux-hardened are also affected by this.
Longterm versions of kernels will be removed before the next stable NixOS that willexceed the maintenance period of the kernel version.
The default Linux kernel configuration should be fine for most users.You can see the configuration of your current kernel with the followingcommand:
zcat /proc/config.gzIf you want to change the kernel configuration, you can use thepackageOverrides feature (seethe section called “Customising Packages”). Forinstance, to enable support for the kernel debugger KGDB:
{ nixpkgs.config.packageOverrides = pkgs: pkgs.lib.recursiveUpdate pkgs { linuxKernel.kernels.linux_5_10 = pkgs.linuxKernel.kernels.linux_5_10.override { extraConfig = '' KGDB y ''; }; };}extraConfig takes a list of Linux kernel configuration options, oneper line. The name of the option should not include the prefixCONFIG_. The option value is typicallyy,n orm (to buildsomething as a kernel module).
Kernel modules for hardware devices are generally loaded automaticallybyudev. You can force a module to be loaded viaboot.kernelModules, e.g.
{ boot.kernelModules = [ "fuse" "kvm-intel" "coretemp" ];}If the module is required early during the boot (e.g. to mount the rootfile system), you can useboot.initrd.kernelModules:
{ boot.initrd.kernelModules = [ "cifs" ]; }This causes the specified modules and their dependencies to be added tothe initial ramdisk.
Kernel runtime parameters can be set throughboot.kernel.sysctl, e.g.
{ boot.kernel.sysctl."net.ipv4.tcp_keepalive_time" = 120; }sets the kernel’s TCP keepalive time to 120 seconds. To see theavailable parameters, runsysctl -a.
Please refer to the Nixpkgs manual for the various ways ofbuilding a custom kernel.
To use your custom kernel package in your NixOS configuration, set
{ boot.kernelPackages = pkgs.linuxPackagesFor yourCustomKernel; }The Linux kernel does not have Rust language support enabled bydefault. For kernel versions 6.7 or newer, experimental Rust supportcan be enabled. In a NixOS configuration, set:
{ boot.kernelPatches = [ { name = "Rust Support"; patch = null; features = { rust = true; }; } ];}This section was moved to theNixpkgs manual.
It’s a common issue that the latest stable version of ZFS doesn’t support the latestavailable Linux kernel. It is recommended to use the latest available LTS that’s compatiblewith ZFS. Usually this is the default kernel provided by nixpkgs (i.e.pkgs.linuxPackages).
Table of Contents
Subversion is a centralizedversion-control system. It can use avariety ofprotocolsfor communication between client and server.
This section focuses on configuring a web-based server on top of theApache HTTP server, which usesWebDAV/DeltaVfor communication.
For more information on the general setup, please refer to the appropriate section of the Subversion book.
To configure, include in/etc/nixos/configuration.nix code to activateApache HTTP, settingservices.httpd.adminAddrappropriately:
{ services.httpd.enable = true; services.httpd.adminAddr = "..."; networking.firewall.allowedTCPPorts = [ 80 443 ];}For a simple Subversion server with basic authentication, configure theSubversion module for Apache as follows, settinghostName anddocumentRoot appropriately, andSVNParentPath to the parentdirectory of the repositories,AuthzSVNAccessFile to the location ofthe.authz file describing access permission, andAuthUserFile tothe password file.
{ services.httpd.extraModules = [ # note that order is *super* important here { name = "dav_svn"; path = "${pkgs.apacheHttpdPackages.subversion}/modules/mod_dav_svn.so"; } { name = "authz_svn"; path = "${pkgs.apacheHttpdPackages.subversion}/modules/mod_authz_svn.so"; } ]; services.httpd.virtualHosts = { "svn" = { hostName = HOSTNAME; documentRoot = DOCUMENTROOT; locations."/svn".extraConfig = '' DAV svn SVNParentPath REPO_PARENT AuthzSVNAccessFile ACCESS_FILE AuthName "SVN Repositories" AuthType Basic AuthUserFile PASSWORD_FILE Require valid-user ''; }; };}The key"svn" is just a symbolic name identifying the virtual host.The"/svn" inlocations."/svn".extraConfig is the path underneathwhich the repositories will be served.
This page explainshow to set up the Subversion configuration itself. This boils down tothe following:
Underneath REPO_PARENT, repositories can be set up as follows:
$ svnadmin create REPO_NAME
Repository files need to be accessible by wwwrun:
$ chown -R wwwrun:wwwrun REPO_PARENTThe password filePASSWORD_FILE can be created as follows:
$ htpasswd -cs PASSWORD_FILE USER_NAMEAdditional users can be set up similarly, omitting thec flag:
$ htpasswd -s PASSWORD_FILE USER_NAMEThe file describing access permissionsACCESS_FILE will look somethinglike the following:
[/]
* = r

[REPO_NAME:/]
USER_NAME = rw

The Subversion repositories will be accessible as http://HOSTNAME/svn/REPO_NAME.
Table of Contents
Pantheon is the desktop environment created for the elementary OS distribution. It is written from scratch in Vala, utilizing GNOME technologies with GTK and Granite.
All of Pantheon is working in NixOS and the applications should be available, aside from a fewexceptions. To enable Pantheon, set
{ services.xserver.desktopManager.pantheon.enable = true; }This automatically enables LightDM and Pantheon’s LightDM greeter. If you’d like to disable this, set
{ services.xserver.displayManager.lightdm.greeters.pantheon.enable = false; services.xserver.displayManager.lightdm.enable = false;}but please be aware using Pantheon without LightDM as a display manager will break screenlocking from the UI. The NixOS module for Pantheon installs all of Pantheon’s default applications. If you’d like to not install Pantheon’s apps, set
{ services.pantheon.apps.enable = false; }You can also useenvironment.pantheon.excludePackages to remove any other app (likeelementary-mail).
Wingpanel and Switchboard work differently in NixOS than they do in other distributions with regard to plugins. You cannot install a plugin globally (e.g. with environment.systemPackages) to start using it. Instead, use the module’s dedicated options to configure the programs with plugs or indicators.
The difference in NixOS is both these programs are patched to load plugins from a directory that is the value of an environment variable. All of which is controlled in Nix. If you need to configure the particular packages manually you can override the packages like:
wingpanel-with-indicators.override { indicators = [ pkgs.some-special-indicator ]; }
switchboard-with-plugs.override { plugs = [ pkgs.some-special-plug ]; }
Please note that, as the NixOS options describe these as extra plugins, this only adds to the default plugins included with the programs. If for some reason you’d like to configure exactly which plugins to use, both packages have an argument for this:
wingpanel-with-indicators.override { useDefaultIndicators = false; indicators = specialListOfIndicators; }
switchboard-with-plugs.override { useDefaultPlugs = false; plugs = specialListOfPlugs; }
This can be most useful for testing a particular plug-in in isolation.
Open Switchboard and go to: Administration → About → Restore Default Settings → Restore Settings. This will reset any dconf settings to their Pantheon defaults. Note this could reset certain GNOME specific preferences if that desktop was used prior.
This is a knownissue and there is no known workaround.
AppCenter is available and the Flatpak backend should work so you can install some Flatpak applications using it. However, due to missing appstream metadata, the Packagekit backend does not function currently. See thisissue.
If you are using Pantheon, AppCenter should be installed by default if you haveFlatpak support enabled. If you also wish to add theappcenter Flatpak remote:
$ flatpak remote-add --if-not-exists appcenter https://flatpak.elementary.io/repo.flatpakrepoTable of Contents
GNOME provides a simple, yet full-featured desktop environment with a focus on productivity. Its Mutter compositor supports both Wayland and X server, and the GNOME Shell user interface is fully customizable by extensions.
All of the core apps, optional apps, games, and core developer tools from GNOME are available.
To enable the GNOME desktop use:
{ services.xserver.desktopManager.gnome.enable = true; services.xserver.displayManager.gdm.enable = true;}While it is not strictly necessary to use GDM as the display manager with GNOME, it is recommended, as some features such as screen lockmight not work without it.
The default applications used in NixOS are very minimal, inspired by the defaults used ingnome-build-meta.
If you’d like to only use the GNOME desktop and not the apps, you can disable them with:
{ services.gnome.core-apps.enable = false; }and none of them will be installed.
If you’d only like to omit a subset of the core utilities, you can useenvironment.gnome.excludePackages.Note that this mechanism can only exclude core utilities, games and core developer tools.
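For instance, a sketch that drops two core apps from the default set (epiphany and geary are used here purely as examples of excludable core utilities):
{
  environment.gnome.excludePackages = with pkgs; [
    epiphany # GNOME Web
    geary    # mail client
  ];
}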
It is also possible to disable many of thecore services. For example, if you do not need indexing files, you can disable TinySPARQL with:
{ services.gnome.localsearch.enable = false; services.gnome.tinysparql.enable = false;}Note, however, that doing so is not supported and might break some applications. Notably, GNOME Music cannot work without TinySPARQL.
You can install all of the GNOME games with:
{ services.gnome.games.enable = true; }You can install GNOME core developer tools with:
{ services.gnome.core-developer-tools.enable = true; }GNOME Flashback provides a desktop environment based on the classic GNOME 2 architecture. You can enable the default GNOME Flashback session, which uses the Metacity window manager, with:
{ services.xserver.desktopManager.gnome.flashback.enableMetacity = true; }It is also possible to create custom sessions that replace Metacity with a different window manager usingservices.xserver.desktopManager.gnome.flashback.customSessions.
The following example usesxmonad window manager:
{ services.xserver.desktopManager.gnome.flashback.customSessions = [ { wmName = "xmonad"; wmLabel = "XMonad"; wmCommand = "${pkgs.haskellPackages.xmonad}/bin/xmonad"; enableGnomePanel = false; } ];}Icon themes and GTK themes don’t require any special option to install in NixOS.
You can add them to environment.systemPackages and switch to them with GNOME Tweaks. If you’d like to do this manually in dconf, change the values of the following keys in dconf-editor:
/org/gnome/desktop/interface/gtk-theme
/org/gnome/desktop/interface/icon-theme
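A sketch of installing a theme pair system-wide; arc-theme and papirus-icon-theme are example packages from Nixpkgs, and any other GTK or icon theme package works the same way:
{
  environment.systemPackages = with pkgs; [
    arc-theme          # GTK theme
    papirus-icon-theme # icon theme
  ];
}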
Most Shell extensions are packaged under thegnomeExtensions attribute.Some packages that include Shell extensions, likegpaste, don’t have their extension decoupled under this attribute.
You can install them like any other package:
{ environment.systemPackages = with pkgs.gnomeExtensions; [ dash-to-dock gsconnect mpris-indicator-button ]; }
Unfortunately, we lack a way for these to be managed in a completely declarative way. So you have to enable them manually with an Extensions application. It is possible to use a GSettings override for this on org.gnome.shell.enabled-extensions, but that will only influence the default value.
The majority of software built on the GNOME platform uses GLib’s GSettings system to manage runtime configuration. For our purposes, the system consists of XML schemas describing the individual configuration options, stored in the package, and a settings backend, where the values of the settings are stored. On NixOS, like on most Linux distributions, the dconf database is used as the backend.
GSettings vendor overrides can be used to adjust the default values for settings of the GNOME desktop and apps by replacing the default values specified in the XML schemas. Using overrides will allow you to pre-seed user settings before you even start the session.
Overrides really only change the default values for GSettings keys, so if you or an application changes the setting value, the value set by the override will be ignored. Until NixOS’s dconf module implements changing values, you will either need to keep that in mind and clear the setting from the backend using the dconf reset command when that happens, or use the module from home-manager.
You can override the default GSettings values using theservices.xserver.desktopManager.gnome.extraGSettingsOverrides option.
Take note that whatever packages you want to override GSettings for, you need to add them toservices.xserver.desktopManager.gnome.extraGSettingsOverridePackages.
You can usedconf-editor tool to explore which GSettings you can set.
{ services.xserver.desktopManager.gnome = { extraGSettingsOverrides = '' # Change default background [org.gnome.desktop.background] picture-uri='file://${pkgs.nixos-artwork.wallpapers.mosaic-blue.gnomeFilePath}' # Favorite apps in gnome-shell [org.gnome.shell] favorite-apps=['org.gnome.Console.desktop', 'org.gnome.Nautilus.desktop'] ''; extraGSettingsOverridePackages = [ pkgs.gsettings-desktop-schemas # for org.gnome.desktop pkgs.gnome-shell # for org.gnome.shell ]; };}Yes you can, and any other display-manager in NixOS.
However, it doesn’t work correctly for the Wayland session of GNOME Shell yet, andwon’t be able to lock your screen.
Seethis issue.
Table of Contents
NixOS has support for several bootloader backends by default: systemd-boot, grub, uboot, etc.The built-in bootloader backend support is generic and supports most use cases.Some users may prefer to create advanced workflows around managing the bootloader and bootable entries.
You can replace the built-in bootloader support with your own tooling using the “external” bootloader option.
Imagine you have created a new package called FooBoot.FooBoot provides a program at${pkgs.fooboot}/bin/fooboot-install which takes the system closure’s path as its only argument and configures the system’s bootloader.
You can enable FooBoot like this:
{ pkgs, ... }:{ boot.loader.external = { enable = true; installHook = "${pkgs.fooboot}/bin/fooboot-install"; };}Bootloaders should useRFC-0125’s Bootspec format and synthesis tools to identify the key properties for bootable system generations.
Table of Contents
Clevisis a framework for automated decryption of resources.Clevis allows for secure unattended disk decryption during boot, using decryption policies that must be satisfied for the data to decrypt.
The first step is to embed your secret in aJWE file.JWE files have to be created through the clevis command line. 3 types of policies are supported:
TPM policies
Secrets are pinned against the presence of a TPM2 device, for example:
echo -n hi | clevis encrypt tpm2 '{}' > hi.jweTang policies
Secrets are pinned against the presence of a Tang server, for example:
echo -n hi | clevis encrypt tang '{"url": "http://tang.local"}' > hi.jweShamir Secret Sharing
Using Shamir’s Secret Sharing (sss), secrets are pinned using a combination of the two preceding policies. For example:
echo -n hi | clevis encrypt sss \'{"t": 2, "pins": {"tpm2": {"pcr_ids": "0"}, "tang": {"url": "http://tang.local"}}}' \> hi.jweFor more complete documentation on how to generate a secret with clevis, see theclevis documentation.
In order to activate unattended decryption of a resource at boot, enable theclevis module:
{ boot.initrd.clevis.enable = true; }Then, specify the device you want to decrypt using a given clevis secret. Clevis will automatically try to decrypt the device at boot and will fallback to interactive unlocking if the decryption policy is not fulfilled.
{ boot.initrd.clevis.devices."/dev/nvme0n1p1".secretFile = ./nvme0n1p1.jwe; }Onlybcachefs,zfs andluks encrypted devices are supported at this time.
Table of Contents
Garage is an open-source, self-hostable S3 store, simpler than MinIO, for geo-distributed stores. The server setup can be automated using services.garage. A client configured for your local Garage instance is available in the global environment as garage-manage.
The current default in NixOS is garage_0_8, which is also the latest major version available.
Garage provides a cookbook documentation on how to upgrade:https://garagehq.deuxfleurs.fr/documentation/cookbook/upgrading/
Garage has two types of upgrades: patch-level upgrades and minor/major version upgrades.
In all cases, you should read the changelog and ideally test the upgrade on a staging cluster.
Checking the health of your cluster can be achieved usinggarage-manage repair.
Until 1.0 is released, patch-level upgrades are treated as minor version upgrades, and minor version upgrades are treated as major version upgrades; i.e. going from 0.6 to 0.7 is a major version upgrade.
Straightforward upgrades (patch-level upgrades). Upgrades must be performed one node at a time: for each node, stop it, upgrade it (change stateVersion or services.garage.package), and restart it if switching did not already do so.
Multiple version upgrades. Garage does not provide any guarantee on moving more than one major version forward. E.g., if you’re on 0.7, you cannot upgrade to 0.9; you need to upgrade to 0.8 first. As long as stateVersion is declared properly, this is enforced automatically. The module will issue a warning to remind the user to upgrade to the latest Garage after that deploy.
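A minimal sketch of pinning the package explicitly for a single-step upgrade, using the garage_0_8 attribute already referenced in this chapter:
{
  # Pin Garage explicitly instead of relying on stateVersion.
  services.garage.package = pkgs.garage_0_8;
}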
Here are some baseline instructions to handle advanced upgrades in Garage; when in doubt, please refer to upstream instructions.
Disable API and web access to Garage.
Performgarage-manage repair --all-nodes --yes tables andgarage-manage repair --all-nodes --yes blocks.
Verify the resulting logs and check that data is synced properly between all nodes.If you have time, do additional checks (scrub,block_refs, etc.).
Check if queues are empty bygarage-manage stats or through monitoring tools.
Runsystemctl stop garage to stop the actual Garage version.
Backup the metadata folder of ALL your nodes, e.g. for a metadata directory (the default one) in/var/lib/garage/meta,you can runpushd /var/lib/garage; tar -acf meta-v0.7.tar.zst meta/; popd.
Run the offline migration:nix-shell -p garage_0_8 --run "garage offline-repair --yes", this can take some time depending on how many objects are stored in your cluster.
Bump Garage version in your NixOS configuration, either by changingstateVersion or bumpingservices.garage.package, this should restart Garage automatically.
Performgarage-manage repair --all-nodes --yes tables andgarage-manage repair --all-nodes --yes blocks.
Wait for a full table sync to run.
Your upgraded cluster should be in a working state, re-enable API and web access.
As stated in the previous paragraph, we must provide a clean upgrade path for Garage since it cannot move more than one major version forward on a single upgrade. This chapter adds some notes on how Garage updates should be rolled out in the future. This is inspired by how Nextcloud does it.
While patch-level updates are no problem and can be done directly in thepackage-expression (and should be backported to supported stable branches after that),major-releases should be added in a new attribute (e.g. Garagev0.8.0should be available innixpkgs aspkgs.garage_0_8_0).To provide simple upgrade paths it’s generally useful to backport those as well to stablebranches. As long as the package-default isn’t altered, this won’t break existing setups.After that, the versioning-warning in thegarage-module should beupdated to make sure that thepackage-option selects the latest versionon fresh setups.
If a major release is abandoned by upstream, we should first check whether it is needed in NixOS for a safe upgrade path before removing it. In that case we should keep those packages, but mark them as insecure in an expression like this (in <nixpkgs/pkgs/tools/filesystem/garage/default.nix>):
# ...{ garage_0_7_3 = generic { version = "0.7.3"; sha256 = "0000000000000000000000000000000000000000000000000000"; eol = true; };}Ideally we should make sure that it’s possible to jump two NixOS versions forward:i.e. the warnings and the logic in the module should guard a user to upgrade from aGarage on e.g. 22.11 to a Garage on 23.11.
Table of Contents
YouTrack is a browser-based bug tracker, issue tracking system and project management software.
YouTrack exposes a web GUI installer on first login.You need a token to access it.You can find this token in the log of theyoutrack service. The log line looks like
* JetBrains YouTrack 2023.3 Configuration Wizard will be available on [http://127.0.0.1:8090/?wizard_token=somelongtoken] after start
Starting with YouTrack 2023.1, JetBrains no longer distributes it as a JAR. The new distribution with the JetBrains Launcher as a ZIP changed the basic data structure and also some configuration parameters. Check out https://www.jetbrains.com/help/youtrack/server/YouTrack-Java-Start-Parameters.html for more information on the new configuration options. When upgrading to YouTrack 2023.1 or higher, a migration script will move the old state directory to /var/lib/youtrack/2022_3 as a backup. A one-time manual update is required:
Before you update take a backup of your YouTrack instance!
Migrate the options you set inservices.youtrack.extraParams andservices.youtrack.jvmOpts toservices.youtrack.generalParameters andservices.youtrack.environmentalParameters (see the examples andthe YouTrack docs)
To start the upgrade setservices.youtrack.package = pkgs.youtrack
YouTrack then starts in upgrade mode, meaning you need to obtain the wizard token as above
Select Upgrade YouTrack
As the source, select /var/lib/youtrack/2022_3/teamsysdata/ (adapt if you have a different state path)
Change the data directory location to/var/lib/youtrack/data/. The other paths should already be right.
If you migrate a larger YouTrack instance, it might be useful to set-Dexodus.entityStore.refactoring.forceAll=true inservices.youtrack.generalParameters for the first startup of YouTrack 2023.x.
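A hedged sketch of the migrated options from the steps above; the generalParameters entry is the flag quoted above, while the environmentalParameters keys (such as listen-port) are illustrative assumptions and should be checked against the module documentation:
{
  services.youtrack = {
    package = pkgs.youtrack;
    generalParameters = [
      # One-time flag for larger instances, as mentioned above.
      "-Dexodus.entityStore.refactoring.forceAll=true"
    ];
    environmentalParameters = {
      # Hypothetical example; verify the exact key names for your setup.
      listen-port = 8090;
    };
  };
}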
Table of Contents
A free and open source manga reader server that runs extensions built for Tachiyomi.
By default, the module will execute Suwayomi-Server backend and web UI:
{ ... }:{ services.suwayomi-server = { enable = true; };}It runs in the systemd service namedsuwayomi-server in the data directory/var/lib/suwayomi-server.
You can change the default parameters with some other parameters:
{ ... }:{ services.suwayomi-server = { enable = true; dataDir = "/var/lib/suwayomi"; # Default is "/var/lib/suwayomi-server" openFirewall = true; settings = { server.port = 4567; }; };}If you want to create a desktop icon, you can activate the system tray option:
{ ... }:{ services.suwayomi-server = { enable = true; dataDir = "/var/lib/suwayomi"; # Default is "/var/lib/suwayomi-server" openFirewall = true; settings = { server.port = 4567; server.enableSystemTray = true; }; };}You can configure a basic authentication to the web interface with:
{ ... }:{ services.suwayomi-server = { enable = true; openFirewall = true; settings = { server.port = 4567; server = { basicAuthEnabled = true; basicAuthUsername = "username"; # NOTE: this is not a real upstream option basicAuthPasswordFile = ./path/to/the/password/file; }; }; };}Not all the configuration options are available directly in this module, but you can add the other options of suwayomi-server with:
{ ... }:{ services.suwayomi-server = { enable = true; openFirewall = true; settings = { server = { port = 4567; autoDownloadNewChapters = false; maxSourcesInParallel = 6; extensionRepos = [ "https://raw.githubusercontent.com/MY_ACCOUNT/MY_REPO/repo/index.min.json" ]; }; }; };}Table of Contents
strfry is a relay for thenostr protocol.
By default, the module will execute strfry:
{ ... }:{ services.strfry.enable = true;}It runs in the systemd service namedstrfry.
You can configure nginx as a reverse proxy with:
{ ... }:{ security.acme = { acceptTerms = true; defaults.email = "foo@bar.com"; }; services.nginx.enable = true; services.nginx.virtualHosts."strfry.example.com" = { addSSL = true; enableACME = true; locations."/" = { proxyPass = "http://127.0.0.1:${toString config.services.strfry.settings.relay.port}"; proxyWebsockets = true; # nostr uses websockets }; }; services.strfry.enable = true;}Table of Contents
Plausible is a privacy-friendly alternative toGoogle analytics.
First, a secret key needs to be generated. This can be done with e.g.
$ openssl rand -base64 64After that,plausible can be deployed like this:
{ services.plausible = { enable = true; server = { baseUrl = "http://analytics.example.org"; # secretKeybaseFile is a path to the file which contains the secret generated # with openssl as described above. secretKeybaseFile = "/run/secrets/plausible-secret-key-base"; }; };}Table of Contents
A self-hosted file sharing platform and an alternative to WeTransfer.
By default, the module will run the Pingvin Share backend and frontend on ports 8080 and 3000.
It will run two systemd services named pingvin-share-backend and pingvin-share-frontend in the specified data directory.
Here is a basic configuration:
{ services.pingvin-share = { enable = true; openFirewall = true; backend.port = 9010; frontend.port = 9011; }; }
The preferred way to run this service is behind a reverse proxy, so that no port is exposed directly. To do so, you can configure nginx like this:
{ services.pingvin-share = { enable = true; hostname = "pingvin-share.domain.tld"; https = true; nginx.enable = true; }; }
Furthermore, you can increase the maximum size of an uploaded file with the option services.nginx.clientMaxBodySize.
Table of Contents
pict-rs is a simple image hosting service.
The minimal configuration to start pict-rs is
{ services.pict-rs.enable = true; }
This will start the HTTP server on port 8080 by default.
pict-rs offers the following endpoints:
POST /image for uploading an image. Uploaded content must be valid multipart/form-data with animage array located within theimages[] key
This endpoint returns the following JSON structure on success with a 201 Created status
{ "files": [ { "delete_token": "JFvFhqJA98", "file": "lkWZDRvugm.jpg" }, { "delete_token": "kAYy9nk2WK", "file": "8qFS0QooAn.jpg" }, { "delete_token": "OxRpM3sf0Y", "file": "1hJaYfGE01.jpg" } ], "msg": "ok"}GET /image/download?url=... Download an image from a remote server, returning the same JSONpayload as thePOST endpoint
GET /image/original/{file} for getting a full-resolution image.file here is thefile key from the/image endpoint’s JSON
GET /image/details/original/{file} for getting the details of a full-resolution image.The returned JSON is structured like so:
{ "width": 800, "height": 537, "content_type": "image/webp", "created_at": [ 2020, 345, 67376, 394363487 ]}GET /image/process.{ext}?src={file}&... get a file with transformations applied.existing transformations include
identity=true: apply no changes
blur={float}: apply a gaussian blur to the file
thumbnail={int}: produce a thumbnail of the image fitting inside an{int} by{int}square using raw pixel sampling
resize={int}: produce a thumbnail of the image fitting inside an{int} by{int} squareusing a Lanczos2 filter. This is slower than sampling but looks a bit better in some cases
crop={int-w}x{int-h}: produce a cropped version of the image with an{int-w} by{int-h}aspect ratio. The resulting crop will be centered on the image. Either the width or heightof the image will remain full-size, depending on the image’s aspect ratio and the requestedaspect ratio. For example, a 1600x900 image cropped with a 1x1 aspect ratio will become 900x900. A1600x1100 image cropped with a 16x9 aspect ratio will become 1600x900.
Supportedext file extensions includepng,jpg, andwebp
An example of usage could be
GET /image/process.jpg?src=asdf.png&thumbnail=256&blur=3.0which would create a 256x256px JPEG thumbnail and blur it
GET /image/details/process.{ext}?src={file}&... for getting the details of a processed image.The returned JSON is the same format as listed for the full-resolution details endpoint.
DELETE /image/delete/{delete_token}/{file} orGET /image/delete/{delete_token}/{file} todelete a file, wheredelete_token andfile are from the/image endpoint’s JSON
Configuring the secure-api-key is not included yet. The envisioned basic use case is consumption on localhost by other services without exposing the service to the internet.
Table of Contents
Nextcloud is an open-source,self-hostable cloud platform. The server setup can be automated usingservices.nextcloud. Adesktop client is packaged atpkgs.nextcloud-client.
The current default in NixOS is nextcloud32, which is also the latest major version available.
Nextcloud is a PHP-based application which requires an HTTP server (the services.nextcloud module optionally supports services.nginx).
For the database, you can set services.nextcloud.config.dbtype to either sqlite (the default), mysql, or pgsql. The simplest is sqlite, which will be automatically created and managed by the application. For the last two, you can easily create a local database by setting services.nextcloud.database.createLocally to true; Nextcloud will automatically be configured to connect to it through a socket.
A very basic configuration may look like this:
{ pkgs, ... }:{ services.nextcloud = { enable = true; hostName = "nextcloud.tld"; database.createLocally = true; config = { dbtype = "pgsql"; adminpassFile = "/path/to/admin-pass-file"; }; }; networking.firewall.allowedTCPPorts = [ 80 443 ];}ThehostName option is used internally to configure an HTTPserver usingPHP-FPMandnginx. Theconfig attribute set isused by the imperative installer and all values are written to an additional fileto ensure that changes can be applied by changing the module’s options.
In case the application serves multiple domains (these are checked with $_SERVER['HTTP_HOST']), they need to be added to services.nextcloud.settings.trusted_domains.
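For example, serving the instance under additional domains might look like this (the example.org domains are placeholders):
{
  services.nextcloud.settings.trusted_domains = [
    "nextcloud.example.org"
    "cloud.example.org"
  ];
}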
Auto updates for Nextcloud apps can be enabled usingservices.nextcloud.autoUpdateApps.
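A minimal sketch of enabling automatic app updates; the startAt value is an arbitrary example schedule:
{
  services.nextcloud.autoUpdateApps = {
    enable = true;
    # Run the update timer early in the morning (systemd calendar syntax).
    startAt = "05:00:00";
  };
}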
nextcloud-occ
The management command occ can be invoked by using the nextcloud-occ wrapper that’s globally available on a system with Nextcloud enabled.
It requires elevated permissions to become thenextcloud user. Given the way the privilegeescalation is implemented, parameters passed via the environment to Nextcloud arecurrently ignored, except forOC_PASS andNC_PASS.
Custom service units that need to runnextcloud-occ either need elevated privilegesor the systemd configuration fromnextcloud-setup.service (recommended):
{ config, ... }:{ systemd.services.my-custom-service = { script = '' nextcloud-occ … ''; serviceConfig = { inherit (config.systemd.services.nextcloud-cron.serviceConfig) User LoadCredential KillMode ; }; };}Please note that the options required are subject to change. Please make sure to read therelease notes when upgrading.
General notes.Unfortunately Nextcloud appears to be very stateful when it comes tomanaging its own configuration. The config file lives in the home directoryof thenextcloud user (by default/var/lib/nextcloud/config/config.php) and is also used totrack several states of the application (e.g., whether installed or not).
All configuration parameters are also stored in/var/lib/nextcloud/config/override.config.php which is generated bythe module and linked from the store to ensure that all values fromconfig.php can be modified by the module.Howeverconfig.php manages the application’s state and shouldn’t betouched manually because of that.
Don’t deleteconfig.php! This filetracks the application’s state and a deletion can cause unwantedside-effects!
Don’t rerunnextcloud-occ maintenance:install!This command tries to install the applicationand can cause unwanted side-effects!
Multiple version upgrades. Nextcloud doesn’t allow moving more than one major version forward. E.g., if you’re on v16, you cannot upgrade to v18; you need to upgrade to v17 first. This is ensured automatically as long as the stateVersion is declared properly. In that case the oldest version available (one major behind the one from the previous NixOS release) will be selected by default and the module will generate a warning that reminds the user to upgrade to the latest Nextcloud after that deploy.
Error: Command "upgrade" is not defined.This error usually occurs if the initial installation(nextcloud-occ maintenance:install) has failed. After that, the applicationis not installed, but the upgrade is attempted to be executed. Further context canbe found inNixOS/nixpkgs#111175.
First of all, it makes sense to find out what went wrong by looking at the logsof the installation viajournalctl -u nextcloud-setup and try to fixthe underlying issue.
If this occurs on anexisting setup, this is most likely becausethe maintenance mode is active. It can be deactivated by runningnextcloud-occ maintenance:mode --off. It’s advisable though tocheck the logs first on why the maintenance mode was activated.
Only perform the following measures onfreshly installed instances!
A re-run of the installer can be forced by deleting /var/lib/nextcloud/config/config.php. This is only advisable here because a fresh install doesn’t have any state that can be lost. In case that doesn’t help, an entire re-creation can be forced via rm -rf ~nextcloud/.
Server-side encryption.Nextcloud supportsserver-side encryption (SSE).This is not an end-to-end encryption, but can be used to encrypt files that will be persistedto external storage such as S3.
Issues with file permissions / unsafe path transitions
systemd-tmpfiles(8) makes sure that the paths for
configuration (including declarative config)
data
app store
home directory itself (usually/var/lib/nextcloud)
are properly set up. However,systemd-tmpfiles will refuse to do soif it detects an unsafe path transition, i.e. creating files/directorieswithin a directory that is neither owned byroot nor bynextcloud, theowning user of the files/directories to be created.
Symptoms of that include
config/override.config.php not being updated (and the config fileeventually being garbage-collected).
failure to read from application data.
To work around that, please make sure that all directories in questionare owned bynextcloud:nextcloud.
Failed to open stream: No such file or directory after deploys
Symptoms are errors like this after a deployment that disappear aftera few minutes:
Warning: file_get_contents(/run/secrets/nextcloud_db_password): Failed to open stream: No such file or directory in /nix/store/lqw657xbh6h67ccv9cgv104qhcs1i2vw-nextcloud-config.php on line 11Warning: http_response_code(): Cannot set response code - headers already sent (output started at /nix/store/lqw657xbh6h67ccv9cgv104qhcs1i2vw-nextcloud-config.php:11) in /nix/store/ikxpaq7kjdhpr4w7cgl1n28kc2gvlhg6-nextcloud-29.0.7/lib/base.php on line 639Cannot decode /run/secrets/nextcloud_secrets, because: Syntax errorThis can happen ifservices.nextcloud.secretFile orservices.nextcloud.config.dbpassFile are managed bysops-nix.
Here,/run/secrets/nextcloud_secrets is a symlink to/run/secrets.d/N/nextcloud_secrets. TheN will be incrementedwhen the sops-nix activation script runs, i.e./run/secrets.d/N doesn’t exist anymore after a deploy,only/run/secrets.d/N+1.
PHP maintains acache forrealpaththat still resolves to the old path which is causingtheNo such file or directory error. Interestingly,the cache isn’t used forfile_exists which is why this warningcomes instead of the error fromnix_read_secret inoverride.config.php.
One option to work around this is to turn off the cache by settingthe cache size to zero:
{ services.nextcloud.phpOptions."realpath_cache_size" = "0"; }
Using an alternative web server as reverse proxy (e.g. httpd)
By default, nginx is used as the reverse proxy for Nextcloud. However, it’s possible to use e.g. httpd by explicitly disabling nginx via services.nginx.enable and setting listen.owner & listen.group in the corresponding phpfpm pool.
An exemplary configuration may look like this:
{ config, lib, pkgs, ...}:{ services.nginx.enable = false; services.nextcloud = { enable = true; hostName = "localhost"; # further, required options }; services.phpfpm.pools.nextcloud.settings = { "listen.owner" = config.services.httpd.user; "listen.group" = config.services.httpd.group; }; services.httpd = { enable = true; adminAddr = "webmaster@localhost"; extraModules = [ "proxy_fcgi" ]; virtualHosts."localhost" = { documentRoot = config.services.nextcloud.package; extraConfig = '' <Directory "${config.services.nextcloud.package}"> <FilesMatch "\.php$"> <If "-f %{REQUEST_FILENAME}"> SetHandler "proxy:unix:${config.services.phpfpm.pools.nextcloud.socket}|fcgi://localhost/" </If> </FilesMatch> <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> DirectoryIndex index.php Require all granted Options +FollowSymLinks </Directory> ''; }; };}Nextcloud apps are installed statefully through the web interface.Some apps may require extra PHP extensions to be installed.This can be configured with theservices.nextcloud.phpExtraExtensions setting.
Alternatively, extra apps can also be declared with theservices.nextcloud.extraApps setting.When using this setting, apps can no longer be managed statefully because this can lead to Nextcloud updating appsthat are managed by Nix:
{ config, pkgs, ... }:{ services.nextcloud.extraApps = with config.services.nextcloud.package.packages.apps; { inherit user_oidc calendar contacts; };}Keep in mind that this is essentially a mirror of the apps from the appstore, but managed innixpkgs. This is by no means a curated list of apps that receive special testing on each update.
If you want automatic updates, it is recommended that you use the web interface to install apps.
This is because
our module writes logs into the journal (journalctl -t Nextcloud)
the Logreader application that allows reading logs in the admin panel is enabledby default and requires logs written to a file.
If you want to view logs in the admin panel,setservices.nextcloud.settings.log_type to “file”.
If you prefer logs in the journal, disable the Logreader application to silence the corresponding notice. We can’t really do that by default, since whether apps are enabled or disabled is part of the application’s state and tracked inside the database.
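For reference, switching the log target to a file as mentioned above is a one-line setting:
{ services.nextcloud.settings.log_type = "file"; }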
As stated in the previous paragraph, we must provide a clean upgrade path for Nextcloud since it cannot move more than one major version forward on a single upgrade. This chapter adds some notes on how Nextcloud updates should be rolled out in the future.
While minor and patch-level updates are no problem and can be done directly in thepackage-expression (and should be backported to supported stable branches after that),major-releases should be added in a new attribute (e.g. Nextcloudv19.0.0should be available innixpkgs aspkgs.nextcloud19).To provide simple upgrade paths it’s generally useful to backport those as well to stablebranches. As long as the package-default isn’t altered, this won’t break existing setups.After that, the versioning-warning in thenextcloud-module should beupdated to make sure that thepackage-option selects the latest versionon fresh setups.
If a major release is abandoned by upstream, we should first check whether it is needed in NixOS for a safe upgrade path before removing it. In that case we should keep those packages, but mark them as insecure in an expression like this (in <nixpkgs/pkgs/servers/nextcloud/default.nix>):
# ...{ nextcloud17 = generic { version = "17.0.x"; sha256 = "0000000000000000000000000000000000000000000000000000"; eol = true; };}Ideally we should make sure that it’s possible to jump two NixOS versions forward:i.e. the warnings and the logic in the module should guard a user to upgrade from aNextcloud on e.g. 19.09 to a Nextcloud on 20.09.
Matomo is a real-time web analytics application. This module configuresphp-fpm as backend for Matomo, optionally configuring an nginx vhost as well.
An automatic setup is not supported by Matomo, so you need to configure Matomoitself in the browser-based Matomo setup.
You also need to configure a MariaDB or MySQL database and a database user for Matomo yourself, and enter those credentials in your browser. You can use passwordless database authentication via the UNIX_SOCKET authentication plugin with the following SQL commands:
# For MariaDB
INSTALL PLUGIN unix_socket SONAME 'auth_socket';
CREATE DATABASE matomo;
CREATE USER 'matomo'@'localhost' IDENTIFIED WITH unix_socket;
GRANT ALL PRIVILEGES ON matomo.* TO 'matomo'@'localhost';

# For MySQL
INSTALL PLUGIN auth_socket SONAME 'auth_socket.so';
CREATE DATABASE matomo;
CREATE USER 'matomo'@'localhost' IDENTIFIED WITH auth_socket;
GRANT ALL PRIVILEGES ON matomo.* TO 'matomo'@'localhost';

Then fill in matomo as database user and database name, and leave the password field blank. This authentication works by allowing only the matomo unix user to authenticate as the matomo database user (without needing a password), but no other users. For more information on passwordless login, see https://mariadb.com/kb/en/mariadb/unix_socket-authentication-plugin/.
Of course, you can use password based authentication as well, e.g. when thedatabase is not on the same host.
This module comes with the systemd servicematomo-archive-processing.service and a timer thatautomatically triggers archive processing every hour. This means that youcan safelydisable browser triggers for Matomo archiving atAdministration > System > General Settings.
With automatic archive processing, you can now also enable the deletion of old visitor logs at Administration > System > Privacy, but make sure that you run systemctl start matomo-archive-processing.service at least once without errors if you have already collected data before, so that the reports get archived before the source data gets deleted.
You only need to take backups of your MySQL database and the/var/lib/matomo/config/config.ini.php file. Use a userin thematomo group or root to access the file. For moreinformation, seehttps://matomo.org/faq/how-to-install/faq_138/.
Matomo will warn you that the JavaScript tracker is not writable. This isbecause it’s located in the read-only nix store. You can safely ignorethis, unless you need a plugin that needs JavaScript tracker access.
You can use other web servers by forwarding calls forindex.php andpiwik.php to theservices.phpfpm.pools.<name>.socketfastcgi unix socket. You can usethe nginx configuration in the module code as a reference to what elseshould be configured.
Table of Contents
Lemmy is a federated alternative to Reddit, written in Rust.
The minimal configuration to start Lemmy is
{ services.lemmy = { enable = true; settings = { hostname = "lemmy.union.rocks"; database.createLocally = true; }; caddy.enable = true; }; }
This will start the backend on port 8536 and the frontend on port 1234. It will expose your instance with a Caddy reverse proxy at the hostname you’ve provided. Postgres will be initialized on that same instance automatically.
On first connection you will be asked to define an admin user.
Exposing with nginx is not implemented yet.
This has been tested using a local database with a unix socket connection. Using different database settings will likely require modifications.
Table of Contents
Keycloak is anopen source identity and access management server with support forOpenID Connect,OAUTH 2.0 andSAML 2.0.
An administrative user with the usernameadmin is automatically created in themaster realm. Its initial password can beconfigured by settingservices.keycloak.initialAdminPasswordand defaults tochangeme. The password isnot stored safely and should be changed immediately in theadmin panel.
Refer to theKeycloak Server Administration Guide for information onhow to administer your Keycloakinstance.
Keycloak can be used with either PostgreSQL, MariaDB orMySQL. Which one is used can beconfigured inservices.keycloak.database.type. The selecteddatabase will automatically be enabled and a database and rolecreated unlessservices.keycloak.database.host is changedfrom its default oflocalhost orservices.keycloak.database.createLocally is set tofalse.
External database access can also be configured by settingservices.keycloak.database.host,services.keycloak.database.name,services.keycloak.database.username,services.keycloak.database.useSSL andservices.keycloak.database.caCert asappropriate. Note that you need to manually create the databaseand allow the configured database user full access to it.
services.keycloak.database.passwordFilemust be set to the path to a file containing the password usedto log in to the database. Ifservices.keycloak.database.hostandservices.keycloak.database.createLocallyare kept at their defaults, the database rolekeycloak with that password is provisionedon the local database instance.
The path should be provided as a string, not a Nix path, since Nixpaths are copied into the world readable Nix store.
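Putting the options above together, a sketch of an external database configuration might look like this; host names, file paths and the exact type value are placeholders or assumptions to adapt:
{
  services.keycloak.database = {
    type = "postgresql"; # assumed value; see the option's documentation
    createLocally = false;
    host = "db.example.com";
    name = "keycloak";
    username = "keycloak";
    useSSL = true;
    caCert = "/etc/ssl/certs/db-ca.pem";
    # Path given as a string so the secret is not copied into the Nix store.
    passwordFile = "/run/keys/db_password";
  };
}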
The hostname is used to build the public URL used as base forall frontend requests and must be configured throughservices.keycloak.settings.hostname.
If you’re migrating an old Wildfly based Keycloak instanceand want to keep compatibility with your current clients,you’ll likely want to setservices.keycloak.settings.http-relative-pathto/auth. See the option descriptionfor more details.
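That is, for a Wildfly-compatible path the setting would be:
{ services.keycloak.settings.http-relative-path = "/auth"; }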
services.keycloak.settings.hostname-backchannel-dynamic
Keycloak has the capability to offer a separate URL for backchannel requests, enabling internal communication while maintaining the use of a public URL for frontchannel requests. Moreover, the backchannel endpoint is dynamically resolved based on incoming headers.
For more information on hostname configuration, see theHostnamesection of the Keycloak Server Installation and ConfigurationGuide.
By default, Keycloak won’t acceptunsecured HTTP connections originating from outside its localnetwork.
HTTPS support requires a TLS/SSL certificate and a private key,bothPEM formatted.Their paths should be set throughservices.keycloak.sslCertificate andservices.keycloak.sslCertificateKey.
The paths should be provided as strings, not Nix paths, since Nix paths are copied into the world-readable Nix store.
You can package custom themes and make them visible toKeycloak throughservices.keycloak.themes. See theThemes section of the Keycloak Server Development Guide and the description of the aforementioned NixOS option formore information.
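A sketch of wiring in a packaged theme; my-custom-theme and pkgs.myKeycloakTheme are hypothetical names for a theme you have packaged yourself:
{
  services.keycloak.themes = {
    # The attribute name is the theme name Keycloak will see.
    my-custom-theme = pkgs.myKeycloakTheme;
  };
}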
Keycloak server configuration parameters can be set inservices.keycloak.settings. These corresponddirectly to options inconf/keycloak.conf. Some of the mostimportant parameters are documented as suboptions, the rest canbe found in theAllconfiguration section of the Keycloak Server Installation andConfiguration Guide.
Options containing secret data should be set to an attributeset containing the attribute_secret - astring pointing to a file containing the value the optionshould be set to. See the description ofservices.keycloak.settings for an example.
A basic configuration with some custom settings could look like this:
{ services.keycloak = { enable = true; settings = { hostname = "keycloak.example.com"; hostname-strict-backchannel = true; }; initialAdminPassword = "e6Wcm0RrtegMEHl"; # change on first login sslCertificate = "/run/keys/ssl_cert"; sslCertificateKey = "/run/keys/ssl_key"; database.passwordFile = "/run/keys/db_password"; };}Table of Contents
With Jitsi Meet on NixOS you can quickly configure a complete,private, self-hosted video conferencing solution.
A minimal configuration using Let’s Encrypt for TLS certificates looks like this:
{ services.jitsi-meet = { enable = true; hostName = "jitsi.example.com"; }; services.jitsi-videobridge.openFirewall = true; networking.firewall.allowedTCPPorts = [ 80 443 ]; security.acme.email = "me@example.com"; security.acme.acceptTerms = true;}Jitsi Meet depends on the Prosody XMPP server only for message passing fromthe web browser while the default Prosody configuration is intended for usewith standalone XMPP clients and XMPP federation. If you only use Prosody asa backend for Jitsi Meet it is therefore recommended to also enableservices.jitsi-meet.prosody.lockdown option to disable unnecessaryProsody features such as federation or the file proxy.
Here is the minimal configuration with additional configurations:
{ services.jitsi-meet = { enable = true; hostName = "jitsi.example.com"; prosody.lockdown = true; config = { enableWelcomePage = false; prejoinPageEnabled = true; defaultLang = "fi"; }; interfaceConfig = { SHOW_JITSI_WATERMARK = false; SHOW_WATERMARK_FOR_GUESTS = false; }; }; services.jitsi-videobridge.openFirewall = true; networking.firewall.allowedTCPPorts = [ 80 443 ]; security.acme.email = "me@example.com"; security.acme.acceptTerms = true;}Table of Contents
Immich is a self-hosted photo and video managementsolution, similar to SaaS offerings like Google Photos.
pgvecto-rs to VectorChord (pre-25.11 installations)
Immich instances that were set up before 25.11 (as in system.stateVersion = 25.11;) will be automatically migrated to VectorChord. Note that this migration is not reversible, so database dumps should be created if desired.
SeeImmich documentation for more details aboutthe automatic migration.
After a successful migration,pgvecto-rs should be removed from the databaseinstallation, unless other applications depend on it.
Make sure VectorChord is enabled (services.immich.database.enableVectorChord) and Immich has completed the migration. Refer to theImmich documentation for details.
Run the following two statements in the PostgreSQL database using a superuser role in Immich’s database.
DROP EXTENSION vectors;DROP SCHEMA vectors;You may use the following command to run these statements against the database:sudo -u postgres psql immich (Replaceimmich with the value ofservices.immich.database.name)
Disable pgvecto-rs by setting services.immich.database.enableVectors to false (see the sketch after this list).
Rebuild and switch.
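A minimal sketch of the two database options touched in the list above:
{
  services.immich.database = {
    enableVectorChord = true; # make sure VectorChord is enabled
    enableVectors = false;    # disable pgvecto-rs after the migration
  };
}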
Table of Contents
With Honk on NixOS you can quickly configure a complete ActivityPub server withminimal setup and support costs.
A minimal configuration looks like this:
{ services.honk = { enable = true; host = "0.0.0.0"; port = 8080; username = "username"; passwordFile = "/etc/honk/password.txt"; servername = "honk.example.com"; }; networking.firewall.allowedTCPPorts = [ 8080 ];}Table of Contents
Hatsu is a fully-automated ActivityPub bridge for static sites.
The minimal configuration to start the Hatsu server would look like this:
{ services.hatsu = { enable = true; settings = { HATSU_DOMAIN = "hatsu.local"; HATSU_PRIMARY_ACCOUNT = "example.com"; }; }; }
This will start the Hatsu server on port 3939 and save the database in /var/lib/hatsu/hatsu.sqlite3.
Please refer to theHatsu Documentation for additional configuration options.
Table of Contents
Grocy is a web-based self-hosted groceries& household management solution for your home.
A very basic configuration may look like this:
{ pkgs, ... }:{ services.grocy = { enable = true; hostName = "grocy.tld"; };}This configures a simple vhost usingnginxwhich listens togrocy.tld with fully configured ACME/LE (this can bedisabled by settingservices.grocy.nginx.enableSSLtofalse). After the initial setup the credentialsadmin:admincan be used to login.
The application’s state is persisted at/var/lib/grocy/grocy.db in asqlite3 database. The migration is applied when requesting the/-routeof the application.
The configuration forgrocy is located at/etc/grocy/config.php.By default, the following settings can be defined in the NixOS-configuration:
{ pkgs, ... }:{ services.grocy.settings = { # The default currency in the system for invoices etc. # Please note that exchange rates aren't taken into account, this # is just the setting for what's shown in the frontend. currency = "EUR"; # The display language (and locale configuration) for grocy. culture = "de"; calendar = { # Whether or not to show the week-numbers # in the calendar. showWeekNumber = true; # Index of the first day to be shown in the calendar (0=Sunday, 1=Monday, # 2=Tuesday and so on). firstDayOfWeek = 2; }; };}If you want to alter the configuration file on your own, you can do this manually withan expression like this:
{ lib, ... }:{ environment.etc."grocy/config.php".text = lib.mkAfter '' // Arbitrary PHP code in grocy's configuration file '';}Table of Contents
GoToSocial is an ActivityPub social network server, written in Golang.
The following configuration sets up PostgreSQL as the database backend and binds GoToSocial to 127.0.0.1:8080, expecting it to be run behind an HTTP proxy on gotosocial.example.com.
{ services.gotosocial = { enable = true; setupPostgresqlDB = true; settings = { application-name = "My GoToSocial"; host = "gotosocial.example.com"; protocol = "https"; bind-address = "127.0.0.1"; port = 8080; }; };}Please refer to theGoToSocial Documentationfor additional configuration options.
Although it is possible to expose GoToSocial directly, it is common practice to operate it behind anHTTP reverse proxy such as nginx.
{ networking.firewall.allowedTCPPorts = [ 80 443 ]; services.nginx = { enable = true; clientMaxBodySize = "40M"; virtualHosts = with config.services.gotosocial.settings; { "${host}" = { enableACME = true; forceSSL = true; locations = { "/" = { recommendedProxySettings = true; proxyWebsockets = true; proxyPass = "http://${bind-address}:${toString port}"; }; }; }; }; };}Please refer toSSL/TLS Certificates with ACME for details on how to provision an SSL/TLS certificate.
After the GoToSocial service is running, thegotosocial-admin utility can be used to manage users. In particular anadministrative user can be created with
$ sudo gotosocial-admin account create --username <nickname> --email <email> --password <password>$ sudo gotosocial-admin account confirm --username <nickname>$ sudo gotosocial-admin account promote --username <nickname>Table of Contents
Glance is a self-hosted dashboard that puts all your feeds in one place.
Visitthe Glance project page to learnmore about it.
Check out the configuration docs to learn more. Use the following configuration to start a public instance of Glance locally:
{ services.glance = { enable = true; settings = { pages = [ { name = "Home"; columns = [ { size = "full"; widgets = [ { type = "calendar"; } { type = "weather"; location = "Nivelles, Belgium"; } ]; } ]; } ]; }; openFirewall = true; };}Table of Contents
FileSender is software that makes it easy to send and receive big files.
FileSender usesSimpleSAMLphp for authentication, which needs to be configured separately.
A minimal working instance of FileSender that uses password authentication would look like this:
let format = pkgs.formats.php { };in{ networking.firewall.allowedTCPPorts = [ 80 443 ]; services.filesender = { enable = true; localDomain = "filesender.example.com"; configureNginx = true; database.createLocally = true; settings = { auth_sp_saml_authentication_source = "default"; auth_sp_saml_uid_attribute = "uid"; storage_filesystem_path = "<STORAGE PATH FOR UPLOADED FILES>"; admin = "admin"; admin_email = "admin@example.com"; email_reply_to = "noreply@example.com"; }; }; services.simplesamlphp.filesender = { settings = { "module.enable".exampleauth = true; }; authSources = { admin = [ "core:AdminPassword" ]; default = format.lib.mkMixedArray [ "exampleauth:UserPass" ] { "admin:admin123" = { uid = [ "admin" ]; cn = [ "admin" ]; mail = [ "admin@example.com" ]; }; }; }; };}
The example above uses a hardcoded clear-text password; in production you should use another authentication method such as LDAP. You can check the supported authentication methods in the SimpleSAMLphp documentation.
Table of Contents
Discourse is amodern and open source discussion platform.
A minimal configuration using Let’s Encrypt for TLS certificates looks like this:
{ services.discourse = { enable = true; hostname = "discourse.example.com"; admin = { email = "admin@example.com"; username = "admin"; fullName = "Administrator"; passwordFile = "/path/to/password_file"; }; secretKeyBaseFile = "/path/to/secret_key_base_file"; }; security.acme.email = "me@example.com"; security.acme.acceptTerms = true;}Provided a proper DNS setup, you’ll be able to connect to theinstance atdiscourse.example.com and log inusing the credentials provided inservices.discourse.admin.
To set up TLS using a regular certificate and key on file, usetheservices.discourse.sslCertificateandservices.discourse.sslCertificateKeyoptions:
{ services.discourse = { enable = true; hostname = "discourse.example.com"; sslCertificate = "/path/to/ssl_certificate"; sslCertificateKey = "/path/to/ssl_certificate_key"; admin = { email = "admin@example.com"; username = "admin"; fullName = "Administrator"; passwordFile = "/path/to/password_file"; }; secretKeyBaseFile = "/path/to/secret_key_base_file"; };}Discourse uses PostgreSQL to store most of itsdata. A database will automatically be enabled and a databaseand role created unlessservices.discourse.database.host is changed fromits default ofnull orservices.discourse.database.createLocally is settofalse.
External database access can also be configured by settingservices.discourse.database.host,services.discourse.database.username andservices.discourse.database.passwordFile asappropriate. Note that you need to manually create a databasecalleddiscourse (or the name you chose inservices.discourse.database.name) andallow the configured database user full access to it.
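A sketch of such an external database setup; the host, password path and names are placeholders:
{
  services.discourse.database = {
    createLocally = false;
    host = "database.example.com";
    name = "discourse";
    username = "discourse";
    passwordFile = "/run/keys/discourse-db-password";
  };
}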
In addition to the basic setup, you’ll want to configure an SMTPserver Discourse can use to send userregistration and password reset emails, among others. You canalso optionally let Discourse receiveemail, which enables people to reply to threads and conversationsvia email.
A basic setup which assumes you want to use your configuredhostname asemail domain can be done like this:
{ services.discourse = { enable = true; hostname = "discourse.example.com"; sslCertificate = "/path/to/ssl_certificate"; sslCertificateKey = "/path/to/ssl_certificate_key"; admin = { email = "admin@example.com"; username = "admin"; fullName = "Administrator"; passwordFile = "/path/to/password_file"; }; mail.outgoing = { serverAddress = "smtp.emailprovider.com"; port = 587; username = "user@emailprovider.com"; passwordFile = "/path/to/smtp_password_file"; }; mail.incoming.enable = true; secretKeyBaseFile = "/path/to/secret_key_base_file"; };}This assumes you have set up an MX record for the address you’veset inhostname andrequires proper SPF, DKIM and DMARC configuration to be done forthe domain you’re sending from, in order for email to be reliably delivered.
If you want to use a different domain for your outgoing email(for exampleexample.com instead ofdiscourse.example.com) you should setservices.discourse.mail.notificationEmailAddress andservices.discourse.mail.contactEmailAddress manually.
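For example, to send from example.com while the instance itself lives at discourse.example.com:
{
  services.discourse.mail = {
    notificationEmailAddress = "notifications@example.com";
    contactEmailAddress = "contact@example.com";
  };
}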
Setup of TLS for incoming email is currently only configuredautomatically when a regular TLS certificate is used, i.e. whenservices.discourse.sslCertificate andservices.discourse.sslCertificateKey areset.
Additional site settings and backend settings, for which noexplicit NixOS options are provided,can be set inservices.discourse.siteSettings andservices.discourse.backendSettings respectively.
“Site settings” are the settings that can bechanged through the DiscourseUI. Theirdefault values can be set usingservices.discourse.siteSettings.
Settings are expressed as a Nix attribute set which matches thestructure of the configuration inconfig/site_settings.yml.To find a setting’s path, you only need to care about the firsttwo levels; i.e. its category (e.g.login)and name (e.g.invite_only).
Settings containing secret data should be set to an attributeset containing the attribute_secret - astring pointing to a file containing the value the optionshould be set to. See the example.
Settings are expressed as a Nix attribute set which matches thestructure of the configuration inconfig/discourse.conf.Empty parameters can be defined by setting them tonull.
The following example sets the title and description of theDiscourse instance and enablesGitHub login in the site settings,and changes a few request limits in the backend settings:
{ services.discourse = { enable = true; hostname = "discourse.example.com"; sslCertificate = "/path/to/ssl_certificate"; sslCertificateKey = "/path/to/ssl_certificate_key"; admin = { email = "admin@example.com"; username = "admin"; fullName = "Administrator"; passwordFile = "/path/to/password_file"; }; mail.outgoing = { serverAddress = "smtp.emailprovider.com"; port = 587; username = "user@emailprovider.com"; passwordFile = "/path/to/smtp_password_file"; }; mail.incoming.enable = true; siteSettings = { required = { title = "My Cats"; site_description = "Discuss My Cats (and be nice plz)"; }; login = { enable_github_logins = true; github_client_id = "a2f6dfe838cb3206ce20"; github_client_secret._secret = /run/keys/discourse_github_client_secret; }; }; backendSettings = { max_reqs_per_ip_per_minute = 300; max_reqs_per_ip_per_10_seconds = 60; max_asset_reqs_per_ip_per_10_seconds = 250; max_reqs_per_ip_mode = "warn+block"; }; secretKeyBaseFile = "/path/to/secret_key_base_file"; };}In the resulting site settings file, thelogin.github_client_secret key will be setto the contents of the/run/keys/discourse_github_client_secretfile.
You can install Discourse pluginsusing theservices.discourse.pluginsoption. Pre-packaged plugins are provided in<your_discourse_package_here>.plugins. Ifyou want the full suite of plugins provided throughnixpkgs, you can also set theservices.discourse.package option topkgs.discourseAllPlugins.
Plugins can be built with the<your_discourse_package_here>.mkDiscoursePluginfunction. Normally, it should suffice to provide aname andsrc attribute. Ifthe plugin has Ruby dependencies, however, they need to bepackaged in accordance with theDeveloping with Rubysection of the Nixpkgs manual and theappropriate gem options set inbundlerEnvArgs(normallygemdir is sufficient). A plugin’sRuby dependencies are listed in itsplugin.rb file as function calls togem. To construct the correspondingGemfile manually, runbundle init, then add thegem lines to itverbatim.
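A minimal sketch of packaging a plugin without Ruby dependencies, assuming pkgs.discourse as the Discourse package and a hypothetical plugin checked out locally, might look like:

{ pkgs, ... }:
{
  services.discourse.plugins = [
    (pkgs.discourse.mkDiscoursePlugin {
      name = "discourse-example-plugin"; # hypothetical plugin name
      src = ./discourse-example-plugin;  # hypothetical local checkout of the plugin source
    })
  ];
}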
Much of the packaging can be done automatically by thenixpkgs/pkgs/servers/web-apps/discourse/update.pyscript - just add the plugin to thepluginslist in theupdate_plugins function and runthe script:
./update.py update-pluginsSome plugins providesite settings.Their defaults can be configured usingservices.discourse.siteSettings, just likeregular site settings. To find the names of these settings, lookin theconfig/settings.yml file of the pluginrepo.
For example, to add thediscourse-spoiler-alertanddiscourse-solvedplugins, and disablediscourse-spoiler-alertby default:
{ services.discourse = { enable = true; hostname = "discourse.example.com"; sslCertificate = "/path/to/ssl_certificate"; sslCertificateKey = "/path/to/ssl_certificate_key"; admin = { email = "admin@example.com"; username = "admin"; fullName = "Administrator"; passwordFile = "/path/to/password_file"; }; mail.outgoing = { serverAddress = "smtp.emailprovider.com"; port = 587; username = "user@emailprovider.com"; passwordFile = "/path/to/smtp_password_file"; }; mail.incoming.enable = true; plugins = with config.services.discourse.package.plugins; [ discourse-spoiler-alert discourse-solved ]; siteSettings = { plugins = { spoiler_enabled = false; }; }; secretKeyBaseFile = "/path/to/secret_key_base_file"; };}Table of Contents
Davis is a CalDAV and CardDAV server. It has a simple, fully translatable admin interface for sabre/dav based on Symfony 5 and Bootstrap 5, initially inspired by Baïkal.
At first, an application secret is needed, this can be generated with:
$ cat /dev/urandom | tr -dc a-zA-Z0-9 | fold -w 48 | head -n 1After that,davis can be deployed like this:
{ services.davis = { enable = true; hostname = "davis.example.com"; mail = { dsn = "smtp://username@example.com:25"; inviteFromAddress = "davis@example.com"; }; adminLogin = "admin"; adminPasswordFile = "/run/secrets/davis-admin-password"; appSecretFile = "/run/secrets/davis-app-secret"; nginx = {}; };}This deploys Davis using a sqlite database running out of/var/lib/davis.
Table of Contents
Castopod is an open-source hosting platform made for podcasters who want to engage and interact with their audience.
Configure ACME (https://nixos.org/manual/nixos/unstable/#module-security-acme). Use the following configuration to start a public instance of Castopod on the castopod.example.com domain:
{ networking.firewall.allowedTCPPorts = [ 80 443 ]; services.castopod = { enable = true; database.createLocally = true; nginx.virtualHost = { serverName = "castopod.example.com"; enableACME = true; forceSSL = true; }; };}Go to https://castopod.example.com/cp-install to create the superadmin account after applying the above configuration.
c2FmZQ is an application that can securely encrypt, store, and share files,including but not limited to pictures and videos.
The servicec2fmzq-server can be enabled by setting
{ services.c2fmzq-server.enable = true; }This will spin up an instance of the server which is API-compatible withStingle Photos and an experimental Progressive Web App(PWA) to interact with the storage via the browser.
In principle the server can be exposed directly on a public interface and thereare command line options to manage HTTPS certificates directly, but the moduleis designed to be served behind a reverse proxy or only accessed via localhost.
{ services.c2fmzq-server = { enable = true; bindIP = "127.0.0.1"; # default port = 8080; # default }; services.nginx = { enable = true; recommendedProxySettings = true; virtualHosts."example.com" = { enableACME = true; forceSSL = true; locations."/" = { proxyPass = "http://127.0.0.1:8080"; }; }; };}For more information, seehttps://github.com/c2FmZQ/c2FmZQ/.
Table of Contents
Akkoma is a lightweight ActivityPub microblogging server forked from Pleroma.
The Elixir configuration file required by Akkoma is generated automatically fromservices.akkoma.config. Secrets must beincluded from external files outside of the Nix store by setting the configuration option toan attribute set containing the attribute_secret – a string pointing to the filecontaining the actual value of the option.
For the mandatory configuration settings these secrets will be generated automatically if thereferenced file does not exist during startup, unless disabled throughservices.akkoma.initSecrets.
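For illustration only, overriding one of these secrets with a file kept outside the Nix store might look roughly like this; the exact configuration key is an assumption based on the upstream Pleroma/Akkoma settings and should be checked against the configuration cheat sheet:

{
  # Assumed key path; verify against the upstream documentation.
  services.akkoma.config.":pleroma"."Pleroma.Web.Endpoint".secret_key_base._secret =
    "/var/lib/secrets/akkoma/key-base";
}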
The following configuration binds Akkoma to the Unix socket/run/akkoma/socket, expecting tobe run behind a HTTP proxy onfediverse.example.com.
{ services.akkoma.enable = true; services.akkoma.config = { ":pleroma" = { ":instance" = { name = "My Akkoma instance"; description = "More detailed description"; email = "admin@example.com"; registration_open = false; }; "Pleroma.Web.Endpoint" = { url.host = "fediverse.example.com"; }; }; };}Please refer to theconfiguration cheat sheetfor additional configuration options.
After the Akkoma service is running, the administration utility can be used tomanage users. In particular anadministrative user can be created with
$ pleroma_ctl user new <nickname> <email> --admin --moderator --password <password>Although it is possible to expose Akkoma directly, it is common practice to operate it behind anHTTP reverse proxy such as nginx.
{ services.akkoma.nginx = { enableACME = true; forceSSL = true; }; services.nginx = { enable = true; clientMaxBodySize = "16m"; recommendedTlsSettings = true; recommendedOptimisation = true; recommendedGzipSettings = true; };}Please refer toSSL/TLS Certificates with ACME for details on how to provision an SSL/TLS certificate.
Without the media proxy function, Akkoma does not store any remote media like pictures or videolocally, and clients have to fetch them directly from the source server.
{ # Enable nginx slice module distributed with Tengine services.nginx.package = pkgs.tengine; # Enable media proxy services.akkoma.config.":pleroma".":media_proxy" = { enabled = true; proxy_opts.redirect_on_failure = true; }; # Adjust the persistent cache size as needed: # Assuming an average object size of 128 KiB, around 1 MiB # of memory is required for the key zone per GiB of cache. # Ensure that the cache directory exists and is writable by nginx. services.nginx.commonHttpConfig = '' proxy_cache_path /var/cache/nginx/cache/akkoma-media-cache levels= keys_zone=akkoma_media_cache:16m max_size=16g inactive=1y use_temp_path=off; ''; services.akkoma.nginx = { locations."/proxy" = { proxyPass = "http://unix:/run/akkoma/socket"; extraConfig = '' proxy_cache akkoma_media_cache; # Cache objects in slices of 1 MiB slice 1m; proxy_cache_key $host$uri$is_args$args$slice_range; proxy_set_header Range $slice_range; # Decouple proxy and upstream responses proxy_buffering on; proxy_cache_lock on; proxy_ignore_client_abort on; # Default cache times for various responses proxy_cache_valid 200 1y; proxy_cache_valid 206 301 304 1h; # Allow serving of stale items proxy_cache_use_stale error timeout invalid_header updating; ''; }; };}The following example enables theMediaProxyWarmingPolicy MRF policy which automaticallyfetches all media associated with a post through the media proxy, as soon as the post isreceived by the instance.
{ services.akkoma.config.":pleroma".":mrf".policies = map (pkgs.formats.elixirConf { }).lib.mkRaw [ "Pleroma.Web.ActivityPub.MRF.MediaProxyWarmingPolicy" ];}Akkoma can generate previews for media.
{ services.akkoma.config.":pleroma".":media_preview_proxy" = { enabled = true; thumbnail_max_width = 1920; thumbnail_max_height = 1080; };}Akkoma will be deployed with theakkoma-fe andadmin-fe frontends by default. These can bemodified by settingservices.akkoma.frontends.
The following example overrides the primary frontend’s default configuration using a customderivation.
{ services.akkoma.frontends.primary.package = pkgs.runCommand "akkoma-fe" { config = builtins.toJSON { expertLevel = 1; collapseMessageWithSubject = false; stopGifs = false; replyVisibility = "following"; webPushHideIfCW = true; hideScopeNotice = true; renderMisskeyMarkdown = false; hideSiteFavicon = true; postContentType = "text/markdown"; showNavShortcuts = false; }; nativeBuildInputs = with pkgs; [ jq xorg.lndir ]; passAsFile = [ "config" ]; } '' mkdir $out lndir ${pkgs.akkoma-frontends.akkoma-fe} $out rm $out/static/config.json jq -s add ${pkgs.akkoma-frontends.akkoma-fe}/static/config.json ${config} \ >$out/static/config.json '';}Akkoma comes with a number of modules to police federation with other ActivityPub instances.The most valuable for typical users is the:mrf_simple modulewhich allows limiting federation based on instance hostnames.
This configuration snippet provides an example on how these can be used. Choosing an adequatefederation policy is not trivial and entails finding a balance between connectivity to the restof the fediverse and providing a pleasant experience to the users of an instance.
{ services.akkoma.config.":pleroma" = with (pkgs.formats.elixirConf { }).lib; { ":mrf".policies = map mkRaw [ "Pleroma.Web.ActivityPub.MRF.SimplePolicy" ]; ":mrf_simple" = { # Tag all media as sensitive media_nsfw = mkMap { "nsfw.weird.kinky" = "Untagged NSFW content"; }; # Reject all activities except deletes reject = mkMap { "kiwifarms.cc" = "Persistent harassment of users, no moderation"; }; # Force posts to be visible by followers only followers_only = mkMap { "beta.birdsite.live" = "Avoid polluting timelines with Twitter posts"; }; }; };}This example strips GPS and location metadata from uploads, deduplicates them and anonymises thethe file name.
{ services.akkoma.config.":pleroma"."Pleroma.Upload".filters = map (pkgs.formats.elixirConf { }).lib.mkRaw [ "Pleroma.Upload.Filter.Exiftool" "Pleroma.Upload.Filter.Dedupe" "Pleroma.Upload.Filter.AnonymizeFilename" ];}Pleroma instances can be migrated to Akkoma either by copying the database and upload data or bypointing Akkoma to the existing data. The necessary database migrations are run automaticallyduring startup of the service.
The configuration has to be copy‐edited manually.
Depending on the size of the database, the initial migration may take a long time and exceed thestartup timeout of the system manager. To work around this issue one may adjust the startup timeoutsystemd.services.akkoma.serviceConfig.TimeoutStartSec or simply run the migrationsmanually:
pleroma_ctl migrateCopying the Pleroma data instead of re‐using it in place may permit easier reversion to Pleroma,but allows the two data sets to diverge.
First disable Pleroma and then copy its database and upload data:
# Create a copy of the databasenix-shell -p postgresql --run 'createdb -T pleroma akkoma'# Copy upload datamkdir /var/lib/akkomacp -R --reflink=auto /var/lib/pleroma/uploads /var/lib/akkoma/After the data has been copied, enable the Akkoma service and verify that the migration has beensuccessful. If no longer required, the original data may then be deleted:
# Delete original databasenix-shell -p postgresql --run 'dropdb pleroma'# Delete original Pleroma staterm -r /var/lib/pleromaTo re‐use the Pleroma data in place, disable Pleroma and enable Akkoma, pointing it to thePleroma database and upload directory.
{ # Adjust these settings according to the database name and upload directory path used by Pleroma services.akkoma.config.":pleroma"."Pleroma.Repo".database = "pleroma"; services.akkoma.config.":pleroma".":instance".upload_dir = "/var/lib/pleroma/uploads";}Please keep in mind that after the Akkoma service has been started, any migrations applied byAkkoma have to be rolled back before the database can be used again with Pleroma. This can beachieved throughpleroma_ctl ecto.rollback. Refer to theEcto SQL documentation fordetails.
The Akkoma systemd service may be confined to a chroot with
{ systemd.services.akkoma.confinement.enable = true; }Confinement of services is not generally supported in NixOS and is therefore disabled by default. Depending on the Akkoma configuration, the default confinement settings may be insufficient and lead to subtle errors at run time, requiring adjustment:
Use systemd.services.akkoma.confinement.packages to make packages available in the chroot.
systemd.services.akkoma.serviceConfig.BindPaths and systemd.services.akkoma.serviceConfig.BindReadOnlyPaths permit access to outside paths through bind mounts. Refer to BindPaths= of systemd.exec(5) for details.
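A hedged sketch of such adjustments; the exiftool package and the bind path below are merely examples of things an instance might need:

{ pkgs, ... }:
{
  # Make additional packages available inside the chroot.
  systemd.services.akkoma.confinement.packages = with pkgs; [ exiftool ];

  # Permit read-only access to a path outside the chroot via a bind mount.
  systemd.services.akkoma.serviceConfig.BindReadOnlyPaths = [ "/etc/hosts" ];
}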
Being an Elixir application, Akkoma can be deployed in a distributed fashion.
This requires settingservices.akkoma.dist.address andservices.akkoma.dist.cookie. Thespecifics depend strongly on the deployment environment. For more information please check therelevantErlang documentation.
Thesystemd-lock-handler module provides a service that bridgesD-Bus events fromlogind to user-level systemd targets:
lock.target started byloginctl lock-session,
unlock.target started byloginctl unlock-session and
sleep.target started bysystemctl suspend.
You can create a user service that starts with any of these targets.
For example, to create a service forswaylock:
{ services.systemd-lock-handler.enable = true; systemd.user.services.swaylock = { description = "Screen locker for Wayland"; documentation = [ "man:swaylock(1)" ]; # If swaylock exits cleanly, unlock the session: onSuccess = [ "unlock.target" ]; # When lock.target is stopped, stops this too: partOf = [ "lock.target" ]; # Delay lock.target until this service is ready: before = [ "lock.target" ]; wantedBy = [ "lock.target" ]; serviceConfig = { # systemd will consider this service started when swaylock forks... Type = "forking"; # ... and swaylock will fork only after it has locked the screen. ExecStart = "${lib.getExe pkgs.swaylock} -f"; # If swaylock crashes, always restart it immediately: Restart = "on-failure"; RestartSec = 0; }; };}Seeupstream documentation for more information.
Table of Contents
Kerberos is a computer-network authentication protocol that works on the basis of tickets to allow nodes communicating over a non-secure network to prove their identity to one another in a secure manner.
This module provides both the MIT and Heimdal implementations of a Kerberos server.
To enable a Kerberos server:
{ security.krb5 = { # Here you can choose between the MIT and Heimdal implementations. package = pkgs.krb5; # package = pkgs.heimdal; # Optionally set up a client on the same machine as the server enable = true; settings = { libdefaults.default_realm = "EXAMPLE.COM"; realms."EXAMPLE.COM" = { kdc = "kerberos.example.com"; admin_server = "kerberos.example.com"; }; }; }; services.kerberos_server = { enable = true; settings = { realms."EXAMPLE.COM" = { acl = [ { principal = "adminuser"; access = [ "add" "cpw" ]; } ]; }; }; };}The Heimdal documentation will sometimes assume that state is stored in /var/heimdal, but this module uses /var/lib/heimdal instead.
Because the implementation is chosen through security.krb5.package, it is not possible to have a system where the client uses one implementation and the server another.
Whileservices.kerberos_server.settings has a common freeform type between the two implementations, the actual settings that can be set can vary between the two implementations. To figure out what settings are available, you should consult the upstream documentation for the implementation you are using.
MIT Kerberos homepage: https://web.mit.edu/kerberos
MIT Kerberos docs: https://web.mit.edu/kerberos/krb5-latest/doc/index.html
Heimdal Kerberos GitHub wiki: https://github.com/heimdal/heimdal/wiki
Heimdal kerberos doc manpages (Debian unstable): https://manpages.debian.org/unstable/heimdal-docs/index.html
Heimdal Kerberos kdc manpages (Debian unstable): https://manpages.debian.org/unstable/heimdal-kdc/index.html
Note the version number in the URLs; it may differ for the latest version.
Table of Contents
Meilisearch is a lightweight, fast and powerful search engine. Think Elasticsearch with a much smaller footprint.
The minimal configuration to start Meilisearch is:
{ services.meilisearch.enable = true; }This will start the HTTP server included with Meilisearch on port 7700.
Test it with curl -X GET 'http://localhost:7700/health'.
You first need to add documents to an index before you can search them.
Add documents to the movies index: curl -X POST 'http://127.0.0.1:7700/indexes/movies/documents' --data '[{"id": "123", "title": "Superman"}, {"id": 234, "title": "Batman"}]'
Search the movies index: curl 'http://127.0.0.1:7700/indexes/movies/search' --data '{ "q": "botman" }' (the typo is intentional, to demonstrate the typo-tolerance capabilities)
The default NixOS package doesn't come with the dashboard, since building the dashboard requires downloading assets at compile time.
Anonymized analytics sent to Meilisearch are disabled by default.
The default deployment runs in development mode: it doesn't require a secret master key, and all routes are unprotected and accessible.
The snapshot feature is not yet configurable from the module; it is just a matter of adding the relevant environment variables.
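For example, snapshot-related environment variables could be passed through the generic systemd options; the variable names below are assumptions based on upstream Meilisearch documentation and should be verified against the version you run:

{
  services.meilisearch.enable = true;

  # Assumed upstream variable names; verify before relying on them.
  systemd.services.meilisearch.environment = {
    MEILI_SCHEDULE_SNAPSHOT = "86400"; # take a snapshot once a day (interval in seconds)
    MEILI_SNAPSHOT_DIR = "/var/lib/meilisearch/snapshots"; # where snapshots are written
  };
}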
Table of Contents
Source:modules/services/networking/yggdrasil/default.nix
Upstream documentation:https://yggdrasil-network.github.io/
Yggdrasil is an early-stage implementation of a fully end-to-end encrypted,self-arranging IPv6 network.
An annotated example of a simple configuration:
{ services.yggdrasil = { enable = true; persistentKeys = false; # The NixOS module will generate new keys and a new IPv6 address each time # it is started if persistentKeys is not enabled. settings = { Peers = [ # Yggdrasil will automatically connect and "peer" with other nodes it # discovers via link-local multicast announcements. Unless this is the # case (it probably isn't) a node needs peers within the existing # network that it can tunnel to. "tcp://1.2.3.4:1024" "tcp://1.2.3.5:1024" # Public peers can be found at # https://github.com/yggdrasil-network/public-peers ]; }; };}A node with a fixed address that announces a prefix:
let address = "210:5217:69c0:9afc:1b95:b9f:8718:c3d2"; prefix = "310:5217:69c0:9afc"; # taken from the output of "yggdrasilctl getself".in{ services.yggdrasil = { enable = true; persistentKeys = true; # Maintain a fixed public key and IPv6 address. settings = { Peers = [ "tcp://1.2.3.4:1024" "tcp://1.2.3.5:1024" ]; NodeInfo = { # This information is visible to the network. name = config.networking.hostName; location = "The North Pole"; }; }; }; boot.kernel.sysctl."net.ipv6.conf.all.forwarding" = 1; # Forward traffic under the prefix. networking.interfaces.${eth0}.ipv6.addresses = [ { # Set a 300::/8 address on the local physical device. address = prefix + "::1"; prefixLength = 64; } ]; services.radvd = { # Announce the 300::/8 prefix to eth0. enable = true; config = '' interface eth0 { AdvSendAdvert on; prefix ${prefix}::/64 { AdvOnLink on; AdvAutonomous on; }; route 200::/8 {}; }; ''; };}A NixOS container attached to the Yggdrasil network via a node running on thehost:
let yggPrefix64 = "310:5217:69c0:9afc"; # Again, taken from the output of "yggdrasilctl getself".in{ boot.kernel.sysctl."net.ipv6.conf.all.forwarding" = 1; # Enable IPv6 forwarding. networking = { bridges.br0.interfaces = [ ]; # A bridge only to containers… interfaces.br0 = { # … configured with a prefix address. ipv6.addresses = [ { address = "${yggPrefix64}::1"; prefixLength = 64; } ]; }; }; containers.foo = { autoStart = true; privateNetwork = true; hostBridge = "br0"; # Attach the container to the bridge only. config = { config, pkgs, ... }: { networking.interfaces.eth0.ipv6 = { addresses = [ { # Configure a prefix address. address = "${yggPrefix64}::2"; prefixLength = 64; } ]; routes = [ { # Configure the prefix route. address = "200::"; prefixLength = 7; via = "${yggPrefix64}::1"; } ]; }; services.httpd.enable = true; networking.firewall.allowedTCPPorts = [ 80 ]; }; };}Table of Contents
uMurmur is a minimalistic Mumble server primarily targeted to run on embedded computers. This module enables it (umurmurd).
{ services.umurmur = { enable = true; openFirewall = true; settings = { port = 7365; channels = [ { name = "root"; parent = ""; description = "Root channel. No entry."; noenter = true; } { name = "lobby"; parent = "root"; description = "Lobby channel"; } ]; default_channel = "lobby"; }; };}See a full configuration inumurmur.conf.example
Table of Contents
Prosody is an open-source, modern XMPP server.
A common struggle for most XMPP newcomers is to find the right set of XMPP Extensions (XEPs) to set up. Forget to activate a few of those and your XMPP experience might turn into a nightmare!
The XMPP community tackles this problem by creating a meta-XEPlisting a decent set of XEPs you should implement. This meta-XEPis issued every year, the 2020 edition beingXEP-0423.
The NixOS Prosody module will implement most of these recommended XEPs out of the box. That being said, two components still require some manual configuration: the Multi User Chat (MUC) and the HTTP File Upload ones. You'll need to create a DNS subdomain for each of those. The current convention is to name your MUC endpoint conference.example.org and your HTTP upload domain upload.example.org.
A good configuration to start with, including aMulti User Chat (MUC)endpoint as well as aHTTP File Uploadendpoint will look like this:
{ services.prosody = { enable = true; admins = [ "root@example.org" ]; ssl.cert = "/var/lib/acme/example.org/fullchain.pem"; ssl.key = "/var/lib/acme/example.org/key.pem"; virtualHosts."example.org" = { enabled = true; domain = "example.org"; ssl.cert = "/var/lib/acme/example.org/fullchain.pem"; ssl.key = "/var/lib/acme/example.org/key.pem"; }; muc = [ { domain = "conference.example.org"; } ]; uploadHttp = { domain = "upload.example.org"; }; };}As you can see in the code snippet from theprevious section,you’ll need a single TLS certificate covering your main endpoint,the MUC one as well as the HTTP Upload one. We can generate such acertificate by leveraging the ACMEextraDomainNames module option.
Given the setup detailed in the previous section, you'll need the following ACME configuration to generate a TLS certificate for the three endpoints:
{ security.acme = { email = "root@example.org"; acceptTerms = true; certs = { "example.org" = { webroot = "/var/www/example.org"; email = "root@example.org"; extraDomainNames = [ "conference.example.org" "upload.example.org" ]; }; }; };}Table of Contents
Pleroma is a lightweight ActivityPub server.
The pleroma_ctl CLI utility will ask you a few questions and generate an initial config file. This is an example of its usage:
$ mkdir tmp-pleroma$ cd tmp-pleroma$ nix-shell -p pleroma-otp$ pleroma_ctl instance gen --output config.exs --output-psql setup.psqlTheconfig.exs file can be further customized following the instructions on theupstream documentation. Many refinements can be applied also after the service is running.
First, the PostgreSQL service must be enabled in the NixOS configuration:
{ services.postgresql = { enable = true; package = pkgs.postgresql_13; };}and activated with the usual
$ nixos-rebuild switchThen you can create and seed the database, using thesetup.psql file that you generated in the previous section, by running
$ sudo -u postgres psql -f setup.psqlIn this section we will enable the Pleroma service only locally, so its configuration can be improved incrementally.
This is an example configuration, where the services.pleroma.configs option contains the content of the file config.exs generated in the first section, but with the secrets (database password, endpoint secret key, salts, etc.) removed. Removing secrets is important, because otherwise they would be stored world-readable in the Nix store.
{ services.pleroma = { enable = true; secretConfigFile = "/var/lib/pleroma/secrets.exs"; configs = [ '' import Config config :pleroma, Pleroma.Web.Endpoint, url: [host: "pleroma.example.net", scheme: "https", port: 443], http: [ip: {127, 0, 0, 1}, port: 4000] config :pleroma, :instance, name: "Test", email: "admin@example.net", notify_email: "admin@example.net", limit: 5000, registrations_open: true config :pleroma, :media_proxy, enabled: false, redirect_on_failure: true config :pleroma, Pleroma.Repo, adapter: Ecto.Adapters.Postgres, username: "pleroma", database: "pleroma", hostname: "localhost" # Configure web push notifications config :web_push_encryption, :vapid_details, subject: "mailto:admin@example.net" # ... TO CONTINUE ... '' ]; };}Secrets must be moved into a file pointed byservices.pleroma.secretConfigFile, in our case/var/lib/pleroma/secrets.exs. This file can be created copying the previously generatedconfig.exs file and then removing all the settings, except the secrets. This is an example
# Pleroma instance passwordsimport Configconfig :pleroma, Pleroma.Web.Endpoint, secret_key_base: "<the secret generated by pleroma_ctl>", signing_salt: "<the secret generated by pleroma_ctl>"config :pleroma, Pleroma.Repo, password: "<the secret generated by pleroma_ctl>"# Configure web push notificationsconfig :web_push_encryption, :vapid_details, public_key: "<the secret generated by pleroma_ctl>", private_key: "<the secret generated by pleroma_ctl>"# ... TO CONTINUE ...Note that the lines of the same configuration group are comma separated (i.e. all the lines end with a comma, except the last one), so when the lines with passwords are added or removed, commas must be adjusted accordingly.
The service can be enabled with the usual
$ nixos-rebuild switchThe service is accessible only on the local address 127.0.0.1:4000. It can be tested using port forwarding like this:
$ ssh -L 4000:localhost:4000 myuser@example.netand then accessinghttp://localhost:4000 from a web browser.
After the Pleroma service is running, all Pleroma administration utilities can be used. In particular, an admin user can be created with:
$ pleroma_ctl user new <nickname> <email> --admin --moderator --password <password>In this configuration, Pleroma is listening only on the local port 4000. Nginx can be configured as a reverse proxy, forwarding requests from public ports to the Pleroma service. This is an example configuration using Let's Encrypt for the TLS certificates:
{ security.acme = { email = "root@example.net"; acceptTerms = true; }; services.nginx = { enable = true; addSSL = true; recommendedTlsSettings = true; recommendedOptimisation = true; recommendedGzipSettings = true; recommendedProxySettings = false; # NOTE: if enabled, the NixOS proxy optimizations will override the Pleroma # specific settings, and they will enter in conflict. virtualHosts = { "pleroma.example.net" = { http2 = true; enableACME = true; forceSSL = true; locations."/" = { proxyPass = "http://127.0.0.1:4000"; extraConfig = '' etag on; gzip on; add_header 'Access-Control-Allow-Origin' '*' always; add_header 'Access-Control-Allow-Methods' 'POST, PUT, DELETE, GET, PATCH, OPTIONS' always; add_header 'Access-Control-Allow-Headers' 'Authorization, Content-Type, Idempotency-Key' always; add_header 'Access-Control-Expose-Headers' 'Link, X-RateLimit-Reset, X-RateLimit-Limit, X-RateLimit-Remaining, X-Request-Id' always; if ($request_method = OPTIONS) { return 204; } add_header X-XSS-Protection "1; mode=block"; add_header X-Permitted-Cross-Domain-Policies none; add_header X-Frame-Options DENY; add_header X-Content-Type-Options nosniff; add_header Referrer-Policy same-origin; add_header X-Download-Options noopen; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_set_header Host $host; client_max_body_size 16m; # NOTE: increase if users need to upload very big files ''; }; }; }; };}Table of Contents
NetBird is a VPN built on top of WireGuard®, making it easy to create secure private networks for your organization or home.
To fully set up NetBird as a self-hosted server, you need both a Coturn server and an identity provider; the list of supported SSOs and their setup instructions are available in NetBird's documentation.
There are quite a few settings that need to be passed to NetBird for it to function, and a minimal config looks like:
{ services.netbird.server = { enable = true; domain = "netbird.example.selfhosted"; enableNginx = true; coturn = { enable = true; passwordFile = "/path/to/a/secret/password"; }; management = { oidcConfigEndpoint = "https://sso.example.selfhosted/oauth2/openid/netbird/.well-known/openid-configuration"; settings = { TURNConfig = { Turns = [ { Proto = "udp"; URI = "turn:netbird.example.selfhosted:3478"; Username = "netbird"; Password._secret = "/path/to/a/secret/password"; } ]; }; }; }; };}Table of Contents
The absolute minimal configuration for the Netbird client daemon looks like this:
{ services.netbird.enable = true; }This will set up a netbird service listening on port 51820, associated with the wt0 interface.
Which is equivalent to:
{ services.netbird.clients.wt0 = { port = 51820; name = "netbird"; interface = "wt0"; hardened = false; };}This will set up a netbird.service listening on port 51820, associated with the wt0 interface. A netbird-wt0 binary will also be installed in addition to netbird.
See the clients option documentation for more details.
Using theservices.netbird.clients option, it is possible to define more thanone netbird service running at the same time.
You must at least define a port for the service to listen on; the rest is optional:
{ services.netbird.clients.wt1.port = 51830; services.netbird.clients.wt2.port = 51831;}See the clients option documentation for more details.
You can easily expose services exclusively to the Netbird network by combining networking.firewall.interfaces rules with interface names:
{ services.netbird.clients.priv.port = 51819; services.netbird.clients.work.port = 51818; networking.firewall.interfaces = { "${config.services.netbird.clients.priv.interface}" = { allowedUDPPorts = [ 1234 ]; }; "${config.services.netbird.clients.work.interface}" = { allowedTCPPorts = [ 8080 ]; }; };}Each Netbird client service by default:
runs in ahardened mode,
starts with the system,
opens up a firewall for direct (without TURN servers)peer-to-peer communication,
can be additionally configured with environment variables,
automatically determines whethernetbird-ui-<name> should be available,
autoStart allows you to start the client (an actual systemd service) on demand, for example to connect to a work-related or otherwise conflicting network only when required; see the sketch after this list and the option description for more information.
environment allows you to pass additional configuration through environment variables, but special care needs to be taken when overriding the config location and daemon address due to the hardened option.
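For instance, a sketch of a client that only connects on demand (the port is arbitrary):

{
  services.netbird.clients.work = {
    port = 51818;
    # Don't connect at boot; start the client's systemd service manually
    # (e.g. via systemctl) only when the work network is needed.
    autoStart = false;
  };
}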
Table of Contents
Mosquitto is an MQTT broker often used for IoT or home automation data transport.
A minimal configuration for Mosquitto is
{ services.mosquitto = { enable = true; listeners = [ { acl = [ "pattern readwrite #" ]; omitPasswordAuth = true; settings.allow_anonymous = true; } ]; };}This will start a broker on port 1883, listening on all interfaces of the machine, allowingread/write access to all topics to any user without password requirements.
User authentication can be configured with theusers key of listeners. A config that givesfull read access to a usermonitor and restricted write access to a userservice could looklike
{ services.mosquitto = { enable = true; listeners = [ { users = { monitor = { acl = [ "read #" ]; password = "monitor"; }; service = { acl = [ "write service/#" ]; password = "service"; }; }; } ]; };}TLS authentication is configured by setting TLS-related options of the listener:
{ services.mosquitto = { enable = true; listeners = [ { port = 8883; # port change is not required, but helpful to avoid mistakes # ... settings = { cafile = "/path/to/mqtt.ca.pem"; certfile = "/path/to/mqtt.pem"; keyfile = "/path/to/mqtt.key"; }; } ]; };}The Mosquitto configuration has four distinct types of settings:the global settings of the daemon, listeners, plugins, and bridges.Bridges and listeners are part of the global configuration, plugins are part of listeners.Users of the broker are configured as parts of listeners rather than globally, allowingconfigurations in which a given user is only allowed to log in to the broker using specificlisteners (eg to configure an admin user with full access to all topics, but restricted tolocalhost).
Almost all options of Mosquitto are available for configuration at their appropriate levels, some as NixOS options written in camel case, the remainder under settings with their exact names in the Mosquitto config file. The exceptions are acl_file (which is always set according to the acl attributes of a listener and its users) and per_listener_settings (which is always set to true).
Mosquitto can be run in two modes, with a password file or without. Each listener has its ownpassword file, and different listeners may use different password files. Password file generationcan be disabled by settingomitPasswordAuth = true for a listener; in this case it is necessaryto either setsettings.allow_anonymous = true to allow all logins, or to configure otherauthentication methods like TLS client certificates withsettings.use_identity_as_username = true.
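As a hedged sketch, a listener that skips password generation and instead authenticates clients by their TLS client certificates could look roughly like this (certificate paths are placeholders):

{
  services.mosquitto = {
    enable = true;
    listeners = [
      {
        port = 8883;
        omitPasswordAuth = true; # don't generate a password file for this listener
        settings = {
          # Use the certificate CN as the username and require a client certificate.
          use_identity_as_username = true;
          require_certificate = true;
          cafile = "/path/to/mqtt.ca.pem";
          certfile = "/path/to/mqtt.pem";
          keyfile = "/path/to/mqtt.key";
        };
      }
    ];
  };
}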
The default is to generate a password file for each listener from the users configured to thatlistener. Users with no configured password will not be added to the password file and thuswill not be able to use the broker.
Every listener has a Mosquittoacl_file attached to it. This ACL is configured via twoattributes of the config:
theacl attribute of the listener configures pattern ACL entries and topic ACL entriesfor anonymous users. Each entry must be prefixed withpattern ortopic to distinguishbetween these two cases.
theacl attribute of every user configures in the listener configured the ACL for thatgiven user. Only topic ACLs are supported by Mosquitto in this setting, so no prefix isrequired or allowed.
The default ACL for a listener is empty, disallowing all accesses from all clients. To configurea completely open ACL, setacl = [ "pattern readwrite #" ] in the listener.
Table of Contents
TheJottacloud Command-line Tool is a headlessJottacloud client.
{ services.jotta-cli.enable = true; }This addsjotta-cli toenvironment.systemPackages and starts a user service that runsjottad with the default options.
{ services.jotta-cli = { enable = true; options = [ "slow" ]; package = pkgs.jotta-cli; };}This usesjotta-cli andjottad from thepkgs.jotta-cli package and startsjottad in low memory mode.
jottad is also added toenvironment.systemPackages, sojottad --help can be used to explore options.
Table of Contents
GNS3 is a network software emulator.
A minimal configuration looks like this:
{ services.gns3-server = { enable = true; auth = { enable = true; user = "gns3"; passwordFile = "/var/lib/secrets/gns3_password"; }; ssl = { enable = true; certFile = "/var/lib/gns3/ssl/cert.pem"; keyFile = "/var/lib/gns3/ssl/key.pem"; }; dynamips.enable = true; ubridge.enable = true; vpcs.enable = true; };}Table of Contents
A storage server for Firefox Sync that you can easily host yourself.
The absolute minimal configuration for the sync server looks like this:
{ services.mysql.package = pkgs.mariadb; services.firefox-syncserver = { enable = true; secrets = builtins.toFile "sync-secrets" '' SYNC_MASTER_SECRET=this-secret-is-actually-leaked-to-/nix/store ''; singleNode = { enable = true; hostname = "localhost"; url = "http://localhost:5000"; }; };}This will start a sync server that is only accessible locally. Once the service is running, you can navigate to about:config in your Firefox profile and set identity.sync.tokenserver.uri to http://localhost:5000/1.0/sync/1.5. Your browser will now use your local sync server for data storage.
This configuration should never be used in production. It is not encrypted andstores its secrets in a world-readable location.
The firefox-syncserver service provides a number of options to make setting up a small deployment easier. These are grouped under the singleNode element of the option tree and allow simple configuration of the most important parameters.
Single node setup is split into two kinds of options: those that affect the syncserver itself, and those that affect its surroundings. Options that affect thesync server arecapacity, which configures how many accounts may be active onthis instance, andurl, which holds the URL under which the sync server can beaccessed. Theurl can be configured automatically when using nginx.
Options that affect the surroundings of the sync server areenableNginx,enableTLS andhostname. IfenableNginx is set the sync server module willautomatically add an nginx virtual host to the system usinghostname as thedomain and seturl accordingly. IfenableTLS is set the module will alsoenable ACME certificates on the new virtual host and force all connections tobe made via TLS.
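Putting this together, a sketch of a small public deployment behind nginx might look like the following; the host name, capacity and secrets path are placeholders:

{ pkgs, ... }:
{
  services.mysql.package = pkgs.mariadb;

  services.firefox-syncserver = {
    enable = true;
    # Keep the secrets file outside of the world-readable Nix store.
    secrets = "/run/secrets/firefox-syncserver";
    singleNode = {
      enable = true;
      enableNginx = true;
      enableTLS = true;              # provisions an ACME certificate and forces HTTPS
      hostname = "sync.example.com"; # placeholder domain
      capacity = 5;                  # number of accounts allowed on this instance
    };
  };
}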
For actual deployment it is also recommended to store thesecrets file in asecure location.
Table of Contents
DNS-over-HTTPS is a high performance DNS over HTTPS client & server. This module enables its server part (doh-server).
Setup with Nginx + ACME (recommended):
{ services.doh-server = { enable = true; settings = { upstream = [ "udp:1.1.1.1:53" ]; }; }; services.nginx = { enable = true; virtualHosts."doh.example.com" = { enableACME = true; forceSSL = true; http2 = true; locations."/".return = 404; locations."/dns-query" = { proxyPass = "http://127.0.0.1:8053/dns-query"; recommendedProxySettings = true; }; }; # and other virtual hosts ... }; security.acme = { acceptTerms = true; defaults.email = "you@example.com"; }; networking.firewall.allowedTCPPorts = [ 80 443 ];}doh-server can also work as a standalone HTTPS web server (with SSL cert and key specified), but this is not recommended, as doh-server does not do OCSP stapling.
Set up a standalone instance with ACME:
let domain = "doh.example.com";in{ security.acme.certs.${domain} = { dnsProvider = "cloudflare"; credentialFiles."CF_DNS_API_TOKEN_FILE" = "/run/secrets/cf-api-token"; }; services.doh-server = { enable = true; settings = { listen = [ ":443" ]; upstream = [ "udp:1.1.1.1:53" ]; }; useACMEHost = domain; }; networking.firewall.allowedTCPPorts = [ 443 ];}See a full configuration in https://github.com/m13253/dns-over-https/blob/master/doh-server/doh-server.conf.
Table of Contents
Dnsmasq is an integrated DNS, DHCP and TFTP server for small networks.
On a home network, you can use Dnsmasq as a DHCP and DNS server. New devices onyour network will be configured by Dnsmasq, and instructed to use it as the DNSserver by default. This allows you to rely on your own server to perform DNSqueries and caching, with DNSSEC enabled.
The following example assumes that
you have disabled your router’s integrated DHCP server, if it has one
your router’s address is set innetworking.defaultGateway.address
your system’s Ethernet interface iseth0
you have configured the address(es) to forward DNS queries innetworking.nameservers
{ services.dnsmasq = { enable = true; settings = { interface = "eth0"; bind-interfaces = true; # Only bind to the specified interface dhcp-authoritative = true; # Should be set when dnsmasq is definitely the only DHCP server on a network server = config.networking.nameservers; # Upstream dns servers to which requests should be forwarded dhcp-host = [ # Give the current system a fixed address of 192.168.0.254 "dc:a6:32:0b:ea:b9,192.168.0.254,${config.networking.hostName},infinite" ]; dhcp-option = [ # Address of the gateway, i.e. your router "option:router,${config.networking.defaultGateway.address}" ]; dhcp-range = [ # Range of IPv4 addresses to give out # <range start>,<range end>,<lease time> "192.168.0.10,192.168.0.253,24h" # Enable stateless IPv6 allocation "::f,::ff,constructor:eth0,ra-stateless" ]; dhcp-rapid-commit = true; # Faster DHCP negotiation for IPv6 local-service = true; # Accept DNS queries only from hosts whose address is on a local subnet log-queries = true; # Log results of all DNS queries bogus-priv = true; # Don't forward requests for the local address ranges (192.168.x.x etc) to upstream nameservers domain-needed = true; # Don't forward requests without dots or domain parts to upstream nameservers dnssec = true; # Enable DNSSEC # DNSSEC trust anchor. Source: https://data.iana.org/root-anchors/root-anchors.xml trust-anchor = ".,20326,8,2,E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC683457104237C7F8EC8D"; }; };}Upstream website:https://dnsmasq.org
Table of Contents
Crab-hole is a cross-platform Pi-hole clone written in Rust using hickory-dns/trust-dns. It can be used as a network-wide ad and spy blocker or run on your local PC.
For secure and private communication, crab-hole has built-in support for DoH (HTTPS), DoQ (QUIC) and DoT (TLS) for down- and upstreams, and DNSSEC for upstreams. It also comes with privacy-friendly default logging settings.
As an example config file using Cloudflare as DoT upstream, you can use thiscrab-hole.toml
The following is a basic nix config using UDP as a downstream and Cloudflare as upstream.
{ services.crab-hole = { enable = true; settings = { blocklist = { include_subdomains = true; lists = [ "https://raw.githubusercontent.com/StevenBlack/hosts/master/alternates/fakenews-gambling-porn/hosts" "https://s3.amazonaws.com/lists.disconnect.me/simple_tracking.txt" ]; }; downstream = [ { protocol = "udp"; listen = "127.0.0.1"; port = 53; } { protocol = "udp"; listen = "::1"; port = 53; } ]; upstream = { name_servers = [ { socket_addr = "1.1.1.1:853"; protocol = "tls"; tls_dns_name = "1dot1dot1dot1.cloudflare-dns.com"; trust_nx_responses = false; } { socket_addr = "[2606:4700:4700::1111]:853"; protocol = "tls"; tls_dns_name = "1dot1dot1dot1.cloudflare-dns.com"; trust_nx_responses = false; } ]; }; }; };}To test your setup, just query the DNS server with any domain likeexample.com.To test if a domain gets blocked, just choose one of the domains from the blocklist.If the server does not return an IP, this worked correctly.
There are multiple protocols which are supported for the downstream:UDP, TLS, HTTPS and QUIC.Below you can find a brief overview over the various protocol options together with an example for each protocol.
UDP is the simplest downstream, but it is not encrypted. If you want encryption, you need to use another protocol. Note: this also opens a TCP port.
{ services.crab-hole.settings.downstream = [ { protocol = "udp"; listen = "localhost"; port = 53; } ];}TLS is a simple encrypted options to serve DNS.It comes with similar settings to UDP,but you additionally need a valid TLS certificate and its private key.The later are specified via a path to the files.A valid TLS certificate and private key can be obtained using services like ACME.Make sure the crab-hole service user has access to these files.Additionally you can set an optional timeout value.
{ services.crab-hole.settings.downstream = [ { protocol = "tls"; listen = "[::]"; port = 853; certificate = ./dns.example.com.crt; key = "/dns.example.com.key"; # optional (default = 3000) timeout_ms = 3000; } ];}HTTPS has similar settings to TLS, with the only difference being the additionaldns_hostname option.This protocol might need a reverse proxy if other HTTPS services are to share the same port.Make sure the service has permissions to access the certificate and key.
Note: this config is untested
{ services.crab-hole.settings.downstream = [ { protocol = "https"; listen = "[::]"; port = 443; certificate = ./dns.example.com.crt; key = "/dns.example.com.key"; # optional dns_hostname = "dns.example.com"; # optional (default = 3000) timeout_ms = 3000; } ];}QUIC has identical settings to the HTTPS protocol.Since by default it doesn’t run on the standard HTTPS port, you shouldn’t need a reverse proxy.Make sure the service has permissions to access the certificate and key.
{ services.crab-hole.settings.downstream = [ { protocol = "quic"; listen = "127.0.0.1"; port = 853; certificate = ./dns.example.com.crt; key = "/dns.example.com.key"; # optional dns_hostname = "dns.example.com"; # optional (default = 3000) timeout_ms = 3000; } ];}You can set additional options of the underlying DNS server. A full list of all the options can be found in thehickory-dns documentation.
This can look like the following example.
{ services.crab-hole.settings.upstream.options = { validate = false; };}Due to an upstream issue ofhickory-dns, sites without DNSSEC will not be resolved ifvalidate = true.Only DNSSEC capable sites will be resolved with this setting.To prevent this, setvalidate = false or omit the[upstream.options].
The API allows a user to fetch statistics and information about the crab-hole instance. Basic information is available to everyone, while more detailed information is secured by a key, which is set with the admin_key option.
{ services.crab-hole.settings.api = { listen = "127.0.0.1"; port = 8080; # optional (default = false) show_doc = true; # OpenAPI doc loads content from third party websites # optional admin_key = "1234"; };}The documentation can be enabled separately for the instance with show_doc. This will then create an additional webserver, which hosts the API documentation. An additional resource is in the works in the crab-hole repository.
You can check for errors usingsystemctl status crab-hole orjournalctl -xeu crab-hole.service.
Some options of the service are in freeform and not type checked.This can lead to a config which is not valid or cannot be parsed by crab-hole.The error message will tell you what config value could not be parsed.For more information check theexample config.
It can happen that the created certificates for TLS, HTTPS or QUIC are owned by another user or group.For ACME for example this would beacme:acme.To give the crab-hole service access to these files, the group which owns the certificate can be added as a supplementary group to the service.For ACME for example:
{ services.crab-hole.supplementaryGroups = [ "acme" ]; }Table of Contents
Anubis is a scraper defense tool that blocks AI scrapers. It is designed to sit between a reverse proxy and the service to be protected.
This module is designed to use Unix domain sockets as the socket paths can be automatically configured for multipleinstances, but TCP sockets are also supported.
A minimal configuration withnginx may look like the following:
{ config, ... }:{ services.anubis.instances.default.settings.TARGET = "http://localhost:8000"; # required due to unix socket permissions users.users.nginx.extraGroups = [ config.users.groups.anubis.name ]; services.nginx.virtualHosts."example.com" = { locations = { "/".proxyPass = "http://unix:${config.services.anubis.instances.default.settings.BIND}"; }; };}If Unix domain sockets are not needed or desired, this module supports operating with only TCP sockets.
{ services.anubis = { instances.default = { settings = { TARGET = "http://localhost:8080"; BIND = ":9000"; BIND_NETWORK = "tcp"; METRICS_BIND = "127.0.0.1:9001"; METRICS_BIND_NETWORK = "tcp"; }; }; };}It is possible to configure default settings for all instances of Anubis, viaservices.anubis.defaultOptions.
{ services.anubis.defaultOptions = { botPolicy = { dnsbl = false; }; settings.DIFFICULTY = 3; };}Note that at the moment, a custom bot policy is not merged with the baked-in one. That means to only override a settinglikednsbl, copying the entire bot policy is required. Checkthe upstream repositoryfor the policy.
Table of Contents
Samba is an SMB/CIFS file, print, and login server for Unix.
A minimal configuration looks like this:
{ services.samba.enable = true; }This configuration automatically enablessmbd,nmbd andwinbindd services by default.
Samba configuration is located in the/etc/samba/smb.conf file.
The following configures Samba to serve a public file share which is read-only and accessible without authentication:
{ services.samba = { enable = true; settings = { "public" = { "path" = "/public"; "read only" = "yes"; "browseable" = "yes"; "guest ok" = "yes"; "comment" = "Public samba share."; }; }; };}Table of Contents
Litestream is a standalone streamingreplication tool for SQLite.
The Litestream service is managed by a dedicated user named litestream, which needs permission to access the database file. Here's an example config which gives the required permissions to access the grafana database:
{ pkgs, ... }:{ users.users.litestream.extraGroups = [ "grafana" ]; systemd.services.grafana.serviceConfig.ExecStartPost = "+" + pkgs.writeShellScript "grant-grafana-permissions" '' timeout=10 while [ ! -f /var/lib/grafana/data/grafana.db ]; do if [ "$timeout" == 0 ]; then echo "ERROR: Timeout while waiting for /var/lib/grafana/data/grafana.db." exit 1 fi sleep 1 ((timeout--)) done find /var/lib/grafana -type d -exec chmod -v 775 {} \; find /var/lib/grafana -type f -exec chmod -v 660 {} \; ''; services.litestream = { enable = true; environmentFile = "/run/secrets/litestream"; settings = { dbs = [ { path = "/var/lib/grafana/data/grafana.db"; replicas = [ { url = "s3://mybkt.litestream.io/grafana"; } ]; } ]; }; };}Table of Contents
Prometheus exporters provide metrics for theprometheus monitoring system.
One of the most common exporters is the node exporter, which provides hardware and OS metrics from the host it's running on. The exporter could be configured as follows:
{ services.prometheus.exporters.node = { enable = true; port = 9100; enabledCollectors = [ "logind" "systemd" ]; disabledCollectors = [ "textfile" ]; openFirewall = true; firewallFilter = "-i br0 -p tcp -m tcp --dport 9100"; };}It should now serve all metrics from the collectors that are explicitlyenabled and the ones that areenabled by default,via http under/metrics. In thisexample the firewall should just allow incoming connections to theexporter’s port on the bridge interfacebr0 (this wouldhave to be configured separately of course). For more information aboutconfiguration seeman configuration.nix or search throughtheavailable options.
Prometheus can now be configured to consume the metrics produced by the exporter:
{ services.prometheus = { # ... scrapeConfigs = [ { job_name = "node"; static_configs = [ { targets = [ "localhost:${toString config.services.prometheus.exporters.node.port}" ]; } ]; } ]; # ... };}To add a new exporter, it has to be packaged first (seenixpkgs/pkgs/servers/monitoring/prometheus/ forexamples), then a module can be added. The postfix exporter is used in thisexample:
Some default options for all exporters are provided bynixpkgs/nixos/modules/services/monitoring/prometheus/exporters.nix:
enable
port
listenAddress
extraFlags
openFirewall
firewallFilter
firewallRules
user
group
As there is already a package available, the module can now be added. This is accomplished by adding a new file to the nixos/modules/services/monitoring/prometheus/exporters/ directory, which will be called postfix.nix and contains all exporter-specific options and configuration:
# nixpkgs/nixos/modules/services/prometheus/exporters/postfix.nix{ config, lib, pkgs, options,}:let # for convenience we define cfg here cfg = config.services.prometheus.exporters.postfix;in{ port = 9154; # The postfix exporter listens on this port by default # `extraOpts` is an attribute set which contains additional options # (and optional overrides for default options). # Note that this attribute is optional. extraOpts = { telemetryPath = lib.mkOption { type = lib.types.str; default = "/metrics"; description = '' Path under which to expose metrics. ''; }; logfilePath = lib.mkOption { type = lib.types.path; default = /var/log/postfix_exporter_input.log; example = /var/log/mail.log; description = '' Path where Postfix writes log entries. This file will be truncated by this exporter! ''; }; showqPath = lib.mkOption { type = lib.types.path; default = /var/spool/postfix/public/showq; example = /var/lib/postfix/queue/public/showq; description = '' Path at which Postfix places its showq socket. ''; }; }; # `serviceOpts` is an attribute set which contains configuration # for the exporter's systemd service. One of # `serviceOpts.script` and `serviceOpts.serviceConfig.ExecStart` # has to be specified here. This will be merged with the default # service configuration. # Note that by default 'DynamicUser' is 'true'. serviceOpts = { serviceConfig = { DynamicUser = false; ExecStart = '' ${pkgs.prometheus-postfix-exporter}/bin/postfix_exporter \ --web.listen-address ${cfg.listenAddress}:${toString cfg.port} \ --web.telemetry-path ${cfg.telemetryPath} \ ${lib.concatStringsSep " \\\n " cfg.extraFlags} ''; }; };}This should already be enough for the postfix exporter. Additionally onecould now add assertions and conditional default values. This can be donein the ‘meta-module’ that combines all exporter definitions and generatesthe submodules:nixpkgs/nixos/modules/services/prometheus/exporters.nix
Should an exporter option change at some point, it is possible to addinformation about the change to the exporter definition similar tonixpkgs/nixos/modules/rename.nix:
{ config, lib, pkgs, options,}:let cfg = config.services.prometheus.exporters.nginx;in{ port = 9113; extraOpts = { # additional module options # ... }; serviceOpts = { # service configuration # ... }; imports = [ # 'services.prometheus.exporters.nginx.telemetryEndpoint' -> 'services.prometheus.exporters.nginx.telemetryPath' (lib.mkRenamedOptionModule [ "telemetryEndpoint" ] [ "telemetryPath" ]) # removed option 'services.prometheus.exporters.nginx.insecure' (lib.mkRemovedOptionModule [ "insecure" ] '' This option was replaced by 'prometheus.exporters.nginx.sslVerify' which defaults to true. '') ({ options.warnings = options.warnings; }) ];}Table of Contents
parsedmarc is a service which parses incoming DMARC reports and stores or sends them to a downstream service for further analysis. In combination with Elasticsearch, Grafana and the included Grafana dashboard, it provides a handy overview of DMARC reports over time.
A very minimal setup which reads incoming reports from an external email address and saves them to a local Elasticsearch instance looks like this:
{ services.parsedmarc = { enable = true; settings.imap = { host = "imap.example.com"; user = "alice@example.com"; password = "/path/to/imap_password_file"; }; provision.geoIp = false; # Not recommended! };}
Note that GeoIP provisioning is disabled in the example for simplicity, but should be turned on for fully functional reports.
Instead of watching an external inbox, a local inbox can be automatically provisioned. The recipient’s name is by default set to dmarc, but can be configured in services.parsedmarc.provision.localMail.recipientName. You need to add an MX record pointing to the host. More concretely: for the example to work, an MX record needs to be set up for monitoring.example.com and the complete email address that should be configured in the domain’s dmarc policy is dmarc@monitoring.example.com.
{ services.parsedmarc = { enable = true; provision = { localMail = { enable = true; hostname = "monitoring.example.com"; }; geoIp = false; # Not recommended! }; };}
The reports can be visualized and summarized with parsedmarc’s official Grafana dashboard. For all views to work, and for the data to be complete, GeoIP databases are also required. The following example shows a basic deployment where the provisioned Elasticsearch instance is automatically added as a Grafana datasource, and the dashboard is added to Grafana as well.
{ services.parsedmarc = { enable = true; provision = { localMail = { enable = true; hostname = url; }; grafana = { datasource = true; dashboard = true; }; }; }; # Not required, but recommended for full functionality services.geoipupdate = { settings = { AccountID = 0; LicenseKey = "/path/to/license_key_file"; }; }; services.grafana = { enable = true; addr = "0.0.0.0"; domain = url; rootUrl = "https://" + url; protocol = "socket"; security = { adminUser = "admin"; adminPasswordFile = "/path/to/admin_password_file"; secretKeyFile = "/path/to/secret_key_file"; }; }; services.nginx = { enable = true; recommendedTlsSettings = true; recommendedOptimisation = true; recommendedGzipSettings = true; recommendedProxySettings = true; upstreams.grafana.servers."unix:/${config.services.grafana.socket}" = { }; virtualHosts.${url} = { root = config.services.grafana.staticRootPath; enableACME = true; forceSSL = true; locations."/".tryFiles = "$uri @grafana"; locations."@grafana".proxyPass = "http://grafana"; }; }; users.users.nginx.extraGroups = [ "grafana" ];}Table of Contents
OCS Inventory NG, or Open Computers and Software inventory, is an application designed to help IT administrators keep track of the hardware and software configurations of computers that are installed on their network.
OCS Inventory collects information about the hardware and software of networked machines through the OCS Inventory Agent program.
This NixOS module enables you to install and configure this agent so that it sends information from your computer to the OCS Inventory server.
For more technical information about OCS Inventory Agent, refer to the Wiki documentation.
A minimal configuration looks like this:
{ services.ocsinventory-agent = { enable = true; settings = { server = "https://ocsinventory.localhost:8080/ocsinventory"; tag = "01234567890123"; }; };}
This configuration will periodically run the ocsinventory-agent systemd service.
The OCS Inventory Agent will inventory the computer and then send the results to the specified OCS Inventory Server.
Table of Contents
goss is a YAML-based serverspec alternative tool for validating a server’s configuration.
A minimal configuration looks like this:
{ services.goss = { enable = true; environment = { GOSS_FMT = "json"; GOSS_LOGLEVEL = "TRACE"; }; settings = { addr."tcp://localhost:8080" = { reachable = true; local-address = "127.0.0.1"; }; command."check-goss-version" = { exec = "${lib.getExe pkgs.goss} --version"; exit-status = 0; }; dns.localhost.resolvable = true; file."/nix" = { filetype = "directory"; exists = true; }; group.root.exists = true; kernel-param."kernel.ostype".value = "Linux"; service.goss = { enabled = true; running = true; }; user.root.exists = true; }; };}Table of Contents
Cert Spotter is a tool for monitoring Certificate Transparency logs.
A basic config that notifies you of all certificate changes for your domain would look as follows:
{ services.certspotter = { enable = true; # replace example.org with your domain name watchlist = [ ".example.org" ]; emailRecipients = [ "webmaster@example.org" ]; }; # Configure an SMTP client programs.msmtp.enable = true; # Or you can use any other module that provides sendmail, like # services.nullmailer, services.opensmtpd, services.postfix}
In this case, the leading dot in ".example.org" means that Cert Spotter should monitor not only example.org, but also all of its subdomains.
By default, NixOS configures Cert Spotter to skip all certificates issued before its first launch, because checking the entire Certificate Transparency logs requires downloading tens of terabytes of data. If you want to check the entire logs for previously issued certificates, you have to set services.certspotter.startAtEnd to false and remove all previously saved log state in /var/lib/certspotter/logs. The downloaded logs aren’t saved, so if you add a new domain to the watchlist and want Cert Spotter to go through the logs again, you will have to remove /var/lib/certspotter/logs again.
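For instance, a minimal sketch of a configuration that scans the logs from the very beginning, accepting the bandwidth cost described above (the watchlist value is just the example domain from before), could look like:

{
  services.certspotter = {
    enable = true;
    watchlist = [ ".example.org" ];
    # Do not skip certificates issued before the first launch;
    # this triggers a full scan of the Certificate Transparency logs.
    startAtEnd = false;
  };
}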
After catching up with the logs, Cert Spotter will start monitoring live logs. As of October 2023, it uses around 20 Mbps of traffic on average.
Cert Spotter supports running custom hooks instead of (or in addition to) sending emails. Hooks are shell scripts that will be passed certain environment variables.
For hook documentation, see Cert Spotter’s man pages:
nix-shell -p certspotter --run 'man 8 certspotter-script'
For example, you can remove emailRecipients and send email notifications manually using the following hook:
{ services.certspotter.hooks = [ (pkgs.writeShellScript "certspotter-hook" '' function print_email() { echo "Subject: [certspotter] $SUMMARY" echo "Mime-Version: 1.0" echo "Content-Type: text/plain; charset=US-ASCII" echo cat "$TEXT_FILENAME" } print_email | ${config.services.certspotter.sendmailPath} -i webmaster@example.org '') ];}Table of Contents
WeeChat is a fast and extensible IRC client.
By default, the module creates a systemd unit which runs the chat client in a detached screen session.
This can be done by enabling the weechat service:
{ ... }:{ services.weechat.enable = true;}
The service is managed by a dedicated user named weechat in the state directory /var/lib/weechat.
WeeChat runs in a screen session owned by a dedicated user. To explicitly allow another user to attach to this session, the screenrc needs to be tweaked by adding multiuser support:
{ programs.screen.screenrc = '' multiuser on acladd normal_user '';}
Now, the session can be re-attached like this:
screen -x weechat/weechat-screen
The session name can be changed using services.weechat.sessionName.
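For example, a minimal sketch that renames the session (the name “irc” is purely illustrative):

{
  services.weechat.enable = true;
  # The detached screen session will be called "irc" instead of the default
  # "weechat-screen", so re-attaching becomes: screen -x weechat/irc
  services.weechat.sessionName = "irc";
}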
Table of Contents
Taskserver is the server component of the now deprecated version 2 of Taskwarrior, a free and open source todo list application.
Taskwarrior 3.0.0 was released in March 2024, and the sync functionality was rewritten entirely. With it, a NixOS module named taskchampion-sync-server was added to Nixpkgs. Many people still want to use the old Taskwarrior 2.6.x, and Taskserver along with it. Hence this module and this documentation will stay here for the near future.
Taskserver does all of its authentication via TLS using client certificates, so you either need to roll your own CA or purchase a certificate from a known CA which allows creation of client certificates. These certificates are usually advertised as “server certificates”.
So in order to make it easier to handle your own CA, there is a helper tool called nixos-taskserver which manages the custom CA along with Taskserver organisations, users and groups.
While the client certificates in Taskserver only authenticate whether a user is allowed to connect, every user has its own UUID which identifies it as an entity.
With nixos-taskserver the client certificate is created along with the UUID of the user, so it handles all of the credentials needed in order to set up the Taskwarrior 2 client to work with a Taskserver.
Because Taskserver by default only provides scripts to set up users imperatively, the nixos-taskserver tool is used for the addition and deletion of organisations, along with users and groups defined by services.taskserver.organisations, as well as for imperative set-up.
The tool is designed not to interfere if the command is used to manually set up some organisations, users or groups.
For example, if you add a new organisation using nixos-taskserver org add foo, the organisation is not modified or deleted no matter what you define in services.taskserver.organisations, even if you’re adding the same organisation in that option.
The tool is modelled to imitate the official taskd command; documentation for each subcommand can be shown by using the --help switch.
Everything is done according to what you specify in the module options; however, in order to set up a Taskwarrior 2 client for synchronisation with a Taskserver instance, you have to transfer the keys and certificates to the client machine.
This is done using nixos-taskserver user export $orgname $username, which prints a shell script fragment to stdout which can either be used verbatim or adjusted to import the user on the client machine.
For example, let’s say you have the following configuration:
{ services.taskserver.enable = true; services.taskserver.fqdn = "server"; services.taskserver.listenHost = "::"; services.taskserver.organisations.my-company.users = [ "alice" ];}
This creates an organisation called my-company with the user alice.
Now in order to import the alice user to another machine alicebox, all we need to do is something like this:
$ ssh server nixos-taskserver user export my-company alice | sh
Of course, if no SSH daemon is available on the server you can also copy & paste it directly into a shell.
After this step the user should be set up and you can start synchronising your tasks for the first time with task sync init on alicebox.
Subsequent synchronisation requests merely require the command task sync after that stage.
If you set any options within services.taskserver.pki.manual.*, nixos-taskserver won’t issue certificates, but you can still use it for adding or removing user accounts.
Table of Contents
Sourcehut is an open-source, self-hostable software development platform. The server setup can be automated using services.sourcehut.
Sourcehut is a Python and Go based set of applications. This NixOS module also provides basic configuration integrating Sourcehut into locally running services.nginx, services.redis.servers.sourcehut, services.postfix and services.postgresql services.
A very basic configuration may look like this:
{ pkgs, ... }:let fqdn = let join = hostName: domain: hostName + optionalString (domain != null) ".${domain}"; in join config.networking.hostName config.networking.domain;in{ networking = { hostName = "srht"; domain = "tld"; firewall.allowedTCPPorts = [ 22 80 443 ]; }; services.sourcehut = { enable = true; git.enable = true; man.enable = true; meta.enable = true; nginx.enable = true; postfix.enable = true; postgresql.enable = true; redis.enable = true; settings = { "sr.ht" = { environment = "production"; global-domain = fqdn; origin = "https://${fqdn}"; # Produce keys with srht-keygen from sourcehut.coresrht. network-key = "/run/keys/path/to/network-key"; service-key = "/run/keys/path/to/service-key"; }; webhooks.private-key = "/run/keys/path/to/webhook-key"; }; }; security.acme.certs."${fqdn}".extraDomainNames = [ "meta.${fqdn}" "man.${fqdn}" "git.${fqdn}" ]; services.nginx = { enable = true; # only recommendedProxySettings are strictly required, but the rest make sense as well. recommendedTlsSettings = true; recommendedOptimisation = true; recommendedGzipSettings = true; recommendedProxySettings = true; # Settings to setup what certificates are used for which endpoint. virtualHosts = { "${fqdn}".enableACME = true; "meta.${fqdn}".useACMEHost = fqdn; "man.${fqdn}".useACMEHost = fqdn; "git.${fqdn}".useACMEHost = fqdn; }; };}ThehostName option is used internally to configure the nginxreverse-proxy. Thesettings attribute set isused by the configuration generator and the result is placed in/etc/sr.ht/config.ini.
All configuration parameters are also stored in /etc/sr.ht/config.ini, which is generated by the module and linked from the store to ensure that all values from config.ini can be modified by the module.
httpd)
By default, nginx is used as reverse-proxy for sourcehut. However, it’s possible to use e.g. httpd by explicitly disabling nginx using services.nginx.enable and fixing the settings.
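A possible sketch of that first step is shown below; the httpd virtual hosts that proxy to the individual sr.ht services are left out and still need to be written by hand, and the sourcehut-specific option mirrors the nginx.enable attribute used in the example configuration above:

{
  # Stop the module from generating nginx virtual hosts.
  services.sourcehut.nginx.enable = false;
  services.nginx.enable = false;
  # Use Apache httpd as the reverse proxy instead.
  services.httpd.enable = true;
}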
Table of Contents
GitLab is a feature-rich git hosting service.
The gitlab service exposes only a Unix socket at /run/gitlab/gitlab-workhorse.socket. You need to configure a webserver to proxy HTTP requests to the socket.
For instance, the following configuration could be used to use nginx as a frontend proxy:
{ services.nginx = { enable = true; recommendedGzipSettings = true; recommendedOptimisation = true; recommendedProxySettings = true; recommendedTlsSettings = true; virtualHosts."git.example.com" = { enableACME = true; forceSSL = true; locations."/".proxyPass = "http://unix:/run/gitlab/gitlab-workhorse.socket"; }; };}
GitLab depends on both PostgreSQL and Redis and will automatically enable both services. In the case of PostgreSQL, a database and a role will be created.
The default state dir is /var/gitlab/state. This is where all data like the repositories and uploads will be stored.
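If you want the state to live somewhere else, a minimal sketch could set the state path explicitly; the option name services.gitlab.statePath is an assumption here, so check the options reference in Appendix A before relying on it:

{
  # Store repositories, uploads and other state on a dedicated volume.
  services.gitlab.statePath = "/mnt/data/gitlab";
}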
A basic configuration with some custom settings could look like this:
{ services.gitlab = { enable = true; databasePasswordFile = "/var/keys/gitlab/db_password"; initialRootPasswordFile = "/var/keys/gitlab/root_password"; https = true; host = "git.example.com"; port = 443; user = "git"; group = "git"; smtp = { enable = true; address = "localhost"; port = 25; }; secrets = { dbFile = "/var/keys/gitlab/db"; secretFile = "/var/keys/gitlab/secret"; otpFile = "/var/keys/gitlab/otp"; jwsFile = "/var/keys/gitlab/jws"; }; extraConfig = { gitlab = { email_from = "gitlab-no-reply@example.com"; email_display_name = "Example GitLab"; email_reply_to = "gitlab-no-reply@example.com"; default_projects_features = { builds = false; }; }; }; };}If you’re setting up a new GitLab instance, generate newsecrets. You for instance usetr -dc A-Za-z0-9 < /dev/urandom | head -c 128 > /var/keys/gitlab/db togenerate a new db secret. Make sure the files can be read by, andonly by, the user specified byservices.gitlab.user. GitLabencrypts sensitive data stored in the database. If you’re restoringan existing GitLab instance, you must specify the secrets secretfromconfig/secrets.yml located in your GitLabstate folder.
When incoming_mail.enabled is set to true in extraConfig, an additional service called gitlab-mailroom is enabled for fetching incoming mail.
Refer to Appendix A for all available configuration options for the services.gitlab module.
Backups can be configured with the options in services.gitlab.backup. Use the services.gitlab.backup.startAt option to configure regular backups.
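For example, a minimal sketch scheduling a nightly backup (the time value is only an illustration):

{
  # Run gitlab-backup.service every night at 03:00.
  services.gitlab.backup.startAt = "03:00";
}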
To run a manual backup, start the gitlab-backup service:
$ systemctl start gitlab-backup.service
You can run GitLab’s rake tasks with gitlab-rake, which will be available on the system when GitLab is enabled. You will have to run the command as the user that you configured to run GitLab with.
A list of all available rake tasks can be obtained by running:
$ sudo -u git -H gitlab-rake -TTable of Contents
Forgejo is a soft-fork of gitea, with a strong community focus, as well as on self-hosting and federation. Codeberg is deployed from it.
See the upstream docs.
The method of choice for running forgejo is using services.forgejo.
Running forgejo using services.gitea.package = pkgs.forgejo is no longer recommended. If you experience issues with your instance using services.gitea, DO NOT report them to the services.gitea module maintainers. DO report them to the services.forgejo module maintainers instead.
Migrating is, while not strictly necessary at this point, highly recommended. Both modules and projects are likely to diverge further with each release, which might lead to an even more involved migration.
This will migrate the state directory (data), rename and chown the database and delete the gitea user.
This will also change the git remote ssh-url user from gitea@ to forgejo@, when using the host’s openssh server (default) instead of the integrated one.
Instructions for PostgreSQL (default). Adapt accordingly for other databases:
systemctl stop gitea
mv /var/lib/gitea /var/lib/forgejo
runuser -u postgres -- psql -c '
  ALTER USER gitea RENAME TO forgejo;
  ALTER DATABASE gitea RENAME TO forgejo;
'
nixos-rebuild switch
systemctl stop forgejo
chown -R forgejo:forgejo /var/lib/forgejo
systemctl restart forgejo
Alternatively, instead of renaming the database, copying the state folder and changing the user, the forgejo module can be set up to re-use the old storage locations and database, instead of having to copy or rename them. Make sure to disable services.gitea when doing this.
{ services.gitea.enable = false; services.forgejo = { enable = true; user = "gitea"; group = "gitea"; stateDir = "/var/lib/gitea"; database.name = "gitea"; database.user = "gitea"; }; users.users.gitea = { home = "/var/lib/gitea"; useDefaultShell = true; group = "gitea"; isSystemUser = true; }; users.groups.gitea = { };}Table of Contents
dump1090-fa is a demodulator and decoder for ADS-B, Mode S, and Mode 3A/3C aircraft transponder messages. It can receive and decode these messages from an attached software-defined radio or from data received over a network connection.
When enabled, this module automatically creates a systemd service to start the dump1090-fa application. The application will then write its JSON output files to /run/dump1090-fa.
Exposing the integrated web interface is left to the user’s configuration. Below is a minimal example demonstrating how to serve it using Nginx:
{ pkgs, ... }:{ services.dump1090-fa.enable = true; services.nginx = { enable = true; virtualHosts."dump1090-fa" = { locations = { "/".alias = "${pkgs.dump1090-fa}/share/dump1090/"; "/data/".alias = "/run/dump1090-fa/"; }; }; };}Table of Contents
Apache Kafka is an open-source distributed event streaming platform.
The Apache Kafka service is configured almost exclusively through its settings option, with each attribute corresponding to a broker setting from the upstream configuration manual.
Unlike in Zookeeper mode, Kafka in KRaft mode requires each log dir to be “formatted”, which means a cluster-specific metadata file must exist in each log dir.
The upstream intention is for users to execute the storage tool to achieve this, but this module contains a few extra options to automate it.
Migrating a cluster to the new settings-based configuration requires adapting removed options to the corresponding upstream settings.
This means that the upstream Broker Configs documentation should be followed closely.
Note that dotted options in the upstream docs do not correspond to nested Nix attrsets, but are instead written as quoted top-level settings attributes, as in services.apache-kafka.settings."broker.id", not services.apache-kafka.settings.broker.id.
Care should be taken, especially when migrating clusters from the old module, to ensure that the same intended configuration is reproduced faithfully via settings; a combined sketch follows the list of notable changes below.
To assist in the comparison, the final config can be inspected by building the config file itself, i.e. with: nix-build <nixpkgs/nixos> -A config.services.apache-kafka.configFiles.serverProperties.
Notable changes to be aware of include:
Removal of services.apache-kafka.extraProperties and services.apache-kafka.serverProperties
Translate arbitrary properties using services.apache-kafka.settings
The intention is for all broker properties to be fully representable via services.apache-kafka.settings.
If this is not the case, please do consider raising an issue.
Until it can be remedied, you can bail out by setting services.apache-kafka.configFiles.serverProperties to the path of a fully rendered properties file.
Removal of services.apache-kafka.hostname and services.apache-kafka.port
Translate using: services.apache-kafka.settings.listeners
Removal of services.apache-kafka.logDirs
Translate using: services.apache-kafka.settings."log.dirs"
Removal of services.apache-kafka.brokerId
Translate using: services.apache-kafka.settings."broker.id"
Removal of services.apache-kafka.zookeeper
Translate using: services.apache-kafka.settings."zookeeper.connect"
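Putting those translations together, a minimal sketch of a migrated configuration might look like this (all values are illustrative, not defaults):

{
  services.apache-kafka = {
    enable = true;
    settings = {
      # Former brokerId, logDirs and zookeeper options, expressed as
      # quoted upstream property names.
      "broker.id" = 0;
      listeners = [ "PLAINTEXT://localhost:9092" ];
      "log.dirs" = [ "/var/lib/apache-kafka/logs" ];
      "zookeeper.connect" = "localhost:2181";
    };
  };
}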
Table of Contents
Anki Sync Server is the built-in sync server, present in recent versions of Anki. Advanced users who cannot or do not wish to use AnkiWeb can use this sync server instead of AnkiWeb.
This module is compatible only with Anki versions >= 2.1.66, due to recent enhancements to the Nix anki package.
By default, the module creates a systemd unit which runs the sync server with an isolated user using the systemd DynamicUser option.
This can be done by enabling the anki-sync-server service:
{ ... }:{ services.anki-sync-server.enable = true;}
It is necessary to set at least one username-password pair under services.anki-sync-server.users. For example:
{ services.anki-sync-server.users = [ { username = "user"; passwordFile = /etc/anki-sync-server/user; } ];}
Here, passwordFile is the path to a file containing just the password in plaintext. Make sure to set permissions to make this file unreadable to any user besides root.
By default, synced data are stored in /var/lib/anki-sync-server/ankiuser. You can change the directory by using services.anki-sync-server.baseDirectory.
{ services.anki-sync-server.baseDirectory = "/home/anki/data"; }By default, the server listen addressservices.anki-sync-server.hostis set to localhost, listening on portservices.anki-sync-server.port, and does not open the firewall. Thisis suitable for purely local testing, or to be used behind a reverse proxy. Ifyou want to expose the sync server directly to other computers (not recommendedin most circumstances, because the sync server doesn’t use HTTPS), then set thefollowing options:
{ services.anki-sync-server.address = "0.0.0.0"; services.anki-sync-server.openFirewall = true;}Table of Contents
Matrix is an open standard forinteroperable, decentralised, real-time communication over IP. It can be usedto power Instant Messaging, VoIP/WebRTC signalling, Internet of Thingscommunication - or anywhere you need a standard HTTP API for publishing andsubscribing to data whilst tracking the conversation history.
This chapter will show you how to set up your own, self-hosted Matrix homeserver using the Synapse reference homeserver, and how to serve your own copy of the Element web client. See the Try Matrix Now! overview page for links to Element Apps for Android and iOS, desktop clients, as well as bridges to other networks and other projects around Matrix.
Synapse is the reference homeserver implementation of Matrix from the core development team at matrix.org.
Before deploying the Synapse server, a PostgreSQL database must be set up. For that, please make sure that PostgreSQL is running and that the following SQL statements to create a user and database called matrix-synapse were executed before Synapse starts up:
CREATE ROLE "matrix-synapse";
CREATE DATABASE "matrix-synapse" WITH OWNER "matrix-synapse"
  TEMPLATE template0
  LC_COLLATE = "C"
  LC_CTYPE = "C";
Usually, it’s sufficient to do this once manually before continuing with the installation.
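If you prefer to provision the database declaratively instead of running the SQL by hand, one possible sketch (an assumption, not the only way) feeds the same statements to services.postgresql.initialScript, which is only executed on the very first start of PostgreSQL:

{ pkgs, ... }:
{
  services.postgresql = {
    enable = true;
    # Runs once when the database cluster is first created.
    initialScript = pkgs.writeText "synapse-init.sql" ''
      CREATE ROLE "matrix-synapse";
      CREATE DATABASE "matrix-synapse" WITH OWNER "matrix-synapse"
        TEMPLATE template0
        LC_COLLATE = "C"
        LC_CTYPE = "C";
    '';
  };
}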
Please make sure to set a different password.
The following configuration example will set up a synapse server for the example.org domain, served from the host myhostname.example.org. For more information, please refer to the installation instructions of Synapse.
{ pkgs, lib, config, ...}:let fqdn = "${config.networking.hostName}.${config.networking.domain}"; baseUrl = "https://${fqdn}"; clientConfig."m.homeserver".base_url = baseUrl; serverConfig."m.server" = "${fqdn}:443"; mkWellKnown = data: '' default_type application/json; add_header Access-Control-Allow-Origin *; return 200 '${builtins.toJSON data}'; '';in{ networking.hostName = "myhostname"; networking.domain = "example.org"; networking.firewall.allowedTCPPorts = [ 80 443 ]; services.postgresql.enable = true; services.nginx = { enable = true; recommendedTlsSettings = true; recommendedOptimisation = true; recommendedGzipSettings = true; recommendedProxySettings = true; virtualHosts = { # If the A and AAAA DNS records on example.org do not point on the same host as the # records for myhostname.example.org, you can easily move the /.well-known # virtualHost section of the code to the host that is serving example.org, while # the rest stays on myhostname.example.org with no other changes required. # This pattern also allows to seamlessly move the homeserver from # myhostname.example.org to myotherhost.example.org by only changing the # /.well-known redirection target. "${config.networking.domain}" = { enableACME = true; forceSSL = true; # This section is not needed if the server_name of matrix-synapse is equal to # the domain (i.e. example.org from @foo:example.org) and the federation port # is 8448. # Further reference can be found in the docs about delegation under # https://element-hq.github.io/synapse/latest/delegate.html locations."= /.well-known/matrix/server".extraConfig = mkWellKnown serverConfig; # This is usually needed for homeserver discovery (from e.g. other Matrix clients). # Further reference can be found in the upstream docs at # https://spec.matrix.org/latest/client-server-api/#getwell-knownmatrixclient locations."= /.well-known/matrix/client".extraConfig = mkWellKnown clientConfig; }; "${fqdn}" = { enableACME = true; forceSSL = true; # It's also possible to do a redirect here or something else, this vhost is not # needed for Matrix. It's recommended though to *not put* element # here, see also the section about Element. locations."/".extraConfig = '' return 404; ''; # Forward all Matrix API calls to the synapse Matrix homeserver. A trailing slash # *must not* be used here. locations."/_matrix".proxyPass = "http://[::1]:8008"; # Forward requests for e.g. SSO and password-resets. locations."/_synapse/client".proxyPass = "http://[::1]:8008"; }; }; }; services.matrix-synapse = { enable = true; settings.server_name = config.networking.domain; # The public base URL value must match the `base_url` value set in `clientConfig` above. # The default value here is based on `server_name`, so if your `server_name` is different # from the value of `fqdn` above, you will likely run into some mismatched domain names # in client applications. settings.public_baseurl = baseUrl; settings.listeners = [ { port = 8008; bind_addresses = [ "::1" ]; type = "http"; tls = false; x_forwarded = true; resources = [ { names = [ "client" "federation" ]; compress = true; } ]; } ]; };}If you want to run a server with public registration by anybody, you canthen enableservices.matrix-synapse.settings.enable_registration = true;.Otherwise, or you can generate a registration secret withpwgen -s 64 1 and set it withservices.matrix-synapse.settings.registration_shared_secret.To create a new user or admin from the terminal your client listenermust be configured to use TCP sockets. 
Then you can run the following after you have set the secret and have rebuilt NixOS:
$ nix-shell -p matrix-synapse
$ register_new_matrix_user -k your-registration-shared-secret http://localhost:8008
New user localpart: your-username
Password:
Confirm password:
Make admin [no]:
Success!
In the example, this would create a user with the Matrix Identifier @your-username:example.org.
When using services.matrix-synapse.settings.registration_shared_secret, the secret will end up in the world-readable store. Instead it’s recommended to deploy the secret in an additional file like this:
Create a file with the following contents:
registration_shared_secret: your-very-secret-secret
Deploy the file with a secret manager such as deployment.keys from nixops(1) or sops-nix to e.g. /run/secrets/matrix-shared-secret and ensure that it’s readable by matrix-synapse.
Include the file like this in your configuration:
{ services.matrix-synapse.extraConfigFiles = [ "/run/secrets/matrix-shared-secret" ];}
It’s also possible to use alternative authentication mechanisms such as LDAP (via matrix-synapse-ldap3) or OpenID.
Element Web is the reference web client for Matrix and developed by the core team at matrix.org. Element was formerly known as Riot.im, see the Element introductory blog post for more information. The following snippet can optionally be added to the configuration above to complete the synapse installation with a web client served at https://element.myhostname.example.org and https://element.example.org. Alternatively, you can use the hosted copy at https://app.element.io/, or use other web clients or native client applications. Due to the /.well-known URL setup done above, many clients should fill in the required connection details automatically when you enter your Matrix Identifier. See Try Matrix Now! for a list of existing clients and their supported featureset.
{ services.nginx.virtualHosts."element.${fqdn}" = { enableACME = true; forceSSL = true; serverAliases = [ "element.${config.networking.domain}" ]; root = pkgs.element-web.override { conf = { default_server_config = clientConfig; # see `clientConfig` from the snippet above. }; }; };}The Element developers do not recommend running Element and your Matrixhomeserver on the same fully-qualified domain name for security reasons. Inthe example, this means that you should not reuse themyhostname.example.org virtualHost to also serve Element,but instead serve it on a different subdomain, likeelement.example.org in the example. See theElement Important Security Notesfor more information on this subject.
Table of Contents
This chapter will show you how to set up your own, self-hosted Mjolnir instance.
As an all-in-one moderation tool, it can protect your server from malicious invites, spam messages, and whatever else you don’t want. In addition to server-level protection, Mjolnir is great for communities wanting to protect their rooms without having to use their personal accounts for moderation.
The bot by default includes support for bans, redactions, anti-spam, server ACLs, room directory changes, room alias transfers, account deactivation, room shutdown, and more.
See the README page and the Moderator’s guide for additional instructions on how to set up and use Mjolnir.
For additional settings see the default configuration.
First create a new room which will be used as a management room for Mjolnir. In this room, Mjolnir will log possible errors and debugging information. You’ll need to set this room ID in services.mjolnir.managementRoom.
Next, create a new user for Mjolnir on your homeserver, if not present already.
The Mjolnir Matrix user expects to be free of any rate limiting. See Synapse #6286 for an example of how to achieve this.
If you want Mjolnir to be able to deactivate users, move room aliases, shut down rooms, etc., you’ll need to make the Mjolnir user a Matrix server admin.
Now invite the Mjolnir user to the management room.
It is recommended to use Pantalaimon, so your management room can be encrypted. This also applies if you are looking to moderate an encrypted room.
To enable the Pantalaimon E2E Proxy for mjolnir, enable services.mjolnir.pantalaimon. This will autoconfigure a new Pantalaimon instance, which will connect to the homeserver set in services.mjolnir.homeserverUrl, and Mjolnir itself will be configured to connect to the new Pantalaimon instance.
{ services.mjolnir = { enable = true; homeserverUrl = "https://matrix.domain.tld"; pantalaimon = { enable = true; username = "mjolnir"; passwordFile = "/run/secrets/mjolnir-password"; }; protectedRooms = [ "https://matrix.to/#/!xxx:domain.tld" ]; managementRoom = "!yyy:domain.tld"; };}If you are using a managed“Element Matrix Services (EMS)”server, you will need to consent to the terms and conditions. Upon startup, an errorlog entry with a URL to the consent page will be generated.
A Synapse module is also available to apply the same rulesets the bot uses across an entire homeserver.
To use the Antispam Module, add matrix-synapse-plugins.matrix-synapse-mjolnir-antispam to the Synapse plugin list and enable the mjolnir.Module module.
{ services.matrix-synapse = { plugins = with pkgs; [ matrix-synapse-plugins.matrix-synapse-mjolnir-antispam ]; extraConfig = '' modules: - module: mjolnir.Module config: # Prevent servers/users in the ban lists from inviting users on this # server to rooms. Default true. block_invites: true # Flag messages sent by servers/users in the ban lists as spam. Currently # this means that spammy messages will appear as empty to users. Default # false. block_messages: false # Remove users from the user directory search by filtering matrix IDs and # display names by the entries in the user ban list. Default false. block_usernames: false # The room IDs of the ban lists to honour. Unlike other parts of Mjolnir, # this list cannot be room aliases or permalinks. This server is expected # to already be joined to the room - Mjolnir will not automatically join # these rooms. ban_lists: - "!roomid:example.org" ''; };}Table of Contents
Mautrix-Whatsapp is a Matrix-Whatsapp puppeting bridge.
Set services.mautrix-whatsapp.enable to true. The service will use SQLite by default.
To create your configuration, check the default configuration for services.mautrix-whatsapp.settings. To obtain the complete default configuration, run nix-shell -p mautrix-whatsapp --run "mautrix-whatsapp -c default.yaml -e".
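As a starting point, a minimal sketch might look like the following; the homeserver values are illustrative assumptions, so compare them against the generated default configuration:

{
  services.mautrix-whatsapp = {
    enable = true;
    settings.homeserver = {
      # Where the bridge can reach your Matrix homeserver.
      address = "http://localhost:8008";
      domain = "example.org";
    };
  };
}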
Some Mautrix-Whatsapp options, like encryption.pickle_key and provisioning.shared_secret, allow the value generate to be set. Since the configuration file is regenerated on every start of the service, the generated values would be discarded and might break your installation. Instead, set those values via services.mautrix-whatsapp.environmentFile.
With Mautrix-Whatsapp v0.11.0 the configuration has been rearranged. Mautrix-Whatsapp performs an automatic configuration migration, so your pre-0.11.0 configuration should just continue to work.
In case you want to update your NixOS configuration, compare the migrated configuration at /var/lib/mautrix-whatsapp/config.yaml with the default configuration (nix-shell -p mautrix-whatsapp --run "mautrix-whatsapp -c example.yaml -e") and update your module configuration accordingly.
Table of Contents
Mautrix-Signal is a Matrix-Signal puppeting bridge.
Set services.mautrix-signal.enable to true. The service will use SQLite by default.
To create your configuration, check the default configuration for services.mautrix-signal.settings. To obtain the complete default configuration, run nix-shell -p mautrix-signal --run "mautrix-signal -c default.yaml -e".
Some Mautrix-Signal options, like encryption.pickle_key and provisioning.shared_secret, allow the value generate to be set. Since the configuration file is regenerated on every start of the service, the generated values would be discarded and might break your installation. Instead, set those values via services.mautrix-signal.environmentFile.
With Mautrix-Signal v0.7.0 the configuration has been rearranged. Mautrix-Signal performs an automatic configuration migration, so your pre-0.7.0 configuration should just continue to work.
In case you want to update your NixOS configuration, compare the migrated configuration at /var/lib/mautrix-signal/config.yaml with the default configuration (nix-shell -p mautrix-signal --run "mautrix-signal -c example.yaml -e") and update your module configuration accordingly.
Table of Contents
Maubot is a plugin-based bot framework for Matrix.
Set services.maubot.enable to true. The service will use SQLite by default.
If you want to use PostgreSQL instead of SQLite, do this:
{ services.maubot.settings.database = "postgresql://maubot@localhost/maubot"; }
If the PostgreSQL connection requires a password, you will have to add it later, in step 8.
If you plan to expose your Maubot interface to the web, do something like this:
{ services.nginx.virtualHosts."matrix.example.org".locations = { "/_matrix/maubot/" = { proxyPass = "http://127.0.0.1:${toString config.services.maubot.settings.server.port}"; proxyWebsockets = true; }; }; services.maubot.settings.server.public_url = "matrix.example.org"; # do the following only if you want to use something other than /_matrix/maubot... services.maubot.settings.server.ui_base_path = "/another/base/path";}Optionally, setservices.maubot.pythonPackages to a list of python3packages to make available for Maubot plugins.
Optionally, set services.maubot.plugins to a list of Maubot plugins (full list available at https://plugins.maubot.xyz/):
{ services.maubot.plugins = with config.services.maubot.package.plugins; [ reactbot # This will only change the default config! After you create a # plugin instance, the default config will be copied into that # instance's config in Maubot's database, and further base config # changes won't affect the running plugin. (rss.override { base_config = { update_interval = 60; max_backoff = 7200; spam_sleep = 2; command_prefix = "rss"; admins = [ "@chayleaf:pavluk.org" ]; }; }) ]; # ...or... services.maubot.plugins = config.services.maubot.package.plugins.allOfficialPlugins; # ...or... services.maubot.plugins = config.services.maubot.package.plugins.allPlugins; # ...or... services.maubot.plugins = with config.services.maubot.package.plugins; [ (weather.override { # you can pass base_config as a string base_config = '' default_location: New York default_units: M default_language: show_link: true show_image: false ''; }) ];}Start Maubot at least once before doing the following steps (it’snecessary to generate the initial config).
If your PostgreSQL connection requires a password, add database: postgresql://user:password@localhost/maubot to /var/lib/maubot/config.yaml. This overrides the Nix-provided config. Even then, don’t remove the database line from the Nix config so the module knows you use PostgreSQL!
To create a user account for logging into the Maubot web UI and configuring it, generate a password using the shell command mkpasswd -R 12 -m bcrypt, and edit /var/lib/maubot/config.yaml with the following:
admins:
  admin_username: $2b$12$g.oIStUeUCvI58ebYoVMtO/vb9QZJo81PsmVOomHiNCFbh0dJpZVa
Where admin_username is your username, and $2b... is the bcrypted password.
Optional: if you want to be able to register new users with the Maubot CLI (mbc), and your homeserver is private, add your homeserver’s registration key to /var/lib/maubot/config.yaml:
homeservers:
  matrix.example.org:
    url: https://matrix.example.org
    secret: your-very-secret-key
Restart Maubot after editing /var/lib/maubot/config.yaml, and Maubot will be available at https://matrix.example.org/_matrix/maubot. If you want to use the mbc CLI, it’s available using the maubot package (nix-shell -p maubot).
Table of Contents
This chapter will show you how to set up your own, self-hosted Draupnir instance.
As an all-in-one moderation tool, it can protect your server from malicious invites, spam messages, and whatever else you don’t want. In addition to server-level protection, Draupnir is great for communities wanting to protect their rooms without having to use their personal accounts for moderation.
The bot by default includes support for bans, redactions, anti-spam, server ACLs, room directory changes, room alias transfers, account deactivation, room shutdown, and more. (This depends on homeserver configuration and implementation.)
See the README page and the Moderator’s guide for additional instructions on how to set up and use Draupnir.
For additional settings see the default configuration.
First create a new unencrypted, private room which will be used as the management room for Draupnir. This is the room in which moderators will interact with Draupnir and where it will log possible errors and debugging information. You’ll need to set this room ID or alias in services.draupnir.settings.managementRoom.
Next, create a new user for Draupnir on your homeserver, if one does not already exist.
The Draupnir Matrix user expects to be free of any rate limiting. See Synapse #6286 for an example of how to achieve this.
If you want Draupnir to be able to deactivate users, move room aliases, shut down rooms, etc.you’ll need to make the Draupnir user a Matrix server admin.
Now invite the Draupnir user to the management room.Draupnir will automatically try to join this room on startup.
{ services.draupnir = { enable = true; settings = { homeserverUrl = "https://matrix.org"; managementRoom = "!yyy:example.org"; }; secrets = { accessToken = "/path/to/secret/containing/access-token"; }; };}If you are using a managed“Element Matrix Services (EMS)”server, you will need to consent to the terms and conditions. Upon startup, an errorlog entry with a URL to the consent page will be generated.
Table of Contents
Mailman is free software for managing electronic mail discussion and e-newsletter lists. Mailman and its web interface can be configured using the corresponding NixOS module. Note that this service is best used with an existing, securely configured Postfix setup, as it does not automatically configure this.
For a basic configuration with Postfix as the MTA, the following settings are suggested:
{ config, ... }:{ services.postfix = { enable = true; relayDomains = [ "hash:/var/lib/mailman/data/postfix_domains" ]; sslCert = config.security.acme.certs."lists.example.org".directory + "/full.pem"; sslKey = config.security.acme.certs."lists.example.org".directory + "/key.pem"; config = { transport_maps = [ "hash:/var/lib/mailman/data/postfix_lmtp" ]; local_recipient_maps = [ "hash:/var/lib/mailman/data/postfix_lmtp" ]; }; }; services.mailman = { enable = true; serve.enable = true; hyperkitty.enable = true; webHosts = [ "lists.example.org" ]; siteOwner = "mailman@example.org"; }; services.nginx.virtualHosts."lists.example.org".enableACME = true; networking.firewall.allowedTCPPorts = [ 25 80 443 ];}DNS records will also be required:
AAAA and A records pointing to the host in question, in order for browsers to be able to discover the address of the web server;
An MX record pointing to a domain name at which the host is reachable, in order for other mail servers to be able to deliver emails to the mailing lists it hosts.
After this has been done and appropriate DNS records have been set up, the Postorius mailing list manager and the Hyperkitty archive browser will be available at https://lists.example.org/. Note that this setup is not sufficient to deliver emails to most email providers nor to avoid spam – a number of additional measures for authenticating incoming and outgoing mails, such as SPF, DMARC and DKIM, are necessary, but outside the scope of the Mailman module.
Mailman also supports other MTAs, though with a little bit more configuration. For example, to use Mailman with Exim, you can use the following settings:
{ config, ... }:{ services = { mailman = { enable = true; siteOwner = "mailman@example.org"; enablePostfix = false; settings.mta = { incoming = "mailman.mta.exim4.LMTP"; outgoing = "mailman.mta.deliver.deliver"; lmtp_host = "localhost"; lmtp_port = "8024"; smtp_host = "localhost"; smtp_port = "25"; configuration = "python:mailman.config.exim4"; }; }; exim = { enable = true; # You can configure Exim in a separate file to reduce configuration.nix clutter config = builtins.readFile ./exim.conf; }; };}The exim config needs some special additions to work with Mailman. CurrentlyNixOS can’t manage Exim config with such granularity. Please refer toMailman documentationfor more info on configuring Mailman for working with Exim.
Trezor is an open-source cryptocurrency hardware wallet and security token allowing secure storage of private keys.
It offers advanced features such as U2F two-factor authorization, SSH login through the Trezor SSH agent, GPG and a password manager. For more information, guides and documentation, see https://wiki.trezor.io.
To enable Trezor support, add the following to your configuration.nix:
services.trezord.enable = true;
This will add all necessary udev rules and start Trezor Bridge.
Table of Contents
This section describes how to customize display configuration using:
kernel modes
EDID files
Example situations it can help you with:
display controllers (external hardware) not advertising EDID at all,
misbehaving graphics drivers,
loading custom display configuration before the Display Manager is running,
In case of a problematic monitor controller and/or video driver combination, you can force the display to be enabled and skip some driver-side checks by adding video=<OUTPUT>:e to boot.kernelParams. This is exactly the case with amdgpu drivers.
{ # force enabled output to skip `amdgpu` checks hardware.display.outputs."DP-1".mode = "e"; # completely disable output no matter what is connected to it hardware.display.outputs."VGA-2".mode = "d"; /* equals boot.kernelParams = [ "video=DP-1:e" "video=VGA-2:d" ]; */}
To make custom EDID binaries discoverable, you should first create a derivation storing them at $out/lib/firmware/edid/ and secondly add that derivation to the hardware.display.edid.packages NixOS option:
{ hardware.display.edid.packages = [ (pkgs.runCommand "edid-custom" { } '' mkdir -p $out/lib/firmware/edid base64 -d > "$out/lib/firmware/edid/custom1.bin" <<'EOF' <insert your base64 encoded EDID file here `base64 < /sys/class/drm/card0-.../edid`> EOF base64 -d > "$out/lib/firmware/edid/custom2.bin" <<'EOF' <insert your base64 encoded EDID file here `base64 < /sys/class/drm/card1-.../edid`> EOF '') ];}There are 2 options significantly easing preparation of EDID files:
hardware.display.edid.linuxhw
hardware.display.edid.modelines
To assign available custom EDID binaries to your monitor (video output), use the hardware.display.outputs."<NAME>".edid option. Under the hood it adds a drm.edid_firmware entry to the boot.kernelParams NixOS option for each configured output:
{ hardware.display.outputs."VGA-1".edid = "custom1.bin"; hardware.display.outputs."VGA-2".edid = "custom2.bin"; /* equals: boot.kernelParams = [ "drm.edid_firmware=VGA-1:edid/custom1.bin,VGA-2:edid/custom2.bin" ]; */}hardware.display.edid.linuxhw utilizespkgs.linuxhw-edid-fetcher to extract EDID filesfrom https://github.com/linuxhw/EDID based on simple string/regexp search identifying exact entries:
{ hardware.display.edid.linuxhw."PG278Q_2014" = [ "PG278Q" "2014" ]; /* equals: hardware.display.edid.packages = [ (pkgs.linuxhw-edid-fetcher.override { displays = { "PG278Q_2014" = [ "PG278Q" "2014" ]; }; }) ]; */}hardware.display.edid.modelines utilizespkgs.edid-generator package allowing you toconveniently useXFree86 Modeline entries as EDID binaries:
{ hardware.display.edid.modelines."PG278Q_60" = " 241.50 2560 2608 2640 2720 1440 1443 1448 1481 -hsync +vsync"; hardware.display.edid.modelines."PG278Q_120" = " 497.75 2560 2608 2640 2720 1440 1443 1448 1525 +hsync -vsync"; /* equals: hardware.display.edid.packages = [ (pkgs.edid-generator.overrideAttrs { clean = true; modelines = '' Modeline "PG278Q_60" 241.50 2560 2608 2640 2720 1440 1443 1448 1481 -hsync +vsync Modeline "PG278Q_120" 497.75 2560 2608 2640 2720 1440 1443 1448 1525 +hsync -vsync ''; }) ]; */}And finally this is a complete working example for a 2014 (first) batch ofAsus PG278Q monitor withamdgpu drivers:
{ hardware.display.edid.modelines."PG278Q_60" = " 241.50 2560 2608 2640 2720 1440 1443 1448 1481 -hsync +vsync"; hardware.display.edid.modelines."PG278Q_120" = " 497.75 2560 2608 2640 2720 1440 1443 1448 1525 +hsync -vsync"; hardware.display.outputs."DP-1".edid = "PG278Q_60.bin"; hardware.display.outputs."DP-1".mode = "e";}Table of Contents
Emacs is an extensible, customizable, self-documenting real-time display editor — and more. At its core is an interpreter for Emacs Lisp, a dialect of the Lisp programming language with extensions to support text editing.
Emacs runs within a graphical desktop environment using the X Window System, but works equally well on a text terminal. Under macOS, a “Mac port” edition is available, which uses Apple’s native GUI frameworks.
Nixpkgs provides a superior environment for running Emacs. It’s simple to create custom builds by overriding the default packages. Chaotic collections of Emacs Lisp code and extensions can be brought under control using declarative package management. NixOS even provides a systemd user service for automatically starting the Emacs daemon.
Emacs can be installed in the normal way for Nix (seePackage Management). In addition, a NixOSservice can be enabled.
Nixpkgs defines several basic Emacs packages. The following are attributes belonging to the pkgs set:
emacs: The latest stable version of Emacs using the GTK 2 widget toolkit.
emacs-nox: Emacs built without any dependency on X11 libraries.
emacsMacport: Emacs with the “Mac port” patches, providing a more native look and feel under macOS.
If those aren’t suitable, then the following imitation Emacs editors are also available in Nixpkgs: Zile, mg, Yi, jmacs.
Emacs includes an entire ecosystem of functionality beyond text editing,including a project planner, mail and news reader, debugger interface,calendar, and more.
Most extensions are obtained with the Emacs packaging system (package.el) from the Emacs Lisp Package Archive (ELPA), MELPA, MELPA Stable, and Org ELPA. Nixpkgs is regularly updated to mirror all these archives.
Under NixOS, you can continue to use package-list-packages and package-install to install packages. You can also declare the set of Emacs packages you need using the derivations from Nixpkgs. The rest of this section discusses declarative installation of Emacs packages through nixpkgs.
The first step to declare the list of packages you want in your Emacs installation is to create a dedicated derivation. This can be done in a dedicated emacs.nix file such as:
emacs.nix)/* This is a nix expression to build Emacs and some Emacs packages I like from source on any distribution where Nix is installed. This will install all the dependencies from the nixpkgs repository and build the binary files without interfering with the host distribution. To build the project, type the following from the current directory: $ nix-build emacs.nix To run the newly compiled executable: $ ./result/bin/emacs*/# The first non-comment line in this file indicates that# the whole file represents a function.{ pkgs ? import <nixpkgs> { },}:let # The let expression below defines a myEmacs binding pointing to the # current stable version of Emacs. This binding is here to separate # the choice of the Emacs binary from the specification of the # required packages. myEmacs = pkgs.emacs; # This generates an emacsWithPackages function. It takes a single # argument: a function from a package set to a list of packages # (the packages that will be available in Emacs). emacsWithPackages = (pkgs.emacsPackagesFor myEmacs).emacsWithPackages; # The rest of the file specifies the list of packages to install. In the # example, two packages (magit and zerodark-theme) are taken from # MELPA stable.inemacsWithPackages ( epkgs: (with epkgs.melpaStablePackages; [ magit # ; Integrate git <C-x g> zerodark-theme # ; Nicolas' theme ]) # Two packages (undo-tree and zoom-frm) are taken from MELPA. ++ (with epkgs.melpaPackages; [ undo-tree # ; <C-x u> to show the undo tree zoom-frm # ; increase/decrease font size for all buffers %lt;C-x C-+> ]) # Three packages are taken from GNU ELPA. ++ (with epkgs.elpaPackages; [ auctex # ; LaTeX mode beacon # ; highlight my cursor when scrolling nameless # ; hide current package name everywhere in elisp code ]) # notmuch is taken from a nixpkgs derivation which contains an Emacs mode. ++ [ pkgs.notmuch # From main packages set ])The result of this configuration will be anemacscommand which launches Emacs with all of your chosen packages in theload-path.
You can check that it works by executing this in a terminal:
$ nix-build emacs.nix
$ ./result/bin/emacs -q
and then typing M-x package-initialize. Check that you can use all the packages you want in this Emacs instance. For example, try switching to the zerodark theme through M-x load-theme <RET> zerodark <RET> y.
A few popular extensions worth checking out are: auctex, company,edit-server, flycheck, helm, iedit, magit, multiple-cursors, projectile,and yasnippet.
The list of available packages in the various ELPA repositories can be seenwith the following commands:
nix-env -f "<nixpkgs>" -qaP -A emacs.pkgs.elpaPackages
nix-env -f "<nixpkgs>" -qaP -A emacs.pkgs.melpaPackages
nix-env -f "<nixpkgs>" -qaP -A emacs.pkgs.melpaStablePackages
nix-env -f "<nixpkgs>" -qaP -A emacs.pkgs.orgPackages
If you are on NixOS, you can install this particular Emacs for all users by putting the emacs.nix file in /etc/nixos and adding it to the list of system packages (see the section called “Declarative Package Management”). Simply modify your file configuration.nix to make it contain:
configuration.nix
{ environment.systemPackages = [ # [...] (import ./emacs.nix { inherit pkgs; }) ];}
In this case, the next nixos-rebuild switch will take care of adding your emacs to the PATH environment variable (see Changing the Configuration).
If you are not on NixOS or want to install this particular Emacs only for yourself, you can do so by putting emacs.nix in ~/.config/nixpkgs and adding it to your ~/.config/nixpkgs/config.nix (see the Nixpkgs manual):
~/.config/nixpkgs/config.nix{ packageOverrides = super: let self = super.pkgs; in { myemacs = import ./emacs.nix { pkgs = self; }; };}In this case, the nextnix-env -f '<nixpkgs>' -iA myemacs will take care of adding your emacs to thePATH environment variable.
If you want, you can tweak the Emacs package itself from your emacs.nix. For example, if you want to have a GTK 3-based Emacs instead of the default GTK 2-based binary and remove the automatically generated emacs.desktop (useful if you only use emacsclient), you can change your file emacs.nix in this way:
{ pkgs ? import <nixpkgs> { },}:let myEmacs = (pkgs.emacs.override { # Use gtk3 instead of the default gtk2 withGTK3 = true; withGTK2 = false; }).overrideAttrs (attrs: { # I don't want emacs.desktop file because I only use # emacsclient. postInstall = (attrs.postInstall or "") + '' rm $out/share/applications/emacs.desktop ''; });in[ # ...]After building this file as shown inExample 7, youwill get an GTK 3-based Emacs binary pre-loaded with your favorite packages.
NixOS provides an optional systemd service which launches the Emacs daemon with the user’s login session.
Source: modules/services/editors/emacs.nix
To install and enable the systemd user service for the Emacs daemon, add the following to your configuration.nix:
{ services.emacs.enable = true; }
The services.emacs.package option allows a custom derivation to be used, for example, one created by emacsWithPackages.
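For instance, a minimal sketch wiring the daemon to the custom build from emacs.nix shown earlier (assuming that file sits next to your configuration.nix):

{ pkgs, ... }:
{
  services.emacs.enable = true;
  # Run the daemon with the Emacs built from emacs.nix instead of pkgs.emacs.
  services.emacs.package = import ./emacs.nix { inherit pkgs; };
}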
Ensure that the Emacs server is enabled for your user’s Emacs configuration, either by customizing the server-mode variable, or by adding (server-start) to ~/.emacs.d/init.el.
To start the daemon, execute the following:
$ nixos-rebuild switch                  # to activate the new configuration.nix
$ systemctl --user daemon-reload        # to force systemd reload
$ systemctl --user start emacs.service  # to start the Emacs daemon
The server should now be ready to serve Emacs clients.
Ensure that the Emacs server is enabled, either by customizing the server-mode variable, or by adding (server-start) to ~/.emacs.
To connect to the Emacs daemon, run one of the following:
emacsclient FILENAME
emacsclient --create-frame        # opens a new frame (window)
emacsclient --create-frame --tty  # opens a new frame on the current terminal
EDITOR variable
If services.emacs.defaultEditor is true, the EDITOR variable will be set to a wrapper script which launches emacsclient.
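A minimal sketch enabling this behaviour alongside the user service:

{
  services.emacs.enable = true;
  # Point EDITOR at a wrapper script that launches emacsclient.
  services.emacs.defaultEditor = true;
}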
Any setting of EDITOR in the shell config files will override services.emacs.defaultEditor. To make sure EDITOR refers to the Emacs wrapper script, remove any existing EDITOR assignment from .profile, .bashrc, .zshenv or any other shell config file.
If you have formed certain bad habits when editing files, these can be corrected with a shell alias to the wrapper script:
alias vi=$EDITOR
In general, systemd user services are globally enabled by symlinks in /etc/systemd/user. In the case where the Emacs daemon is not wanted for all users, it is possible to install the service but not globally enable it:
{ services.emacs.enable = false; services.emacs.install = true;}
To enable the systemd user service for just the currently logged-in user, run:
systemctl --user enable emacs
This will add the symlink ~/.config/systemd/user/emacs.service.
If you want to only use extension packages from Nixpkgs, you can add(setq package-archives nil) to your init file.
After the declarative Emacs package configuration has been tested, previously downloaded packages can be cleaned up by removing ~/.emacs.d/elpa (do make a backup first, in case you forgot a package).
Of interest may be melpaPackages.nix-mode, which provides syntax highlighting for the Nix language. This is particularly convenient if you regularly edit Nix files.
You can use woman to get completion of all available man pages. For example, type M-x woman <RET> nixos-rebuild <RET>.
Table of Contents
Livebook is a web application for writing interactive and collaborative code notebooks.
Enabling the livebook service creates a user systemd unit which runs the server.
{ ... }:
{
  services.livebook = {
    enableUserService = true;
    environment = {
      LIVEBOOK_PORT = 20123;
      LIVEBOOK_PASSWORD = "mypassword";
    };
    # See note below about security
    environmentFile = "/var/lib/livebook.env";
  };
}

The Livebook server has the ability to run any command as the user it is running under, so securing access to it with a password is highly recommended.
Putting the password in the Nix configuration like above is an easy way to get started but it is not recommended in the real world because the resulting environment variables can be read by unprivileged users. A better approach would be to put the password in some secure user-readable location and set environmentFile = /home/user/secure/livebook.env.
The Livebook documentation lists all the applicable environment variables. It is recommended to at least set LIVEBOOK_PASSWORD or LIVEBOOK_TOKEN_ENABLED=false.
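As a rough sketch, the environment file referenced above could contain nothing more than the password variable (the path and value are placeholders matching the example earlier in this section):

# /var/lib/livebook.env -- must only be readable by the user running the service
LIVEBOOK_PASSWORD="a-long-and-unguessable-passphrase"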
By default, the Livebook service is run with minimum dependencies, but some features require additional packages. For example, the machine learning Kinos require gcc and gnumake. To add these, use extraPackages:
{
  services.livebook.extraPackages = with pkgs; [
    gcc
    gnumake
  ];
}

Source: modules/services/development/blackfire.nix
Upstream documentation: https://blackfire.io/docs/introduction
Blackfire is a proprietary tool for profiling applications. There are several languages supported by the product but currently only PHP support is packaged in Nixpkgs. The back-end consists of a module that is loaded into the language runtime (calledprobe) and a service (agent) that the probe connects to and that sends the profiles to the server.
To use it, you will need to enable the agent and the probe on your server. The exact method will depend on the way you use PHP but here is an example of NixOS configuration for PHP-FPM:
let
  php = pkgs.php.withExtensions ({ enabled, all }: enabled ++ (with all; [ blackfire ]));
in
{
  # Enable the probe extension for PHP-FPM.
  services.phpfpm = {
    phpPackage = php;
  };

  # Enable and configure the agent.
  services.blackfire-agent = {
    enable = true;
    settings = {
      # You will need to get credentials at https://blackfire.io/my/settings/credentials
      # You can also use other options described in https://blackfire.io/docs/up-and-running/configuration/agent
      server-id = "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX";
      server-token = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX";
    };
  };

  # Make the agent run on start-up.
  # (WantedBy= from the upstream unit not respected: https://github.com/NixOS/nixpkgs/issues/81138)
  # Alternately, you can start it manually with `systemctl start blackfire-agent`.
  systemd.services.blackfire-agent.wantedBy = [ "phpfpm-foo.service" ];
}

On your developer machine, you will also want to install the client (see the blackfire package) or the browser extension to actually trigger the profiling.
Source: modules/services/development/athens.nix
Upstream documentation: https://docs.gomods.io/
Athens is a Go module datastore and proxy.
The main goal of Athens is providing a Go proxy ($GOPROXY) in regions without access to https://proxy.golang.org or to improve the speed of Go module downloads for CI/CD systems.
A complete list of options for the Athens module may be found here.
A very basic configuration for Athens that acts as a caching and forwarding HTTP proxy is:
{
  services.athens = {
    enable = true;
  };
}

If you want to prevent Athens from writing to disk, you can instead configure it to cache modules only in memory:
{
  services.athens = {
    enable = true;
    storageType = "memory";
  };
}

To use the local proxy in Go builds (outside of nix), you can set the proxy as an environment variable:
{
  environment.variables = {
    GOPROXY = "http://localhost:3000";
  };
}

To also use the local proxy for Go builds happening in nix (with buildGoModule), the nix daemon can be configured to pass the GOPROXY environment variable to the goModules fixed-output derivation.
This can either be done via the nix-daemon systemd unit:
{ systemd.services.nix-daemon.environment.GOPROXY = "http://localhost:3000"; }

or via the impure-env experimental feature:
{
  nix.settings.experimental-features = [ "configurable-impure-env" ];
  nix.settings.impure-env = "GOPROXY=http://localhost:3000";
}

Source: modules/services/desktop/flatpak.nix
Upstream documentation: https://github.com/flatpak/flatpak/wiki
Flatpak is a system for building, distributing, and running sandboxed desktop applications on Linux.
To enable Flatpak, add the following to your configuration.nix:
{ services.flatpak.enable = true; }

For the sandboxed apps to work correctly, desktop integration portals need to be installed. If you run GNOME, this will be handled automatically for you; in other cases, you will need to add something like the following to your configuration.nix:
{
  xdg.portal.extraPortals = [ pkgs.xdg-desktop-portal-gtk ];
  xdg.portal.config.common.default = "gtk";
}

Then, you will need to add a repository, for example, Flathub, either using the following commands:
$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
$ flatpak update

or by opening the repository file in GNOME Software.
Finally, you can search and install programs:
$ flatpak search bustle
$ flatpak install flathub org.freedesktop.Bustle
$ flatpak run org.freedesktop.Bustle

Again, GNOME Software offers a graphical interface for these tasks.
Source: modules/services/databases/tigerbeetle.nix
Upstream documentation: https://docs.tigerbeetle.com/
TigerBeetle is a distributed financial accounting database designed for mission critical safety and performance.
To enable TigerBeetle, add the following to your configuration.nix:
{ services.tigerbeetle.enable = true; }

When first started, the TigerBeetle service will create its data file at /var/lib/tigerbeetle unless the file already exists, in which case it will just use the existing file. If you make changes to the configuration of TigerBeetle after its data file was already created (for example, increasing the replica count), you may need to remove the existing file to avoid conflicts.
By default, TigerBeetle will only listen on a local interface. To configure it to listen on a different interface (and to configure it to connect to other replicas, if you’re creating more than one), you’ll have to set the addresses option. Note that the TigerBeetle module won’t open any firewall ports automatically, so if you configure it to listen on an external interface, you’ll need to ensure that connections can reach it:
{
  services.tigerbeetle = {
    enable = true;
    addresses = [ "0.0.0.0:3001" ];
  };
  networking.firewall.allowedTCPPorts = [ 3001 ];
}

A complete list of options for TigerBeetle can be found here.
Usually, TigerBeetle’s upgrade process only requires replacing the binary used for the servers. This is not directly possible with NixOS since the new binary will be located at a different place in the Nix store.
However, since TigerBeetle is managed through systemd on NixOS, the only action you need to take when upgrading is to make sure the version of TigerBeetle you’re upgrading to supports upgrades from the version you’re currently running. This information will be in the release notes for the version you’re upgrading to.
Source: modules/services/databases/postgresql.nix
Upstream documentation: https://www.postgresql.org/docs/
PostgreSQL is an advanced, free, relational database.
To enable PostgreSQL, add the following to your configuration.nix:
{
  services.postgresql.enable = true;
  services.postgresql.package = pkgs.postgresql_15;
}

The default PostgreSQL version is approximately the latest major version available on the NixOS release matching your system.stateVersion. This is because PostgreSQL upgrades require a manual migration process (see below). Hence, upgrades must happen by setting services.postgresql.package explicitly.
By default, PostgreSQL stores its databases in /var/lib/postgresql/$psqlSchema. You can override this using services.postgresql.dataDir, e.g.
{ services.postgresql.dataDir = "/data/postgresql"; }

As of NixOS 24.05, services.postgresql.ensureUsers.*.ensurePermissions has been removed, after a change to default permissions in PostgreSQL 15 invalidated most of its previous use cases:
In psql < 15, ALL PRIVILEGES used to include CREATE TABLE, whereas in psql >= 15 that is a separate permission
psql >= 15 instead gives only the database owner create permissions
Even on psql < 15 (or databases migrated to >= 15), it is recommended to manually assign permissions along these lines
https://www.postgresql.org/docs/release/15.0/
https://www.postgresql.org/docs/15/ddl-schemas.html#DDL-SCHEMAS-PRIV
Usually, the database owner should be a database user of the same name. This can be done with services.postgresql.ensureUsers.*.ensureDBOwnership = true;.
If the database user name equals the connecting system user name, postgres by default will accept a passwordless connection via unix domain socket. This makes it possible to run many postgres-backed services without creating any database secrets at all.
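For example, a minimal sketch combining both of these (a hypothetical service1 database whose owner is a database user of the same name as the connecting system user) could look like:

{
  services.postgresql = {
    enable = true;
    ensureDatabases = [ "service1" ];
    ensureUsers = [
      {
        name = "service1";
        ensureDBOwnership = true;
      }
    ];
  };
}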
For many cases, it will be enough to have the database user be the owner. Until services.postgresql.ensureUsers.*.ensurePermissions has been re-thought, if more users need access to the database, please use one of the following approaches:
WARNING: services.postgresql.initialScript is not recommended as an ensurePermissions replacement, as it is only run on the first start of PostgreSQL.
NOTE: all of these methods may be obsoleted when ensure* is reworked, but it is expected that they will stay viable for running database migrations.
NOTE: please make sure that any added migrations are idempotent (re-runnable).
As the database superuser (postgres):

Advantage: compatible with postgres < 15, because it’s run as the database superuser postgres.

In postgresql.postStart:

Disadvantage: need to take care of ordering yourself. In this example, mkAfter ensures that permissions are assigned after any databases from ensureDatabases and extraUser1 from ensureUsers are already created.

{
  systemd.services.postgresql.postStart = lib.mkAfter ''
    $PSQL service1 -c 'GRANT SELECT ON ALL TABLES IN SCHEMA public TO "extraUser1"'
    $PSQL service1 -c 'GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO "extraUser1"'
    # ....
  '';
}

Or in an intermediate oneshot service:

{
  systemd.services."migrate-service1-db1" = {
    serviceConfig.Type = "oneshot";
    requiredBy = "service1.service";
    before = "service1.service";
    after = "postgresql.service";
    serviceConfig.User = "postgres";
    environment.PSQL = "psql --port=${toString services.postgresql.settings.port}";
    path = [ postgresql ];
    script = ''
      $PSQL service1 -c 'GRANT SELECT ON ALL TABLES IN SCHEMA public TO "extraUser1"'
      $PSQL service1 -c 'GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO "extraUser1"'
      # ....
    '';
  };
}

As the service user:

Advantage: re-uses systemd’s dependency ordering.

Disadvantage: relies on the service user having grant permission. To be combined with ensureDBOwnership.

In the service’s preStart:

{
  environment.PSQL = "psql --port=${toString services.postgresql.settings.port}";
  path = [ postgresql ];
  systemd.services."service1".preStart = ''
    $PSQL -c 'GRANT SELECT ON ALL TABLES IN SCHEMA public TO "extraUser1"'
    $PSQL -c 'GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO "extraUser1"'
    # ....
  '';
}

Or in an intermediate oneshot service running as the service user:

{
  systemd.services."migrate-service1-db1" = {
    serviceConfig.Type = "oneshot";
    requiredBy = "service1.service";
    before = "service1.service";
    after = "postgresql.service";
    serviceConfig.User = "service1";
    environment.PSQL = "psql --port=${toString services.postgresql.settings.port}";
    path = [ postgresql ];
    script = ''
      $PSQL -c 'GRANT SELECT ON ALL TABLES IN SCHEMA public TO "extraUser1"'
      $PSQL -c 'GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO "extraUser1"'
      # ....
    '';
  };
}

Local connections are made through unix sockets by default and support peer authentication. This allows system users to log in with database roles of the same name. For example, the postgres system user is allowed to log in with the database role postgres.
System users and database roles might not always match. In this case, to allow access for a service, you can create a user name map between system roles and an existing database role.
Assume that your app creates a role admin and you want the root user to be able to log in with it. You can then use services.postgresql.identMap to define the map and services.postgresql.authentication to enable it:
{
  services.postgresql = {
    identMap = ''
      admin root admin
    '';
    authentication = ''
      local all admin peer map=admin
    '';
  };
}

To avoid conflicts with other modules, you should never apply a map to all roles. Because PostgreSQL will stop on the first matching line in pg_hba.conf, a line matching all roles would lock out other services. Each module should only manage user maps for the database roles that belong to this module. Best practice is to name the map after the database role it manages to avoid name conflicts.
The steps below demonstrate how to upgrade from an older version to pkgs.postgresql_13. These instructions are also applicable to other versions.
Major PostgreSQL upgrades require downtime and a few imperative steps to be called. This is the case because each major version has some internal changes in the databases’ state. Because of that, NixOS places the state into /var/lib/postgresql/<version> where each version can be obtained like this:
$ nix-instantiate --eval -A postgresql_13.psqlSchema
"13"

For an upgrade, a script like this can be used to simplify the process:
{ config, lib, pkgs, ... }:
{
  environment.systemPackages = [
    (
      let
        # XXX specify the postgresql package you'd like to upgrade to.
        # Do not forget to list the extensions you need.
        newPostgres = pkgs.postgresql_13.withPackages (pp: [
          # pp.plv8
        ]);
        cfg = config.services.postgresql;
      in
      pkgs.writeScriptBin "upgrade-pg-cluster" ''
        set -eux
        # XXX it's perhaps advisable to stop all services that depend on postgresql
        systemctl stop postgresql

        export NEWDATA="/var/lib/postgresql/${newPostgres.psqlSchema}"
        export NEWBIN="${newPostgres}/bin"

        export OLDDATA="${cfg.dataDir}"
        export OLDBIN="${cfg.finalPackage}/bin"

        install -d -m 0700 -o postgres -g postgres "$NEWDATA"
        cd "$NEWDATA"
        sudo -u postgres "$NEWBIN/initdb" -D "$NEWDATA" ${lib.escapeShellArgs cfg.initdbArgs}

        sudo -u postgres "$NEWBIN/pg_upgrade" \
          --old-datadir "$OLDDATA" --new-datadir "$NEWDATA" \
          --old-bindir "$OLDBIN" --new-bindir "$NEWBIN" \
          "$@"
      ''
    )
  ];
}

The upgrade process is:
Add the above to your configuration.nix and rebuild. Alternatively, add that into a separate file and reference it in the imports list.
Log in as root (sudo su -).
Run upgrade-pg-cluster. This will stop the old postgresql cluster, initialize a new one and migrate the old one to the new one. You may supply arguments like --jobs 4 and --link to speed up the migration process. See https://www.postgresql.org/docs/current/pgupgrade.html for details.
Change the postgresql package in NixOS configuration to the one you were upgrading to via services.postgresql.package. Rebuild NixOS. This should start the new postgres version using the upgraded data directory and all services you stopped during the upgrade.
After the upgrade it’s advisable to analyze the new cluster:
For PostgreSQL ≥ 14, use the vacuumdb command printed by the upgrade script.
For PostgreSQL < 14, run (as su -l postgres in the services.postgresql.dataDir, in this example /var/lib/postgresql/13):
$ ./analyze_new_cluster.sh

The next step removes the old state-directory!
$ ./delete_old_cluster.sh

PostgreSQL’s versioning policy is described here. TL;DR:
Each major version is supported for 5 years.
Every three months there will be a new minor release, containing bug and security fixes.
For critical/security fixes there could be more minor releases in between. This happens very infrequently.
After five years, a final minor version is released. This usually happens in early November.
After that a version is considered end-of-life (EOL).
Around February of the following year is thus the first time an EOL’ed release will have missed a regular minor update.
Technically, we’d not want to have EOL’ed packages in a stable NixOS release, which is to be supported until one month after the next release. Thus, with NixOS’ release schedule in May and November, the oldest PostgreSQL version in nixpkgs would have to be supported until December. It could be argued that a soon-to-be-EOL-ed version should thus be removed in May for the .05 release already. But since new security vulnerabilities are first disclosed in February of the following year, we agreed on keeping the oldest PostgreSQL major version around one more cycle in #310580.
Thus:
In September/October the new major version will be released and added to nixos-unstable.
In November the last minor version for the oldest major will be released.
Both the current stable .05 release and nixos-unstable should be updated to the latest minor that will usually be released in November.
This is relevant for people who need to use this major for as long as possible. In that case it’s desirable to be able to pin nixpkgs to a commit that still has it, at the latest minor available (see the sketch below).
In November, before branch-off for the .11 release and after the update to the latest minor, the EOL-ed major will be removed from nixos-unstable.
This leaves a small gap of a couple of weeks between the latest minor release and the end of our support window for the .05 release, in which there could be an emergency release to other major versions of PostgreSQL - but not the oldest major we have in that branch. In that case: If we can’t trivially patch the issue, we will mark the package/version as insecure immediately.
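As a rough sketch of the pinning mentioned above (the revision is a placeholder; substitute the nixpkgs commit that still carries the major you need at its latest minor):

{ config, pkgs, ... }:
let
  pinnedPkgs = import (builtins.fetchTarball {
    # Hypothetical revision; replace with the actual commit hash you want to pin.
    url = "https://github.com/NixOS/nixpkgs/archive/<commit-hash>.tar.gz";
  }) { };
in
{
  services.postgresql.enable = true;
  services.postgresql.package = pinnedPkgs.postgresql_13;
}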
pg_config

pg_config is not part of the postgresql package itself. It is available under postgresql_<major>.pg_config and libpq.pg_config. Use the pg_config from the postgresql package you’re using in your build.
Also, pg_config is a shell script that replicates the behavior of the upstream pg_config and ensures at build-time that the output doesn’t change.
This approach is done for the following reasons:
By using a shell script, cross compilation of extensions is made easier.
The separation allowed a massive reduction of the runtime closure’s size. Any attempts to move pg_config into $dev resulted in brittle and more complex solutions (see commits 0c47767, 435f51c).
pg_config is only needed to build extensions or, in some exceptions, for building client libraries linking to libpq.so. If such a build works without pg_config, this is strictly preferable over adding pg_config to the build environment.
With the current approach it’s now explicit that this is needed.
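As an illustration only, a hypothetical out-of-tree extension that does need pg_config at build time might be packaged along these lines (the package name and source are placeholders):

{ stdenv, fetchFromGitHub, postgresql_17 }:

stdenv.mkDerivation {
  pname = "my-pg-extension"; # hypothetical extension, for illustration
  version = "0.1.0";
  src = fetchFromGitHub {
    # path to the extension's upstream repository
  };
  # pg_config is only needed at build time, so it belongs in nativeBuildInputs;
  # use the one from the PostgreSQL major you are building against.
  nativeBuildInputs = [ postgresql_17.pg_config ];
  buildInputs = [ postgresql_17 ];
  # build and install phases omitted for brevity
}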
A complete list of options for the PostgreSQL module may be found here.
The collection of plugins for each PostgreSQL version can be accessed with .pkgs. For example, for the pkgs.postgresql_15 package, its plugin collection is accessed by pkgs.postgresql_15.pkgs:
$ nix repl '<nixpkgs>'

Loading '<nixpkgs>'...
Added 10574 variables.

nix-repl> postgresql_15.pkgs.<TAB><TAB>
postgresql_15.pkgs.cstore_fdw        postgresql_15.pkgs.pg_repack
postgresql_15.pkgs.pg_auto_failover  postgresql_15.pkgs.pg_safeupdate
postgresql_15.pkgs.pg_bigm           postgresql_15.pkgs.pg_similarity
postgresql_15.pkgs.pg_cron           postgresql_15.pkgs.pg_topn
postgresql_15.pkgs.pg_hll            postgresql_15.pkgs.pgjwt
postgresql_15.pkgs.pg_partman        postgresql_15.pkgs.pgroonga
...

To add plugins via NixOS configuration, set services.postgresql.extensions:
{
  services.postgresql.package = pkgs.postgresql_17;
  services.postgresql.extensions = ps: with ps; [
    pg_repack
    postgis
  ];
}

You can build a custom postgresql-with-plugins (to be used outside of NixOS) using the function .withPackages. For example, creating a custom PostgreSQL package in an overlay can look like this:
self: super: {
  postgresql_custom = self.postgresql_17.withPackages (ps: [
    ps.pg_repack
    ps.postgis
  ]);
}

Here’s a recipe on how to override a particular plugin through an overlay:
self: super: {
  postgresql_15 = super.postgresql_15 // {
    pkgs = super.postgresql_15.pkgs // {
      pg_repack = super.postgresql_15.pkgs.pg_repack.overrideAttrs (_: {
        name = "pg_repack-v20181024";
        src = self.fetchzip {
          url = "https://github.com/reorg/pg_repack/archive/923fa2f3c709a506e111cc963034bf2fd127aa00.tar.gz";
          sha256 = "17k6hq9xaax87yz79j773qyigm4fwk8z4zh5cyp6z0sxnwfqxxw5";
        };
      });
    };
  };
}

PostgreSQL ships the additional procedural languages PL/Perl, PL/Python and PL/Tcl as extensions. They are packaged as plugins and can be made available in the same way as external extensions:
{
  services.postgresql.extensions = ps: with ps; [
    plperl
    plpython3
    pltcl
  ];
}

Each procedural language plugin provides a .withPackages helper to make language-specific packages available at run-time.
For example, to make python3Packages.base58 available:
{
  services.postgresql.extensions = pgps: with pgps; [
    (plpython3.withPackages (pyps: with pyps; [ base58 ]))
  ];
}

This currently works for:
plperl by re-using perl.withPackages
plpython3 by re-using python3.withPackages
plr by exposing rPackages
pltcl by exposing tclPackages
JIT support in the PostgreSQL package is disabled by default because of the ~600MiB closure-size increase from the LLVM dependency. It can be optionally enabled in PostgreSQL with the following config option:
{ services.postgresql.enableJIT = true; }

This makes sure that the jit setting is set to on and a PostgreSQL package with JIT enabled is used. Further tweaking of the JIT compiler, e.g. setting a different query cost threshold via jit_above_cost, can be done manually via services.postgresql.settings.
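For example, to lower the cost threshold above which queries are JIT-compiled (the value here is only illustrative; the PostgreSQL default is 100000):

{
  services.postgresql = {
    enableJIT = true;
    settings.jit_above_cost = 50000;
  };
}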
The attribute names of JIT-enabled PostgreSQL packages are suffixed with _jit, i.e. for each pkgs.postgresql (and pkgs.postgresql_<major>) in nixpkgs there’s also a pkgs.postgresql_jit (and pkgs.postgresql_<major>_jit). Alternatively, a JIT-enabled variant can be derived from a given postgresql package via postgresql.withJIT. This is also useful if it’s not clear which attribute from nixpkgs was originally used (e.g. when working with config.services.postgresql.package or if the package was modified via an overlay) since all modifications are propagated to withJIT. I.e.
with import <nixpkgs> {
  overlays = [
    (self: super: {
      postgresql = super.postgresql.overrideAttrs (_: {
        pname = "foobar";
      });
    })
  ];
};
postgresql.withJIT.pname

evaluates to "foobar".
The service created by the postgresql module uses several common hardening options from systemd, most notably:
Memory pages must not be both writable and executable (this only applies to non-JIT setups).
A system call filter (see systemd.exec(5) for details on @system-service).
A stricter default UMask (0027).
Only sockets of type AF_INET/AF_INET6/AF_NETLINK/AF_UNIX allowed.
Restricted filesystem access (private /tmp, most of the file-system hierarchy is mounted read-only, only process directories in /proc that are owned by the same user).
When using TABLESPACEs, make sure to add the filesystem paths to ReadWritePaths like this:
{
  systemd.services.postgresql.serviceConfig.ReadWritePaths = [ "/path/to/tablespace/location" ];
}

The NixOS module also contains necessary adjustments for extensions from nixpkgs, if these are enabled. If an extension or a postgresql feature from nixpkgs breaks with hardening, it’s considered a bug.
When using extensions that are not packaged in nixpkgs, hardening adjustments may become necessary.
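As a sketch of what such an adjustment could look like (which knob needs loosening depends entirely on the extension; the option below is only an example): an unpackaged extension that generates and executes code at runtime might need the writable/executable memory restriction mentioned above relaxed on the unit:

{
  # Hypothetical example: relax W^X hardening for an out-of-tree extension.
  systemd.services.postgresql.serviceConfig.MemoryDenyWriteExecute = lib.mkForce false;
}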
To avoid circular dependencies between default and -dev outputs, the output of the pg_config system view has been removed.
Source: modules/services/databases/foundationdb.nix
Upstream documentation: https://apple.github.io/foundationdb/
Maintainer: Austin Seipp
Available version(s): 7.1.x
FoundationDB (or “FDB”) is an open source, distributed, transactional key-value store.
To enable FoundationDB, add the following to your configuration.nix:
{
  services.foundationdb.enable = true;
  services.foundationdb.package = pkgs.foundationdb73; # FoundationDB 7.3.x
}

The services.foundationdb.package option is required and must always be specified. Because FoundationDB network protocols and on-disk storage formats may change between (major) versions, and upgrades must be explicitly handled by the user, you must always manually specify this yourself so that the NixOS module will use the proper version. Note that minor, bugfix releases are always compatible.
After running nixos-rebuild, you can verify whether FoundationDB is running by executing fdbcli (which is added to environment.systemPackages):
$ sudo -u foundationdb fdbcli
Using cluster file `/etc/foundationdb/fdb.cluster'.

The database is available.

Welcome to the fdbcli. For help, type `help'.
fdb> status

Using cluster file `/etc/foundationdb/fdb.cluster'.

Configuration:
  Redundancy mode        - single
  Storage engine         - memory
  Coordinators           - 1

Cluster:
  FoundationDB processes - 1
  Machines               - 1
  Memory availability    - 5.4 GB per process on machine with least available
  Fault Tolerance        - 0 machines
  Server time            - 04/20/18 15:21:14

...

fdb>

You can also write programs using the available client libraries. For example, the following Python program can be run in order to grab the cluster status, as a quick example. (This example uses nix-shell shebang support to automatically supply the necessary Python modules.)
a@link> cat fdb-status.py
#! /usr/bin/env nix-shell
#! nix-shell -i python -p python pythonPackages.foundationdb73

import fdb
import json

def main():
    fdb.api_version(520)
    db = fdb.open()

    @fdb.transactional
    def get_status(tr):
        return str(tr['\xff\xff/status/json'])

    obj = json.loads(get_status(db))
    print('FoundationDB available: %s' % obj['client']['database_status']['available'])

if __name__ == "__main__":
    main()
a@link> chmod +x fdb-status.py
a@link> ./fdb-status.py
FoundationDB available: True
a@link>

FoundationDB is run under the foundationdb user and group by default, but this may be changed in the NixOS configuration. The systemd unit foundationdb.service controls the fdbmonitor process.
By default, the NixOS module for FoundationDB creates a single SSD-storagebased database for development and basic usage. This storage engine isdesigned for SSDs and will perform poorly on HDDs; however it can handle farmore data than the alternative “memory” engine and is a better defaultchoice for most deployments. (Note that you can change the storage backendon-the-fly for a given FoundationDB cluster usingfdbcli.)
Furthermore, only 1 server process and 1 backup agent are started in thedefault configuration. See below for more on scaling to increase this.
FoundationDB stores all data for all server processes under/var/lib/foundationdb. You can override this usingservices.foundationdb.dataDir, e.g.
{ services.foundationdb.dataDir = "/data/fdb"; }

Similarly, logs are stored under /var/log/foundationdb by default, and there is a corresponding services.foundationdb.logDir option as well.
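For example (the path here is arbitrary and only for illustration):

{ services.foundationdb.logDir = "/data/fdb-logs"; }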
Scaling the number of server processes is quite easy; simply specify services.foundationdb.serverProcesses to be the number of FoundationDB worker processes that should be started on the machine.
FoundationDB worker processes typically require 4GB of RAM per process at minimum for good performance, so this option is set to 1 by default since the maximum amount of RAM is unknown. You’re advised to abide by this restriction, so pick a number of processes so that each has 4GB or more.
A similar option exists in order to scale backup agent processes, services.foundationdb.backupProcesses. Backup agents are not as performance/RAM sensitive, so feel free to experiment with the number of available backup processes.
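As a sketch, on a machine with (say) 16GB or more of RAM set aside for FoundationDB, the two options might be raised like this (the numbers are only an example):

{
  services.foundationdb.serverProcesses = 4;
  services.foundationdb.backupProcesses = 2;
}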
FoundationDB on NixOS works similarly to other Linux systems, so thissection will be brief. Please refer to the full FoundationDB documentationfor more on clustering.
FoundationDB organizes clusters using a set ofcoordinators, which are just specially-designatedworker processes. By default, every installation of FoundationDB on NixOSwill start as its own individual cluster, with a single coordinator: thefirst worker process onlocalhost.
Coordinators are specified globally using the/etc/foundationdb/fdb.cluster file, which all servers andclient applications will use to find and join coordinators. Note that thisfilecan not be managed by NixOS so easily:FoundationDB is designed so that it will rewrite the file at runtime for allclients and nodes when cluster coordinators change, with clientstransparently handling this without intervention. It is fundamentally amutable file, and you should not try to manage it in any way in NixOS.
When dealing with a cluster, there are two main things you want to do:
Add a node to the cluster for storage/compute.
Promote an ordinary worker to a coordinator.
A node must already be a member of the cluster in order to properly be promoted to a coordinator, so you must always add it first if you wish to promote it.
To add a machine to a FoundationDB cluster:
Choose one of the servers to start as the initial coordinator.
Copy the/etc/foundationdb/fdb.cluster file from thisserver to all the other servers. Restart FoundationDB on all of theseother servers, so they join the cluster.
All of these servers are now connected and working together in thecluster, under the chosen coordinator.
At this point, you can add as many nodes as you want by just repeating theabove steps. By default there will still be a single coordinator: you canusefdbcli to change this and add new coordinators.
As a convenience, FoundationDB can automatically assign coordinators based on the redundancy mode you wish to achieve for the cluster. Once all the nodes have been joined, simply set the replication policy, and then issue the coordinators auto command.
For example, assuming we have 3 nodes available, we can enable double redundancy mode, then auto-select coordinators. For double redundancy, 3 coordinators is ideal: therefore FoundationDB will make every node a coordinator automatically:
fdbcli> configure double ssd
fdbcli> coordinators auto

This will transparently update all the servers within seconds, and appropriately rewrite the fdb.cluster file, as well as informing all client processes to do the same.
By default, all clients must use the current fdb.cluster file to access a given FoundationDB cluster. This file is located by default in /etc/foundationdb/fdb.cluster on all machines with the FoundationDB service enabled, so you may copy the active one from your cluster to a new node in order to connect, if it is not part of the cluster.
By default, any user who can connect to a FoundationDB process with the correct cluster configuration can access anything. FoundationDB uses a pluggable design for transport security, and out of the box it supports a LibreSSL-based plugin for TLS support. This plugin not only does in-flight encryption, but also performs client authorization based on the given endpoint’s certificate chain. For example, a FoundationDB server may be configured to only accept client connections over TLS, where the client TLS certificate is from organization Acme Co in the Research and Development unit.
Configuring TLS with FoundationDB is done using theservices.foundationdb.tls options in order to control thepeer verification string, as well as the certificate and its private key.
Note that the certificate and its private key must be accessible to theFoundationDB user account that the server runs under. These files are alsoNOT managed by NixOS, as putting them into the store may reveal privateinformation.
After you have a key and certificate file in place, it is not enough tosimply set the NixOS module options – you must also configure thefdb.cluster file to specify that a given set ofcoordinators use TLS. This is as simple as adding the suffix:tls to your cluster coordinator configuration, after theport number. For example, assuming you have a coordinator on localhost withthe default configuration, simply specifying:
XXXXXX:XXXXXX@127.0.0.1:4500:tls

will configure all clients and server processes to use TLS from now on.
The usual rules for doing FoundationDB backups apply on NixOS as written inthe FoundationDB manual. However, one important difference is the securityprofile for NixOS: by default, thefoundationdb systemdunit usesLinux namespaces to restrict write access tothe system, except for the log directory, data directory, and the/etc/foundationdb/ directory. This is enforced by defaultand cannot be disabled.
However, a side effect of this is that thefdbbackupcommand doesn’t work properly for local filesystem backups: FoundationDBuses a server process alongside the database processes to perform backupsand copy the backups to the filesystem. As a result, this process is putunder the restricted namespaces above: the backup process can only write toa limited number of paths.
In order to allow flexible backup locations on local disks, the FoundationDBNixOS module supports aservices.foundationdb.extraReadWritePaths option. Thisoption takes a list of paths, and adds them to the systemd unit, allowingthe processes inside the service to write (and read) the specifieddirectories.
For example, to create backups in/opt/fdb-backups, firstset up the paths in the module options:
{ services.foundationdb.extraReadWritePaths = [ "/opt/fdb-backups" ]; }

Restart the FoundationDB service, and it will now be able to write to this directory. Note: this path must exist before restarting the unit. Otherwise, systemd will not include it in the private FoundationDB namespace (and it will not add it dynamically at runtime).
You can now perform a backup:
$ sudo -u foundationdb fdbbackup start -t default -d file:///opt/fdb-backups
$ sudo -u foundationdb fdbbackup status -t default

The FoundationDB setup for NixOS should currently be considered beta. FoundationDB is not new software, but the NixOS compilation and integration has only undergone fairly basic testing of all the available functionality.
There is no way to specify individual parameters for individual fdbserver processes. Currently, all server processes inherit all the global fdbmonitor settings.
Ruby bindings are not currently installed.
Go bindings are not currently installed.
NixOS’s FoundationDB module allows you to configure all of the most relevant configuration options for fdbmonitor, matching it quite closely. A complete list of options for the FoundationDB module may be found here. You should also read the FoundationDB documentation.
FoundationDB is a complex piece of software, and requires careful administration to properly use. Full documentation for administration can be found here: https://apple.github.io/foundationdb/.
Source: modules/services/backup/borgbackup.nix
Upstream documentation: https://borgbackup.readthedocs.io/
BorgBackup (short: Borg) is a deduplicating backup program. Optionally, it supports compression and authenticated encryption.
The main goal of Borg is to provide an efficient and secure way to back up data. The data deduplication technique used makes Borg suitable for daily backups since only changes are stored. The authenticated encryption technique makes it suitable for backups to not fully trusted targets.
A complete list of options for the BorgBackup module may be found here.
A very basic configuration for backing up to a locally accessible directory is:
{
  services.borgbackup.jobs = {
    rootBackup = {
      paths = "/";
      exclude = [
        "/nix"
        "/path/to/local/repo"
      ];
      repo = "/path/to/local/repo";
      doInit = true;
      encryption = {
        mode = "repokey";
        passphrase = "secret";
      };
      compression = "auto,lzma";
      startAt = "weekly";
    };
  };
}

If you do not want the passphrase to be stored in the world-readable Nix store, use passCommand. You find an example below.
You should use a different SSH key for each repository you write to, because the specified keys are restricted to running borg serve and can only access this single repository. You need the output of the generated public key file:
# sudo ssh-keygen -N '' -t ed25519 -f /run/keys/id_ed25519_my_borg_repo
# cat /run/keys/id_ed25519_my_borg_repo.pub
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID78zmOyA+5uPG4Ot0hfAy+sLDPU1L4AiIoRYEIVbbQ/ root@nixos

Add the following snippet to your NixOS configuration:
{
  services.borgbackup.repos = {
    my_borg_repo = {
      authorizedKeys = [
        "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID78zmOyA+5uPG4Ot0hfAy+sLDPU1L4AiIoRYEIVbbQ/ root@nixos"
      ];
      path = "/var/lib/my_borg_repo";
    };
  };
}

The following NixOS snippet creates an hourly backup to the service (on the host nixos) as created in the section above. We assume that you have stored a secret passphrase in the file /run/keys/borgbackup_passphrase, which should only be accessible by root.
{
  services.borgbackup.jobs = {
    backupToLocalServer = {
      paths = [ "/etc/nixos" ];
      doInit = true;
      repo = "borg@nixos:.";
      encryption = {
        mode = "repokey-blake2";
        passCommand = "cat /run/keys/borgbackup_passphrase";
      };
      environment = {
        BORG_RSH = "ssh -i /run/keys/id_ed25519_my_borg_repo";
      };
      compression = "auto,lzma";
      startAt = "hourly";
    };
  };
}

The following few commands (run as root) let you test your backup.
> nixos-rebuild switch
...restarting the following units: polkit.service
> systemctl restart borgbackup-job-backupToLocalServer
> sleep 10
> systemctl restart borgbackup-job-backupToLocalServer
> export BORG_PASSPHRASE=topSecret
> borg list --rsh='ssh -i /run/keys/id_ed25519_my_borg_repo' borg@nixos:.
nixos-backupToLocalServer-2020-03-30T21:46:17 Mon, 2020-03-30 21:46:19 [84feb97710954931ca384182f5f3cb90665f35cef214760abd7350fb064786ac]
nixos-backupToLocalServer-2020-03-30T21:46:30 Mon, 2020-03-30 21:46:32 [e77321694ecd160ca2228611747c6ad1be177d6e0d894538898de7a2621b6e68]

Several companies offer (paid) hosting services for Borg repositories.
To back up your home directory to borgbase you have to:
Generate an SSH key without a password, to access the remote server, e.g.
sudo ssh-keygen -N '' -t ed25519 -f /run/keys/id_ed25519_borgbase

Create the repository on the server by following the instructions for your hosting server.
Initialize the repository on the server, e.g.
sudo borg init --encryption=repokey-blake2 \
  --rsh "ssh -i /run/keys/id_ed25519_borgbase" \
  zzz2aaaaa@zzz2aaaaa.repo.borgbase.com:repo

Add it to your NixOS configuration, e.g.
{
  services.borgbackup.jobs = {
    my_Remote_Backup = {
      paths = [ "/" ];
      exclude = [
        "/nix"
        "'**/.cache'"
      ];
      repo = "zzz2aaaaa@zzz2aaaaa.repo.borgbase.com:repo";
      encryption = {
        mode = "repokey-blake2";
        passCommand = "cat /run/keys/borgbackup_passphrase";
      };
      environment = {
        BORG_RSH = "ssh -i /run/keys/id_ed25519_borgbase";
      };
      compression = "auto,lzma";
      startAt = "daily";
    };
  };
}

Vorta is a backup client for macOS and Linux desktops. It integrates the mighty BorgBackup with your desktop environment to protect your data from disk failure, ransomware and theft.
It can be installed in NixOS e.g. by adding pkgs.vorta to environment.systemPackages.
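For example:

{ environment.systemPackages = [ pkgs.vorta ]; }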
Details about using Vorta can be found under https://vorta.borgbase.com.
NixOS supports automatic domain validation & certificate retrieval andrenewal using the ACME protocol. Any provider can be used, but by defaultNixOS uses Let’s Encrypt. The alternative ACME clientlego is used underthe hood.
Automatic cert validation and configuration for Apache and Nginx virtualhosts is included in NixOS, however if you would like to generate a wildcardcert or you are not using a web server you will have to configure DNSbased validation.
To use the ACME module, you must accept the provider’s terms of service by setting security.acme.acceptTerms to true. The Let’s Encrypt ToS can be found here.
You must also set an email address to be used when creating accounts with Let’s Encrypt. You can set this for all certs with security.acme.defaults.email and/or on a per-cert basis with security.acme.certs.<name>.email. This address is only used for registration and renewal reminders, and cannot be used to administer the certificates in any way.
Alternatively, you can use a different ACME server by changing the security.acme.defaults.server option to a provider of your choosing, or just change the server for one cert with security.acme.certs.<name>.server.
You will need an HTTP server or DNS server for verification. For HTTP,the server must have a webroot defined that can serve.well-known/acme-challenge. This directory must bewriteable by the user that will run the ACME client. For DNS, you mustset up credentials with your provider/server for use with lego.
NixOS supports fetching ACME certificates for you by setting enableACME = true; in a virtualHost config. We first create self-signed placeholder certificates in place of the real ACME certs. The placeholder certs are overwritten when the ACME certs arrive. For foo.example.com the config would look like this:
{
  security.acme.acceptTerms = true;
  security.acme.defaults.email = "admin+acme@example.com";
  services.nginx = {
    enable = true;
    virtualHosts = {
      "foo.example.com" = {
        forceSSL = true;
        enableACME = true;
        # All serverAliases will be added as extra domain names on the certificate.
        serverAliases = [ "bar.example.com" ];
        locations."/" = {
          root = "/var/www";
        };
      };
      # We can also add a different vhost and reuse the same certificate
      # but we have to append extraDomainNames manually beforehand:
      # security.acme.certs."foo.example.com".extraDomainNames = [ "baz.example.com" ];
      "baz.example.com" = {
        forceSSL = true;
        useACMEHost = "foo.example.com";
        locations."/" = {
          root = "/var/www";
        };
      };
    };
  };
}

Using ACME certificates with Apache virtual hosts is identical to using them with Nginx. The attribute names are all the same, just replace “nginx” with “httpd” where appropriate.
First off you will need to set up a virtual host to serve the challenges. This example uses a vhost called certs.example.com, with the intent that you will generate certs for all your vhosts and redirect everyone to HTTPS.
{
  security.acme.acceptTerms = true;
  security.acme.defaults.email = "admin+acme@example.com";

  # /var/lib/acme/.challenges must be writable by the ACME user
  # and readable by the Nginx user. The easiest way to achieve
  # this is to add the Nginx user to the ACME group.
  users.users.nginx.extraGroups = [ "acme" ];
  services.nginx = {
    enable = true;
    virtualHosts = {
      "acmechallenge.example.com" = {
        # Catchall vhost, will redirect users to HTTPS for all vhosts
        serverAliases = [ "*.example.com" ];
        locations."/.well-known/acme-challenge" = {
          root = "/var/lib/acme/.challenges";
        };
        locations."/" = {
          return = "301 https://$host$request_uri";
        };
      };
    };
  };

  # Alternative config for Apache
  users.users.wwwrun.extraGroups = [ "acme" ];
  services.httpd = {
    enable = true;
    virtualHosts = {
      "acmechallenge.example.com" = {
        # Catchall vhost, will redirect users to HTTPS for all vhosts
        serverAliases = [ "*.example.com" ];
        # /var/lib/acme/.challenges must be writable by the ACME user and readable by the Apache user.
        # By default, this is the case.
        documentRoot = "/var/lib/acme/.challenges";
        extraConfig = ''
          RewriteEngine On
          RewriteCond %{HTTPS} off
          RewriteCond %{REQUEST_URI} !^/\.well-known/acme-challenge [NC]
          RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301]
        '';
      };
    };
  };
}

Now you need to configure ACME to generate a certificate.
{
  security.acme.certs."foo.example.com" = {
    webroot = "/var/lib/acme/.challenges";
    email = "foo@example.com";
    # Ensure that the web server you use can read the generated certs
    # Take a look at the group option for the web server you choose.
    group = "nginx";
    # Since we have a wildcard vhost to handle port 80,
    # we can generate certs for anything!
    # Just make sure your DNS resolves them.
    extraDomainNames = [ "mail.example.com" ];
  };
}

The private key key.pem and certificate fullchain.pem will be put into /var/lib/acme/foo.example.com.
Refer to Appendix A for all available configuration options for the security.acme module.
This is useful if you want to generate a wildcard certificate, since ACME servers will only hand out wildcard certs over DNS validation. There are a number of supported DNS providers and servers you can utilise, see the lego docs for provider/server specific configuration values. For the sake of these docs, we will provide a fully self-hosted example using bind.
{
  services.bind = {
    enable = true;
    extraConfig = ''
      include "/var/lib/secrets/dnskeys.conf";
    '';
    zones = [
      rec {
        name = "example.com";
        file = "/var/db/bind/${name}";
        master = true;
        extraConfig = "allow-update { key rfc2136key.example.com.; };";
      }
    ];
  };

  # Now we can configure ACME
  security.acme.acceptTerms = true;
  security.acme.defaults.email = "admin+acme@example.com";
  security.acme.certs."example.com" = {
    domain = "*.example.com";
    dnsProvider = "rfc2136";
    environmentFile = "/var/lib/secrets/certs.secret";
    # We don't need to wait for propagation since this is a local DNS server
    dnsPropagationCheck = false;
  };
}

The dnskeys.conf and certs.secret must be kept secure and thus you should not keep their contents in your Nix config. Instead, generate them one time with a systemd service:
{
  systemd.services.dns-rfc2136-conf = {
    requiredBy = [
      "acme-example.com.service"
      "bind.service"
    ];
    before = [
      "acme-example.com.service"
      "bind.service"
    ];
    unitConfig = {
      ConditionPathExists = "!/var/lib/secrets/dnskeys.conf";
    };
    serviceConfig = {
      Type = "oneshot";
      UMask = 77;
    };
    path = [ pkgs.bind ];
    script = ''
      mkdir -p /var/lib/secrets
      chmod 755 /var/lib/secrets
      tsig-keygen rfc2136key.example.com > /var/lib/secrets/dnskeys.conf
      chown named:root /var/lib/secrets/dnskeys.conf
      chmod 400 /var/lib/secrets/dnskeys.conf

      # extract secret value from the dnskeys.conf
      while read x y; do if [ "$x" = "secret" ]; then secret="''${y:1:''${#y}-3}"; fi; done < /var/lib/secrets/dnskeys.conf

      cat > /var/lib/secrets/certs.secret << EOF
      RFC2136_NAMESERVER='127.0.0.1:53'
      RFC2136_TSIG_ALGORITHM='hmac-sha256.'
      RFC2136_TSIG_KEY='rfc2136key.example.com'
      RFC2136_TSIG_SECRET='$secret'
      EOF
      chmod 400 /var/lib/secrets/certs.secret
    '';
  };
}

Now you’re all set to generate certs! You should monitor the first invocation by running systemctl start acme-example.com.service & journalctl -fu acme-example.com.service and watching its log output.
It is possible to use DNS-01 validation with all certificates, including those automatically configured via the Nginx/Apache enableACME option. This configuration pattern is fully supported and part of the module’s test suite for Nginx + Apache.
You must follow the guide above on configuring DNS-01 validation first, however instead of setting the options for one certificate (e.g. security.acme.certs.<name>.dnsProvider) you will set them as defaults (e.g. security.acme.defaults.dnsProvider).
{
  # Configure ACME appropriately
  security.acme.acceptTerms = true;
  security.acme.defaults.email = "admin+acme@example.com";
  security.acme.defaults = {
    dnsProvider = "rfc2136";
    environmentFile = "/var/lib/secrets/certs.secret";
    # We don't need to wait for propagation since this is a local DNS server
    dnsPropagationCheck = false;
  };

  # For each virtual host you would like to use DNS-01 validation with,
  # set acmeRoot = null
  services.nginx = {
    enable = true;
    virtualHosts = {
      "foo.example.com" = {
        enableACME = true;
        acmeRoot = null;
      };
    };
  };
}

And that’s it! Next time your configuration is rebuilt, or when you add a new virtualHost, it will be DNS-01 validated.
Some services refuse to start if the configured certificate files are not owned by root. PostgreSQL and OpenSMTPD are examples of these. There is no way to change the user the ACME module uses (it will always be acme), however you can use systemd’s LoadCredential feature to resolve this elegantly. Below is an example configuration for OpenSMTPD, but this pattern can be applied to any service.
{
  # Configure ACME however you like (DNS or HTTP validation), adding
  # the following configuration for the relevant certificate.
  # Note: You cannot use `systemctl reload` here as that would mean
  # the LoadCredential configuration below would be skipped and
  # the service would continue to use old certificates.
  security.acme.certs."mail.example.com".postRun = ''
    systemctl restart opensmtpd
  '';

  # Now you must augment OpenSMTPD's systemd service to load
  # the certificate files.
  systemd.services.opensmtpd.requires = [ "acme-finished-mail.example.com.target" ];
  systemd.services.opensmtpd.serviceConfig.LoadCredential =
    let
      certDir = config.security.acme.certs."mail.example.com".directory;
    in
    [
      "cert.pem:${certDir}/cert.pem"
      "key.pem:${certDir}/key.pem"
    ];

  # Finally, configure OpenSMTPD to use these certs.
  services.opensmtpd =
    let
      credsDir = "/run/credentials/opensmtpd.service";
    in
    {
      enable = true;
      setSendmail = false;
      serverConfiguration = ''
        pki mail.example.com cert "${credsDir}/cert.pem"
        pki mail.example.com key "${credsDir}/key.pem"
        listen on localhost tls pki mail.example.com
        action act1 relay host smtp://127.0.0.1:10027
        match for local action act1
      '';
    };
}

Should you need to regenerate a particular certificate in a hurry, such as when a vulnerability is found in Let’s Encrypt, there is now a convenient mechanism for doing so. Running systemctl clean --what=state acme-example.com.service will remove all certificate files and the account data for the given domain, allowing you to then systemctl start acme-example.com.service to generate fresh ones.
It is possible that your account credentials file may become corrupt and need to be regenerated. In this scenario lego will produce the error JWS verification error. The solution is to simply delete the associated accounts file and re-run the affected service(s).
# Find the accounts folder for the certificate
systemctl cat acme-example.com.service | grep -Po 'accounts/[^:]*'
export accountdir="$(!!)"
# Move this folder to some place else
mv /var/lib/acme/.lego/$accountdir{,.bak}
# Recreate the folder using systemd-tmpfiles
systemd-tmpfiles --create
# Get a new account and reissue certificates
# Note: Do this for all certs that share the same account email address
systemctl start acme-example.com.service

oh-my-zsh is a framework to manage your ZSH configuration including completion scripts for several CLI tools or custom prompt themes.
The module uses theoh-my-zsh package with all availablefeatures. The initial setup using Nix expressions is fairly similar to theconfiguration format ofoh-my-zsh.
{
  programs.zsh.ohMyZsh = {
    enable = true;
    plugins = [
      "git"
      "python"
      "man"
    ];
    theme = "agnoster";
  };
}

For a detailed explanation of these arguments please refer to the oh-my-zsh docs.
The expression generates the needed configuration and writes it into your/etc/zshrc.
Sometimes third-party or custom scripts such as a modified theme may beneeded.oh-my-zsh provides theZSH_CUSTOMenvironment variable for this which points to a directory with additionalscripts.
The module can do this as well:
{ programs.zsh.ohMyZsh.custom = "~/path/to/custom/scripts"; }

There are several extensions for oh-my-zsh packaged in nixpkgs. One of them is nix-zsh-completions which bundles completion scripts and a plugin for oh-my-zsh.
Rather than using a single mutable path for ZSH_CUSTOM, it’s also possible to generate this path from a list of Nix packages:
{ pkgs, ... }:
{
  programs.zsh.ohMyZsh.customPkgs = [
    pkgs.nix-zsh-completions
    # and even more...
  ];
}

Internally a single store path will be created using buildEnv. Please refer to the docs of buildEnv for further reference.
Please keep in mind that this is not compatible with programs.zsh.ohMyZsh.custom as it requires an immutable store path while custom shall remain mutable! An evaluation failure will be thrown if both custom and customPkgs are set.
If third-party customizations (e.g. new themes) are supposed to be added tooh-my-zsh there are several pitfalls to keep in mind:
To comply with the default structure of ZSH the entire output needs to be written to $out/share/zsh.
Completion scripts are supposed to be stored at $out/share/zsh/site-functions. This directory is part of the fpath and the package should be compatible with pure ZSH setups. The module will automatically link the contents of site-functions to the completions directory in the proper store path.
The plugins directory needs the structure pluginname/pluginname.plugin.zsh as structured in the upstream repo.
A derivation foroh-my-zsh may look like this:
{ stdenv, fetchFromGitHub }:

stdenv.mkDerivation rec {
  name = "exemplary-zsh-customization-${version}";
  version = "1.0.0";
  src = fetchFromGitHub {
    # path to the upstream repository
  };

  dontBuild = true;
  installPhase = ''
    mkdir -p $out/share/zsh/site-functions
    cp {themes,plugins} $out/share/zsh
    cp completions $out/share/zsh/site-functions
  '';
}

Source: modules/programs/plotinus.nix
Upstream documentation: https://github.com/p-e-w/plotinus
Plotinus is a searchable command palette in every modern GTK application.
When in a GTK 3 application and Plotinus is enabled, you can press Ctrl+Shift+P to open the command palette. The command palette provides a searchable list of all menu items in the application.
To enable Plotinus, add the following to your configuration.nix:
{ programs.plotinus.enable = true; }

Digital Bitbox is a hardware wallet and second-factor authenticator.
The digitalbitbox programs module may be installed by setting programs.digitalbitbox to true in a manner similar to
{ programs.digitalbitbox.enable = true; }

and bundles the digitalbitbox package (see the section called “Package”), which contains the dbb-app and dbb-cli binaries, along with the hardware module (see the section called “Hardware”) which sets up the necessary udev rules to access the device.
Enabling the digitalbitbox module is pretty much the easiest way to get aDigital Bitbox device working on your system.
For more information, see https://digitalbitbox.com/start_linux.
The binaries, dbb-app (a GUI tool) and dbb-cli (a CLI tool), are available through the digitalbitbox package which could be installed as follows:
{ environment.systemPackages = [ pkgs.digitalbitbox ]; }

The digitalbitbox hardware package enables the udev rules for Digital Bitbox devices and may be installed as follows:
{ hardware.digitalbitbox.enable = true; }

In order to alter the udev rules, one may provide different values for the udevRule51 and udevRule52 attributes by means of overriding as follows:
{
  programs.digitalbitbox = {
    enable = true;
    package = pkgs.digitalbitbox.override {
      udevRule51 = "something else";
    };
  };
}

Input methods are an operating system component that allows any data, such as keyboard strokes or mouse movements, to be received as input. In this way users can enter characters and symbols not found on their input devices. Using an input method is obligatory for any language that has more graphemes than there are keys on the keyboard.
The following input methods are available in NixOS:
IBus: The intelligent input bus.
Fcitx5: The next generation of fcitx, addons (including engines, dictionaries, skins) can be added usingi18n.inputMethod.fcitx5.addons.
Nabi: A Korean input method based on XIM.
Uim: The universal input method, a library with an XIM bridge.
Hime: An extremely easy-to-use input method framework.
Kime: Korean IME
IBus is an Intelligent Input Bus. It provides a full-featured and user-friendly input method user interface.
The following snippet can be used to configure IBus:
{
  i18n.inputMethod = {
    enable = true;
    type = "ibus";
    ibus.engines = with pkgs.ibus-engines; [
      anthy
      hangul
      mozc
    ];
  };
}

i18n.inputMethod.ibus.engines is optional and can be used to add extra IBus engines.
Available extra IBus engines are:
Anthy (ibus-engines.anthy): Anthy is a system forJapanese input method. It converts Hiragana text to Kana Kanji mixed text.
Hangul (ibus-engines.hangul): Korean input method.
libpinyin (ibus-engines.libpinyin): A Chinese input method.
m17n (ibus-engines.m17n): m17n is an input method thatuses input methods and corresponding icons in the m17n database.
mozc (ibus-engines.mozc): A Japanese input method fromGoogle.
Table (ibus-engines.table): An input method that loads tables of input methods.
table-others (ibus-engines.table-others): Various table-based input methods. To use this, and any other table-based input methods, it must appear in the list of engines along with table. For example:
{
  ibus.engines = with pkgs.ibus-engines; [
    table
    table-others
  ];
}

To use any input method, the package must be added in the configuration, as shown above, and also (after running nixos-rebuild) the input method must be added from IBus’ preference dialog.
If IBus works in some applications but not others, a likely cause of thisis that IBus is depending on a different version ofglibto what the applications are depending on. This can be checked by runningnix-store -q --requisites <path> | grep glib,where<path> is the path of either IBus or anapplication in the Nix store. Theglib packages mustmatch exactly. If they do not, uninstalling and reinstalling theapplication is a likely fix.
Fcitx5 is an input method framework with extension support. It has three built-in Input Method Engines: Pinyin, QuWei and Table-based input methods.
The following snippet can be used to configure Fcitx:
{
  i18n.inputMethod = {
    enable = true;
    type = "fcitx5";
    fcitx5.addons = with pkgs; [
      fcitx5-mozc
      fcitx5-hangul
      fcitx5-m17n
    ];
  };
}

i18n.inputMethod.fcitx5.addons is optional and can be used to add extra Fcitx5 addons.
Available extra Fcitx5 addons are:
Anthy (fcitx5-anthy): Anthy is a system forJapanese input method. It converts Hiragana text to Kana Kanji mixed text.
Chewing (fcitx5-chewing): Chewing is anintelligent Zhuyin input method. It is one of the most popular inputmethods among Traditional Chinese Unix users.
Hangul (fcitx5-hangul): Korean input method.
Unikey (fcitx5-unikey): Vietnamese input method.
m17n (fcitx5-m17n): m17n is an input method thatuses input methods and corresponding icons in the m17n database.
mozc (fcitx5-mozc): A Japanese input method fromGoogle.
table-others (fcitx5-table-other): Varioustable-based input methods.
chinese-addons (fcitx5-chinese-addons): Various Chinese input methods.
rime (fcitx5-rime): RIME support for fcitx5.
Nabi is an easy to use Korean X input method. It allows you to enterphonetic Korean characters (hangul) and pictographic Korean characters(hanja).
The following snippet can be used to configure Nabi:
{
  i18n.inputMethod = {
    enable = true;
    type = "nabi";
  };
}

Uim (short for “universal input method”) is a multilingual input method framework. Applications can use it through so-called bridges.
The following snippet can be used to configure uim:
{
  i18n.inputMethod = {
    enable = true;
    type = "uim";
  };
}

Note: The i18n.inputMethod.uim.toolbar option can be used to choose the uim toolbar.
Hime is an extremely easy-to-use input method framework. It is lightweight, stable, powerful and supports many commonly used input methods, including Cangjie, Zhuyin, Dayi, Rank, Shrimp, Greek, Korean Pinyin, Latin Alphabet, etc…
The following snippet can be used to configure Hime:
{
  i18n.inputMethod = {
    enable = true;
    type = "hime";
  };
}

Kime is a Korean IME. It is written in Rust and aims to provide simple, safe, and fast Korean typing.
The following snippet can be used to configure Kime:
{
  i18n.inputMethod = {
    enable = true;
    type = "kime";
  };
}

Table of Contents
In some cases, it may be desirable to take advantage of commonly-used, predefined configurations provided by nixpkgs, but different from those that come as default. This is a role fulfilled by NixOS’s Profiles, which come as files living in <nixpkgs/nixos/modules/profiles>. That is to say, expected usage is to add them to the imports list of your /etc/nixos/configuration.nix as such:
{
  imports = [ <nixpkgs/nixos/modules/profiles/profile-name.nix> ];
}

Even if some of these profiles seem only useful in the context of install media, many are actually intended to be used in real installs.
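If you prefer not to rely on NIX_PATH lookups like <nixpkgs/...>, the same profile can be imported through the modulesPath module argument; a minimal sketch (profile-name.nix is the placeholder from the example above):

{ modulesPath, ... }:

{
  # modulesPath points at the nixos/modules directory of the nixpkgs
  # that is evaluating this configuration.
  imports = [ (modulesPath + "/profiles/profile-name.nix") ];
}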
What follows is a brief explanation of the purpose and use-case for each profile. Detailing every option configured by each one is out of scope.
Enables all hardware supported by NixOS: i.e., all firmware is included, and all devices from which one may boot are enabled in the initrd. Its primary use is in the NixOS installation CDs.
The enabled kernel modules include support for SATA and PATA, SCSI (partially), USB, Firewire (untested), Virtio (QEMU, KVM, etc.), VMware, and Hyper-V. Additionally, hardware.enableAllFirmware is enabled, and the firmware for the ZyDAS ZD1211 chipset is specifically installed.
Defines the software packages included in the “minimal” installation CD. It installs several utilities useful in a simple recovery or install media, such as a text-mode web browser, and tools for manipulating block devices, networking, hardware diagnostics, and filesystems (with their respective kernel modules).
This profile is used in installer images. It provides an editable configuration.nix that imports all the modules that were also used when creating the image in the first place. As a result it allows users to edit and rebuild the live system.
On images where the installation media also becomes an installation target, copying over configuration.nix should be disabled by setting installer.cloneConfig to false. For example, this is done in sd-image-aarch64-installer.nix.
This profile just enables a demo user, with password demo, uid 1000, wheel group and autologin in the SDDM display manager.
This is the profile from which the Docker images are generated. It prepares a working system by importing the Minimal and Clone Config profiles, and setting appropriate configuration options that are useful inside a container context, like boot.isContainer.
Defines a NixOS configuration with the Plasma 5 desktop. It’s used by the graphical installation CD.
It sets services.xserver.enable, services.displayManager.sddm.enable, services.xserver.desktopManager.plasma5.enable, and services.libinput.enable to true. It also includes glxinfo and firefox in the system packages list.
A profile with most (vanilla) hardening options enabled by default, potentially at the cost of stability, features and performance.
This includes a hardened kernel, and limiting the system information available to processes through the /sys and /proc filesystems. It also disables the User Namespaces feature of the kernel, which stops Nix from being able to build anything (this particular setting can be overridden via security.allowUserNamespaces). See the profile source for further detail on which settings are altered.
This profile enables options that are known to affect system stability. If you experience any stability issues when using the profile, try disabling it. If you report an issue and use this profile, always mention that you do.
Common configuration for headless machines (e.g., Amazon EC2 instances).
Disables vesa, serial consoles, emergency mode, grub splash images and configures the kernel to reboot automatically on panic.
Provides a basic configuration for installation devices like CDs. This enables redistributable firmware, includes the Clone Config profile and a copy of the Nixpkgs channel, so nixos-install works out of the box.
Documentation for Nixpkgs and NixOS is forcefully enabled (to override the Minimal profile preference); the NixOS manual is shown automatically on TTY 8, and udisks is disabled. Autologin is enabled as the nixos user, while passwordless login as both root and nixos is possible. Passwordless sudo is enabled too. wpa_supplicant is enabled, but configured to not autostart.
It is explained how to log in, start the SSH server, and, if available, how to start the display manager.
Several settings are tweaked so that the installer has a better chance of succeeding under low-memory environments.
Render your system completely perlless (i.e., without the Perl interpreter). This includes a mechanism so that your build fails if it contains a Nix store path that references the string “perl”.
This profile defines a small NixOS configuration. It does not contain any graphical components. It’s a very short file that sets the supported locales to only the user-selected locale, and disables packages’ documentation.
This profile contains common configuration for virtual machines running under QEMU (using virtio).
It makes virtio modules available in the initrd and sets the system time from the hardware clock to work around a bug in qemu-kvm.
The NixOS Mattermost module lets you build Mattermost instances for collaboration over chat, optionally with custom builds of plugins specific to your instance.
To enable Mattermost using Postgres, use a config like this:
{
  services.mattermost = {
    enable = true;

    # You can change this if you are reverse proxying.
    host = "0.0.0.0";
    port = 8065;

    # Allow modifications to the config from Mattermost.
    mutableConfig = true;

    # Override modifications to the config with your NixOS config.
    preferNixConfig = true;

    socket = {
      # Enable control with the `mmctl` socket.
      enable = true;

      # Exporting the control socket will add `mmctl` to your PATH, and export
      # MMCTL_LOCAL_SOCKET_PATH systemwide. Otherwise, you can get the socket
      # path out of `config.mattermost.socket.path` and set it manually.
      export = true;
    };

    # For example, to disable auto-installation of prepackaged plugins.
    settings.PluginSettings.AutomaticPrepackagedPlugins = false;
  };
}

As of NixOS 25.05, Mattermost uses peer authentication with Postgres or MySQL by default. If you previously used password auth on localhost, this will automatically be configured if your stateVersion is set to at least 25.05.
The nixpkgs mattermost derivation runs the entire test suite during the checkPhase. This test suite is run with a live MySQL and Postgres database instance in the sandbox. If you are building Mattermost, this can take a while, especially if it is building on a resource-constrained system.
The following passthrus are designed to assist with enabling or disabling the checkPhase:
mattermost.withTests
mattermost.withoutTests
The default (mattermost) is an alias for mattermost.withTests.
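For example, to skip the lengthy test run on a resource-constrained builder, you can point the service at the test-free variant; a minimal sketch, assuming the module’s usual package option (services.mattermost.package):

{ pkgs, ... }:

{
  services.mattermost = {
    enable = true;
    # Build Mattermost without running its test suite during the checkPhase.
    package = pkgs.mattermost.withoutTests;
  };
}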
You can configure Mattermost plugins by either using prebuilt binaries or by building your own. We test building and using plugins in the NixOS test suite.
Mattermost plugins are tarballs containing a system-specific statically linked Go binary and webapp resources.
Here is an example with a prebuilt plugin tarball:
{
  services.mattermost = {
    plugins = with pkgs; [
      # todo
      # 0.7.1
      # https://github.com/mattermost/mattermost-plugin-todo/releases/tag/v0.7.1
      (fetchurl {
        # Note: Don't unpack the tarball; the NixOS module will repack it for you.
        url = "https://github.com/mattermost-community/mattermost-plugin-todo/releases/download/v0.7.1/com.mattermost.plugin-todo-0.7.1.tar.gz";
        hash = "sha256-P+Z66vqE7FRmc2kTZw9FyU5YdLLbVlcJf11QCbfeJ84=";
      })
    ];
  };
}

Once the plugin is installed and the config rebuilt, you can enable this plugin in the System Console.
The mattermost derivation includes the buildPlugin passthru for building plugins that use the “standard” Mattermost plugin build template at mattermost-plugin-demo.
Since this is a “de facto” standard for building Mattermost plugins that makes assumptions about the build environment, the buildPlugin helper tries to fit these assumptions the best it can.
Here is how to build the above Todo plugin. Note that we rely on package-lock.json being assembled correctly, so must use a version where it is! If there is no lockfile or the lockfile is incorrect, Nix cannot fetch NPM build and runtime dependencies for a sandbox build.
{
  services.mattermost = {
    plugins = with pkgs; [
      (mattermost.buildPlugin {
        pname = "mattermost-plugin-todo";
        version = "0.8-pre";
        src = fetchFromGitHub {
          owner = "mattermost-community";
          repo = "mattermost-plugin-todo";
          # 0.7.1 didn't work, seems to use an older set of node dependencies.
          rev = "f25dc91ea401c9f0dcd4abcebaff10eb8b9836e5";
          hash = "sha256-OM+m4rTqVtolvL5tUE8RKfclqzoe0Y38jLU60Pz7+HI=";
        };
        vendorHash = "sha256-5KpechSp3z/Nq713PXYruyNxveo6CwrCSKf2JaErbgg=";
        npmDepsHash = "sha256-o2UOEkwb8Vx2lDWayNYgng0GXvmS6lp/ExfOq3peyMY=";
        extraGoModuleAttrs = {
          npmFlags = [ "--legacy-peer-deps" ];
        };
      })
    ];
  };
}

See pkgs/by-name/ma/mattermost/build-plugin.nix for all the options. As in the previous example, once the plugin is installed and the config rebuilt, you can enable this plugin in the System Console.
The NixOS Kubernetes module is a collective term for a handful of individual submodules implementing the Kubernetes cluster components.
There are generally two ways of enabling Kubernetes on NixOS. One way is to enable and configure cluster components appropriately by hand:
{
  services.kubernetes = {
    apiserver.enable = true;
    controllerManager.enable = true;
    scheduler.enable = true;
    addonManager.enable = true;
    proxy.enable = true;
    flannel.enable = true;
  };
}

Another way is to assign cluster roles (“master” and/or “node”) to the host. This enables apiserver, controllerManager, scheduler, addonManager, kube-proxy and etcd:
{ services.kubernetes.roles = [ "master" ]; }

While this will enable the kubelet and kube-proxy only:
{ services.kubernetes.roles = [ "node" ]; }

Assigning both the master and node roles is usable if you want a single node Kubernetes cluster for dev or testing purposes:
{
  services.kubernetes.roles = [
    "master"
    "node"
  ];
}

Note: Assigning either role will also default both services.kubernetes.flannel.enable and services.kubernetes.easyCerts to true. This sets up flannel as CNI and activates automatic PKI bootstrapping.
It is mandatory to configure services.kubernetes.masterAddress. The masterAddress must be resolvable and routable by all cluster nodes. In single-node clusters, this can be set to localhost.
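For instance, a single-node cluster built from the role mechanism could look like this (a minimal sketch; as noted above, localhost is sufficient as the masterAddress when everything runs on one machine):

{
  services.kubernetes = {
    roles = [
      "master"
      "node"
    ];
    # Must be resolvable and routable by all cluster nodes.
    masterAddress = "localhost";
  };
}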
Role-based access control (RBAC) authorization mode is enabled by default. This means that anonymous requests to the apiserver secure port will expectedly cause a permission denied error. All cluster components must therefore be configured with x509 certificates for two-way TLS communication. The x509 certificate subject section determines the roles and permissions granted by the apiserver to perform clusterwide or namespaced operations. See also: Using RBAC Authorization.
The NixOS kubernetes module provides an option for automatic certificate bootstrapping and configuration, services.kubernetes.easyCerts. The PKI bootstrapping process involves setting up a certificate authority (CA) daemon (cfssl) on the kubernetes master node. cfssl generates a CA-cert for the cluster, and uses the CA-cert for signing subordinate certs issued to each of the cluster components. Subsequently, the certmgr daemon monitors active certificates and renews them when needed. For single node Kubernetes clusters, setting services.kubernetes.easyCerts = true is sufficient and no further action is required. For joining extra node machines to an existing cluster on the other hand, establishing initial trust is mandatory.
To add new nodes to the cluster: On any (non-master) cluster node where services.kubernetes.easyCerts is enabled, the helper script nixos-kubernetes-node-join is available on PATH. Given a token on stdin, it will copy the token to the kubernetes secrets directory and restart the certmgr service. As requested certificates are issued, the script will restart kubernetes cluster components as needed for them to pick up new keypairs.
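As a usage sketch (how you obtain and transfer the token from the master is up to you; the script only requires it on stdin):

# echo "<token obtained from the master>" | nixos-kubernetes-node-join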
Multi-master (HA) clusters are not supported by the easyCerts module.
In order to interact with an RBAC-enabled cluster as an administrator, one needs to have cluster-admin privileges. By default, when easyCerts is enabled, a cluster-admin kubeconfig file is generated and linked into /etc/kubernetes/cluster-admin.kubeconfig as determined by services.kubernetes.pki.etcClusterAdminKubeconfig. export KUBECONFIG=/etc/kubernetes/cluster-admin.kubeconfig will make kubectl use this kubeconfig to access and authenticate the cluster. The cluster-admin kubeconfig references an auto-generated keypair owned by root. Thus, only root on the kubernetes master may obtain cluster-admin rights by means of this file.
This chapter describes various aspects of managing a running NixOS system, such as how to use the systemd service manager.
Table of Contents
In NixOS, all system services are started and monitored using the systemd program. systemd is the “init” process of the system (i.e. PID 1), the parent of all other processes. It manages a set of so-called “units”, which can be things like system services (programs), but also mount points, swap files, devices, targets (groups of units) and more. Units can have complex dependencies; for instance, one unit can require that another unit must be successfully started before the first unit can be started. When the system boots, it starts a unit named default.target; the dependencies of this unit cause all system services to be started, file systems to be mounted, swap files to be activated, and so on.
The command systemctl is the main way to interact with systemd. The following paragraphs demonstrate ways to interact with any OS running systemd as its init system; NixOS is no exception. The next section explains NixOS-specific things worth knowing.
Without any arguments, systemctl lists the status of active units:
$ systemctl
-.mount          loaded active mounted   /
swapfile.swap    loaded active active    /swapfile
sshd.service     loaded active running   SSH Daemon
graphical.target loaded active active    Graphical Interface
...

You can ask for detailed status information about a unit, for instance, the PostgreSQL database service:
$ systemctl status postgresql.service
postgresql.service - PostgreSQL Server
          Loaded: loaded (/nix/store/pn3q73mvh75gsrl8w7fdlfk3fq5qm5mw-unit/postgresql.service)
          Active: active (running) since Mon, 2013-01-07 15:55:57 CET; 9h ago
        Main PID: 2390 (postgres)
          CGroup: name=systemd:/system/postgresql.service
                  ├─2390 postgres
                  ├─2418 postgres: writer process
                  ├─2419 postgres: wal writer process
                  ├─2420 postgres: autovacuum launcher process
                  ├─2421 postgres: stats collector process
                  └─2498 postgres: zabbix zabbix [local] idle

Jan 07 15:55:55 hagbard postgres[2394]: [1-1] LOG:  database system was shut down at 2013-01-07 15:55:05 CET
Jan 07 15:55:57 hagbard postgres[2390]: [1-1] LOG:  database system is ready to accept connections
Jan 07 15:55:57 hagbard postgres[2420]: [1-1] LOG:  autovacuum launcher started
Jan 07 15:55:57 hagbard systemd[1]: Started PostgreSQL Server.

Note that this shows the status of the unit (active and running), all the processes belonging to the service, as well as the most recent log messages from the service.
Units can be stopped, started or restarted:
# systemctl stop postgresql.service
# systemctl start postgresql.service
# systemctl restart postgresql.service

These operations are synchronous: they wait until the service has finished starting or stopping (or has failed). Starting a unit will cause the dependencies of that unit to be started as well (if necessary).
Packages in Nixpkgs sometimes ship systemd units with them, usually in e.g. #pkg-out#/lib/systemd/. Putting such a package in environment.systemPackages doesn’t make the service available to users or the system.
To enable a systemd system service that is provided by an upstream package, use (e.g.):
{ systemd.packages = [ pkgs.packagekit ]; }

Usually NixOS modules written by the community do the above, plus take care of other details. If a module was written for a service you are interested in, you’d probably need only to use services.#name#.enable = true;. These services are defined in Nixpkgs’ nixos/modules/ directory. In case the service is simple enough, the above method should work, and start the service on boot.
User systemd services, on the other hand, should be treated differently. Given a package that has a systemd unit file at #pkg-out#/lib/systemd/user/, using systemd.packages will allow you to start the service via systemctl --user start, but it won’t start automatically on login. However, you can enable it imperatively by adding the package’s attribute to systemd.packages and then running (e.g.):
$ mkdir -p ~/.config/systemd/user/default.target.wants
$ ln -s /run/current-system/sw/lib/systemd/user/syncthing.service ~/.config/systemd/user/default.target.wants/
$ systemctl --user daemon-reload
$ systemctl --user enable syncthing.service

If you are interested in a timer file, use timers.target.wants instead of default.target.wants in the 1st and 2nd command.
Using systemctl --user enable syncthing.service instead of the above will work, but it’ll use the absolute path of syncthing.service for the symlink, and this path is in /nix/store/.../lib/systemd/user/. Hence garbage collection will remove that file and you will wind up with a broken symlink in your systemd configuration, which in turn will prevent the service / timer from starting on login.
systemd supports templated units where a base unit can be started multiple times with a different parameter. The syntax to accomplish this is service-name@instance-name.service. Units get the instance name passed to them (see systemd.unit(5)). NixOS has support for these kinds of units and for template-specific overrides. A service needs to be defined twice, once for the base unit and once for the instance. All instances must include overrideStrategy = "asDropin" for the change detection to work. This example illustrates this:
{
  systemd.services = {
    "base-unit@".serviceConfig = {
      ExecStart = "...";
      User = "...";
    };
    "base-unit@instance-a" = {
      overrideStrategy = "asDropin"; # needed for templates to work
      wantedBy = [ "multi-user.target" ]; # causes NixOS to manage the instance
    };
    "base-unit@instance-b" = {
      overrideStrategy = "asDropin"; # needed for templates to work
      wantedBy = [ "multi-user.target" ]; # causes NixOS to manage the instance
      serviceConfig.User = "root"; # also override something for this specific instance
    };
  };
}

The system can be shut down (and automatically powered off) by doing:
# shutdown

This is equivalent to running systemctl poweroff.
To reboot the system, run
# reboot

which is equivalent to systemctl reboot. Alternatively, you can quickly reboot the system using kexec, which bypasses the BIOS by directly loading the new kernel into memory:
# systemctl kexec

The machine can be suspended to RAM (if supported) using systemctl suspend, and suspended to disk using systemctl hibernate.
These commands can be run by any user who is logged in locally, i.e. on a virtual console or in X11; otherwise, the user is asked for authentication.
Systemd keeps track of all users who are logged into the system (e.g. on a virtual console or remotely via SSH). The command loginctl allows querying and manipulating user sessions. For instance, to list all user sessions:
$ loginctl
   SESSION        UID USER             SEAT
        c1        500 eelco            seat0
        c3          0 root             seat0
        c4        500 alice

This shows that two users are logged in locally, while another is logged in remotely. (“Seats” are essentially the combinations of displays and input devices attached to the system; usually, there is only one seat.) To get information about a session:
$ loginctl session-status c3
c3 - root (0)
           Since: Tue, 2013-01-08 01:17:56 CET; 4min 42s ago
          Leader: 2536 (login)
            Seat: seat0; vc3
             TTY: /dev/tty3
         Service: login; type tty; class user
           State: online
          CGroup: name=systemd:/user/root/c3
                  ├─ 2536 /nix/store/10mn4xip9n7y9bxqwnsx7xwx2v2g34xn-shadow-4.1.5.1/bin/login --
                  ├─10339 -bash
                  └─10355 w3m nixos.org

This shows that the user is logged in on virtual console 3. It also lists the processes belonging to this session. Since systemd keeps track of this, you can terminate a session in a way that ensures that all the session’s processes are gone:
# loginctl terminate-session c3

To keep track of the processes in a running system, systemd uses control groups (cgroups). A control group is a set of processes used to allocate resources such as CPU, memory or I/O bandwidth. There can be multiple control group hierarchies, allowing each kind of resource to be managed independently.
The command systemd-cgls lists all control groups in the systemd hierarchy, which is what systemd uses to keep track of the processes belonging to each service or user session:
$ systemd-cgls
├─user
│ └─eelco
│   └─c1
│     ├─ 2567 -:0
│     ├─ 2682 kdeinit4: kdeinit4 Running...
│     ├─ ...
│     └─10851 sh -c less -R
└─system
  ├─httpd.service
  │ ├─2444 httpd -f /nix/store/3pyacby5cpr55a03qwbnndizpciwq161-httpd.conf -DNO_DETACH
  │ └─...
  ├─dhcpcd.service
  │ └─2376 dhcpcd --config /nix/store/f8dif8dsi2yaa70n03xir8r653776ka6-dhcpcd.conf
  └─ ...

Similarly, systemd-cgls cpu shows the cgroups in the CPU hierarchy, which allows per-cgroup CPU scheduling priorities. By default, every systemd service gets its own CPU cgroup, while all user sessions are in the top-level CPU cgroup. This ensures, for instance, that a thousand run-away processes in the httpd.service cgroup cannot starve the CPU for one process in the postgresql.service cgroup. (By contrast, if they were in the same cgroup, then the PostgreSQL process would get 1/1001 of the cgroup’s CPU time.) You can limit a service’s CPU share in configuration.nix:
{ systemd.services.httpd.serviceConfig.CPUShares = 512; }

By default, every cgroup has 1024 CPU shares, so this will halve the CPU allocation of the httpd.service cgroup.
There also is a memory hierarchy that controls memory allocation limits; by default, all processes are in the top-level cgroup, so any service or session can exhaust all available memory. Per-cgroup memory limits can be specified in configuration.nix; for instance, to limit httpd.service to 512 MiB of RAM (excluding swap):
{ systemd.services.httpd.serviceConfig.MemoryLimit = "512M"; }

The command systemd-cgtop shows a continuously updated list of all cgroups with their CPU and memory usage.
System-wide logging is provided by systemd’s journal, which subsumes traditional logging daemons such as syslogd and klogd. Log entries are kept in binary files in /var/log/journal/. The command journalctl allows you to see the contents of the journal. For example,
$ journalctl -b

shows all journal entries since the last reboot. (The output of journalctl is piped into less by default.) You can use various options and match operators to restrict output to messages of interest. For instance, to get all messages from PostgreSQL:
$ journalctl -u postgresql.service
-- Logs begin at Mon, 2013-01-07 13:28:01 CET, end at Tue, 2013-01-08 01:09:57 CET. --
...
Jan 07 15:44:14 hagbard postgres[2681]: [2-1] LOG:  database system is shut down
-- Reboot --
Jan 07 15:45:10 hagbard postgres[2532]: [1-1] LOG:  database system was shut down at 2013-01-07 15:44:14 CET
Jan 07 15:45:13 hagbard postgres[2500]: [1-1] LOG:  database system is ready to accept connections

Or to get all messages since the last reboot that have at least a “critical” severity level:
$ journalctl -b -p crit
Dec 17 21:08:06 mandark sudo[3673]: pam_unix(sudo:auth): auth could not identify password for [alice]
Dec 29 01:30:22 mandark kernel[6131]: [1053513.909444] CPU6: Core temperature above threshold, cpu clock throttled (total events = 1)

The system journal is readable by root and by users in the wheel and systemd-journal groups. All users have a private journal that can be read using journalctl.
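If a particular user should be able to read the full system journal, the declarative route is to add them to one of those groups; a minimal sketch for a hypothetical user alice:

{
  # Membership in systemd-journal (or wheel) grants read access to the
  # system-wide journal; per-user journals remain readable regardless.
  users.users.alice.extraGroups = [ "systemd-journal" ];
}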
Normally — on systems with a persistent rootfs — system services can persist state to the filesystem without administrator intervention.
However, it is possible and not uncommon to create impermanent systems, whose rootfs is either a tmpfs or reset during boot. While NixOS itself supports this kind of configuration, special care needs to be taken.
/nix

NixOS needs the entirety of /nix to be persistent, as it includes:
/nix/store, which contains all the system’s executables, libraries, and supporting data;
/nix/var/nix, which contains:
the Nix daemon’s database;
roots whose transitive closure is preserved when garbage-collecting the Nix store;
system-wide and per-user profiles.
/boot

/boot should also be persistent, as it contains:
the kernel and initrd which the bootloader loads,
the bootloader’s configuration, including the kernel’s command-line which determines the store path to use as system environment.
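As an illustration only, an impermanent system with a tmpfs root might declare its persistent mounts along these lines (the device labels and filesystem types are hypothetical; adapt them to your actual disk layout):

{
  # The root filesystem is wiped on every boot.
  fileSystems."/" = {
    device = "none";
    fsType = "tmpfs";
    options = [ "mode=0755" ];
  };

  # /nix and /boot live on real, persistent partitions.
  fileSystems."/nix" = {
    device = "/dev/disk/by-label/nix";
    fsType = "ext4";
  };

  fileSystems."/boot" = {
    device = "/dev/disk/by-label/ESP";
    fsType = "vfat";
  };
}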
machine-id(5)

systemd uses a per-machine identifier — machine-id(5) — which must be unique and persistent; otherwise, the system journal may fail to list earlier boots, etc.
systemd generates a random machine-id(5) during boot if it does not already exist, and persists it in /etc/machine-id. As such, it suffices to make that file persistent.
Alternatively, it is possible to generate a random machine-id(5); while the specification allows for any hex-encoded 128b value, systemd itself uses UUIDv4, i.e. random UUIDs, and it is thus preferable to do so as well, in case some software assumes machine-id(5) to be a UUIDv4. Those can be generated with uuidgen -r | tr -d - (tr being used to remove the dashes).
Such a machine-id(5) can be set by writing it to /etc/machine-id or through the kernel’s command-line, though NixOS’ systemd maintainers discourage the latter approach.
/var/lib/systemd

Moreover, systemd expects its state directory — /var/lib/systemd — to persist, for:
systemd-random-seed(8), which loads a 256b “seed” into the kernel’s RNG at boot time, and saves a fresh one during shutdown;
systemd.timer(5) with Persistent=yes, which are then run after boot if the timer would have triggered during the time the system was shut down;
systemd-coredump(8) to store core dumps there by default (see coredump.conf(5));
systemd-backlight(8) and systemd-rfkill(8), which persist hardware-related state;
possibly other things; this list is not meant to be exhaustive.
In any case, making /var/lib/systemd persistent is recommended.

/var/log/journal/{machine-id}

Lastly, systemd-journald(8) writes the system’s journal in binary form to /var/log/journal/{machine-id}; if (locally) persisting the entire log is desired, it is recommended to make all of /var/log/journal persistent.
If not, one can set Storage=volatile in journald.conf(5) (services.journald.storage = "volatile";), which disables journal persistence and causes it to be written to /run/log/journal.
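A corresponding configuration sketch, optionally also capping how much of /run the volatile journal may use (RuntimeMaxUse is a standard journald.conf(5) setting; the 64M value is only an example):

{
  # Keep the journal in /run/log/journal only; it is lost on reboot.
  services.journald.storage = "volatile";
  # Optional: bound the size of the runtime journal.
  services.journald.extraConfig = ''
    RuntimeMaxUse=64M
  '';
}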
When using ZFS, /etc/zfs/zpool.cache should be persistent (or a symlink to a persistent location) as it is the default value for the cachefile property.
This cachefile is used on system startup to discover ZFS pools, so ZFS pools holding the rootfs and/or early-boot datasets such as /nix can be set to cachefile=none.
In principle, if there are no other pools attached to the system, zpool.cache does not need to be persisted; it is however strongly recommended to persist it, in case additional pools are added later on, temporarily or permanently:
While mishandling the cachefile does not lead to data loss by itself, it may cause zpools not to be imported during boot, and services may then write to a location where a dataset was expected to be mounted.
Table of Contents
Nix has a purely functional model, meaning that packages are never upgraded in place. Instead new versions of packages end up in a different location in the Nix store (/nix/store). You should periodically run Nix’s garbage collector to remove old, unreferenced packages. This is easy:
$ nix-collect-garbage

Alternatively, you can use a systemd unit that does the same in the background:
# systemctl start nix-gc.service

You can tell NixOS in configuration.nix to run this unit automatically at certain points in time, for instance, every night at 03:15:
{
  nix.gc.automatic = true;
  nix.gc.dates = "03:15";
}

The commands above do not remove garbage collector roots, such as old system configurations. Thus they do not remove the ability to roll back to previous configurations. The following command deletes old roots, removing the ability to roll back to them:
$ nix-collect-garbage -d

You can also do this for specific profiles, e.g.
$ nix-env -p /nix/var/nix/profiles/per-user/eelco/profile --delete-generations old

Note that NixOS system configurations are stored in the profile /nix/var/nix/profiles/system.
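The periodic garbage-collection unit can also be told to delete old generations itself by passing extra flags; a small sketch (the 30-day window is only an example):

{
  nix.gc.automatic = true;
  nix.gc.dates = "weekly";
  # Passed to nix-collect-garbage; this also removes old system
  # generations, and with them the ability to roll back to them.
  nix.gc.options = "--delete-older-than 30d";
}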
Another way to reclaim disk space (often as much as 40% of the size of the Nix store) is to run Nix’s store optimiser, which seeks out identical files in the store and replaces them with hard links to a single copy.

$ nix-store --optimise

Since this command needs to read the entire Nix store, it can take quite a while to finish.
If your /boot partition runs out of space, after clearing old profiles you must rebuild your system with nixos-rebuild boot or nixos-rebuild switch to update the /boot partition and clear space.
Table of Contents
NixOS allows you to easily run other NixOS instances as containers. Containers are a light-weight approach to virtualisation that runs software in the container at the same speed as in the host system. NixOS containers share the Nix store of the host, making container creation very efficient.
Currently, NixOS containers are not perfectly isolated from the host system. This means that a user with root access to the container can do things that affect the host. So you should not give container root access to untrusted users.
NixOS containers can be created in two ways: imperatively, using the command nixos-container, and declaratively, by specifying them in your configuration.nix. The declarative approach implies that containers get upgraded along with your host system when you run nixos-rebuild, which is often not what you want. By contrast, in the imperative approach, containers are configured and updated independently from the host system.
We’ll cover imperative container management using nixos-container first. Be aware that container management is currently only possible as root.
You create a container with identifier foo as follows:
# nixos-container create foo

This creates the container’s root directory in /var/lib/nixos-containers/foo and a small configuration file in /etc/nixos-containers/foo.conf. It also builds the container’s initial system configuration and stores it in /nix/var/nix/profiles/per-container/foo/system. You can modify the initial configuration of the container on the command line. For instance, to create a container that has sshd running, with the given public key for root:
# nixos-container create foo --config '
  services.openssh.enable = true;
  users.users.root.openssh.authorizedKeys.keys = ["ssh-dss AAAAB3N…"];
'

By default the next free address in the 10.233.0.0/16 subnet will be chosen as the container IP. This behavior can be altered by setting --host-address and --local-address:
# nixos-container create test --config-file test-container.nix \
    --local-address 10.235.1.2 --host-address 10.235.1.1

Creating a container does not start it. To start the container, run:
# nixos-container start foo

This command will return as soon as the container has booted and has reached multi-user.target. On the host, the container runs within a systemd unit called container@container-name.service. Thus, if something went wrong, you can get status info using systemctl:
# systemctl status container@foo

If the container has started successfully, you can log in as root using the root-login operation:
# nixos-container root-login foo
[root@foo:~]#

Note that only root on the host can do this (since there is no authentication). You can also get a regular login prompt using the login operation, which is available to all users on the host:
# nixos-container login foo
foo login: alice
Password: ***

With nixos-container run, you can execute arbitrary commands in the container:
# nixos-container run foo -- uname -a
Linux foo 3.4.82 #1-NixOS SMP Thu Mar 20 14:44:05 UTC 2014 x86_64 GNU/Linux

There are several ways to change the configuration of the container. First, on the host, you can edit /var/lib/nixos-containers/foo/etc/nixos/configuration.nix, and run
# nixos-container update foo

This will build and activate the new configuration. You can also specify a new configuration on the command line:
# nixos-container update foo --config '
  services.httpd.enable = true;
  services.httpd.adminAddr = "foo@example.org";
  networking.firewall.allowedTCPPorts = [ 80 ];
'

# curl http://$(nixos-container show-ip foo)/
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">…

However, note that this will overwrite the container’s /etc/nixos/configuration.nix.
Alternatively, you can change the configuration from within the container itself by running nixos-rebuild switch inside the container. Note that the container by default does not have a copy of the NixOS channel, so you should run nix-channel --update first.
Containers can be stopped and started using nixos-container stop and nixos-container start, respectively, or by using systemctl on the container’s service unit. To destroy a container, including its file system, do
# nixos-container destroy foo

You can also specify containers and their configuration in the host’s configuration.nix. For example, the following specifies that there shall be a container named database running PostgreSQL:
{
  containers.database = {
    config =
      { config, pkgs, ... }:
      {
        services.postgresql.enable = true;
        services.postgresql.package = pkgs.postgresql_14;
      };
  };
}

If you run nixos-rebuild switch, the container will be built. If the container was already running, it will be updated in place, without rebooting. The container can be configured to start automatically by setting containers.database.autoStart = true in its configuration.
By default, declarative containers share the network namespace of the host, meaning that they can listen on (privileged) ports. However, they cannot change the network configuration. You can give a container its own network as follows:
{
  containers.database = {
    privateNetwork = true;
    hostAddress = "192.168.100.10";
    localAddress = "192.168.100.11";
  };
}

This gives the container a private virtual Ethernet interface with IP address 192.168.100.11, which is hooked up to a virtual Ethernet interface on the host with IP address 192.168.100.10. (See the next section for details on container networking.)
To disable the container, just remove it from configuration.nix and run nixos-rebuild switch. Note that this will not delete the root directory of the container in /var/lib/nixos-containers. Containers can be destroyed using the imperative method: nixos-container destroy foo.
Declarative containers can be started and stopped using the corresponding systemd service, e.g. systemctl start container@database.
When you create a container using nixos-container create, it gets its own private IPv4 address in the range 10.233.0.0/16. You can get the container’s IPv4 address as follows:
# nixos-container show-ip foo
10.233.4.2

$ ping -c1 10.233.4.2
64 bytes from 10.233.4.2: icmp_seq=1 ttl=64 time=0.106 ms

Networking is implemented using a pair of virtual Ethernet devices. The network interface in the container is called eth0, while the matching interface in the host is called ve-container-name (e.g., ve-foo). The container has its own network namespace and the CAP_NET_ADMIN capability, so it can perform arbitrary network configuration such as setting up firewall rules, without affecting or having access to the host’s network.
By default, containers cannot talk to the outside network. If you want that, you should set up Network Address Translation (NAT) rules on the host to rewrite container traffic to use your external IP address. This can be accomplished using the following configuration on the host:
{
  networking.nat.enable = true;
  networking.nat.internalInterfaces = [ "ve-+" ];
  networking.nat.externalInterface = "eth0";
}

where eth0 should be replaced with the desired external interface. Note that ve-+ is a wildcard that matches all container interfaces.
If you are using Network Manager, you need to explicitly prevent it from managing container interfaces:

{ networking.networkmanager.unmanaged = [ "interface-name:ve-*" ]; }

You may need to restart your system for the changes to take effect.
Table of Contents
This chapter describes solutions to common problems you might encounter when you manage your NixOS system.
If NixOS fails to boot, there are a number of kernel command line parameters that may help you to identify or fix the issue. You can add these parameters in the GRUB boot menu by pressing “e” to modify the selected boot entry and editing the line starting with linux. The following are some useful kernel command line parameters that are recognised by the NixOS boot scripts or by systemd:
boot.shell_on_fail

Allows the user to start a root shell if something goes wrong in stage 1 of the boot process (the initial ramdisk). This is disabled by default because there is no authentication for the root shell.

boot.debug1

Start an interactive shell in stage 1 before anything useful has been done. That is, no modules have been loaded and no file systems have been mounted, except for /proc and /sys.

boot.debug1devices

Like boot.debug1, but runs stage 1 until kernel modules are loaded and device nodes are created. This may help with e.g. making the keyboard work.

boot.debug1mounts

Like boot.debug1 or boot.debug1devices, but runs stage 1 until all filesystems that are mounted during initrd are mounted (see neededForBoot). As a motivating example, this could be useful if you’ve forgotten to set neededForBoot on a file system.

boot.trace

Print every shell command executed by the stage 1 and 2 boot scripts.

single

Boot into rescue mode (a.k.a. single user mode). This will cause systemd to start nothing but the unit rescue.target, which runs sulogin to prompt for the root password and start a root login shell. Exiting the shell causes the system to continue with the normal boot process.

systemd.log_level=debug systemd.log_target=console

Make systemd very verbose and send log messages to the console instead of the journal. For more parameters recognised by systemd, see systemd(1).
In addition, these arguments are recognised by the live image only:
live.nixos.passwd=password

Set the password for the nixos live user. This can be used for SSH access if there are issues using the terminal.
Notice that for boot.shell_on_fail, boot.debug1, boot.debug1devices, and boot.debug1mounts, if you did not select “start the new shell as pid 1”, and you exit from the new shell, boot will proceed normally from the point where it failed, as if you’d chosen “ignore the error and continue”.
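If you find yourself needing one of these parameters on every boot, it can also be added declaratively instead of editing the GRUB entry by hand; a minimal sketch using boot.shell_on_fail as the example parameter:

{
  # Appended to the kernel command line of every generation.
  boot.kernelParams = [ "boot.shell_on_fail" ];
}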
If no login prompts or X11 login screens appear (e.g. due to hanging dependencies), you can press Alt+ArrowUp. If you’re lucky, this will start rescue mode (described above). (Also note that since most units have a 90-second timeout before systemd gives up on them, the agetty login prompts should appear eventually unless something is very wrong.)
You can enter rescue mode by running:
# systemctl rescue

This will eventually give you a single-user root shell. Systemd will stop (almost) all system services. To get out of maintenance mode, just exit from the rescue shell.
After running nixos-rebuild to switch to a new configuration, you may find that the new configuration doesn’t work very well. In that case, there are several ways to return to a previous configuration.
First, the GRUB boot manager allows you to boot into any previous configuration that hasn’t been garbage-collected. These configurations can be found under the GRUB submenu “NixOS - All configurations”. This is especially useful if the new configuration fails to boot. After the system has booted, you can make the selected configuration the default for subsequent boots:
# /run/current-system/bin/switch-to-configuration boot

Second, you can switch to the previous configuration in a running system:
# nixos-rebuild switch --rollback

This is equivalent to running:
# /nix/var/nix/profiles/system-N-link/bin/switch-to-configuration switch

where N is the number of the NixOS system configuration. To get a list of the available configurations, do:
$ ls -l /nix/var/nix/profiles/system-*-link
...
lrwxrwxrwx 1 root root 78 Aug 12 13:54 /nix/var/nix/profiles/system-268-link -> /nix/store/202b...-nixos-13.07pre4932_5a676e4-4be1055

After a system crash, it’s possible for files in the Nix store to become corrupted. (For instance, the Ext4 file system has the tendency to replace un-synced files with zero bytes.) NixOS tries hard to prevent this from happening: it performs a sync before switching to a new configuration, and Nix’s database is fully transactional. If corruption still occurs, you may be able to fix it automatically.
If the corruption is in a path in the closure of the NixOS system configuration, you can fix it by doing

# nixos-rebuild switch --repair

This will cause Nix to check every path in the closure, and if its cryptographic hash differs from the hash recorded in Nix’s database, the path is rebuilt or redownloaded.
You can also scan the entire Nix store for corrupt paths:
# nix-store --verify --check-contents --repair

Any corrupt paths will be redownloaded if they’re available in a binary cache; otherwise, they cannot be repaired.
Nix uses a so-called binary cache to optimise building a package from source into downloading it as a pre-built binary. That is, whenever a command like nixos-rebuild needs a path in the Nix store, Nix will try to download that path from the Internet rather than build it from source. The default binary cache is https://cache.nixos.org/. If this cache is unreachable, Nix operations may take a long time due to HTTP connection timeouts. You can disable the use of the binary cache by adding --option use-binary-caches false, e.g.
# nixos-rebuild switch --option use-binary-caches false

If you have an alternative binary cache at your disposal, you can use it instead:
# nixos-rebuild switch --option binary-caches http://my-cache.example.org/

This chapter describes how you can modify and extend NixOS.
By default, NixOS’s nixos-rebuild command uses the NixOS and Nixpkgs sources provided by the nixos channel (kept in /nix/var/nix/profiles/per-user/root/channels/nixos). To modify NixOS, however, you should check out the latest sources from Git. This is as follows:
$ git clone https://github.com/NixOS/nixpkgs
$ cd nixpkgs
$ git remote update origin

This will check out the latest Nixpkgs sources to ./nixpkgs and the NixOS sources to ./nixpkgs/nixos. (The NixOS source tree lives in a subdirectory of the Nixpkgs repository.) The nixpkgs repository has branches that correspond to each Nixpkgs/NixOS channel (see Upgrading NixOS for more information about channels). Thus, the Git branch origin/nixos-17.03 will contain the latest built and tested version available in the nixos-17.03 channel.
It’s often inconvenient to develop directly on the master branch, since if somebody has just committed (say) a change to GCC, then the binary cache may not have caught up yet and you’ll have to rebuild everything from source. So you may want to create a local branch based on your current NixOS version:
$ nixos-version
17.09pre104379.6e0b727 (Hummingbird)

$ git checkout -b local 6e0b727

Or, to base your local branch on the latest version available in a NixOS channel:
$ git remote update origin
$ git checkout -b local origin/nixos-17.03

(Replace nixos-17.03 with the name of the channel you want to use.) You can use git merge or git rebase to keep your local branch in sync with the channel, e.g.
$ git remote update origin
$ git merge origin/nixos-17.03

You can use git cherry-pick to copy commits from your local branch to the upstream branch.
If you want to rebuild your system using your (modified) sources, you need to tell nixos-rebuild about them using the -I flag:
# nixos-rebuild switch -I nixpkgs=/my/sources/nixpkgs

If you want nix-env to use the expressions in /my/sources, use nix-env -f /my/sources/nixpkgs, or change the default by adding a symlink in ~/.nix-defexpr:
$ ln -s /my/sources/nixpkgs ~/.nix-defexpr/nixpkgs

You may want to delete the symlink ~/.nix-defexpr/channels_root to prevent root’s NixOS channel from clashing with your own tree (this may break the command-not-found utility though). If you want to go back to the default state, you may just remove the ~/.nix-defexpr directory completely, log out and log in again and it should have been recreated with a link to the root channels.
Table of Contents
NixOS has a modular system for declarative configuration. This system combines multiple modules to produce the full system configuration. One of the modules that constitute the configuration is /etc/nixos/configuration.nix. Most of the others live in the nixos/modules subdirectory of the Nixpkgs tree.
Each NixOS module is a file that handles one logical aspect of the configuration, such as a specific kind of hardware, a service, or network settings. A module configuration does not have to handle everything from scratch; it can use the functionality provided by other modules for its implementation. Thus a module can declare options that can be used by other modules, and conversely can define options provided by other modules in its own implementation. For example, the module pam.nix declares the option security.pam.services that allows other modules (e.g. sshd.nix) to define PAM services; and it defines the option environment.etc (declared by etc.nix) to cause files to be created in /etc/pam.d.
In Configuration Syntax, we saw the following structure of NixOS modules:
{ config, pkgs, ... }:

{
  # option definitions
}

This is actually an abbreviated form of module that only defines options, but does not declare any. The structure of full NixOS modules is shown in Example: Structure of NixOS Modules.
{ config, pkgs, ... }:

{
  imports = [
    # paths of other modules
  ];

  options = {
    # option declarations
  };

  config = {
    # option definitions
  };
}

The meaning of each part is as follows.
The first line makes the current Nix expression a function. The variable pkgs contains Nixpkgs (by default, it takes the nixpkgs entry of NIX_PATH, see the Nix manual for further details), while config contains the full system configuration. This line can be omitted if there is no reference to pkgs and config inside the module.
This imports list enumerates the paths to other NixOS modules that should be included in the evaluation of the system configuration. A default set of modules is defined in the file modules/module-list.nix. These don’t need to be added in the import list.
The attribute options is a nested set of option declarations (described below).
The attribute config is a nested set of option definitions (also described below).
Example: NixOS Module for the “locate” Service shows a module that handles the regular update of the “locate” database, an index of all files in the file system. This module declares two options that can be defined by other modules (typically the user’s configuration.nix): services.locate.enable (whether the database should be updated) and services.locate.interval (when the update should be done). It implements its functionality by defining two options declared by other modules: systemd.services (the set of all systemd services) and systemd.timers (the list of commands to be executed periodically by systemd).
Care must be taken when writing systemd services using Exec* directives. By default systemd performs substitution on %<char> specifiers in these directives, expands environment variables from $FOO and ${FOO}, splits arguments on whitespace, and splits commands on ;. All of these must be escaped to avoid unexpected substitution or splitting when interpolating into an Exec* directive, e.g. when using an extraArgs option to pass additional arguments to the service. The functions utils.escapeSystemdExecArg and utils.escapeSystemdExecArgs are provided for this, see Example: Escaping in Exec directives for an example. When using these functions system environment substitution should not be disabled explicitly.
{
  config,
  lib,
  pkgs,
  ...
}:

let
  inherit (lib)
    concatStringsSep
    mkIf
    mkOption
    optionalString
    types
    ;

  cfg = config.services.locate;
in
{
  options.services.locate = {
    enable = mkOption {
      type = types.bool;
      default = false;
      description = ''
        If enabled, NixOS will periodically update the database of
        files used by the locate command.
      '';
    };

    interval = mkOption {
      type = types.str;
      default = "02:15";
      example = "hourly";
      description = ''
        Update the locate database at this interval. Updates by
        default at 2:15 AM every day.

        The format is described in systemd.time(7).
      '';
    };

    # Other options omitted for documentation
  };

  config = {
    systemd.services.update-locatedb = {
      description = "Update Locate Database";
      path = [ pkgs.su ];
      script = ''
        mkdir -p $(dirname ${toString cfg.output})
        chmod 0755 $(dirname ${toString cfg.output})
        exec updatedb \
          --localuser=${cfg.localuser} \
          ${optionalString (!cfg.includeStore) "--prunepaths='/nix/store'"} \
          --output=${toString cfg.output} ${concatStringsSep " " cfg.extraFlags}
      '';
    };

    systemd.timers.update-locatedb = mkIf cfg.enable {
      description = "Update timer for locate database";
      partOf = [ "update-locatedb.service" ];
      wantedBy = [ "timers.target" ];
      timerConfig.OnCalendar = cfg.interval;
    };
  };
}

{
  config,
  pkgs,
  utils,
  ...
}:

let
  cfg = config.services.echo;
  echoAll = pkgs.writeScript "echo-all" ''
    #! ${pkgs.runtimeShell}
    for s in "$@"; do
      printf '%s\n' "$s"
    done
  '';
  args = [
    "a%Nything"
    "lang=\${LANG}"
    ";"
    "/bin/sh -c date"
  ];
in
{
  systemd.services.echo = {
    description = "Echo to the journal";
    wantedBy = [ "multi-user.target" ];
    serviceConfig.Type = "oneshot";
    serviceConfig.ExecStart = ''
      ${echoAll} ${utils.escapeSystemdExecArgs args}
    '';
  };
}

An option declaration specifies the name, type and description of a NixOS configuration option. It is invalid to define an option that hasn’t been declared in any module. An option declaration generally looks like this:
{
  options = {
    name = mkOption {
      type = type specification;
      default = default value;
      example = example value;
      description = "Description for use in the NixOS manual.";
    };
  };
}

The attribute names within the name attribute path must be camel cased in general but should, as an exception, match the package attribute name when referencing a Nixpkgs package. For example, the option services.nix-serve.bindAddress references the nix-serve Nixpkgs package.
The function mkOption accepts the following arguments.

type

The type of the option (see the section called “Options Types”). This argument is mandatory for nixpkgs modules. Setting this is highly recommended for the sake of documentation and type checking. In case it is not set, a fallback type with unspecified behavior is used.

default

The default value used if no value is defined by any module. A default is not required; but if a default is not given, then users of the module will have to define the value of the option, otherwise an error will be thrown.

defaultText

A textual representation of the default value to be rendered verbatim in the manual. Useful if the default value is a complex expression or depends on other values or packages. Use lib.literalExpression for a Nix expression, lib.literalMD for a plain English description in Nixpkgs-flavored Markdown format.

example

An example value that will be shown in the NixOS manual. You can use lib.literalExpression and lib.literalMD in the same way as in defaultText.

description

A textual description of the option in Nixpkgs-flavored Markdown format that will be included in the NixOS manual.
mkEnableOption

Creates an Option attribute set for a boolean value option, i.e. an option to be toggled on or off.
This function takes a single string argument, the name of the thing to be toggled.
The option’s description is “Whether to enable <name>.”.
For example:
mkEnableOption usage

lib.mkEnableOption "magic"
# is like
lib.mkOption {
  type = lib.types.bool;
  default = false;
  example = true;
  description = "Whether to enable magic.";
}

mkPackageOption

Usage:
mkPackageOption pkgs "name" {
  default = [ "path" "in" "pkgs" ];
  example = "literal example";
}

Creates an Option attribute set for an option that specifies the package a module should use for some purpose.
Note: You should make package options for your modules, where applicable. While one can always overwrite a specific package throughout nixpkgs by using nixpkgs overlays, they slow down nixpkgs evaluation significantly and are harder to debug when issues arise.
The package is specified in the third argument under default as a list of strings representing its attribute path in nixpkgs (or another package set). Because of this, you need to pass nixpkgs itself (or a subset) as the first argument.
The second argument may be either a string or a list of strings. It provides the display name of the package in the description of the generated option (using only the last element if the passed value is a list) and serves as the fallback value for the default argument.
To include extra information in the description, pass extraDescription to append arbitrary text to the generated description. You can also pass an example value, either a literal string or an attribute path.
The default argument can be omitted if the provided name is an attribute of pkgs (if name is a string) or a valid attribute path in pkgs (if name is a list).
If you wish to explicitly provide no default, pass null as default.
Examples:
mkPackageOption usage

lib.mkPackageOption pkgs "hello" { }
# is like
lib.mkOption {
  type = lib.types.package;
  default = pkgs.hello;
  defaultText = lib.literalExpression "pkgs.hello";
  description = "The hello package to use.";
}

mkPackageOption with explicit default and example

lib.mkPackageOption pkgs "GHC" {
  default = [ "ghc" ];
  example = "pkgs.haskell.packages.ghc92.ghc.withPackages (hkgs: [ hkgs.primes ])";
}
# is like
lib.mkOption {
  type = lib.types.package;
  default = pkgs.ghc;
  defaultText = lib.literalExpression "pkgs.ghc";
  example = lib.literalExpression "pkgs.haskell.packages.ghc92.ghc.withPackages (hkgs: [ hkgs.primes ])";
  description = "The GHC package to use.";
}

mkPackageOption with additional description text

mkPackageOption pkgs [ "python312Packages" "torch" ] {
  extraDescription = "This is an example and doesn't actually do anything.";
}
# is like
lib.mkOption {
  type = lib.types.package;
  default = pkgs.python312Packages.torch;
  defaultText = lib.literalExpression "pkgs.python312Packages.torch";
  description = "The pytorch package to use. This is an example and doesn't actually do anything.";
}

Extensible option types are a feature that allows the declaration of certain types to be extended across multiple module files. This feature only works with a restricted set of types, namely enum and submodules, and any composed forms of them.
Extensible option types can be used for enum options that affect multiple modules, or as an alternative to related enable options.
As an example, we will take the case of display managers. There is a central display manager module for generic display manager options and a module file per display manager backend (sddm, gdm …).
There are two approaches we could take with this module structure:
Configuring the display managers independently by adding an enable option to every display manager module backend. (NixOS)
Configuring the display managers in the central module by adding an option to select which display manager backend to use.
Both approaches have problems.
Making backends independent can quickly become hard to manage. For display managers, there can only be one enabled at a time, but the type system cannot enforce this restriction as there is no relation between each backend’s enable option. As a result, this restriction has to be done explicitly by adding assertions in each display manager backend module.
On the other hand, managing the display manager backends in the central module will require changing the central module option every time a new backend is added or removed.
By using extensible option types, it is possible to create a placeholder option in the central module (Example: Extensible type placeholder in the service module), and to extend it in each backend module (Example: Extending services.xserver.displayManager.enable in the gdm module, Example: Extending services.xserver.displayManager.enable in the sddm module).
As a result, displayManager.enable option values can be added without changing the main service module file and the type system automatically enforces that there can only be a single display manager enabled.
{
  services.xserver.displayManager.enable = mkOption {
    description = "Display manager to use";
    type = with types; nullOr (enum [ ]);
  };
}

services.xserver.displayManager.enable in the gdm module

{
  services.xserver.displayManager.enable = mkOption {
    type = with types; nullOr (enum [ "gdm" ]);
  };
}

services.xserver.displayManager.enable in the sddm module

{
  services.xserver.displayManager.enable = mkOption {
    type = with types; nullOr (enum [ "sddm" ]);
  };
}

The placeholder declaration is a standard mkOption declaration, but it is important that extensible option declarations only use the type argument.
Extensible option types work with any of the composed variants ofenumsuch aswith types; nullOr (enum [ "foo" "bar" ]) orwith types; listOf (enum [ "foo" "bar" ]).
Option types are a way to put constraints on the values a module option can take. Types are also responsible for how values are merged in case of multiple value definitions.
Basic types are the simplest available types in the module system. Basictypes include multiple string types that mainly differ in how definitionmerging is handled.
types.boolA boolean, its values can betrue orfalse.All definitions must have the same value, after priorities. An error is thrown in case of a conflict.
types.boolByOrA boolean, its values can betrue orfalse.The result istrue ifany of multiple definitions istrue.In other words, definitions are merged with the logicalOR operator.
types.pathA filesystem path that starts with a slash. Even if derivations can beconsidered as paths, the more specifictypes.package should be preferred.
types.pathInStoreA path that is contained in the Nix store. This can be a top-level storepath likepkgs.hello or a descendant like"${pkgs.hello}/bin/hello".
types.pathWith {inStore ?null,absolute ?null }A filesystem path. Either a string or something that can be coercedto a string.
Parameters
inStore (Boolean ornull, defaultnull)Whether the path must be in the store (true), must not be in the store(false), or it doesn’t matter (null)
absolute (Boolean ornull, defaultnull)Whether the path must be absolute (true), must not be absolute(false), or it doesn’t matter (null)
Behavior
pathWith { inStore = true; } is equivalent topathInStore
pathWith { absolute = true; } is equivalent topath
pathWith { inStore = false; absolute = true; } requires an absolutepath that is not in the store. Useful for password files that shouldn’t beleaked into the store.
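As an illustrative sketch (the option path services.example.credentialsFile is made up), such a constraint could be declared like this:

{ lib, ... }:
{
  # Hypothetical option: an absolute path that must not end up in the Nix store,
  # e.g. a file containing a secret.
  options.services.example.credentialsFile = lib.mkOption {
    type = lib.types.pathWith {
      inStore = false;
      absolute = true;
    };
  };
}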
types.packageA top-level store path. This can be an attribute set pointingto a store path, like a derivation or a flake input.
types.enum l
One element of the list l, e.g. types.enum [ "left" "right" ]. Multiple definitions cannot be merged.
If you want to pair these values with more information, possibly ofdistinct types, consider using asum type.
types.anythingA type that accepts any value and recursively merges attribute setstogether. This type is recommended when the option type is unknown.
Example: types.anything

Two definitions of this type like

{
  str = lib.mkDefault "foo";
  pkg.hello = pkgs.hello;
  fun.fun = x: x + 1;
}

{
  str = lib.mkIf true "bar";
  pkg.gcc = pkgs.gcc;
  fun.fun = lib.mkForce (x: x + 2);
}

will get merged to

{
  str = "bar";
  pkg.gcc = pkgs.gcc;
  pkg.hello = pkgs.hello;
  fun.fun = x: x + 2;
}

types.raw
A type which doesn't do any checking, merging or nested evaluation. It accepts a single arbitrary value that is not recursed into, making it useful for values coming from outside the module system, such as package sets or arbitrary data. Options of this type are still evaluated according to priorities and conditionals, so mkForce, mkIf and co. still work on the option value itself, but not for any value nested within it. This type should only be used when checking, merging and nested evaluation are not desirable.
types.optionTypeThe type of an option’s type. Its merging operation ensures that nestedoptions have the correct file location annotated, and that if possible,multiple option definitions are correctly merged together. The main usecase is as the type of the_module.freeformType option.
types.attrsA free-form attribute set.
This type will be deprecated in the future because it doesn’trecurse into attribute sets, silently drops earlier attributedefinitions, and doesn’t dischargelib.mkDefault,lib.mkIfand co. For allowing arbitrary attribute sets, prefertypes.attrsOf types.anything instead which doesn’t have theseproblems.
types.pkgsA type for the top level Nixpkgs package set.
types.intA signed integer.
types.ints.{s8, s16, s32}
Signed integers with a fixed length (8, 16 or 32 bits). They go from −2^(n−1) to 2^(n−1)−1 respectively (e.g. −128 to 127 for 8 bits).
types.ints.unsignedAn unsigned integer (that is >= 0).
types.ints.{u8, u16, u32}Unsigned integers with a fixed length (8, 16 or 32 bits). They gofrom 0 to 2^n−1 respectively (e.g.0to255 for 8 bits).
types.ints.betweenlowest highestAn integer betweenlowest andhighest (both inclusive).
types.ints.positiveA positive integer (that is > 0).
types.portA port number. This type is an alias totypes.ints.u16.
types.floatA floating point number.
Converting a floating point number to a string withtoString ortoJSONmay result inprecision loss.
types.numberEither a signed integer or a floating point number. No implicit conversionis done between the two types, and multiple equal definitions will only bemerged if they have the same type.
types.numbers.betweenlowest highestAn integer or floating point number betweenlowest andhighest (both inclusive).
types.numbers.nonnegativeA nonnegative integer or floating point number (that is >= 0).
types.numbers.positiveA positive integer or floating point number (that is > 0).
types.strA string. Multiple definitions cannot be merged.
types.separatedString sep
A string. Multiple definitions are concatenated with sep, e.g. types.separatedString "|".
types.linesA string. Multiple definitions are concatenated with a new line"\n".
types.commasA string. Multiple definitions are concatenated with a comma",".
types.envVarA string. Multiple definitions are concatenated with a colon":".
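As a sketch of the merging behavior of these string types, evaluating two modules that define the same types.lines option (the option name extraConfig is only illustrative) concatenates the definitions with newlines:

let
  lib = import <nixpkgs/lib>;

  eval = lib.evalModules {
    modules = [
      # Declaration of an illustrative option of type types.lines.
      { options.extraConfig = lib.mkOption { type = lib.types.lines; default = ""; }; }
      # Two independent definitions...
      { extraConfig = "setting-a = 1"; }
      { extraConfig = "setting-b = 2"; }
    ];
  };
in
# ...are concatenated with a newline (ordering can be influenced with mkOrder,
# mkBefore and mkAfter): "setting-a = 1\nsetting-b = 2"
eval.config.extraConfig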
types.strMatchingA string matching a specific regular expression. Multipledefinitions cannot be merged. The regular expression is processedusingbuiltins.match.
types.luaInlineA string wrapped usinglib.mkLuaInline. Allows embedding lua expressionsinline within generated lua. Multiple definitions cannot be merged.
Submodules are detailed inSubmodule.
types.submoduleoA set of sub optionso.o can be an attribute set, a functionreturning an attribute set, or a path to a file containing such avalue. Submodules are used in composed types to create modularoptions. This is equivalent totypes.submoduleWith { modules = toList o; shorthandOnlyDefinesConfig = true; }.
types.submoduleWith {modules,specialArgs ? {},shorthandOnlyDefinesConfig ? false }Liketypes.submodule, but more flexible and with better defaults.It has parameters
modules A list of modules to use by default for thissubmodule type. This gets combined with all option definitionsto build the final list of modules that will be included.
Only options defined with this argument are included in rendereddocumentation.
specialArgs An attribute set of extra arguments to be passedto the module functions. The option_module.args should beused instead for most arguments since it allows overriding.specialArgs should only be used for arguments that can’t gothrough the module fixed-point, because of infinite recursion orother problems. An example is overriding thelib argument,becauselib itself is used to define_module.args, whichmakes using_module.args to define it impossible.
shorthandOnlyDefinesConfig Whether definitions of this typeshould default to theconfig section of a module (seeExample: Structure of NixOS Modules)if it is an attribute set. Enabling this only has a benefitwhen the submodule defines an option namedconfig oroptions.In such a case it would allow the option to be set withthe-submodule.config = "value" instead of requiringthe-submodule.config.config = "value". This is becauseonly when modulesdon’t set theconfig oroptionskeys, all keys are interpreted as option definitions in theconfig section. Enabling this option implicitly puts allattributes in theconfig section.
With this option enabled, defining a non-config sectionrequires using a function:the-submodule = { ... }: { options = { ... }; }.
types.deferredModuleWhereassubmodule represents an option tree,deferredModule representsa module value, such as a module file or a configuration.
It can be set multiple times.
Module authors can use its value inimports, insubmoduleWith’smodulesor inevalModules’modules parameter, among other places.
Note thatimports must be evaluated before the module fixpoint. Becauseof this, deferred modules can only be imported into “other” fixpoints, suchas submodules.
One use case for this type is the type of a "default" module that allows the user to affect all submodules in an attrsOf submodule at once. This is more convenient and discoverable than expecting the module user to type-merge with the attrsOf submodule option.
A union of types is a type such that a value is valid when it is valid for at least one of those types.
If some values are instances of more than one of the types, it is not possible to distinguish which type they are meant to be instances of. If that’s needed, consider using asum type.
types.either t1 t2
Type t1 or type t2, e.g. with types; either int str. Multiple definitions cannot be merged.

types.oneOf [ t1 t2 … ]
Type t1 or type t2 and so forth, e.g. with types; oneOf [ int str bool ]. Multiple definitions cannot be merged.

types.nullOr t
null or type t. Multiple definitions are merged according to type t.
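For illustration, a hedged sketch of an option that accepts either a port number or an absolute socket path, with null meaning "disabled" (the option name is made up):

{ lib, ... }:
{
  options.services.example.listenOn = lib.mkOption {
    # Accept either a port number or a UNIX socket path; null disables listening.
    type = with lib.types; nullOr (either port path);
    default = null;
  };
}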
A sum type can be thought of, conceptually, as atypes.enum where each valid item is paired with at least a type, through some value syntax.Nix does not have a built-in syntax for this pairing of a label and a type or value, so sum types may be represented in multiple ways.
If the you’re interested in can be distinguished without a label, you may simplify your value syntax with aunion type instead.
types.attrTag{ attr1 = option1; attr2 = option2; ... }An attribute set containing one attribute, whose name must be picked fromthe attribute set (attr1, etc) and whose value consists of definitions that are valid for the corresponding option (option1, etc).
This type appears in the documentation asattribute-tagged union.
Example:
{ lib, ... }:
let
  inherit (lib) types mkOption;
in
{
  options.toyRouter.rules = mkOption {
    description = ''
      Rules for a fictional packet routing service.
    '';
    type = types.attrsOf (
      types.attrTag {
        bounce = mkOption {
          description = "Send back a packet explaining why it wasn't forwarded.";
          type = types.submodule {
            options.errorMessage = mkOption { … };
          };
        };
        forward = mkOption {
          description = "Forward the packet.";
          type = types.submodule {
            options.destination = mkOption { … };
          };
        };
        drop = mkOption {
          description = "Drop the packet without sending anything back.";
          type = types.submodule { };
        };
      }
    );
  };

  config.toyRouter.rules = {
    http = {
      bounce = {
        errorMessage = "Unencrypted HTTP is banned. You must always use https://.";
      };
    };
    ssh = {
      drop = { };
    };
  };
}

Composed types are types that take a type as parameter. listOf int and either int str are examples of composed types.
types.listOf t
A list of t type, e.g. types.listOf int. Multiple definitions are merged with list concatenation.
types.attrsOf t
An attribute set where all the values are of type t. Multiple definitions result in the joined attribute set.
This type isstrict in its values, which in turn means attributescannot depend on other attributes. See types.lazyAttrsOf for a lazy version.
types.lazyAttrsOf t
An attribute set where all the values are of type t. Multiple definitions result in the joined attribute set. This is the lazy version of types.attrsOf, allowing attributes to depend on each other.
This version does not fully support conditional definitions! With anoptionfoo of this type and a definitionfoo.attr = lib.mkIf false 10, evaluatingfoo ? attr will returntrue even though it should be false. Accessing the value will thenthrow an error. For typest that have anemptyValue defined,that value will be returned instead of throwing an error. So if thetype offoo.attr waslazyAttrsOf (nullOr int),null would bereturned instead for the samemkIf false definition.
types.attrsWith { elemType, lazy ? false, placeholder ? "name" }
An attribute set where all the values are of type elemType.
Parameters
elemType (Required)Specifies the type of the values contained in the attribute set.
lazyDetermines whether the attribute set is lazily evaluated. See:types.lazyAttrsOf
placeholder (String, default:name )Placeholder string in documentation for the attribute names.The default valuename results in the placeholder<name>
Behavior
attrsWith { elemType = t; } is equivalent toattrsOf t
attrsWith { lazy = true; elemType = t; } is equivalent tolazyAttrsOf t
attrsWith { placeholder = "id"; elemType = t; }
Displays the option asfoo.<id> in the manual.
types.uniq t
Ensures that type t cannot be merged. It is used to ensure option definitions are provided only once.
types.unique { message = m } t
Ensures that type t cannot be merged. Prints the message m, after the line The option <option path> is defined multiple times. and before a list of definition locations.
types.coercedTo from f to
Type to or type from, which will be coerced to type to using function f, which takes an argument of type from and returns a value of type to. Can be used to preserve backwards compatibility of an option if its type was changed.
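For instance, an option that used to be a single string and is now a list of strings can keep accepting the old form by coercing string definitions into singleton lists (a sketch; the option name is illustrative):

{ lib, ... }:
{
  options.services.example.extraFlags = lib.mkOption {
    # A plain string definition is wrapped with lib.singleton into a one-element list.
    type = lib.types.coercedTo lib.types.str lib.singleton (lib.types.listOf lib.types.str);
    default = [ ];
  };
}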
submodule is a very powerful type that defines a set of sub-optionsthat are handled like a separate module.
It takes a parameter o, which should be a set, or a function returning a set, with an options key defining the sub-options. Submodule option definitions are type-checked according to the options declarations. Of course, you can nest submodule option definitions for even higher modularity.
The option set can be defined directly(Example: Directly defined submodule) or as reference(Example: Submodule defined as a reference).
Note that even if your submodule’s options all have a default value,you will still need to provide a default value (e.g. an empty attribute set)if you want to allow users to leave it undefined.
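A minimal sketch of that note: giving the option default = { } lets users leave mod undefined while the submodule's own defaults still apply.

{ lib, ... }:
{
  options.mod = lib.mkOption {
    description = "submodule with an empty default";
    # Without this default, leaving `mod` undefined would be an error even
    # though `foo` below has a default of its own.
    default = { };
    type = lib.types.submodule {
      options.foo = lib.mkOption {
        type = lib.types.int;
        default = 1;
      };
    };
  };
}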
Example: Directly defined submodule

{
  options.mod = mkOption {
    description = "submodule example";
    type = with types; submodule {
      options = {
        foo = mkOption { type = int; };
        bar = mkOption { type = str; };
      };
    };
  };
}

Example: Submodule defined as a reference

let
  modOptions = {
    options = {
      foo = mkOption { type = int; };
      bar = mkOption { type = int; };
    };
  };
in
{
  options.mod = mkOption {
    description = "submodule example";
    type = with types; submodule modOptions;
  };
}

The submodule type is especially interesting when used with composed types like attrsOf or listOf. When composed with listOf (Example: Declaration of a list of submodules), submodule allows multiple definitions of the submodule option set (Example: Definition of a list of submodules).
Example: Declaration of a list of submodules

{
  options.mod = mkOption {
    description = "submodule example";
    type = with types; listOf (submodule {
      options = {
        foo = mkOption { type = int; };
        bar = mkOption { type = str; };
      };
    });
  };
}

Example: Definition of a list of submodules

{
  config.mod = [
    { foo = 1; bar = "one"; }
    { foo = 2; bar = "two"; }
  ];
}

When composed with attrsOf (Example: Declaration of attribute sets of submodules), submodule allows multiple named definitions of the submodule option set (Example: Definition of attribute sets of submodules).
Example: Declaration of attribute sets of submodules

{
  options.mod = mkOption {
    description = "submodule example";
    type = with types; attrsOf (submodule {
      options = {
        foo = mkOption { type = int; };
        bar = mkOption { type = str; };
      };
    });
  };
}

Example: Definition of attribute sets of submodules

{
  config.mod.one = { foo = 1; bar = "one"; };
  config.mod.two = { foo = 2; bar = "two"; };
}

Types are mainly characterized by their check and merge functions.
checkThe function to type check the value. Takes a value as parameter andreturn a boolean. It is possible to extend a type check with theaddCheck function (Example: Adding a type check),or to fully override the check function(Example: Overriding a type check).
Example: Adding a type check

{
  byte = mkOption {
    description = "An integer between 0 and 255.";
    type = types.addCheck types.int (x: x >= 0 && x <= 255);
  };
}

Example: Overriding a type check

{
  nixThings = mkOption {
    description = "words that start with 'nix'";
    type = types.str // {
      check = (x: lib.hasPrefix "nix" x);
    };
  };
}

merge
Function to merge the option's values when multiple values are set. The function takes two parameters: loc, the option path as a list of strings, and defs, the list of defined values as a list. It is possible to override a type's merge function for custom needs.
Custom types can be created with themkOptionType function. As typecreation includes some more complex topics such as submodule handling,it is recommended to get familiar withtypes.nix code before creatinga new type.
The only required parameter isname.
nameA string representation of the type function name.
descriptionDescription of the type used in documentation. Give information ofthe type and any of its arguments.
checkA function to type check the definition value. Takes the definitionvalue as a parameter and returns a boolean indicating the type checkresult,true for success andfalse for failure.
mergeA function to merge multiple definitions values. Takes twoparameters:
loc
The option path as a list of strings, e.g. ["boot" "loader" "grub" "enable"].
defs
The list of sets of defined value and file where the value was defined, e.g. [ { file = "/foo.nix"; value = 1; } { file = "/bar.nix"; value = 2; } ]. The merge function should return the merged value or throw an error in case the values are impossible or not meant to be merged.
getSubOptions
For composed types that can take a submodule as type parameter, this function generates the sub-options documentation. It takes the current option prefix as a list and returns the set of sub-options. Usually defined in a recursive manner by adding a term to the prefix, e.g. prefix: elemType.getSubOptions (prefix ++ ["prefix"]) where "prefix" is the newly added prefix.
getSubModulesFor composed types that can take a submodule as type parameter, thisfunction should return the type parameters submodules. If the typeparameter is calledelemType, the function should just recursivelylook into submodules by returningelemType.getSubModules;.
substSubModules
For composed types that can take a submodule as type parameter, this function can be used to substitute the parameter of a submodule type. It takes a module as parameter and returns the type with the submodule options substituted. It is usually defined as a type function call with a recursive call to substSubModules, e.g. for a type composedType that takes an elemType type parameter, this function should be defined as m: composedType (elemType.substSubModules m).
typeMergeA function to merge multiple type declarations. Takes the type tomergefunctor as parameter. Anull return value means that typecannot be merged.
fThe type to mergefunctor.
Note: There is a generic defaultTypeMerge that works with most value and composed types.
functorAn attribute set representing the type. It is used for typeoperations and has the following keys:
typeThe type function.
wrappedHolds the type parameter for composed types.
payloadHolds the value parameter for value types. The types that have apayload are theenum,separatedString andsubmoduletypes.
binOpA binary operation that can merge the payloads of two sametypes. Defined as a function that take two payloads asparameters and return the payloads merged.
Option definitions are generally straight-forward bindings of values tooption names, like
{
  config = {
    services.httpd.enable = true;
  };
}

However, sometimes you need to wrap an option definition or set of option definitions in a property to achieve certain effects:
If a set of option definitions is conditional on the value of anotheroption, you may need to usemkIf. Consider, for instance:
{
  config =
    if config.services.httpd.enable then
      {
        environment.systemPackages = [
          # ...
        ];
        # ...
      }
    else
      { };
}

This definition will cause Nix to fail with an "infinite recursion" error. Why? Because the value of config.services.httpd.enable depends on the value being constructed here. After all, you could also write the clearly circular and contradictory:
{
  config =
    if config.services.httpd.enable then
      { services.httpd.enable = false; }
    else
      { services.httpd.enable = true; };
}

The solution is to write:
{
  config = mkIf config.services.httpd.enable {
    environment.systemPackages = [
      # ...
    ];
    # ...
  };
}

The special function mkIf causes the evaluation of the conditional to be "pushed down" into the individual definitions, as if you had written:
{
  config = {
    environment.systemPackages =
      if config.services.httpd.enable then
        [
          # ...
        ]
      else
        [ ];
    # ...
  };
}

A module can override the definitions of an option in other modules by setting an override priority. All option definitions that do not have the lowest priority value are discarded. By default, option definitions have priority 100 and option defaults have priority 1500. You can specify an explicit priority by using mkOverride, e.g.
{ services.openssh.enable = mkOverride 10 false; }

This definition causes all other definitions with priorities above 10 to be discarded. The function mkForce is equal to mkOverride 50, and mkDefault is equal to mkOverride 1000.
It is also possible to influence the order in which the definitions for an option aremerged by setting anorder priority withmkOrder. The default order priority is 1000.The functionsmkBefore andmkAfter are equal tomkOrder 500 andmkOrder 1500, respectively.As an example,
{ hardware.firmware = mkBefore [ myFirmware ]; }

This definition ensures that myFirmware comes before other unordered definitions in the final list value of hardware.firmware.
Note that this is different fromoverride priorities:setting an order does not affect whether the definition is included or not.
In conjunction withmkIf, it is sometimes useful for a module toreturn multiple sets of option definitions, to be merged together as ifthey were declared in separate modules. This can be done usingmkMerge:
{
  config = mkMerge [
    # Unconditional stuff.
    {
      environment.systemPackages = [
        # ...
      ];
    }
    # Conditional stuff.
    (mkIf config.services.bla.enable {
      environment.systemPackages = [
        # ...
      ];
    })
  ];
}

The module system internally transforms module syntax into definitions. This always happens internally.
It is possible to create first class definitions which are not transformedagain into definitions by the module system.
Usually the file location of a definition is implicit and equal to the file it came from.However, when manipulating definitions, it may be useful for them to be completely self-contained (or “free-floating”).
A free-floating definition is created withmkDefinition { file = ...; value = ...; }.
Preserving the file location creates better error messages, for example when copying definitions from one option to another.
Other properties like mkOverride, mkMerge and mkAfter can be used in the value attribute but not on the entire definition.
This is what would work:

mkDefinition {
  value = mkForce 42;
  file = "somefile.nix";
}

While this would NOT work:

mkForce (mkDefinition {
  value = 42;
  file = "somefile.nix";
})

The following shows an example configuration that yields an error with the custom position information:
{
  _file = "file.nix";
  options.foo = mkOption { default = 13; };
  config.foo = lib.mkDefinition {
    file = "custom place";
    # mkOptionDefault creates a conflict with the option foo's `default = 13` on purpose
    # so the error message below contains the conflicting values and different positions
    value = lib.mkOptionDefault 42;
  };
}

evaluating the module yields the following error:
error: Cannot merge definitions of `foo'. Definition values:
- In `file.nix': 13
- In `custom place': 42

To set the file location for all definitions in a module, you may add the _file module syntax attribute, which has a similar effect to using mkDefinition on all definitions in the module, without the hassle.
When configuration problems are detectable in a module, it is a good idea to write an assertion or warning. Doing so provides clear feedback to the user and prevents errors after the build.
Although Nix has theabort andbuiltins.tracefunctions to perform such tasks, they are not ideally suited for NixOS modules. Instead of these functions, you can declare your warnings and assertions using the NixOS module system.
This is an example of usingwarnings.
{ config, lib, ... }:
{
  config = lib.mkIf config.services.foo.enable {
    warnings =
      if config.services.foo.bar then
        [
          ''
            You have enabled the bar feature of the foo service.
            This is known to cause some specific problems in certain situations.
          ''
        ]
      else
        [ ];
  };
}

This example, extracted from the syslogd module, shows how to use assertions. Since there can only be one active syslog daemon at a time, an assertion is useful to prevent such a broken system from being built.
{ config, lib, ... }:
{
  config = lib.mkIf config.services.syslogd.enable {
    assertions = [
      {
        assertion = !config.services.rsyslogd.enable;
        message = "rsyslogd conflicts with syslogd";
      }
    ];
  };
}

Like Nix packages, NixOS modules can declare meta-attributes to provide extra information. Module meta attributes are defined in the meta.nix special module.
meta is a top level attribute likeoptions andconfig. Availablemeta-attributes aremaintainers,doc, andbuildDocsInSandbox.
Each of the meta-attributes must be defined at most once per modulefile.
{ config, lib, pkgs, ... }:
{
  options = {
    # ...
  };

  config = {
    # ...
  };

  meta = {
    maintainers = with lib.maintainers; [ ];
    doc = ./default.md;
    buildDocsInSandbox = true;
  };
}

maintainers contains a list of the module maintainers.
doc points to a valid Nixpkgs-flavored CommonMark file containing the module documentation. Its contents are automatically added to Configuration. Changes to a module's documentation have to be checked to not break building the NixOS manual:
$ nix-build nixos/release.nix -A manual.x86_64-linux

buildDocsInSandbox indicates whether the option documentation for the module can be built in a derivation sandbox. This option is currently only honored for modules shipped by nixpkgs. User modules and modules taken from extraModules are always built outside of the sandbox, as has been the case in previous releases.
Building NixOS option documentation in a sandbox allows caching of the built documentation, which greatly decreases the amount of time needed to evaluate a system configuration that has NixOS documentation enabled. The sandbox also restricts which attributes may be referenced by documentation attributes (such as option descriptions) to the options and lib module arguments and the pkgs.formats attribute of the pkgs argument; config and the rest of pkgs are disallowed and will cause doc build failures when used. This restriction is necessary because we cannot reproduce the full nixpkgs instantiation with configuration and overlays from a system configuration inside the sandbox. The options argument only includes options of modules that are also built inside the sandbox; referencing an option of a module that isn't built in the sandbox is also forbidden.
The default istrue and should usually not be changed; set it tofalseonly if the module requires access topkgs in its documentation (e.g.because it loads information from a linked package to build an option type)or if its documentation depends on other modules that also aren’t sandboxed(e.g. by using types defined in the other module).
Sometimes NixOS modules need to be used in configuration but existoutside of Nixpkgs. These modules can be imported:
{ config, lib, pkgs, ... }:
{
  imports = [
    # Use a locally-available module definition in
    # ./example-module/default.nix
    ./example-module
  ];

  services.exampleModule.enable = true;
}

Modules that are imported can also be disabled. The option declarations, config implementation and the imports of a disabled module will be ignored, allowing another to take its place. This can be used to import a set of modules from another channel while keeping the rest of the system on a stable release.
disabledModules is a top level attribute likeimports,options andconfig. It contains a list of modules that will be disabled. This caneither be:
the full path to the module,
or a string with the filename relative to the modules path (eg. <nixpkgs/nixos/modules> for nixos),
or an attribute set containing a specifickey attribute.
The latter allows some modules to be disabled, despite them being distributedvia attributes instead of file paths. Thekey should be globally unique, soit is recommended to include a file path in it, or rely on a framework to do itfor you.
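A hedged sketch of the attribute set form (the key value here is made up and must match the key of the module you want to disable):

{
  disabledModules = [
    # Disable a module that was imported by attribute value rather than by
    # file path; the key must be the module's own `key`.
    { key = "my-framework#modules/example-service"; }
  ];
}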
This example will replace the existing postgresql module with theversion defined in the nixos-unstable channel while keeping the rest ofthe modules and packages from the original nixos channel. This onlyoverrides the module definition, this won’t use postgresql fromnixos-unstable unless explicitly configured to do so.
{ config, lib, pkgs, ... }:
{
  disabledModules = [ "services/databases/postgresql.nix" ];

  imports = [
    # Use postgresql service from nixos-unstable channel.
    # sudo nix-channel --add https://nixos.org/channels/nixos-unstable nixos-unstable
    <nixos-unstable/nixos/modules/services/databases/postgresql.nix>
  ];

  services.postgresql.enable = true;
}

This example shows how to define a custom module as a replacement for an existing module. Importing this module will disable the original module without having to know its implementation details.
{ config, lib, pkgs, ... }:
let
  inherit (lib) mkIf mkOption types;
  cfg = config.programs.man;
in
{
  disabledModules = [ "services/programs/man.nix" ];

  options = {
    programs.man.enable = mkOption {
      type = types.bool;
      default = true;
      description = "Whether to enable manual pages.";
    };
  };

  config = mkIf cfg.enable {
    warnings = [ "disabled manpages for production deployments." ];
  };
}

Freeform modules allow you to define values for option paths that have not been declared explicitly. This can be used to add attribute-specific types to what would otherwise have to be attrsOf options in order to accept all attribute names.
This feature can be enabled by using the attributefreeformType todefine a freeform type. By doing this, all assignments without anassociated option will be merged using the freeform type and combinedinto the resultingconfig set. Since this feature nullifies namechecking for entire option trees, it is only recommended for use insubmodules.
The following shows a submodule assigning a freeform type that allowsarbitrary attributes withstr values belowsettings, but alsodeclares an option for thesettings.port attribute to have ittype-checked and assign a default value. SeeExample: Declaring a type-checkedsettings attributefor a more complete example.
{ lib, config, ... }:
{
  options.settings = lib.mkOption {
    type = lib.types.submodule {
      freeformType = with lib.types; attrsOf str;

      # We want this attribute to be checked for the correct type
      options.port = lib.mkOption {
        type = lib.types.port;
        # Declaring the option also allows defining a default value
        default = 8080;
      };
    };
  };
}

And the following shows what such a module then allows:
{
  # Not a declared option, but the freeform type allows this
  settings.logLevel = "debug";

  # Not allowed because the freeform type only allows strings
  # settings.enable = true;

  # Allowed because there is a port option declared
  settings.port = 80;

  # Not allowed because the port option doesn't allow strings
  # settings.port = "443";
}

Freeform attributes cannot depend on other attributes of the same set without infinite recursion:
{
  # This throws "infinite recursion encountered"
  settings.logLevel = lib.mkIf (config.settings.port == 80) "debug";
}

To prevent this, declare options for all attributes that need to depend on others. For the above example this means declaring logLevel to be an option, as shown in the sketch below.
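A sketch of the fixed module, based on the example above:

{ lib, config, ... }:
{
  options.settings = lib.mkOption {
    type = lib.types.submodule {
      freeformType = with lib.types; attrsOf str;

      options.port = lib.mkOption {
        type = lib.types.port;
        default = 8080;
      };

      # Declaring logLevel turns it into a regular option, so it may depend on
      # other declared options without infinite recursion.
      options.logLevel = lib.mkOption {
        type = lib.types.str;
        default = "info";
      };
    };
  };

  config.settings.logLevel = lib.mkIf (config.settings.port == 80) "debug";
}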
Many programs have configuration files where program-specific settingscan be declared. File formats can be separated into two categories:
Nix-representable ones: These can trivially be mapped to a subset ofNix syntax. E.g. JSON is an example, since its values like{"foo":{"bar":10}} can be mapped directly to Nix:{ foo = { bar = 10; }; }. Other examples are INI, YAML and TOML.The following section explains the convention for these settings.
Non-nix-representable ones: These can’t be trivially mapped to asubset of Nix syntax. Most generic programming languages are in thisgroup, e.g. bash, since the statementif true; then echo hi; fidoesn’t have a trivial representation in Nix.
Currently there are no fixed conventions for these, but it is common to have a configFile option for setting the configuration file path directly. The default value of configFile can be an auto-generated file, with convenient options for controlling the contents. For example, an option of type attrsOf str can be used for representing environment variables, which generates a section like export FOO="foo". It can often be useful to also include an extraConfig option of type lines to allow arbitrary text after the autogenerated part of the file, as sketched below.
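A hedged sketch of that convention (all option and file names are illustrative, not an existing module):

{ lib, pkgs, config, ... }:
let
  cfg = config.services.example;
in
{
  options.services.example = {
    configFile = lib.mkOption {
      type = lib.types.path;
      # Auto-generated default; users may point this at a file of their own instead.
      default = pkgs.writeText "example.conf" ''
        # generated by NixOS
        ${cfg.extraConfig}
      '';
    };

    extraConfig = lib.mkOption {
      type = lib.types.lines;
      default = "";
      description = "Arbitrary text appended to the generated configuration file.";
    };
  };
}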
By convention, formats like this are handled with a genericsettingsoption, representing the full program configuration as a Nix value. Thetype of this option should represent the format. The most common formatshave a predefined type and string generator already declared underpkgs.formats:
pkgs.formats.javaProperties {comment ?"Generated with Nix" }A function taking an attribute set with values
commentA string to put at the start of thefile in a comment. It can have multiplelines.
It returns thetype:attrsOf str and a functiongenerate to build a Java.properties file, takingcare of the correct escaping, etc.
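As a small sketch of how this could be used (the file name and keys are illustrative):

{ pkgs, ... }:
let
  format = pkgs.formats.javaProperties { comment = "Managed by NixOS"; };
in
{
  # Keys and values must both be strings, since the format type is attrsOf str.
  environment.etc."example.properties".source = format.generate "example.properties" {
    "db.host" = "localhost";
    "db.port" = "5432";
  };
}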
pkgs.formats.hocon {generator ?<derivation>,validator ?<derivation>,doCheck ? true }A function taking an attribute set with values
generatorA derivation used for converting the JSON outputfrom the nix settings into HOCON. This might beuseful if your HOCON variant is slightly differentfrom the java-based one, or for testing purposes.
validatorA derivation used for verifying that the HOCONoutput is correct and parsable. This might beuseful if your HOCON variant is slightly differentfrom the java-based one, or for testing purposes.
doCheckWhether to enable/disable the validator check.
It returns an attrset with atype,generate function,and alib attset, as specifiedbelow.Some of the lib functions will be best understood if you haveread the reference specification. You can find thisspecification here:
https://github.com/lightbend/config/blob/main/HOCON.md
Inside oflib, you will find these functions
mkIncludeThis is used together with a specially namedattributeincludes, to include other HOCONsources into the document.
The function has a shorthand variant where itis up to the HOCON parser to figure out what typeof include is being used. The include will defaultto being non-required. If you want to be moreexplicit about the details of the include, you canprovide an attrset with following arguments
requiredWhether the parser should fail upon failureto include the document
typeType of the source of the included document.Valid values arefile,url andclasspath.See upstream documentation for the semanticsbehind each value
valueThe URI/path/classpath pointing to the source ofthe document to be included.
Example usage:
let
  format = pkgs.formats.hocon { };

  hocon_file = pkgs.writeText "to_include.hocon" ''
    a = 1;
  '';
in
{
  some.nested.hocon.attrset = {
    _includes = [
      (format.lib.mkInclude hocon_file)
      (format.lib.mkInclude "https://example.com/to_include.hocon")
      (format.lib.mkInclude {
        required = true;
        type = "file";
        value = hocon_file;
      })
    ];
    ...
  };
}

mkAppend
This is used to invoke the += operator. This can be useful if you need to add something to a list that is included from outside of nix. See upstream documentation for the semantics behind the += operation.
Example usage:
let
  format = pkgs.formats.hocon { };

  hocon_file = pkgs.writeText "to_include.hocon" ''
    a = [ 1 ];
    b = [ 2 ];
  '';
in
{
  _includes = [
    (format.lib.mkInclude hocon_file)
  ];

  c = 3;

  a = format.lib.mkAppend 3;
  b = format.lib.mkAppend (format.lib.mkSubstitution "c");
}

mkSubstitution
This is used to make HOCON substitutions. Similarly to mkInclude, this function has a shorthand variant where you just give it the string with the substitution value. The substitution is not optional by default. Alternatively, you can provide an attrset with more options:
optionalWhether the parser should fail uponfailure to fetch the substitution value.
valueThe name of the variable to use forsubstitution.
See upstream documentation for semanticsbehind the substitution functionality.
Example usage:
let
  format = pkgs.formats.hocon { };
in
{
  a = 1;
  b = format.lib.mkSubstitution "a";

  c = format.lib.mkSubstitution "SOME_ENVVAR";
  d = format.lib.mkSubstitution {
    value = "SOME_OPTIONAL_ENVVAR";
    optional = true;
  };
}

Implementation notes:
classpath includes are not implemented in pyhocon,which is used for validating the HOCON output. Thismeans that if you are using classpath includes,you will want to either use an alternative validatoror setdoCheck = false in the format options.
pkgs.formats.libconfig {generator ?<derivation>,validator ?<derivation> }A function taking an attribute set with values
generatorA derivation used for converting the JSON outputfrom the nix settings into libconfig. This might beuseful if your libconfig variant is slightly differentfrom the original one, or for testing purposes.
validatorA derivation used for verifying that the libconfigoutput is correct and parsable. This might beuseful if your libconfig variant is slightly differentfrom the original one, or for testing purposes.
It returns an attrset with atype,generate function,and alib attset, as specifiedbelow.Some of the lib functions will be best understood if you haveread the reference specification. You can find thisspecification here:
https://hyperrealm.github.io/libconfig/libconfig_manual.html#Configuration-Files
Inside oflib, you will find these functions
mkHex,mkOctal,mkFloatUse these to specify numbers in other formats.
Example usage:
let
  format = pkgs.formats.libconfig { };
in
{
  myHexValue = format.lib.mkHex "0x1FC3";
  myOctalValue = format.lib.mkOctal "0027";
  myFloatValue = format.lib.mkFloat "1.2E-3";
}

mkArray, mkList
Use these to differentiate between whether a nix list should be considered as a libconfig array or a libconfig list. See the upstream documentation for the semantics behind these types.
Example usage:
let
  format = pkgs.formats.libconfig { };
in
{
  myList = format.lib.mkList [ "foo" 1 true ];
  myArray = format.lib.mkArray [ 1 2 3 ];
}

Implementation notes:
Since libconfig does not allow setting names to start with an underscore,this is used as a prefix for both special types and include directives.
The difference between 32bit and 64bit values became optional in libconfig1.5, so we assume 64bit values for all numbers.
pkgs.formats.json { }A function taking an empty attribute set (for future extensibility)and returning a set with JSON-specific attributestype andgenerate as specifiedbelow.
pkgs.formats.yaml { }A function taking an empty attribute set (for future extensibility)and returning a set with YAML-specific attributestype andgenerate as specifiedbelow.
pkgs.formats.ini {listsAsDuplicateKeys ? false,listToValue ? null, ... }A function taking an attribute set with values
listsAsDuplicateKeysA boolean for controlling whether list values can be used torepresent duplicate INI keys
listToValueA function for turning a list of values into a single value.
It returns a set with INI-specific attributestype andgenerateas specifiedbelow.The type of the input is anattrset of sections; key-value pairs wherethe key is the section name and the value is the corresponding contentwhich is also anattrset of key-value pairs for the actual key-valuemappings of the INI format.The values of the INI atoms are subject to the above parameters (e.g. listsmay be transformed into multiple key-value pairs depending onlistToValue).
The attributelib.type.atom contains the used INI atom.
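For illustration, a hedged sketch of generating an INI file from such an attrset of sections (the file name, section and keys are made up):

{ pkgs, ... }:
let
  format = pkgs.formats.ini { };
in
{
  environment.etc."example.ini".source = format.generate "example.ini" {
    # Top-level attributes become sections; nested attributes become keys.
    general = {
      log_level = "info";
      workers = 4;
    };
  };
}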
pkgs.formats.iniWithGlobalSection {listsAsDuplicateKeys ? false,listToValue ? null, ... }A function taking an attribute set with values
listsAsDuplicateKeysA boolean for controlling whether list values can be used torepresent duplicate INI keys
listToValueA function for turning a list of values into a single value.
It returns a set with INI-specific attributestype andgenerateas specifiedbelow.The type of the input is anattrset of the structure{ sections = {}; globalSection = {}; } wheresections are severalsections as withpkgs.formats.ini andglobalSection being just a singleattrset of key-value pairs for a single section, the global section whichprecedes the section definitions.
The attributelib.type.atom contains the used INI atom.
pkgs.formats.toml { }A function taking an empty attribute set (for future extensibility)and returning a set with TOML-specific attributestype andgenerate as specifiedbelow.
pkgs.formats.xml { format ? “badgerfish”, withHeader ? true}A function taking an attribute set with valuesand returning a set with XML-specific attributestype andgenerate as specifiedbelow.
formatInput format. Because XML can not be translated one-to-one, we have to use intermediate formats. Possible values:
"badgerfish": Usesbadgerfish conversion.
withHeaderOutputs the xml with header.
pkgs.formats.cdn { }A function taking an empty attribute set (for future extensibility)and returning a set withCDN-specificattributestype andgenerate as specifiedbelow.
pkgs.formats.elixirConf { elixir ? pkgs.elixir }A function taking an attribute set with values
elixirThe Elixir package which will be used to format the generated output
It returns a set with Elixir-Config-specific attributestype,lib, andgenerate as specifiedbelow.
Thelib attribute contains functions to be used in settings, forgenerating special Elixir values:
mkRaw elixirCodeOutputs the given string as raw Elixir code
mkGetEnv { envVariable, fallback ? null }Makes the configuration fetch an environment variable at runtime
mkAtom atom
Outputs the given string as an Elixir atom, instead of the default Elixir binary string. Note: lowercase atoms still need to be prefixed with :
mkTuple arrayOutputs the given array as an Elixir tuple, instead of the defaultElixir list
mkMap attrsetOutputs the given attribute set as an Elixir map, instead of thedefault Elixir keyword list
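A hedged sketch of how these helpers might appear in a settings value (the application and keys are made up, and the exact shape expected by the consuming module may differ):

{ pkgs, ... }:
let
  format = pkgs.formats.elixirConf { };

  # Hypothetical settings for some Elixir application, using the lib helpers
  # described above.
  settings = {
    logger.level = format.lib.mkAtom ":info";
    my_app = {
      http_endpoint = format.lib.mkTuple [ "0.0.0.0" 4000 ];
      secret_key_base = format.lib.mkGetEnv { envVariable = "SECRET_KEY_BASE"; };
    };
  };
in
{
  environment.etc."config.exs".source = format.generate "config.exs" settings;
}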
pkgs.formats.lua { asBindings ? false, multiline ? true, columnWidth ? 100, indentWidth ? 2, indentUsingTabs ? false }A function taking an attribute set with values
asBindings (defaultfalse)Whether to treat attributes as variable bindings
multiline (defaulttrue)Whether to produce a multiline output. The output may still wrap acrossmultiple lines if it would otherwise exceedcolumnWidth.
columnWidth (default100)The column width to use to attempt to wrap lines.
indentWidth (default2)The width of a single indentation level.
indentUsingTabs (defaultfalse)Whether the indentation should use tabs instead of spaces.
pkgs.formats.php { finalVariable }A function taking an attribute set with values
finalVariableThe variable that will store generated expression (usuallyconfig). If set tonull, generated expression will containreturn.
It returns a set with PHP-Config-specific attributestype,lib, andgenerate as specifiedbelow.
Thelib attribute contains functions to be used in settings, forgenerating special PHP values:
mkRaw phpCodeOutputs the given string as raw PHP code
mkMixedArray list setCreates PHP array that contains both indexed and associative values. For example,lib.mkMixedArray [ "hello" "world" ] { "nix" = "is-great"; } returns['hello', 'world', 'nix' => 'is-great']
These functions all return an attribute set with these values:
typeA module system type representing a value of the format
libUtility functions for convenience, or special interactions with the format.This attribute is optional. It may contain inside atypes attributecontaining types specific to this format.
generatefilename jsonValueA function that can render a value of the format to a file. Returnsa file path.
This function puts the value contents in the Nix store. So thisshould be avoided for secrets.
Example: settings option

The following shows a module for an example program that uses a JSON configuration file. It demonstrates how the above values can be used, along with some other related best practices. See the comments for explanations.
{ options, config, lib, pkgs, ... }:
let
  cfg = config.services.foo;
  # Define the settings format used for this program
  settingsFormat = pkgs.formats.json { };
in
{
  options.services.foo = {
    enable = lib.mkEnableOption "foo service";

    settings = lib.mkOption {
      # Setting this type allows for correct merging behavior
      type = settingsFormat.type;
      default = { };
      description = ''
        Configuration for foo, see
        <link xlink:href="https://example.com/docs/foo"/>
        for supported settings.
      '';
    };
  };

  config = lib.mkIf cfg.enable {
    # We can assign some default settings here to make the service work by just
    # enabling it. We use `mkDefault` for values that can be changed without
    # problems
    services.foo.settings = {
      # Fails at runtime without any value set
      log_level = lib.mkDefault "WARN";

      # We assume systemd's `StateDirectory` is used, so we require this value,
      # therefore no mkDefault
      data_path = "/var/lib/foo";

      # Since we use this to create a user we need to know the default value at
      # eval time
      user = lib.mkDefault "foo";
    };

    environment.etc."foo.json".source =
      # The formats generator function takes a filename and the Nix value
      # representing the format value and produces a filepath with that value
      # rendered in the format
      settingsFormat.generate "foo-config.json" cfg.settings;

    # We know that the `user` attribute exists because we set a default value
    # for it above, allowing us to use it without worries here
    users.users.${cfg.settings.user} = { isSystemUser = true; };

    # ...
  };
}

Some settings attributes may deserve some extra care. They may need a different type, default or merging behavior, or they are essential options that should show their documentation in the manual. This can be done using the section called "Freeform modules".
We extend above example using freeform modules to declare an option forthe port, which will enforce it to be a valid integer and make it showup in the manual.
Example: Declaring a type-checked settings attribute

{
  settings = lib.mkOption {
    type = lib.types.submodule {
      freeformType = settingsFormat.type;

      # Declare an option for the port such that the type is checked and this option
      # is shown in the manual.
      options.port = lib.mkOption {
        type = lib.types.port;
        default = 8080;
        description = ''
          Which port this service should listen on.
        '';
      };
    };
    default = { };
    description = ''
      Configuration for Foo, see
      <link xlink:href="https://example.com/docs/foo"/>
      for supported values.
    '';
  };
}

With the command nix-build, you can build specific parts of your NixOS configuration. This is done as follows:
$ cd /path/to/nixpkgs/nixos
$ nix-build -A config.option

where option is a NixOS option with type "derivation" (i.e. something that can be built). Attributes of interest include:
system.build.toplevelThe top-level option that builds the entire NixOS system. Everythingelse in your configuration is indirectly pulled in by this option.This is whatnixos-rebuild builds and what/run/current-systempoints to afterwards.
A shortcut to build this is:
$ nix-build -A system

system.build.manual.manualHTML
The NixOS manual.
system.build.etcA tree of symlinks that form the static parts of/etc.
system.build.initialRamdisk ,system.build.kernelThe initial ramdisk and kernel of the system. This allows a quickway to test whether the kernel and the initial ramdisk bootcorrectly, by using QEMU’s-kernel and-initrd options:
$ nix-build -A config.system.build.initialRamdisk -o initrd
$ nix-build -A config.system.build.kernel -o kernel
$ qemu-system-x86_64 -kernel ./kernel/bzImage -initrd ./initrd/initrd -hda /dev/null

system.build.nixos-rebuild, system.build.nixos-install, system.build.nixos-generate-config
These build the corresponding NixOS commands.
systemd.units.unit-name.unitThis builds the unit with the specified name. Note that since unitnames contain dots (e.g.httpd.service), you need to put thembetween quotes, like this:
$ nix-build -A 'config.systemd.units."httpd.service".unit'

You can also test individual units, without rebuilding the whole system, by putting them in /run/systemd/system:
$ cp $(nix-build -A 'config.systemd.units."httpd.service".unit')/httpd.service \
    /run/systemd/system/tmp-httpd.service
# systemctl daemon-reload
# systemctl start tmp-httpd.service

Note that the unit must not have the same name as any unit in /etc/systemd/system since those take precedence over /run/systemd/system. That's why the unit is installed as tmp-httpd.service here.
Table of Contents
Bootspec is a feature introduced inRFC-0125 in order to standardize bootloader support and advanced boot workflows such as SecureBoot and potentially more.The reference implementation can be foundhere.
The creation of bootspec documents is enabled by default.
The bootspec schema is versioned and validated against a CUE schema file, which should be considered the source of truth for your applications.
You will find the current versionhere.
Bootspec cannot account for all use cases.

For this purpose, Bootspec offers a generic extension facility, boot.bootspec.extensions, which can be used to inject any data needed for your use cases.
An example for SecureBoot is to get the Nix store path to/etc/os-release in order to bake it into a unified kernel image:
{ config, lib, ... }:
{
  boot.bootspec.extensions = {
    "org.secureboot.osRelease" = config.environment.etc."os-release".source;
  };
}

To reduce incompatibility and prevent names from clashing between applications, it is highly recommended to use a unique namespace for your extensions.
It is possible to enable your own bootloader throughboot.loader.external.installHook which can wrap an existing bootloader.
Currently, there is no good story to compose existing bootloaders to enrich their features, e.g. SecureBoot, etc.It will be necessary to reimplement or reuse existing parts.
Runningnixos-rebuild switch is one of the more common tasks under NixOS.This chapter explains some of the internals of this command to make it simplerfor new module developers to configure their units correctly and to make iteasier to understand what is happening and why for curious administrators.
nixos-rebuild, like many deployment solutions, callsswitch-to-configurationwhich resides in a NixOS system at$out/bin/switch-to-configuration. Thescript is called with the action that is to be performed likeswitch,test,boot. There is also thedry-activate action which does not really performthe actions but rather prints what it would do if you called it withtest.This feature can be used to check what service states would be changed if theconfiguration was switched to.
If the action isswitch orboot, the bootloader is updated first so theconfiguration will be the next one to boot. UnlessNIXOS_NO_SYNC is set to1,/nix/store is synced to disk.
If the action is switch or test, the currently running system is inspected and the actions to switch to the new system are calculated. This process takes two data sources into account: /etc/fstab and the current systemd status. Mounts and swaps are read from /etc/fstab and the corresponding actions are generated. If the options of a mount are modified, for example, the proper .mount unit is reloaded (or restarted if anything else changed and it's neither the root mount nor the Nix store). The current systemd state is inspected, the difference between the current system and the desired configuration is calculated and actions are generated to get to this state. There are a lot of nuances that can be controlled by the units, which are explained here.
After calculating what should be done, the actions are carried out. The orderof actions is always the same:
Stop units (systemctl stop)
Run activation script ($out/activate)
See if the activation script requested more units to restart
Restart systemd if needed (systemctl daemon-reexec)
Forget about the failed state of units (systemctl reset-failed)
Reload systemd (systemctl daemon-reload)
Reload systemd user instances (systemctl --user daemon-reload)
Reactivate sysinit (systemctl restart sysinit-reactivation.target)
Reload units (systemctl reload)
Restart units (systemctl restart)
Start units (systemctl start)
Inspect what changed during these actions and print units that failed andthat were newly started
By default, some units are filtered from the outputs to make it less spammy.This can be disabled for development or testing by setting the environment variableSTC_DISPLAY_ALL_UNITS=1
Most of these actions are either self-explaining but some of them have to dowith our units or the activation script. For this reason, these topics areexplained in the next sections.
To figure out what units need to be started/stopped/restarted/reloaded, thescript first checks the current state of the system, similar to whatsystemctl list-units shows. For each of the units, the script goes through the followingchecks:
Is the unit file still in the new system? If not,stop the service unlessit setsX-StopOnRemoval in the[Unit] section tofalse.
Is it a.target unit? If so,start it unless it setsRefuseManualStart in the[Unit] section totrue orX-OnlyManualStartin the[Unit] section totrue. Alsostop the unit again unless itsetsX-StopOnReconfiguration tofalse.
Are the contents of the unit files different? They are compared by parsingthem and comparing their contents. If they are different but onlyX-Reload-Triggers in the[Unit] section is changed,reload the unit.The NixOS module system allows setting these triggers with the optionsystemd.services.<name>.reloadTriggers. There aresome additional keys in the[Unit] section that are ignored as well. If theunit files differ in any way, the following actions are performed:
.path and.slice units are ignored. There is no need to restart themsince changes in their values are applied by systemd when systemd isreloaded.
.mount units arereloaded if only theirOptions changed. If anythingelse changed (likeWhat), they arerestarted unless they are the mountunit for/ or/nix in which case they are reloaded to prevent the systemfrom crashing. Note that this is the case for.mount units and not formounts from/etc/fstab. These are explained inWhat happens during a system switch?.
.socket units are currently ignored. This is to be fixed at a laterpoint.
The rest of the units (mostly.service units) are thenreloaded ifX-ReloadIfChanged in the[Service] section is set totrue (exposedviasystemd.services.<name>.reloadIfChanged).A little exception is done for units that were deactivated in the meantime,for example because they require a unit that got stopped before. Thesearestarted instead of reloaded.
If the reload flag is not set, some more flags decide if the unit isskipped. These flags areX-RestartIfChanged in the[Service] section(exposed viasystemd.services.<name>.restartIfChanged),RefuseManualStop in the[Unit] section, andX-OnlyManualStart in the[Unit] section.
Further behavior depends on the unit havingX-StopIfChanged in the[Service] section set totrue (exposed viasystemd.services.<name>.stopIfChanged). This isset totrue by default and must be explicitly turned off if not wanted.If the flag is enabled, the unit isstopped and thenstarted. Ifnot, the unit isrestarted. The goal of the flag is to make sure thatthe new unit never runs in the old environment which is still in placebefore the activation script is run. This behavior is different when theservice is socket-activated, as outlined in the following steps.
The last thing that is taken into account is whether the unit is aservice and socket-activated. A correspondence between a.service and its.socket unit is detected automatically, butservices canopt out of that detection by settingX-NotSocketActivated toyes in their[Service]section. Otherwise, ifX-StopIfChanged isnot set, theservice isrestarted with the others. If it is set, both theservice and the socket arestopped and the socket isstarted, leaving socket activation to start the service whenit’s needed.
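To illustrate the flags discussed above from the module side, a hedged sketch of a service that opts into restart-on-change and reload triggers (the service name is illustrative, and the referenced environment.etc entry is assumed to be defined elsewhere):

{ config, ... }:
{
  systemd.services.example = {
    # Restart the unit in one step rather than stop/start when its file changes.
    stopIfChanged = false;
    # Only reload (instead of restarting) when these store paths change.
    reloadTriggers = [ config.environment.etc."example.conf".source ];
    # Leave this at false to keep the unit untouched on configuration changes.
    restartIfChanged = true;
  };
}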
sysinit.targetis a systemd target that encodes system initialization (i.e. early startup). Afew units that need to run very early in the bootup process are ordered tofinish before this target is reached. Probably the most notable one of these issystemd-tmpfiles-setup.service. We will refer to these units as “sysinitunits”.
“Normal” systemd units, by default, are ordered AFTERsysinit.target. Inother words, these “normal” units expect all services ordered beforesysinit.target to have finished without explicitly declaring this dependencyrelationship for each dependency. See thesystemdbootupfor more details on the bootup process.
When restarting both a unit ordered beforesysinit.target as well as oneafter, this presents a problem because they would be started at the same timeas they do not explicitly declare their dependency relations.
To solve this, NixOS has an artificialsysinit-reactivation.target whichallows you to ensure that services ordered beforesysinit.target arerestarted correctly. This applies both to the ordering between these sysinitservices as well as ensuring that sysinit units are restarted before “normal”units.
To make an existing sysinit service restart correctly during system switch, youhave to declare:
{
  systemd.services.my-sysinit = {
    requiredBy = [ "sysinit-reactivation.target" ];
    before = [ "sysinit-reactivation.target" ];
    restartTriggers = [ config.environment.etc."my-sysinit.d".source ];
  };
}

You need to configure appropriate restartTriggers specific to your service.
The activation script is a bash script called to activate the new configuration, which resides in a NixOS system at $out/activate. Since its contents depend on your system configuration, the contents may differ. This chapter explains how the script works in general and some common NixOS snippets. Please be aware that the script is executed on every boot and system switch, so tasks that can be performed in other places should be performed there (for example, letting a directory of a service be created by systemd using mechanisms like StateDirectory, CacheDirectory, …, or if that’s not possible, using preStart of the service).
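For example, rather than creating a service’s state directory in an activation snippet, you can usually let systemd do it. A minimal sketch, where the service name is a placeholder:

{
  systemd.services.my-service = {
    serviceConfig = {
      # systemd creates /var/lib/my-service and /var/cache/my-service
      # before the service starts, owned by the service's user.
      StateDirectory = "my-service";
      CacheDirectory = "my-service";
    };
  };
}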
Activation scripts are defined as snippets using system.activationScripts. They can either be a simple multiline string or an attribute set that can depend on other snippets. The builder for the activation script will take these dependencies into account and order the snippets accordingly. As a simple example:
{
  system.activationScripts.my-activation-script = {
    deps = [ "etc" ];
    # supportsDryActivation = true;
    text = ''
      echo "Hallo i bims"
    '';
  };
}
This example creates an activation script snippet that is run after the etc snippet. The special variable supportsDryActivation can be set so the snippet is also run when nixos-rebuild dry-activate is run. To differentiate between real and dry activation, the $NIXOS_ACTION environment variable can be read, which is set to dry-activate when a dry activation is done.
An activation script can write to special files instructing switch-to-configuration to restart/reload units. The script will take these requests into account and will incorporate the unit configuration as described above. This means that the activation script will “fake” a modified unit file and switch-to-configuration will act accordingly. By doing so, configuration like systemd.services.<name>.restartIfChanged is respected. Since the activation script is run after services are already stopped, systemd.services.<name>.stopIfChanged cannot be taken into account anymore and the unit is always restarted instead of being stopped and started afterwards.
The files that can be written to are /run/nixos/activation-restart-list and /run/nixos/activation-reload-list, with their respective counterparts for dry activation being /run/nixos/dry-activation-restart-list and /run/nixos/dry-activation-reload-list. Those files can contain newline-separated lists of unit names, where duplicates are ignored. These files are not created automatically; activation scripts must take into account the possibility that they have to create them first.
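A minimal sketch of a snippet that asks switch-to-configuration to restart a unit; the snippet name and unit name are placeholders:

{
  system.activationScripts.request-foo-restart = {
    supportsDryActivation = true;
    text = ''
      # Pick the dry or real restart list depending on $NIXOS_ACTION,
      # creating the file if it does not exist yet.
      if [ "''${NIXOS_ACTION:-}" = "dry-activate" ]; then
        list=/run/nixos/dry-activation-restart-list
      else
        list=/run/nixos/activation-restart-list
      fi
      mkdir -p /run/nixos
      echo "foo.service" >> "$list"
    '';
  };
}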
There are some snippets NixOS enables by default because disabling them would most likely break your system. This section lists a few of them and what they do:
binsh creates /bin/sh which points to the runtime shell
etc sets up the contents of /etc; this includes systemd units and excludes /etc/passwd, /etc/group, and /etc/shadow (which are managed by the users snippet)
hostname sets the system’s hostname in the kernel (not in /etc)
modprobe sets the path to the modprobe binary for module auto-loading
nix prepares the nix store and adds a default initial channel
specialfs is responsible for mounting filesystems like /proc and /sys
users creates and removes users and groups by managing /etc/passwd, /etc/group and /etc/shadow. This also creates home directories
usrbinenv creates /usr/bin/env
var creates some directories in /var that are not service-specific
wrappers creates setuid wrappers like sudo
In certain systems, most notably image-based appliances, updates are handled outside the system. This means that you do not need to rebuild your configuration on the system itself anymore.
If you want to build such a system, you can use the image-based-appliance profile:
{ modulesPath, ... }:
{
  imports = [ "${modulesPath}/profiles/image-based-appliance.nix" ];
}
The most notable deviation of this profile from a standard NixOS configuration is that after building it, you cannot switch to the configuration anymore. The profile sets config.system.switch.enable = false;, which excludes switch-to-configuration, the central script called by nixos-rebuild, from your system. Removing this script makes the image lighter and slightly more secure.
/etc via overlay filesystem
This is experimental and requires a kernel version >= 6.6 because it uses new overlay features and relies on the new mount API.
Instead of using a custom perl script to activate /etc, you activate it via an overlay filesystem:
{ system.etc.overlay.enable = true; }
Using an overlay has two benefits:
it removes a dependency on perl
it makes activation faster (up to a few seconds)
By default, the /etc overlay is mounted writable (i.e. there is a writable upper layer). However, you can also mount /etc immutably (i.e. read-only) by setting:
{ system.etc.overlay.mutable = false; }
The overlay is atomically replaced during system switch. However, files that have been modified will NOT be overwritten. This is the biggest change compared to the perl-based system.
If you manually make changes to /etc on your system and then switch to a new configuration where system.etc.overlay.mutable = false;, you will not be able to see the previously made changes in /etc anymore. However, the changes are not completely gone; they are still in the upperdir of the previous overlay in /.rw-etc/upper.
Table of Contents
As NixOS grows, so too does the need for a catalogue and explanation ofits extensive functionality. Collecting pertinent information fromdisparate sources and presenting it in an accessible style would be aworthy contribution to the project.
The sources of the NixOS Manual are in the nixos/doc/manual subdirectory of the Nixpkgs repository.
You can quickly validate your edits with devmode:
$ cd /path/to/nixpkgs/nixos/doc/manual
$ nix-shell
[nix-shell:~]$ devmode
Once you are done making modifications to the manual, it’s important to build it before committing. You can do that as follows:
nix-build nixos/release.nix -A manual.x86_64-linux
When this command successfully finishes, it will tell you where the manual got generated. The HTML will be accessible through the result symlink at ./result/share/doc/nixos/index.html.
Table of Contents
When you add some feature to NixOS, you should write a test for it. NixOS tests are kept in the directory nixos/tests, and are executed (using Nix) by a testing framework that automatically starts one or more virtual machines containing the NixOS system(s) required for the test.
A NixOS test is a module that has the following structure:
{
  # One or more machines:
  nodes = {
    machine =
      { config, pkgs, ... }:
      {
        # ...
      };
    machine2 =
      { config, pkgs, ... }:
      {
        # ...
      };
    # …
  };

  testScript = ''
    Python code…
  '';
}
We refer to the whole test above as a test module, whereas the values in nodes.<name> are NixOS modules themselves.
The option testScript is a piece of Python code that executes the test (described below). During the test, it will start one or more virtual machines, the configuration of which is described by the option nodes.
An example of a single-node test is login.nix. It only needs a single machine to test whether users can log in on the virtual console, whether device ownership is correctly maintained when switching between consoles, and so on. An interesting multi-node test is nfs/simple.nix. It uses two client nodes to test correct locking across server crashes.
Tests are invoked differently depending on whether the test is part of NixOS or lives in a different project.
Tests that are part of NixOS are added to nixos/tests/all-tests.nix.
{ hostname = runTest ./hostname.nix; }
Overrides can be added by defining an anonymous module in all-tests.nix.
{
  hostname = runTest {
    imports = [ ./hostname.nix ];
    defaults.networking.firewall.enable = false;
  };
}
You can run a test with attribute name hostname in nixos/tests/all-tests.nix by invoking:
cd /my/git/clone/of/nixpkgs
nix-build -A nixosTests.hostname
Outside the nixpkgs repository, you can use the runNixOSTest function from pkgs.testers:
let
  pkgs = import <nixpkgs> { };
in
pkgs.testers.runNixOSTest {
  imports = [ ./test.nix ];
  defaults.services.foo.package = mypkg;
}
runNixOSTest returns a derivation that runs the test.
There are a few special NixOS options for test VMs:
virtualisation.memorySize
The memory of the VM in megabytes.
virtualisation.vlans
The virtual networks to which the VM is connected. See nat.nix for an example.
virtualisation.writableStore
By default, the Nix store in the VM is not writable. If you enable this option, a writable union file system is mounted on top of the Nix store to make it appear writable. This is necessary for tests that run Nix operations that modify the store.
For more options, see the module qemu-vm.nix.
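For illustration, a node might tune a few of these options like so (the values are arbitrary examples):

{
  nodes.machine = {
    virtualisation.memorySize = 2048; # megabytes, see above
    virtualisation.vlans = [ 1 2 ];
    virtualisation.writableStore = true;
  };
}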
The test script is a sequence of Python statements that perform various actions, such as starting VMs, executing commands in the VMs, and so on. Each virtual machine is represented as an object stored in the variable name if this is also the identifier of the machine in the declarative config. If you specified a node nodes.machine, the following example starts the machine, waits until it has finished booting, then executes a command and checks that the output is more-or-less correct:
machine.start()
machine.wait_for_unit("default.target")
t.assertIn("Linux", machine.succeed("uname"), "Wrong OS")
The first line is technically unnecessary; machines are implicitly started when you first execute an action on them (such as wait_for_unit or succeed). If you have multiple machines, you can speed up the test by starting them in parallel:
start_all()
Under the variable t, all assertions from unittest.TestCase are available.
If the hostname of a node contains characters that can’t be used in a Python variable name, those characters will be replaced with underscores in the variable name, so nodes.machine-a will be exposed to Python as machine_a.
The following methods are available on machine objects:
Simulate unplugging the Ethernet cable that connects the machine to the other machines. This happens by shutting down eth1 (the multicast interface used to talk to the other VMs). eth0 is kept online to still enable the test driver to communicate with the machine.
Allows you to directly interact with QEMU’s stdin, by forwarding terminal input to the QEMU process. This is for use with the interactive test driver, not for production tests, which run unattended. Output from QEMU is only read line-wise. Ctrl-c kills QEMU and Ctrl-d closes the console and returns to the test runner.
Copies a file from host to machine, e.g., copy_from_host("myfile", "/etc/my/important/file").
The first argument is the file on the host. Note that the “host” refers to the environment in which the test driver runs, which is typically the Nix build sandbox.
The second argument is the location of the file on the machine that will be written to.
The file is copied via the shared_dir directory which is shared among all the VMs (using a temporary directory). The access rights bits will mimic the ones from the host file and user:group will be root:root.
Copy a file from the host into the guest by piping it over the shell into the destination file. Works without a host-guest shared folder. Prefer copy_from_host whenever possible.
Copy a file from the VM (specified by an in-VM source path) to a path relative to $out. The file is copied via the shared_dir shared among all the VMs (using a temporary directory).
Simulate a sudden power failure, by telling the VM to exit immediately.
Debugging: Dump the contents of the TTY <n>
Execute a shell command, returning a list (status, stdout).
Commands are run with set -euo pipefail set:
If several commands are separated by ; and one fails, the command as a whole will fail.
For pipelines, the last non-zero exit status will be returned(if there is one; otherwise zero will be returned).
Dereferencing unset variables fails the command.
It will wait for stdout to be closed.
If the command detaches, it must close stdout, as execute will wait for this to consume all output reliably. This can be achieved by redirecting stdout to stderr >&2, to /dev/console, /dev/null or a file. Examples of detaching commands are sleep 365d &, where the shell forks a new process that can write to stdout, and xclip -i, where the xclip command itself forks without closing stdout.
Takes an optional parameter check_return that defaults to True. Setting this parameter to False will not check for the return code and return -1 instead. This can be used for commands that shut down the VM and would therefore break the pipe that would be used for retrieving the return code.
A timeout for the command can be specified (in seconds) using the optional timeout parameter, e.g., execute(cmd, timeout=10) or execute(cmd, timeout=None). The default is 900 seconds.
Like succeed, but raising an exception if the command returns a zero status.
Forward a TCP port on the host to a TCP port on the guest.Useful during interactive testing.
Return a textual representation of what is currently visible on the machine’s screen using optical character recognition.
This requires enableOCR to be set to true.
Return a list of different interpretations of what is currently visible on the machine’s screen using optical character recognition. The number and order of the interpretations is not specified and is subject to change, but if no exception is raised at least one will be returned.
This requires enableOCR to be set to true.
Press Ctrl+Alt+Delete in the guest.
Prepares the machine to be reconnected, which is useful if the machine was started with allow_reboot = True
Take a picture of the display of the virtual machine, in PNG format.The screenshot will be available in the derivation output.
Simulate typing a sequence of characters on the virtual keyboard, e.g., send_chars("foobar\n") will type the string foobar followed by the Enter key.
Send keys to the kernel console. This allows interaction with the systemd emergency mode, for example. Takes a string that is sent, e.g., send_console("\n\nsystemctl default\n").
Simulate pressing keys on the virtual keyboard, e.g., send_key("ctrl-alt-delete").
Please also refer to the QEMU documentation for more information on the input syntax: https://en.wikibooks.org/wiki/QEMU/Monitor#sendkey_keys
Send a command to the QEMU monitor. This allows attaching virtual USB disks to a running machine, among other things.
Allows you to directly interact with the guest shell. This should only be used during test development, not in production tests. Killing the interactive session with Ctrl-d or Ctrl-c also ends the guest session.
Shut down the machine, waiting for the VM to exit.
Start the virtual machine. This method is asynchronous — it doesnot wait for the machine to finish booting.
Execute a shell command, raising an exception if the exit status is not zero, otherwise returning the standard output. Similar to execute, except that the timeout is None by default. See execute for details on command execution.
Transition from stage 1 to stage 2. This requires the machine to be configured with testing.initrdBackdoor = true and boot.initrd.systemd.enable = true.
Runs systemctl commands with optional support for systemctl --user
# run `systemctl list-jobs --no-pager`
machine.systemctl("list-jobs --no-pager")

# spawn a shell for `any-user` and run
# `systemctl --user list-jobs --no-pager`
machine.systemctl("list-jobs --no-pager", "any-user")
Undo the effect of block.
Wait until nobody is listening on the given TCP port and IP address (default localhost).
Wait until the supplied regular expressions match a line of the serial console output. This method is useful when OCR is not possible or inaccurate.
Waits until the file exists in the machine’s file system.
Wait until a process is listening on the given TCP port and IP address (default localhost).
Wait until a process is listening on the given UNIX-domain socket (defaults to a UNIX-domain stream socket).
Wait for a QMP event which you can filter with the event_filter function. The function takes a dictionary of the event as input; if it returns True, that event is returned, otherwise we wait for the next event and retry.
It will skip all events received in the meantime; if you want to keep them, you have to do the bookkeeping yourself and store them somewhere.
By default, it will wait up to 10 minutes; timeout is in seconds.
Wait until the supplied regular expression matches the textual contents of the screen by using optical character recognition (see get_screen_text and get_screen_text_variants).
This requires enableOCR to be set to true.
Wait for a systemd unit to get into “active” state. Throws exceptions on “failed” and “inactive” states as well as after timing out.
Wait until an X11 window has appeared whose name matches the given regular expression, e.g., wait_for_window("Terminal").
Wait until it is possible to connect to the X server.
Like wait_until_succeeds, but repeating the command until it fails.
Repeat a shell command with 1-second intervals until it succeeds. Has a default timeout of 900 seconds which can be modified, e.g. wait_until_succeeds(cmd, timeout=10). See execute for details on command execution. Throws an exception on timeout.
Wait until the visible output on the chosen TTY matches a regular expression. Throws an exception on timeout.
To test user units declared by systemd.user.services the optional user argument can be used:
machine.start()
machine.wait_for_x()
machine.wait_for_unit("xautolock.service", "x-session-user")
This applies to systemctl, get_unit_info, wait_for_unit, start_job and stop_job.
For faster dev cycles it’s also possible to disable the code-linters (this shouldn’t be committed though):
{
  skipLint = true;

  nodes.machine =
    { config, pkgs, ... }:
    {
      # configuration…
    };

  testScript = ''
    Python code…
  '';
}
This will produce a Nix warning at evaluation time. To fully disable the linter, wrap the test script in comment directives to disable the Black linter directly (again, don’t commit this within the Nixpkgs repository):
{
  testScript = ''
    # fmt: off
    Python code…
    # fmt: on
  '';
}
Similarly, the type checking of test scripts can be disabled in the following way:
{
  skipTypeCheck = true;

  nodes.machine =
    { config, pkgs, ... }:
    {
      # configuration…
    };
}
To fail tests early when certain invariants are no longer met (instead of waiting for the build to time out), the decorator polling_condition is provided. For example, if we are testing a program foo that should not quit after being started, we might write the following:
@polling_condition
def foo_running():
    machine.succeed("pgrep -x foo")

machine.succeed("foo --start")
machine.wait_until_succeeds("pgrep -x foo")

with foo_running:
    ...  # Put `foo` through its paces
polling_condition takes the following (optional) arguments:
seconds_interval
specifies how often the condition should be polled:
@polling_condition(seconds_interval=10)
def foo_running():
    machine.succeed("pgrep -x foo")
description
is used in the log when the condition is checked. If this is not provided, the description is pulled from the docstring of the function. These two are therefore equivalent:
@polling_condition
def foo_running():
    "check that foo is running"
    machine.succeed("pgrep -x foo")

@polling_condition(description="check that foo is running")
def foo_running():
    machine.succeed("pgrep -x foo")
When additional Python libraries are required in the test script, they can be added using the parameter extraPythonPackages. For example, you could add numpy like this:
{
  extraPythonPackages = p: [ p.numpy ];

  nodes = { };

  # Type checking on extra packages doesn't work yet
  skipTypeCheck = true;

  testScript = ''
    import numpy as np
    assert str(np.zeros(4)) == "[0. 0. 0. 0.]"
  '';
}
In that case, numpy is chosen from the generic python3Packages.
The following options can be used when writing tests.
enableOCR
Whether to enable Optical Character Recognition functionality for testing graphical programs. See Machine objects.
Type: boolean
Default: false
Declared by: nixos/lib/testing/driver.nix
defaults
NixOS configuration that is applied to all nodes.
Type: module
Default: { }
Declared by: nixos/lib/testing/nodes.nix
driver
Package containing a script that runs the test.
Type: package
Default: set by the test framework
Declared by: nixos/lib/testing/driver.nix
extraBaseModules
NixOS configuration that, like defaults, is applied to all nodes and can not be undone with specialisation.<name>.inheritParentConfig.
Type: module
Default: { }
Declared by: nixos/lib/testing/nodes.nix
extraDriverArgs
Extra arguments to pass to the test driver.
They become part of driver via wrapProgram.
Type: list of string
Default: [ ]
Declared by: nixos/lib/testing/driver.nix
extraPythonPackages
Python packages to add to the test driver.
The argument is a Python package set, similar to pkgs.pythonPackages.
Type: function that evaluates to a(n) list of package
Default: <function>
Example: p: [ p.numpy ]
Declared by: nixos/lib/testing/driver.nix
globalTimeout
A global timeout for the complete test, expressed in seconds. Beyond that timeout, every resource will be killed and released and the test will fail.
By default, we use a 1 hour timeout.
Type: signed integer
Default: 3600
Example: 600
Declared by: nixos/lib/testing/driver.nix
hostPkgs
Nixpkgs attrset used outside the nodes.
Type: raw value
Example: import nixpkgs { inherit system config overlays; }
Declared by: nixos/lib/testing/driver.nix
interactive
Tests can be run interactively using the program in the test derivation’s .driverInteractive attribute.
When they are, the configuration will include anything set in this submodule.
You can set any top-level test option here.
Example test module:
{ config, lib, ... }:
{
  nodes.rabbitmq = {
    services.rabbitmq.enable = true;
  };

  # When running interactively ...
  interactive.nodes.rabbitmq = {
    # ... enable the web ui.
    services.rabbitmq.managementPlugin.enable = true;
  };
}
For details, see the section about running tests interactively.
Type: submodule
Declared by: nixos/lib/testing/interactive.nix
meta
The meta attributes that will be set on the returned derivations.
Not all meta attributes are supported, but more can be added as desired.
Type: submodule
Default: { }
Declared by: nixos/lib/testing/meta.nix
meta.broken
Sets the meta.broken attribute on the test derivation.
Type: boolean
Default: false
Declared by: nixos/lib/testing/meta.nix
meta.hydraPlatforms
Sets the meta.hydraPlatforms attribute on the test derivation.
Type: list of raw value
Default: lib.platforms.linux only, as the hydra.nixos.org build farm does not currently support virtualisation on Darwin.
Declared by: nixos/lib/testing/meta.nix
meta.maintainers
The list of maintainers for this test.
Type: list of raw value
Default: [ ]
Declared by: nixos/lib/testing/meta.nix
meta.platforms
Sets the meta.platforms attribute on the test derivation.
Type: list of raw value
Default: [ "aarch64-linux" "armv5tel-linux" "armv6l-linux" "armv7a-linux" "armv7l-linux" "i686-linux" "loongarch64-linux" "m68k-linux" "microblaze-linux" "microblazeel-linux" "mips-linux" "mips64-linux" "mips64el-linux" "mipsel-linux" "powerpc64-linux" "powerpc64le-linux" "riscv32-linux" "riscv64-linux" "s390-linux" "s390x-linux" "x86_64-linux" "x86_64-darwin" "aarch64-darwin" ]
Declared by: nixos/lib/testing/meta.nix
meta.timeout
The test’s meta.timeout in seconds.
Type: null or signed integer
Default: 3600
Declared by: nixos/lib/testing/meta.nix
name
The name of the test.
This is used in the derivation names of the driver and test runner.
Type: string
Declared by: nixos/lib/testing/name.nix
node.pkgs
The Nixpkgs to use for the nodes.
Setting this will make the nixpkgs.* options read-only, to avoid mistakenly testing with a Nixpkgs configuration that diverges from regular use.
Type: null or Nixpkgs package set
Default: null, so construct pkgs according to the nixpkgs.* options as usual.
Declared by: nixos/lib/testing/nodes.nix
node.pkgsReadOnly
Whether to make the nixpkgs.* options read-only. This is only relevant when node.pkgs is set.
Set this to false when any of the nodes needs to configure any of the nixpkgs.* options. This will slow down evaluation of your test a bit.
Type: boolean
Default: node.pkgs != null
Declared by: nixos/lib/testing/nodes.nix
node.specialArgs
An attribute set of arbitrary values that will be made available as module arguments during the resolution of module imports.
Note that it is not possible to override these from within the NixOS configurations. If your argument is not relevant to imports, consider setting defaults._module.args.<name> instead.
Type: lazy attribute set of raw value
Default: { }
Declared by: nixos/lib/testing/nodes.nix
nodes
An attribute set of NixOS configuration modules.
The configurations are augmented by the defaults option.
They are assigned network addresses according to the nixos/lib/testing/network.nix module.
A few special options are available that aren’t in a plain NixOS configuration. See Configuring the nodes.
Type: lazy attribute set of module
Declared by: nixos/lib/testing/nodes.nix
passthru
Attributes to add to the returned derivations, which are not necessarily part of the build.
This is a bit like doing drv // { myAttr = true; } (which would be lost by overrideAttrs). It does not change the actual derivation, but adds the attribute nonetheless, so that consumers of what would be drv have more information.
Type: lazy attribute set of raw value
Declared by: nixos/lib/testing/run.nix
qemu.package
Which qemu package to use for the virtualisation of nodes.
Type: package
Default: "hostPkgs.qemu_test"
Declared by: nixos/lib/testing/driver.nix
skipLint
Do not run the linters. This may speed up your iteration cycle, but it is not something you should commit.
Type: boolean
Default: false
Declared by: nixos/lib/testing/driver.nix
skipTypeCheck
Disable type checking. This must not be enabled for new NixOS tests.
This may speed up your iteration cycle, unless you’re working on the testScript.
Type: boolean
Default: false
Declared by: nixos/lib/testing/driver.nix
sshBackdoor.enable
Whether to turn on the VSOCK-based access to all VMs. This provides unauthenticated access intended for debugging.
Type: boolean
Default: false
Declared by: nixos/lib/testing/nodes.nix
sshBackdoor.vsockOffset
This field is only relevant when multiple users run the (interactive) driver outside the sandbox and with the SSH backdoor activated. The typical symptom of this problem is an error message like this: vhost-vsock: unable to set guest cid: Address already in use
This option allows assigning an offset to each vsock number to resolve this.
This is a 32bit number. The lowest possible vsock number is 3 (i.e. with the lowest node number being 1, this is 2+1).
Type: integer between 2 and 4294967296 (both inclusive)
Default: 2
Declared by: nixos/lib/testing/nodes.nix
test
Derivation that runs the test as its “build” process.
This implies that NixOS tests run isolated from the network, making them more dependable.
Type: package
Declared by: nixos/lib/testing/run.nix
testScript
A series of Python declarations and statements that you write to perform the test.
Type: string or function that evaluates to a(n) string
Declared by: nixos/lib/testing/testScript.nix
You can run tests using nix-build. For example, to run the test login.nix, you do:
$ cd /my/git/clone/of/nixpkgs
$ nix-build -A nixosTests.login
After building/downloading all required dependencies, this will perform a build that starts a QEMU/KVM virtual machine containing a NixOS system. The virtual machine mounts the Nix store of the host; this makes VM creation very fast, as no disk image needs to be created. Afterwards, you can view a log of the test:
$ nix-store --read-log result
NixOS tests require virtualization support. This means that the machine must have kvm in its system features list, or apple-virt in case of macOS. These features are autodetected locally, but apple-virt is only autodetected since Nix 2.19.0.
Features of remote builders must additionally be configured manually on the client, e.g. on NixOS with nix.buildMachines.*.supportedFeatures or through general Nix configuration.
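A hedged sketch of such a client-side entry on NixOS; the hostname, user, and key path are placeholders:

{
  nix.distributedBuilds = true;
  nix.buildMachines = [
    {
      hostName = "builder.example.com"; # placeholder
      system = "x86_64-linux";
      sshUser = "builder";
      sshKey = "/etc/nix/builder_key"; # placeholder
      supportedFeatures = [ "kvm" ]; # advertise the feature NixOS tests require
    }
  ];
}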
If you run the tests on a macOS machine, you also need a “remote” builder for Linux; possibly a VM. nix-darwin users may enable nix.linux-builder.enable to launch such a VM.
The test itself can be run interactively. This is particularly usefulwhen developing or debugging a test:
$ nix-build . -A nixosTests.login.driverInteractive
$ ./result/bin/nixos-test-driver
[...]
>>>
By executing the test driver in this way, the VMs executed may gain network & Internet access via their backdoor control interface, typically recognized as eth0.
You can then take any Python statement, e.g.
>>> start_all()
>>> test_script()
>>> machine.succeed("touch /tmp/foo")
>>> print(machine.succeed("pwd"))  # Show stdout of command
The function test_script executes the entire test script and drops you back into the test driver command line upon its completion. This allows you to inspect the state of the VMs after the test (e.g. to debug the test script).
The function <yourmachine>.shell_interact() grants access to a shell running inside a virtual machine. To use it, replace <yourmachine> with the name of a virtual machine defined in the test, for example: machine.shell_interact(). Keep in mind that this shell may not display everything correctly as it is running within an interactive Python REPL, and logging output from the virtual machine may overwrite input and output from the guest shell:
>>> machine.shell_interact()
machine: Terminal is ready (there is no initial prompt):
$ hostname
machine
As an alternative, you can proxy the guest shell to a local TCP server by first starting a TCP server in a terminal using the command:
$ socat 'READLINE,PROMPT=$ ' tcp-listen:4444,reuseaddr
In the terminal where the test driver is running, connect to this server by using:
>>> machine.shell_interact("tcp:127.0.0.1:4444")
Once the connection is established, you can enter commands in the terminal where socat is running.
An SSH-based backdoor to log into machines can be enabled with
{
  name = "…";
  nodes.machines = {
    # …
  };
  interactive.sshBackdoor.enable = true;
}
Make sure to only enable the backdoor for interactive tests (i.e. by using interactive.sshBackdoor.enable)! This is the only supported configuration.
Running a test in a sandbox with this will fail because /dev/vhost-vsock isn’t available in the sandbox.
This creates a vsock socket for each VM to log in with SSH. This configures root login with an empty password.
When the VMs get started interactively with the test-driver, it’s possible to connect to machine with
$ ssh vsock/3 -o User=root
The socket numbers correspond to the node number of the test VM, but start at three instead of one because that’s the lowest possible vsock number. The exact SSH commands are also printed out when starting nixos-test-driver.
On non-NixOS systems you’ll probably need to enable the SSH config from systemd-ssh-proxy(1) yourself.
If starting a VM fails with an error like
qemu-system-x86_64: -device vhost-vsock-pci,guest-cid=3: vhost-vsock: unable to set guest cid: Address already in use
it means that the vsock numbers for the VMs are already in use. This can happen if another interactive test with the SSH backdoor enabled is running on the machine.
In that case, you need to assign another range of vsock numbers. You can pick another offset with
{
  sshBackdoor = {
    enable = true;
    vsockOffset = 23542;
  };
}
If your test has only a single VM, you may use e.g.
$ QEMU_NET_OPTS="hostfwd=tcp:127.0.0.1:2222-:22" ./result/bin/nixos-test-driver
to port-forward a port in the VM (here 22) to the host machine (here port 2222).
This naturally does not work when multiple machines are involved,since a single port on the host cannot forward to multiple VMs.
If the test defines multiple machines, you may opt to temporarily set virtualisation.forwardPorts in the test definition for debugging.
Such port forwardings connect via the VM’s virtual network interface. Thus they cannot connect to ports that are only bound to the VM’s loopback interface (127.0.0.1), and the VM’s NixOS firewall must be configured to allow these connections.
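A minimal sketch of such a temporary forwarding (guest SSH port 22 to host port 2222), assuming a node called machine:

{
  nodes.machine = {
    virtualisation.forwardPorts = [
      {
        from = "host";
        host.port = 2222;
        guest.port = 22;
      }
    ];
  };
}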
You can re-use the VM states coming from a previous run by setting the --keep-vm-state flag.
$ ./result/bin/nixos-test-driver --keep-vm-state
The machine state is stored in the $TMPDIR/vm-state-machinename directory.
The .driverInteractive attribute combines the regular test configuration with definitions from the interactive submodule. This gives you a more usable, graphical, but slightly different configuration.
You can add your own interactive-only test configuration by adding extra configuration to the interactive submodule.
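For example, you might only pull in extra debugging tools when running interactively; this is a sketch, and the package choice is arbitrary:

{
  interactive.nodes.machine =
    { pkgs, ... }:
    {
      # only installed in the interactive variant of the test
      environment.systemPackages = [ pkgs.htop ];
    };
}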
To interactively run only the regular configuration, build the <test>.driver attribute instead, and call it with the flag result/bin/nixos-test-driver --interactive.
You can link NixOS module tests to the packages that they exercised, so that the tests can be run automatically during code review when the package gets changed. This is described in the nixpkgs manual.
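As a hedged sketch, a package can point to the NixOS tests that exercise it via its passthru.tests attribute, assuming nixosTests is passed to the package expression (e.g. by callPackage):

{
  # inside the package's mkDerivation arguments (simplified sketch)
  passthru.tests = {
    inherit (nixosTests) hostname;
  };
}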
This section covers how to test various features using NixOS tests that would normally only be possible with hardware. It is designed to showcase the NixOS test framework’s flexibility when combined with various hardware simulation libraries or kernel modules.
Use services.vwifi to set up a virtual Wi-Fi physical layer. Create at least two nodes for this kind of test: one with vwifi active, and either a station or an access point. Give each a static IP address on the test network so they will never collide. This module likely supports other topologies too; document them if you make one.
This NixOS module leverages vwifi. Read the upstream repository’s documentation for more information.
This node runs the vwifi server, and otherwise does not interact with the network. You can run vwifi-ctrl on this node to control characteristics of the simulated physical layer.
{
  airgap =
    { config, ... }:
    {
      networking.interfaces.eth1.ipv4.addresses = lib.mkForce [
        {
          address = "192.168.1.2";
          prefixLength = 24;
        }
      ];
      services.vwifi = {
        server = {
          enable = true;
          ports.tcp = 8212;
          # uncomment if you want to enable monitor mode on another node
          # ports.spy = 8213;
          openFirewall = true;
        };
      };
    };
}
A node like this will act as a wireless access point in infrastructure mode.
{
  ap =
    { config, ... }:
    {
      networking.interfaces.eth1.ipv4.addresses = lib.mkForce [
        {
          address = "192.168.1.3";
          prefixLength = 24;
        }
      ];
      services.hostapd = {
        enable = true;
        radios.wlan0 = {
          channel = 1;
          networks.wlan0 = {
            ssid = "NixOS Test Wi-Fi Network";
            authentication = {
              mode = "wpa3-sae";
              saePasswords = [ { password = "supersecret"; } ];
              enableRecommendedPairwiseCiphers = true;
            };
          };
        };
      };
      services.vwifi = {
        module = {
          enable = true;
          macPrefix = "74:F8:F6:00:01";
        };
        client = {
          enable = true;
          serverAddress = "192.168.1.2";
        };
      };
    };
}
A node like this acts as a wireless client.
{
  station =
    { config, ... }:
    {
      networking.interfaces.eth1.ipv4.addresses = lib.mkForce [
        {
          address = "192.168.1.3";
          prefixLength = 24;
        }
      ];
      networking.wireless = {
        # No, really, we want it enabled!
        enable = lib.mkOverride 0 true;
        interfaces = [ "wlan0" ];
        networks = {
          "NixOS Test Wi-Fi Network" = {
            psk = "supersecret";
            authProtocols = [ "SAE" ];
          };
        };
      };
      services.vwifi = {
        module = {
          enable = true;
          macPrefix = "74:F8:F6:00:02";
        };
        client = {
          enable = true;
          serverAddress = "192.168.1.2";
        };
      };
    };
}
When the monitor mode interface is enabled, this node will receive all packets broadcast by all other nodes through the spy interface.
{
  monitor =
    { config, ... }:
    {
      networking.interfaces.eth1.ipv4.addresses = lib.mkForce [
        {
          address = "192.168.1.4";
          prefixLength = 24;
        }
      ];
      services.vwifi = {
        module = {
          enable = true;
          macPrefix = "74:F8:F6:00:03";
        };
        client = {
          enable = true;
          spy = true;
          serverAddress = "192.168.1.2";
        };
      };
    };
}
Table of Contents
The NixOS test framework is a project of its own.
It consists of roughly the following components:
nixos/lib/test-driver: The Python framework that sets up the test and runs the testScript
nixos/lib/testing: The Nix code responsible for the wiring, written using the (NixOS) Module System.
These components are exposed publicly through:
nixos/lib/default.nix: The public interface that exposes the nixos/lib/testing entrypoint.
flake.nix: Exposes the lib.nixos, including the public test interface.
Beyond the test driver itself, its integration into NixOS and Nixpkgs is important.
pkgs/top-level/all-packages.nix: Defines the nixosTests attribute, used by the package tests attributes and OfBorg.
nixos/release.nix: Defines the tests attribute built by Hydra, independently, but analogous to nixosTests
nixos/release-combined.nix: Defines which tests are channel blockers.
Finally, we have legacy entrypoints that users should move away from, but are cared for on a best effort basis. These include pkgs.nixosTest, testing-python.nix and make-test-python.nix.
We currently have limited unit tests for the framework itself. You may run these with nix-build -A nixosTests.nixos-test-driver.
When making significant changes to the test framework, we run the tests on Hydra, to avoid disrupting the larger NixOS project.
For this, we use the python-test-refactoring branch in the NixOS/nixpkgs repository, and its corresponding Hydra jobset. This branch is used as a pointer, and not as a feature branch.
Rebase the PR onto a recent, good evaluation of nixos-unstable
Create a baseline evaluation by force-pushing this revision of nixos-unstable to python-test-refactoring.
Note the evaluation number (we’ll call it <previous>)
Push the PR to python-test-refactoring and evaluate the PR on Hydra
Create a comparison URL by navigating to the latest build of the PR and adding to the URL ?compare=<previous>. This is not necessary for the evaluation that comes right after the baseline.
Review the removed tests and newly failed tests using the constructed URL; otherwise you will accidentally compare iterations of the PR instead of changes to the PR base.
As we currently have some flaky tests, newly failing tests are expected, but should be reviewed to make sure that
The number of failures did not increase significantly.
All failures that do occur can reasonably be assumed to fail for a different reason than the changes.
Building, burning, and booting from an installation CD is rathertedious, so here is a quick way to see if the installer works properly:
# mount -t tmpfs none /mnt
# nixos-generate-config --root /mnt
$ nix-build '<nixpkgs>' -A nixos-install
# ./result/bin/nixos-install
To start a login shell in the new NixOS installation in /mnt:
$ nix-build '<nixpkgs>' -A nixos-enter
# ./result/bin/nixos-enter
Table of Contents
The sources of the NixOS manual are in the nixos/doc/manual subdirectory of the Nixpkgs repository. This manual uses the Nixpkgs manual syntax.
You can quickly check your edits with the following:
$ cd /path/to/nixpkgs
$ $EDITOR doc/nixos/manual/... # edit the manual
$ nix-build nixos/release.nix -A manual.x86_64-linux
If the build succeeds, the manual will be in ./result/share/doc/nixos/index.html.
There’s also a convenient development daemon.
The above instructions don’t deal with the appendix of available configuration.nix options, and the manual pages related to NixOS. These are built, and written in a different location and in a different format, as explained in the next sections.
Once you have a successful build, you can open the relevant HTML (path mentioned above) in a browser along with the anchor, and observe the redirection.
Note that if you already loaded the page and then input the anchor, you will need to perform a reload. This is because browsers do not re-run client JS code when only the anchor has changed.
configuration.nix options documentation
The documentation for all the different configuration.nix options is automatically generated by reading the descriptions of all the NixOS options defined at nixos/modules/. If you want to improve such a description, find it in the nixos/modules/ directory, edit it, and open a pull request.
To see how your changes render on the web, run again:
$ nix-build nixos/release.nix -A manual.x86_64-linux
And you’ll see the changes to the appendix in the path result/share/doc/nixos/options.html.
You can also build only the configuration.nix(5) manual page, via:
$ cd /path/to/nixpkgs
$ nix-build nixos/release.nix -A nixos-configuration-reference-manpage.x86_64-linux
And observe the result via:
$ man --local-file result/share/man/man5/configuration.nix.5
If you’re on a different architecture that’s supported by NixOS (check the file nixos/release.nix in Nixpkgs’ repository), then replace x86_64-linux with that architecture. nix-build will complain otherwise, but should also tell you which architecture you have and which ones are supported.
nixos-* tools’ manpages
The manual pages for the tools available in the installation image can be found in Nixpkgs by running (e.g. for nixos-rebuild):
$ git ls | grep nixos-rebuild.8
Man pages are written in mdoc(7) format and should be portable between mandoc and groff for rendering (except for minor differences, notably different spacing rules).
For a preview, run man --local-file path/to/file.8.
Being written in mdoc, these manpages use semantic markup. The following subsections provide a guideline on where to apply which semantic elements.
In any manpage, commands, flags and arguments to the current executable should be marked according to their semantics. Commands, flags and arguments passed to other executables should not be marked like this and should instead be considered as code examples and marked with Ql.
Use Fl to mark flag arguments, Ar for their arguments.
Repeating arguments should be marked by adding an ellipsis (spelled with periods, ...).
Use Cm to mark literal string arguments, e.g. the boot command argument passed to nixos-rebuild.
Optional flags or arguments should be marked with Op. This includes optional repeating arguments.
Required flags or arguments should not be marked.
Mutually exclusive groups of arguments should be enclosed in curly brackets, preferably created with Bro/Brc blocks.
When an argument is used in an example it should be marked up with Ar again to differentiate it from a constant. For example, a command with a --host name option that calls ssh to retrieve the host’s local time would signify this thusly:
This will run
.Ic ssh Ar name Ic time
to retrieve the remote time.
Constant paths should be marked with Pa, NixOS options with Va, and environment variables with Ev.
Generated paths, e.g. result/bin/run-hostname-vm (where hostname is a variable or an argument) should be marked as Ql inline literals with their variable components marked appropriately.
When hostname refers to an argument, it becomes .Ql result/bin/run- Ns Ar hostname Ns -vm
When hostname refers to a variable, it becomes .Ql result/bin/run- Ns Va hostname Ns -vm
In free text, names and complete invocations of other commands (e.g. ssh or tar -xvf src.tar) should be marked with Ic; fragments of command lines should be marked with Ql.
Larger code blocks or those that cannot be shown inline should use indented literal display block markup for their contents, i.e.
.Bd -literal -offset indent
...
.Ed
Contents of code blocks may be marked up further, e.g. if they refer to arguments that will be substituted into them:
.Bd -literal -offset indent
{
  config.networking.hostname = "\c
.Ar hostname Ns \c
";
}
.Ed