Making a qBittorrent Module for NixOS

#nixos #qbittorrent #linux

2025-03-02

Introduction

I've been working on porting my home server from Ubuntu to NixOS after my boot drive failed. If you're not familiar, NixOS uses the Nix package manager, which guarantees reproducibility; the whole system is configured declaratively via the Nix language. I talk about it a bit more in a previous post.

One of the applications in my stack, qBittorrent, wants its traffic routed through a VPN. Previously this was handled for me by a specific Docker setup. For NixOS I needed to roll my own solution. This article details the development of my own highly configurable NixOS module, as well as how I achieved per-application split-tunnelling.

Creating the Module

The great thing about writing Nix modules is the abundance of references in the nixpkgs repo on GitHub. You can look up almost any existing service module there and use it as a base, particularly for developing the options side of things. An instructive reference I used was the Sonarr module.

Any Nix module that exposes a service really has two parts: options and config. Note that this assumes the application is already packaged in nixpkgs. The options section exposes parameters to the user, while the config section describes how the service is parameterised.

Let's use that Sonarr reference as an example. The structure of the file is as follows:

```nix
{ config, pkgs, lib, utils, ... }:

let
  cfg = config.services.sonarr;
in
{
  ## OPTIONS SECTION
  options = {
    services.sonarr = {
      enable = lib.mkEnableOption "Sonarr";

      dataDir = lib.mkOption {
        type = lib.types.str;
        default = "/var/lib/sonarr/.config/NzbDrone";
        description = "The directory where Sonarr stores its data files.";
      };

      openFirewall = lib.mkOption {
        type = lib.types.bool;
        default = false;
        description = ''
          Open ports in the firewall for the Sonarr web interface
        '';
      };

      user = lib.mkOption {
        type = lib.types.str;
        default = "sonarr";
        description = "User account under which Sonarr runs.";
      };

      group = lib.mkOption {
        type = lib.types.str;
        default = "sonarr";
        description = "Group under which Sonarr runs.";
      };

      package = lib.mkPackageOption pkgs "sonarr" { };
    };
  };

  ## CONFIG SECTION
  config = lib.mkIf cfg.enable {
    systemd.tmpfiles.rules = [ "d '${cfg.dataDir}' 0700 ${cfg.user} ${cfg.group} - -" ];

    systemd.services.sonarr = {
      description = "Sonarr";
      after = [ "network.target" ];
      wantedBy = [ "multi-user.target" ];
      serviceConfig = {
        Type = "simple";
        User = cfg.user;
        Group = cfg.group;
        ExecStart = utils.escapeSystemdExecArgs [
          (lib.getExe cfg.package)
          "-nobrowser"
          "-data=${cfg.dataDir}"
        ];
        Restart = "on-failure";
      };
    };

    networking.firewall = lib.mkIf cfg.openFirewall {
      allowedTCPPorts = [ 8989 ];
    };

    users.users = lib.mkIf (cfg.user == "sonarr") {
      sonarr = {
        group = cfg.group;
        home = cfg.dataDir;
        uid = config.ids.uids.sonarr;
      };
    };

    users.groups = lib.mkIf (cfg.group == "sonarr") {
      sonarr.gid = config.ids.gids.sonarr;
    };
  };
}
```

You can clearly see the two sections. In the options section, we are defining options via the lib.mkOption function, e.g. enable. Then, in the config section, we are using the defined options to control behaviour; this service will only be created if config.services.sonarr.enable = true. Eventually, we end up parameterising a systemd service which runs the Sonarr application.

We can take the same approach for our qBittorrent service; run it via systemd, but parameterise it via options.

Module Development

In my research I found a few existing solutions:

These solutions seemed to work, but they didn't leverage the full power of NixOS. The solution I came up with is unique in that it parameterises the configuration file of qBittorrent directly, exposing far more options to the user. The benefit of this is that I need only configure the application once; each subsequent rebuild will have an identical configuration.

To achieve this, I started by looking at the configuration file for qBittorrent. The application uses a file in its configuration directory, qBittorrent/config/qBittorrent.conf. This file exposes configuration options via the following syntax:

```
[BitTorrent]
Session\AddTorrentStopped=false
Session\BTProtocol=TCP
Session\DefaultSavePath=/data/media/torrents
Session\DisableAutoTMMByDefault=false
```
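If you want to poke at this format outside of qBittorrent, it is close enough to standard INI that Python's configparser can read it; the backslashes are simply part of the key names. The sketch below is only an illustration of the syntax, not part of the module:

```python
import configparser

# A short excerpt in qBittorrent's INI-style syntax. The backslash is
# part of the key name; qBittorrent uses it to namespace settings
# (e.g. Session\BTProtocol) within a [section].
sample = """\
[BitTorrent]
Session\\AddTorrentStopped=false
Session\\BTProtocol=TCP
Session\\DefaultSavePath=/data/media/torrents
"""

parser = configparser.ConfigParser()
parser.optionxform = str  # preserve the case of key names
parser.read_string(sample)

print(parser["BitTorrent"]["Session\\BTProtocol"])  # TCP
```

Note the optionxform override: by default configparser lowercases key names, which would mangle qBittorrent's CamelCase keys.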

After some investigation, I found that qBittorrent will load the configuration file before running, then reformat it as necessary; if the [section] and label are correct, the program will operate normally. Additionally, missing values are replaced with their default values. Leveraging these two properties, I was able to write the following Nix snippet:

```nix
# Generate only non-null settings dynamically
generateConfig = attrs:
  concatStringsSep "\n\n" (
    mapAttrsToList (
      section: keys:
      let
        lines = mapAttrsToList (
          key: value:
          "${key}=${
            if isBool value then
              (if value then "true" else "false")
            else if isList value then
              concatStringsSep ", " (map toString value)
            else
              toString value
          }"
        ) (filterAttrs (_: v: v != null) keys);
      in
      if lines == [ ] then "" else "[${section}]\n" + concatStringsSep "\n" lines
    ) (filterAttrs (_: v: v != { }) attrs)
  );

# Create the qBittorrent configuration file
qbittorrentConf = pkgs.writeText "qBittorrent.conf" (generateConfig {
  BitTorrent = filterAttrs (_: v: v != null) {
    "Session\\BTProtocol" = cfg.bittorrent.protocol;
    "Session\\Port" = cfg.bittorrent.port;
    "Session\\GlobalDLSpeedLimit" = cfg.bittorrent.globalDownloadSpeedLimit;
    "Session\\GlobalUPSpeedLimit" = cfg.bittorrent.globalUploadSpeedLimit;
    "Session\\Interface" = cfg.bittorrent.interface;
    "Session\\InterfaceName" = cfg.bittorrent.interfaceName;
    "Session\\Preallocation" = cfg.bittorrent.preallocation;
    "Session\\QueueingSystemEnabled" = cfg.bittorrent.queueingEnabled;
    "Session\\MaxActiveDownloads" = cfg.bittorrent.maxActiveDownloads;
    "Session\\MaxActiveTorrents" = cfg.bittorrent.maxActiveTorrents;
    "Session\\MaxActiveUploads" = cfg.bittorrent.maxActiveUploads;
    "Session\\DefaultSavePath" = cfg.bittorrent.defaultSavePath;
    "Session\\DisableAutoTMMByDefault" = cfg.bittorrent.disableAutoTMMByDefault;
    "Session\\DisableAutoTMMTriggers\\CategorySavePathChanged" = cfg.bittorrent.disableAutoTMMTriggersCategorySavePathChanged;
    "Session\\DisableAutoTMMTriggers\\DefaultSavePathChanged" = cfg.bittorrent.disableAutoTMMTriggersDefaultSavePathChanged;
    "Session\\ExcludedFileNamesEnabled" = cfg.bittorrent.excludedFileNamesEnabled;
    "Session\\ExcludedFileNames" = cfg.bittorrent.excludedFileNames;
    "Session\\FinishedTorrentExportDirectory" = cfg.bittorrent.finishedTorrentExportDirectory;
    "Session\\SubcategoriesEnabled" = cfg.bittorrent.subcategoriesEnabled;
    "Session\\TempPath" = cfg.bittorrent.tempPath;
  };
  Core = filterAttrs (_: v: v != null) {
    "AutoDeleteAddedTorrentFile" = cfg.core.autoDeleteTorrentFile;
  };
  Network = filterAttrs (_: v: v != null) {
    "PortForwardingEnabled" = cfg.network.portForwardingEnabled;
  };
  Preferences = filterAttrs (_: v: v != null) {
    "WebUI\\LocalHostAuth" = cfg.webUI.localHostAuth;
    "WebUI\\AuthSubnetWhitelist" = cfg.webUI.authSubnetWhitelist;
    "WebUI\\AuthSubnetWhitelistEnabled" = cfg.webUI.authSubnetWhitelistEnabled;
    "WebUI\\Username" = cfg.webUI.username;
    "WebUI\\Port" = cfg.webUI.port;
    "WebUI\\Password_PBKDF2" = cfg.webUI.password;
    "WebUI\\CSRFProtection" = cfg.webUI.csrfProtection;
    "WebUI\\ClickjackingProtection" = cfg.webUI.clickjackingProtection;
  };
});
```

This snippet is quite straightforward if you understand the Nix language. It defines a function generateConfig which takes an attribute set attrs as input. These attrs are mapped to a list via the mapAttrsToList builtin, and the resulting list of section strings is then concatenated into a single string.

Most of the magic here is happening in the mapAttrsToList function. In the outer call, we map over each section, with keys holding that section's key-value pairs.

```nix
mapAttrsToList (
  section: keys:
  let
```

Each key-value pair is mapped to a string and stored in the list lines. Null values are filtered out.

```nix
lines = mapAttrsToList (
  key: value:
  "${key}=${
    if isBool value then
      (if value then "true" else "false")
    else if isList value then
      concatStringsSep ", " (map toString value)
    else
      toString value
  }"
) (filterAttrs (_: v: v != null) keys);
```

After all key-value pairs are mapped, the section is constructed using lines. Empty sections are filtered out.

```nix
in
if lines == [ ] then "" else "[${section}]\n" + concatStringsSep "\n" lines
) (filterAttrs (_: v: v != { }) attrs)
```

I'm actually probably over-filtering here. Regardless, this short snippet is able to construct the desired configuration file.
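To make the logic concrete for readers less familiar with Nix, here is the same algorithm sketched in Python. This is only an illustration of what generateConfig does, not part of the module:

```python
def generate_config(sections: dict) -> str:
    """Mirror of the Nix generateConfig: drop None values and empty
    sections, render bools as true/false and lists comma-separated."""
    def render(value):
        if isinstance(value, bool):
            return "true" if value else "false"
        if isinstance(value, list):
            return ", ".join(str(v) for v in value)
        return str(value)

    parts = []
    for section, keys in sections.items():
        lines = [f"{k}={render(v)}" for k, v in keys.items() if v is not None]
        if lines:  # skip sections with no remaining keys
            parts.append(f"[{section}]\n" + "\n".join(lines))
    return "\n\n".join(parts)

print(generate_config({
    "BitTorrent": {
        "Session\\BTProtocol": "TCP",
        "Session\\Port": 6881,
        "Session\\GlobalDLSpeedLimit": None,  # null option, dropped
    },
    "Network": {},  # empty section, dropped
}))
```

Running this prints a [BitTorrent] section containing only the two non-null keys, exactly as the Nix version would emit.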

Once I've imported the module, the actual configuration looks something like this:

```nix
# qBittorrent -- see ../../../modules/qbittorrent/default.nix for options
qbittorrent = {
  enable = true;
  user = "qbittorrent";
  group = "media";
  configDir = "${configDir}/qbittorrent";
  bittorrent = {
    protocol = "TCP";
    globalDownloadSpeedLimit = 6500;
    globalUploadSpeedLimit = 2000;
    interface = "wg-mullvad";
    interfaceName = "wg-mullvad";
    preallocation = true;
    queueingEnabled = false;
    defaultSavePath = "${mediaDir}/torrents";
    disableAutoTMMByDefault = false;
    disableAutoTMMTriggersCategorySavePathChanged = false;
    disableAutoTMMTriggersDefaultSavePathChanged = false;
    finishedTorrentExportDirectory = "${mediaDir}/torrents/complete";
    subcategoriesEnabled = true;
  };
  network.portForwardingEnabled = false;
  ...
```

Now that we've gone over the module itself, let's address the main challenge: traffic routing.

Adding Selective VPN Routing via a Network Namespace

As a relative beginner, networking in Linux feels quite complicated to me. I find it hard to connect shell commands to the underlying network stack. To address this, I'm going to try to explain each command I run in detail, and also provide a mental model where possible.

Let's start with what we want: all traffic routed via a VPN, except local (LAN) access to the WebUI. Additionally, I want the rest of my server's traffic to be routed outside of the VPN. We need some way of split-tunnelling our traffic on a per-application basis. A few solutions to this already exist in the references I've noted, both leveraging network namespaces.

The first solution uses socat as a proxy to open a Unix domain socket. This is combined with nginx to route traffic from localhost:8080 to that socket. This works, but the use of socat seems out of place. Can't we use the networking stack itself?

VPN-Confinement is a great nix flake which tries to modularise the whole VPN construction and confinement via namespaces process. It would probably work for my use case, but I'd prefer to avoid external dependencies where possible; I'm hoping to get my module merged into nixpkgs eventually.

Since we aren't using Docker, my services aren't isolated by default: they all share a single network namespace. We can create a separate network namespace for qBittorrent, which effectively isolates it from the rest of the machine, the same as the aforementioned references. It helps to think of each network namespace as an independent networking stack.

If you're anything like me, this sort of textual description evokes a mental image of what is happening inside of Linux. Without it, terminal commands are just gibberish. Try to hold on to that mental image and imagine what we are doing as we run the commands below.

A Network Namespace

To create a new network namespace, we can run:

```shell
ip netns add wg-qbittorrent
```

This uses the netns subcommand of the ip utility. We are telling the networking stack to add a new network namespace called wg-qbittorrent.

We can check the result of this command by running:

```
[sam@myshkin:~]$ ip netns
wg-qbittorrent (id: 0)
```

Once we've created this namespace, we want to bring it online. We can do this as follows:

```shell
ip -n wg-qbittorrent link set lo up
```

In this command, we are setting the link called lo to up. In the context of Linux networking, a link represents a physical interface (e.g. Ethernet eth0) or a virtual interface (e.g. WireGuard wg0) in the networking stack. Here, we are bringing up the loopback link lo. We do this because some applications require traffic on localhost to operate.

We can check the result of this command by running:

```
[sam@myshkin:~]$ sudo ip netns exec wg-qbittorrent ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host proto kernel
       valid_lft forever preferred_lft forever
```

We can see that the state of the loopback device is set to UP. A note on the command here: since we now have two network namespaces, a standard ip command, e.g. ip a, wouldn't work; the links associated with the wg-qbittorrent namespace would not appear. We need to use the netns exec command to execute a command inside the wg-qbittorrent namespace.

The namespace isolation becomes apparent when you try to access services running on your host. In this example, I run a simple netcat server which echoes a message.

```
# Both client and server on default namespace
[sam@myshkin:~]$ echo "Do you read me?" | nc -l 5678 &
[2] 7990
[sam@myshkin:~]$ nc localhost 5678
Do you read me?
^C

# Client and server on separate namespaces
[sam@myshkin:~]$ echo "Do you read me?" | nc -l 5678 &
[4] 10099
[sam@myshkin:~]$ sudo ip netns exec wg-qbittorrent nc localhost 5678
[sam@myshkin:~]$
```

As you can see, the client on the wg-qbittorrent namespace cannot reach the server running on the default namespace.

Let's try and get our qBittorrent's traffic routed via WireGuard now.

Attaching WireGuard to a Network Namespace

I won't go into too much detail in this section, as WireGuard isn't the focus of this writeup. In my system flake on NixOS, I have the following code in a wireguard.nix module:

```nix
networking.wireguard.interfaces.wg-mullvad = {
  # Use a separate network namespace for the VPN.
  # sudo ip netns exec wg-qbittorrent curl --interface wg-mullvad https://am.i.mullvad.net/connected
  privateKey = "my-private-key";
  ips = ["my-ip"];
  interfaceNamespace = "wg-qbittorrent";
  preSetup = ''
    ip netns add wg-qbittorrent
    ip -n wg-qbittorrent link set lo up
  '';
  postShutdown = ''
    ip netns delete wg-qbittorrent
  '';
  peers = [
    {
      publicKey = "the-public-key";
      allowedIPs = ["0.0.0.0/0" "::0/0"];
      endpoint = "the-endpoint";
    }
  ];
};
```

You can see we do three key things here:

  • create the namespace in preSetup
  • delete the namespace in postShutdown
  • set interfaceNamespace

Long story short, you need to create the network namespace before the WireGuard service starts. You specify the network namespace it attaches to via the interfaceNamespace option.

So now qBittorrent is running, and our traffic is being routed via WireGuard:

```
[sam@myshkin:~]$ sudo ip netns exec wg-qbittorrent ip route
default dev wg-mullvad scope link
```

We've achieved our goal of routing the application's traffic via the VPN, but what about the WebUI? We still can't access that.

A Virtual Ethernet Cable

While we've successfully routed the application's traffic through the VPN, we still face a problem: we can't access the WebUI. The issue arises because there's no route between the isolated network namespace and our default namespace. This is expected behavior; network namespaces are designed to provide isolation. To solve this, let's wire a virtual Ethernet cable between the namespaces.

The veth interface is a network interface (link) with a very simple mental model: an Ethernet cable. It allows us to bridge our isolated namespaces, enabling direct communication.

First, we need to create a veth pair to link the namespaces:

```shell
ip link add veth-host type veth peer name veth-vpn
```

This command adds a link called veth-host of type veth, with a peer named veth-vpn. You can think of this as an Ethernet cable with two ends: veth-host and veth-vpn. Since we created the pair in our default namespace, both ends of the cable are currently plugged into the default namespace.

Let's move veth-vpn into the wg-qbittorrent namespace:

```shell
ip link set veth-vpn netns wg-qbittorrent
```

This command sets the netns of veth-vpn to wg-qbittorrent. We've just unplugged the veth-vpn end of our Ethernet cable and plugged it into the wg-qbittorrent namespace.

Next, we need to assign IP addresses to each of the interfaces, so that our networking stack knows where to route traffic:

```shell
ip addr add 10.200.200.1/24 dev veth-host
ip netns exec wg-qbittorrent ip addr add 10.200.200.2/24 dev veth-vpn
```

The veth-host end of our cable has the IP 10.200.200.1 and the veth-vpn end has the IP 10.200.200.2. By inspection we can see that both ends are on the same /24 subnet.
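That "same subnet" claim is what makes the kernel route traffic between the two ends directly. If you want to verify it without squinting at the CIDR notation, Python's ipaddress module (used here purely as an illustration) can do the arithmetic:

```python
import ipaddress

# The two ends of the veth pair, with the /24 prefix from the ip addr commands.
host = ipaddress.ip_interface("10.200.200.1/24")  # veth-host
vpn = ipaddress.ip_interface("10.200.200.2/24")   # veth-vpn

print(host.network)                 # 10.200.200.0/24
print(vpn.network == host.network)  # True: both ends share one subnet
```

Because both interfaces fall inside 10.200.200.0/24, the kernel's automatically created connected route (which we'll see below) covers traffic between them with no extra routing rules.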

Finally, we need to set the interfaces up, i.e. turn them on. We can do this by running:

```shell
ip link set veth-host up
ip netns exec wg-qbittorrent ip link set veth-vpn up
```

Let's see if they're connected:

```
[sam@myshkin:~]$ ping 10.200.200.2
PING 10.200.200.2 (10.200.200.2) 56(84) bytes of data.
64 bytes from 10.200.200.2: icmp_seq=1 ttl=64 time=0.048 ms
64 bytes from 10.200.200.2: icmp_seq=2 ttl=64 time=0.056 ms
^C
--- 10.200.200.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1009ms
rtt min/avg/max/mdev = 0.048/0.052/0.056/0.004 ms
```

Traffic is successfully passing between the namespaces. We can confirm this by checking the routes:

```
[sam@myshkin:~]$ sudo ip netns exec wg-qbittorrent ip route
default dev wg-mullvad scope link
10.200.200.0/24 dev veth-vpn proto kernel scope link src 10.200.200.2
```

The second route indicates that traffic bound for the 10.200.200.0/24 subnet will pass through the veth-vpn interface. This route is created for the veth pair automatically by the kernel.

Everything appears to be working. Our final test is to curl the qBittorrent WebUI.

```
[sam@myshkin:~]$ curl 10.200.200.2:8080
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8" />
    <meta name="color-scheme" content="light dark" />
    <meta name="description" content="qBittorrent WebUI">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>qBittorrent WebUI</title>
    <link rel="icon" type="image/png" href="images/qbittorrent32.png" />
    <link rel="icon" type="image/svg+xml" href="images/qbittorrent-tray.svg" />
    <link rel="stylesheet" type="text/css" href="css/login.css?v=gms426" />
    <noscript>
        <link rel="stylesheet" type="text/css" href="css/noscript.css?v=gms426" />
    </noscript>
    <script defer src="scripts/login.js?locale=en&v=gms426"></script>
</head>
<body>
    <noscript id="noscript">
        <h1>JavaScript Required! You must enable JavaScript for the WebUI to work properly</h1>
    </noscript>
    <div id="main">
        <h1>qBittorrent WebUI</h1>
        <div id="logo" class="col">
            <img src="images/qbittorrent-tray.svg" alt="qBittorrent logo" />
        </div>
        <div id="formplace" class="col">
            <form id="loginform">
                <div class="row">
                    <label for="username">Username</label><br />
                    <input type="text" id="username" name="username" autocomplete="username" autofocus required />
                </div>
                <div class="row">
                    <label for="password">Password</label><br />
                    <input type="password" id="password" name="password" autocomplete="current-password" required />
                </div>
                <div class="row">
                    <input type="submit" id="loginButton" value="Login" />
                </div>
            </form>
        </div>
        <div id="error_msg"></div>
    </div>
</body>
</html>
```

Now that everything is working, we can add these commands to the preSetup of our wireguard.nix config. We should also add the appropriate cleanup commands to the postShutdown field.

```nix
networking.wireguard.interfaces.wg-mullvad = {
  # Use a separate network namespace for the VPN.
  # sudo ip netns exec wg-qbittorrent curl --interface wg-mullvad https://am.i.mullvad.net/connected
  privateKey = "my-private-key";
  ips = ["my-ip"];
  interfaceNamespace = "wg-qbittorrent";
  preSetup = ''
    ip netns add wg-qbittorrent
    ip -n wg-qbittorrent link set lo up

    # Create a veth pair to link the namespaces
    ip link add veth-host type veth peer name veth-vpn
    ip link set veth-vpn netns wg-qbittorrent
    ip addr add 10.200.200.1/24 dev veth-host
    ip netns exec wg-qbittorrent ip addr add 10.200.200.2/24 dev veth-vpn
    ip link set veth-host up
    ip netns exec wg-qbittorrent ip link set veth-vpn up
    ip netns exec wg-qbittorrent ip route add default via 10.200.200.1
  '';
  postShutdown = ''
    # Delete the veth pair
    ip link del veth-host

    # Delete the namespace
    ip netns del wg-qbittorrent
  '';
  peers = [
    {
      publicKey = "the-public-key";
      allowedIPs = ["0.0.0.0/0" "::0/0"];
      endpoint = "the-endpoint";
    }
  ];
};
```

Conclusion

I now have a working NixOS module for qBittorrent that can reliably reproduce qBittorrent instances with the same configuration. Additionally, it leverages network namespaces and a veth pair to achieve split-tunnelling, ensuring that local traffic stays local. The implications of my module are huge. If my server's boot drive dies again, I can spin up a new machine with an identical config in minutes, not hours. That kind of power is what draws me to Nix.

The more I work with NixOS, the less inclined I am to work with other tools. The struggle to set things up is real, but once something works, it just works.