Interconnecting My Devices [network/interconnection]
- 2025-07-05
- spore
TL;DR: Self-hosted Tailscale (with Headscale), with Nebula as a fall-back solution.
I have an insecure LAN. I have machines in other cities. I need to access my computer at home while I'm away. So I need a virtual, peer-to-peer network that connects all my devices together. Better yet, two of them in parallel, so I can put one into maintenance mode while using the other.
Mesh VPN with Nebula [network/nebula]
I found Nebula first. It works well, requiring no central server or public SSL certificate. Conversely, admins need to sign and distribute their own certificates with nebula-cert, and these certificates are used to authorize and encrypt connections.
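For reference, creating and signing certificates looks roughly like this (names and IPs are placeholders; see the nebula-cert documentation for the full flag set):
# create the CA for the network (done once)
nebula-cert ca -name "<network-name>"
# sign a certificate per host, assigning each its Nebula IP
nebula-cert sign -name "lighthouse" -ip "10.42.0.1/24"
nebula-cert sign -name "laptop" -ip "10.42.0.2/24"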
The minimal requirement to run a Nebula network is a machine with a public, static IP and an available port (the cheapest VPS would do). This machine serves as a "lighthouse", which other nodes use to find each other. You can have multiple lighthouses, and the network stays available as long as at least one of them is online, which is handy during migration.
Nebula tries its best at NAT traversal, but if that fails it relays connections through nodes configured to work as relays. I simply use the lighthouses as relays because they are accessible from every node.1
A simple lighthouse configuration (NixOS-based; check the official docs for other setups) looks like this:
services.nebula.networks."<name>" = {
ca = "/etc/nebula/ca.crt"; # it would be better to use agenix or sops-nix
cert = "/etc/nebula/host.crt"; # but I'm too lazy to touch a working config
key = "/etc/nebula/host.key";
isLighthouse = true;
isRelay = true;
listen.port = 4242;
};
networking.firewall = {
allowedTCPPorts = [ 4242 ];
allowedUDPPorts = [ 4242 ];
};
A client configuration is a bit more complicated, because it needs to know where to find a lighthouse and to configure its firewall2 to accept connections:
services.nebula.networks."<name>" = {
ca = "/etc/nebula/ca.crt";
cert = "/etc/nebula/host.crt";
key = "/etc/nebula/host.key";
staticHostMap = {
# nebula ip is the one you assigned to the machine while signing its cert.
# you can check it with `nebula-cert print -path host.crt`
"<lighthouse-nebula-ip>" = [ "<lighthouse-public-address>:<port>" ];
};
lighthouses = [ "<lighthouse-nebula-ip>" ];
relays = [ "<lighthouse-nebula-ip>" ];
# I only have trusted devices in the network;
# if that's not true for you, use a stricter config (see the sketch after this block)
firewall = let any = [{
host = "any";
port = "any";
proto = "any";
}]; in {
inbound = any;
outbound = any;
};
};
networking.firewall = {
allowedTCPPorts = [ 4242 ];
allowedUDPPorts = [ 4242 ];
};
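As a sketch of such a stricter config: Nebula firewall rules can also match on certificate groups, so you can limit inbound access instead of allowing everything. This goes inside the same services.nebula.networks."<name>" block; the "trusted" group is a placeholder, assigned when signing certs with nebula-cert sign -groups:
firewall.inbound = [
  # allow ping from every node, handy for debugging
  { port = "any"; proto = "icmp"; host = "any"; }
  # allow SSH only from hosts whose certificate carries the "trusted" group
  { port = 22; proto = "tcp"; group = "trusted"; }
];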
The mobile app works similarly: just set the same information up in the GUI. Beware that the mobile client doesn't allow any inbound connections, and its firewall is not configurable as of now (2025-07).
Nebula has its shortcomings: its NAT traversal strategy is weak, so I often end up squeezing videos through a 5 Mbps relay connection, and rotating certificates for all devices without automation is grueling.
1 I think it's also possible to have port-forwarded relays inside the LAN to provide public-facing access to all the nodes in the local network, without depending on NAT traversal. Disclaimer: I haven't tried this in practice.
2 Note that the "lighthouse" is also a regular node in the network, and its firewall can be configured in the same way to allow inbound access with Nebula.
Mesh VPN with Tailscale (Headscale) [network/tailscale]
Tailscale solves the problems that come with Nebula, at the expense of a central server, and additionally requires a valid SSL certificate. Normally its clients connect to the Tailscale coordination server, which gives a smooth onboarding experience, but I don't want to depend on freemium services, and they do mess up sometimes. Luckily there's a self-hosted alternative called Headscale for personal-use scenarios.
In Tailscale, the coordination server is no longer a regular node. It is a dedicated public service that handles not only service discovery but also client registration, key distribution, and access control. Relays (DERP servers) are a standalone service that can be hosted separately from the coordination server. Headscale is a coordination server implementation with a simplified access control and account system, plus an embedded DERP server.
To run Headscale with the embedded DERP server1 2 (again on NixOS; refer to the official docs otherwise):
services.headscale = {
enable = true;
address = "0.0.0.0"; # this is the listen address
settings = {
server_url = "<public-url-of-the-server>";
derp = {
server = {
enabled = true;
region_id = 999;
region_code = "hsc";
region_name = "Headscale embedded";
stun_listen_addr = "<listen-addr>:<listen-port>";
ipv4 = "<public-ipv4-addr>";
};
urls = []; # this disables DERP servers provided by Tailscale
};
# set these if you don't acquire certs with headscale
# note you may also want to set `services.headscale.group` to access certs
tls_cert_path = "<tls-cert>";
tls_key_path = "<tls-key>";
dns = {
# note that this shouldn't be the same as your server's public domain
# I use a nonexistent TLD like .tail
base_domain = "<magic-dns-domain>";
};
};
};
This is a very basic config, so you may want to check out the reference config file for the other available options.
Once the Headscale service is up and running, you first need to create a user:
headscale users create <user-name>
This user name is required in the following client auth process.
In contrast, it's much simpler to register and run a Tailscale client. The vanilla client is FOSS, so I can just go with it:
services.tailscale = {
enable = true;
openFirewall = true;
extraUpFlags = [
"--login-server"
"<your-headscale-address>"
];
extraDaemonFlags = [
"--no-logs-no-support" # this disables telemetry
];
# this configures your system to be able to serve as an exit node
# (with `sudo tailscale set --advertise-exit-node`,
# or put `--advertise-exit-node` in `extraSetFlags`)
useRoutingFeatures = "server";
# see below
authKeyFile = "<auth-key>";
};
# prevent tailscale from using nebula interface
systemd.services.tailscaled.serviceConfig = {
RestrictNetworkInterfaces = "lo <interfaces-other-than-nebula>";
};
This needs some explanation. With this config, Nix generates a systemd service that starts the Tailscale daemon with
tailscaled --no-logs-no-support <other-flags-omitted>
and then calls tailscale up once with the given auth key. Some intervention is needed because Tailscale doesn't avoid Nebula interfaces, which may lead to various issues (Nebula is smart enough to avoid routing through Tailscale, so the reverse doesn't hold).
There are two flows to register a new client with Headscale (official docs here). Non-interactively, with a pre-generated key:
# on the server
headscale preauthkeys create --user <user-name>
This generates a one-time key (valid for one hour by default) that you can put in the aforementioned authKeyFile. Deploy the config with the key on your device to log it into the network.
Interactively, without setting the key:
# on the client device
tailscale login --login-server <your-headscale-address>
This leads to a web page on the server that shows a key; you then register the client with that key:
# on the server
headscale nodes register --user <user-name> --key <key-on-the-page>
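Afterwards you can verify things with the stock CLIs; tailscale ping in particular reports whether a peer is reached directly or relayed through DERP:
# on the server
headscale nodes list
# on any client
tailscale status
tailscale ping <peer>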
In both cases you only need to do this once per client/device.
On mobile devices it's even simpler: you have to go out of your way to set the login server, but that's basically it. Upon first login, the interactive login page shows up with the key, and again you run headscale nodes register on the server to register the device.
1 Only IPv4 DERP is supported in my config; set derp.server.ipv6 if needed.
2 The Headscale config has undergone some breaking changes, so this might be outdated by the time you read it.
Special notes: in case you have no access to :80 and :443 [network/tailscale-dns-challenge]
Sometimes you don't have access to ports :80 and :443 for certain reasons. I won't ask why, but here's a possible solution: configure both Headscale and its DERP server to run on ports other than :80 and :443, and use a Let's Encrypt DNS challenge to acquire the SSL certificate:
security.acme.acceptTerms = true;
security.acme.certs."<your-domain>" = {
email = "<your-email>";
dnsProvider = "<dns-provider>";
# agenix path to the systemd environmentFile in my case,
# free to use all other means to provide access keys
environmentFile = config.age.secrets.acme-env.path;
};
services.headscale = {
# choose an atypical port
port = 11366;
# other options stay the same, omitted
settings = {
server_url = "https://<your-domain>:11366";
derp = {
server = {
# anything different from the headscale server port
stun_listen_addr = "0.0.0.0:11367";
# ...
};
# ...
};
};
};
# Also open the corresponding ports in the firewall!
Then on the client, log in with:
tailscale login --login-server https://<your-domain>:11366
The Nix security.acme option uses Lego for ACME challenges, which integrates with a great many DNS providers. Check it out for how to configure yours!
Note that you still need a domain name for this, which is not free. That may no longer be true with Let's Encrypt IP certificates in the future, but given the limitations they pose, I'm not sure it will work. By the way, if you don't have a domain but your server can pass an HTTP-01 challenge, you can consider an IP-based domain service like sslip.io.
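As an illustration of what that environment file holds: with dnsProvider = "cloudflare", for example, Lego reads the API token from an environment variable roughly like this (placeholder value; check Lego's documentation for your provider's exact variables):
CLOUDFLARE_DNS_API_TOKEN=<api-token>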