A long time ago I read an article by Lemon about using N2N for remote networking (https://blog.ilemonrain.com/linux/n2n-v2-tutorial.html), and it left a strong impression on me. Actually deploying N2N, however, was not a pleasant experience.

Then I heard that WireGuard had been merged into the Linux kernel. A brief introduction to WireGuard:
WireGuard is an open-source (https://zh.wikipedia.org/wiki/%E5%BC%80%E6%94%BE%E6%BA%90%E4%BB%A3%E7%A0%81) VPN program and protocol developed by Jason A. Donenfeld. It is implemented in the Linux kernel and uses Curve25519 for key exchange, ChaCha20 for encryption, Poly1305 for message authentication, and BLAKE2 for hashing (https://zh.wikipedia.org/wiki/WireGuard#cite_note-wireguard-site-1). It operates at layer 3 and supports both IPv4 and IPv6 (https://zh.wikipedia.org/wiki/WireGuard#cite_note-wireguard-whitepaper_section1-2). WireGuard aims to deliver better performance than IPsec and OpenVPN (https://zh.wikipedia.org/wiki/WireGuard#cite_note-3).
WireGuard does indeed perform excellently, but configuring it is cumbersome: to add one device to a WireGuard network, you have to touch the configuration files of almost every device already in it. Fortunately, there is now a company called Tailscale that offers a zero-configuration VPN networking solution built on WireGuard.
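To make the pain point with plain WireGuard concrete: every node lists every peer explicitly, so bringing a new device into the mesh means generating a key pair and then appending a `[Peer]` block on every existing node. A minimal sketch with placeholder keys and addresses:

```ini
# /etc/wireguard/wg0.conf on one existing node (illustrative values only)
[Interface]
PrivateKey = <this-node-private-key>
Address = 10.0.0.1/24
ListenPort = 51820

# every existing device needs a block like this...
[Peer]
PublicKey = <peer-1-public-key>
AllowedIPs = 10.0.0.2/32

# ...and adding a new device means adding a block like this on ALL nodes
[Peer]
PublicKey = <new-device-public-key>
AllowedIPs = 10.0.0.3/32
```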
Headscale is an open-source implementation of the Tailscale control server. It supports self-hosting and places no limit on the number of connected devices, so I spent some time deploying it.
## Projects Used

- GitHub: [juanfont/headscale](https://github.com/juanfont/headscale)
- GitHub: [gurucomputing/headscale-ui](https://github.com/gurucomputing/headscale-ui)
## Deploying headscale
Here I deploy with docker-compose:

```yaml
version: '3.5'
services:
  headscale:
    image: headscale/headscale:latest-alpine
    container_name: headscale
    volumes:
      - ./container-config:/etc/headscale
      - ./container-data/data:/var/lib/headscale
    ports:
      - 27896:8080
    command: headscale serve
    restart: unless-stopped
  headscale-ui:
    image: ghcr.io/gurucomputing/headscale-ui:latest
    restart: unless-stopped
    container_name: headscale-ui
    ports:
      - 9443:443
```
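The `./container-config` volume mounted above must contain a `config.yaml` for headscale to start. A minimal sketch follows; key names have shifted between headscale releases, so treat these as assumptions and check them against the example config shipped with your version:

```yaml
# ./container-config/config.yaml (abridged, illustrative)
server_url: https://{{headscale domain}}  # public URL that clients will be told to use
listen_addr: 0.0.0.0:8080                 # matches the 27896:8080 port mapping above
private_key_path: /var/lib/headscale/private.key
db_type: sqlite3
db_path: /var/lib/headscale/db.sqlite
```

With the config in place, `docker compose up -d` brings both containers up.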
I also put Nginx in front as a reverse proxy, serving headscale-ui and headscale on different domains, which means some CORS handling is needed. The Nginx configuration is as follows:
```nginx
location / {
    add_header 'Access-Control-Allow-Origin' '{{UI Domain}}' always;
    add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
    add_header 'Access-Control-Allow-Headers' 'Authorization,DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range' always;
    add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always;

    # answer the browser's preflight request directly
    if ($request_method = 'OPTIONS') {
        add_header 'Access-Control-Allow-Origin' '{{UI Domain}}' always;
        add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
        add_header 'Access-Control-Allow-Headers' 'Authorization,DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range' always;
        add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always;
        add_header 'Access-Control-Max-Age' 1728000;
        add_header 'Content-Type' 'text/plain; charset=utf-8';
        add_header 'Content-Length' 0;
        return 204;
    }

    proxy_pass {{headscale address}};
    proxy_http_version 1.1;
    # WebSocket/long-poll upgrade, needed for the headscale client connection
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;
    proxy_redirect default;
}
```
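One way to sanity-check the CORS setup (with the placeholders substituted) is to send a preflight request by hand and confirm the 204 response carries the `Access-Control-*` headers; the API path here is just an arbitrary endpoint for the check:

```shell
# simulate the browser's preflight request; expect HTTP 204 plus the CORS headers
curl -i -X OPTIONS https://{{headscale domain}}/api/v1/apikey \
  -H 'Origin: {{UI Domain}}' \
  -H 'Access-Control-Request-Method: GET'
```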
## Configuring headscale-ui
headscale-ui is a purely frontend project: it calls the headscale API directly from the user's browser. You therefore need to create an API key inside the headscale container for it to authenticate with.

```shell
docker exec -it headscale headscale apikey create
```
This command prints an API key, which goes into the headscale-ui settings. That setting is kept only locally in the browser and is never uploaded anywhere, so if you switch devices or browsers you will need to repeat this step.
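Beyond the API key for the UI, devices are normally registered under a user. A possible flow is sketched below; subcommand names have changed across headscale versions (older releases say `namespaces` where newer ones say `users`), so verify against `headscale --help` for your release:

```shell
# create a user to own the devices
docker exec -it headscale headscale users create admin

# issue a pre-auth key so a device can join without interactive approval
docker exec -it headscale headscale preauthkeys create --user admin --expiration 1h

# on the client, point the tailscale client at the self-hosted control server
tailscale up --login-server https://{{headscale domain}} --authkey {{pre-auth key}}
```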
## Configuring ACL
headscale also supports ACLs (Access Control Lists) to control which devices can reach which within this big internal network. Here is a relatively simple policy file I wrote; headscale accepts HuJSON, so the comments are allowed.
```json
// ./container-config/acl.json
{
  "groups": {
    "group:admin": ["admin"], // administrator users
    "group:user": ["user"]    // regular users
  },
  "acls": [
    // { "action": "accept", "src": ["*"], "dst": ["*:*"] },
    // administrators' devices can reach every device
    { "action": "accept", "src": ["group:admin"], "dst": ["*:*"] },
    // regular users can only reach devices tagged "share" and their own devices
    { "action": "accept", "src": ["group:user"], "dst": ["tag:share:*", "autogroup:self:*"] }
  ],
  "ssh": [
    {
      "action": "check",
      "src": ["autogroup:members"],
      "dst": ["autogroup:self"],
      "users": ["autogroup:nonroot", "root"]
    }
  ],
  "tagOwners": {
    "tag:share": ["group:admin"]
  }
}
```
Also, set `acl_policy_path` in the headscale configuration file to point at it:

```yaml
acl_policy_path: "/etc/headscale/acl.json"
```
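The simplest way to be sure the new policy is actually loaded is to restart the container and glance at the logs, where a malformed `acl.json` will show up as a parse error:

```shell
docker compose restart headscale
docker logs headscale --tail 20
```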
And that's basically it. Client-side setup is well documented elsewhere, so I won't go into it here.