Lab Setup
Setting up a local lab environment to simulate the Vendor Agnostic Surveillance architecture using Multipass VMs and Tailscale VPN
In this guide, I'm going to walk you through setting up a local lab environment that simulates the architecture we deployed for the client. Since I need to respect the client's NDA, I'll show you how to replicate the setup using local virtual machines instead of the actual production environment.
What We're Building
The client hosts the k3s master node in the cloud, and the cluster has edge nodes at each site where cameras are located.
Key constraints:
- k3s nodes will run Kerberos agents and other components at each site
- Communication must not require opening inbound firewall ports or setting up port forwarding
- Everything must work over Tailscale VPN — edge k3s nodes run Tailscale clients and connect to the cloud master via the VPN mesh
This lab setup lets me demonstrate the architecture without exposing client-specific details. The principles are the same — we're just using local VMs instead of production infrastructure.
Lab Architecture
For this lab, I'm setting up:
- VM 1: `k3s-master` — the master node (simulating the cloud master)
- VM 2: `node-site-tunis-1` — an edge node (simulating a site with cameras)
Each VM will have Tailscale installed and configured to connect to the master via Tailscale VPN.
Resource Allocation
Here's the resource allocation I'm using for each VM:
| VM Name | Role | RAM | Storage | vCPU |
|---|---|---|---|---|
| `k3s-master` | Master Node | 2 GB | 10 GB | 4 |
| `node-site-tunis-1` | Edge Node | 4 GB | 20 GB | 4 |
These resources are sufficient for a lab environment. In production, you'd scale these based on your actual workload requirements.
Prerequisites
Before we start, make sure you have:
- Ubuntu (or Windows with WSL2)
- Multipass — for provisioning real virtual machines
- Basic tools — `helm`, `kubectl`, etc. (standard Kubernetes tooling)
If you're on Windows, you can use WSL2. Just make sure Multipass works correctly in your WSL environment.
Step 1: Provision the VMs
Let's start by provisioning both virtual machines using Multipass.
Provision Master VM
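A minimal sketch of the launch command, matching the resource table above (recent Multipass releases use `--memory`; older ones use `--mem`):

```bash
# Create the master VM: 2 GB RAM, 10 GB disk, 4 vCPUs
multipass launch --name k3s-master --memory 2G --disk 10G --cpus 4
```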
Provision Edge Node VM
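And the equivalent for the edge node, with its larger allocation (same flag caveats apply):

```bash
# Create the edge node VM: 4 GB RAM, 20 GB disk, 4 vCPUs
multipass launch --name node-site-tunis-1 --memory 4G --disk 20G --cpus 4
```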
Verify VMs are Running
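`multipass list` shows the state and local IPv4 of each VM; the output should look roughly like this (your IPs and image will differ):

```bash
multipass list
# Name                 State     IPv4          Image
# k3s-master           Running   10.x.x.x      Ubuntu LTS
# node-site-tunis-1    Running   10.x.x.x      Ubuntu LTS
```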
Both VMs are now running and ready for configuration. Notice they have different IP addresses on the local network — we'll connect them via Tailscale VPN.
Step 2: Fix DNS Resolution Issues
Before we proceed, I need to address a common issue with k3s and CoreDNS. On Ubuntu, systemd-resolved points `/etc/resolv.conf` at its local stub resolver (127.0.0.53). CoreDNS forwards upstream queries to whatever nameservers it finds in that file, so inside the pod those queries loop straight back into CoreDNS itself, and its loop-detection plugin crashes the pod.
To avoid these resolution loops, I highly recommend disabling systemd-resolved and writing your own `/etc/resolv.conf` with public DNS servers like 8.8.8.8.
In production, for better security, I highly recommend setting up managed DNS with restrictions. I always deploy a self-hosted AdGuard DNS instance that blocks ads, ransomware C2 domains, and the like. I'll write an article about the AdGuard setup in the future.
Let's fix this on the master node first:
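A sketch of the fix, assuming Google's 8.8.8.8 as the upstream resolver (swap in whatever resolver you prefer):

```bash
multipass shell k3s-master

# Inside the VM: stop systemd-resolved from managing /etc/resolv.conf
sudo systemctl disable --now systemd-resolved

# Replace the stub symlink with a static file pointing at a public DNS server
sudo rm -f /etc/resolv.conf
echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf

# Make the file immutable so nothing rewrites it behind our back
sudo chattr +i /etc/resolv.conf
```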
The chattr +i command makes /etc/resolv.conf immutable, preventing systemd-resolved from overwriting it. This ensures our DNS configuration persists.
Repeat the same DNS fix on the edge node (node-site-tunis-1).
Step 3: Install Tailscale on Both VMs
Now I'm going to install Tailscale on both VMs. This will create the VPN mesh network that connects our edge nodes to the master.
Install Tailscale on Master
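Tailscale's official install script handles distribution detection, so the setup is a one-liner plus `tailscale up`:

```bash
multipass shell k3s-master

# Install Tailscale via the official script, then bring it up
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
```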
After running `sudo tailscale up`, you'll need to authenticate. Follow the instructions to log in via the provided URL.
Get Master's Tailscale IP
Once authenticated, get the Tailscale IP address of the master:
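The address below is the one from my lab; yours will differ:

```bash
tailscale ip -4
# 100.125.228.57
```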
Tailscale assigns addresses in the 100.x.x.x range. This is the IP we'll use for k3s node registration, not the local network IP.
Install Tailscale on Edge Node
Now let's install Tailscale on the edge node and connect it to the same Tailscale network:
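Same install script, this time inside the edge VM:

```bash
multipass shell node-site-tunis-1

# Install Tailscale and authenticate via the URL printed by tailscale up
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
```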
Verify Tailscale Connection
You can verify both nodes are connected by checking the Tailscale dashboard, where both machines should show up as online.
Let's also verify connectivity by pinging the master from the edge node:
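For example:

```bash
# From node-site-tunis-1, ping the master over the tailnet
ping -c 4 100.125.228.57
```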
Perfect! Both nodes can now communicate via Tailscale VPN. The first ping might be slower as Tailscale establishes the direct connection, but subsequent pings should be much faster.
Step 4: Install k3s
Now I'm going to install k3s on both nodes. Since this is a demo, I'm using default settings, but there are many things to consider in production.
Kubernetes (and therefore k3s) requires swap to be disabled. Multipass provisions VMs with no swap by default, so this is already configured correctly.
Install k3s on Master Node
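A sketch of the server install. The `--tls-san` flag is my own addition here (an assumption, not something every setup needs) so the API server certificate covers the Tailscale IP the agent will dial:

```bash
multipass shell k3s-master

# Install the k3s server; --tls-san adds the Tailscale IP to the
# API server certificate so agents can validate it over the tailnet
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--tls-san 100.125.228.57" sh -

# Confirm the master came up
sudo kubectl get nodes
```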
The k3s master is now running! Notice it's already showing as Ready with the control-plane role.
Get Node Registration Token
We need the node token to register the edge node:
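On the master, k3s writes the token to a well-known path:

```bash
# Print the registration token for joining agents
sudo cat /var/lib/rancher/k3s/server/node-token
```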
Save this token securely — you'll need it to register worker nodes. In production, use proper secret management.
Install k3s on Edge Node
Now let's install k3s on the edge node. Important: We need to configure it to connect to the master using the Tailscale IP address, not the local network IP.
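Roughly like this, substituting the token you saved earlier (`<node-token-from-master>` is a placeholder, not literal):

```bash
multipass shell node-site-tunis-1

# Join as an agent; K3S_URL must use the master's Tailscale IP
curl -sfL https://get.k3s.io | \
  K3S_URL=https://100.125.228.57:6443 \
  K3S_TOKEN=<node-token-from-master> \
  sh -
```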
Notice I'm using the Tailscale IP address (`100.125.228.57`) for `K3S_URL`. This ensures the edge node connects to the master through the Tailscale VPN, not the local network.
Verify Cluster Status
Let's verify that both nodes are now part of the cluster:
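Back on the master (output sketched, with some columns trimmed; the exact layout depends on your version):

```bash
sudo kubectl get nodes -o wide
# NAME                STATUS   ROLES                  VERSION
# k3s-master          Ready    control-plane,master   v1.xx.x+k3s1
# node-site-tunis-1   Ready    <none>                 v1.xx.x+k3s1
```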
Perfect! Both nodes are now part of the k3s cluster. The edge node is connected via Tailscale VPN, demonstrating the outbound-only connectivity model.
Visual Cluster Status
For easier cluster management, I highly recommend k9s — a terminal UI for Kubernetes that makes working with the cluster much more intuitive.
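One convenient route, assuming you have Homebrew available (it works on Linux too); release binaries are also published on GitHub:

```bash
# Install k9s (release binaries: https://github.com/derailed/k9s/releases)
brew install k9s

# On a k3s node, point k9s at the k3s kubeconfig first
# (reading this file may require sudo or a readable copy)
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
k9s
```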
Production Considerations
This lab setup is great for testing, but production deployments differ in several important ways:
Tailscale Enterprise Plan
To avoid the overhead of managing VPN access manually, we opted for Tailscale's Enterprise plan. It gives us better control over access policies, device management, and security features.
Automated Edge Node Setup
For edge nodes at scale, we automated the entire process:
- Lightweight OS — We use Alpine-based OS for minimal resource footprint
- Cloud-init automation — Automated installation using cloud-init configs
- Rapid provisioning — At scale, we only need to set up a serial connection to a Raspberry Pi, and the agent is ready to go
I'll explain this automation further in future documents, since we rely on it (together with Kerberos Factory) to automate the entire edge node lifecycle.
Summary
We've successfully set up a lab environment that simulates the production architecture:
- ✅ Two VMs provisioned (master and edge node)
- ✅ DNS resolution fixed to avoid CoreDNS conflicts
- ✅ Tailscale VPN installed and configured on both nodes
- ✅ k3s cluster established with edge node connected via Tailscale
- ✅ Cluster verified and ready for workloads
Your lab environment is now ready! You can deploy Kerberos Agents and test the full architecture locally before moving to production.
In the next documents, I'll show you how to deploy Kerberos Factory, configure agents, and set up the complete video platform stack.