A GitOps-managed Kubernetes homelab cluster running on Talos Linux.
This repository contains the declarative configuration for kantai, a bare-metal Kubernetes cluster. The cluster is designed for home infrastructure workloads with a focus on:
- GitOps-driven operations via FluxCD
- Advanced networking with Cilium, Envoy Gateway, external-dns, Cloudflare, and cert-manager
- Distributed storage using Rook-Ceph
- GPU workloads with NVIDIA GPU Operator
- Comprehensive observability using VictoriaMetrics and Grafana
- Automated dependency updates via Renovate
| Node | Role | Hardware |
|---|---|---|
| kantai1 | Hyper-converged control plane and workloads | |
| kantai2 | Virtual arm64 control plane and workloads | |
| kantai3 | Hyper-converged control plane and workloads | |
kantai is connected to an all-Ubiquiti network, with a Hi-Capacity Aggregation as the top-of-rack (ToR) switch and a Dream Machine Pro as the gateway/router/firewall. Recent versions of UniFi Network and UniFi OS support BGP, which is used to advertise load balancer addresses and thus provide node-balanced services to the network. The cluster's virtual network is dual-stack IPv4 and IPv6.
The cluster uses kantai.xyz as its public domain. It is registered at Cloudflare which also acts as the DNS authority. Cloudflare also proxies requests for services available from the public internet and tunnels them to the cluster for DDOS and privacy protection.
The cluster integrates with a Tailscale tailnet for private secure global access.
- Cluster nodes are connected to the main Ubiquiti network, which uses 10.1.0.0/16.
- Cilium advertises routes to load-balanced services using BGP.
- A Unifi network matching the load balancer CIDR is programmed to prevent unnecessary NAT hairpinning and allow flows through the firewall.
- Cilium masquerades pod addresses to node addresses.
| Role | CIDR |
|---|---|
| Pod | 10.11.0.0/16 |
| Service | 10.11.0.0/16 |
| Cilium LB IPAM | 10.11.0.0/16 |
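The BGP advertisement and LB IPAM setup can be sketched roughly as below. This is illustrative only: the ASNs, peer address, and pool name are hypothetical placeholders, and newer Cilium releases express the same idea through the `CiliumBGPClusterConfig` family of CRDs instead of `CiliumBGPPeeringPolicy`.

```yaml
# Sketch only - ASNs, peer address, and names are assumptions, not this repo's values.
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: main-pool
spec:
  blocks:
    - cidr: 10.11.0.0/16          # LB IPAM CIDR from the table above
---
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: unifi-peering
spec:
  nodeSelector: {}                 # apply on all nodes
  virtualRouters:
    - localASN: 64512              # hypothetical private ASN
      exportPodCIDR: false
      neighbors:
        - peerAddress: 10.1.0.1/32 # UDM Pro gateway (assumed address)
          peerASN: 64513
      serviceSelector:             # dummy NotIn expression = advertise all services
        matchExpressions:
          - { key: never-used, operator: NotIn, values: ["never"] }
```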
For IPv6 networking, I decided to use globally routable addresses for pods, services, and LB IPAM. This means no masquerading is necessary, which is more in the spirit of IPv6. Routes and firewalls must still be programmed for traffic to flow.
- Cluster nodes are connected to the main Ubiquiti network, which receives an IPv6 `/64` prefix via prefix delegation and assigns addresses to clients via SLAAC.
- 3 additional `/64` prefixes are manually reserved for pods, services, and Cilium LB IPAM.
- Cilium advertises routes to load-balanced services using BGP (same as IPv4).
- A Unifi network matching the load balancer CIDR is programmed to prevent unnecessary NAT hairpinning and allow flows through the firewall (same as IPv4).
- IPv6 masquerading is disabled.
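In Cilium Helm values, this dual-stack policy (masquerade IPv4, route IPv6 natively) looks roughly like the sketch below; the IPv6 prefix shown is the documentation prefix, standing in for the real delegated prefix.

```yaml
# Illustrative Cilium Helm values - not the repo's actual values file.
ipv6:
  enabled: true
enableIPv4Masquerade: true     # IPv4 pods are masqueraded to node addresses
enableIPv6Masquerade: false    # globally routable IPv6 pod addresses, no NAT
ipam:
  mode: cluster-pool
  operator:
    clusterPoolIPv4PodCIDRList: ["10.11.0.0/16"]
    clusterPoolIPv6PodCIDRList: ["2001:db8:0:2::/64"]  # placeholder delegated prefix
```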
The cluster is managed entirely through GitOps using FluxCD. All resources are declared in this repository and automatically reconciled to the cluster. The Flux Operator manages the FluxCD instance.
- Kustomizations define the desired state of each application
- HelmReleases manage Helm chart deployments
- OCIRepositories pull charts from OCI registries
- Drift detection ensures cluster state matches Git
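Together, these pieces fit the usual Flux pattern sketched below. Names, the chart URL, and tag are hypothetical, and the exact `apiVersion`s depend on the Flux release in use.

```yaml
# Illustrative Flux resources - names and URL are placeholders.
apiVersion: source.toolkit.fluxcd.io/v1
kind: OCIRepository
metadata:
  name: app-charts
spec:
  interval: 1h
  url: oci://ghcr.io/example/charts/app
  ref:
    tag: 1.2.3
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: app
spec:
  interval: 1h
  chartRef:
    kind: OCIRepository
    name: app-charts
  driftDetection:
    mode: enabled        # reconcile away manual changes to cluster state
  values: {}
```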
Automated Talos and Kubernetes upgrades are managed by tuppr. Upgrade CRDs (TalosUpgrade, KubernetesUpgrade) define version targets with health checks that ensure VolSync backups complete and Ceph cluster health is OK before proceeding.
The repository is constantly updated using Renovate and flux-local. Minor and patch updates are applied automatically while major releases require human approval.
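That policy can be expressed in Renovate configuration along these lines (a sketch; the repository's actual rules may be more granular):

```json5
// Illustrative Renovate config - not this repo's exact rules.
{
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch"],
      "automerge": true            // applied automatically
    },
    {
      "matchUpdateTypes": ["major"],
      "automerge": false           // majors wait for human approval
    }
  ]
}
```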
Cilium serves as the CNI in kube-proxy replacement mode, providing:
- eBPF-based networking with native routing
- BGP control plane for advertising load-balanced services to the Unifi gateway
- LoadBalancer IP Address Management to assign routable addresses to load-balanced services
- Network policies for pod-level traffic control
- Bandwidth Manager with BBR for bandwidth and congestion control
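The features above map onto Cilium Helm values roughly as follows (illustrative; the real values live in this repo's HelmRelease):

```yaml
# Illustrative Cilium Helm values for the features listed above.
kubeProxyReplacement: true   # eBPF replaces kube-proxy entirely
routingMode: native          # native routing, no overlay encapsulation
bgpControlPlane:
  enabled: true              # BGP advertisements to the UniFi gateway
bandwidthManager:
  enabled: true
  bbr: true                  # BBR congestion control for pod egress
```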
Envoy Gateway provides a complete and up-to-date implementation of the Kubernetes Gateway API with advanced extensions.
An external Gateway is used for routes that should be available from the public internet (via a Cloudflare Tunnel), while an internal Gateway is used for routes that should only be accessible on the local network or on my tailnet.
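The split looks roughly like the sketch below; the `gatewayClassName`, namespace, and route names are assumptions, not necessarily this repo's values.

```yaml
# Illustrative two-Gateway split - names and namespace are placeholders.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: internal
  namespace: network
spec:
  gatewayClassName: envoy-gateway
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      hostname: "*.kantai.xyz"
      allowedRoutes:
        namespaces:
          from: All
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app
spec:
  parentRefs:
    - name: internal        # or `external` for internet-facing routes
      namespace: network
  hostnames: ["app.kantai.xyz"]
  rules:
    - backendRefs:
        - name: app
          port: 80
```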
external-dns automatically manages DNS records for services:
- Cloudflare for external `Gateway` routes
- UniFi for internal `Gateway` routes, using @kashalls's excellent UniFi provider
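For the internal instance, the wiring is roughly the flag set below (illustrative; the UniFi provider runs as an external-dns webhook, and the owner ID is a placeholder):

```yaml
# Illustrative external-dns container args for the internal instance.
args:
  - --source=gateway-httproute   # watch Gateway API HTTPRoutes
  - --domain-filter=kantai.xyz
  - --provider=webhook           # @kashalls's UniFi webhook provider
  - --registry=txt
  - --txt-owner-id=kantai        # placeholder ownership marker
```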
The Tailscale Operator integrates the cluster with my tailnet.
- API Server Proxy - The Kubernetes API server is accessible over the tailnet via Tailscale's API server proxy in auth mode, enabling API server access with tailnet authn/authz.
- Split-Horizon DNS - A k8s-gateway deployment serves as a `kantai.xyz` split-horizon DNS server on the tailnet for all `HTTPRoute` resources with a `kantai.xyz` hostname, making them resolvable on the tailnet (but not reachable, since the Envoy `Gateway` services use the Cilium BGP LoadBalancer class; see next). The k8s-gateway service itself is exposed to the tailnet using a Tailscale load balancer service.
- The Unifi gateway is connected to the tailnet and programmed as a subnet router for the Cilium BGP LoadBalancer's IPv4 CIDR, making all such services reachable over the tailnet.
Multus CNI enables attaching multiple network interfaces to pods. Used for workloads requiring direct LAN access via macvlan interfaces with dual-stack networking support.
external-secrets synchronizes secrets from 1Password into Kubernetes using the 1Password Connect server. A ClusterSecretStore provides cluster-wide access to secrets.
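A typical consumer of that store looks like the sketch below; the store, item, and field names are hypothetical, and the `apiVersion` depends on the external-secrets release.

```yaml
# Illustrative ExternalSecret pulling from the 1Password Connect-backed store.
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: app-secret
spec:
  secretStoreRef:
    kind: ClusterSecretStore
    name: onepassword       # assumed store name
  target:
    name: app-secret        # resulting Kubernetes Secret
  data:
    - secretKey: password
      remoteRef:
        key: app            # 1Password item (placeholder)
        property: password  # field within the item
```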
cert-manager automates certificate lifecycle management:
- Maintains a wildcard certificate for `kantai.xyz` using the Let's Encrypt DNS-01 challenge (Cloudflare API)
- trust-manager distributes CA bundles across namespaces
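The wildcard certificate amounts to something like this (the issuer name is an assumption):

```yaml
# Illustrative wildcard Certificate - issuer name is a placeholder.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: kantai-xyz
spec:
  secretName: kantai-xyz-tls
  dnsNames:
    - kantai.xyz
    - "*.kantai.xyz"
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt     # backed by a Cloudflare DNS-01 solver
```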
Pocket ID serves as the in-cluster OIDC provider, enabling:
- Kubernetes API server OIDC authentication
- OIDC authentication for apps that do not natively support it, via Envoy Gateway's `SecurityPolicy` extension
- Centralized identity management for applications
Rook-Ceph provides distributed storage across the cluster:
- Block Storage (`ceph-block`) - Default storage class with 3-way replication and LZ4 compression
- Object Storage (`ceph-bucket`) - S3-compatible storage with erasure coding (2+1)
- Dashboard exposed via Envoy Gateway
- Encrypted OSDs for data-at-rest security
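The replicated, compressed block pool corresponds to roughly this Rook resource (pool name is a placeholder):

```yaml
# Illustrative CephBlockPool behind the ceph-block storage class.
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: ceph-blockpool
  namespace: rook-ceph
spec:
  failureDomain: host          # spread replicas across nodes
  replicated:
    size: 3                    # 3-way replication
  parameters:
    compression_mode: aggressive
    compression_algorithm: lz4
```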
OpenEBS ZFS LocalPV exposes existing ZFS pools on nodes as Kubernetes storage:
- Provides access to large media and data pools
- Supports ZFS features (compression, snapshots, datasets)
- Used for workloads requiring high-capacity local storage
Samba deployments on storage nodes share ZFS-backed volumes to the local network via SMB, enabling access to cluster-managed data from non-Kubernetes clients.
VolSync backs up persistent volumes to Cloudflare R2 using Kopia:
- Daily snapshots with 7 daily, 4 weekly, 12 monthly retention
- Clone-based backups (no application downtime)
- Zstd compression for efficient storage
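A per-PVC backup then looks something like this sketch. The PVC and secret names are placeholders, and the field layout follows the shape of VolSync's other movers; the Kopia mover is a recent addition, so consult the VolSync docs for the exact schema.

```yaml
# Illustrative VolSync ReplicationSource - names and field shapes are assumptions.
apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: app-backup
spec:
  sourcePVC: app-data
  trigger:
    schedule: "0 4 * * *"          # daily snapshot
  kopia:
    repository: app-kopia-secret   # credentials for the Cloudflare R2 bucket
    copyMethod: Clone              # clone-based, no application downtime
    retain:
      daily: 7
      weekly: 4
      monthly: 12
```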
CloudNative-PG manages PostgreSQL clusters for applications:
- PostgreSQL 18 with vchord vector extensions for AI/ML workloads
- WAL archiving via barman-cloud plugin
- Automated backups and point-in-time recovery
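A CNPG cluster along these lines ties those pieces together (cluster and object-store names are placeholders; the plugin interface requires a recent CNPG release):

```yaml
# Illustrative CloudNative-PG Cluster - names are placeholders.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgres
spec:
  instances: 3
  storage:
    size: 20Gi
    storageClass: ceph-block
  plugins:
    - name: barman-cloud.cloudnative-pg.io  # WAL archiving via the barman-cloud plugin
      isWALArchiver: true
      parameters:
        barmanObjectName: r2-store          # assumed ObjectStore resource name
```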
The NVIDIA GPU Operator enables GPU workloads:
- Automatic container toolkit management
- CDI (Container Device Interface) support
- Time-slicing for GPU sharing
- DCGM metrics for monitoring
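Time-slicing is configured through a ConfigMap that the operator's `ClusterPolicy` references via `devicePlugin.config`; the replica count below is an example, not necessarily this cluster's setting.

```yaml
# Illustrative time-slicing config for the NVIDIA GPU Operator.
apiVersion: v1
kind: ConfigMap
metadata:
  name: time-slicing-config
  namespace: gpu-operator
data:
  any: |
    version: v1
    sharing:
      timeSlicing:
        resources:
          - name: nvidia.com/gpu
            replicas: 4   # one physical GPU appears as 4 schedulable GPUs
```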
The VictoriaMetrics Operator manages the metrics stack:
- VMSingle for metrics storage (12-week retention on Ceph block storage)
- VMAgent for metric collection
- VMAlert + VMAlertmanager for alerting
- OpenTelemetry integration with Prometheus naming
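The storage side reduces to a small `VMSingle` resource along these lines (name and sizes are assumptions):

```yaml
# Illustrative VMSingle - name and storage size are placeholders.
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMSingle
metadata:
  name: victoria-metrics
spec:
  retentionPeriod: "12w"         # 12-week retention
  storage:
    storageClassName: ceph-block # persisted on Ceph block storage
    resources:
      requests:
        storage: 50Gi
```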
The Grafana Operator manages Grafana instances and dashboards:
- Declarative dashboard management via `GrafanaDashboard` CRDs
- Automated datasource configuration
- Integrated with VictoriaMetrics
fluent-bit collects container logs from all nodes, running as a DaemonSet in the observability-agents namespace.
The kube-prometheus-stack provides:
- ServiceMonitors for Kubernetes components (API server, kubelet, etcd, scheduler, controller-manager)
- kube-state-metrics for resource metrics
- Dashboards via Grafana Operator integration
Note: Prometheus and Alertmanager from this stack are disabled in favor of VictoriaMetrics. The stack is primarily used for its comprehensive ServiceMonitor definitions and dashboards.
```
├── kubernetes/                   # Kubernetes resources
│   ├── apps/                     # Deployments by namespace
│   │   ├── cert-manager/
│   │   ├── cnpg-system/
│   │   ├── database/             # Databases (postgres, influxdb)
│   │   ├── default/              # Most applications
│   │   ├── external-secrets/
│   │   ├── flux-system/
│   │   ├── gpu-operator/         # NVIDIA GPU operator
│   │   ├── kube-system/          # Core infrastructure (Cilium, CoreDNS, etc.)
│   │   ├── network/              # Networking (Envoy Gateway, external-dns, etc.)
│   │   ├── observability/        # Observability stack
│   │   ├── observability-agents/ # Privileged observability agents
│   │   ├── openebs-system/
│   │   ├── rook-ceph/
│   │   ├── storage/              # Samba
│   │   ├── tailscale/
│   │   ├── talos-admin/          # Talos management (backups, tuppr)
│   │   └── volsync-system/
│   ├── components/               # Reusable Kustomize components
│   └── transformers/             # Global Kustomize transformers
├── talos/                        # Talos configuration
└── Taskfile.yaml                 # Task runner commands
```
Bootstrap is currently broken and unusable. I love my pets.
Update Talos node configuration:
```shell
task talos:gen-mc
task talos:apply-mc
```

- Talos Linux provides an immutable, minimal OS with no SSH access
- Secure Boot enabled on supported nodes with TPM-backed disk encryption
- Pod Security Standards enforced via ValidatingAdmissionPolicies
- Network Policies via Cilium restrict pod-to-pod traffic
- OIDC authentication for Kubernetes API via Pocket ID
Lots of dashboards available on the on-cluster Grafana instance. Alerts go out to Discord.
- This cluster originally started from onedr0p/cluster-template, which is absolutely amazing. It makes running Kubernetes at home easy.
- The Home Operations community is amazing as well and will help you. Please join us.
- Sidero Labs for creating an amazing Kubernetes-native system.
- All the Kubernetes SIG groups for maintaining and evolving the world's open, extensible, at-scale resources and workloads orchestration system.