
⛵ flatops

A GitOps-managed Kubernetes homelab cluster running on Talos Linux.

📋 Overview

This repository contains the declarative configuration for kantai, a bare-metal Kubernetes cluster. The cluster is designed for home infrastructure workloads with a focus on:

  • GitOps-driven operations via FluxCD
  • Advanced networking with Cilium, Envoy Gateway, external-dns, Cloudflare, and cert-manager
  • Distributed storage using Rook-Ceph
  • GPU workloads with NVIDIA GPU Operator
  • Comprehensive observability using VictoriaMetrics and Grafana
  • Automated dependency updates via Renovate

πŸ—οΈ Cluster Architecture

Nodes

kantai1 (Hyper-converged control plane and workloads)
  • AMD EPYC 7443P, 64 GiB RAM
  • NVIDIA RTX 4000 Ada Generation, 24 GB
  • Micron 9300 PRO, 4 TB, x7
  • Seagate Exos X20, 18 TB, x15
  • NVIDIA ConnectX-5
  • LSI 9500-8e
  • 45Drives HL-15 chassis

kantai2 (Virtual arm64 control plane and workloads)
  • Apple M2 Mac Mini, 16 GB memory, 500 GB block storage
  • UTM + QEMU hypervisor

kantai3 (Hyper-converged control plane and workloads)
  • AMD Ryzen Embedded V1500B, 32 GB RAM
  • NVIDIA T400, 4 GB
  • Seagate Exos X18, 18 TB, x6
  • NVIDIA ConnectX-3
  • QNAP TS-673A chassis

Network

kantai is connected to an all-Ubiquiti network, with a Hi-Capacity Aggregation switch at the top of rack and a Dream Machine Pro as the gateway/router/firewall. Recent versions of UniFi Network and UniFi OS support BGP, which is used to advertise load balancer addresses and thus provide node-balanced services to the network. The cluster's virtual network is dual-stack IPv4 and IPv6.

The cluster uses kantai.xyz as its public domain. It is registered at Cloudflare, which also acts as the DNS authority. Cloudflare also proxies requests for services available from the public internet and tunnels them to the cluster for DDoS and privacy protection.

The cluster integrates with a Tailscale tailnet for private secure global access.

IPv4

  • Cluster nodes are connected to the main Ubiquiti network which uses 10.1.0.0/16.
  • Cilium advertises routes to load-balanced services using BGP.
  • A UniFi network matching the load balancer CIDR is programmed to prevent unnecessary NAT hairpinning and to allow flows through the firewall.
  • Cilium masquerades pod addresses to node addresses.

Role            CIDR
Pod             10.11.0.0/16
Service         10.11.0.0/16
Cilium LB IPAM  10.11.0.0/16
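
Load balancer addresses are allocated from the pool above via Cilium LB IPAM. A minimal pool definition might look like the following sketch (the pool name is illustrative):

apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: main-pool           # illustrative name
spec:
  blocks:
    - cidr: 10.11.0.0/16    # the LB IPAM CIDR from the table above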

IPv6

For IPv6 networking, I decided to use globally routable addresses for pods, services, and LB IPAM. This means no masquerading is necessary, which is more in the spirit of IPv6. Routes and firewalls must still be programmed for traffic to flow.

  • Cluster nodes are connected to the main Ubiquiti network which receives an IPv6 /64 prefix via prefix delegation and assigns addresses to clients via SLAAC.
  • Three additional /64 prefixes are manually reserved for pods, services, and Cilium LB IPAM.
  • Cilium advertises routes to load-balanced services using BGP (same as IPv4).
  • A UniFi network matching the load balancer CIDR is programmed to prevent unnecessary NAT hairpinning and to allow flows through the firewall (same as IPv4).
  • IPv6 masquerading is disabled.

🔧 Core Components

GitOps & Cluster Management

FluxCD

The cluster is managed entirely through GitOps using FluxCD. All resources are declared in this repository and automatically reconciled to the cluster. The Flux Operator manages the FluxCD instance.

  • Kustomizations define the desired state of each application
  • HelmReleases manage Helm chart deployments
  • OCIRepositories pull charts from OCI registries
  • Drift detection ensures cluster state matches Git
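
For example, a typical application release pairs an OCIRepository with a HelmRelease via chartRef. A minimal sketch; the chart name, registry URL, and values are illustrative:

apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: app
spec:
  interval: 1h
  url: oci://ghcr.io/example/charts/app   # hypothetical chart registry
  ref:
    tag: 1.2.3
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: app
spec:
  interval: 1h
  chartRef:
    kind: OCIRepository
    name: app
  values:
    replicaCount: 1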

tuppr

Automated Talos and Kubernetes upgrades are managed by tuppr. Upgrade CRDs (TalosUpgrade, KubernetesUpgrade) define version targets with health checks that ensure VolSync backups complete and Ceph cluster health is OK before proceeding.
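
As a rough sketch only, an upgrade resource could look something like the following; the API group, version, and field names here are assumptions based on the description above, not tuppr's actual schema (consult the tuppr documentation):

# Hypothetical sketch; tuppr's real CRD schema may differ.
apiVersion: tuppr.home-operations.com/v1alpha1
kind: TalosUpgrade
metadata:
  name: talos
spec:
  talos:
    version: v1.11.0                              # assumed version-target field
  healthChecks:                                   # assumed health-gate field
    - apiVersion: ceph.rook.io/v1
      kind: CephCluster
      name: rook-ceph
      namespace: rook-ceph
      expr: status.ceph.health == 'HEALTH_OK'     # assumed CEL-style expression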

Renovate

The repository is kept continuously up to date by Renovate, with flux-local validating changes in CI. Minor and patch updates are applied automatically, while major releases require human approval.

Networking

Cilium

Cilium serves as the CNI in kube-proxy replacement mode, providing eBPF-based pod networking, network policy enforcement, load balancer IPAM, and BGP advertisement of load balancer addresses.

Envoy Gateway

Envoy Gateway provides a complete and up-to-date implementation of the Kubernetes Gateway API with advanced extensions.

An external Gateway is used for routes that should be available from the public internet (via a Cloudflare Tunnel), while an internal Gateway is used for routes that should only be accessible on the local network or on my tailnet.
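
A route binds to one of these Gateways through its parentRefs. A minimal sketch, assuming the Gateways are named external and internal and live in the network namespace:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app
spec:
  parentRefs:
    - name: external        # use the internal Gateway for LAN/tailnet-only routes
      namespace: network
  hostnames:
    - app.kantai.xyz        # hypothetical hostname
  rules:
    - backendRefs:
        - name: app
          port: 80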

external-dns

external-dns automatically manages DNS records for services, publishing records for Gateway API routes to Cloudflare, the cluster's DNS authority.

Tailscale

The Tailscale Operator integrates the cluster with my tailnet.

  • API Server Proxy - The Kubernetes API server is accessible over the tailnet via Tailscale's API server proxy in auth mode, enabling API server access with tailnet authn/authz.
  • Split-Horizon DNS - A k8s-gateway deployment serves as a kantai.xyz split-horizon DNS server on the tailnet for all HTTPRoute resources with a kantai.xyz hostname, making them resolvable on the tailnet (but not reachable since the Envoy Gateway services use the Cilium BGP LoadBalancer class; see next). The k8s-gateway service itself is exposed to the tailnet using a Tailscale load balancer service.
  • The UniFi gateway is connected to the tailnet and programmed as a subnet router for the Cilium BGP LoadBalancer's IPv4 CIDR, making all such services reachable over the tailnet.
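
Exposing a service to the tailnet with the Tailscale operator only requires the tailscale load balancer class. A minimal sketch of the k8s-gateway exposure described above (the selector and port are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: k8s-gateway
spec:
  type: LoadBalancer
  loadBalancerClass: tailscale            # provisioned by the Tailscale operator
  selector:
    app.kubernetes.io/name: k8s-gateway   # illustrative selector
  ports:
    - name: dns
      port: 53
      protocol: UDP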

Multus

Multus CNI enables attaching multiple network interfaces to pods. Used for workloads requiring direct LAN access via macvlan interfaces with dual-stack networking support.
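
A macvlan attachment is declared as a NetworkAttachmentDefinition and referenced from a pod annotation. A minimal sketch, assuming the node uplink is eth0 and DHCP-based IPAM:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: lan
spec:
  config: |
    {
      "cniVersion": "1.0.0",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": { "type": "dhcp" }
    }

Pods opt in with the annotation k8s.v1.cni.cncf.io/networks: lan.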

Secrets Management

external-secrets + 1Password

external-secrets synchronizes secrets from 1Password into Kubernetes using the 1Password Connect server. A ClusterSecretStore provides cluster-wide access to secrets.
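
A typical consumer declares an ExternalSecret against that store. A minimal sketch, assuming the store is named onepassword and the referenced item exists in 1Password:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secrets
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: onepassword          # assumed store name
  target:
    name: app-secrets          # resulting Kubernetes Secret
  data:
    - secretKey: password
      remoteRef:
        key: app               # hypothetical 1Password item
        property: password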

Certificate Management

cert-manager + trust-manager

cert-manager automates certificate lifecycle management:

  • Maintains a wildcard certificate for kantai.xyz using Let's Encrypt DNS challenge (Cloudflare API)
  • trust-manager distributes CA bundles across namespaces
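
The wildcard certificate boils down to a single Certificate resource. A minimal sketch, assuming a ClusterIssuer named letsencrypt backed by a Cloudflare DNS-01 solver:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: kantai-xyz
spec:
  secretName: kantai-xyz-tls
  dnsNames:
    - kantai.xyz
    - "*.kantai.xyz"
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt          # assumed issuer name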

Identity & Authentication

Pocket ID

Pocket ID serves as the in-cluster OIDC provider, enabling:

  • Kubernetes API server OIDC authentication
  • OIDC authentication for apps that do not natively support it via Envoy Gateway's SecurityPolicy extension
  • Centralized identity management for applications
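
Wrapping a route with OIDC is a matter of attaching a SecurityPolicy to it. A minimal sketch, assuming a Pocket ID issuer at a hypothetical id.kantai.xyz and a pre-created client secret:

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: app-oidc
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: app
  oidc:
    provider:
      issuer: https://id.kantai.xyz    # hypothetical Pocket ID URL
    clientID: app
    clientSecret:
      name: app-oidc-client            # Secret holding the client secret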

Storage

Rook-Ceph

Rook-Ceph provides distributed storage across the cluster:

  • Block Storage (ceph-block) - Default storage class with 3-way replication, LZ4 compression
  • Object Storage (ceph-bucket) - S3-compatible storage with erasure coding (2+1)
  • Dashboard exposed via Envoy Gateway
  • Encrypted OSDs for data-at-rest security
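
Workloads consume the block pool through ordinary PersistentVolumeClaims; ceph-block is the default class, so naming it is optional (the size is illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ceph-block
  resources:
    requests:
      storage: 10Gi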

OpenEBS ZFS

OpenEBS ZFS LocalPV exposes existing ZFS pools on nodes as Kubernetes storage:

  • Provides access to large media and data pools
  • Supports ZFS features (compression, snapshots, datasets)
  • Used for workloads requiring high-capacity local storage
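
Each pool is surfaced through a StorageClass pointed at the ZFS LocalPV provisioner. A minimal sketch, assuming an existing pool named tank (class and pool names are illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfs-media              # illustrative class name
provisioner: zfs.csi.openebs.io
parameters:
  poolname: tank               # existing ZFS pool on the node
  fstype: zfs
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true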

Samba

Samba deployments on storage nodes share ZFS-backed volumes to the local network via SMB, enabling access to cluster-managed data from non-Kubernetes clients.

VolSync + Kopia

VolSync backs up persistent volumes to Cloudflare R2 using Kopia:

  • Daily snapshots with 7 daily, 4 weekly, 12 monthly retention
  • Clone-based backups (no application downtime)
  • Zstd compression for efficient storage
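
Each protected PVC gets a ReplicationSource. The sketch below mirrors the shape of VolSync's documented movers; the kopia block's exact field names are an assumption (the Kopia mover is not part of upstream VolSync), and the secret name is illustrative:

apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: app
spec:
  sourcePVC: app-data
  trigger:
    schedule: "0 4 * * *"        # daily snapshot
  kopia:                         # assumed mover block; check your VolSync build
    repository: app-volsync-r2   # Secret with R2 repository credentials
    copyMethod: Clone            # clone-based, no application downtime
    retain:
      daily: 7
      weekly: 4
      monthly: 12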

Database

CloudNative-PG

CloudNative-PG manages PostgreSQL clusters for applications:

  • PostgreSQL 18 with vchord vector extensions for AI/ML workloads
  • WAL archiving via barman-cloud plugin
  • Automated backups and point-in-time recovery
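
A database for an application is a single Cluster resource. A minimal sketch (the instance count, image tag, and size are illustrative, and the barman-cloud plugin wiring is elided):

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: app-postgres
spec:
  instances: 3
  imageName: ghcr.io/cloudnative-pg/postgresql:18   # illustrative image tag
  storage:
    storageClass: ceph-block
    size: 20Gi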

GPU Compute

NVIDIA GPU Operator

The NVIDIA GPU Operator enables GPU workloads:

  • Automatic container toolkit management
  • CDI (Container Device Interface) support
  • Time-slicing for GPU sharing
  • DCGM metrics for monitoring
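
Consuming a GPU (or a time-sliced share of one) is just a resource request. A minimal smoke-test pod, with an illustrative CUDA image tag:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  runtimeClassName: nvidia       # RuntimeClass installed by the GPU Operator
  containers:
    - name: cuda
      image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1      # a time-sliced share when sharing is enabled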

Observability

Metrics: VictoriaMetrics

The VictoriaMetrics Operator manages the metrics stack:

  • VMSingle for metrics storage (12-week retention on Ceph block storage)
  • VMAgent for metric collection
  • VMAlert + VMAlertmanager for alerting
  • OpenTelemetry integration with Prometheus naming
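
The storage instance itself is a small CRD. A minimal sketch matching the description above (the volume size is illustrative):

apiVersion: operator.victoriametrics.com/v1beta1
kind: VMSingle
metadata:
  name: main
spec:
  retentionPeriod: 12w           # 12-week retention
  storage:
    storageClassName: ceph-block
    resources:
      requests:
        storage: 50Gi            # illustrative size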

Dashboards: Grafana Operator

The Grafana Operator manages Grafana instances and dashboards:

  • Declarative dashboard management via GrafanaDashboard CRDs
  • Automated datasource configuration
  • Integrated with VictoriaMetrics
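
Dashboards are plain resources that the operator reconciles into Grafana. A minimal sketch, assuming the Grafana instance carries a dashboards: grafana label:

apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDashboard
metadata:
  name: app-dashboard
spec:
  instanceSelector:
    matchLabels:
      dashboards: grafana        # assumed label on the Grafana CR
  json: >
    { "title": "App", "panels": [] }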

Logs: fluent-bit

fluent-bit collects container logs from all nodes, running as a DaemonSet in the observability-agents namespace.

kube-prometheus-stack

The kube-prometheus-stack provides:

  • ServiceMonitors for Kubernetes components (API server, kubelet, etcd, scheduler, controller-manager)
  • kube-state-metrics for resource metrics
  • Dashboards via Grafana Operator integration

Note: Prometheus and Alertmanager from this stack are disabled in favor of VictoriaMetrics. The stack is primarily used for its comprehensive ServiceMonitor definitions and dashboards.

📁 Repository Structure

├── kubernetes/                  # Kubernetes resources
│   ├── apps/                    # Deployments by namespace
│   │   ├── cert-manager/
│   │   ├── cnpg-system/
│   │   ├── database/            # Databases (postgres, influxdb)
│   │   ├── default/             # Most applications
│   │   ├── external-secrets/
│   │   ├── flux-system/
│   │   ├── gpu-operator/        # NVIDIA GPU operator
│   │   ├── kube-system/         # Core infrastructure (Cilium, CoreDNS, etc.)
│   │   ├── network/             # Networking (Envoy Gateway, external-dns, etc.)
│   │   ├── observability/       # Observability stack
│   │   ├── observability-agents/# Privileged observability agents
│   │   ├── openebs-system/
│   │   ├── rook-ceph/
│   │   ├── storage/             # Samba
│   │   ├── tailscale/
│   │   ├── talos-admin/         # Talos management (backups, tuppr)
│   │   └── volsync-system/
│   ├── components/              # Reusable Kustomize components
│   └── transformers/            # Global Kustomize transformers
├── talos/                       # Talos configuration
└── Taskfile.yaml                # Task runner commands

🚀 Getting Started

Bootstrap

Bootstrap is currently broken and unusable. I love my pets.

Maintenance

Update Talos node configuration:

task talos:gen-mc
task talos:apply-mc

🔒 Security

  • Talos Linux provides an immutable, minimal OS with no SSH access
  • Secure Boot enabled on supported nodes with TPM-backed disk encryption
  • Pod Security Standards enforced via ValidatingAdmissionPolicies
  • Network Policies via Cilium restrict pod-to-pod traffic
  • OIDC authentication for Kubernetes API via Pocket ID

📊 Monitoring

Many dashboards are available on the in-cluster Grafana instance. Alerts are delivered to Discord.

🙏 Acknowledgments

  • This cluster originally started from onedr0p/cluster-template, which is absolutely amazing. It makes running Kubernetes at home easy.
  • The Home Operations community is amazing as well and will help you. Please join us.
  • Sidero Labs for creating an amazing Kubernetes-native system.
  • All the Kubernetes SIGs for maintaining and evolving the world's open, extensible, at-scale workload orchestration system.
