
Finally Building the Kubernetes Cluster: The Long Road to K8s


Note: I’m currently transitioning this site to a new theme (Chirpy). Things might look a little different while I get everything configured!

It’s taken longer than expected, but the Kubernetes cluster is finally underway. Getting here involved navigating some unique challenges: discovering that the so-called “10-inch rack standard” isn’t quite standard after all (which caused fitting issues with certain hardware), working around the quirks of the UniFi UDM API, sorting out VLAN tagging on the Flex series switches, and getting to grips with RouterOS’s switch-chip style of configuration versus traditional per-interface setups.

Other notable hurdles included troubleshooting the infamous netplan try behavior when driven from Ansible playbooks, keeping secrets management consistent across mixed architectures, and unraveling undocumented features and quirks of the Minisforum MS-01 nodes. With those obstacles out of the way, the cluster infrastructure is ready for deployment.

A huge thank you goes to my wife, who provided exceptionally creative solutions for mounting hardware and routing cables. Her innovative ideas played a crucial role in making everything fit and function seamlessly.

Future considerations include upgrading to a 19-inch rack and enhancing networking resilience by replacing the 2.5Gb switches and control-plane switch.

Understanding Kubernetes: Components, Tooling, and Choices

Transitioning from a container-centric background to Kubernetes is a significant jump in complexity. A quick run-through of the “Kubernetes the Hard Way” guide on virtual machines provided valuable insight into Kubernetes components and how they interact. For the actual homelab deployment, however, kubeadm will be used for its efficiency and practicality.

Here’s what’s being deployed:

Core Kubernetes Components

Control Plane Components

The Kubernetes control plane makes cluster-wide decisions and monitors cluster health. With kubeadm, each of these components runs as a static pod on the control-plane nodes (a sketch follows the list).

  • kube-apiserver: Acts as Kubernetes’ front door, handling all interactions through its API, validating requests, and updating configuration stored in etcd.
  • etcd: The highly reliable, distributed key-value store holding all configuration data, serving as the cluster’s “database.”
  • kube-scheduler: Assigns newly created pods to appropriate nodes based on resources, constraints, and affinity rules.
  • kube-controller-manager: Regulates cluster state, including managing pod replicas and responding to node failures.
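
For context, kubeadm writes each of these components as a static pod manifest under /etc/kubernetes/manifests/ on every control-plane node, and the kubelet runs them directly. The snippet below is a heavily abridged, illustrative sketch of a generated kube-apiserver manifest; the image version, addresses, and flags are examples, not the exact values this cluster uses.

```yaml
# Abridged, illustrative sketch of /etc/kubernetes/manifests/kube-apiserver.yaml.
# kubeadm generates the real file; values here are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true                                 # binds directly to the node's network
  containers:
  - name: kube-apiserver
    image: registry.k8s.io/kube-apiserver:v1.30.0   # example version
    command:
    - kube-apiserver
    - --etcd-servers=https://127.0.0.1:2379         # local etcd member in a stacked topology
    - --service-cluster-ip-range=10.96.0.0/12       # default service CIDR
    # ...dozens of additional flags omitted...
```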

Node Components

Every node runs these components, not just the workers; control-plane nodes need them too. A sample worker join configuration is sketched after the list:

  • kubelet: The primary agent ensuring pod containers are running and communicating status to the control plane.
  • kube-proxy: Manages network rules enabling pod communication inside and outside the cluster.
  • Container Runtime: Software that executes containers, with containerd selected for this deployment.
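
To make the worker side concrete, here is a minimal, hypothetical kubeadm JoinConfiguration for one of the worker nodes. The endpoint, token, CA hash, and node name are placeholders; the interesting part is the criSocket line, which is where the kubelet is pointed at containerd.

```yaml
# Hypothetical worker join config, used as: kubeadm join --config join.yaml
# Endpoint, token, CA hash, and node name are placeholders.
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "k8s-api.example.lab:6443"    # control-plane VIP / load balancer
    token: "abcdef.0123456789abcdef"
    caCertHashes:
    - "sha256:<hash-of-the-cluster-ca>"
nodeRegistration:
  name: worker-01
  criSocket: unix:///run/containerd/containerd.sock  # kubelet talks to containerd over CRI
```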

Container Runtime Choices

  • containerd: Selected for its performance and lightweight operation, containerd has become the de facto standard Kubernetes runtime. Originally part of Docker, it is now an independent project handling core container execution (see the kubelet/runtime configuration sketch after this list).
  • Docker: Though excellent for development, Docker is no longer used directly by Kubernetes: the dockershim integration was deprecated and then removed (in v1.24) because Docker doesn’t implement the Container Runtime Interface (CRI) and maintaining the shim had become a burden.
  • Podman: A daemonless, rootless alternative that’s compatible with Docker commands, eliminating the need for background services.
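
One practical wrinkle with containerd is making sure the kubelet and the runtime agree on the cgroup driver. The sketch below assumes the usual pairing on Ubuntu (systemd cgroups on both sides, with containerd’s SystemdCgroup = true set separately); the runtime endpoint field is available in recent Kubernetes releases.

```yaml
# Sketch of a KubeletConfiguration block appended to the kubeadm --config file.
# Assumes containerd is configured with SystemdCgroup = true on its side.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                 # must match the runtime's cgroup driver
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
```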

Networking in Kubernetes

Kubernetes leverages the Container Network Interface (CNI) standard, offering various plugins:

  • Cilium: Uses eBPF for efficient networking and security enforcement (chosen for this deployment; a configuration sketch follows this list).
  • Calico: Known for robust network policy controls.
  • Flannel: Simplified overlay networking suitable for beginners.
  • Weave Net: Mesh network with built-in encryption capabilities.
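
To give a rough idea of what the Cilium deployment can look like, here is a hypothetical Helm values sketch. The API endpoint, pod CIDR, and the decision to let Cilium replace kube-proxy are assumptions for illustration, not confirmed settings for this cluster.

```yaml
# Hypothetical values.yaml for: helm install cilium cilium/cilium -n kube-system -f values.yaml
k8sServiceHost: "k8s-api.example.lab"     # control-plane endpoint (placeholder)
k8sServicePort: 6443
kubeProxyReplacement: true                # optionally let Cilium's eBPF datapath replace kube-proxy
ipam:
  mode: cluster-pool
  operator:
    clusterPoolIPv4PodCIDRList:
    - "10.244.0.0/16"                     # example pod CIDR
```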

CoreDNS serves as the cluster DNS provider, replacing the older kube-dns and handling in-cluster service discovery (names like my-service.my-namespace.svc.cluster.local).

Kubernetes Security Layers

  • SSL/TLS Certificates: Secure inter-component communication, auto-generated during control-plane initialization.
  • RBAC (Role-Based Access Control): Manages access permissions within the cluster.
  • Secrets Management: Kubernetes’ built-in Secrets resource only base64-encodes data; it does not encrypt it. I’ll be exploring options here, though I ran into some issues with the mixed architecture and clients earlier in the project. A sketch of the default behavior and of encryption at rest follows below.
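
To make the “encoding, not encryption” point concrete: a Secret’s data is stored base64-encoded, so anyone who can read the object (or etcd) can decode it. One mitigation is encryption at rest via an EncryptionConfiguration handed to the API server; the sketch below is illustrative, with a placeholder key.

```yaml
# A Secret's data is only base64-encoded:
#   echo -n "hunter2" | base64   ->   aHVudGVyMg==
apiVersion: v1
kind: Secret
metadata:
  name: demo-credentials
type: Opaque
data:
  password: aHVudGVyMg==            # trivially decodable
---
# Sketch of API-server encryption at rest (placeholder key, not a real one).
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources: ["secrets"]
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded-32-byte-key>
  - identity: {}                    # fallback so existing, unencrypted data stays readable
```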

Achieving High Availability

The control plane setup ensures resilience through:

  • Multiple Control Plane Nodes: Each node runs API server, controller manager, and scheduler.
  • Replicated etcd Cluster: Distributed across control-plane nodes.
  • Load Balancing: HAProxy and Keepalived manage a virtual IP so the API server stays reachable even if a control-plane node fails (the kubeadm side of this is sketched below).
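
The piece that ties the VIP to kubeadm is controlPlaneEndpoint: every node joins through the load-balanced address rather than any single node. The address below is a placeholder for whatever the Keepalived-managed VIP ends up being.

```yaml
# Excerpt of the ClusterConfiguration passed to kubeadm init --config.
# The VIP is a placeholder; HAProxy on the VIP forwards to each node's API server.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: "10.0.10.100:6443"
```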

Homelab Cluster Implementation

The cluster comprises:

  • Admin Node: Raspberry Pi 5 (16GB RAM), Ubuntu 25.04.
  • Control Plane Nodes: 3x Raspberry Pi 5 (8GB RAM), Ubuntu 24.10.
  • Worker Nodes: 2x Minisforum MS-01.

Automation and configuration management utilize Ansible, with sensitive data securely stored in Ansible Vault. VLANs segment network traffic into management, control plane, pod, service, and storage networks.
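
For a sense of how that is wired together, below is a hypothetical Ansible inventory sketch. Hostnames, addresses, and group names are placeholders, and anything sensitive stays in Ansible Vault rather than in the inventory itself.

```yaml
# Hypothetical inventory.yaml; names and addresses are placeholders.
all:
  children:
    admin:
      hosts:
        admin-pi: { ansible_host: 10.0.10.5 }    # Raspberry Pi 5, 16GB
    control_plane:
      hosts:
        cp-01: { ansible_host: 10.0.20.11 }
        cp-02: { ansible_host: 10.0.20.12 }
        cp-03: { ansible_host: 10.0.20.13 }
    workers:
      hosts:
        ms01-01: { ansible_host: 10.0.20.21 }    # Minisforum MS-01
        ms01-02: { ansible_host: 10.0.20.22 }
  vars:
    ansible_user: "{{ vault_ansible_user }}"     # resolved from Ansible Vault
```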

Why Choose kubeadm?

Instead of continuing with the manual approach demonstrated in “Kubernetes the Hard Way”, kubeadm streamlines setup by:

  • Automating certificate management.
  • Simplifying complex etcd clustering.
  • Creating static pod definitions.
  • Ensuring proper component authentication.
  • Mirroring real-world production setups.

With infrastructure ready, it’s time to dive into configuring the control plane.
