
Running Kubernetes the Hard Way on AMD64 Homelab Nodes


Intro

I used Kubernetes the Hard Way to close some gaps in my Kubernetes fundamentals. In the past, I relied on highly automated setups and struggled to debug the final cluster state. This walkthrough forced me to understand each moving part directly.

The main reference was the upstream Kubernetes the Hard Way repository.

Lab Topology

I ran the lab on one local hypervisor host using a mix of LXC containers and VMs.

Name     ID   Role                           CPU  RAM    Storage  IP           Type
jumpbox  115  Administration host            1    512MB  10GB     192.0.2.115  LXC
server   117  Kubernetes control plane host  1    2GB    20GB     192.0.2.117  LXC
node-0   118  Kubernetes worker node         1    2GB    20GB     192.0.2.118  VM
node-1   119  Kubernetes worker node         1    2GB    20GB     192.0.2.119  VM

Key Deviations from the Guide

Most steps matched the upstream guide exactly. The differences came down to architecture-specific binaries (the guide targets ARM64; my nodes are AMD64) and one file the guide references but does not provide.

1) Jumpbox Downloads: ARM64 -> AMD64

I updated downloads.txt to pull the AMD64 binaries instead of the ARM64 ones:

https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/amd64/kubectl
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/amd64/kube-apiserver
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/amd64/kube-controller-manager
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/amd64/kube-scheduler
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.28.0/crictl-v1.28.0-linux-amd64.tar.gz
https://github.com/opencontainers/runc/releases/download/v1.1.9/runc.amd64
https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz
https://github.com/containerd/containerd/releases/download/v1.7.8/containerd-1.7.8-linux-amd64.tar.gz
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/amd64/kube-proxy
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/amd64/kubelet
https://github.com/etcd-io/etcd/releases/download/v3.4.27/etcd-v3.4.27-linux-amd64.tar.gz
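The per-URL edits can also be done mechanically. A minimal sketch, assuming every upstream URL embeds the architecture as the literal substring arm64 (which held for my copy of downloads.txt); shown here on a single URL:

```shell
# Swap the architecture substring in a download URL. I ran the same
# substitution across the whole downloads.txt with sed -i.
echo "https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kubectl" \
  | sed 's/arm64/amd64/g'
```

Applied to the file itself, that is `sed -i 's/arm64/amd64/g' downloads.txt` before fetching.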

2) Compute Resource File

My machines.txt looked like this:

192.0.2.117 server.kubernetes.local server
192.0.2.118 node-0.kubernetes.local node-0 10.200.0.0/24
192.0.2.119 node-1.kubernetes.local node-1 10.200.1.0/24
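The setup scripts consume this file field by field: IP, FQDN, short hostname, and (for workers only) the pod subnet. A quick sketch of that parsing, with my rows inlined for illustration:

```shell
# Fields per row: IP, FQDN, hostname, optional pod subnet (workers only).
while read -r IP FQDN HOST SUBNET; do
  echo "host=${HOST} ip=${IP} pod_subnet=${SUBNET:-none}"
done <<'EOF'
192.0.2.117 server.kubernetes.local server
192.0.2.118 node-0.kubernetes.local node-0 10.200.0.0/24
192.0.2.119 node-1.kubernetes.local node-1 10.200.1.0/24
EOF
```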

3) Authentication Config Generation

The command below fails with a "File exists" error if /var/lib/kubelet was already created in an earlier step:

mkdir /var/lib/{kube-proxy,kubelet}

In my case this error was non-blocking, and the subsequent file copy operations still succeeded.
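An idempotent variant sidesteps the error entirely, since -p makes mkdir a no-op for directories that already exist:

```shell
# -p: create parents as needed and never fail on an existing directory.
mkdir -p /var/lib/kube-proxy /var/lib/kubelet
```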

4) Missing Encryption Config File

I had to add configs/encryption-config.yaml manually; the guide's scripts reference the file but do not provide it:

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
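The ${ENCRYPTION_KEY} placeholder is filled in at generation time with a random key from /dev/urandom. A sketch of the key generation (rendering the template is then a one-line substitution, e.g. with envsubst, if I recall the guide's templating step correctly):

```shell
# 32 random bytes, base64-encoded, become the AES-CBC key
# (44 base64 characters).
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
echo "key: ${ENCRYPTION_KEY}"
```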

5) etcd Bootstrap Commands (AMD64)

# From the jumpbox: copy the etcd release and its unit file to the server.
scp \
  downloads/etcd-v3.4.27-linux-amd64.tar.gz \
  units/etcd.service \
  root@server:~/

# Then, on the server itself:
{
  tar -xvf etcd-v3.4.27-linux-amd64.tar.gz
  mv etcd-v3.4.27-linux-amd64/etcd* /usr/local/bin/
}

6) Worker Bootstrap Commands (AMD64)

# From the jumpbox: ship binaries, configs, and unit files to each worker.
for host in node-0 node-1; do
  scp \
    downloads/runc.amd64 \
    downloads/crictl-v1.28.0-linux-amd64.tar.gz \
    downloads/cni-plugins-linux-amd64-v1.3.0.tgz \
    downloads/containerd-1.7.8-linux-amd64.tar.gz \
    downloads/kubectl \
    downloads/kubelet \
    downloads/kube-proxy \
    configs/99-loopback.conf \
    configs/containerd-config.toml \
    configs/kubelet-config.yaml \
    configs/kube-proxy-config.yaml \
    units/containerd.service \
    units/kubelet.service \
    units/kube-proxy.service \
    root@$host:~/
done

# Then, on each worker (/opt/cni/bin must exist before extracting into it):
{
  mkdir -p containerd /opt/cni/bin
  tar -xvf crictl-v1.28.0-linux-amd64.tar.gz
  tar -xvf containerd-1.7.8-linux-amd64.tar.gz -C containerd
  tar -xvf cni-plugins-linux-amd64-v1.3.0.tgz -C /opt/cni/bin/
  mv runc.amd64 runc
  chmod +x crictl kubectl kube-proxy kubelet runc
  mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
  mv containerd/bin/* /bin/
}


Notes During Route Provisioning

When running ip route add via SSH, I saw the following output:

Pseudo-terminal will not be allocated because stdin is not a terminal.
Linux node-0 6.1.0-25-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.106-3 (2024-08-26) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.

This looked noisy but is benign: the first line is ssh noting that it will not allocate a pseudo-terminal because its stdin is not a TTY, and the rest is the node's standard login banner. Route provisioning still completed successfully.
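For reference, the routes being added are derivable from machines.txt: each host needs a route to the other worker's pod subnet via that worker's node IP. A dry-run sketch that only prints the commands (subnets and IPs are my lab's values):

```shell
# Emit an "ip route add" command for every worker row carrying a pod subnet.
while read -r IP FQDN HOST SUBNET; do
  if [ -n "$SUBNET" ]; then
    echo "ip route add ${SUBNET} via ${IP}"
  fi
done <<'EOF'
192.0.2.118 node-0.kubernetes.local node-0 10.200.0.0/24
192.0.2.119 node-1.kubernetes.local node-1 10.200.1.0/24
EOF
```

On each host, the printed command for the *other* worker's subnet is what actually gets run over SSH.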

Smoke Test Result and Open Question

The final smoke test command:

curl -I http://node-0:32277

still failed for me; the same failure has been reported in an open issue against the guide.

Traffic routing from nodes to pods was clear after manually adding routes in the previous step. What remained unclear was service IP behavior (for example, an internal service IP like 10.32.0.184) in this lab setup: my working theory is that service IPs are virtual, realized only by the NAT rules kube-proxy programs on each node, so they are reachable from the nodes themselves but not routable from outside the cluster hosts.

Conclusion

The lab was useful and mostly reproducible with AMD64-specific binary substitutions. The two main gotchas were:

  • the missing encryption config file in the guide;
  • lingering ambiguity around the final service-access smoke test.

Even with those rough edges, this run gave me a much better mental model of how a Kubernetes cluster is assembled from first principles.
