Why This Stack?

Running Kubernetes locally or in a homelab doesn’t require a cloud account. If you have a Linux host with KVM support, you can provision virtual machines with Terraform’s dmacvicar/libvirt provider, configure them with Ansible, and end up with a reproducible, single-node cluster that mirrors how you’d build real infrastructure, minus the cloud bill.

This guide walks through every layer: host prerequisites, Terraform configuration, cloud-init bootstrapping, Ansible playbooks for kubeadm, and post-install verification. All code is provided inline and explained.

Prerequisites

Before starting, make sure your host machine satisfies the following.

Hardware: KVM support is required. Verify with:

grep -Ec '(vmx|svm)' /proc/cpuinfo
# Output > 0 means hardware virtualization is available

sudo kvm-ok   # provided by the cpu-checker package
# INFO: /dev/kvm exists. KVM acceleration can be used

Software on the host:

# Ubuntu/Debian
sudo apt update && sudo apt install -y \
  qemu-kvm libvirt-daemon-system libvirt-clients \
  bridge-utils virtinst virt-manager \
  mkisofs xsltproc

# Verify libvirt is running
sudo systemctl enable --now libvirtd
sudo usermod -aG libvirt $USER
# Log out and back in for the group change to take effect

Terraform (≥ 1.5):

wget -O- https://apt.releases.hashicorp.com/gpg | \
  sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg

echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
  https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
  sudo tee /etc/apt/sources.list.d/hashicorp.list

sudo apt update && sudo apt install -y terraform
terraform version

Ansible (≥ 2.15):

sudo apt install -y pipx
pipx install ansible-core
pipx inject ansible-core ansible
ansible --version

SSH key pair (we’ll inject the public key into the VM via cloud-init):

ssh-keygen -t ed25519 -f ~/.ssh/k8s_lab -N "" -C "k8s-lab-key"

Project Structure

Organize everything under a single directory:

k8s-single-node/
├── terraform/
│   ├── main.tf
│   ├── variables.tf
│   ├── outputs.tf
│   ├── versions.tf
│   └── cloud_init.cfg
├── ansible/
│   ├── inventory.ini
│   ├── ansible.cfg
│   ├── playbook.yml
│   └── roles/
│       ├── common/
│       │   └── tasks/
│       │       └── main.yml
│       ├── containerd/
│       │   ├── tasks/
│       │   │   └── main.yml
│       │   └── handlers/
│       │       └── main.yml
│       └── kubernetes/
│           ├── tasks/
│           │   └── main.yml
│           └── templates/
│               └── kubeadm-config.yml.j2
└── Makefile

Part 1. Terraform: Provisioning the VM

Provider Configuration

The dmacvicar/libvirt provider talks directly to the libvirt daemon over its Unix socket. No cloud credentials, no API tokens.

terraform/versions.tf

terraform {
  required_version = ">= 1.5.0"

  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "~> 0.8.1"
    }
  }
}

provider "libvirt" {
  uri = "qemu:///system"
}

The qemu:///system URI connects to the system-level libvirt daemon. If you need to provision on a remote host, you can use qemu+ssh://user@remote/system instead.
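A minimal sketch of the remote variant (the user and hostname are placeholders; the connecting user needs key-based SSH access and membership in the remote host’s libvirt group):

```hcl
provider "libvirt" {
  # Hypothetical remote libvirt host; adjust user and hostname to your environment.
  uri = "qemu+ssh://ops@lab-host.example/system"
}
```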

Variables

terraform/variables.tf

variable "vm_name" {
  description = "Name of the virtual machine"
  type        = string
  default     = "k8s-node"
}

variable "vcpus" {
  description = "Number of virtual CPUs"
  type        = number
  default     = 4
}

variable "memory_mb" {
  description = "Memory in megabytes"
  type        = number
  default     = 8192
}

variable "disk_size_bytes" {
  description = "Root disk size in bytes"
  type        = number
  default     = 42949672960 # 40 GB
}

variable "ubuntu_image_url" {
  description = "URL for the Ubuntu cloud image"
  type        = string
  default     = "https://cloud-images.ubuntu.com/releases/24.04/release/ubuntu-24.04-server-cloudimg-amd64.img"
}

variable "network_name" {
  description = "Libvirt network name"
  type        = string
  default     = "k8s-net"
}

variable "network_cidr" {
  description = "CIDR for the libvirt NAT network"
  type        = list(string)
  default     = ["10.17.3.0/24"]
}

variable "node_ip" {
  description = "Static IP address for the node"
  type        = string
  default     = "10.17.3.10"
}

variable "ssh_public_key_path" {
  description = "Path to the SSH public key to inject"
  type        = string
  default     = "~/.ssh/k8s_lab.pub"
}
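Any of these defaults can be overridden without editing variables.tf by dropping a terraform.tfvars file next to it. The values below are illustrative:

```hcl
# terraform/terraform.tfvars -- example overrides (illustrative values)
vm_name   = "k8s-lab"
vcpus     = 6
memory_mb = 12288

# ~60 GB root disk
disk_size_bytes = 64424509440
```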

Cloud-Init Configuration

Cloud-init handles first-boot configuration: user creation, SSH key injection, package installation, and kernel module/sysctl setup that Kubernetes requires.

terraform/cloud_init.cfg

#cloud-config

hostname: k8s-node
fqdn: k8s-node.local
manage_etc_hosts: true

users:
  - name: kube
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    groups: [adm, sudo]
    lock_passwd: true
    ssh_authorized_keys:
      - ${ssh_public_key}

package_update: true
package_upgrade: true
packages:
  - qemu-guest-agent
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg
  - lsb-release
  - socat
  - conntrack
  - ipset

# Load kernel modules required by Kubernetes networking
write_files:
  - path: /etc/modules-load.d/k8s.conf
    content: |
      overlay
      br_netfilter      
  - path: /etc/sysctl.d/k8s.conf
    content: |
      net.bridge.bridge-nf-call-iptables  = 1
      net.bridge.bridge-nf-call-ip6tables = 1
      net.ipv4.ip_forward                 = 1      

runcmd:
  - modprobe overlay
  - modprobe br_netfilter
  - sysctl --system
  - swapoff -a
  - sed -i '/\sswap\s/d' /etc/fstab
  - systemctl enable --now qemu-guest-agent

power_state:
  mode: reboot
  message: "Cloud-init complete, rebooting"
  timeout: 30
  condition: true

The runcmd section disables swap (a hard requirement for kubelet) and loads the overlay and br_netfilter kernel modules that container networking depends on. The sysctl parameters ensure bridge traffic is processed by iptables, which is necessary for kube-proxy and CNI plugins to function.
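The sed expression is worth a closer look: it deletes any fstab line containing a whitespace-delimited "swap" field, which is how swap entries appear in the filesystem-type column. You can sanity-check it locally against a throwaway sample:

```shell
# Build a sample fstab with one filesystem entry and one swap entry.
printf '%s\n' \
  'UUID=abcd / ext4 defaults 0 1' \
  '/swap.img none swap sw 0 0' > /tmp/fstab.sample

# Same expression as in runcmd: delete lines with a whitespace-delimited "swap" field.
sed -i '/\sswap\s/d' /tmp/fstab.sample

cat /tmp/fstab.sample
# UUID=abcd / ext4 defaults 0 1
```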

Main Configuration

terraform/main.tf

# --- Network ---
resource "libvirt_network" "k8s_network" {
  name      = var.network_name
  mode      = "nat"
  autostart = true
  addresses = var.network_cidr

  dns {
    enabled    = true
    local_only = true
  }

  dhcp {
    enabled = false # We use a static IP via cloud-init
  }
}

# --- Base Volume (backing image) ---
resource "libvirt_volume" "ubuntu_base" {
  name   = "ubuntu-24.04-base.qcow2"
  pool   = "default"
  source = var.ubuntu_image_url
  format = "qcow2"
}

# --- Node Disk (copy-on-write layer on top of base) ---
resource "libvirt_volume" "k8s_node_disk" {
  name           = "${var.vm_name}-disk.qcow2"
  pool           = "default"
  base_volume_id = libvirt_volume.ubuntu_base.id
  size           = var.disk_size_bytes
  format         = "qcow2"
}

# --- Cloud-Init ISO ---
resource "libvirt_cloudinit_disk" "k8s_cloudinit" {
  name = "${var.vm_name}-cloudinit.iso"
  pool = "default"

  user_data = templatefile("${path.module}/cloud_init.cfg", {
    ssh_public_key = trimspace(file(pathexpand(var.ssh_public_key_path)))
  })

  network_config = <<-EOF
    version: 2
    ethernets:
      ens3:
        addresses:
          - ${var.node_ip}/24
        routes:
          - to: 0.0.0.0/0
            via: ${cidrhost(var.network_cidr[0], 1)}
        nameservers:
          addresses:
            - ${cidrhost(var.network_cidr[0], 1)}
            - 1.1.1.1
  EOF
}

# --- Domain (VM) ---
resource "libvirt_domain" "k8s_node" {
  name   = var.vm_name
  vcpu   = var.vcpus
  memory = var.memory_mb

  cloudinit = libvirt_cloudinit_disk.k8s_cloudinit.id

  cpu {
    mode = "host-passthrough"
  }

  network_interface {
    network_id     = libvirt_network.k8s_network.id
    wait_for_lease = false
    addresses      = [var.node_ip]
  }

  disk {
    volume_id = libvirt_volume.k8s_node_disk.id
    scsi      = false
  }

  console {
    type        = "pty"
    target_type = "serial"
    target_port = "0"
  }

  graphics {
    type        = "vnc"
    listen_type = "address"
    autoport    = true
  }

  qemu_agent = true

  provisioner "remote-exec" {
    connection {
      type        = "ssh"
      user        = "kube"
      private_key = file(pathexpand(replace(var.ssh_public_key_path, ".pub", "")))
      host        = var.node_ip
      timeout     = "5m"
    }

    inline = ["echo 'SSH is ready'"]
  }
}

A few details worth calling out:

The base volume is downloaded once and used as a backing image. The node disk is a thin-provisioned copy-on-write layer on top of it, so spinning up additional VMs later is fast and space-efficient.

cpu.mode = "host-passthrough" exposes the host’s actual CPU features to the guest, which avoids performance penalties and ensures compatibility with any CPU-specific instructions that containerd or the kernel might use.

The remote-exec provisioner at the end is a gate. Terraform will block until SSH is reachable, which means cloud-init has finished and the VM has rebooted. This ensures the Ansible step doesn’t start too early.
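The cidrhost(var.network_cidr[0], 1) expressions in the network_config resolve to the network’s first host address, which libvirt assigns to the NAT gateway. For a /24 you can reproduce the arithmetic with plain shell string handling (a sketch that only holds for /24 networks):

```shell
cidr="10.17.3.0/24"
network="${cidr%/*}"     # strip the prefix length -> 10.17.3.0
prefix="${network%.*}"   # strip the last octet   -> 10.17.3
gateway="${prefix}.1"    # host #1, as cidrhost(cidr, 1) computes
echo "$gateway"
# 10.17.3.1
```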

Outputs

terraform/outputs.tf

output "node_ip" {
  description = "IP address of the Kubernetes node"
  value       = var.node_ip
}

output "ssh_command" {
  description = "SSH command to connect to the node"
  value       = "ssh -i ${replace(var.ssh_public_key_path, ".pub", "")} kube@${var.node_ip}"
}

output "ansible_inventory" {
  description = "Ansible inventory content"
  value       = <<-INV
    [k8s_node]
    ${var.node_ip} ansible_user=kube ansible_ssh_private_key_file=${replace(var.ssh_public_key_path, ".pub", "")}
  INV
}

Running Terraform

cd terraform/
terraform init
terraform plan -out=tfplan
terraform apply tfplan

After apply completes, generate the Ansible inventory file:

terraform output -raw ansible_inventory > ../ansible/inventory.ini
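With the default variables, the generated inventory.ini should contain something like:

```ini
[k8s_node]
10.17.3.10 ansible_user=kube ansible_ssh_private_key_file=~/.ssh/k8s_lab
```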

Part 2. Ansible: Bootstrapping Kubernetes

Ansible Configuration

ansible/ansible.cfg

[defaults]
inventory         = inventory.ini
host_key_checking = False
timeout           = 30
forks             = 1
roles_path        = roles

[privilege_escalation]
become        = True
become_method = sudo

[ssh_connection]
pipelining    = True
ssh_args      = -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no

SSH pipelining reduces the number of SSH operations per task, which noticeably speeds up playbook execution. We disable host key checking because we’re working with ephemeral VMs whose keys change on every rebuild.

Role: common

This role validates that cloud-init did its job and applies any remaining system-level configuration.

ansible/roles/common/tasks/main.yml

---
- name: Wait for cloud-init to finish
  ansible.builtin.command: cloud-init status --wait
  changed_when: false
  timeout: 300

- name: Verify swap is disabled
  ansible.builtin.command: swapon --show
  register: swap_status
  changed_when: false

- name: Fail if swap is still active
  ansible.builtin.fail:
    msg: "Swap is still active. Kubernetes requires swap to be disabled."
  when: swap_status.stdout | length > 0

- name: Ensure kernel modules are loaded
  community.general.modprobe:
    name: "{{ item }}"
    state: present
    persistent: present
  loop:
    - overlay
    - br_netfilter

- name: Verify sysctl parameters
  ansible.posix.sysctl:
    name: "{{ item.key }}"
    value: "{{ item.value }}"
    sysctl_file: /etc/sysctl.d/k8s.conf
    reload: true
  loop:
    - { key: "net.bridge.bridge-nf-call-iptables",  value: "1" }
    - { key: "net.bridge.bridge-nf-call-ip6tables", value: "1" }
    - { key: "net.ipv4.ip_forward",                 value: "1" }

- name: Set timezone
  community.general.timezone:
    name: UTC

Role: containerd

Kubernetes needs a CRI-compatible container runtime. containerd is the standard choice; it’s what most managed Kubernetes distributions use under the hood.

ansible/roles/containerd/tasks/main.yml

---
- name: Install containerd dependencies
  ansible.builtin.apt:
    name:
      - ca-certificates
      - curl
      - gnupg
    state: present
    update_cache: true

- name: Create keyrings directory
  ansible.builtin.file:
    path: /etc/apt/keyrings
    state: directory
    mode: "0755"

- name: Add Docker GPG key
  ansible.builtin.shell: |
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
      gpg --dearmor -o /etc/apt/keyrings/docker.gpg    
  args:
    creates: /etc/apt/keyrings/docker.gpg

- name: Add Docker repository
  ansible.builtin.apt_repository:
    repo: >-
      deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg]
      https://download.docker.com/linux/ubuntu
      {{ ansible_distribution_release }} stable      
    filename: docker
    state: present

- name: Install containerd
  ansible.builtin.apt:
    name: containerd.io
    state: present
    update_cache: true

- name: Create containerd config directory
  ansible.builtin.file:
    path: /etc/containerd
    state: directory
    mode: "0755"

- name: Generate default containerd config
  ansible.builtin.shell: containerd config default > /etc/containerd/config.toml
  args:
    creates: /etc/containerd/config.toml
  notify: Restart containerd

- name: Ensure SystemdCgroup is enabled in containerd config
  ansible.builtin.replace:
    path: /etc/containerd/config.toml
    regexp: 'SystemdCgroup = false'
    replace: 'SystemdCgroup = true'
  notify: Restart containerd

- name: Enable CRI plugin in containerd config
  ansible.builtin.replace:
    path: /etc/containerd/config.toml
    regexp: 'disabled_plugins = \["cri"\]'
    replace: 'disabled_plugins = []'
  notify: Restart containerd

- name: Ensure sandbox_image is set to registry.k8s.io/pause:3.10
  ansible.builtin.replace:
    path: /etc/containerd/config.toml
    regexp: 'sandbox_image = ".*"'
    replace: 'sandbox_image = "registry.k8s.io/pause:3.10"'
  notify: Restart containerd

- name: Enable and start containerd
  ansible.builtin.systemd:
    name: containerd
    enabled: true
    state: started

ansible/roles/containerd/handlers/main.yml

---
- name: Restart containerd
  ansible.builtin.systemd:
    name: containerd
    state: restarted
    daemon_reload: true

The critical change here is setting SystemdCgroup = true. By default, containerd’s runc runtime uses the cgroupfs driver, while kubeadm configures the kubelet for the systemd cgroup driver (as the KubeletConfiguration below makes explicit). If the two don’t match, pods crash-loop and the node never becomes Ready. Aligning both on systemd is the recommended configuration on systemd-based hosts.
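After the role runs, the relevant snippet of /etc/containerd/config.toml should read as follows (table path shown for the containerd 1.x config schema; 2.x nests it differently):

```toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```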

Role: kubernetes

ansible/roles/kubernetes/tasks/main.yml

---
- name: Add Kubernetes GPG key
  ansible.builtin.shell: |
    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | \
      gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg    
  args:
    creates: /etc/apt/keyrings/kubernetes-apt-keyring.gpg

- name: Add Kubernetes repository
  ansible.builtin.apt_repository:
    repo: >-
      deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg]
      https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /      
    filename: kubernetes
    state: present

- name: Install Kubernetes components
  ansible.builtin.apt:
    name:
      - kubelet
      - kubeadm
      - kubectl
    state: present
    update_cache: true

- name: Pin Kubernetes package versions
  ansible.builtin.dpkg_selections:
    name: "{{ item }}"
    selection: hold
  loop:
    - kubelet
    - kubeadm
    - kubectl

- name: Enable kubelet service
  ansible.builtin.systemd:
    name: kubelet
    enabled: true

- name: Generate kubeadm config
  ansible.builtin.template:
    src: kubeadm-config.yml.j2
    dest: /etc/kubernetes/kubeadm-config.yml
    mode: "0644"

- name: Check if cluster is already initialized
  ansible.builtin.stat:
    path: /etc/kubernetes/admin.conf
  register: kubeadm_already_init

- name: Flush handlers (restart containerd before kubeadm init)
  ansible.builtin.meta: flush_handlers

- name: Initialize Kubernetes cluster
  ansible.builtin.command: >
    kubeadm init --config=/etc/kubernetes/kubeadm-config.yml    
  when: not kubeadm_already_init.stat.exists
  register: kubeadm_init
  changed_when: kubeadm_init.rc == 0

- name: Create .kube directory for kube user
  ansible.builtin.file:
    path: /home/kube/.kube
    state: directory
    owner: kube
    group: kube
    mode: "0755"

- name: Copy admin.conf to kube user
  ansible.builtin.copy:
    src: /etc/kubernetes/admin.conf
    dest: /home/kube/.kube/config
    remote_src: true
    owner: kube
    group: kube
    mode: "0600"

- name: Remove control-plane taint (allow scheduling on this node)
  ansible.builtin.command: >
    kubectl taint nodes --all node-role.kubernetes.io/control-plane- --kubeconfig=/etc/kubernetes/admin.conf    
  register: taint_result
  changed_when: "'untainted' in taint_result.stdout"
  failed_when:
    - taint_result.rc != 0
    - "'not found' not in taint_result.stderr"

- name: Install Cilium CLI
  ansible.builtin.shell: |
    CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
    curl -L --fail \
      https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-amd64.tar.gz | \
      tar xz -C /usr/local/bin    
  args:
    creates: /usr/local/bin/cilium

- name: Install Cilium CNI
  ansible.builtin.command: >
    cilium install --kubeconfig=/etc/kubernetes/admin.conf    
  register: cilium_install
  changed_when: "'Cilium was successfully installed' in cilium_install.stdout"
  failed_when:
    - cilium_install.rc != 0
    - "'already installed' not in cilium_install.stderr"

- name: Wait for node to become Ready
  ansible.builtin.command: >
    kubectl wait --for=condition=Ready node --all
    --timeout=300s --kubeconfig=/etc/kubernetes/admin.conf
  register: node_ready
  until: node_ready.rc == 0
  retries: 5
  delay: 10
  changed_when: false

ansible/roles/kubernetes/templates/kubeadm-config.yml.j2

---
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "{{ ansible_default_ipv4.address }}"
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: "v1.31.0"
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
  dnsDomain: "cluster.local"
controllerManager:
  extraArgs:
    - name: bind-address
      value: "0.0.0.0"
scheduler:
  extraArgs:
    - name: bind-address
      value: "0.0.0.0"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/containerd/containerd.sock

We set taints: [] in nodeRegistration and also explicitly remove the control-plane taint after init. For a single-node cluster this is essential: without it, no pods can be scheduled, because the only node carries the control-plane taint.

Cilium is used as the CNI plugin. Its eBPF datapath provides good performance and observability, and it can optionally replace kube-proxy entirely. If you prefer Flannel or Calico, substitute the installation step accordingly (and adjust the podSubnet if needed).
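If you go the Flannel route, a hedged sketch of the substitute Ansible task (the manifest URL is Flannel’s documented latest-release location; Flannel’s default pod network matches the 10.244.0.0/16 podSubnet above):

```yaml
- name: Install Flannel CNI (alternative to the Cilium tasks)
  ansible.builtin.command: >
    kubectl apply --kubeconfig=/etc/kubernetes/admin.conf
    -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
  register: flannel_install
  changed_when: "'created' in flannel_install.stdout"
```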

Main Playbook

ansible/playbook.yml

---
- name: Bootstrap single-node Kubernetes cluster
  hosts: k8s_node
  become: true
  gather_facts: true

  pre_tasks:
    - name: Wait for system to become reachable
      ansible.builtin.wait_for_connection:
        delay: 5
        timeout: 120

    - name: Gather facts after connection
      ansible.builtin.setup:

  roles:
    - common
    - containerd
    - kubernetes

  post_tasks:
    - name: Print cluster info
      ansible.builtin.command: >
        kubectl cluster-info --kubeconfig=/etc/kubernetes/admin.conf        
      register: cluster_info
      changed_when: false

    - name: Display cluster info
      ansible.builtin.debug:
        var: cluster_info.stdout_lines

    - name: Get node status
      ansible.builtin.command: >
        kubectl get nodes -o wide --kubeconfig=/etc/kubernetes/admin.conf        
      register: node_status
      changed_when: false

    - name: Display node status
      ansible.builtin.debug:
        var: node_status.stdout_lines

Running Ansible

cd ansible/
ansible-playbook playbook.yml

Expect the full run to take 5–10 minutes, depending on download speeds.


Part 3. Makefile for the Full Workflow

Wrap everything into a single Makefile at the project root:

Makefile

.PHONY: all infra configure destroy ssh status clean

TERRAFORM_DIR := terraform
ANSIBLE_DIR   := ansible

all: infra configure

infra:
	cd $(TERRAFORM_DIR) && terraform init
	cd $(TERRAFORM_DIR) && terraform apply -auto-approve
	cd $(TERRAFORM_DIR) && terraform output -raw ansible_inventory \
		> ../$(ANSIBLE_DIR)/inventory.ini

configure:
	cd $(ANSIBLE_DIR) && ansible-playbook playbook.yml

destroy:
	cd $(TERRAFORM_DIR) && terraform destroy -auto-approve
	rm -f $(ANSIBLE_DIR)/inventory.ini

ssh:
	$$(cd $(TERRAFORM_DIR) && terraform output -raw ssh_command)

status:
	ssh -i ~/.ssh/k8s_lab kube@10.17.3.10 \
		'kubectl get nodes -o wide && echo "---" && kubectl get pods -A'

clean: destroy

Build everything from scratch in one command:

make all

Part 4. Verification and Smoke Tests

Once the playbook finishes, SSH into the node and verify:

make ssh
# or: ssh -i ~/.ssh/k8s_lab kube@10.17.3.10

Check the node status:

kubectl get nodes -o wide
# NAME       STATUS   ROLES           AGE   VERSION   INTERNAL-IP   ...
# k8s-node   Ready    control-plane   5m    v1.31.0   10.17.3.10    ...

Check system pods:

kubectl get pods -n kube-system
# Every pod should be Running or Completed

Verify Cilium health:

cilium status
# Cilium:         OK
# Operator:       OK
# Hubble Relay:   disabled

Deploy a test workload:

kubectl create deployment nginx --image=nginx:latest --replicas=2
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc nginx
# Note the NodePort, then:
curl http://10.17.3.10:<NODE_PORT>