
PXE Boot

How network boot works in this environment, from power-on to Harvester install.

Boot Flow

Power on (nuc-01/02/03)
 └─► UEFI firmware → PXE boot via NIC
 └─► DHCP request to nuc-00-01 (${IP_PREFIX}.8)
 └─► DHCP returns:
       next-server = ${IP_PREFIX}.8 (TFTP)
       filename    = "ipxe.efi"
 └─► TFTP downloads ipxe.efi from nuc-00-01
 └─► iPXE client starts, re-runs DHCP
 └─► DHCP detects the iPXE user-class, returns:
       filename = "http://${ADMIN_IP}/harvester/harvester/ipxe-menu"
 └─► iPXE fetches the menu script over HTTP
 └─► Menu displayed (5 s timeout → local boot)
 └─► User selects a node role
 └─► Kernel + initrd + squashfs fetched via HTTP
 └─► Harvester installer boots
 └─► Reads config-{create,join}-nuc-0x.yaml
 └─► Automated install runs
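The two DHCP passes above hinge on user-class detection: the bare UEFI firmware sends no user-class and gets the TFTP binary, while the chainloaded iPXE client identifies itself and gets the HTTP menu URL instead. A minimal sketch of that logic in ISC dhcpd conditional syntax (illustrative only; the authoritative version is the rendered Files/nuc-00-01/etc/dhcpd.conf):

```
# Sketch of the iPXE chainload logic -- not the actual deployed file.
next-server ${IP_PREFIX}.8;

if exists user-class and option user-class = "iPXE" {
    # Second pass: iPXE is running, hand it the HTTP menu script.
    filename "http://${ADMIN_IP}/harvester/harvester/ipxe-menu";
} else {
    # First pass: plain UEFI firmware, hand it the iPXE binary via TFTP.
    filename "ipxe.efi";
}
```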

Key Services on nuc-00-01

| Service | Port | Purpose |
| --- | --- | --- |
| BIND (named) | 53/UDP, 53/TCP | Authoritative DNS for ${BASE_DOMAIN} |
| ISC dhcpd | 67/UDP | DHCP + PXE boot coordination |
| TFTP (tftpd) | 69/UDP | Serves ipxe.efi to UEFI clients |
| HTTP (Apache on nuc-00) | 80/TCP | Serves iPXE menu, Harvester artifacts, configs |

Key Files

| File | Location | Purpose |
| --- | --- | --- |
| ipxe.efi | /srv/tftpboot/ipxe.efi on nuc-00-01 | Initial UEFI iPXE binary |
| ipxe-menu.tmpl | Files/nuc-00/srv/www/htdocs/harvester/harvester/ | iPXE boot menu script |
| config-create-nuc-01.yaml.tmpl | same directory | Harvester create-cluster config |
| config-join-nuc-02.yaml.tmpl | same directory | Harvester join-cluster config (nuc-02) |
| config-join-nuc-03.yaml.tmpl | same directory | Harvester join-cluster config (nuc-03) |
| dhcpd.conf | Files/nuc-00-01/etc/ | DHCP + PXE coordination |

Template Rendering

The .tmpl files contain ${VAR} placeholders resolved by envsubst at install time. The nuc-00-01/post_install.sh script pulls these from the admin node and processes them:

envsubst < file.tmpl > file

Variables come from Scripts/env.sh + Scripts/env.d/${ENVIRONMENT}.sh.

Harvester ISO Hosting

Harvester artifacts must be downloaded and placed on nuc-00 before PXE booting:

ISO_VERSION="${HARVESTER_VERSION}"
ISO_DIR=/srv/www/htdocs/harvester/${ISO_VERSION}
mkdir -p "${ISO_DIR}"

BASE=https://releases.rancher.com/harvester/${ISO_VERSION}
for f in \
  harvester-${ISO_VERSION}-amd64.iso \
  harvester-${ISO_VERSION}-vmlinuz-amd64 \
  harvester-${ISO_VERSION}-initrd-amd64 \
  harvester-${ISO_VERSION}-rootfs-amd64.squashfs
do
  wget -P "${ISO_DIR}" "${BASE}/${f}"
done
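Before powering nodes on, it is worth confirming all four artifacts actually landed, since a missing or zero-byte file fails the boot midway. A sketch of that check (it creates a throwaway demo directory with placeholder files so it runs as-is; in practice point ISO_DIR at the htdocs path above and drop the placeholder loop):

```shell
# Sanity-check sketch: every artifact the iPXE menu references must exist
# and be non-empty. Version and directory below are illustrative.
ISO_VERSION="v1.3.1"      # stand-in; use ${HARVESTER_VERSION} in practice
ISO_DIR=$(mktemp -d)      # stand-in for /srv/www/htdocs/harvester/${ISO_VERSION}

artifacts="harvester-${ISO_VERSION}-amd64.iso
harvester-${ISO_VERSION}-vmlinuz-amd64
harvester-${ISO_VERSION}-initrd-amd64
harvester-${ISO_VERSION}-rootfs-amd64.squashfs"

# Placeholder files so this demo runs without a real download:
for f in $artifacts; do echo placeholder > "${ISO_DIR}/${f}"; done

missing=0
for f in $artifacts; do
  [ -s "${ISO_DIR}/${f}" ] || { echo "missing or empty: ${f}"; missing=1; }
done
echo "missing=${missing}"
```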

For Enclave, these files are pre-synced by modules/enclave/hauler_sync.sh and served from the local Hauler file server.

Boot Menu

The iPXE menu presents a 5-second countdown with a default of local boot. If no selection is made, the node boots from its local disk (useful for reboots after installation). Manually selecting a node role triggers the Harvester installer.

Menu options:

  • Create cluster — boot nuc-01 as the first Harvester node (creates the cluster)
  • Join cluster — boot nuc-02 or nuc-03 to join the existing cluster
  • Local boot — boot from local disk (default after timeout)
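The menu above can be sketched as an iPXE script (illustrative only: kernel arguments are abbreviated, ${V} is a stand-in for the Harvester version directory, and only the create-cluster target is shown; the real script is rendered from ipxe-menu.tmpl):

```
#!ipxe
menu Harvester PXE Boot
item create  Create cluster (nuc-01)
item join02  Join cluster (nuc-02)
item join03  Join cluster (nuc-03)
item local   Local boot
choose --default local --timeout 5000 target && goto ${target} || goto local

:create
# Abbreviated sketch; the real kernel line carries more arguments.
kernel http://${ADMIN_IP}/harvester/${V}/harvester-${V}-vmlinuz-amd64 ip=dhcp root=live:http://${ADMIN_IP}/harvester/${V}/harvester-${V}-rootfs-amd64.squashfs harvester.install.automatic=true harvester.install.config_url=http://${ADMIN_IP}/harvester/harvester/config-create-nuc-01.yaml
initrd http://${ADMIN_IP}/harvester/${V}/harvester-${V}-initrd-amd64
boot

:local
exit
```

The `choose --default local --timeout 5000` line is what implements the 5-second countdown and the fall-through to local disk.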

Node Boot Order

Boot nodes in sequence and wait for each one to complete installation before starting the next:

  1. Boot nuc-01 → select Create cluster → wait for Harvester UI to become available
  2. Boot nuc-02 → select Join cluster (nuc-02)
  3. Boot nuc-03 → select Join cluster (nuc-03)

After all three nodes have joined, the Harvester cluster VIP (${IP_PREFIX}.100) should be accessible.
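The "wait for each one" gating can be scripted rather than eyeballed. A sketch of a generic retry helper (the helper name, arguments, and the commented-out curl line are our own invention, not part of the repo):

```shell
# Poll a command until it succeeds or the attempt budget runs out.
wait_until() {   # usage: wait_until <max_attempts> <delay_seconds> <command...>
  attempts=$1; delay=$2; shift 2
  i=0
  while ! "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$attempts" ] && return 1
    sleep "$delay"
  done
  return 0
}

# In practice, gate step 2 on the cluster answering over HTTPS, e.g.:
#   wait_until 120 15 curl -ksf "https://${IP_PREFIX}.100/" -o /dev/null
wait_until 3 0 true && echo ready
```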