Controller Manager
The CAPT Controller Manager is the core component of the Cluster API Provider for Tinkerbell (CAPT). It is responsible for orchestrating the lifecycle of Kubernetes clusters on bare-metal infrastructure using Tinkerbell. The CAPT Controller Manager interacts with the Kubernetes API to watch for changes in cluster-related resources and then takes action to ensure that the physical infrastructure, managed by Tinkerbell, matches the desired state.
Core Responsibilities of the CAPT Controller Manager
- Resource Reconciliation:
  - The CAPT Controller Manager constantly monitors Kubernetes resources such as Cluster, Machine, MachineDeployment, and KubeadmControlPlane. When it detects a change in these resources, it reconciles the actual state of the physical infrastructure with the desired state.
  - For example, if a new machine is added to a MachineDeployment, the controller will trigger Tinkerbell to provision a new bare-metal server accordingly.
- Interaction with Tinkerbell:
  - The controller manager interacts with Tinkerbell’s API to create, update, or delete workflows that manage the provisioning of bare-metal servers.
  - It translates Kubernetes resource specifications into Tinkerbell workflows, ensuring that the right operating system, network configuration, and Kubernetes components are installed on the hardware (a sketch of such a workflow appears after this list).
- Cluster and Machine Management:
  - The controller manager oversees the entire lifecycle of the Kubernetes cluster and its constituent machines. This includes initial provisioning, scaling (adding or removing machines), upgrading, and eventually decommissioning.
  - It ensures that control plane nodes, worker nodes, and any other specialized nodes are provisioned and maintained according to the specifications defined in Kubernetes resources.
- Error Handling and Retry Logic:
  - If a provisioning task fails (e.g., a machine fails to boot correctly), the controller manager includes logic to retry the task, log the failure, and surface the error through the resource’s status conditions and events so users can diagnose it.
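To make the workflow translation concrete, here is a rough sketch of the kind of Tinkerbell Workflow object the controller creates for a single machine. The template and hardware names (ubuntu-install, nuc-0) and the MAC address are placeholders, and the exact fields depend on your Tinkerbell version:
apiVersion: tinkerbell.org/v1alpha1
kind: Workflow
metadata:
  name: my-cluster-control-plane-0
  namespace: default
spec:
  # Tinkerbell Template holding the OS-install actions (placeholder name)
  templateRef: ubuntu-install
  # Hardware record for the target machine (placeholder name)
  hardwareRef: nuc-0
  # Maps template devices to real hardware; the MAC here is a placeholder
  hardwareMap:
    device_1: "aa:bb:cc:dd:ee:ff"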
Working Example: Deploying Kubernetes on Intel NUC Hardware Using CAPT
Let’s walk through a practical example of using CAPT to manage a Kubernetes cluster on Intel NUC hardware, a popular choice for home labs and small-scale deployments.
1. Prerequisites
Before deploying the CAPT Controller Manager and provisioning Intel NUC hardware, ensure the following:
- Tinkerbell Stack: Deployed and configured to manage the Intel NUCs. This includes Tink Server, Boots, Rufio, OSIE, and Hegel.
- Intel NUC Hardware: Accessible via BMC (IPMI, Redfish, etc.) with network boot (PXE) configured, and registered with Tinkerbell as Hardware records (see the sketch below).
- Cluster API Components: Installed in a management cluster, which will manage the lifecycle of the target Kubernetes cluster.
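For reference, here is a minimal sketch of a Hardware record for one NUC, assuming the tinkerbell.org/v1alpha1 CRDs; the MAC address, IP addresses, and hostname are placeholders you would replace with your own values:
apiVersion: tinkerbell.org/v1alpha1
kind: Hardware
metadata:
  name: nuc-0
  namespace: default
spec:
  interfaces:
    - dhcp:
        mac: "aa:bb:cc:dd:ee:ff"   # placeholder MAC of the NUC's boot NIC
        hostname: nuc-0
        ip:
          address: 192.168.1.101   # placeholder address served by Boots
          netmask: 255.255.255.0
          gateway: 192.168.1.1
      netboot:
        allowPXE: true             # let this machine PXE-boot
        allowWorkflow: true        # let workflows run against it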
2. Deploying CAPT Controller Manager
First, you’ll need to deploy the CAPT Controller Manager in your management Kubernetes cluster. Here’s how you can do it:
CAPT Controller Manager Deployment
Create a Kubernetes manifest to deploy the CAPT Controller Manager:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: capt-controller-manager
  namespace: capt-system
spec:
  replicas: 1
  selector:
    matchLabels:
      control-plane: controller-manager
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
        - name: manager
          image: quay.io/tinkerbell/capt-controller-manager:latest
          command:
            - /manager
          args:
            - --leader-elect
            - --leader-elect-lease-duration=30s
            - --leader-elect-renew-deadline=20s
            - --leader-elect-retry-period=10s
          ports:
            - containerPort: 9443
              name: webhook-server
          volumeMounts:
            - mountPath: /webhook-server-cert
              name: webhook-cert
              readOnly: true
      volumes:
        - name: webhook-cert
          secret:
            secretName: capt-webhook-server-cert
Deploy this manifest using kubectl:
kubectl apply -f capt-controller-manager-deployment.yaml
This deployment launches the CAPT Controller Manager, which will now be responsible for managing clusters and machines using Tinkerbell.
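Before moving on, confirm the controller pod is running:
kubectl get pods -n capt-system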
3. Define the Cluster Resource
Next, define the Cluster resource, which represents the Kubernetes cluster you want to deploy on the Intel NUCs:
apiVersion: cluster.x-k8s.io/v1alpha4
kind: Cluster
metadata:
  name: my-cluster
  namespace: default
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["10.96.0.0/12"]
    pods:
      cidrBlocks: ["192.168.0.0/16"]
    serviceDomain: "cluster.local"
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha4
    kind: KubeadmControlPlane
    name: my-cluster-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
    kind: TinkerbellCluster
    name: my-cluster-infra
Apply this resource to your management cluster:
kubectl apply -f cluster.yaml
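At this stage the Cluster object exists but waits for the control plane and infrastructure objects it references; you can watch its phase with:
kubectl get cluster my-cluster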
4. Define the Tinkerbell Cluster Resource
The TinkerbellCluster resource defines the specific infrastructure settings for your Tinkerbell-managed environment:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: TinkerbellCluster
metadata:
  name: my-cluster-infra
  namespace: default
spec:
  controlPlaneEndpoint:
    host: "192.168.1.100"
    port: 6443
Apply this resource:
kubectl apply -f tinkerbell-cluster.yaml
5. Define the Control Plane Resource
The KubeadmControlPlane resource manages the control plane nodes for your cluster:
apiVersion: controlplane.cluster.x-k8s.io/v1alpha4
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane
  namespace: default
spec:
  replicas: 3
  version: v1.21.1
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
      kind: TinkerbellMachineTemplate
      name: my-cluster-control-plane-template
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        extraArgs:
          enable-admission-plugins: NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
    initConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          cloud-provider: external
    joinConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          cloud-provider: external
Apply the control plane resource:
kubectl apply -f kubeadm-control-plane.yaml
6. Define the Machine Template Resource
The TinkerbellMachineTemplate resource defines the hardware template for the machines that will be provisioned by Tinkerbell:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: TinkerbellMachineTemplate
metadata:
  name: my-cluster-control-plane-template
  namespace: default
spec:
  template:
    spec:
      providerID: "tinkerbell://my-cluster"
      hardwareSelector:
        manufacturer: "Intel"
        plan: "NUC"
Apply this template:
kubectl apply -f tinkerbell-machine-template.yaml
7. Define the MachineDeployment Resource
For worker nodes, define a MachineDeployment resource:
apiVersion: cluster.x-k8s.io/v1alpha4
kind: MachineDeployment
metadata:
  name: my-cluster-md-0
  namespace: default
spec:
  clusterName: my-cluster
  replicas: 3
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: my-cluster
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: my-cluster
    spec:
      clusterName: my-cluster
      version: v1.21.1
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha4
          kind: KubeadmConfigTemplate
          name: my-cluster-bootstrap-template
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
        kind: TinkerbellMachineTemplate
        name: my-cluster-worker-template
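Note that this MachineDeployment references two objects not defined above: a KubeadmConfigTemplate for node bootstrap configuration and a TinkerbellMachineTemplate for the worker hardware. Here are minimal sketches following the same pattern as the control-plane resources; the hardware selector values are placeholders for however your worker NUCs are identified:
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha4
kind: KubeadmConfigTemplate
metadata:
  name: my-cluster-bootstrap-template
  namespace: default
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            cloud-provider: external
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: TinkerbellMachineTemplate
metadata:
  name: my-cluster-worker-template
  namespace: default
spec:
  template:
    spec:
      hardwareSelector:
        manufacturer: "Intel"
        plan: "NUC"
Apply these before (or together with) the MachineDeployment itself.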
Apply the machine deployment resource:
kubectl apply -f machine-deployment.yaml
8. Monitoring the Cluster Provisioning
Once all the resources are defined and applied, the CAPT Controller Manager will begin provisioning the Kubernetes cluster on your Intel NUC hardware using Tinkerbell.
To monitor the progress:
kubectl get clusters -A
kubectl get machines -A
kubectl get kubeadmcontrolplanes -A
kubectl get machinedeployments -A
These commands show the current state of your cluster, machines, and the control plane.
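If clusterctl is installed, it can summarize the same information as a single tree, which is easier to scan while machines come up:
clusterctl describe cluster my-cluster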
9. Accessing the Kubernetes Cluster
Once the cluster is fully provisioned, you can access it using kubectl. To do this, you’ll need the kubeconfig file:
kubectl get secret my-cluster-kubeconfig -o jsonpath={.data.value} | base64 --decode > kubeconfig
export KUBECONFIG=./kubeconfig
kubectl get nodes
This will list the nodes in your newly provisioned Kubernetes cluster, confirming that your Intel NUCs have been successfully converted into Kubernetes nodes.
Conclusion
The CAPT Controller Manager plays a crucial role in the lifecycle management of Kubernetes clusters on bare-metal hardware using Tinkerbell. By integrating with Cluster API (CAPI), it allows you to declaratively manage the provisioning, scaling, and updating of Kubernetes clusters, leveraging Tinkerbell’s capabilities to handle the underlying physical infrastructure. In this example, we walked through deploying a Kubernetes cluster on Intel NUC hardware, illustrating how CAPT simplifies the process of managing bare-metal clusters with the same ease as cloud-based environments.