Cluster Resource
Deep Dive into the Cluster Resource in CAPT (Cluster API Provider for Tinkerbell)
The Cluster Resource in the Cluster API Provider for Tinkerbell (CAPT) is a fundamental component of the Cluster API (CAPI) ecosystem. It represents the desired state of a Kubernetes cluster, serving as a high-level abstraction that defines the overall configuration and behavior of the cluster. The Cluster Resource interacts with other resources, such as Machine, MachineDeployment, and infrastructure-specific resources (like TinkerbellCluster), to orchestrate the creation, management, and scaling of Kubernetes clusters on bare-metal infrastructure.
Core Responsibilities of the Cluster Resource
- Cluster-wide Configuration:
- The Cluster Resource specifies cluster-wide settings, such as the networking configuration, control plane endpoint, and service domain. These configurations are crucial for the proper operation of the Kubernetes cluster.
- It acts as the parent resource to other cluster-related resources, ensuring that the entire cluster is managed consistently and according to the defined specifications.
- Interaction with Infrastructure Providers:
- The Cluster Resource integrates with the Infrastructure Provider (in this case, Tinkerbell) to manage the underlying infrastructure. It uses the infrastructureRef field to link to an infrastructure-specific resource, such as TinkerbellCluster, which handles the bare-metal provisioning.
- This interaction allows the Cluster Resource to manage both the virtual Kubernetes components and the physical hardware seamlessly.
- Control Plane Management:
- The Cluster Resource coordinates the setup of the Kubernetes control plane, ensuring that the API server, etcd, and other critical components are properly configured and highly available.
- It references the KubeadmControlPlane resource, which manages the specifics of control plane nodes, including their creation, scaling, and upgrading.
- Networking Configuration:
- The Cluster Resource defines the networking settings for the cluster, such as the pod and service CIDR ranges and the DNS domain. These settings ensure that networking within the cluster is configured correctly and consistently across all nodes.
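On a live management cluster, you can confirm that the CAPI and CAPT resource types involved in these relationships are installed (this assumes both providers have already been initialized on the management cluster):
kubectl api-resources | grep -E 'cluster.x-k8s.io|tinkerbell'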
Core Components of the Cluster Resource
- ClusterSpec:
- Role: Defines the desired state of the Kubernetes cluster.
- Description: The spec section of the Cluster Resource outlines the cluster's configuration, including networking settings, the control plane endpoint, and references to other critical resources like the infrastructure provider and control plane.
- Functions:
- Specifies the cluster network configuration (pod and service CIDR, DNS domain).
- Defines the control plane endpoint where the Kubernetes API server will be accessible.
- Links to infrastructure-specific resources and the control plane management resource.
- ClusterStatus:
- Role: Represents the current state of the cluster.
- Description: The status section of the Cluster Resource provides real-time information about the cluster's state, such as the API endpoints, conditions, and the overall health of the cluster (commands to inspect these fields appear after this list).
- Functions:
- Tracks the current status of the cluster’s components (e.g., whether the control plane is available).
- Provides feedback on the reconciliation process to ensure that the cluster’s actual state matches the desired state.
- InfrastructureRef:
- Role: Links to the infrastructure provider resource.
- Description: This field in the Cluster Resource references the infrastructure-specific resource (e.g., TinkerbellCluster) that manages the underlying bare-metal infrastructure.
- Functions:
- Facilitates communication between the Cluster Resource and the infrastructure provider, ensuring that physical servers are provisioned and configured according to the cluster’s needs.
- ControlPlaneRef:
- Role: Links to the control plane management resource.
- Description: This field references the KubeadmControlPlane resource, which is responsible for managing the control plane nodes of the cluster.
- Functions:
- Ensures that the control plane is correctly initialized, scaled, and upgraded as needed.
- Coordinates the creation and management of control plane nodes in conjunction with the infrastructure provider.
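As a quick illustration of the status fields described above, the following commands (a minimal sketch, assuming the my-nuc-cluster example defined later in this walkthrough) read individual ClusterStatus fields directly:
kubectl get cluster my-nuc-cluster -o jsonpath='{.status.phase}'
kubectl get cluster my-nuc-cluster -o jsonpath='{.status.infrastructureReady}'
kubectl get cluster my-nuc-cluster -o jsonpath='{.status.controlPlaneReady}'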
Working Example: Using the Cluster Resource for an Intel NUC-based Kubernetes Cluster
Let’s walk through a practical example of how to define and use the Cluster Resource in CAPT to manage a Kubernetes cluster running on Intel NUC hardware.
1. Define the Cluster Resource
The Cluster resource defines the desired state of the Kubernetes cluster, including its networking configuration and control plane setup.
apiVersion: cluster.x-k8s.io/v1alpha4
kind: Cluster
metadata:
  name: my-nuc-cluster
  namespace: default
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["10.96.0.0/12"]
    pods:
      cidrBlocks: ["192.168.0.0/16"]
    serviceDomain: "cluster.local"
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha4
    kind: KubeadmControlPlane
    name: my-nuc-cluster-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
    kind: TinkerbellCluster
    name: my-nuc-cluster-infra
Key components of this configuration:
- clusterNetwork: Defines the networking settings for the cluster, including the CIDR blocks for services and pods.
- controlPlaneRef: Links to the KubeadmControlPlane resource, which will manage the control plane nodes.
- infrastructureRef: References the TinkerbellCluster resource that manages the underlying bare-metal infrastructure through Tinkerbell.
Services and Pods in clusterNetwork
1. Pods:
Definition: Pods are the smallest and simplest unit in the Kubernetes object model that you can create or deploy. A pod represents a single instance of a running process in your cluster and can contain one or more containers that share the same network namespace (IP address and port space).
• Networking:
• Pod Network (CIDR Block): The pods CIDR block specifies the range of IP addresses allocated to pods in the cluster. Each pod in the cluster is assigned an IP address from this range, and this IP address is used by the pod to communicate with other pods in the cluster.
• Example: In your configuration, 192.168.0.0/16 is specified as the pods CIDR block, meaning that any pod created in the cluster will be assigned an IP address from this range.
2. Services:
Definition: Services in Kubernetes are an abstraction that defines a logical set of pods and a policy by which to access them. Services are used to expose a set of pods to other parts of the cluster or external clients. Services provide stable IP addresses and DNS names for a group of pods and load-balance traffic across them.
• Networking:
• Service Network (CIDR Block): The services CIDR block defines the range of IP addresses allocated to Kubernetes services within the cluster. Each service gets an IP address from this range, known as the “ClusterIP”. This IP is used by other services or pods to access the service.
• Example: In your configuration, 10.96.0.0/12 is specified as the services CIDR block, meaning that any service created in the cluster will be assigned an IP address from this range.
Key Differences:
• Purpose:
• Pods: Represent individual or a group of containers running an application, typically with their own IP address and port.
• Services: Provide a stable endpoint to access a set of pods, usually with load balancing, and have their own IP address separate from the pods they represent.
• IP Address Management:
• Pods: Assigned IP addresses from the pods CIDR block. These IP addresses are ephemeral and are specific to the individual pods.
• Services: Assigned IP addresses from the services CIDR block. These IP addresses are stable for the lifetime of the service, even if the underlying pods change.
• Network Interaction:
• Pods: Communicate with each other over the pod network. Pods can directly communicate with other pods using their IP addresses.
• Services: Provide a stable entry point for accessing pods. Other pods or external clients use the service IP to reach the application running in the pods behind the service.
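To see the two address ranges side by side on a running cluster, compare pod IPs with service ClusterIPs (illustrative commands; output will vary):
kubectl get pods -A -o wide
kubectl get services -A
Pod IPs come from 192.168.0.0/16, while the CLUSTER-IP column shows addresses from 10.96.0.0/12.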
Apply this resource using kubectl:
kubectl apply -f cluster.yaml
This command creates the Cluster resource, which then begins orchestrating the setup of the Kubernetes cluster according to the specified configuration.
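You can confirm that the resource was accepted and watch its reconciliation progress (resource names as defined above):
kubectl get cluster my-nuc-cluster
kubectl describe cluster my-nuc-cluster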
2. Define the TinkerbellCluster Resource
The TinkerbellCluster resource is referenced by the Cluster resource and manages the interaction with the Tinkerbell infrastructure.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: TinkerbellCluster
metadata:
  name: my-nuc-cluster-infra
  namespace: default
spec:
  controlPlaneEndpoint:
    host: "192.168.1.200"
    port: 6443
Apply this resource:
kubectl apply -f tinkerbell-cluster.yaml
This resource configures the infrastructure settings for the entire Kubernetes cluster, ensuring that Tinkerbell manages the underlying hardware according to the cluster’s needs.
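To confirm that CAPT has picked up the resource (the plural resource name below follows the standard CRD convention):
kubectl get tinkerbellclusters -A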
3. Define the Control Plane Resource
The KubeadmControlPlane resource manages the Kubernetes control plane. It is referenced by the Cluster resource to ensure the control plane is properly managed.
apiVersion: controlplane.cluster.x-k8s.io/v1alpha4
kind: KubeadmControlPlane
metadata:
  name: my-nuc-cluster-control-plane
  namespace: default
spec:
  replicas: 3
  version: v1.21.1
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
      kind: TinkerbellMachineTemplate
      name: my-nuc-control-plane-template
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        extraArgs:
          enable-admission-plugins: NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
    initConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          cloud-provider: external
    joinConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          cloud-provider: external
Apply the control plane resource:
kubectl apply -f kubeadm-control-plane.yaml
This resource ensures that the control plane nodes are correctly provisioned and maintained, leveraging the TinkerbellMachineTemplate for consistent configuration.
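The my-nuc-control-plane-template referenced above is not defined in this walkthrough. As a minimal sketch, a TinkerbellMachineTemplate follows the standard CAPI machine-template shape; the empty inner spec is an illustrative placeholder where CAPT-specific machine settings (such as hardware selection) would go:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: TinkerbellMachineTemplate
metadata:
  name: my-nuc-control-plane-template
  namespace: default
spec:
  template:
    spec: {} # CAPT-specific machine settings go here (illustrative placeholder)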
4. Create a MachineDeployment Resource for Worker Nodes
To deploy worker nodes in the cluster, define a MachineDeployment resource that uses the TinkerbellMachineTemplate:
apiVersion: cluster.x-k8s.io/v1alpha4
kind: MachineDeployment
metadata:
  name: my-nuc-cluster-worker-md
  namespace: default
spec:
  clusterName: my-nuc-cluster
  replicas: 3
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: my-nuc-cluster
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: my-nuc-cluster
    spec:
      clusterName: my-nuc-cluster
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha4
          kind: KubeadmConfigTemplate
          name: my-nuc-cluster-bootstrap-template
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
        kind: TinkerbellMachineTemplate
        name: my-nuc-worker-template
      version: v1.21.1
Apply the machine deployment resource:
kubectl apply -f machine-deployment.yaml
This resource defines the worker nodes in your cluster, leveraging the TinkerbellMachineTemplate to ensure consistency across the nodes.
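The bootstrap template my-nuc-cluster-bootstrap-template is likewise not defined elsewhere in this walkthrough. A minimal sketch, mirroring the kubelet arguments used in the control plane example, might look like this:
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha4
kind: KubeadmConfigTemplate
metadata:
  name: my-nuc-cluster-bootstrap-template
  namespace: default
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            cloud-provider: external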
5. Monitor the Cluster Setup
As the CAPT controller manager processes these resources, the Kubernetes cluster will be provisioned on the Intel NUC hardware.
You can monitor the progress with the following commands:
kubectl get clusters -A
kubectl get machines -A
kubectl get kubeadmcontrolplanes -A
kubectl get machinedeployments -A
These commands provide insights into the status of the cluster, including the state of individual machines and the control plane.
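If the clusterctl CLI is available, the same information can be rendered as a single tree view:
clusterctl describe cluster my-nuc-cluster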
6. Access the Kubernetes Cluster
Once the cluster setup is complete, access the cluster using the kubeconfig file generated during the process:
kubectl get secret my-nuc-cluster-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > kubeconfig
export KUBECONFIG=./kubeconfig
kubectl get nodes
This command will list the nodes in your newly provisioned Kubernetes cluster, confirming that the Intel NUCs are successfully running as part of the cluster.
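From here, day-2 operations use the same declarative resources. For example, scaling the worker pool is a single command against the MachineDeployment (names as in this example; run it against the management cluster, not the workload kubeconfig exported above), after which CAPT provisions or deprovisions NUCs to match:
kubectl scale machinedeployment my-nuc-cluster-worker-md --replicas=5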
Conclusion
The Cluster Resource in CAPT is the cornerstone of the Kubernetes cluster management process. It defines the overall configuration and behavior of the cluster, integrating with the infrastructure provider to manage the underlying bare-metal servers. By abstracting the complexities of infrastructure management, the Cluster Resource allows you to define and manage Kubernetes clusters in a cloud-native way, even when running on bare-metal hardware like Intel NUCs. This approach ensures consistency, scalability, and ease of management across your Kubernetes environments.