Getting Started
This is a guide on getting started with Cluster API Provider Nutanix Cloud Infrastructure (CAPX). To learn about Cluster API in more depth, check out the Cluster API book.
For more information on how to install the Nutanix CSI Driver on a CAPX cluster, visit Nutanix CSI Driver installation with CAPX.
For more information on how CAPX handles credentials, visit Credential Management.
For more information on the port requirements for CAPX, visit Port Requirements.
Note
Nutanix Cloud Controller Manager (CCM) is a mandatory component starting from CAPX v1.5.0. Ensure all CAPX-managed Kubernetes clusters are configured to use Nutanix CCM before upgrading to v1.5.0 or later. See CAPX v1.5.x Upgrade Procedure.
Production Workflow
Build OS image for NutanixMachineTemplate resource
Cluster API Provider Nutanix Cloud Infrastructure (CAPX) uses the Image Builder project to build OS images used for the Nutanix machines.
Follow the steps detailed in Building CAPI Images for Nutanix Cloud Platform (NCP) to use Image Builder on the Nutanix Cloud Platform.
For a list of supported operating systems, visit the OS Image Configuration page.
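As a rough sketch, an Image Builder run looks like the following. The make target and variable names shown here are illustrative placeholders, so consult the Image Builder book for the exact target names and required Packer variables:
git clone https://github.com/kubernetes-sigs/image-builder.git
cd image-builder/images/capi
# Prism Central connection details for the Packer Nutanix builder
# (hypothetical example values; the exact variable names and how they are
# supplied are documented in the Image Builder book)
export NUTANIX_ENDPOINT="pc.example.com"
export NUTANIX_USERNAME="admin"
export NUTANIX_PASSWORD="secret"
export NUTANIX_CLUSTER_NAME="PE-cluster-1"
export NUTANIX_SUBNET_NAME="subnet-1"
# Build an Ubuntu 22.04 node image (illustrative target name)
make build-nutanix-ubuntu-2204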
Prerequisites for using Cluster API Provider Nutanix Cloud Infrastructure
The Cluster API installation section provides an overview of all required prerequisites:
- Common Prerequisites
- Install and/or configure a Kubernetes cluster
- Install clusterctl
- (Optional) Enabling Feature Gates
Make sure these prerequisites have been met before moving to the Configure and Install Cluster API Provider Nutanix Cloud Infrastructure step.
Configure and Install Cluster API Provider Nutanix Cloud Infrastructure
To initialize Cluster API Provider Nutanix Cloud Infrastructure, clusterctl requires the following variables, which should be set either in ~/.cluster-api/clusterctl.yaml or as environment variables.
NUTANIX_ENDPOINT: "" # IP or FQDN of Prism Central
NUTANIX_USER: "" # Prism Central user
NUTANIX_PASSWORD: "" # Prism Central password
NUTANIX_INSECURE: false # or true
KUBERNETES_VERSION: "v1.22.9"
WORKER_MACHINE_COUNT: 3
NUTANIX_SSH_AUTHORIZED_KEY: ""
NUTANIX_PRISM_ELEMENT_CLUSTER_NAME: ""
NUTANIX_MACHINE_TEMPLATE_IMAGE_NAME: ""
NUTANIX_SUBNET_NAME: ""
EXP_CLUSTER_RESOURCE_SET: true # Required for Nutanix CCM installation
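If you prefer environment variables over clusterctl.yaml, the same settings can be exported in the shell before running clusterctl (the values below are placeholders; substitute your own):
export NUTANIX_ENDPOINT="10.0.0.10"
export NUTANIX_USER="admin"
export NUTANIX_PASSWORD="secret"
export NUTANIX_PRISM_ELEMENT_CLUSTER_NAME="PE-cluster-1"
export NUTANIX_MACHINE_TEMPLATE_IMAGE_NAME="ubuntu-2204-kube-v1.22.9"
export NUTANIX_SUBNET_NAME="subnet-1"
export NUTANIX_SSH_AUTHORIZED_KEY="ssh-ed25519 AAAA... user@host"
export EXP_CLUSTER_RESOURCE_SET=true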
You can also see the required list of variables by running the following:
clusterctl generate cluster mycluster -i nutanix --list-variables
Required Variables:
- CONTROL_PLANE_ENDPOINT_IP
- KUBERNETES_VERSION
- NUTANIX_ENDPOINT
- NUTANIX_MACHINE_TEMPLATE_IMAGE_NAME
- NUTANIX_PASSWORD
- NUTANIX_PRISM_ELEMENT_CLUSTER_NAME
- NUTANIX_SSH_AUTHORIZED_KEY
- NUTANIX_SUBNET_NAME
- NUTANIX_USER
Optional Variables:
- CONTROL_PLANE_ENDPOINT_PORT (defaults to "6443")
- CONTROL_PLANE_MACHINE_COUNT (defaults to 1)
- KUBEVIP_LB_ENABLE (defaults to "false")
- KUBEVIP_SVC_ENABLE (defaults to "false")
- NAMESPACE (defaults to current Namespace in the KubeConfig file)
- NUTANIX_INSECURE (defaults to "false")
- NUTANIX_MACHINE_BOOT_TYPE (defaults to "legacy")
- NUTANIX_MACHINE_MEMORY_SIZE (defaults to "4Gi")
- NUTANIX_MACHINE_VCPU_PER_SOCKET (defaults to "1")
- NUTANIX_MACHINE_VCPU_SOCKET (defaults to "2")
- NUTANIX_PORT (defaults to "9440")
- NUTANIX_SYSTEMDISK_SIZE (defaults to "40Gi")
- WORKER_MACHINE_COUNT (defaults to 0)
Note
To prevent duplicate IP assignments, the IP address assigned to the CONTROL_PLANE_ENDPOINT_IP variable must be outside the Nutanix IPAM or DHCP range assigned to the subnet of the CAPX cluster.
Warning
Make sure Cluster Resource Set (CRS) is enabled before running clusterctl init (see the EXP_CLUSTER_RESOURCE_SET variable above).
Now you can instantiate Cluster API with the following:
clusterctl init -i nutanix
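Once initialization completes, you can check that the provider controllers are healthy; the CAPX controller manager typically runs in the capx-system namespace:
kubectl get pods -n capi-system
kubectl get pods -n capx-system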
Deploy a workload cluster on Nutanix Cloud Infrastructure
export TEST_CLUSTER_NAME=mytestcluster1
export TEST_NAMESPACE=mytestnamespace
CONTROL_PLANE_ENDPOINT_IP=x.x.x.x clusterctl generate cluster ${TEST_CLUSTER_NAME} \
-i nutanix \
--target-namespace ${TEST_NAMESPACE} \
--kubernetes-version v1.22.9 \
--control-plane-machine-count 1 \
--worker-machine-count 3 > ./cluster.yaml
kubectl create ns ${TEST_NAMESPACE}
kubectl apply -f ./cluster.yaml -n ${TEST_NAMESPACE}
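While the machines are being provisioned, the rollout can be followed from the management cluster, for example:
clusterctl describe cluster ${TEST_CLUSTER_NAME} -n ${TEST_NAMESPACE}
kubectl get machines -n ${TEST_NAMESPACE}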
For more information on the cluster.yaml file generated by clusterctl, visit the NutanixCluster and NutanixMachineTemplate documentation.
Access a workload cluster
To access resources on the cluster, you can get the kubeconfig with the following:
clusterctl get kubeconfig ${TEST_CLUSTER_NAME} -n ${TEST_NAMESPACE} > ${TEST_CLUSTER_NAME}.kubeconfig
kubectl --kubeconfig ./${TEST_CLUSTER_NAME}.kubeconfig get nodes
Install CNI on a workload cluster
You must deploy a Container Network Interface (CNI) based pod network add-on so that your pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
Note
Your pod network must not overlap with any of the host networks; you are likely to see problems if there is any overlap. If you find a collision between your network plugin's preferred pod network and some of your host networks, you must choose a suitable alternative CIDR block to use instead. It can be configured inside the cluster.yaml file generated by clusterctl generate cluster before applying it.
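As a sketch, the pod CIDR is configured on the Cluster object inside cluster.yaml; the range below is an arbitrary example:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: mytestcluster1
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 172.20.0.0/16 # choose a range that does not overlap your host networks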
Several external projects provide Kubernetes pod networks using CNI, some of which also support Network Policy.
See a list of add-ons that implement the Kubernetes networking model. At the time of writing, the most common are Calico and Cilium.
Follow the specific install guide for your selected CNI and install only one pod network per cluster.
Once a pod network has been installed, you can confirm that it is working by checking that the CoreDNS pod is running in the output of kubectl get pods --all-namespaces.
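For illustration, Calico can be installed on the workload cluster from its hosted manifest; the release in the URL is an example, so check the Calico documentation for the current one:
kubectl --kubeconfig ./${TEST_CLUSTER_NAME}.kubeconfig apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
kubectl --kubeconfig ./${TEST_CLUSTER_NAME}.kubeconfig get pods --all-namespaces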
Kube-vip settings
Kube-vip provides load balancing for the Kubernetes control plane, distributing API requests across control plane nodes. It can also provide load balancing for Kubernetes Services.
You can tweak kube-vip settings by using the following properties:
- KUBEVIP_LB_ENABLE: allows control plane load balancing using IPVS. See the Control Plane Load-Balancing documentation for further information.
- KUBEVIP_SVC_ENABLE: enables services of type LoadBalancer. See the Kubernetes Service Load Balancing documentation for further information.
- KUBEVIP_SVC_ELECTION: enables load balancing of load balancers. See Load Balancing Load Balancers for further information.
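For example, to render a manifest with both control plane and Service load balancing enabled (a sketch; combine with the variables and steps shown earlier):
KUBEVIP_LB_ENABLE=true KUBEVIP_SVC_ENABLE=true \
CONTROL_PLANE_ENDPOINT_IP=x.x.x.x clusterctl generate cluster ${TEST_CLUSTER_NAME} \
  -i nutanix \
  --target-namespace ${TEST_NAMESPACE} > ./cluster.yaml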
Delete a workload cluster
To remove a workload cluster from your management cluster, delete the Cluster object and the provider will clean up all resources.
kubectl delete cluster ${TEST_CLUSTER_NAME} -n ${TEST_NAMESPACE}
Note
Deleting the entire cluster template with kubectl delete -f ./cluster.yaml
may lead to pending resources requiring manual cleanup.
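Deletion can take a few minutes while the provider deprovisions the underlying VMs; you can watch progress with:
kubectl get cluster ${TEST_CLUSTER_NAME} -n ${TEST_NAMESPACE} -w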