Using Autoscaler in combination with CAPX
Warning
The scenario and features described on this page are experimental and have not been fully validated.
Autoscaler can be used in combination with Cluster API to automatically add or remove machines in a cluster.
Autoscaler can be deployed in different ways. This page provides an overview of multiple Autoscaler deployment scenarios in combination with CAPX. See the Testing section for how scale-up and scale-down events can be triggered to validate the Autoscaler behaviour.
More in-depth information on Autoscaler functionality can be found in the Kubernetes documentation.
All Autoscaler configuration parameters can be found here.
Scenario 1: Management cluster managing an external workload cluster
In this scenario, Autoscaler will be running on a management cluster and it will manage an external workload cluster. See the management cluster managing an external workload cluster section of the Kubernetes documentation for more information.
Steps
- Deploy a management cluster and workload cluster. The CAPI quickstart can be used as a starting point.

  Note

  Make sure a CNI is installed in the workload cluster.

- Download the example Autoscaler deployment file.
- Modify the deployment.yaml file:
  - Change the namespace of all resources to the namespace of the workload cluster.
  - Choose an autoscaler image.
  - Change the following parameters in the Deployment resource:

    ```yaml
    spec:
      containers:
      - name: cluster-autoscaler
        command:
        - /cluster-autoscaler
        args:
        - --cloud-provider=clusterapi
        - --kubeconfig=/mnt/kubeconfig/kubeconfig.yml
        - --clusterapi-cloud-config-authoritative
        - -v=1
        volumeMounts:
        - mountPath: /mnt/kubeconfig
          name: kubeconfig
          readOnly: true
      ...
      volumes:
      - name: kubeconfig
        secret:
          secretName: <workload cluster name>-kubeconfig
          items:
          - key: value
            path: kubeconfig.yml
    ```
- Apply the deployment.yaml file:

  ```shell
  kubectl apply -f deployment.yaml
  ```
- Add the annotations to the workload cluster MachineDeployment resource (an illustrative kubectl sketch is shown after this list).
- Test Autoscaler. Go to the Testing section.
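The annotations can be set by editing the MachineDeployment manifest or with kubectl. The snippet below is a minimal, illustrative sketch; the MachineDeployment name, namespace and size values are placeholders that depend on how the workload cluster was created:

```shell
# Illustrative only: substitute the actual MachineDeployment name and namespace
# of the workload cluster; the min/max values are example sizes.
kubectl annotate machinedeployment <machinedeployment name> \
  cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size="1" \
  cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size="5" \
  -n <workload cluster namespace>
```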
Scenario 2: Autoscaler running on workload cluster
In this scenario, Autoscaler will be deployed on top of the workload cluster directly. For Autoscaler to work, the workload cluster's Cluster API resources must be moved from the management cluster to the workload cluster itself.
Steps
- Deploy a management cluster and workload cluster. The CAPI quickstart can be used as a starting point.
- Get the kubeconfig file for the workload cluster and use this kubeconfig to log in to the workload cluster:

  ```shell
  clusterctl get kubeconfig <workload cluster name> -n <workload cluster namespace> > /path/to/kubeconfig
  ```
- Install a CNI in the workload cluster.
- Initialise the CAPX components on top of the workload cluster:

  ```shell
  clusterctl init --infrastructure nutanix
  ```
- Migrate the workload cluster custom resources to the workload cluster. Run the following command from the management cluster:

  ```shell
  clusterctl move -n <workload cluster ns> --to-kubeconfig /path/to/kubeconfig
  ```
- Verify that the cluster has been migrated by running the following command on the workload cluster:

  ```shell
  kubectl get cluster -A
  ```
- Download the example Autoscaler deployment file.
- Create the Autoscaler namespace:

  ```shell
  kubectl create ns autoscaler
  ```
- Apply the deployment.yaml file:

  ```shell
  kubectl apply -f deployment.yaml
  ```
- Add the annotations to the workload cluster MachineDeployment resource.
- Test Autoscaler. Go to the Testing section. A short sketch for verifying the setup beforehand is shown after this list.
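Before testing, it can help to confirm that Autoscaler is running and that the Cluster API resources now live on the workload cluster. The commands below are a minimal sketch and assume the example manifest deploys into the autoscaler namespace created above:

```shell
# Confirm the Autoscaler pod is up (assumes the example manifest targets the
# "autoscaler" namespace created earlier).
kubectl get pods -n autoscaler

# Confirm the Cluster API resources were moved to the workload cluster.
kubectl get machinedeployments -A
```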
Testing
- Deploy an example Kubernetes application. For example, the one used in the Kubernetes HorizontalPodAutoscaler Walkthrough:

  ```shell
  kubectl apply -f https://k8s.io/examples/application/php-apache.yaml
  ```
- Increase the number of replicas of the application to trigger a scale-up event:

  ```shell
  kubectl scale deployment php-apache --replicas 100
  ```
- Decrease the number of replicas of the application again to trigger a scale-down event.

  Note

  In case of issues, check the logs of the Autoscaler pods.

- After a while, CAPX will add more machines (a sketch for observing this is shown after this list). Refer to the Autoscaler configuration parameters to tweak the behaviour and timeouts.
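One way to observe this behaviour is sketched below. It assumes the commands are run against the cluster that hosts the Cluster API resources (the management cluster in scenario 1, the workload cluster in scenario 2); the Autoscaler namespace and deployment name are placeholders that depend on the applied manifest:

```shell
# Watch machines being created or removed as Autoscaler reacts to the workload.
kubectl get machines -A -w

# In case of issues, inspect the Autoscaler logs (namespace and deployment name
# are placeholders; use the values from the applied deployment.yaml).
kubectl logs -n <autoscaler namespace> deployment/<autoscaler deployment name> -f
```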
Autoscaler node group annotations
Autoscaler uses the following annotations to define the upper and lower boundaries for the number of machines in a node group:
| Annotation | Example Value | Description |
|---|---|---|
| cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size | 5 | Maximum number of machines in this node group |
| cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size | 1 | Minimum number of machines in this node group |
These annotations must be applied to the MachineDeployment resources of a CAPX cluster.
Example

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  annotations:
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "5"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
```
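To confirm the annotations are present on the MachineDeployment, something like the following can be used; the resource name and namespace are placeholders:

```shell
# Placeholders: substitute the actual MachineDeployment name and namespace.
kubectl get machinedeployment <machinedeployment name> -n <namespace> \
  -o jsonpath='{.metadata.annotations}'
```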