Scoped Autoscaler

The autoscaler features described below can be restricted to act only on a subset of your cluster. When you mark specific workloads for autoscaling, only that subset is considered by the unscheduled pods policy, and the empty nodes policy only cleans up nodes that the autoscaler previously created.

While this mode is turned on, autoscaler-created nodes will carry a specific taint: scheduling.cast.ai/scoped-autoscaler=true:NoSchedule. This ensures that only the subset of workloads explicitly meant for the scoped autoscaler is scheduled on these nodes.
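To verify which nodes the scoped autoscaler created, you can look for that taint on the Node objects. A minimal sketch, assuming nodes are handled as parsed Kubernetes Node dicts (the helper name is ours; the taint key, value, and effect come from the text above):

```python
# Taint applied by the scoped autoscaler, per the docs above.
SCOPED_TAINT_KEY = "scheduling.cast.ai/scoped-autoscaler"

def has_scoped_taint(node: dict) -> bool:
    """Return True if the node carries scheduling.cast.ai/scoped-autoscaler=true:NoSchedule."""
    for taint in node.get("spec", {}).get("taints", []) or []:
        if (taint.get("key") == SCOPED_TAINT_KEY
                and taint.get("value") == "true"
                and taint.get("effect") == "NoSchedule"):
            return True
    return False
```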

For pods that you wish to be included, update your relevant deployments to contain this configuration:

apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      nodeSelector:
        provisioner.cast.ai/managed-by: cast.ai
      tolerations:
      - key: "scheduling.cast.ai/scoped-autoscaler"
        operator: "Exists"
        effect: "NoSchedule"

The node selector ensures that pods are scheduled only on CAST AI-provisioned nodes. This specific selector is also what the scoped autoscaler looks for when deciding which unscheduled pods are within scope.

The toleration is required for the reasons described above: the pods must actually be able to schedule on the tainted, provisioned nodes. If the toleration is not present, the pod is treated as misconfigured and ignored by the autoscaler.
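Since both the node selector and the toleration must be present, a pre-deploy check can catch the misconfiguration case early. A minimal sketch, assuming pod template specs are handled as parsed dicts (the function name is hypothetical; the selector and toleration values match the configuration above):

```python
def is_in_scope(pod_spec: dict) -> bool:
    """Check that a pod template spec opts into the scoped autoscaler.

    Both conditions from the docs must hold, otherwise the pod is ignored:
      1. nodeSelector provisioner.cast.ai/managed-by: cast.ai
      2. a toleration for scheduling.cast.ai/scoped-autoscaler with effect NoSchedule
    """
    selector_ok = (
        pod_spec.get("nodeSelector", {}).get("provisioner.cast.ai/managed-by")
        == "cast.ai"
    )
    toleration_ok = any(
        t.get("key") == "scheduling.cast.ai/scoped-autoscaler"
        and t.get("effect") == "NoSchedule"
        for t in pod_spec.get("tolerations", [])
    )
    return selector_ok and toleration_ok
```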

Evictor also needs to be configured to run in scoped mode. Call the PUT /v1/kubernetes/clusters/{clusterId}/policies API, supplying the full policy configuration with the evictor settings updated (scopedMode: true); a partial snippet is shown below:

  ...
    "evictor": {
      "enabled": true,
      "dryRun": false,
      "aggressiveMode": false,
      "scopedMode": true,
      "cycleInterval": "5s",
      "allowed": true,
      "nodeGracePeriodMinutes": 2
    }
  ...
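One way to produce the request body is to fetch the current policy configuration, flip only the evictor's scopedMode flag, and PUT the whole document back. A sketch of the update step (the function name is ours; the field names match the snippet above):

```python
import copy

def enable_scoped_evictor(policies: dict) -> dict:
    """Return a copy of the full policy config with evictor.scopedMode set to true.

    The full document (not just the evictor section) must be sent as the body of
    the PUT /v1/kubernetes/clusters/{clusterId}/policies request.
    """
    updated = copy.deepcopy(policies)
    updated.setdefault("evictor", {})["scopedMode"] = True
    return updated
```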

What’s Next