Spot instances

Autoscaling using Spot instances

The CAST AI autoscaler supports running your workloads on Spot instances. This guide will help you configure it and run your first Spot workload in about 5 minutes.

Available configurations

Tolerations

When to use: Spot instances are optional

When a pod is marked only with a toleration, the Kubernetes scheduler can place it on regular on-demand nodes as well.

tolerations:
  - key: scheduling.cast.ai/spot
    operator: Exists

Node Selectors

When to use: only use Spot instances

If you want to make sure that a pod is scheduled on Spot instances only, add a nodeSelector in addition to the toleration, as in the example below.
The autoscaler will then ensure that only a Spot instance is picked whenever your pod requires additional capacity in the cluster.

tolerations:
  - key: scheduling.cast.ai/spot
    operator: Exists
nodeSelector:
  scheduling.cast.ai/spot: "true"

Node Affinity

When to use: Spot instances are preferred - if not available, fall back to on-demand nodes

When a Spot instance is interrupted, and on-demand nodes in the cluster have available capacity, pods that previously ran on the Spot instance will be scheduled on the available on-demand nodes if the following affinity rule is applied:

spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: scheduling.cast.ai/spot
            operator: Exists

If you want to move pods back to Spot instances, use the Rebalancer feature.

Spot Fallback

When to use: you want to maximize workload uptime and mitigate low Spot instance availability.

CAST AI supports the fallback of Spot instances to on-demand nodes in case there is no Spot instance availability. Our Autoscaler will temporarily add an on-demand node for your Spot-only workloads to run on. Once Spot instance inventory becomes available again, the on-demand nodes used for the fallback are replaced with actual Spot instances.

Fallback on-demand instances are labeled with scheduling.cast.ai/spot-fallback: "true".
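
You can use this label to check whether any fallback on-demand nodes are currently running in your cluster, for example:

kubectl get nodes -l scheduling.cast.ai/spot-fallback=true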

To enable this feature, use the Upsert cluster's policies configuration API:

{
  "spotInstances": {
    "enabled": true,
    "spotBackups": {
      "enabled": true,
      "spotBackupRestoreRateSeconds": 1800
    }
  }
}

Setting spotBackups.enabled to true enables the Spot Fallback feature, and spotBackupRestoreRateSeconds configures how often (in seconds) CAST AI should try to switch the fallback nodes back to Spot instances.
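
As a reference, the same payload can be sent with curl. The endpoint path and X-API-Key header below are assumptions based on common CAST AI API conventions, so verify them against the API reference before use:

# Sketch only: <cluster-id> and <api-key> are placeholders; confirm the exact path in the API reference
curl -X PUT "https://api.cast.ai/v1/kubernetes/clusters/<cluster-id>/policies" \
  -H "X-API-Key: <api-key>" \
  -H "Content-Type: application/json" \
  -d '{"spotInstances": {"enabled": true, "spotBackups": {"enabled": true, "spotBackupRestoreRateSeconds": 1800}}}'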

Spot Diversity (beta)

📣

Beta release

A recently released feature for which we are actively gathering community feedback.

When to use: you want to minimize workload interruptions at the expense of a potential cost increase.

CAST AI can diversify the instance types chosen for Spot instances. Our Autoscaler will try to balance between the most diverse and the cheapest instance types. By using a wider array of instance types, the overall node interruption rate in a cluster is lowered, increasing the uptime of your workloads.

Diversity is achieved by sorting the viable instance types by their frequency and price. Frequency is calculated based on how many nodes of a particular instance type family are currently running in the cluster. The lowest-frequency instance types are preferred; if frequencies are equal, the cheaper instance type is preferred. Each added node is included in the frequency scores, so diversity is achieved even within a single upscaling event.

To enable this feature, use the Upsert cluster's policies configuration API:

{
  "spotInstances": {
    "enabled": true,
    "spotDiversityEnabled": true
  }
}

Interruption prediction model

📣

Beta release for AWS customers only

A recently released feature for which we are actively gathering community feedback.

CAST AI can proactively rebalance spot nodes that are at risk of interruption by the cloud provider. The system's response to these potential interruptions varies based on the selected interruption prediction model.

AWS Rebalance Recommendations

AWS's native method informs users of an upcoming spot interruption event that will affect a node of a particular instance type. However, not all instances of the same type might receive a rebalance recommendation, and the exact time of interruption can vary significantly.
When such a rebalance recommendation is received, CAST AI marks all instances of the same type currently in the cluster for rebalancing. Concurrently, the system places this instance type on a gray-list, ensuring it is not utilized during an upscaling event (unless it's the only available option). Following this, CAST AI can rebalance up to 30% of the affected nodes in the cluster (or node template), doing so sequentially.

CAST AI Machine Learning

CAST AI's machine learning model can predict that a specific instance will be interrupted within the next 30 minutes. Once a prediction is issued by the model, only the affected instance is cordoned and rebalanced.

📘

Choosing this option will increase cluster costs while the model is undergoing training

Currently, we are in the process of training the model; thus, instances are not immediately rebalanced. A new node is added into the cluster while the affected node is cordoned and prepared for removal. The original node is actually deleted 60 minutes after the receipt of the prediction. This additional time is necessary to validate the prediction.

Step-by-step guide to deploying on Spot instances

In this step-by-step guide, we demonstrate how to use Spot instances with your CAST AI clusters.

To do that, we will use an example NGINX deployment configured to run only on Spot instances.

1. Enable relevant policies

To start using Spot instances, go to the Autoscaler menu and enable the following policies (a sketch of enabling them via the API is shown after this list):

  • Unschedulable pods policy

    • This policy provisions the additional capacity needed to schedule pending pods based on your deployment requirements (i.e. run on Spot instances).
  • Spot instances policy

    • This policy allows the Autoscaler to use Spot instances.
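
Both policies can also be enabled programmatically through the same Upsert cluster's policies configuration API shown in the Spot Fallback section above. The sketch below assumes the field name unschedulablePods alongside spotInstances; check the API reference for the exact schema:

{
  "unschedulablePods": {
    "enabled": true
  },
  "spotInstances": {
    "enabled": true
  }
}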

2. Example deployment

Save the following YAML to a file named nginx.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        scheduling.cast.ai/spot: "true"
      tolerations:
        - key: scheduling.cast.ai/spot
          operator: Exists
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: '2'
            limits:
              cpu: '3'

2.1. Apply the example deployment

With kubeconfig set in your current shell session, you can execute the following (or use other means of applying deployment files):

kubectl apply -f nginx.yaml

2.2. Wait several minutes

Once the deployment is created, it can take several minutes for the Autoscaler to pick up the information about your pending pods and provision the nodes needed to satisfy the deployment's requirements, such as:

  • This deployment tolerates Spot instances
  • This deployment must run only on Spot instances
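
While you wait, you can watch the example pod move from Pending to Running as the Autoscaler adds a Spot node:

kubectl get pods -l app=nginx -w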

3. Spot instance added

  • You can see your newly added Spot instance in the Node list.
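
You can also verify it from the command line by filtering nodes on the Spot label used earlier:

kubectl get nodes -l scheduling.cast.ai/spot=true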

FAQ

Why didn't the platform pick a cheaper Spot instance?

Situations may occur where CAST AI doesn't pick the cheapest Spot instance available in your cloud environment. The reasons for that can be one or more of the following:

General

  • The cloud provider-specific quotas didn't allow CAST AI to pick that particular instance type. The usual mitigation for such issues is to increase quotas.
  • The specific Spot instance type could have been interrupted recently; CAST AI puts such instance types on a cooling-off period and prefers other availability zones for the time being.
  • The blacklist API was previously used to disable instance types in the whole organization or a specific cluster.
  • The Spot Diversity feature is enabled, which opts in to picking a wider variety of instance types, even if they are more expensive.
  • Spot instance availability fluctuates constantly, so at the time of instance creation the particular instance type might not have been available, and CAST AI automatically picked the next cheapest one.

Zone Constraints

If the added Spot instance was in a different availability zone than the cheapest instance, one of the following could be the reason:

  • The Node Configuration in use (custom or default) has specific subnets defined, which prevents CAST AI from picking Spot instances in other availability zones.
  • The Node Template has a specific list of instance types configured, which prevents CAST AI from picking something cheaper.
  • The subnet for that Availability Zone was full (no IPs left).
  • The specific workload has an affinity for a particular zone. This can be expressed in a variety of ways:
    • The pod has nodeSelector or nodeAffinity for a particular zone. Examples:
      ...
      spec:
        nodeSelector:
          topology.kubernetes.io/zone: "eu-central-1a"
      ...
      
      ...
      spec:
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: topology.kubernetes.io/zone
                  operator: In
                  values:
                  - eu-central-1a
      ...
      
    • The pod has zone-bound volumes (the pod cannot be scheduled in any zone other than the volume's). Example:
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: zone-volume
      spec:
        capacity:
          storage: 10Gi
        accessModes:
          - ReadWriteOnce
        storageClassName: zone-storage
        nodeAffinity:
          required:
            nodeSelectorTerms:
            - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                  - eu-central-1a
      ---
      # PersistentVolumeClaim that binds to the zone-bound volume above and is
      # referenced by the pod below as zone-pvc
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: zone-pvc
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: zone-storage
        volumeName: zone-volume
        resources:
          requests:
            storage: 10Gi
      ---
      apiVersion: v1
      kind: Pod
      metadata:
        name: zone-pod
      spec:
        containers:
        - name: my-container
          image: my-image
          volumeMounts:
          - name: data
            mountPath: /data
        volumes:
        - name: data
          persistentVolumeClaim:
            claimName: zone-pvc
      
    • The pod has pod affinity with the zone topology key, and the pod that matches the affinity is not in the cheapest zone. Example:
      apiVersion: v1
      kind: Pod
      metadata:
        name: affinity-pod
      spec:
        affinity:
          podAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: topology.kubernetes.io/zone
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - my-app
        containers:
        - name: my-container
          image: my-image
      
    • The pod has a topology spread constraint on zone, and adding a new instance in the cheapest zone would not satisfy the skew. Example:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: spread-deployment
      spec:
        replicas: 3
        strategy:
          type: RollingUpdate
          rollingUpdate:
            maxSurge: 1
            maxUnavailable: 1
        selector:
          matchLabels:
            app: my-app
        template:
          metadata:
            labels:
              app: my-app
          spec:
            topologySpreadConstraints:
            - maxSkew: 1
              topologyKey: topology.kubernetes.io/zone
              whenUnsatisfiable: DoNotSchedule
              labelSelector:
                matchLabels:
                  app: my-app
            containers:
            - name: my-container
              image: my-image
      

What is considered a spot-friendly workload?

In the Available savings report, CAST AI provides recommendations to run workloads on Spot nodes in the following scenarios: