Does CAST AI have a way to configure the root volume size when using a node template?

Root volume size is currently configurable only in node configurations, so this requires a dedicated node configuration linked to a dedicated node template.

We simulated a scenario where Template A is linked to Node config A. Node config A uses a disk configuration of 100 GiB base plus 10 GiB per CPU. Pending pod A has a selector for Template A and requests only CPU/memory; it gets a 2-CPU node with a 120 GiB disk. Pending pod B also has a selector for Template A, but its requests additionally include ephemeral storage:

            cpu: "1"
            ephemeral-storage: "100Gi"

This pod gets a 2-CPU node with a 213 GiB disk.

Is it possible to define zones and disk types when creating a node template? Is there an API option available for this purpose?

Node templates don’t offer this option at the moment. However, you can create a custom node config and link it to the template. The node config can have a subset of the cluster’s zones.

A side option would be to add a zone selector for the workloads, using the standard Kubernetes topology label:

nodeSelector:
  topology.kubernetes.io/zone: "some-zone"

Regarding disk types: if you'd like to use storage-optimized resources (local SSDs), you can restrict the template to storage-optimized instances via its instance constraints.

Is there a way to specify storage as more of a RAID-0 setup?

CAST AI doesn't support this type of configuration. Currently, CAST AI only supports LVM on EKS storage-optimized instances. Besides EKS storage-optimized nodes with LVM, we also support custom boot EBS volume configuration for regular instances; more about that here.

You can check whether you get a proper RAID setup via an init script test.

How does CAST AI enforce ephemeral storage requests (to limit users' storage on nodes) when consolidating?

CAST AI respects Kubernetes storage requests:

ephemeral-storage: "100Gi"

This means that CAST AI will find a node with enough local storage to fit the pods that have these requirements defined; read more about it here: Pod placement.

What is the default volume size for EBS?

The default volume size is 100 GiB, plus a CPU-to-storage ratio that increases it further for every CPU provisioned on the node.