Pod startup failures with PD Standard on C3/C3D nodes (GKE)
Resolve volume attachment failures when pods using pd-standard storage are scheduled on incompatible C3 or C3D machine types in GKE clusters.
Pods fail to start when PersistentVolumeClaims using pd-standard storage are scheduled onto Nodes running C3 or C3D machine types. These instance families do not support pd-standard disks, which prevents volume attachment.
Symptoms
Pods remain in the ContainerCreating state and display this warning:

```
Warning  FailedAttachVolume  attachdetach-controller
AttachVolume.Attach failed for volume "pvc-example" :
rpc error: code = InvalidArgument desc = Failed to Attach:
failed cloud service attach disk call: googleapi:
Error 400: pd-standard disk type cannot be used by c3d-standard-90 machine type., badRequest
```

Root cause
When your default StorageClass is configured with volumeBindingMode: Immediate and type: pd-standard, Kubernetes provisions the PersistentVolume (PV) before it schedules the Pod. This creates a timing issue:
- Kubernetes creates the PV as pd-standard
- The scheduler places the Pod on an available Node (potentially C3/C3D)
- The Node attempts to attach the volume, but cannot support pd-standard
- Volume attachment fails, and the Pod cannot start
C3 and C3D machine types support only pd-balanced, pd-ssd, and pd-extreme disk types.
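To see which machine series back the Nodes in your cluster, you can inspect the standard instance-type label. This is a read-only check and assumes kubectl access to the affected cluster:

```shell
# Show each Node's machine type via the well-known instance-type label.
# C3/C3D entries (e.g. c3d-standard-90) cannot attach pd-standard disks.
kubectl get nodes -L node.kubernetes.io/instance-type
```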
Solution
Prevent Pods requiring pd-standard storage from scheduling onto incompatible Nodes by implementing both configuration changes below.
1. Update StorageClass Volume Binding Mode
Change your StorageClass to use WaitForFirstConsumer. This ensures Kubernetes provisions the PV only after the Pod is scheduled, allowing the scheduler to consider Node constraints, including disk type compatibility.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Delete
```

2. Add Node Selector to Pods Using PD Standard
Add this nodeSelector to Pod specifications that use pd-standard volumes:

```yaml
nodeSelector:
  volume.scheduling.cast.ai/pd-standard: "true"
```

This ensures these Pods schedule only onto NodePools that support pd-standard disks.
Why Both Changes Are Required
| Configuration | Purpose |
|---|---|
| WaitForFirstConsumer | Delays PV provisioning until Kubernetes selects a Node, enabling the scheduler to evaluate disk type compatibility |
| Node selector | Restricts Pod placement to Nodes that support pd-standard, preventing scheduling onto C3/C3D machine types |
Result
With this configuration:
- Pods using pd-standard volumes schedule only on compatible Node types
- Kubernetes provisions volumes after selecting a valid Node
- Cast AI's autoscaler respects these constraints when placing workloads
- Volume attachment failures are prevented
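One way to confirm the configuration took effect (assumes kubectl access; `standard` is the StorageClass name from the manifest earlier in this article):

```shell
# Print the binding mode; this should show WaitForFirstConsumer
# once the updated StorageClass is applied
kubectl get storageclass standard -o jsonpath='{.volumeBindingMode}'

# Check which Nodes the Pods landed on; Pods with the pd-standard
# nodeSelector should not appear on C3/C3D Nodes
kubectl get pods -o wide
```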
Additional Information
For details on disk type compatibility with GCP machine types, see Google Cloud documentation on persistent disk types.
