Introduction to Cost Monitoring

Tracking and monitoring expenses is an essential part of cloud cost management.

Built with cloud-native teams in mind, CAST AI’s cost monitoring presents all expenses in one place and enables breaking them down by Kubernetes concepts such as cluster, workload, and namespace. Use it to analyze the efficiency and resource usage of your clusters and workloads and to track cost fluctuations over time.

This guide describes the insights you will find in CAST AI’s cost monitoring and how to use it to understand your cloud expenses better.

How to use cost monitoring in CAST AI

Cost monitoring becomes available in your product console right after you connect the cluster.

There is no need to install anything else if the CAST AI agent is already running in your cluster. To see workload efficiency metrics, you also need the Kubernetes Metrics Server.
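If the Metrics Server is not already present, one common way to install it is via the manifest published by the kubernetes-sigs project (a sketch, assuming you have kubectl access to the cluster; check the metrics-server releases page for a version compatible with your Kubernetes version):

```shell
# Install the Kubernetes Metrics Server from its upstream manifest
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Verify it is serving metrics (may take a minute after installation)
kubectl top nodes
```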

Use it to get insights such as:

  • Your most expensive workloads, their efficiency, and how much of your provisioned resources goes to waste.
  • Your daily and monthly cluster spend and its efficiency.
  • Your compute spend across on-demand, spot, and fallback nodes.
  • Cost per CPU and MEM.
  • A forecast of your end-of-month bill.
  • And more.

Key reasons to use CAST AI’s cost monitoring

  • Free for unlimited clusters: there are no limits on the number or size of your clusters.
  • Available immediately: you get cost insights right after connecting your cluster, and CAST AI refreshes the underlying data every 60 seconds.
  • Assess your K8s efficiency: clearly see the difference between provisioned and requested resources.
  • Billing access not required: CAST AI uses public pricing, so you don’t have to share billing details.
  • Unlimited access to historical data: you can analyze months of past data for free.

Main reports

CAST AI’s cost monitoring includes five main sections providing different levels of granularity.

Here’s what you’ll find in each of them:

  • The Cluster report gives you an overview of cluster expenses: compute spend, cost per provisioned resource, average daily cost, and daily compute spend details, including cost per CPU and MEM. You also get a forecast of your final monthly bill and the overall change compared to the previous month.
  • The Workloads report presents the compute cost of each workload, along with its controller type, namespace, and total cost per CPU and MEM. You can filter the results further by labels and namespaces. Additionally, Workload Efficiency highlights the difference between the requested and used resources for each workload, helping you put a number on wasted resources.
  • The Namespaces report provides the compute cost of each namespace, including the average CPU and MEM requirement per hour and the total cost per resource.
  • The Allocation Groups report provides insights into the allocation groups you add to your cluster. These custom groups let you allocate costs by grouping workloads by namespace or label.
  • The Cost comparison report lets you compare the cost of requested CPUs between different periods to understand the level of delivered savings.

Cost monitoring concepts

This section outlines the key concepts you need to understand the cost report:

  • Cluster is a set of nodes that run containerized applications.
  • Workload refers to an application running on Kubernetes.
  • Namespace provides a mechanism for isolating groups of resources within a cluster. Resource names must be unique within a given namespace but not across all namespaces.
  • Cluster compute cost is the total monthly cost of compute resources provisioned in a cluster.
  • Node lifecycle refers to on-demand, spot, and spot fallback nodes, where the last one involves temporarily using on-demand nodes when spot instances become unavailable.
  • Normalized cost per CPU is the total cluster compute cost divided by the total number of CPUs provisioned in a cluster. CAST AI also calculates subtotals of this value for spot, on-demand, and fallback instances.
  • Price per provisioned resource indicates your average cost per resource unit (CPU, RAM). This value comes from the total cost divided by the number of resources in your cluster and mostly depends on your VM type and lifecycle.
  • Price per requested resource shows the cost per resource depending on your workload needs. The value results from the total cost of resources divided by the requested units. You can use it to assess the efficiency of autoscaling – when you overprovision, the cost per requested resource increases.
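The last three definitions above are simple ratios. As a minimal sketch with hypothetical numbers (not actual CAST AI figures), here is how they relate to each other:

```python
# Illustrative calculation of the cost metrics defined above.
# All figures are hypothetical examples, not real pricing data.

total_compute_cost = 7200.0  # total monthly cluster compute cost, USD
provisioned_cpus = 240       # CPUs provisioned across all nodes
requested_cpus = 160         # CPUs actually requested by workloads

# Normalized cost per CPU: total cluster compute cost / provisioned CPUs
normalized_cost_per_cpu = total_compute_cost / provisioned_cpus

# Price per requested resource: total cost / requested units
price_per_requested_cpu = total_compute_cost / requested_cpus

print(f"Normalized cost per CPU:  ${normalized_cost_per_cpu:.2f}")  # $30.00
print(f"Price per requested CPU:  ${price_per_requested_cpu:.2f}")  # $45.00
```

Because only 160 of the 240 provisioned CPUs are requested in this example, the price per requested CPU comes out higher than the normalized cost per CPU; the more you overprovision, the wider that gap grows.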

What’s Next

Dive deeper into your cost reports: