GitOps with IBM Kubecost: API-Driven Rightsizing

Use IBM Kubecost APIs to turn rightsizing recommendations into Git-based manifest updates that fit cleanly into Argo CD workflows.

Introduction

The IBM Kubecost UI surfaces valuable insights on running workloads more efficiently through its Container Request Right-Sizing Recommendations. Inevitably, someone asks: “We use GitOps—how do we integrate these recommendations into our pipeline?”

This guide shows you how to programmatically apply Kubecost recommendations to your Argo CD-managed deployments using Kubecost APIs and a simple bash script.

Prerequisites

Before we begin, ensure you have:

  • A Kubernetes cluster with IBM Kubecost installed and collecting metrics
  • Argo CD managing your workload manifests from a Git repository
  • curl, jq, and yq (v4+) available on the machine running the scripts
  • Write access to the Git repository that Argo CD watches

The Challenge

Manual resource optimization doesn’t scale. Teams need a way to:

  • Continuously monitor resource usage
  • Get right-sizing recommendations based on actual usage
  • Automatically apply those recommendations through GitOps
  • Maintain a full audit trail and rollback capability

Let’s build an automated solution.

Step 1: Deploy a Sample Application

We’ll use the Kubernetes Guestbook application as our example.

Here’s the current deployment with over-provisioned resources:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: guestbook-ui
  namespace: guestbook
spec:
  replicas: 1
  selector:
    matchLabels:
      app: guestbook-ui
  template:
    metadata:
      labels:
        app: guestbook-ui
    spec:
      containers:
      - name: guestbook-ui
        image: gcr.io/google-samples/gb-frontend:v5
        resources:
          requests:
            cpu: "200m"
            memory: "256Mi"
          limits:
            cpu: "500m"
            memory: "512Mi"

Notice the container requests 200m CPU and 256Mi memory. Let’s see what Kubecost recommends.
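If you want to follow along, one way to register the app with Argo CD is via the CLI. This is a sketch, not a prescribed setup: the repository URL is a placeholder for wherever your manifests live, and it assumes the deployment above sits in a `guestbook/` directory of that repo.

```shell
# Hypothetical Argo CD application for the guestbook manifests;
# replace the repo URL and path with your own
argocd app create guestbook \
  --repo https://github.com/your-org/argocd-apps.git \
  --path guestbook \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace guestbook \
  --sync-policy automated
```

With `--sync-policy automated`, any commit our scripts push later will be rolled out without a manual sync.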

Step 2: Query the Kubecost API

Kubecost provides a powerful API endpoint for request sizing recommendations.

Here’s how to query it:

KUBECOST_ADDRESS='http://localhost:9090/model'

curl -G \
  -d 'algorithmCPU=max' \
  -d 'algorithmRAM=max' \
  -d 'targetCPUUtilization=0.65' \
  -d 'targetRAMUtilization=0.65' \
  -d 'window=3d' \
  --data-urlencode 'filter=namespace:"guestbook"' \
  ${KUBECOST_ADDRESS}/savings/requestSizingV2 | jq '.Recommendations[].recommendedRequest'
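The `localhost:9090` address assumes you have made the Kubecost API reachable locally. With a default Helm install in the `kubecost` namespace, a port-forward along these lines (run in a separate terminal) does the job:

```shell
# Forward the Kubecost cost-analyzer service to localhost:9090;
# service name assumes a default Helm install in the kubecost namespace
kubectl port-forward -n kubecost svc/kubecost-cost-analyzer 9090:9090
```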

Key Parameters:

  • algorithmCPU/RAM: Set to max to use peak usage as the baseline
  • targetCPUUtilization: Target 65% CPU utilization (room for spikes)
  • targetRAMUtilization: Target 65% memory utilization
  • window: Analyze the past 3 days of metrics
  • filter: Scope recommendations to the guestbook namespace

More information on how Kubecost calculates these recommendations is available in the Kubecost documentation.

Sample Response:

{
  "cpu": "10m",
  "memory": "20Mi"
}

Kubecost recommends 10m CPU and 20Mi memory—a significant reduction from our current 200m/256Mi!
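The full API payload nests each recommendation under a `Recommendations` array, so the fields our automation needs can be pulled out with `jq`. A minimal offline sketch using the sample values above (the response is saved to a local file so you can experiment without a live Kubecost endpoint):

```shell
# Sample Kubecost response (values from the article) saved locally
cat > /tmp/recs.json <<'EOF'
{
  "Recommendations": [
    {
      "containerName": "guestbook-ui",
      "recommendedRequest": { "cpu": "10m", "memory": "20Mi" }
    }
  ]
}
EOF

# Extract the fields the automation script will rely on
jq -r '.Recommendations[0].containerName' /tmp/recs.json              # guestbook-ui
jq -r '.Recommendations[0].recommendedRequest.cpu' /tmp/recs.json     # 10m
jq -r '.Recommendations[0].recommendedRequest.memory' /tmp/recs.json  # 20Mi
```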

Step 3: Automate YAML Updates with Bash

Now let’s create a script that fetches recommendations and updates our deployment YAML:

#!/bin/bash
set -e

# Configuration
KUBECOST_ADDRESS='http://localhost:9090/model' # Change this to your actual Kubecost address
NAMESPACE='guestbook'                          # Change this to your actual namespace
YAML_FILE="guestbook/deployment.yaml"          # Change this to your actual YAML file path

echo "Fetching recommendations from Kubecost..."

# Get recommendations from Kubecost API
RECOMMENDATIONS=$(curl -s -G \
  -d 'algorithmCPU=max' \
  -d 'algorithmRAM=max' \
  -d 'targetCPUUtilization=0.65' \
  -d 'targetRAMUtilization=0.65' \
  -d 'window=3d' \
  --data-urlencode "filter=namespace:\"$NAMESPACE\"" \
  ${KUBECOST_ADDRESS}/savings/requestSizingV2)

# Extract values from first recommendation
CONTAINER_NAME=$(echo "$RECOMMENDATIONS" | jq -r '.Recommendations[0].containerName')
CPU=$(echo "$RECOMMENDATIONS" | jq -r '.Recommendations[0].recommendedRequest.cpu')
MEMORY=$(echo "$RECOMMENDATIONS" | jq -r '.Recommendations[0].recommendedRequest.memory')

# Check if we received valid data
if [ "$CONTAINER_NAME" == "null" ] || [ -z "$CONTAINER_NAME" ]; then
  echo "Error: No recommendations found for namespace $NAMESPACE"
  exit 1
fi

echo "Updating container: $CONTAINER_NAME"
echo "New CPU request: $CPU"
echo "New Memory request: $MEMORY"

# Update YAML
yq eval -i "
  (.spec.template.spec.containers[] | select(.name == \"$CONTAINER_NAME\") | .resources.requests.cpu) = \"$CPU\" |
  (.spec.template.spec.containers[] | select(.name == \"$CONTAINER_NAME\") | .resources.requests.memory) = \"$MEMORY\"
" "$YAML_FILE"

echo "Successfully updated $YAML_FILE"

What this script does:

  • Queries Kubecost for recommendations
  • Extracts the container name and recommended resources using jq
  • Updates the YAML file using yq (preserves formatting and comments)
  • Only modifies the requests section, leaving limits unchanged
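Note that the script applies only the first recommendation in the array. If your namespace has several containers, a small loop handles them all. A self-contained sketch of the loop logic (the two-container response below is fabricated sample data, and the `echo` stands in for the per-container `yq` call from the script above):

```shell
# Hypothetical response with two recommendations
RECOMMENDATIONS='{"Recommendations":[
  {"containerName":"guestbook-ui","recommendedRequest":{"cpu":"10m","memory":"20Mi"}},
  {"containerName":"sidecar","recommendedRequest":{"cpu":"15m","memory":"32Mi"}}]}'

# Loop over every recommendation instead of only index 0
COUNT=$(echo "$RECOMMENDATIONS" | jq '.Recommendations | length')
for i in $(seq 0 $((COUNT - 1))); do
  NAME=$(echo "$RECOMMENDATIONS" | jq -r ".Recommendations[$i].containerName")
  CPU=$(echo "$RECOMMENDATIONS" | jq -r ".Recommendations[$i].recommendedRequest.cpu")
  MEM=$(echo "$RECOMMENDATIONS" | jq -r ".Recommendations[$i].recommendedRequest.memory")
  echo "$NAME -> CPU=$CPU MEM=$MEM"   # real script: run the yq update here per container
done
```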

Step 4: Integrate with GitOps

Now let’s push these changes to your Argo CD repository:

#!/bin/bash
set -e

# Configuration
KUBECOST_ADDRESS='http://localhost:9090/model'
NAMESPACE='guestbook'
YAML_FILE="deployment.yaml"
GIT_REPO_PATH="/path/to/argocd-repo"
GIT_BRANCH="main"

# Fetch recommendations (same as Step 3)
RECS=$(curl -s -G \
  -d 'algorithmCPU=max' \
  -d 'algorithmRAM=max' \
  -d 'targetCPUUtilization=0.65' \
  -d 'targetRAMUtilization=0.65' \
  -d 'window=3d' \
  --data-urlencode "filter=namespace:\"$NAMESPACE\"" \
  ${KUBECOST_ADDRESS}/savings/requestSizingV2)

CONTAINER=$(echo "$RECS" | jq -r '.Recommendations[0].containerName')
CPU=$(echo "$RECS" | jq -r '.Recommendations[0].recommendedRequest.cpu')
MEMORY=$(echo "$RECS" | jq -r '.Recommendations[0].recommendedRequest.memory')

echo "Updating $CONTAINER: CPU=$CPU, Memory=$MEMORY"

# Navigate to repo and update
cd "$GIT_REPO_PATH"
git pull origin "$GIT_BRANCH"

# Update YAML
yq eval -i "
  (.spec.template.spec.containers[] | select(.name == \"$CONTAINER\") | .resources.requests.cpu) = \"$CPU\" |
  (.spec.template.spec.containers[] | select(.name == \"$CONTAINER\") | .resources.requests.memory) = \"$MEMORY\"
" "$YAML_FILE"

# Check for changes
if git diff --quiet; then
  echo "No changes detected"
  exit 0
fi

# Commit and push
git add "$YAML_FILE"
git commit -m "chore: update $CONTAINER resource requests

Kubecost recommendations applied:
- CPU: $CPU
- Memory: $MEMORY

Based on 3d usage analysis with 65% target utilization"

git push origin "$GIT_BRANCH"

echo "✓ Changes pushed to $GIT_BRANCH"
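To close the loop, the push script can run on a schedule so requests stay continuously tuned. One option is a plain cron entry (the script path and log location below are placeholders):

```shell
# Hypothetical cron entry: apply Kubecost recommendations nightly at 02:00
0 2 * * * /opt/scripts/kubecost-rightsize.sh >> /var/log/kubecost-rightsize.log 2>&1
```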

Best Practices

1. Start Conservative:

  • Begin with a less aggressive target utilization (e.g., 50%)
  • Monitor workloads for at least 14 days to account for traffic spikes
  • Test in dev/staging environments before production
  • Monitor application performance after changes

2. Add Safety Rails:

# Add minimum thresholds
MIN_CPU="5m"
MIN_MEMORY="10Mi"

# Compare numerically after stripping the "m" suffix;
# a plain string comparison would treat "10m" as less than "5m"
if [ "${CPU%m}" -lt "${MIN_CPU%m}" ]; then
  CPU="$MIN_CPU"
fi

3. Use Pull Requests: Instead of direct pushes, create PRs for manual review.

git checkout -b "kubecost-recommendations-$(date +%Y%m%d)" 
# ... make changes ... 
git push origin HEAD 
# Use GitHub CLI to create PR 
gh pr create --title "Kubecost recommendations" --body "..." 

4. Add Notifications: Integrate with Slack or Teams to notify when recommendations are applied.
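For Slack, an incoming webhook is one lightweight option. A sketch, assuming the `CONTAINER`, `CPU`, and `MEMORY` variables from the Step 4 script are in scope and with a placeholder webhook URL:

```shell
# Hypothetical Slack incoming-webhook notification; replace the URL with your own
SLACK_WEBHOOK_URL='https://hooks.slack.com/services/XXX/YYY/ZZZ'
curl -s -X POST -H 'Content-Type: application/json' \
  -d "{\"text\": \"Kubecost rightsizing applied to ${CONTAINER}: CPU=${CPU}, Memory=${MEMORY}\"}" \
  "$SLACK_WEBHOOK_URL"
```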

Conclusion

By combining IBM Kubecost’s intelligent recommendations with GitOps automation, you can continuously optimize your Kubernetes resources without manual intervention. The script is extensible—add validation rules, notifications, or integrate with your CI/CD pipeline as needed.

For more information, visit the IBM Kubecost API documentation. To see this in action, check out our Monthly Kubechat webinar, Closing the Kubernetes Cost Loop with GitOps.
