Introduction
Over-provisioned containers are one of the quietest budget leaks in Kubernetes. Pods requesting 512Mi of RAM but using 20Mi. CPU requests of 500m with actual consumption around 10m. Multiply that across dozens of deployments and the waste adds up fast.
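To put numbers on it, here is a hypothetical Deployment fragment (name, image, and figures are illustrative) where the requests dwarf what the container actually consumes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api   # placeholder
spec:
  template:
    spec:
      containers:
        - name: api
          image: example/api:1.0   # placeholder
          resources:
            requests:
              cpu: 500m      # actual usage hovers around 10m
              memory: 512Mi  # actual usage hovers around 20Mi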
In our previous post, GitOps with IBM Kubecost: API-Driven Rightsizing, we showed how to apply IBM Kubecost recommendations through Git so Argo CD can reconcile those changes as part of a standard GitOps workflow. This post builds on that foundation by covering a second approach: using IBM Kubecost Actions to apply rightsizing changes on a recurring schedule, and configuring Argo CD to keep those changes in place instead of reverting them.
The problem: Argo CD self-heal vs. rightsizing patches
Argo CD’s selfHeal: true automatically reconciles drift in managed fields back to the Git-defined desired state. That’s great for stability, but it directly conflicts with any tool that mutates resource specs in-cluster without a Git commit.
When Kubecost Actions updates resources.requests, Argo CD detects the divergence and reconciles the Git-defined state back on the next sync cycle unless those fields are ignored.
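To see the conflict concretely, compare the two states side by side (values are illustrative). The only divergence is the requests block, and that is precisely the drift selfHeal is built to revert:

# Desired state in Git:
resources:
  requests:
    cpu: 500m
    memory: 512Mi

# Live object after a Kubecost Actions patch:
resources:
  requests:
    cpu: 15m
    memory: 30Mi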
The fix: tell Argo CD to ignore differences in resource requests and limits using ignoreDifferences, while letting it continue managing everything else.
Step 1: Configure Argo CD to ignore resource diffs
Add the ignoreDifferences block below to your Argo CD Application manifest. This excludes requests and limits from drift reconciliation at both the Pod and Deployment levels.
project: default
source:
  repoURL: <$REPO_URL>
  path: <$PATH>
  targetRevision: HEAD
destination:
  server: <$SERVER>
  namespace: argocd-prod-apps
syncPolicy:
  automated:
    prune: true
    selfHeal: true
  syncOptions:
    - CreateNamespace=true
    - RespectIgnoreDifferences=true # ← required
ignoreDifferences:
  - kind: Pod
    jqPathExpressions:
      - .spec.containers[].resources.requests
      - .spec.containers[].resources.limits
  - group: apps
    kind: Deployment
    jqPathExpressions:
      - .spec.template.spec.containers[].resources.requests
      - .spec.template.spec.containers[].resources.limits
Don’t skip RespectIgnoreDifferences=true. Without it, Argo CD ignores those fields when computing sync status but still overwrites them on the next sync.
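If you would rather apply this policy to every Application instead of editing each manifest, Argo CD also supports system-level diff customization in the argocd-cm ConfigMap. A minimal sketch for Deployments (one key per group/kind; you still need RespectIgnoreDifferences=true on each Application for the ignored fields to survive syncs):

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  resource.customizations.ignoreDifferences.apps_Deployment: |
    jqPathExpressions:
      - .spec.template.spec.containers[].resources.requests
      - .spec.template.spec.containers[].resources.limits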
Step 2: Check your recommendations in Kubecost
Go to Savings → Container Request Right-Sizing and confirm Kubecost has enough usage data to make meaningful recommendations. Filter by namespace to scope the view to the workloads you want to automate.
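If you prefer the command line, the same recommendations are available through Kubecost’s Savings API, the approach covered in our previous post. A quick sketch (endpoint and parameter names vary across Kubecost versions, so confirm against the API reference for your release):

# Fetch container request right-sizing recommendations for one namespace
curl -sG "http://<$KUBECOST_ADDRESS>/model/savings/requestSizingV2" \
  --data-urlencode 'window=2d' \
  --data-urlencode 'filter=namespace:"<$NAMESPACE>"'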
Figure A: In this example, nginx-deployment requests 128Mi RAM and 51m CPU but actually uses 20Mi and 10m, a 21% efficiency rating that translates to $7.32/month in recoverable waste.
Step 3: Create a Kubecost Action
Go to Actions → Create Action and configure:
- Name: something identifiable in logs, e.g. ArgoCD Actions Test
- Workloads: filter to the target namespace or deployment
- Schedule: use the time-block grid to pick low-traffic UTC hours when patches should apply
- Profile: Development for aggressive savings (P95 usage), Production for conservative headroom. Match your lookback window to the same risk tolerance.
Click Create Action when done.
Figure B: Screenshot showing “Create Action” in IBM Kubecost.
How it works end-to-end
- Kubecost patches the live Deployment’s resource requests on schedule.
- Argo CD detects a diff but ignores the resources fields per your configuration.
- The updated rightsized values persist, while Argo CD continues reconciling the rest of the Application as usual.
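A quick spot check confirms the loop is closed (app and resource names below are placeholders): the live Deployment should carry Kubecost’s values while Argo CD still reports the Application as Synced.

# Live requests on the patched Deployment
kubectl get deployment <$DEPLOYMENT> -n <$NAMESPACE> \
  -o jsonpath='{.spec.template.spec.containers[0].resources.requests}'

# Sync status should remain Synced because the diff is ignored
argocd app get <$APP_NAME>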
On Git drift: Your committed manifests will diverge from the live cluster state for the ignored resource fields. This is an acceptable tradeoff for most teams. If strict manifest purity is required, use Kubecost’s recommendations to drive automated PRs instead.
Conclusion
The ignoreDifferences block is the handshake between Argo CD and IBM Kubecost—Argo CD owns desired state, IBM Kubecost owns resource sizing. A few dollars saved per deployment per month becomes significant at scale, with no ongoing manual effort after setup.
To see this in action, check out our Monthly Kubechat webinar, Closing the Kubernetes Cost Loop with GitOps.