Introducing Resource-Level AWS Cost & Usage Analytics

We’re thrilled to announce a new way for our customers to examine their cloud spend at the resource level: five new measures in Cloudability. You can use these new dimensions and metrics to identify the specific resources or services that drive your cloud costs.

In cloud parlance, the resource is the item you buy (or, rather, rent). On AWS, for example, this could be a storage bucket (S3), a compute instance (EC2), a data warehouse cluster (Redshift), or an individual database (RDS). As cloud usage continues to grow, so do the ways to examine the performance of your cloud deployments. Years ago, most business owners asked for the basics, like high-level costs over time. Now we have the means to show much more, at a granular level of detail.

Answering the deeper questions about cloud costs

If you manage hundreds or thousands of AWS resources of all types and sizes, it can be challenging to see which resources are impacting costs. The new measures aim to enhance the way you can monitor your AWS cost and usage.

Some common use cases:

Examine the specific AWS resources that drive cost

Effectively managing your cloud costs starts with an understanding of the specific drivers of spend. For example, if you feel you’re spending too much on data transfer, how do you know where to start to optimize that cost? And is the right answer a corrective action, or simply understanding the cause of that cost?

With the new “Resource ID” dimension, you can identify the specific compute instances, storage buckets, storage volumes, and other AWS resources that drive costs the most. This helps you initiate the analysis and discussions with your team about controlling these costs. The Resource ID dimension is sourced from the AWS billing files; the table below shows examples. In S3, the Resource ID shows unique bucket names. For Redshift, it shows the identifier of the cluster. The full ID value is preserved to allow for filtering and for lookups in the AWS console or API.

| Service | Resource Identifier | What is it? |
| --- | --- | --- |
| Amazon Elastic Compute Cloud | vol-fffcce3e | EBS volume |
| Amazon Elastic Compute Cloud | i-6b0feeaf | Instance ID |
| Amazon Elastic Compute Cloud | arn:aws:ec2:us-west-2:111122223333:instance/i-9304e457 | CloudWatch charges for an instance |
| Amazon RDS Service | arn:aws:rds:us-west-2:111122223333:db:prod | Database instance |
| Amazon Simple Storage Service | foo-test | S3 bucket |
| Amazon Redshift | arn:aws:redshift:us-east-1:111122223333:cluster:history | Redshift cluster |
| Amazon ElastiCache | arn:aws:elasticache:us-east-1:111122223333:cluster:cached-stuff | Cache cluster |
| Amazon CloudFront | EZB2RM3M2OMXR | Distribution ID |
| Amazon DynamoDB | arn:aws:dynamodb:us-west-1:111222333444:table/mytotallyraddynamodbtable/tableId/15ee7aed-3e8a-4179-bbc0-f5a322246d10 | DynamoDB table |
| Amazon Elastic MapReduce | arn:aws:elasticmapreduce:us-east-1:111122223333:cluster/j-1RJCA1X2Z02U3 | EMR cluster |
| Amazon Elasticsearch Service | arn:aws:es:us-west-2:111122223333:domain/searchplace | Search domain |
| Amazon CloudSearch | arn:aws:cloudsearch:us-east-1:111122223333:domain/devop | Search domain |
| Amazon Virtual Private Cloud | vpn-fa8e6d93 | VPN ID |
| Amazon Glacier | arn:aws:glacier:us-west-2:111122223333:vaults/FileBackup | Vault |
| AWS Lambda | arn:aws:lambda:us-east-1:111122223333:function:SnsToSlack | Lambda function name |

Here’s a report that identifies all S3 buckets ordered by cost:
[Image: report of S3 buckets ordered by cost]
You can repeat the same exercise to identify your most expensive EBS volumes, RDS databases, EC2 instances, and more.
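If you want to sanity-check a report like this outside the UI, the same aggregation can be run directly against an AWS detailed billing export. Here’s a minimal pandas sketch; the file name and column names (ProductName, ResourceId, UnBlendedCost) are assumptions about the export format, not Cloudability’s internals:

```python
import pandas as pd

# Load an AWS detailed billing report export. Column names are assumptions
# about the report format; adjust them to match your own files.
bill = pd.read_csv("detailed_billing_with_resources.csv")

# Keep only S3 line items that carry a resource identifier (the bucket name).
s3 = bill[(bill["ProductName"] == "Amazon Simple Storage Service")
          & bill["ResourceId"].notna()]

# Sum cost per bucket and list the most expensive buckets first.
top_buckets = (s3.groupby("ResourceId")["UnBlendedCost"]
                 .sum()
                 .sort_values(ascending=False))
print(top_buckets.head(10))
```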

Using Resource ID to address costly, untagged assets

A common step in monitoring expensive resources is to make sure those resources are tagged. Tagging helps you group related costs, for example by app or project. But, given the number of resources, it can be a big challenge to tag everything properly. By building a report of untagged resources and including Resource ID (as well as other dimensions), you can prioritize tagging your most expensive assets first. The image below illustrates this:
[Image: report of resources missing an environment tag, ordered by cost]
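As a rough illustration of the same idea, here’s a hedged sketch that flags line items missing a cost-allocation tag and ranks them by spend. The tag column name (user:environment) and the other column names are assumptions about a detailed billing export, not how Cloudability builds this report:

```python
import pandas as pd

bill = pd.read_csv("detailed_billing_with_resources.csv")

# Resources with no value in the environment tag column (detailed billing
# exports prefix cost-allocation tags with "user:"; the tag name is assumed).
untagged = bill[bill["ResourceId"].notna()
                & bill["user:environment"].isna()]

# Rank untagged resources by cost so tagging effort goes to the biggest spenders.
priority = (untagged.groupby(["ProductName", "ResourceId"])["UnBlendedCost"]
                    .sum()
                    .sort_values(ascending=False))
print(priority.head(20))
```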

Get quantities of specific resources that drive costs

Use the new “Resource Count” metric to establish the number of unique resources and their costs during a given time period. Combining Resource Count with a contextual dimension gives you a unique count of all kinds of AWS resources and services. The example below is filtered to EBS; using the Usage Type dimension, we can see the total count of SSD (gp2) volumes as well as other EBS components.
[Image: count of EBS resources by usage type]
Another example looks at the number of S3 buckets over time along with the total storage. As shown below, something changed on June 9th that caused overall storage to increase even though the count of buckets did not materially change. This is a prompt for operations teams to investigate what may have caused the change in usage (and, likely, cost).
[Image: S3 bucket count and total storage over time]
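Both of these views are easy to approximate against raw billing data: a distinct count of ResourceId values stands in for the Resource Count metric. A sketch under the same assumed column names:

```python
import pandas as pd

bill = pd.read_csv("detailed_billing_with_resources.csv",
                   parse_dates=["UsageStartDate"])

# Distinct EBS resources per usage type (gp2 volumes, snapshots, and so on).
ebs = bill[bill["UsageType"].str.contains("EBS", na=False)]
print(ebs.groupby("UsageType")["ResourceId"].nunique())

# Distinct S3 buckets and total usage per day, to spot jumps like the one
# described above where storage grew while the bucket count stayed flat.
s3 = bill[bill["ProductName"] == "Amazon Simple Storage Service"]
daily = s3.groupby(s3["UsageStartDate"].dt.date).agg(
    buckets=("ResourceId", "nunique"),
    usage=("UsageQuantity", "sum"),
)
print(daily)
```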

See costs and usage by specific compute types

With the addition of a new “Operating System” dimension, you can now see costs and usage aligned with current or potential reservation types. Combining Operating System with other important dimensions in this feature delivers a clearer picture of what to reserve, improving the way you can purchase, modify, and manage Reserved Instances (RIs).

As a quick recap, in AWS you can purchase compute in advance for a discount using RIs. Typically, users find the best RI candidates by evaluating the Operating System, Availability Zone, and Instance Type. Cloudability already performs this type of evaluation by automatically generating RI recommendations; the Operating System measure enhances that existing capability.
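To make that evaluation concrete, here’s a hedged sketch of the analysis against a detailed billing export: sum EC2 usage hours by operating system, Availability Zone, and instance type, the combination a Standard RI covers. The column names and the crude OS guess are illustrative assumptions, not how Cloudability derives its Operating System dimension:

```python
import pandas as pd

bill = pd.read_csv("detailed_billing_with_resources.csv")

# EC2 instance-hour line items only (the "BoxUsage" usage type convention).
ec2 = bill[(bill["ProductName"] == "Amazon Elastic Compute Cloud")
           & bill["UsageType"].str.contains("BoxUsage", na=False)].copy()

# Instance type is encoded in the usage type, e.g. "USW2-BoxUsage:m4.large".
ec2["InstanceType"] = ec2["UsageType"].str.split(":").str[-1]

# Very rough OS guess from the line-item description, purely for illustration.
ec2["OS"] = ec2["ItemDescription"].str.contains("Windows", na=False).map(
    {True: "Windows", False: "Linux/UNIX"})

# Usage hours per (OS, AZ, instance type): the combination an RI applies to.
ri_candidates = (ec2.groupby(["OS", "AvailabilityZone", "InstanceType"])
                    ["UsageQuantity"].sum()
                    .sort_values(ascending=False))
print(ri_candidates.head(10))
```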

Getting a better way to evaluate and buy RIs

Operating System is a Cloudability-created dimension that we generate out of encoded data in AWS’ billing files. This custom measure, in combination with other available dimensions and our current RI recommendation engine, unlocks the ability to see all the key elements required for Reserved Instance evaluation in our Cost and Usage Analytics tool, and empowers users to build reports and dashboards using this data as well.

A data-driven RI purchase, plus access to the granular details needed to modify those RIs when infrastructure needs change, can yield significant savings over the term of the RI. Our new measure gives you the visibility to make those types of purchases confidently. Here’s an example report:
[Image: RI analysis report combining Operating System, instance type, and tags]
Using Operating System in such a report lets users get a deeper understanding of their EC2 spending by specific instances and OSs. Using a tag dimension, such as Name in the example above, adds more color to the report and gives the user a better handle on what they are reserving for.

See database-specific cost and usage data

The “Engine” measure for RDS and Redshift delivers the same kind of benefit that Operating System does for EC2. Again, this is a Cloudability dimension that we generate from encoded data in AWS’ billing files. Using RDS as an example, this dimension lets you break down costs by specific database engine. You can separate your Aurora instances from MySQL or SQL Server, as in the example below:
[Image: RDS instance analysis broken down by engine]
Like other AWS customers, you might be planning a migration to Aurora. Using this measure, you can track its cost against that of your previous database engine.

You can now catalog all of the cost and usage details of your specific databases and the variety of instance types in use. This dimension works with RDS, Amazon CloudSearch, Amazon ElastiCache, and Amazon Redshift.

Pro-tip: Do RDS RI planning using “Engine”

[Image: RDS RI planning report combining Resource ID, Engine, instance type, and usage]
Now you can plan for RDS RIs using Cloudability. Combining Resource ID with Engine, instance type, and usage quantity gives you specific detail on which RDS instances run consistently enough to make the most of RIs. We covered which RDS costs to keep an eye on, along with RI planning for RDS, in a previous blog article.
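Outside the UI, the same idea looks roughly like the sketch below: group RDS instance-hour line items by database, engine, and instance class, then look for resources that run around the clock. The Engine column is assumed to be pre-derived (Cloudability builds it from encoded billing data), and the other column names are assumptions:

```python
import pandas as pd

bill = pd.read_csv("detailed_billing_with_resources.csv")

# RDS instance-hour line items (usage type convention assumed).
rds = bill[(bill["ProductName"] == "Amazon RDS Service")
           & bill["UsageType"].str.contains("InstanceUsage", na=False)]

# Hours and cost per database, engine, and instance class. Databases running
# close to 24 hours a day are the strongest RI candidates.
# "Engine" is assumed to be a pre-derived column, mirroring the dimension above.
summary = rds.groupby(["ResourceId", "Engine", "UsageType"]).agg(
    hours=("UsageQuantity", "sum"),
    cost=("UnBlendedCost", "sum"),
).sort_values("cost", ascending=False)
print(summary.head(15))
```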

Seeing the costs of specific AWS functions

Use the “Operation” dimension to identify AWS API commands associated with a given cost. The data in this dimension varies widely between AWS services. But, it can give you a very specific idea of how AWS resources and their functions individually drive cloud costs.

Example Operations include:

  • Amazon S3: GetObject, GlacierStorage, Other, PutObject, StandardStorage
  • Amazon CloudFront: GET, Other, SSL-Cert-Custom
  • Amazon DynamoDB: CommittedThroughput, Other, Standard Storage
  • Amazon Kinesis: PutRequest, shardHourStorage, CreateDBInstance, Other, RunComputeNode
  • … and many more AWS functions across services

The Operation dimension will make it easier for users to parse out things like S3 Get requests (to know which buckets are being accessed), or Send vs. Receive on SQS. We’re excited to see how customers use this data to understand their cost patterns down to the operation level, and how they use those insights to further improve their cloud cost management.
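As a final hedged sketch, the same breakdown against raw billing data would group S3 line items by bucket and Operation to separate request charges from storage charges (column names assumed, as before):

```python
import pandas as pd

bill = pd.read_csv("detailed_billing_with_resources.csv")

s3 = bill[bill["ProductName"] == "Amazon Simple Storage Service"]

# Cost per bucket and operation, e.g. GetObject vs. PutObject vs. StandardStorage.
by_operation = (s3.groupby(["ResourceId", "Operation"])["UnBlendedCost"]
                  .sum()
                  .unstack(fill_value=0))

# Rank buckets by total cost, with the per-operation breakdown alongside.
by_operation["Total"] = by_operation.sum(axis=1)
print(by_operation.sort_values("Total", ascending=False).head(10))
```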

We’ll introduce new ways to slice and dice cost and usage data at the resource level

There are countless ways to put these new measures to work within Cloudability’s Cost and Usage Analytics tool and dashboards. We’ll introduce more use cases in upcoming articles to explore how users can dig into cloud cost and usage patterns and trends, and transform those learnings into actionable, cost-efficiency adjustments to their AWS environments.

For current customers wanting to explore these use cases today, your Technical Account Managers are ready to rock, so just get in touch. For folks without a Cloudability account, we have a Free Trial available so you can try these new measures with your cloud cost and usage data.
