Allocating Resource Costs When Using Containers

Container adoption has been growing at a whirlwind pace for years, with our State of the Cloud report listing a 246% growth rate in 2017. And with good reason. Docker turned five recently, containerd hit the 1.0 milestone last December, and Kubernetes (K8s) is now in its fifth year since release. Container use is only going to grow in the future, especially with the wide variety of managed Kubernetes services created within the last few years. The three major cloud players have each rolled out K8s services — GKE for Google Cloud, AKS for Azure and EKS for AWS — and there are a host of company-specific solutions like Red Hat’s OpenShift, Heptio, Platform9 and Docker EE.

But once containers have become widespread in your organization, how do you maintain visibility into cloud resource allocation? Just as importantly, how do you tie cloud resource use back to specific teams? The very nature of containers makes resource cost allocation a sticky situation that requires a tailored approach.

Containers Are All About Sharing Resources

Picture a busy shipping dock, one of the enormous ones with train lines, truck depots, half a dozen massive ships, and stacks upon stacks of modular containers. Those modular containers are a good way to picture software containers. Instead of having to deal with the hassle of purchasing trucks, trains, ships and everything that goes along with them, companies just load up a container and contact a shipping logistics company. That logistics company then uses a massive amount of shared resources to route the containers where they need to go.

Software containers offer a similar benefit. Instead of having to spin up a VM every time an application is run, developers share clusters. The orchestration tool takes care of all the logistics of making sure the application gets the resources it needs to run, scaling up and down as needed across multiple applications. It’s one of the greatest strengths of container-based architecture.

Tracking and Charging Back Shared Resources is Always Tough

The advantages of shared resources inherently make it tricky to tie the cost of resources back to specific teams. Traditional cost-tracking methods are going to have trouble providing the amount of precision that’s needed.

Think about tracking the charges for those modular shipping containers as they move between ships, trains and trucks. A single train can carry a hundred different containers, while a ship can carry thousands, not to mention the shared resources of the crane and loading crews. So who gets charged for what service, and how is resource use divvied up? The amount of resources needed to move a container from point A to point B is going to vary wildly depending on weight, distance traveled, remoteness of location and other factors.

Tracking and allocating costs for container-based systems runs into similar trouble. A single cluster composed of many nodes could be shared by a large number of different teams. Understanding which team used which portion of the memory, CPU, disk and network of the underlying instances is very difficult without building your own specialized tool to track it.
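One common way to reason about this problem (a general technique, not any particular vendor’s tool) is to apportion an instance’s cost across the pods that ran on it, in proportion to the resources each team’s pods consumed. A minimal sketch in Python — the team names, CPU figures and the $0.40/hour rate are all illustrative:

```python
# Sketch: split one instance-hour's cost across teams in proportion
# to the CPU each team's pods consumed. Names and prices are made up.

def allocate_instance_cost(instance_cost, pods):
    """pods: list of (team, cpu_cores_used) tuples for one instance-hour."""
    total_cpu = sum(cpu for _, cpu in pods)
    costs = {}
    for team, cpu in pods:
        costs[team] = costs.get(team, 0.0) + instance_cost * (cpu / total_cpu)
    return costs

pods = [
    ("checkout", 1.5),  # team label -> CPU cores used this hour
    ("search",   0.5),
    ("checkout", 2.0),
]
print(allocate_instance_cost(0.40, pods))
# checkout used 3.5 of 4.0 cores, so it carries 87.5% of the $0.40
```

A real system would weigh memory, disk and network too, and would read usage from cluster metrics rather than a hard-coded list, but the proportional split is the core idea.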

The Key is Linking the Right Costs to the Right Pieces

If that shipping logistics company wants to stay in business, they need to figure out exactly which resources are used on each container and make sure it gets billed back to the right company. Successful shipping logistics companies have complicated systems for this exact purpose — systems often built by third-party vendors. These systems link the scannable markings on each container to specific clients and invoices with charges based on weight, shipping distance, labor and other costs. They know exactly how many resources will be used to move a container of TVs from Shanghai through the Port of Los Angeles via train to Phoenix and then on trucks to stores across Arizona.

Allocating and charging back costs for container-based architecture needs to have that level of precision if it’s going to be effective. Fortunately, K8s orchestration systems bring identification to the table in the form of namespaces, services and labels, among other objects. The trick is finding the link between those objects, the resources they use, and the specific charges associated with those resources.
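As a concrete illustration of that identification, a team can be tied to its workloads by attaching a label in the pod’s metadata — the names and values below are hypothetical, not a required convention:

```yaml
# Pod metadata carrying a team label for cost allocation
# (names, namespace and image are illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: checkout-api
  namespace: ecommerce
  labels:
    team: checkout        # the key a cost-allocation system could group by
spec:
  containers:
  - name: api
    image: example/checkout-api:1.0
    resources:
      requests:
        cpu: "500m"
        memory: 256Mi
```

Because labels and namespaces travel with every object the scheduler places, they serve the same role as the scannable markings on a shipping container: a durable identifier that usage and cost data can be joined against.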

At Cloudability, our approach to this problem is an allocation system that pulls usage data and attaches costs to K8s objects. Costs can then be broken down by namespace, label, cluster or service. In addition to showing the allocation of costs within a K8s system, this methodology allows us to find how much of these resources are idle, helping teams be more efficient and optimize their costs. Since container-based systems are known for changing rapidly, we also made sure to pull the data frequently so it’s as accurate as possible.
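To make the idle-capacity idea concrete, here is a hedged sketch — not Cloudability’s actual implementation — of the arithmetic involved: capacity that is provisioned but not claimed by any pod is idle, and its share of the cluster’s bill can be surfaced separately. The cluster size and hourly rate below are illustrative:

```python
# Sketch: estimate the cost of idle cluster capacity for one hour.
# A real system would pull capacity and requests from the K8s API
# and rates from the cloud provider's billing data.

def idle_cost(cluster_cpu, requested_cpu, cluster_hourly_cost):
    """Cost of provisioned-but-unrequested CPU for one hour."""
    idle_fraction = max(cluster_cpu - requested_cpu, 0) / cluster_cpu
    return cluster_hourly_cost * idle_fraction

# A 16-core cluster at $1.60/hour with 10 cores requested by pods
# leaves 6 cores (37.5% of spend) idle:
print(idle_cost(16, 10, 1.60))
```

Surfacing that number per namespace or label is what turns a raw cloud bill into something a team can act on.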

While the shared resources of containers introduce new challenges when it comes to allocating costs, the challenges are not insurmountable. As with shipping containers, it all comes down to having a system that ties teams to specific resource use, then correctly ties that resource use to costs. Once that system is in place, companies will be able to lower costs, increase efficiency and get the most out of their cloud.

Ready to start allocating your container costs? Sign up for your free trial of Cloudability.
