Make Kubernetes Your Competitive Advantage, Not High-Interest Tech Debt

Understand the three main modes of running containers on public cloud and pick the right one for your engineering profile.

Amazon, Microsoft and Google are innovating at breakneck speed when it comes to providing options for running containerized infrastructure on public cloud. We interviewed Colin Panisset, contributor to the open-source Kubernetes (K8s) and Docker projects and Director of Transformation at Cevo Australia, to explore the major decision points and governance models that are key to a successful container deployment. The cloud enablement experts at Cevo work with some of Australia’s largest enterprises on transformative projects involving the major and secondary cloud providers.

Making informed decisions at key moments can be the difference between building a mountain of high-interest tech debt versus infrastructure that gives your business an ongoing competitive advantage.

Finding the right balance between flexibility and engineering effort

Over the course of five years with Cevo, Colin has worked on container-based transformation projects both big and small, experience that allows him to pinpoint successful strategies and pitfalls to avoid. Colin says organizations should understand the three main modes of running containers on public cloud and pick the right one for their engineering profile:

  • Hand rolled: “Hand rolling” your own K8s involves the heaviest lift on your end, but gives you the most control. In this mode you are responsible for everything: provisioning servers for the control plane and worker nodes, maintaining the deployed platform software (including the K8s components), and setting the numerous required configuration items. This mode of K8s can be run on-premises or on any public cloud.
  • Managed Service: Each of the three major public clouds offers its own managed Kubernetes service (AWS EKS, Microsoft AKS and GCP GKE), offloading a significant amount of the heavy lifting, including hosting the control plane and automated provisioning and updating of worker nodes. However, your engineering team is still responsible for many configuration and monitoring tasks.
  • Proprietary-Serverless: Each of the major providers also offers a fully managed serverless option for running your containers (AWS ECS, Azure Container Instances, GCP Cloud Run). Beyond being serverless, these proprietary offerings aren’t actually K8s at all; instead, orchestration is implemented by each cloud vendor. While this mode is an opinionated implementation (less flexibility), you won’t have to worry about most configuration, scaling and monitoring concerns. The goal of the cloud vendor is to offload enough of this work to provide autonomous container operations for your engineering team.
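For a feel of the proprietary-serverless mode above, here is a minimal sketch of a GCP Cloud Run service definition (the service and image names are hypothetical). Note what is absent: there is no cluster, node pool, or control plane to declare anywhere.

```yaml
# Hypothetical Cloud Run service: the vendor handles orchestration,
# scaling, and all node management. You declare only the container.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: orders-api                                # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/orders-api:1.0.0   # hypothetical image
          resources:
            limits:
              cpu: "1"
              memory: 512Mi
```

Compare this with the hand-rolled mode, where every one of those absent concerns (nodes, control plane, upgrades, networking) would be yours to define and operate.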

When considering which approach to take when rolling out a container initiative, it’s worth thinking about containers on public cloud as a “cloud-inside-a-cloud.” This inside cloud has its own unique set of cloud concepts across areas like infrastructure (e.g., load balancers, network, autoscaling), identity & access management (IAM), and governance. Right now, with the current maturity of K8s itself, even the managed services offered by the cloud vendors require significant engineering effort, typically a dedicated platform team, to manage this complexity and ensure the desired benefits are realized.

“Be as diligent as possible in identifying how many platform engineers you’ll need to build and maintain each container environment, then clarify if the expected business value or competitive advantage realized justifies this investment,” Colin says.

For example, Colin has found that for hand-rolling K8s platforms, the investment is so large and the work so complex that it generally doesn’t make sense unless you are Google or Amazon. Colin says that taking the managed service route makes K8s viable to more enterprises. This is especially true for organizations that are running multi-tenant (either as a SaaS provider or internal platform) infrastructure at scale.

Colin is seeing a surprising amount of positive momentum with proprietary-serverless offerings, where implementation for the customer is generally straightforward enough that energy isn’t required to spin up a team to own and manage the container environment. This is making containerization on public cloud accessible to far more enterprises, with many now running thousands of workloads in this way. “It may not be quite as cracking fast to spin up a new container (however, it’s getting close), but the time and effort saved by not having to worry about undifferentiated work is huge,” Colin says.

Organizations may be concerned by the degree of vendor lock-in and reduced flexibility that comes with offloading more of the heavy lifting to the cloud vendor. Colin advises acknowledging that there will be a material degree of vendor lock-in regardless of the approach you take. Lock-in arises because you will inevitably integrate with other cloud services, because the managed service offerings each implement K8s differently, and because of the many ways leaky abstractions occur.

Separating production and non-production workloads

Regardless of how you host your software applications, there have always been various options for how to separate production and non-production workloads. Although this is also true of containers, there is one approach that stands out with clear benefits for permissioning and cost allocation.

“The cleanest way to separate prod and non-prod workloads is to have dedicated clusters,” Colin says. “Achieving the same level of safety via Namespaces or another construct requires nuanced engineering that is excessively costly.”

For Colin this has been a lesson learnt working alongside Cevo’s customers. “When attempting to separate prod and non-prod data using Namespaces, it was just too easy for things to go wrong, for the wrong data to be accessed, for the wrong data to be deleted!”
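To see why Namespace-based separation demands that nuanced engineering, consider the kind of RBAC policy it relies on. The sketch below is hypothetical (group, namespace, and binding names are illustrative): a single RoleBinding scoped to the wrong Namespace is all it takes to expose prod data to a non-prod team.

```yaml
# Hypothetical RBAC for Namespace-based prod/non-prod separation.
# Safety hinges on every binding like this being scoped correctly:
# change the namespace below to "app-prod" and non-prod developers
# can read and delete prod data.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-team-edit
  namespace: app-nonprod          # the single field doing all the safety work
subjects:
  - kind: Group
    name: app-developers          # hypothetical developer group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                      # built-in K8s aggregated role
  apiGroup: rbac.authorization.k8s.io
```

With dedicated clusters, by contrast, the blast radius is bounded by cluster credentials: a developer who can’t authenticate to the prod cluster can’t touch prod data, no matter how a binding inside the non-prod cluster is misconfigured.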

Governance focused on your business systems or engineering possibilities?

Colin suggests your cloud center of excellence (CCoE) think about governance and how to allocate costs as early as possible. For K8s, just like the public cloud it’s hosted on, there is an incredible amount of flexibility and many engineering possibilities for grouping resources. Unless you have previous experience or a template to guide you, there’s a good chance your first attempt at governance will be wrong. So, use a light touch to begin with in case things change over time (restructures will happen!), and remain nimble where you can.

From the beginning, you’ll want to report the required financial numbers back to the business. You can think of this as your “top-level allocation” (TLA): for example, the cost associated with each customer, department or business unit.

This TLA can be achieved in several different ways, but one method that has consistently worked well is to scope this by Namespace. In this model, your clusters separate prod and non-prod environments and are region specific, while your Namespaces provide a “vertical scoping” with labels cutting across horizontally (individual components can be deployed multiple times across Namespaces).
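Under this model, a Namespace carries the top-level allocation while labels provide the horizontal cut. A minimal sketch, assuming a hypothetical “payments” business unit (all names and label keys are illustrative; choose keys that match your own taxonomy):

```yaml
# Hypothetical Namespace scoped to one business unit for cost allocation.
apiVersion: v1
kind: Namespace
metadata:
  name: payments-bu               # vertical scope: the TLA boundary
  labels:
    business-unit: payments       # the top-level allocation (TLA)
    environment: prod             # clusters already separate prod/non-prod; label aids reporting
---
# A workload deployed into that Namespace, with labels cutting across
# horizontally. The same component can be deployed into several
# Namespaces and still be grouped by its labels.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
  namespace: payments-bu
  labels:
    app: checkout                 # horizontal cut: component identity
    team: payments-platform       # hypothetical owning team
spec:
  replicas: 2
  selector:
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:
        app: checkout
    spec:
      containers:
        - name: checkout
          image: registry.example.com/checkout:1.4.2   # hypothetical image
```

Because cost falls out of the Namespace while ownership and component identity fall out of labels, the two dimensions can evolve independently when the org chart changes.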

Cluster, Namespace, and label governance model for K8s

Once there is a need to add more metadata to identify service name, application name, project, cost type, etc., consider where the operational strengths of your organization lie. Does your company have a strong CMDB and supporting practices? If so, adopt a labelling policy that attaches a single CMDB ID to each workload and use your cost management solution to map all other details. If not, consider adding all metadata as separate labels.
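The two labelling strategies can be sketched as alternative metadata fragments (the label keys and CMDB ID below are hypothetical):

```yaml
# Option A: strong CMDB and supporting practices -- attach a single
# configuration item ID and resolve everything else (service, app,
# project, cost type) in your cost management solution.
metadata:
  labels:
    cmdb-id: "CI0012345"          # hypothetical configuration item ID
---
# Option B: no reliable CMDB -- carry all metadata as separate labels
# directly on the workload.
metadata:
  labels:
    service-name: checkout
    application: online-store
    project: replatform-2024
    cost-type: run                # e.g. run vs change
```

Option A keeps the cluster-side policy simple and puts the maintenance burden on the CMDB; Option B keeps everything visible in-cluster but means every re-org touches live workload metadata.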

Once you have your governance model in place, even an initial one, you can leverage Apptio Cloudability to accurately allocate the associated resource costs back to the business in near-real time and fully integrate this information into your broader cloud financial management process.

Set yourself up to deliver business outcomes into the future

Running K8s and container technologies on public cloud is a hot topic. While the rapid innovation from the cloud providers is opening many exciting implementation options, keep top of mind the level of complexity and ongoing engineering commitment attached to each.

“What is shiny right now too easily becomes tech debt for tomorrow, something that will actually slow you down in the future,” Colin says. “Most of all, make choices that align to your organizational profile—ones that are guided by how they will differentiate your business.”

If you’re looking for expertise to drive your cloud initiatives forward, contact the team at Cevo.

Additional Resources