For folks wanting a compute-optimized AWS EC2 instance on a budget, you might have noticed that the t2.medium has similar specs to the c4.large. We’ll explore these two instances closely to see whether there’s more bang for the buck in using the cheaper t2.medium, and what users can expect from a cost and usage perspective.
What’s the difference between the two compute-optimized families?
For AWS users seeking CPU-optimized cloud computing, comparing the EC2 T2 and C4 families is the place to begin. Both families deliver compute-optimized performance, with a few nuances that EC2 users should be aware of before provisioning either choice.
The C4 family delivers a fixed amount of optimized compute performance. It provides more compute than the general-purpose M4 family, while offering less memory. C4 instances feature EC2-optimized, high-frequency Intel Xeon E5-2666 v3 (Haswell) processors, and a wide range of instance sizes is available within the family, from the c4.large up to the c4.8xlarge.
The T2 family also offers optimized compute, but in a different way. It provides high-frequency Intel Xeon processors with turbo modes up to 3.3GHz that can deliver a burst in compute capacity whenever vCPU utilization rises above a baseline threshold. When a T2 instance runs below that baseline, it accrues CPU credits that it can spend on later bursts.
Specs & Pricing - t2.medium vs. c4.large
We’ve noticed, with our customers operating new projects, that until AWS EC2 users understand the actual resource needs and limitations of their work (Does the project require more compute? More memory? More disk I/O?), either going with the general-purpose M4 or optimizing for compute with the C4 seems to be where users begin. And then we have the T2 sizes throwing affordable options into the mix. So, which is the more efficient, compute-optimized starting choice?
First, a look at specifications and pricing for our compute-optimized options:
| Instance | Memory | Compute units | Linux On-Demand Cost | Windows On-Demand Cost |
|-----------|---------|------------------------------|----------------------|------------------------|
| t2.medium | 4.0 GB  | Burstable, with credit limit | $0.052 per hour      | $0.072 per hour        |
| c4.large  | 3.75 GB | 8 units                      | $0.105 per hour      | $0.193 per hour        |
This exercise uses U.S. West pricing, which is subject to change whenever AWS makes pricing updates.
From this comparison, we see that the t2.medium offers slightly more memory and the burstable compute feature, but lacks the C4 family’s larger instance sizes and its EBS optimization. Note: EBS volumes can still be attached to a t2.medium; it just won’t get the increased throughput that the C4 offers EBS users.
The t2.medium costs roughly half the hourly rate of the c4.large on both Linux and Windows, which means users can access the t2.medium’s specs while reaping significant savings over time. So, where can the t2.medium shine, and what do users give up by going the more cost-effective route?
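To put the hourly difference in monthly terms, here is a quick sketch using the on-demand Linux rates from the table above (rates are illustrative and subject to change; the 730-hour month is AWS’s conventional average):

```python
# Rough monthly on-demand cost comparison, Linux rates from the table above.
# These rates are a snapshot and subject to AWS pricing updates.
HOURS_PER_MONTH = 730  # AWS's conventional average hours per month

linux_rates = {"t2.medium": 0.052, "c4.large": 0.105}  # $ per hour

for instance, rate in linux_rates.items():
    print(f"{instance}: ${rate * HOURS_PER_MONTH:.2f}/month")

monthly_savings = (linux_rates["c4.large"] - linux_rates["t2.medium"]) * HOURS_PER_MONTH
print(f"t2.medium saves about ${monthly_savings:.2f}/month on Linux")
# → roughly $38.69/month in savings, if the workload fits the t2.medium
```

Over a year of continuous use, that gap compounds to several hundred dollars per instance, which is why the fit question below matters.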
The t2.medium can be an “elastic” choice for beginning dev and test projects
From a development and testing standpoint, the t2.medium, and other T2 sizes, can be a great, cost-effective choice, as well as a way to get a baseline understanding of a project’s compute utilization. Their ability to burst vCPU performance offers a kind of “free” elasticity and scaling for new projects.
Assuming the instances have time to recuperate bursting credits (see more vCPU bursting details below), T2s will help projects scale whenever there is demand for higher levels of compute. Users who monitor these conditions can then get a better sense of what to actually provision for in a live, production environment, and determine what kind of actual (non-T2-bursting) scaling and elasticity to provide.
t2.medium limitations: scalability and bursting limits
From a cost standpoint, the t2.medium, assuming it fits a given workload, will be more cost-effective at first, but it can run into a few limitations as a project grows.
Its burstability is beneficial during times of high CPU utilization, but the ability to burst is finite. As we cover in our other article about the T2’s bursting capability, there are limits to this high-compute state.
Each instance in the T2 family has a burstable CPU credit limit. In other words, it saves up credits to burst, and once those credits are used up, the bursting capability is no longer available until they regenerate. With the t2.medium’s cap of 576 burst credits, it can sustain a CPU burst state for up to 9.6 hours. To keep bursting available, the t2.medium needs time operating below the burst-triggering performance level in order to regain credits, and the team will need to plan for that low-utilization time.
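The 9.6-hour figure falls out of AWS’s credit model, where one CPU credit equals one vCPU running at 100% utilization for one minute. A minimal sketch of that arithmetic (the single-vCPU assumption matches the figure above; the t2.medium actually has two vCPUs, so bursting both halves the duration):

```python
# T2 burst-duration math, assuming AWS's credit model:
# one CPU credit = one vCPU at 100% utilization for one minute.
T2_MEDIUM_CREDIT_CAP = 576  # maximum accrued credits, per the figure above
T2_MEDIUM_VCPUS = 2         # the t2.medium has two vCPUs

def burst_hours(credits, vcpus_bursting=1):
    """Hours a full (100%) burst can be sustained across the given vCPUs."""
    return credits / (60 * vcpus_bursting)

print(burst_hours(T2_MEDIUM_CREDIT_CAP))                   # 9.6 hours, one vCPU
print(burst_hours(T2_MEDIUM_CREDIT_CAP, T2_MEDIUM_VCPUS))  # 4.8 hours, both vCPUs
```

The practical takeaway: a workload that keeps both vCPUs pegged will drain the credit bank roughly twice as fast as the headline figure suggests.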
From a scaling standpoint, the T2 family does not have the larger instance sizes that the C4 family provides. Though C4 instances deliver a fixed rate of performance across sizes, they come in sizes up to the c4.8xlarge, compared to the t2.large at the top of the T2 range. EC2 users often size up their compute instances as their projects require more resources, and T2 users will hit that ceiling sooner than C4 users.
Monitoring utilization closely opens opportunities to save
Regardless of which instance AWS users start with, they can get a better sense of utilization and efficiency by monitoring critical EC2 metrics, like vCPU utilization and disk I/O, to see whether the t2.medium or the c4.large is a solid fit. Monitoring and adjusting EC2 instances to maintain that fit improves cost efficiency over the life of the project.
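One concrete signal for T2 users is the `CPUCreditBalance` metric that CloudWatch reports for T2 instances. A hypothetical helper for interpreting a series of balance samples (the 50-credit low-water mark and the 50% threshold are assumptions for illustration, not AWS guidance; pull the samples from CloudWatch however you prefer):

```python
# Hypothetical rule of thumb for right-sizing a t2.medium from
# CloudWatch CPUCreditBalance samples. The thresholds are assumptions.
def t2_fit_signal(credit_balance_samples, low_water=50):
    """Flag a T2 instance whose credit balance keeps running near empty."""
    low = sum(1 for balance in credit_balance_samples if balance < low_water)
    if low / len(credit_balance_samples) > 0.5:
        return "consider c4.large: burst credits frequently near exhaustion"
    return "t2.medium looks sustainable"

# Mostly depleted balances suggest the workload has outgrown bursting:
print(t2_fit_signal([12, 4, 30, 410]))
# Healthy balances suggest the cheaper t2.medium still fits:
print(t2_fit_signal([220, 310, 180]))
```

The exact rule matters less than the habit: watching credit balances over a few representative weeks turns the t2.medium-vs-c4.large question into a data-driven decision rather than a guess.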
It gets even better: once the proper instance families and sizes are chosen, investing in one- to three-year Reserved Instances can help save even more on EC2.
For folks wanting to see this kind of cloud cost and usage reporting at work, please reach out to us for a free trial of Cloudability. Or, get in touch with our EC2 experts to learn more.