t2.large vs. m4.large: An AWS EC2 Comparison

The AWS EC2 t2.large and m4.large instances look very similar. It can be a challenge to decide which is best from a price and performance perspective. We’ll explore the benefits of each instance from a cloud cost and efficiency point of view.
NOTE: The T2 and M4 generations have been replaced by the T3 and M5 generations. To see our post comparing T3 and M5, click here.


A right-sized, optimized cloud environment gives anyone the best savings. So, let’s explore some of the reasons why your engineering team might choose one of these EC2 instances over the other.

We’ll look at what each option means from a cost management standpoint, which development environments and scenarios fit each instance best, and how visualizing cloud compute costs gives a better understanding of potential cloud efficiency.

Let’s start by looking at their specs, as listed on the AWS EC2 website.

Specs & Pricing – t2.large vs. m4.large

The two are similar, but each has its own nuances.

Specs

| Instance | Memory | Compute units | Cores | Storage | EBS-optimized? |
|----------|--------|---------------|-------|---------|----------------|
| t2.large | 8.0 GB | burstable | 2 | EBS | n/a |
| m4.large | 8.0 GB | 6.5 units | 2 | EBS | yes |


Pricing

| Instance | Linux On-Demand | Linux Reserved Instance | Windows On-Demand | Windows Reserved Instance |
|----------|-----------------|-------------------------|-------------------|---------------------------|
| t2.large | $0.104/hr | $0.072/hr | $0.134/hr | $0.106/hr |
| m4.large | $0.120/hr | $0.083/hr | $0.246/hr | $0.184/hr |

This exercise uses U.S. West instance pricing from 2016; the rates are subject to change whenever AWS updates its pricing.

Both the t2.large and m4.large feature dual-core vCPUs, with the t2 sporting high-frequency Intel Xeon processors that turbo up to 3.3GHz. The m4s feature 2.4GHz Intel Xeon Haswell CPUs, which AWS markets as being “optimized for EC2.” While the t2.large offers burstable compute, the m4.large delivers a fixed 6.5 compute units.

Both instances feature the same amount of memory, and both require provisioning AWS EBS volumes. As far as cost management goes, users should be ready to account for EBS spending as well when using either the t2.large or the m4.large. If storage access speed is a big deal, it’s important to note that the m4.large features EBS optimization. If local storage is preferred, users will have to look at previous generations of the M family or use S3, for instance.

Linux users can take advantage of some pretty low, comparable pricing whether using on-demand or Reserved Instance rates for either the t2.large or m4.large. For environments requiring Windows, the m4.large hourly prices are nearly double that of the t2.large.
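
To put those hourly rates in monthly terms, here is a quick back-of-the-envelope calculation using the on-demand prices from the table above; the ~730 hours per month figure is an assumption for illustration:

```python
# Rough monthly cost from the on-demand hourly rates listed above,
# assuming roughly 730 hours in a month (an illustrative assumption).
HOURS_PER_MONTH = 730

rates = {
    ("t2.large", "Linux On-Demand"): 0.104,
    ("m4.large", "Linux On-Demand"): 0.120,
    ("t2.large", "Windows On-Demand"): 0.134,
    ("m4.large", "Windows On-Demand"): 0.246,
}

for (instance, platform), hourly in rates.items():
    print(f"{instance} {platform}: ${hourly * HOURS_PER_MONTH:.2f}/month")
```

The Linux gap works out to roughly $12 per instance per month, while the Windows gap is closer to $80, which is where the “nearly double” difference really shows up.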

Consider the t2.large’s bursting compute capacity

The t2.large’s ability to burst lets users go beyond the instance’s baseline compute power through the use of CPU credits.

These credits are kept for 24 hours, with a maximum balance of 864. The instance can only burst for as long as there are CPU credits to spend. So, it’s a good idea for t2 users to have a means of tracking how CPU credits are earned and used.
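
One lightweight way to do that tracking is a quick CloudWatch query. The boto3 sketch below uses a placeholder instance ID, region, and time window, and is a starting point rather than a full monitoring setup:

```python
# Sketch: pull a t2 instance's CPU credit balance from CloudWatch with boto3.
# The instance ID and region are placeholders -- swap in your own values.
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUCreditBalance",  # CPUCreditUsage shows credits being spent
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=24),
    EndTime=datetime.utcnow(),
    Period=3600,  # one datapoint per hour
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"].isoformat(), round(point["Average"], 1))
```

A steadily shrinking balance over the course of a day is an early hint that a workload is outgrowing the t2’s baseline.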

When users spin up new t2 instances, there’s a pool of CPU credits available to push the cores to full speed. Combine this with EBS’s “boot burst” and the generally low cost of getting t2s going, and AWS users have a great starting point for their development or test environments.

There are AWS users out there who use the t2.large instance in their production environment as well. The confidence in making such a move comes from understanding their actual instance utilization using a cloud cost management tool. Users can support their overall compute workloads and make the most of the t2.large’s low rates and bursting without “breaking the bank”.

When “burstability” isn’t a fit

The t2.large’s vCPU utilization is an aggregate value across both cores. So if one core runs at 100% and the other at 40%, the aggregate is 70%, which is above the t2.large’s 60% threshold and counts as burst mode. And at the credit cap of 864, the t2.large has a reserve of nearly 14.4 hours of “burstability”.
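
For a rough sense of where that 14.4-hour figure comes from, here is the back-of-the-envelope arithmetic, assuming AWS’s definition that one CPU credit equals one vCPU running at 100% for one minute:

```python
# Back-of-the-envelope burst reserve for a t2.large at its credit cap.
# Assumes one CPU credit = one vCPU running at 100% for one minute.
MAX_CREDITS = 864            # t2.large credit cap (36 credits/hr earned * 24 hr)
MINUTES_PER_VCPU_HOUR = 60   # credits burned per vCPU-hour at full utilization

one_core_hours = MAX_CREDITS / MINUTES_PER_VCPU_HOUR  # 14.4 hours on one core
both_cores_hours = one_core_hours / 2                 # ~7.2 hours with both pegged

print(f"One vCPU at 100%:  {one_core_hours:.1f} hours of burst")
print(f"Both vCPUs at 100%: {both_cores_hours:.1f} hours of burst")
```

In other words, the full reserve buys about 14.4 hours of one vCPU flat out, or roughly half that if both cores are pegged.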

In an environment that bursts infrequently, leaving time for CPU credits to be restored, this is operationally fine. However, if the development environment sits in a steady burst mode, consuming all available credits across both cores throughout the day (14.4 hours only goes so far) without a chance to recover CPU credits, that’s a sign that a larger instance, or a different, higher-performance family, might be a better fit.

Before spinning up new projects or migrating, users should monitor critical EC2 metrics from past projects, like vCPU utilization, memory use, and disk I/O, to get a sense of whether either EC2 instance is a solid fit. Nothing drains cost efficiency or puts production at risk quite like a poorly provisioned EC2 instance!
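
As a toy illustration of turning those observed metrics into a decision, a rough classifier might look like the sketch below. The function, thresholds, and sample numbers are assumptions for this post, not an AWS rule:

```python
# Toy heuristic (illustrative assumption, not an AWS rule): classify a workload
# as burstable-friendly or steady from its observed CPU utilization.
def suggest_family(avg_cpu_pct: float, max_cpu_pct: float, baseline_pct: float = 60.0) -> str:
    """Return a rough instance-family suggestion from past utilization."""
    if avg_cpu_pct < baseline_pct and max_cpu_pct > baseline_pct:
        return "t2.large: mostly under the baseline, with occasional spikes to absorb"
    if avg_cpu_pct >= baseline_pct:
        return "m4.large (or larger): sustained load would drain t2 CPU credits"
    return "t2.large: comfortably under the baseline even at peak"

print(suggest_family(avg_cpu_pct=35.0, max_cpu_pct=85.0))
print(suggest_family(avg_cpu_pct=70.0, max_cpu_pct=90.0))
```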

The bottom line: it depends on your project

While similar in specifications, the t2.large seems suited to new web development and engineering projects that require a low cost of entry, a decent level of performance, and the leeway to handle utilization spikes throughout the project. This is great for web presences and apps that expect a surge in user growth but don’t quite know what to expect as their baseline load. Assuming vCPU needs don’t grow beyond two cores, the t2.large is a solid choice.

Meanwhile, the m4.large can be a great choice for projects with known, stable, and expected CPU, memory, and storage utilization. This is assuming the user can make the most of the m4.large’s EBS optimization and the 6.5 available units of compute.

As the AWS EC2 site states, the m4 family is “great for many web server applications and other general uses.” Users who are still building an understanding of their compute needs and require an environment that can handle spikes and processing anomalies will get the most out of the t2.large. Ultimately, it’s an understanding of the project’s compute needs that will determine which is the best fit.

Another way to see it…

| Scenario | Recommendation |
|----------|----------------|
| “My team is building a slick new app/project and we’re prototyping and evolving our understanding of the app’s utilization of AWS and user loads along the way.” | t2.large |
| “My team is building the next version of our flagship app and we need a solid EC2 instance to run it.” | m4.large |

What’s most important here is that EC2 users understand their actual compute needs and can tailor their choice to what their environment requires. Doing so lets teams confidently provision EC2 instances, and also look ahead and apply Reserved Instances to get the most savings and compute for their money.

Using a cloud cost monitoring tool, like Cloudability, can provide actual facts and figures for that exercise. Let us know if you’d like to check it out; we’d love to offer a free trial.
