What a week it’s been for Amazon Web Services! We’ve already seen the release of the powerful new M4 instance family and new Reserved Instance options for RDS, and now, even more: a new instance size for the t2 family, which was announced last summer to provide low-cost instances with burstable performance.
The new t2.large instances are the family’s biggest members yet, and an apparent response to AWS users’ enthusiasm for the t2 family. In its announcement blog post, AWS stated that the t2.large addresses customer demand for more memory and a higher CPU baseline, and will best serve applications that don’t need full CPU capability most of the time but do occasionally need to burst. Use cases cited by AWS include dev environments and web servers.
As with any new AWS release, we were eager to see how the t2.large compares to other AWS offerings from both a performance and cost perspective—particularly with regard to memory and CPU.
Alongside the announcement of the new t2.larges, AWS provided a spec comparison across all sizes of t2:
When compared to its family members, the t2.large is clearly the performance frontrunner in terms of memory and baseline CPU. Here’s how that breaks down as an hourly cost per unit of performance, with the ECU value representing the baseline performance of the allocated vCPUs.
These prices actually break down in such a way that bigger isn’t necessarily better. In terms of RAM, t2s are priced exactly proportionally: whichever t2 size your application needs, it won’t be a better or worse deal than any other size from a RAM perspective. Cost per baseline CPU, however, is not so neatly proportional. It’s actually lowest for the t2.medium; the new t2.large costs about 2 cents more per ECU per hour (a small price to pay for those who regularly need that much baseline capacity). The micro and small sizes are the least cost-efficient for baseline CPU, at twice the t2.medium’s cost per ECU per hour.
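The math behind these comparisons is simple division: hourly price over capacity. Here’s a minimal sketch of that calculation; the figures in the example are hypothetical placeholders, not actual AWS prices, so consult the EC2 pricing page for current numbers.

```python
# Sketch of the cost-per-unit math used in this post. All dollar figures
# here are illustrative placeholders, NOT real AWS pricing.

def cost_per_unit(hourly_price: float, units: float) -> float:
    """Hourly price divided by capacity (GiB of RAM, or baseline ECU)."""
    return hourly_price / units

# Example: a hypothetical instance at $0.104/hr with 8 GiB of RAM
print(cost_per_unit(0.104, 8))  # 0.013 -> $0.013 per GiB per hour
```

Running the same division with an instance’s baseline ECU in place of GiB gives the cost-per-baseline-CPU figures discussed above.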
We’ve seen that the t2.large is the most powerful t2 instance yet, and that it’s priced more-or-less proportionately compared to other t2 sizes. But how do its compute and memory capabilities fare against comparable instances from other families across these same specs?
The table below compares the baseline ECU and the RAM of a t2.large to comparable instances in the General Purpose family, Memory Optimized family, and Compute Optimized family:
As the least expensive option of the four, the t2.large is actually more cost-effective than the m4 or c4 when it comes to memory, with a better price per GiB per hour. That said, the Memory Optimized r3.large is, appropriately, the most cost-effective of all. In terms of ECU, the t2.large is extremely limited at baseline performance levels, making it the most costly of these instances per ECU per hour. Won’t be utilizing those bursts? Consider the c4.large as a cheaper option.
If you’re already using t2 instances and want something with a little more oomph, the increased memory and baseline CPU of the t2.large is definitely something to get excited about. And if you’re looking to try out some burstable instances, the t2.large is certainly worth looking at for your applications with high demand (just remember that the t2.medium is slightly more cost effective if you can get away with it).
Ultimately, AWS has built a performant and reasonably well-priced addition to the t2 family. If you need the bursts and the memory, it’s worth the price. But how do you know if you need it?
You can check your own infrastructure for opportunities to introduce burstable instances by running a CPU Utilization Report within Cloudability, broken down by the hour:
If you find instances that are regularly running at low utilization except for certain times of day, they may qualify for replacement with a burstable t2 instance.
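That “low most of the day, busy for a few hours” pattern can be checked programmatically once you have hourly utilization data. Here’s a rough sketch of such a check; the function name and thresholds are illustrative assumptions, not AWS guidance, so tune them to your own workloads.

```python
# Rough sketch of a "bursty workload" check: given 24 hourly average CPU
# utilization samples (percent) for an instance, flag it as a t2 candidate
# if it sits below a low baseline most of the day but spikes occasionally.
# The thresholds below are illustrative assumptions only.

def is_t2_candidate(hourly_cpu, low=20.0, high=60.0, max_busy_hours=4):
    """Return True if utilization is low except for occasional bursts."""
    busy = [u for u in hourly_cpu if u >= high]
    quiet = [u for u in hourly_cpu if u <= low]
    # Mostly idle, with at least one real burst, but not busy all day.
    mostly_idle = len(quiet) >= len(hourly_cpu) - max_busy_hours
    return mostly_idle and 0 < len(busy) <= max_busy_hours

# A web server that idles all day but bursts at 9am and noon:
day = [5.0] * 24
day[9], day[12] = 85.0, 70.0
print(is_t2_candidate(day))  # True
```

An instance that runs hot around the clock would fail this check and remain better served by a fixed-performance family like m4 or c4.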