What is Intelligent Tiering?
Intelligent Tiering uses monitoring and automation to move data between a frequent-access (FA) tier and an infrequent-access (IA) tier for cost optimization. In other words, Intelligent Tiering ensures you’re not paying FA prices for data that isn’t being accessed.
Intelligent Tiering is completely automatic. You don’t have to do anything except write data into the S3 Intelligent Tiering storage class, and it takes care of the rest. Intelligent Tiering monitors access patterns at the object level to decide whether it should move an object. If an object hasn’t been accessed for 30 consecutive days, Intelligent Tiering moves it from FA to IA. If an object in IA is accessed, Intelligent Tiering immediately moves it back to FA.
While there is a monthly monitoring and automation fee, there are no data retrieval fees, so you don’t have to worry about unexpected bill spikes if access patterns change. There’s also no performance impact when an object moves from one access tier to the other. Intelligent Tiering has the same performance characteristics as S3 Standard, with access times in the milliseconds. It is designed for 99.9% availability over a given year and 99.999999999% durability of objects across multiple Availability Zones.
What’s It For?
One of the most common ways customers move data stored in S3 from one tier to another is with S3 Storage Class Analysis and Lifecycle policies. Storage Class Analysis lets you analyze storage access patterns to help you decide when to transition data from one S3 storage class to another. With this knowledge, you can write effective Lifecycle policies that, after some amount of time you specify, move the right data to a less expensive storage tier.
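To make that concrete, here’s a minimal sketch of a Lifecycle policy set through boto3. The bucket name, the logs/ prefix, and the 90-day threshold are illustrative assumptions, not recommendations; in practice you’d pick the threshold based on what Storage Class Analysis tells you.

```python
import boto3

s3 = boto3.client("s3")

# Illustrative Lifecycle rule: after Storage Class Analysis suggests that
# objects under "logs/" go cold after roughly 90 days, transition them to
# S3 Standard-IA. Bucket name, prefix, and threshold are assumptions.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "cool-down-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "STANDARD_IA"}],
            }
        ]
    },
)
```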
These two features work really well together if your data has predictable access patterns. However, many customers have data with changing access patterns. For example, perhaps access frequency cools off for a bit and heats back up when you run a big analytics job. In fact, there are many use cases where data becomes active again after it’s been moved down to a colder storage tier. This is exactly the situation S3 Intelligent Tiering is designed to handle. If you have unpredictable workloads, changing access patterns, or simply don’t want to constantly analyze your access patterns, you may want to look into S3 Intelligent Tiering.
What Are the Use Cases?
Intelligent Tiering is designed for three use cases:
- Unpredictable workloads
- Changing access patterns
- Lack of experience with storage optimization
Let’s look at an example of each one.
Unpredictable workloads
Say there’s a company that collects and stores satellite images. These images are accessed very frequently at the beginning of their lives, but after some time access cools off. Months can go by without anyone looking at these images. After 30 days without access, Intelligent Tiering moves the images to IA, which lowers costs. However, if an unpredictable event, such as an earthquake, occurs in a particular geographic area, people become very interested in those images once again. Because Intelligent Tiering monitors the objects, it automatically moves them from the IA tier back to the FA tier, and customers can access them again with no performance impact.
Changing access patterns
A company using S3 for its data lake has hundreds of data scientists who use many different applications and store objects in S3 every day. Some scientists store objects and forget about them; others store objects and access them constantly. Access patterns change all the time. With S3 Intelligent Tiering, if an object isn’t accessed for 30 days, it moves to the IA tier and the company saves money.
What about experience?
The last use case is about how much experience your company has in understanding and optimizing storage patterns. Many customers don’t know how to optimize storage, or they simply don’t want to deal with it because they want to focus on their core business. But ignoring storage costs can get very expensive, very quickly. If you store one petabyte in S3 Standard, it’s going to cost about $24,500 a month, or almost $300,000 a year. With Intelligent Tiering, a company can automate storage optimization and start saving money without investing its own time in the optimization process.
How Do I Use It?
You assign data to the S3 Intelligent Tiering storage class just as you do with any other S3 storage class, using either the console or the API. From the console, you simply select Intelligent Tiering from the Storage Class menu. You can also create a Lifecycle policy that transitions existing data to the Intelligent Tiering class 30 days after it’s been created. For programmatic access, use the INTELLIGENT_TIERING storage class from the S3 CLI and S3 APIs, as in the sketch below.
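As a hedged sketch of the programmatic path, the boto3 calls below upload a new object directly into Intelligent Tiering and add a Lifecycle rule that moves existing data over after 30 days. The bucket, key, and local file names are made up for illustration.

```python
import boto3

s3 = boto3.client("s3")

# Upload a new object straight into Intelligent Tiering.
# Bucket, key, and local file name are illustrative placeholders.
s3.upload_file(
    Filename="site-42.tif",
    Bucket="example-bucket",
    Key="images/site-42.tif",
    ExtraArgs={"StorageClass": "INTELLIGENT_TIERING"},
)

# Transition existing objects to Intelligent Tiering 30 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-intelligent-tiering",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"}
                ],
            }
        ]
    },
)
```

From the CLI, the equivalent of the upload is `aws s3 cp site-42.tif s3://example-bucket/images/ --storage-class INTELLIGENT_TIERING`.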
What’s It Cost?
Just like the other storage classes, you pay for monthly storage, requests, and data transfer. Storage for objects in the frequent-access tier is billed at the same rate as S3 Standard, and storage for objects in the infrequent-access tier is billed at the same rate as S3 Standard-Infrequent Access. You also pay a small monthly monitoring and automation fee of $0.0025 per 1,000 objects.
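For a rough sense of scale, here’s a back-of-the-envelope sketch in Python. The per-GB rates and the object count are assumptions for the arithmetic only (actual prices vary by region and over time); the $0.0025 per 1,000 objects is the monitoring fee mentioned above.

```python
# Back-of-the-envelope monthly cost for a hypothetical 10 TB bucket.
# The per-GB rates and object count below are assumed for illustration;
# only the $0.0025 per 1,000 objects monitoring fee comes from the text.
fa_gb, ia_gb = 2_000, 8_000        # GB sitting in the FA / IA tiers (assumed)
objects = 5_000_000                # monitored objects (assumed)

fa_rate = 0.023                    # assumed $/GB-month, S3 Standard-like
ia_rate = 0.0125                   # assumed $/GB-month, Standard-IA-like

storage = fa_gb * fa_rate + ia_gb * ia_rate   # $46 + $100
monitoring = objects / 1_000 * 0.0025         # $12.50

print(f"storage ~ ${storage:,.2f}/month, monitoring ~ ${monitoring:,.2f}/month")
```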
Anything to Look Out For?
There are only a few things to look out for if you’re considering Intelligent Tiering. One is that Intelligent Tiering has a minimum storage duration of 30 days; if you typically delete data sooner than that, it’s not a good fit for you. The other point to consider is object size. If you have many small objects, they can affect both your monitoring fees and your storage fees: objects smaller than 128 KB are never moved to the IA tier, so you’re always paying frequent-access prices for them (one way to filter them out is sketched below). By and large, if your objects are at least in the megabytes, have a relatively long lifetime, and fit into one of the use cases we discussed, then definitely take a look at S3 Intelligent Tiering.
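If small objects are a concern, one option is a size filter on the Lifecycle rule that feeds Intelligent Tiering. A minimal sketch, assuming your SDK and region support the ObjectSizeGreaterThan Lifecycle filter and using an illustrative bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Only transition objects larger than 128 KB (131,072 bytes) into
# Intelligent Tiering, so small objects that can never reach the IA tier
# don't accrue monitoring fees. Bucket name is illustrative.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "large-objects-to-intelligent-tiering",
                "Filter": {"ObjectSizeGreaterThan": 131072},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"}
                ],
            }
        ]
    },
)
```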