
Strategies to reduce Amazon EBS Storage Costs


In this article, we will describe some strategies to reduce Amazon EBS costs.

This article is part of our AWS Cost Reduction series.

Amazon Elastic Block Store (Amazon EBS) is the main storage used for EC2 instances. It’s persistent and hosts both the instance operating system and user data.

Unfortunately, Amazon EBS costs can scale up very quickly, and reducing them doesn’t seem easy. But it’s certainly possible.

How EBS Volumes are charged

First, let’s explore how Amazon EBS volumes are priced. They are charged per GB provisioned. So the bigger the volume, the more you pay.

They are also charged from the moment they are created until they are deleted. So it doesn’t matter if a volume is not attached to an instance, or is attached to a stopped instance. It will be charged anyway.

Another important point: “provisioned” means it doesn’t matter how much data is actually stored inside the volume. For example, you provision a 500 GB volume, but you only use 10 GB. You will still be charged for the whole 500 GB, even though most of the volume space is unused.
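
As a rough illustration, here is a quick calculation assuming a gp2 rate of about $0.10 per GB-month (the exact rate varies by region). Only the provisioned size matters:

    # Rough monthly cost estimate for a provisioned EBS volume.
    # The $0.10 per GB-month rate is an assumption; check current regional pricing.
    GP2_RATE_PER_GB_MONTH = 0.10

    provisioned_gb = 500
    used_gb = 10  # irrelevant for billing: only the provisioned size counts

    monthly_cost = provisioned_gb * GP2_RATE_PER_GB_MONTH
    print(f"Monthly cost: ${monthly_cost:.2f}")  # ~$50, even though only 10 GB is used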

Using the right EBS Type

There are 4 types of EBS Volumes:

  • General Purpose SSD (gp2) Volumes
  • Provisioned IOPS SSD (io1) Volumes
  • Throughput Optimized HDD (st1) Volumes
  • Cold HDD (sc1) Volumes

The most common type of EBS volume is the first one. It’s based on solid-state drives (SSD). You should start with this volume type for your instances. If you realize that your volume is used intensively, you will need to either add more space to the volume or switch to a Provisioned IOPS volume. Both options are more expensive, but they allow adding more IOPS to your current volume. You can also watch the CloudWatch metrics for the volume to understand whether IOPS usage is getting close to the IOPS limit. Note also that the minimum size of these volumes is 1 GB (gp2) and 4 GB (io1).
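
For example, here is a minimal boto3 sketch (the volume ID is a placeholder) that compares the busiest hour of the last day against the gp2 baseline of 3 IOPS per provisioned GB:

    import boto3
    from datetime import datetime, timedelta, timezone

    # Placeholder volume ID; replace with your own.
    VOLUME_ID = "vol-0123456789abcdef0"
    PERIOD = 3600  # one-hour datapoints

    ec2 = boto3.client("ec2")
    cloudwatch = boto3.client("cloudwatch")
    now = datetime.now(timezone.utc)

    def peak_hourly_iops(metric_name):
        # Highest hourly sum of operations in the last 24 hours, as ops per second.
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EBS",
            MetricName=metric_name,
            Dimensions=[{"Name": "VolumeId", "Value": VOLUME_ID}],
            StartTime=now - timedelta(hours=24),
            EndTime=now,
            Period=PERIOD,
            Statistics=["Sum"],
        )
        sums = [dp["Sum"] for dp in stats["Datapoints"]]
        return (max(sums) / PERIOD) if sums else 0.0

    size_gb = ec2.describe_volumes(VolumeIds=[VOLUME_ID])["Volumes"][0]["Size"]
    baseline_iops = min(max(3 * size_gb, 100), 16000)  # gp2 baseline: 3 IOPS per GB

    observed = peak_hourly_iops("VolumeReadOps") + peak_hourly_iops("VolumeWriteOps")
    print(f"Peak hourly IOPS ~{observed:.0f} vs gp2 baseline {baseline_iops}")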

Throughput Optimized HDD and Cold HDD volumes are over 50% cheaper than the other two. They are based on hard disk drives (HDDs). They are good for applications that read or write large amounts of sequential data, like logs or ETL workloads. Note that the minimum provisioned size for them is 500 GB.

In terms of cost reduction, the first step is to define the type of volume you need for each instance. If you keep lots of persistent data inside EBS, you should consider migrating it to a Throughput Optimized HDD (st1) or Cold HDD (sc1) volume.
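
If the volume is already at least 500 GB, the type can be changed in place through Elastic Volumes; a minimal boto3 sketch (the volume ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")

    # Placeholder volume ID; the volume must be at least 500 GB for st1/sc1.
    resp = ec2.modify_volume(VolumeId="vol-0123456789abcdef0", VolumeType="st1")
    print(resp["VolumeModification"]["ModificationState"])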

Migrating to smaller volumes

You might think that reducing Amazon EBS costs means just reducing the EBS volume sizes. The problem is that this isn’t possible using the AWS Console or CLI. AWS treats EBS volumes as blocks of data. It doesn’t understand the file systems inside them, or how data is arranged inside the volume. So AWS can increase the volume size, but it can’t reduce it.

But there is a manual process that allows you to migrate a volume to a smaller one. The volume has to be mounted on another EC2 instance. Then the files have to be copied into a new, smaller volume. And this new volume will replace the original one. We can call this process a volume migration.
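
Here is a rough outline of the AWS side of that migration with boto3 (the instance ID, volume IDs, zone, and device names are placeholders). The partitioning, file copy, and any bootloader work still have to be done inside the helper instance with OS tools:

    import boto3

    ec2 = boto3.client("ec2")

    HELPER_INSTANCE_ID = "i-0123456789abcdef0"  # placeholder helper EC2 instance
    AVAILABILITY_ZONE = "us-east-1a"            # must match the helper instance's AZ

    # 1. Create the new, smaller volume and wait until it is available.
    new_vol = ec2.create_volume(AvailabilityZone=AVAILABILITY_ZONE, Size=50,
                                VolumeType="gp2")
    ec2.get_waiter("volume_available").wait(VolumeIds=[new_vol["VolumeId"]])

    # 2. Attach both the old and the new volume to the helper instance.
    ec2.attach_volume(VolumeId="vol-0aaaaaaaaaaaaaaaa",  # placeholder: old, oversized volume
                      InstanceId=HELPER_INSTANCE_ID, Device="/dev/sdf")
    ec2.attach_volume(VolumeId=new_vol["VolumeId"],
                      InstanceId=HELPER_INSTANCE_ID, Device="/dev/sdg")

    # 3. Inside the helper instance: create a file system on the new volume and
    #    copy the data across (for example with rsync). Then detach the new
    #    volume and attach it to the original instance in place of the old one.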

Reducing data on volume

Before migrating to a smaller volume, we need to free some space on it. There are many ways to do this, but let’s mention some ideas:

  • Removing unnecessary files
  • Removing unused applications
  • Eliminating temporary files or caches

You could use free tools like Tree File Size (for Windows-based OSes) or Disk Usage Analyzer (for Ubuntu), or just the ncdu command on Linux-based systems.

Initially deploying small EBS volumes

This is another strategy to reduce your EBS costs. If you need to deploy a new EBS volume, try to create a small one. You can then expand the volume size later. But remember that reducing the volume size takes much more work.
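
For example, growing a volume later is a single API call, after which the file system still has to be extended inside the OS (for example with growpart and resize2fs or xfs_growfs on Linux). A minimal boto3 sketch, with a placeholder volume ID:

    import boto3

    ec2 = boto3.client("ec2")

    # Placeholder volume ID; the new size must be larger than the current one.
    ec2.modify_volume(VolumeId="vol-0123456789abcdef0", Size=100)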

Removing unattached volumes

This is another frequent issue. Clients might have volumes that are unattached. They might have terminated their EC2 instance but forgot to delete the associated volume. These volumes are not used, but they are still charged. So consider performing a quick review of all your volumes and identifying which ones are not in use. To find them, just check in the AWS Console which ones aren’t in the “in-use” state (for example, those in the “available” or “error” states).
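
A quick boto3 sketch to list those volumes:

    import boto3

    ec2 = boto3.client("ec2")

    # List volumes that are not attached to any instance ("available" or "error").
    paginator = ec2.get_paginator("describe_volumes")
    pages = paginator.paginate(
        Filters=[{"Name": "status", "Values": ["available", "error"]}]
    )

    for page in pages:
        for vol in page["Volumes"]:
            print(vol["VolumeId"], vol["Size"], "GB,", vol["State"])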

Using “Delete on Termination”

Delete on Termination is an option you can set when launching your EC2 instance. It automatically removes the EBS volume attached to the instance when that instance is terminated. So this avoids keeping an unattached volume around. Use it with caution, because the data on the EBS volume will be lost.
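
The flag can also be changed on a running instance; a minimal boto3 sketch (the instance ID and root device name are placeholders, and the device name depends on the AMI):

    import boto3

    ec2 = boto3.client("ec2")

    # Placeholder instance ID and device name (often /dev/xvda or /dev/sda1).
    ec2.modify_instance_attribute(
        InstanceId="i-0123456789abcdef0",
        BlockDeviceMappings=[{
            "DeviceName": "/dev/xvda",
            "Ebs": {"DeleteOnTermination": True},
        }],
    )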

Moving data to S3

Large files in EBS volumes could also be moved to S3. For example, S3 Standard rates are 77% below General Purpose SSD (gp2) volume prices per GB. Additionally, you pay only for the data you actually store, and S3 capacity doesn’t need to be provisioned. And the data is replicated across at least three AZs.
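
Moving a large archive, for example, is a single call with boto3 (the bucket name and paths are placeholders):

    import boto3

    s3 = boto3.client("s3")

    # Placeholder bucket and paths; large, rarely accessed files are good candidates.
    s3.upload_file("/data/archive/2023-logs.tar.gz",
                   "my-archive-bucket",
                   "logs/2023-logs.tar.gz")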

Reducing IOPS

If you are using Provisioned IOPS SSD (io1) volumes, remember that you also pay for the provisioned IOPS capacity. You should check the current IOPS usage in CloudWatch and compare it with the provisioned amount. This might allow you to reduce the provisioned IOPS rate and get an extra cost reduction.
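
The same kind of CloudWatch check shown earlier applies here. Once you confirm that actual usage stays well below the provisioned rate, lowering it is one call (the volume ID and new IOPS value are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # Placeholder volume ID; lower the provisioned IOPS only after confirming in
    # CloudWatch that actual usage stays well below this value.
    ec2.modify_volume(VolumeId="vol-0123456789abcdef0", Iops=2000)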

Using EC2 Instance Store

Instance Store volumes are attached directly to the host of the EC2 instance. They have high throughput and very low latency. But these volumes are temporary: the data is available only while the instance is running. When the instance is stopped (or terminated), the data is lost. So these volumes are ideal for buffers, caches, and temporary data.

If you use large amounts of temporary data, you should consider moving it to an instance store volume. Keep in mind that only some EC2 instance types support it.
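
To check which instance types include instance store volumes, a query like this can help (a boto3 sketch):

    import boto3

    ec2 = boto3.client("ec2")

    # List instance types that come with instance store volumes.
    paginator = ec2.get_paginator("describe_instance_types")
    pages = paginator.paginate(
        Filters=[{"Name": "instance-storage-supported", "Values": ["true"]}]
    )

    for page in pages:
        for itype in page["InstanceTypes"]:
            storage_gb = itype["InstanceStorageInfo"]["TotalSizeInGB"]
            print(itype["InstanceType"], storage_gb, "GB of instance store")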

Avoid Windows-based Operating Systems

If you have small disks, it’s a good idea to avoid Windows-based OSes. The files belonging to the OS take up over 25 GB, and this can keep growing with OS updates. If you can switch to another OS, you will also save many GBs on each volume.

Defining Snapshot Policies

A snapshot is a point-in-time copy of an EBS volume. Snapshots are stored in S3 and replicated across three AZs within the region, so they are highly redundant.

Snapshots are also incremental. This means that new backups store only the data that changed since the last snapshot. If the data on the EBS volume changes frequently (for example, when the instance hosts a database that ingests data daily), snapshots will be larger, and the cost will be higher. But if the data is fairly static, snapshots will be small.

Snapshots can be created manually by a user at a certain point in time. Or they can be created by Data Lifecycle Manager. In the latter case, AWS automatically creates the snapshots for you at regular intervals (from 1 to 24 hours). It also keeps only a fixed number of backups and removes the oldest ones.
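
Here is a sketch of such a Data Lifecycle Manager policy with boto3, using illustrative values (the IAM role ARN, account ID, and target tag are placeholders). It snapshots volumes tagged Backup=daily every 24 hours and keeps only the 14 most recent copies:

    import boto3

    dlm = boto3.client("dlm")

    # Placeholder IAM role ARN and target tag.
    dlm.create_lifecycle_policy(
        ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
        Description="Daily EBS snapshots, 14-day retention",
        State="ENABLED",
        PolicyDetails={
            "ResourceTypes": ["VOLUME"],
            "TargetTags": [{"Key": "Backup", "Value": "daily"}],
            "Schedules": [{
                "Name": "daily-snapshots",
                "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS",
                               "Times": ["03:00"]},
                "RetainRule": {"Count": 14},
            }],
        },
    )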

Snapshot pricing is approximately 50% of the per-GB cost of EBS storage. But these costs can increase quickly if you keep several snapshots of a volume, or if its data changes fast.

It’s important to define a company-wide backup policy. For example, you could decide that snapshots for production servers must be retained for 14 days, and that development servers don’t need backups at all. But in the end, this will depend on how critical the information is.

If you need help to reduce your AWS bill, you can send us an email. We are happy to help.
