Six supercharged tips to reduce S3 bucket-related threats and ensure ‘water-tight’ cloud security

When it comes to AWS security, S3 buckets are arguably the most commonly exploited weak point. Misconfigured S3 buckets have resulted in large-scale data breaches at major organizations, including FedEx, Verizon, Dow Jones, and even WWE, and they continue to compromise data security at a staggering pace. Such breaches were avoidable, because AWS is designed to be highly secure when configured properly.

This article is the sequel to our AWS Security Logging Fundamentals — S3 Bucket Access Logging tutorial, and offers quick, actionable tips to help you prevent, monitor, and remediate S3 bucket-related breaches. Let’s review these techniques and best practices, curated by our in-house security specialists, to secure your cloud infrastructure and give S3 buckets the security boost they deserve.

To receive the next posts in this series via email, subscribe here!

The Need for Securing S3 Buckets

Companies typically migrate to AWS for the simplicity it promises. Without sound data governance, however, even the most sensitive data can end up in the cloud unprotected. To make matters worse, many companies migrate to AWS quickly, without dedicated personnel responsible for data security.

Most of the S3 bucket breaches we have seen in the past involved companies choosing the “all users” option, effectively making their data publicly accessible. It is fairly easy for an inexperienced user to misconfigure a bucket’s access control, and such a change can open your S3 buckets to public access, resulting in unauthorized access and data breaches. This ease of misconfiguration is exactly why S3 buckets are such a significant security concern.

Let’s dig into the following practical techniques you can employ to strengthen S3 bucket security:

Tip 1: Securing Your Data Using S3 Encryption

Encryption is an essential step towards securing your data. S3 offers the following two options to protect your data at rest:

  • Server-Side Encryption: AWS encrypts the raw data you send and stores it on disks in its data centers. When you retrieve your data, AWS reads it from disk, decrypts it, and sends it back to you.
  • Client-Side Encryption: You encrypt the data yourself before sending it to AWS, and decrypt it yourself after retrieving it.

Depending on your security and compliance requirements, you can choose an option that suits your preference. If you are fine with AWS managing the encryption process, then opt for Server-Side Encryption. Otherwise, go with Client-Side Encryption if your data is sensitive and you’d like to encrypt it yourself.

Additionally, encryption can be categorized further based on who manages the encryption keys:

  • SSE-S3: Server-side encryption with keys managed by Amazon S3
  • SSE-KMS: Server-side encryption with keys managed through AWS Key Management Service (KMS)
  • SSE-C: Server-side encryption with customer-provided keys
  • Client-Side Encryption: Keys managed entirely by you, either with a KMS key or a master key you store yourself

The following example describes how you can secure data in S3 buckets using SSE-S3:

  1. Go to the Management Console, click on S3 under Storage, and then click on Create bucket:

  2. Once you have created a bucket, you will be able to see the objects and data inside it.

  3. Next, click on the object’s checkbox, and you will see Encryption under Properties as shown in the following screenshot:

  4. When you click on the Encryption label, a new window will pop up, where you can select AES-256 to use Server-Side Encryption to encrypt your data. Then, hit Save.

  5. Go back and check the object properties again. The new encryption type will now be reflected:

Sit back and relax! Your file is now securely encrypted on AWS.
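If you prefer to script this instead of clicking through the console, default bucket encryption can also be set programmatically. Below is a minimal sketch of the configuration payload accepted by boto3’s `put_bucket_encryption` call; the bucket name is hypothetical, and the AWS call itself is left commented out so the snippet runs without credentials. Note that default encryption applies to objects written after it is enabled.

```python
import json

# Default-encryption configuration for SSE-S3 (AES-256).
# New objects written to the bucket will be encrypted at rest.
sse_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "AES256"
            }
        }
    ]
}

# With boto3 installed and AWS credentials configured, you would apply it with:
# import boto3
# boto3.client("s3").put_bucket_encryption(
#     Bucket="my-data-bucket",  # hypothetical bucket name
#     ServerSideEncryptionConfiguration=sse_config,
# )

print(json.dumps(sse_config, indent=2))
```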

Tip 2: Managing Access Control

Access control is the most important pillar of data security. We’ve identified five ways to control access to your S3 buckets and resources. Let’s review each of these access control methods to help you build an effective S3 security mechanism:

Limiting IAM User Permissions

Identity and Access Management (IAM) enables fine-grained access control. By applying the principle of least privilege, you can grant users only the minimum access and resources required to administer buckets or read and write data. This minimizes the chance of human error, one of the top causes of misconfigured S3 buckets and the data leakage that follows.

As a rule of thumb, start with the minimal set of permissions necessary and add more gradually as you need them. If you are getting started with IAM, or looking to accelerate your organization’s identity and data protection efforts, you may find our tutorial on AWS Identity and Access Management (IAM) Fundamentals useful for further reading.
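As an illustration of least privilege, an IAM policy for a user who only needs to list and read objects from a single bucket might look like the following (the bucket name is hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyAccessToSingleBucket",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-data-bucket",
        "arn:aws:s3:::my-data-bucket/*"
      ]
    }
  ]
}
```

Note that `s3:ListBucket` applies to the bucket ARN while `s3:GetObject` applies to the objects inside it, which is why both resource forms are listed.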

Restricting S3 Access Using Bucket Policies

Bucket policies are similar to IAM user policies, with the main difference being that bucket policies are attached directly to S3 resources. They give you a flexible way to manage bucket access with fine-grained permissions.

There are several situations in which you’d want to use bucket policies. Let’s review some of the common scenarios here.

When you are allowing access from a different AWS account or an internal AWS service (CloudTrail for example):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowExternalAccounts",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::123456789012:root",
          "arn:aws:iam::234567890234:root"
        ]
      },
      "Action": "s3:PutObject",
      "Resource": [
        "arn:aws:s3:::my-data-bucket/*"
      ]
    }
  ]
}

When you are allowing access from specific IP addresses or ranges:

{
  "Id": "AllowByIP",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my-data-bucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "157.130.196.214/32"
          ]
        }
      }
    }
  ]
}

You can also allow requests to the bucket only when certain conditions are met, such as requiring HTTPS, MFA, or time-based restrictions.
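For instance, a statement that denies object deletion unless the request was authenticated with MFA might look like this (the bucket name is hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDeleteWithoutMFA",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:DeleteObject",
      "Resource": "arn:aws:s3:::my-data-bucket/*",
      "Condition": {
        "BoolIfExists": {
          "aws:MultiFactorAuthPresent": "false"
        }
      }
    }
  ]
}
```

The `BoolIfExists` operator ensures the deny also applies to requests where the MFA key is absent entirely, such as requests signed with long-term access keys.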

Using S3 Access Points To Assign Access Policies

During re:Invent 2019 in Las Vegas, Amazon announced S3 Access Points, a feature to improve access control with mixed-use S3 buckets, resulting in easier management of bucket policies.

Before S3 Access Points, a single bucket policy had to cover all data inside a bucket, and expressing varying permissions for different datasets made those policies difficult to manage. S3 Access Points offer a new way to manage access to shared data at scale.

How Do S3 Access Points Work?

Each S3 Access Point has a unique hostname and its own access policy that describes how data can be accessed through that endpoint. Access point policies are similar to bucket policies, except that they apply only to the access point itself. Because each access point has a unique DNS name, addressing your buckets becomes easier. S3 Access Points can also be restricted to a Virtual Private Cloud (VPC), which helps firewall your S3 data within that private network. The following illustration explains how you can use S3 Access Points to simplify access management:
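To make this concrete, here is a sketch of an access point policy granting a single user read access through the access point; the account ID, user name, and access point name are all hypothetical. Note that the resource ARN refers to the access point, not the underlying bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/analytics-reader"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:us-east-1:123456789012:accesspoint/my-access-point/object/*"
    }
  ]
}
```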

Controlling Access Using ACLs

Before AWS IAM became popular, ACLs (Access Control Lists) were the traditional feature used to control access to S3. Misconfigured ACLs are one of the biggest reasons why S3 data leaks are so widespread.

ACLs can be applied at either the bucket or the object level. Simply put, Bucket ACLs offer access control at the bucket level, whereas Object ACLs offer access control at the object level. By default, Bucket ACLs grant access only to the account owner, but it is very easy to make your buckets publicly accessible by mistake, which is why AWS recommends against using them.

Amazon also offers a set of predefined grants and permissions for S3 known as canned ACLs. For example, to make an S3 bucket private, you can use the private canned ACL. Similarly, to make an S3 bucket public, use the public-read canned ACL, which grants Read access to all users. And the log-delivery-write canned ACL grants the LogDelivery group Write and Read-ACP permissions, which is also how S3 server access logging is enabled. For a complete list of all canned ACLs and their related predefined grants, refer to the canned ACL table in the link here.

Using Amazon S3 Block Public Access

Finally, Amazon offers a centralized way to restrict public access to your S3 resources. By using the Amazon S3 Block Public Access setting, you can override any bucket policies and object permissions set before. It should be noted that block public settings can only be used for buckets, AWS accounts, and access points. You can learn more about using Amazon S3 Block Public Access here.
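A minimal sketch of enabling all four Block Public Access settings follows, shaped as the payload for boto3’s `put_public_access_block` call. The bucket name is hypothetical, and the AWS call is commented out so the snippet runs offline:

```python
# The four Block Public Access settings. Enabling all of them
# blocks new public ACLs and policies and neutralizes existing ones.
public_access_block = {
    "BlockPublicAcls": True,        # reject new public ACLs
    "IgnorePublicAcls": True,       # ignore any existing public ACLs
    "BlockPublicPolicy": True,      # reject new public bucket policies
    "RestrictPublicBuckets": True,  # restrict access if a public policy exists
}

# With boto3 installed and AWS credentials configured:
# import boto3
# boto3.client("s3").put_public_access_block(
#     Bucket="my-data-bucket",  # hypothetical bucket name
#     PublicAccessBlockConfiguration=public_access_block,
# )

print(public_access_block)
```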

Tip 3: Maximizing S3 Reliability With Replication

By having a data protection strategy that focuses on maximizing resilience, your organization can enhance S3 security and reliability efforts. Let’s review five such strategies to empower your data security choices:

  • Creating data copies: This is the most commonly used strategy because it reinforces data protection. You can centralize and automate backup processes with the AWS Backup service, which supports most AWS services, including Amazon EFS, DynamoDB, RDS, EBS, and Storage Gateway.
  • Selecting availability levels based on workload requirements: S3 storage classes come with different availability levels. Use an Infrequent Access (IA) class for low-priority workloads and move to a higher class of service when your workloads demand higher availability. This keeps storage optimized for each workload, since overinvesting in storage is costly.
  • S3 versioning: Data is also at risk from disasters and infrastructure failures. With S3 versioning you can retrieve deleted or overwritten data, avoiding tricky and cumbersome backup restoration processes. Versioning saves a new version of an object on every PUT, COPY, or POST, and inserts a delete marker on DELETE. You can learn more about how to configure versioning on an S3 bucket here.
  • Using Cross-Region Replication (CRR): CRR removes a single point of failure and improves data availability by replicating objects to a bucket in a different region. In addition to availability, CRR also helps meet compliance standards that require your data to be stored in different geographic locations.
  • Using Same-Region Replication (SRR): If regulatory compliance requires storing data locally or in the same region, SRR replicates objects to another bucket within that region. Note that S3 already stores every object redundantly across multiple physically separated Availability Zones within a region, so replication protects against bucket-level mistakes and failures rather than device failures.

To learn more about replication, please refer to the Replication section of the AWS Developer Guide.

Tip 4: Enforcing SSL

Using SSL/TLS is a great way to secure communication with S3 buckets. By default, S3 bucket data can be accessed over either HTTP or HTTPS, which means an attacker could theoretically mount a man-in-the-middle (MITM) attack on your requests to S3.

Let’s understand this with an example. To enforce end-to-end encryption on all traffic to a bucket, you can apply a bucket policy containing an explicit deny condition as shown here:

{
  "Id": "ExamplePolicy",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSSLRequestsOnly",
      "Action": "*",
      "Principal": "*",
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::awsexamplebucket",
        "arn:aws:s3:::awsexamplebucket/*"
      ],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}

All requests that do NOT use HTTPS will now be denied.

Tip 5: Enhancing S3 Security Using Logging

S3 bucket access logging captures information on all requests made to a bucket, such as PUT, GET, and DELETE actions. This empowers your security team to identify attempted malicious activity within your buckets.

Logging is a recommended security best practice that can help teams uphold compliance standards, identify unauthorized access to their data, or perform a data breach investigation. In our tutorial on S3 Bucket Access Logging, we’ve offered detailed step-by-step instructions on how to leverage S3 bucket access logging to maximize security.
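As covered in depth in that tutorial, access logging boils down to pointing a source bucket at a separate target bucket. A minimal sketch of the payload for boto3’s `put_bucket_logging` call follows; bucket names and the prefix are hypothetical, and the AWS call is commented out so the snippet runs offline:

```python
# Access-logging configuration: write access logs for the source
# bucket into a separate log bucket under the given key prefix.
logging_config = {
    "LoggingEnabled": {
        "TargetBucket": "my-log-bucket",    # hypothetical log bucket
        "TargetPrefix": "s3-access-logs/",  # prefix for log objects
    }
}

# With boto3 installed and AWS credentials configured:
# import boto3
# boto3.client("s3").put_bucket_logging(
#     Bucket="my-data-bucket",  # hypothetical source bucket
#     BucketLoggingStatus=logging_config,
# )

print(logging_config["LoggingEnabled"]["TargetPrefix"])
```

Keeping logs in a dedicated bucket (rather than the source bucket itself) avoids the recursive situation where log deliveries generate more logs.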

Tip 6: Putting S3 Object Locking To Work

S3 Object Locking makes it difficult to delete data from S3. Malicious actors have been known to cause damage to organizational data predominantly in two ways:

  • By stealing data
  • By deleting data or assets

S3 Object Locking addresses the latter by preventing an object from being deleted or overwritten. It essentially makes the S3 object immutable, offering two ways to manage object retention: specifying a retention period, or placing a legal hold until you release it.

S3 Object Lock also helps you meet regulatory requirements that mandate WORM (write once, read many) storage, or simply adds an extra layer of protection for compliance purposes. To learn more about how S3 Object Lock can help you with regulatory requirements, click here.

Enabling Object Lock

To enable Object Lock when creating a bucket, follow these steps:

  1. In the Management Console, go to S3 under Storage and click on Create bucket:

  2. After you input a bucket name and hit Next, you’ll see a checkbox under Object lock in Advanced settings as shown below:

  3. Click on the checkbox, and you’ll see a notification indicating that Object Lock is enabled. You can now click Next and continue creating your bucket.

Note: Object Lock works only when versioning is enabled, so that retention periods and legal holds can apply to individual object versions.
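Beyond the console, a default retention rule can be applied to an Object Lock-enabled bucket via boto3’s `put_object_lock_configuration` call. Here is a sketch of the payload; the bucket name and 30-day retention period are hypothetical, and the AWS call is commented out so the snippet runs offline:

```python
# Object Lock configuration: a default 30-day COMPLIANCE-mode
# retention period applied to every new object version.
object_lock_config = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {
            "Mode": "COMPLIANCE",  # cannot be shortened or removed by any user
            "Days": 30,
        }
    },
}

# With boto3 installed and AWS credentials configured:
# import boto3
# boto3.client("s3").put_object_lock_configuration(
#     Bucket="my-data-bucket",  # hypothetical bucket with Object Lock enabled
#     ObjectLockConfiguration=object_lock_config,
# )

print(object_lock_config["Rule"]["DefaultRetention"]["Mode"])
```

GOVERNANCE mode is the gentler alternative to COMPLIANCE: users with special permissions can still override or remove the retention settings.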

To learn more about applying Amazon S3 Object Locks, check out the official AWS documentation here.

How Panther Can Help

Panther is a powerful, cloud-native SIEM designed to analyze both security logs and cloud resources for total visibility of your infrastructure and resources. Panther directly supports S3 access log analysis and comes pre-installed with common detections for S3 attacks.

By deploying Panther, you can gain real-time insights into your environment and identify every suspicious attempt. Moreover, Panther’s log analysis allows you to convert log data into meaningful insights without being overwhelmed by machine-generated data. Panther equips you with everything you need to stay a step ahead in the battle against data breaches.

Wrapping Up

AWS promises high levels of security when understood and configured properly. Unfortunately, many organizations don’t have the resources and skills needed to build and maintain highly secure AWS environments. It is far too easy to misconfigure S3 buckets and make your data publicly accessible, and with hundreds of services and tools on offer, the complexity of AWS makes total visibility hard to achieve.

We hope that by following the techniques and strategies described in this article, you’ll prevent S3 buckets from being misconfigured, protect your IT workloads, and prevent data breaches.

Thanks for reading! Subscribe here to receive a notification whenever we publish a new post.

References

  1. Unsecured server exposed thousands of FedEx customer records
  2. Millions of Verizon customer records exposed in security lapse
  3. Data Governance
  4. Maximize Amazon S3 reliability with shrewd choices
  5. AWS Security Logging Fundamentals — S3 Bucket Access Logging
  6. AWS Identity and Access Management (IAM) Fundamentals
  7. Replication: AWS Developer Guide
  8. Using Amazon S3 Block Public Access