How to fix misconfigured AWS S3 buckets to avoid data leaks


Amazon S3 buckets are private by default. However, if there’s one thing to take away from FedEx’s data leak earlier this year, it’s that accidents can happen if you’re not careful.

During deployments, organizations often expose their data to major risks and leaks. Sometimes they don’t understand how cloud storage should be configured and connected to their compute resources; other times they don’t know how to implement a hardened cloud storage solution.

It’s more important than ever that companies get a firmer grasp on the S3 hardening process to avoid potential security issues.

Luckily, S3 hardening isn’t complex.

3 steps to improve S3 security

Organizations should take a SIEM-like (security information and event management) approach and manage security information through hardening, encryption, access control, and public/private access restriction. Security event management improves security by using proper tools to monitor things such as data transfer encryption, user access, and intrusion detection and mitigation.


This process can be broken down into three steps:

  1. Begin intelligence gathering to understand what you’re trying to accomplish. In this case, that’s where and how data should be stored in S3.
  2. Using the information gathered, determine whether these buckets need to be attached to customer/web-facing instances or whether the data stored in them is to remain in private space. Then plan your implementation accordingly. Any S3 bucket created to hold critical data should never be exposed to direct access; there should always be a VPC in place (see the sketch after this list).
  3. Finally, put into place steady state and continuous operations. Set up logging, monitoring with event triggers, and best practices around intrusion detection and prevention.
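
As a concrete illustration of steps 2 and 3, here’s a minimal boto3 sketch that routes S3 traffic through a gateway VPC endpoint and turns on server access logging. The VPC, route table, and bucket names are hypothetical placeholders, and the logging target bucket must already grant the S3 log delivery group write access.

```python
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# Step 2: route S3 traffic through a gateway VPC endpoint instead of the
# public internet. The VPC and route table IDs are placeholders.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)

# Step 3: enable server access logging so every request against the bucket
# is recorded for later review. Both bucket names are placeholders.
s3.put_bucket_logging(
    Bucket="example-critical-data-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "example-access-log-bucket",
            "TargetPrefix": "s3-access/",
        }
    },
)
```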

Oftentimes in the case of S3, the best-practice approach is to “do nothing”: since buckets are private out of the gate, leaving the defaults alone keeps them locked down. However, this is not the best approach for every use case.

Implement permissions to harden your data stores

The first step after creating an S3 bucket is to set up proper permissions around it. Best practices for achieving proper hardening and security for your data stores include:

1. Implement Identity and Access Management (IAM) policies. Use organization or user role-based access to the buckets. This ensures that only the proper parties have access to the data you’re protecting.

2. Put bucket policies into place. Set permissions around specific users and groups for access (a sketch applying a few of these controls follows the list), including:

  • Read-only permissions
  • Multi-factor authentication
  • HTTP access restriction using referrers, query string authentication, and URL-based access
  • IP access control, creating access control lists (ACL) to whitelist or blacklist specific networks from accessing your data
  • Cross-account access to the bucket, enabled or disabled
  • VPC endpoint restriction to specific endpoints or to specific VPCs
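
Here is a minimal sketch of a few of these controls in boto3, assuming a hypothetical bucket named example-critical-data-bucket and a trusted network of 203.0.113.0/24; it denies non-HTTPS transport, denies deletes without MFA, and restricts reads to the whitelisted network.

```python
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "example-critical-data-bucket"  # hypothetical bucket name

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Deny any request that is not sent over HTTPS.
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
        {
            # Deny deletes unless the caller authenticated with MFA.
            "Sid": "DenyDeleteWithoutMFA",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:DeleteObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        },
        {
            # Deny reads from any network outside the whitelisted CIDR.
            "Sid": "IpWhitelistReads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {"NotIpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
        },
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(bucket_policy))
```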

After properly hardening your S3 buckets, you should begin security event monitoring around them.

How to set up security event monitoring for S3 buckets

CloudWatch isn’t going to alert you to security events on its own. However, you can leverage it to set triggers around specific types of access or request issues. Set thresholds that create notifications when identified patterns reflect a potential data-access attempt by bad actors.

From here, set thresholds around failed GET requests from the same source or multiple sources, HTTP HEAD requests from unknown sources, LIST requests, and the 4xxErrors and 5xxErrors metrics. These are just a few patterns to look for to help detect intrusion attempts on your data early; a sketch of one such alarm follows.
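
As one hedged example of such a threshold, the boto3 sketch below opts a bucket into request metrics and alarms when the 4xxErrors metric spikes. The bucket name, threshold, and SNS topic ARN are assumptions to adapt to your environment.

```python
import boto3

s3 = boto3.client("s3")
cloudwatch = boto3.client("cloudwatch")

BUCKET = "example-critical-data-bucket"  # hypothetical
FILTER_ID = "EntireBucket"               # request-metrics filter covering the whole bucket

# Request metrics (GetRequests, 4xxErrors, 5xxErrors, ...) are opt-in per bucket.
s3.put_bucket_metrics_configuration(
    Bucket=BUCKET,
    Id=FILTER_ID,
    MetricsConfiguration={"Id": FILTER_ID},
)

# Alarm when client errors spike, e.g. repeated failed GETs against the bucket.
cloudwatch.put_metric_alarm(
    AlarmName="s3-4xx-spike",
    Namespace="AWS/S3",
    MetricName="4xxErrors",
    Dimensions=[
        {"Name": "BucketName", "Value": BUCKET},
        {"Name": "FilterId", "Value": FILTER_ID},
    ],
    Statistic="Sum",
    Period=300,                 # 5-minute windows
    EvaluationPeriods=1,
    Threshold=50,               # tune to your traffic baseline
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:security-alerts"],  # hypothetical topic
)
```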

Another step to ensure security is to set up rules in AWS Config to monitor for bucket creation outside of your policy. Do this by creating an IAM role and policy that grants a Lambda function permission to read all S3 bucket policies and send alerts through Simple Notification Service (SNS), as sketched below.
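
A sketch of that role and policy in boto3 follows; the role name, policy name, and exact permission list are illustrative assumptions.

```python
import json

import boto3

iam = boto3.client("iam")

# Let the Lambda service assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="s3-bucket-audit-role",  # hypothetical name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Allow the function to read bucket policies/ACLs and publish SNS alerts.
audit_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "s3:ListAllMyBuckets",
            "s3:GetBucketPolicy",
            "s3:GetBucketAcl",
            "sns:Publish",
        ],
        "Resource": "*",
    }],
}

iam.put_role_policy(
    RoleName="s3-bucket-audit-role",
    PolicyName="s3-bucket-audit-policy",  # hypothetical name
    PolicyDocument=json.dumps(audit_policy),
)
```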

Follow this by creating and configuring a CloudWatch Events rule that reviews ACLs and checks for out-of-scope grants, as well as anything else that would violate your bucket-creation policies. You can then create a Lambda function, using the role and policy created above, to review your S3 buckets and send notifications for noncompliant bucket creations; a minimal handler is sketched below.
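
A minimal handler along those lines might look like this; the public-grantee check and SNS topic ARN are illustrative assumptions, not a canned AWS sample.

```python
import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")

# Canned ACL grantees that make a bucket world- or account-wide readable.
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:security-alerts"  # hypothetical

def lambda_handler(event, context):
    """Scan every bucket's ACL and alert on out-of-scope public grants."""
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        acl = s3.get_bucket_acl(Bucket=name)
        for grant in acl["Grants"]:
            if grant["Grantee"].get("URI") in PUBLIC_GRANTEES:
                sns.publish(
                    TopicArn=TOPIC_ARN,
                    Subject="Noncompliant S3 bucket detected",
                    Message=f"Bucket {name} grants {grant['Permission']} "
                            f"to {grant['Grantee']['URI']}",
                )
```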

What you should know about Tiros and Zelkova

In a further attempt to enhance security, AWS is testing two tool sets: Tiros and Zelkova. Both use automated reasoning to verify security policies and controls and to provide metrics around your security configurations.

Tiros maps the connections between network elements, such as S3 buckets and the networks that can reach them. It’s particularly useful for detecting unauthorized access from the internet.

Zelkova creates benchmarks for comparison between different S3 buckets or other AWS components. This helps developers understand how permissive their setups are compared to their existing infrastructure or to a model S3 bucket. Zelkova uses automated reasoning to check configurations against specific scenarios.

Together, the two tools can help spot mistakes before they go live.

The bottom line: Make sure your S3 buckets are secure

Data leaks can be disastrous for your business, and the misconfiguration of AWS S3 buckets can lead to this sort of catastrophe. By properly hardening your S3 buckets, you take a giant step toward mitigating potential security threats.
