
Hands-On AWS Penetration Testing with Kali Linux Section 3: Pentesting AWS Simple Storage Service Configuring and Securing

More notes…

Book info – Hands-On AWS Penetration Testing with Kali Linux

Disclaimer: Working through this book will use AWS, which costs money. Make sure you are doing things to manage your costs. If I remember, I’ll keep up with my costs to help get a general idea. But prices can change at any time, so major grain of salt.

Disclaimer #2: Jail is bad. Cybercrime is illegal. Educational purposes. IANAL. Don’t do things you shouldn’t. Etc. 

Ch 7 – Reconnaissance – Identifying Vulnerable S3 Buckets

Simple Storage Service (S3) buckets are a popular attack surface, partly because they are fairly easy to misconfigure. The warnings about making them public have gotten fairly intrusive (for lack of a better word and in no way indicating it’s a bad thing), but oopsies can still happen since some buckets need to be public and things can get misplaced.

Setting up an S3 bucket

Go to the S3 bucket area, create, name it (bucket names must be globally unique and follow some rules), pick a region, then hit Create to use defaults or Next to go through configurations. The current default permissions block all public access. It's a good idea to click through the configuration steps if you aren't familiar with S3 buckets just to see what options are there.

Uploading is straightforward – click on the bucket, click Upload, drag and drop or browse as is typical. You get information about permissions when uploading – including a rather large block about public permissions recommending that you not grant public read access.

Downloading is equally easy – select item checkbox, click download.

S3 permissions and access API

S3 permissions come in two flavors: access control policies (mostly handled through the web UI) and IAM access policies (JSON documents). Permissions can be set at the bucket or object level, and access to the bucket is needed before you can access its objects.

Time to work with the AWS CLI…the current version is 2, but apt pulls version 1. For continuity with the book, I'm sticking with the apt version. AWS has more info on the CLI, including how to install it on Windows.

Add user creds…AWS Management Console, click on your username, click on My Security Credentials, go to Access keys, and copy the latest (or create one if needed). AWS nicely puts up a warning that root user access keys are probably a bad idea for long-term use and that creating a new IAM user with limited access is recommended. This sounds like something to go back and do, but for now, YOLO (because this key will get destroyed as soon as I'm done with this section).

Back to the AWS CLI: run aws configure and enter the key info.
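For reference, the prompts look something like this (the keys shown are AWS's documented example keys, not real ones):

aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json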

Accessing contents of a bucket: aws s3 ls s3://<bucketname>

Upload file: aws s3 cp <filename> s3://<bucketname>/<folder>/<filename>

Use a folder name if you want to create or use an existing folder in the bucket.

Delete: aws s3 rm s3://<bucketname>/<filepath>/<filename>

Pretty straightforward…just don’t forget the s3. I noticed I kept skipping that part.
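The CLI can also create and tear down buckets, which is handy for spinning up test targets (the bucket name here is made up – names have to be globally unique):

aws s3 mb s3://my-test-bucket-12345
aws s3 rb s3://my-test-bucket-12345 --force

The --force flag on rb deletes all the objects in the bucket before removing it.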

ACPs/ACLs

Similar to firewall rules, ACLs can grant read, write, read-acp, or write-acp. The ACP options allow read/write access to the ACLs of the bucket/object themselves. You get a max of 20 policies for a specific grantee (any AWS account or predefined group). IAM accounts cannot be grantees.
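To see what an ACL actually looks like, you can dump it from the CLI (hypothetical bucket and object names):

aws s3api get-bucket-acl --bucket my-test-bucket-12345
aws s3api get-object-acl --bucket my-test-bucket-12345 --key somefile.txt

The output lists each grantee and its permission, which maps right back to the read/write/read-acp/write-acp options above.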

Bucket policies

Bucket policies are applied per S3 bucket and are easy to replicate for use on multiple buckets. They can apply to an individual folder or the whole bucket. Add them via the Permissions tab in the web console.
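They can also be attached from the CLI, assuming you've saved the policy JSON (like the example in the Access policies section below) to policy.json:

aws s3api put-bucket-policy --bucket my-test-bucket-12345 --policy file://policy.json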

IAM user policies

IAM user policies provide S3 access to individual IAM accounts. AWS IAM looks like typical IAM and integrates with existing identity systems. Thank goodness.

To choose between IAM and bucket policies, the book recommends asking whether the permissions are for specific users across a number of buckets or for multiple users each needing their own set of permissions. If it's multiple users needing their own permissions, use IAM policies. This older AWS blog post does a nice job explaining the difference and when to use which. It was updated in Jan 2019, so it looks like they're updating it as needed. My takeaway: bucket policies apply to buckets, while IAM user policies can be applied across AWS, so think about what functionality is needed.

Access policies

This section gives an example of the JSON format. Basically, there are three main parts – Statement, Action, and Resource. When you are making a bucket policy, a Principal element is included in the Statement portion. The Statement part is the who-is-being-granted-access-to-what part – this is why the Principal element is needed for bucket policies; for IAM policies, that's covered by whatever the policy is attached to. The Action part is what is being permitted, and the Resource part is what is being accessed. AWS has more info about the policy language to help understand what's what.
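To make that concrete, here's a minimal bucket policy granting public read on every object (hypothetical bucket name – this is exactly the kind of policy that makes a bucket world-readable, so handle with care):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-test-bucket-12345/*"
    }
  ]
}

You can see all three parts plus the Principal element, since this is a bucket policy.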

Creating a vulnerable S3 bucket

Create a new bucket and make it vulnerable…or take the one you just created and make it vulnerable. Your call. The AWS S3 console changed overnight while I was working on this – that’s always fun. Currently, you click on the bucket, go to Permissions, edit the Block public access settings to allow access, and then edit the Access Control List to allow everyone access. And the bucket gets an orange Public label on the Permissions and Access Control List so you know where the public access is coming from. Nice feature.

AWS really wants you to be sure you want to make the bucket public – you have to confirm your choice by typing confirm in the save box.
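The same thing can be done from the CLI if you'd rather skip the clicking – a sketch using the same hypothetical bucket name (if your apt-installed CLI is too old to have put-public-access-block, the console works fine):

aws s3api put-public-access-block --bucket my-test-bucket-12345 --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false
aws s3api put-bucket-acl --bucket my-test-bucket-12345 --acl public-read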

Put some files in the bucket if you haven’t already, and now you have a nice little S3 bucket flapping in the wind. Time to exploit it…

Ch 8 – Exploiting Permissive S3 Buckets for Fun and Profit

So you can read sensitive info, or even modify JS as a backdoor to affect every user of a web app who loads the infected JS. (So maybe don't do that?)

Extracting sensitive data from exposed S3 buckets

Well, the good news is that the AWSBucketDump GitHub repo is still the correct tool. The bad news is that there is a broken dependency with urllib3 – a known issue since Aug 2019. To the Googles…I was able to fix it by using pip to uninstall and reinstall requests.
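In other words, something like:

pip uninstall requests
pip install requests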

python ./AWSBucketDump.py -D -l <file> -g <file>

Here, -l takes the list of bucket names to check, -g takes the list of keywords to grep for, and -D tells it to download the interesting files it finds.

You can see what it found by looking in the interesting_file.txt file.

Dusted off the cobwebs from THP3 and decided to use Bucket Finder since apparently I found it easy when I was working through that book. Basically pull it over using wget, untar, go.

sudo wget -P /opt/BucketFinder https://digi.ninja/files/bucket_finder_1.1.tar.bz2
cd /opt/BucketFinder
sudo tar -xvjf ./bucket_finder_1.1.tar.bz2
sudo ./bucket_finder.rb /path/to/wordlist

You can direct Bucket Finder to a specific region with --region <region> and choose to download the files with --download (careful with that option, some big files can get pulled).

Both of these tools are helpful for pen testing or working to protect your own environment. But be mindful of ethics and responsible disclosure should you stumble upon something you shouldn’t.

Injecting malicious code into S3 buckets

Basically, if the S3 bucket an application pulls from is publicly writeable, bad things can happen. The book steps through setting up a basic HTML page using a vulnerable bucket. This is fairly straightforward – replace the file being fetched with something malicious. I've done similar exploits using other fetching methods, so I'm skipping this lab but writing the notes out for reference on how I'd use the info for bug hunting.

Pick a webpage, view source, search for s3 in the code, and poke at the bucket to see if it's vulnerable. Depending on the scope of the bug bounty, that may be as far as you should go. If you have scope to test a POC, you could create an alternative script to be called and upload it to the bucket. (I'm going to say alternative instead of malicious because if you're doing this for something like a bug bounty, you don't want to put anything truly malicious there – you might in a pen test or red team engagement though.)

aws s3 cp /path/alternative.js s3://<bucket>/alternative.js --acl public-read

Remember, you will need to name your file the same as whatever the webpage calls.
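Before uploading anything, you can check whether the bucket allows anonymous access at all using the CLI's unauthenticated mode (bucket name hypothetical):

aws s3 ls s3://victim-bucket --no-sign-request
echo test > poc.txt
aws s3 cp poc.txt s3://victim-bucket/poc.txt --no-sign-request

If that cp succeeds without credentials, the bucket is publicly writeable.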

Backdooring S3 buckets for persistent access

Another thought exercise…references can be made to buckets that no longer (or never did) exist, and that causes problems. An S3 bucket URL can be bound to a subdomain using an alternate domain name (a CNAME record) – here's a quick refresher on DNS record types from CloudFlare. The bucket can get deleted without the CNAME being removed. This leaves a vulnerability because someone could create an S3 bucket with the same name and have content of their choosing served from that subdomain.

These issues can be identified when a 404 response includes the NoSuchBucket error. You can find these using typical enumeration techniques and looking for errors: find the error, create a bucket with the desired name, deploy desired content. This should give you a good reason to hash downloaded files (if you needed one) – the hashes could be modified too, but at least it's a check. This HackerOne report gives info about finding and disclosing this type of vulnerability. Obviously, exploiting this situation relies on an unclaimed S3 bucket being called. While (hopefully) this isn't a common situation, anyone who has had to manage anything with external resources can attest to how easy it is to miss a link to a source.
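A quick way to check a candidate subdomain for this condition (the domain here is hypothetical):

dig +short CNAME assets.example.com
curl -s http://assets.example.com/ | grep NoSuchBucket

A CNAME pointing at an S3 endpoint plus a NoSuchBucket error means the bucket name is up for grabs.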

While I’m not going to start scouring the web for instances where this occurs, it’s something to be aware of when you are working on a bug bounty (provided it would be in scope) or checking things for your job (again, provided it would be in scope).

Wrap Up

S3 buckets are a known pain point for security purposes. There seem to be data breaches almost weekly related to improperly secured S3 buckets, such as this one. AWS has made it more difficult to expose buckets, but exposure will continue to happen because some buckets legitimately need to be accessible by the public. There are many tools available to search for exposed buckets, but vet what you use. AWSBucketDump was new to me and is helpful for finding issues. The exploitation pieces were interesting even just as a walkthrough. The basic concepts are similar to exploiting web apps in general – finding the vulnerable buckets is specific to AWS, but the exploitation concepts aren't, other than where the exploit gets stored.

S3 storage is incredibly cheap. The cost associated with the buckets for this section hasn't even been high enough to register.

Good section, relatively straightforward, few issues going through the lab pieces I did.

