Book info – Hands-On AWS Penetration Testing with Kali Linux
Disclaimer: Working through this book will use AWS, which costs money. Make sure you are doing things to manage your costs. If I remember, I’ll keep up with my costs to help get a general idea. But prices can change at any time, so major grain of salt.
Disclaimer #2: Jail is bad. Cybercrime is illegal. Educational purposes. IANAL. Don’t do things you shouldn’t. Etc.
I decided to keep posting by chapter since that worked well for me in the last section. This chapter on logging will at least give an opportunity to set up logging best practices, but the pentesting parts might be a little more difficult. Remember that depending on your setup, you may not have to specify
profile in the CLI commands. And make sure your user has some sort of CloudTrail permissions so you can check things out in the CLI.
Ch 15 – Pentesting CloudTrail
CloudTrail helps with GRC and auditing. Initial setup was pretty straightforward (I set up some basic logging back in the IAM section). Testing how well CloudTrail is catching things helps determine if the logging setup is catching what you think it should be catching. This is an important aspect of pentesting in general: setting up defenses is great, but you have to verify you are blocking and catching the things you want to be blocking and catching (and alerting on, etc.). The way that AWS services are incorporated into CloudTrail can lead to gaps. As services are developed, the development team has to integrate CloudTrail for the API calls to be logged…and this (not surprisingly) does not always happen pre-release. The current list of unsupported services is here, and when I looked in June 2020 the only things listed were AWS Trusted Advisor and services without public API operations. That’s really pretty impressive when you compare it to the supported services. Also remember that, as with setting up logging in other environments, what you are able to log in CloudTrail may not cover everything you want. Some services have their own logging, like access logs for S3 buckets. CloudTrail is just logging the API activity. You would likely need to combine multiple options to get a full picture of your AWS environment…just like you need multiple options to get a full picture of an on-premises environment.
Setup, best practices, and auditing
This is a good section to lab even if you don’t work through later sections. I’ll be comparing to what I set up earlier since I needed it for IAM (I am a little curious why this wasn’t presented earlier). Setup is pretty straightforward; the only difference I found was the use of the KMS encryption key. The setup portion is done in the console.
Some setup recommendations:
- Apply trail to all regions.
- Select “All” for Management events.
- Select Data events as desired; these incur a greater cost than Management events.
- Select “Yes” for Encrypt with SSE-KMS. Create a new key if needed. This provides some additional permission options.
- Select “Yes” for Enable log file validation.
Interestingly, the book recommended enabling “Log all current and future invocations” for Lambda functions. That would have been helpful setup info prior to the Lambda chapter. Multiple trails may be required in some environments. It’s also sometimes recommended to send logs to another account to keep them safer if a compromise occurs.
Now on to auditing and the CLI. Check for active trails with:
aws cloudtrail describe-trails --include-shadow-trails --region <region> --profile <profile>
include-shadow-trails pulls in trails from other regions. It won’t pull region-specific trails, so you may need to check other regions individually. Check the
IsMultiRegionTrail parameter to see if the trail covers all regions. Check
IncludeGlobalServiceEvents to see whether non-region-specific API service activity is logged. Look at
LogFileValidationEnabled to verify validation is enabled. And see if a
KmsKeyId is present to determine if SSE-KMS is enabled. To see if data events are enabled, look at whether the
HasCustomEventSelectors parameter is
True and then use
GetEventSelectors to check specifics.
aws cloudtrail get-event-selectors --trail-name <name> --region <region> --profile <profile>
Look through the data returned to see what is being logged. You want to see read and write events, management events, S3, and Lambda functions logged. Now verify it’s up and active:
aws cloudtrail get-trail-status --name <name> --region <region> --profile <profile>
Make sure the
IsLogging value is true to verify logging is enabled. You can also check that the
LatestDeliveryAttemptTime and LatestDeliveryAttemptSucceeded values match. If they don’t, the most recent log delivery failed and something is not right.
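To sanity-check these settings without eyeballing raw JSON, a quick grep loop over saved CLI output works. This is just a sketch – the file name and the sample trail values below are invented so it runs offline; point it at real describe-trails output instead:

```shell
# Save the real output first (same command as above):
#   aws cloudtrail describe-trails --include-shadow-trails --region <region> --profile <profile> > trails.json
# Invented stand-in so this sketch runs offline:
cat > trails.json <<'EOF'
{"trailList": [{"Name": "example-trail", "IsMultiRegionTrail": false,
  "IncludeGlobalServiceEvents": true, "LogFileValidationEnabled": true}]}
EOF
# Flag any hardening option that is turned off.
for flag in IsMultiRegionTrail IncludeGlobalServiceEvents LogFileValidationEnabled; do
  if grep -q "\"$flag\": false" trails.json; then
    echo "WARN: $flag is false"
  fi
done
# No KmsKeyId in the output means SSE-KMS encryption is not enabled.
if ! grep -q '"KmsKeyId"' trails.json; then
  echo "WARN: SSE-KMS not enabled (no KmsKeyId)"
fi
```

With the sample values above, this warns about the single-region trail and the missing KMS key.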
Your compromised user might not have full permissions, so you’ll have to see what you can do. If you have the
cloudtrail:LookupEvents permission, you can check the CloudTrail Event history. This isn’t a permission that’s been specifically set up, so you could set it up for this or use a broader CloudTrail permissions set. There is a lot of info in the Event history, and downloading it can take a long time and be difficult to review. Look through the event history like you would typical logs – look for things like IP addresses, user agents, etc. These can give you some ideas for spoofing and provide info on what’s going on in the environment.
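If you want the Event history from the CLI rather than the console, the call behind that permission is lookup-events (placeholders follow the same convention as the other commands in these notes). Since I can’t hit a live account in a write-up, the runnable part below just greps source IPs out of a saved sample response – events.json and its contents are made up:

```shell
# Real usage would look something like:
#   aws cloudtrail lookup-events \
#     --lookup-attributes AttributeKey=Username,AttributeValue=<user> \
#     --max-results 50 --region <region> --profile <profile> > events.json
# Offline stand-in: a fake response with the raw event JSON embedded as a string.
cat > events.json <<'EOF'
{"Events": [{"CloudTrailEvent": "{\"sourceIPAddress\": \"198.51.100.7\", \"userAgent\": \"aws-cli/1.18.69\"}"}]}
EOF
# Pull out source IPs to spot anything unusual (user agents work the same way).
grep -oE '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' events.json
```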
Some services still aren’t supported in CloudTrail, though far fewer as of June 2020. The authors listed AppStream 2.0, Amplify, and Cloud9; all of those are now supported in CloudTrail. It’s still something to keep an eye on, but when I checked, AWS Trusted Advisor was the only listed service with a public API. You can also pull a credential report for your user. This was a two-step process, unlike in the book.
aws iam generate-credential-report --region <region> --profile <profile>
aws iam get-credential-report --region <region> --profile <profile>
# That pulls it down in base64, copy the string and decode, output to a CSV if desired.
echo <string> | base64 -d > creds.csv
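The Content field comes back base64-encoded, so the decode step can be folded into one line with the --query flag. The demo below substitutes a fake base64 string for the API response so it runs without credentials – the sample CSV rows are invented:

```shell
# One-liner against a real account:
#   aws iam get-credential-report --query Content --output text --profile <profile> | base64 -d > creds.csv
# Offline demo: fake report content standing in for the API response.
sample=$(printf 'user,arn,password_enabled\nroot,arn:aws:iam::123456789012:root,false\n' | base64)
echo "$sample" | base64 -d > creds.csv
# First line is the CSV header, so the columns are easy to scan.
head -n 1 creds.csv
```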
The format is a pain to read in the CLI, but it’s straightforward. In a pentest, I’d probably save it to a CSV for easy reference. You can also download it from the console in CSV format if you have that access. Next, generate the service last accessed details:
aws iam generate-service-last-accessed-details --arn <arn> --region <region> --profile <profile>
Then use the job ID returned to pull details about the service:
aws iam get-service-last-accessed-details --job-id <jobID> --region <region> --profile <profile>
Dig through what’s returned for info. This will show if a resource has authenticated to a service and when. It can help find unlogged services for enumeration. Check out the Wired article to see research from Rhino Security Labs on the effectiveness of canary tokens in AWS. The authors say AWS doesn’t consider an ARN to be sensitive, but the services mentioned do seem to now be in CloudTrail. I’m going to consider that progress. Short section of notes, but a lot of output in the CLI. Take the time to look through what is returned because there really is a lot of potentially useful info that can be gathered.
Bypassing logging through cross-account methods
With the ability to dig around in IAM using an ARN, you can brute-force existing users. This works because of how trust policies behave: you can’t set a trust policy to allow access to a user who doesn’t exist, so the responses reveal which names are valid. Pacu has a module for this:
run iam__enum_users --account-id <accountid> --role-name <role>
Make sure that your user has the
UpdateAssumeRolePolicy permission or this won’t work. You can set up a custom wordlist by adding the
--word-list flag. You used to be able to enumerate roles similarly, but AWS made a change that now prevents this. You can, however, enumerate roles on a cross-account basis.
run iam__enum_roles --account-id <accountid> --role-name <role>
This will also try to assume the roles it finds and report which ones exist and which could be assumed.
Messing with the logging can also be helpful. Might trigger alerts though, so proceed with caution. Turning off logging is an option.
aws cloudtrail stop-logging --name <name> --region <region> --profile <profile>
This does need to be run from the region the trail was created in. The Pacu module to do the same is:
run detection__disruption --trail <trail>@region
The Pacu module gives options to disable, delete, minimize, or skip the specified trail. Disabling or deleting a trail will trigger an alert in GuardDuty.
You can also delete trails via the CLI using:
aws cloudtrail delete-trail --name <name> --region <region> --profile <profile>
Or in Pacu with the earlier command and the delete option. You may opt to delete the S3 bucket that trails are delivered to, which prevents log delivery. That’s noisy too. You get the bucket name using:
aws cloudtrail describe-trails --region <region> --profile <profile>
You could delete the bucket (it has to be emptied first – delete-bucket fails on a non-empty bucket) with:
aws s3api delete-bucket --bucket <bucket> --region <region> --profile <profile>
There’s not a Pacu module for this, but the authors did suggest you could make a substitute S3 bucket in your account and provide cross-account write permissions for the CloudTrail logs to go to your bucket where the target account couldn’t access them. Deleting the trail or bucket will (not surprisingly) also generate a GuardDuty alert.
You can minimize logging using the AWS CLI or Pacu’s minimize option.
aws cloudtrail update-trail --name <name> --no-include-global-service-events --no-is-multi-region-trail --no-enable-log-file-validation --kms-key-id "" --region <region> --profile <profile>
The Pacu minimize option will remove SNS topics and global service events, make the trail single-region, disable log file validation, remove associations with the log group and role, and remove log file encryption. Minimizing might trigger GuardDuty, but it’s not entirely clear; the authors basically say they haven’t seen it happen.
To modify S3 and Lambda settings, use the
PutEventSelectors option – so removing all data event logging and recording only read events would be:
aws cloudtrail put-event-selectors --trail-name <trail> --event-selectors file://event_selectors.json --region <region> --profile <profile>
# where event_selectors.json contains the desired event selector configuration
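I didn’t copy the book’s exact file, but based on the shape of the CloudTrail EventSelectors API, a selector that keeps read-only management events and drops all data events would look something like this (the specific values are my assumption):

```json
[
  {
    "ReadWriteType": "ReadOnly",
    "IncludeManagementEvents": true,
    "DataResources": []
  }
]
```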
Problem with disruption and partial solutions
In short – GuardDuty. You can try to get around it by figuring out what activity is normal for the compromised user and blending in. You could also modify logs after delivery to the S3 bucket. And remember that changing the logs in S3 doesn’t get rid of them in CloudTrail’s Event history – they are immutable there for 90 days.
Interesting chapter. Good info on logging. It looks like this is something AWS has improved since the time of writing. It’s encouraging to see how few services CloudTrail doesn’t work with. It’s not the end-all logging solution for AWS, but it collects a good amount of important data.