
THP3 Ch 2 Review

Picking up with chapter 2 in The Hacker Playbook 3 by Peter Kim (ch 1 notes here). I’ll be doing most of this on the provided VM, partly because the tools are (mostly) already there and partly because I don’t want to mess up the VMs I have set up to my liking. I’ll add the tools I like later, but I’m finding a lot of resources list tools that are no longer in Kali (like Vega). I’ve got a BlackArch VM (which comes with just about all of the things I’ve found that aren’t in Kali anymore), so I may use that to continue working with some of the tools.

Disclaimer: Criminal/unauthorized hacking bad. Jail bad. Read CFAA. IANAL. Don’t do things you shouldn’t.

Nmap diffing

This chapter starts off with Nmap, which is something I’m fairly comfortable with. But this is using it to keep track of changes on the network. Cool idea, but seems a bit noisy. The author provides a script for it. From a blue team perspective, this is something your SIEM should be able to do. I like the idea of watching a network for differences, maybe even setting up an alert to watch for a specific port or service to become available. But I also know using Nmap to scan some networks (like ICS networks) is not a good idea if you care about things staying online and functional, so looking into other options would be a good idea. Perhaps something like GRASSMARLIN would be a good option for diffing ICS and SCADA networks. I’ve used it in training, and it’s pretty slick.

Kim recommends improving on the provided Nmap script by checking different ports, doing some banner grabbing, etc. Depending on what’s on the network you are targeting, there’s a good chance you’ll need to check something outside the default Nmap scan, so adding some port specifics to the script would be helpful. I’ll file that away. Writing scripts is something I want to get better at, so that aspect of this book will definitely be good for me.
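As a rough sketch of the diffing idea (this isn’t Kim’s script; the network range and file names are placeholders I picked), ndiff, which ships with Nmap, can do the comparison for you:


# Not Kim's script - a minimal diffing loop. 10.0.0.0/24 and the
# filenames are placeholders. ndiff ships with Nmap and exits non-zero
# when the two scans differ. Swap in your own -p ranges or -sV per Kim's advice.
nmap -sV -p 1-1000 -oX scan_new.xml 10.0.0.0/24

if [ -f scan_old.xml ]; then
    if ! ndiff scan_old.xml scan_new.xml > changes.txt; then
        echo "Network changed since last scan:"
        cat changes.txt
    fi
fi

mv scan_new.xml scan_old.xml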

Web screenshots

Some cool stuff here. HTTPScreenshot and EyeWitness are both pretty straightforward. I’m not convinced they offer a lot of utility over some of the tools discussed in Michael Bazzell’s Open Source Intelligence Techniques or available on his website, but I can see using all of these tools in different situations.

With HTTPScreenshot, I got a bad range spec error for the URL I was scanning. I had it entered in the text file as specified, so that was a little frustrating. I ended up doing an nslookup to get the IP address, shoved that into the networks file, and it worked fine. I’m not sure if you have to use an IP or CIDR notation to make it work with the ./masshttp.sh option, but that’s what I had to do. I also tried running just the httpscreenshot option and got a permission error. When I checked the permissions on the file, I didn’t have permission to execute it, which I found odd. I had to chmod the file, but then it worked fine.

EyeWitness worked like a charm. No issues there, but remember your scope when you are putting in your target info. This scan consists of OSINT, so it’s “ok”, but it’s something you should always be aware of.
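For reference, a typical EyeWitness run looks something like this (urls.txt and the output directory are placeholders of mine):


# One target URL per line in urls.txt; --web grabs screenshots over HTTP(S)
./EyeWitness.py --web -f urls.txt -d ~/ew_report --timeout 30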

Cloud Scanning

This is a cool section. There’s no denying the importance of cloud infrastructure. Kim notes that many tenants use dynamic IPs, so servers may change IPs, and they aren’t necessarily even going to stay in the same block since you can spin up instances all over the world. He provides references for where the big cloud providers publish their IP ranges (Amazon, Azure, and Google Cloud).
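Amazon, for example, publishes its ranges as JSON, so pulling the blocks for one region is a quick one-liner (us-east-1 is just an example region; this assumes jq is installed):


# List the EC2 CIDR blocks Amazon publishes for a single region
curl -s https://ip-ranges.amazonaws.com/ip-ranges.json \
  | jq -r '.prefixes[] | select(.region=="us-east-1" and .service=="EC2") | .ip_prefix'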

Network/Service Search Engines

Basically ways to find things. Shodan is cool. You do need an account to search, and there are paid tiers that let you do more. I picked up a lifetime membership for $5 on Black Friday. It included an ebook on how to use Shodan, so I’ll be digging into that more. It’s a powerful tool that can provide a scary amount of info. I noticed in my initial playing around that searching by IP gives a lot of detailed info. The IP might change, but if the target is using a baseline image, pulling this info for one IP may still be useful even after the IP changes.
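If you’d rather work from the terminal, the Shodan CLI (it comes with pip install shodan) covers the basics; the IP below is a placeholder:


# Store your API key once, then query by IP or search term
shodan init YOUR_API_KEY
shodan host 203.0.113.10
shodan search --limit 10 apache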

Censys.io was a new one to me. It’s the same basic idea as Shodan: scan all the things. My VPS showed up here by domain quicker than it did in Shodan. Censys does not require a login of any type, which is nice. Kim also identified a script tool available to use with Censys to find subdomains.

For both, I went through the domain in the book and my VPS domain, then spent some time searching different things to see what the options are. I think the big thing with these is knowing what they are and what they can help you see, so that when you are working on a pen test or bug bounty you know your options.

Manually Parsing SSL Certificates

I got info on certs from both Shodan and Censys, but having a manual option is good. This was a Python scraping tool – sslScrape. It’s a neat little tool written by Kim and bbuerhaus. One thing I’m coming back to as I work through these tools is that I have to keep working on my coding skills so that I can develop my own tools.
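Separate from sslScrape, you can see the kind of data it’s scraping by pulling a cert manually with openssl and grepping out the hostnames (example.com is a placeholder):


# Grab the cert from port 443 and dump the subject and SAN entries,
# which is where the extra hostnames hide
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -E 'Subject:|DNS:'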

Subdomain Discovery

Subdomains are more difficult to determine, understandably so. But the info is important: it can indicate server type, some servers don’t respond by IP, and it can give info about where servers are hosted.

Discover Scripts was the first tool. It pulls together the recon tools in Kali under one menu-driven interface. It has quite a few options and is still maintained as of November 2018. You can also add API keys for several tools (including Shodan) to improve the results from recon-ng. It’s not required, but it would be advantageous. I’ll add that to my list of things to do… It does take the info found and dig deeper, so this is another tool where you’ll want to be mindful of scope. There is an ./update.sh script that needs to be run from within the Discover Scripts folder; be prepared to wait a while while that runs. And by a while, I mean start that sucker and go to lunch or something. It shouldn’t always take forever, but it can if you haven’t updated recently.

I found the syntax a little hard to follow from the README and the book, so I’m including it here in a way that makes sense to me. This is from download through a domain search:


git clone https://github.com/leebaird/discover /opt/discover
cd /opt/discover
./update.sh
./discover.sh
...a bunch of options
Choice:
...options for your choice
Choice:
...more options

Wait a minute and it will pop up a list of choices. They are relatively self-explanatory. Enter the number of your choice, and work through the menus as needed. For a passive domain scan:


cd /opt/discover
./discover.sh
Choice: 1
Choice: 1
Company:
Domain: 

Then let it do its thing. This will also take a while (I’m sensing a theme here). You’ll get some red text, generally related to not having API info. Then it pops out a report. Hit enter to open the report, and be patient because (surprise!) it takes a bit of time to open the many, many associated Firefox windows. Pretty cool stuff. The report is saved to the /root/data folder for reference. I found a walkthrough from thegeeky.space that may be helpful as well.

Knock was straightforward and will accept a VirusTotal API key. It uses a wordlist to see if the subdomains resolve, so you need a comprehensive wordlist. Kim suggests one from jhaddix. There’s a SecLists listing of wordlists on GitHub, and there are quite a few wordlists already in Kali, possibly including the SecLists one depending on your install. This one also takes a while. You get status codes for the subdomains. I think this would be a good option to send to a file: you can save to CSV (-c, --csv; -f, --csvfields to include field names in the first row) or JSON (-j, --json), or use the standard > to send to a txt file. (Nice little refresher on sending output here.)
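A quick example run using the flags above (the domain and wordlist names are placeholders):


# Wordlist scan saving CSV output, or just redirect stdout to a file
knockpy target.com -w wordlist.txt -c
knockpy target.com -w wordlist.txt > knock_results.txt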

Sublist3r does some Google dork style stuff to search for subdomains. I got an error about urllib3 and chardet; quick googling brought me to a solution from Gloria Palma Gonzalez, which got the job done. Skip the sudo if you are already root.


sudo pip uninstall requests
sudo pip install requests
sudo pip uninstall docopt
sudo pip install docopt

The next tool, subbrute, has now been integrated into Sublist3r; just add the -b option. I found this option to be very, very slow. Sublist3r by itself ran in a minute or so; the subbrute option took longer than my patience would allow. Running it separately also took a good while. I do expect it to take time since it’s going through a huge wordlist, but keep the run time in mind. Whether pen testing or doing a bug bounty, time is money, so pick your tools accordingly. I ended up running it in verbose mode to verify it wasn’t hanging, and that also outlasted my patience. I decided to run a time test, because I want to have an idea how long things will take. This was running on the THP3 VM as provided. I went with 100 threads and basically walked away to let it do its thing. It took roughly…forever. I think it hung after about 18 hours, and I ended up killing it because it wasn’t making progress. This looks like something that would be better to run from a server, but my AWS instance is (likely) not built in a way to do the job much faster. I didn’t get great results with subbrute, so I’ll be exploring other options. I found a write-up saying a scan took more than 15 minutes on a Digital Ocean instance that, judging by the price given, has around 4 GB of memory and 2 vCPUs. If you’ve got a VPS with comparable or better specs, you may get better results.
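For reference, the two runs I compared look roughly like this (the domain is a placeholder):


# Passive search-engine enumeration - fast
python sublist3r.py -d target.com -o subs.txt

# Add -b for the integrated subbrute wordlist scan - very slow, as noted
python sublist3r.py -d target.com -b -t 100 -o subs_brute.txt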

While digging into some stuff on subbrute, I found a nice write-up on discovering subdomains by Shpend Kutishaj of Bugcrowd. It had some good background and info on other tools, including a cool script by Jason Haddix that I’ll be experimenting with.

Kim also mentions MassDNS. It was labeled as experimental in its README, which also notes that the bundled resolvers are outdated and many are dysfunctional, and that using it may result in abuse complaints being sent to your ISP. I ran it to see what it would do, but at this point it’s not a tool I’m likely to keep in rotation for general labbing and playing around because of those risks. It is very fast, though, so I’ll save it for specific use cases.
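If you do have a sanctioned use case, the basic invocation is simple (file names here are placeholders; note the resolver caveats above):


# Resolve a list of candidate subdomains against a resolver list
./bin/massdns -r lists/resolvers.txt -t A -o S -w results.txt candidates.txt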

GitHub

Basically dig around on GitHub to see if anyone made an oops and pushed code or other sensitive data to a public repo. Search within GitHub or do a Google dork.

Some tools for searching GitHub:

  • Truffle Hog – scans for high-entropy keys; I had to run it a couple times to get it to work; it’s probably a good idea to send the output to a file for review (see the sketch after this list)
  • git-all-secrets – combines open source git search tools to find all the things; requires an API token (Settings – Developer settings – Personal access tokens to generate); didn’t accept my token…
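A minimal Truffle Hog run, assuming the standalone script and a placeholder repo URL:


# Scan a repo's history for high-entropy strings and save the output for review
trufflehog https://github.com/someorg/somerepo.git > trufflehog_results.txt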

I tried several troubleshooting options with git-all-secrets. I got it to work with an API token with no scopes enabled, but sometimes had to start it a couple of times. The key seems to be no spaces after the =.


docker run -it abhartiya/tools_gitallsecrets:v3 -repoURL=https://github.com/ -token= -output=results.txt

The default for output is results.txt. When I changed it, I got the message panic: open : no such file or directory. Then when I tried to copy the data after completion, I got an error that the container path specified didn’t exist. I think this has to do with the container and where I was trying to put the output file. Using the default, it worked fine with Kim’s syntax:


docker cp <container-id>:/data/results.txt .

But it didn’t work with the syntax from the README, which is:


docker cp <container-id>:/root/results.txt .

I’m not sure if this is because of the provided VM or not. I’ll have to try on a different VM, but moving on for now. Either version puts the results file in the /opt/git-all-secrets/ directory because of how the command line interprets the trailing . (the current working directory, which is where I was running the command from). (A good primer on using the Linux CLI is The Linux Command Line by William E. Shotts, Jr. – easy to read and helps you understand what’s going on behind the scenes.) If you want the file elsewhere, specify your filepath:


docker cp <container-id>:/data/results.txt <destination-path>

For reference, reading up on Docker commands was helpful for understanding what’s going on if you aren’t familiar with Docker. It does seem you can use the name of the container rather than the ID in these commands, which is much easier. I’m not as familiar with Docker as I would like to be, but that’s not a squirrel I want to chase at this moment.

Cloud

When I went to use Slurp, it wasn’t on my version of the VM. Odd. So onto the google… apparently, Slurp doesn’t exactly exist as linked anymore. The repo was removed by the owner, and the account has since been taken over by SweetRollBandit, who is using it to make a point about account takeover on GitHub. Checking the THP3 updates, the repo has been copied and is now being maintained by nuncan. It took a bit of fiddling to get things going here based on the updates and the README:

git clone https://github.com/nuncan/slurp /opt/slurp
mv slurp/vendor slurp/src
export GOPATH=<path-to-slurp>
go build
cd /opt/slurp

Then search as indicated.


./slurp domain -t <target.com>
./slurp keyword -t <keyword>

Bucket Finder was much more straightforward. You just have to make the list of words to search for. Take the S3 buckets it finds and pop them in a browser to see the info.
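A sketch of a run (the wordlist name is a placeholder; bucket_finder is a Ruby script, so you need ruby installed):


# Try each word in the list as a potential S3 bucket name
./bucket_finder.rb wordlist.txt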

For the next part, you need an AWS account. This is where things get a bit icky if you are trying to maintain op sec. To get one, you need to provide email, phone, address, and credit card info. You can sort of get around it, but that’s not something I’m comfortable with. But I’m also not going to use the security key from my AWS account for anything other than these labs without knowing a lot more about how things are tracked within AWS. Yes, it generally shouldn’t be an issue for ethical hacking, but for situations where op sec might be needed, this may be something to skip.

The tko-subs tool is interesting and straightforward. There are two other tools listed for domain takeovers – HostileSubBruteforcer and autoSubTakeover – but I didn’t get into either beyond confirming their existence and taking a quick look at the READMEs.
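Going by its README, a tko-subs run looks roughly like this (the file names are placeholders; the providers CSV ships with the tool):


# Check each domain for takeoverable CNAME records using the provider signatures
./tko-subs -domains=domains.txt -data=providers-data.csv -output=output.csv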

Kim ends the cloud section with a link to the flAWS challenge, a CTF for AWS. This is something I’ll file away for future use.

Emails

Can we agree that publishing an online directory of employee emails that match the username convention should be frowned upon? It’s one thing to get the format of emails – that can be helpful for social engineering and digging for more info – but handing out half of the login creds (provided two-factor isn’t being used) just sounds like a bad idea.

SimplyEmail worked well. No issues there. YAY! I did note that, like several other tools, it’s now been tested with Docker, so Docker is definitely something to get more comfortable with as a pen tester.
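For reference, the basic run is one flag plus the target domain (placeholder here):


# Run all SimplyEmail modules against a domain
./SimplyEmail.py -all -e target.com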

Then Kim talks about past breaches. There’s a lot of stuff out there. There are good ways of accessing some of the info (like Have I Been Pwned) and not-so-good ways (like buying the info). Keep it legal, folks.

OSINT

The chapter ends with a brief mention of OSINT stuff. There’s a ton of stuff. I’m working through Michael Bazzell’s Open Source Intelligence Techniques 6th ed, which is great. I could do an entire post just on OSINT stuff that I’m finding useful, so I’ll leave that for later.

Lessons Learned

I got a ton out of this chapter. A few highlights…

  • Check syntax on a big screen where you can more easily identify spacing, etc.
  • Get both URL and IP info for targets – some tools will only accept one or the other
  • Be sure to add the URL and IP info to the hosts file if working on a test network
  • Look more into tools being presented so you understand the risks/benefits – check them out to make sure they are what they say they are (see Slurp earlier)
  • Check Kim’s updates as you go to keep an eye on changes
  • I’m amassing a ridiculous amount of info on various tools and whatnot, and I need a way to keep it organized and up to date
  • Taking notes in this format means I have to figure out why things aren’t working and fix the problem
  • Doesn’t look like there will be any “short” chapter notes for this book
  • Writing these in Markdown saves a stupid amount of time when posting

Next steps

  • Play around more with Shodan and Censys
  • Look into API keys for recon-ng: Bing, Builtwith, Fullcontact, GitHub, Google, Hashes, and Shodan
  • Look into API key for Knock: VirusTotal
  • Develop scheme for keeping track of tool info – check out RTFM and BTFM as potential sources, but will want something dynamic b/c Slurp
  • Figure out why WordPress didn’t like my code fences
