Posted in Blog, Course Review, Resources

Course Review: Getting Started in Packet Decoding w/Chris Brenton (Antisyphon Training)

TL;DR – Solid content, a lot to take in for newer learners, well worth taking as an introduction or refresher

I’d been waiting to take this class and finally had a chance a few months ago. My goal was to refresh some fundamentals and fill in some holes I felt when I took the intermediate threat hunting course. Plus check out the content to see if it would be a good course to recommend to people wanting to get into IT or infosec. It’s pay-what-you-can, so it’s very accessible pricewise. There’s a ton of content and labs that give you hands-on experience. A lot of the class was review given how much I’ve done with packet captures, but there were enough tips and tricks that it was well worth the 16 hours of class time. There was good coverage of tcpdump, tshark, and Wireshark. I think it’s important to have multiple options for working with packet captures since Wireshark really doesn’t do well with large captures.
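As a quick illustration of that multiple-tools point, here’s a minimal sketch of the workflow I mean (file names and the filter are placeholders, adjust to your capture): carve a big capture down with tcpdump, get a summary with tshark, and save Wireshark for the trimmed file.

```shell
# Trim a large capture to just DNS traffic before opening it in Wireshark
# (big.pcap and the filter are placeholders)
tcpdump -r big.pcap -w dns-only.pcap 'udp port 53'

# Quick summary of TCP conversations without loading a GUI
tshark -r big.pcap -q -z conv,tcp
```

Wireshark is great for digging into a single conversation, but the command-line tools are what make a multi-gigabyte capture manageable.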

The depth of material was quite good. I think if you were coming in with very little experience with packet captures and network traffic, this would be drinking from the firehose. It’s a lot to take in. You do have access to the recordings for 6 months, the course VM, and the course PowerPoints. I think if it was over your head, that set of resources would let you get a good grasp on things after the course concludes. Ideally you would take this before the threat hunting course linked above. If you are new to the content, be prepared to revisit the information to grasp it. It’s worth taking the time to go back through until you get it.

I took away a good refresher on tcpdump and tshark, plus a review of networking concepts and some good reminders of deeper functionality in Wireshark. The labs were fun and related well to the material. No issues with the VM. The Discord channels for the class were helpful. I’d recommend the class to anyone wanting to review ICMP, TCP, and UDP.

For someone taking a DIY approach to learning infosec, this gives a solid networking concepts foundation. It doesn’t cover setting up network sensors and such, but that’s not really something I think the target audience of this course would need to focus on. Even if you are looking at more cloud-based security, the content in this class is worth your time.

Posted in Blog, Resources, Review

Workshop Review – DevOps for Hackers with Hands-On Labs w/Ralph May (Black Hills Information Security)

I’ve been wanting to get some exposure to deployment options like Ansible and Terraform, so when a Black Hills Information Security (BHIS) workshop popped up in my LinkedIn feed talking about using both for hackers, I mashed the registration link as fast as I could.

My “why” for taking the workshop was to have a better idea of how I can use Ansible and Terraform to better manage my lab environments. Since I tend to pop up and destroy cloud resources, it made sense to learn more and see if it could help. Plus it’s not going to hurt to know the basics of either one. That the workshop used DigitalOcean was a bonus. It’s nice to get out of the AWS and Azure worlds to see something new.

The TL;DR: if you see a Black Hills Information Security, Wild West Hackin’ Fest, or Active Countermeasures webinar or workshop that covers something you are interested in, sign up. It will be a good use of your time.

Workshop Resources

Workshop YouTube Recording: DevOps for Hackers with Hands-On Labs w/ Ralph May (4-Hour Workshop) – YouTube

Workshop Website (looks like Ralph May turned this into a public Notion page, so I’m linking that rather than the original): DevOps for Hackers with Hands-On Labs w/ Ralph May (4-Hour Workshop)


Workshop Overview

This 4-hour (plus an hour for setup) workshop included 4 labs (Terraform, Ansible, Docker, and C2 Deployment). Ralph did an introduction of each topic before walking through the lab. A huge help was that he provided completed lab files. Using the completed files, I was able to keep up with the labs. There’s no way I could have typed fast enough. I might have been able to if I were more familiar with the platforms, but this approach worked for me. My plan is to go through the workshop again at my own pace where I can build the lab files myself, knowing I have functional files to check things against if needed.

The initial hour for setup was helpful since I had a brain fart about unpacking the VM and didn’t put it in a specific folder prior to extracting. The BHIS Discord was very active during the setup time, and everyone I saw having issues was able to get moving in the right direction before things started. I really appreciate this extra time because labs don’t go well when your environment is wonky. This lab setup was same day, which I think may be a more effective method. An earlier workshop sent the lab files earlier, and I think that is more likely to get put off until it’s time for the workshop. But that was also a pretty large VM download, so there may have been a need to spread that traffic out.

I think you would get a decent amount out of the workshop just following along and not working on the labs during the live portion. I prefer to do what I can hands-on during the live portion so I have a better idea of what I want to go back to.

Presentation slides and lab guides were available for download, and it looks like those will be available on the Notion site for at least a little while. They mentioned Ralph is developing this into a full 16 hour workshop, and I think for anyone who is managing infrastructure for pentesting or red teaming, it could be a good time investment. I could see using this approach to pop up custom infrastructure quickly for each engagement and easily keep things separated out. The BHIS team also built in breaks every hour, so you could have a few minutes to step out for a bio break, check in on work, or wander aimlessly for a bit. That approach is working well for their 4 hour workshops that I’ve been in.

My Takeaways

I wanted to get a good idea of what things were and how they were used – mission accomplished in that regard. These are my brief, extremely high level takeaways. There’s a lot more to it, but these are the things that I want to have stored in my head so I have an idea of what I might want to reference for different projects.

  • Terraform – infrastructure as code, manage infrastructure, fast and consistent, free/open source, great for cloud and API
  • Ansible – infrastructure as code, configuration management, Python and YAML, slower, OS config
  • Docker (this was what was most familiar to me in the workshop) – containers, CI/CD, runs on all the things, application isolation, clean up your images
  • C2 deployment – there are a lot of C2 options available (and a lot of fun logos), calling some just a C2 framework is underselling their capabilities
    • Mythic – Docker(!), cool but there’s a lot going on, need to research more if I want to effectively use this, can be deployed with Ansible
    • I need to look up the ones I’m not familiar with (not being a pentester these aren’t something I can justify a lot of time playing with) to keep up with what’s out there. I need to look at some of these for labs so I’m not just using Metasploit, Empire, etc. because those are the ones I’m most familiar with. But also beware of chasing shiny things.
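To cement the split between the first two tools, here’s roughly what the basic command flow looks like (the directory, inventory, and playbook names are hypothetical, just there to show the shape of it):

```shell
# Terraform: declare infrastructure as code, preview, then apply
cd ./lab-infra          # hypothetical directory containing your .tf files
terraform init          # download providers (e.g., DigitalOcean)
terraform plan          # preview what will be created or changed
terraform apply         # build the infrastructure
terraform destroy       # tear it all down when the lab is over

# Ansible: configure the hosts Terraform just created
ansible-playbook -i inventory.ini setup-c2.yml   # hypothetical inventory and playbook
```

That division of labor is the big takeaway for me: Terraform pops the boxes into existence, Ansible makes them useful.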

Post-Workshop To Dos

I want to go back through and do the labs by creating the files myself. Spending that time will help internalize the capabilities of Terraform and Ansible. I’ll probably do this using DigitalOcean initially, but I think the next time I’m building labs in AWS or Azure, I want to at least try setting things up with Terraform or Ansible as appropriate.

I probably would not go for the 16 hour workshop right now just because what it would cover are not my primary responsibilities. If I were in a role where I could use this approach to be more efficient, I’d be jumping at the opportunity. BHIS and WWHF have some of the most reasonable training rates around. And they are offering even more with a cyber range as part of their Antisyphon training stuff, so keep an eye on their training schedule.

Wrap Up

The content was well prepared and well presented. Labs worked and had files available so you could keep up if needed. I have an understanding of how Terraform and Ansible can be used. I know where I can go to find out more and ways to practice using them. I wouldn’t even call myself a beginner, but I know enough to learn more. That’s a big part of why I take things like this.

Bottom line, this was a good use of my time. I will continue to take advantage of the training from BHIS/WWHF/ACM as much as I can.

Posted in Blog, Resources

Let’s talk Terminal – Windows Terminal

First off, I really feel like there should be an apostrophe in there somewhere – Windows’ Terminal maybe? Regardless, I recently decided to give Windows Terminal a try after a colleague (thanks Kristy!) mentioned she’s been using it some. And then, I swear, I was seeing it everywhere. I could see some advantages, so I installed it. Now, I think I might have a problem. I wanted a quick reference for myself and thought it would make a decent blog post since I kept sharing these tips with colleagues (whether they wanted to know or not – I get excited/you’re welcome Chad!).

What is it and why I’m a little obsessed (skip if you’re just here for the tips/tricks)

I figured this was some random third party app before I started looking into it. Nope, it is from Microsoft – so that could be positive or negative. The big “selling points” for me were having multiple tabs and custom themes. Since I sometimes (always) have a questionable number of terminals open between various PowerShell, Command Prompt, and WSL options, being able to easily contain and differentiate them would be nice. And nice it is.

Terminal defaulted to PowerShell for me, which was fine. It will also pull in the other terminals you have, so if you are running PowerShell 7 alongside 5, it’ll show up. As will WSL distros, Azure Cloud Shell, etc. When I got some time to fiddle with it, I realized how well it fits into my workflow. The ability to have profiles for different tasks and access all the options without having a ton of windows open improved my efficiency quite a bit. Not knowing which PowerShell window was IPPSSession versus ExchangeOnline versus general versus whatever made moving between them frustrating. You can change themes in the regular terminals, but it’s kind of a pain. I’m now happily down to usually just Windows Terminal and PowerShell ISE when I need that. Much of the time I’m down to Windows Terminal with multiple tabs.

What makes it powerful is the ability to set profiles, pass commands when calling profiles, and start with multiple tabs open. You can also specify the path to start in for a profile, which comes in handy. All can have different themes, tab titles, and tab icons. The ability to have clear visual indicators is incredibly helpful, particularly when you might be doing some IR and need access to multiple terminal options. For some reason, using the right commands in the right places is more efficient. Who knew? It also lets me more clearly separate which tab has admin permissions. I’m using different background colors and specific icons to make it easy to get where I need to be to do that next thing. And as silly as it is, opening with the terminals I’m typically in all day every day without having to do anything makes me ridiculously happy. People like to tell me that’s me being efficient, but it feels kind of lazy to me. I guess it’s like writing a function to run a 1-line command in PowerShell – it may only save a few keystrokes each time, but the cumulative savings really add up.

Set Up Tips

The kind of time-consuming part is getting things set up for effectiveness. A lot of the options can be configured via the Settings GUI – Startup options, Appearance, Profile basics, etc. There are additional color schemes available by searching online, but I’ve been tweaking what’s already there because that’s a rabbit I don’t need to chase right now. Pick your profile name, icon, font, color scheme, background image, etc. to whatever makes you happy. Create custom color schemes in the Color Schemes section and apply them to your profiles to help differentiate them.

Pass commands starting the profile

If you look at the profiles, you’ll notice there’s a “Command Line” spot with just the typical cmd.exe, powershell.exe, wsl.exe -d <distro>, etc. there. What is cool/useful is being able to pass commands here. So if you want to always start a profile to connect to a computer remotely because you do this ALL THE TIME, you can:

 #Include -NoProfile if you want to avoid having a profile loaded
 PowerShell.exe -NoExit -Command Enter-PSSession -ComputerName <computername>
 PowerShell.exe -NoExit -Command Connect-IPPSSession -UserPrincipalName <UPN>
 PowerShell.exe -NoExit -Command Connect-ExchangeOnline -UserPrincipalName <UPN>

You might also want to jump straight into Python:

 cmd.exe /k python #Or whatever you start Python with in Command Prompt
 wsl.exe -d <distro> python3 #Or whatever you start Python with in your various WSL distros

This was a game changer because of how “efficient” I like to be – not having the extra step of connecting or whatever is phenomenal. The ability to pass arguments starting profiles gives you a ton of options. You may need to do a little testing to determine if you need to tweak the syntax a bit, but it’s pretty straightforward.

Start with multiple tabs

This part moved Windows Terminal from nice to awesome…because apparently opening the extra tabs is really hard for me. You do need a more recent version as some of the older ones don’t allow for it. Make sure you determine whether you want to use a distro or a profile because that impacts the syntax. You can also use this to specify colors and other things, but I prefer to do that with color schemes.

All you need to do is open up the JSON file with the settings (which will conveniently tell you if you’ve broken something so you can work off an older version while you troubleshoot) and add this line – I put it after the default profile line:

// Put profiles with spaces in quotes and set focus tab as desired, 0 is default profile
"startupActions": "; new-tab -p <profile> ; new-tab -p <profile>; focus-tab -t 0",

Add as many as you would like and there you go.

Multiple Panes

You can also put things in different panes so you have multiple options visible at the same time. Look through the documentation to see your options. Here are a few handy things:

 # Open vertical or horizontal pane with default profile
 ALT+SHIFT+= (Vertical) ALT+SHIFT+- (Horizontal)
 # Open from profile menu
 ALT+(Click new tab or dropdown to select profile)
 # Move between panes (default keybinding)
 ALT+(Arrow key)
 # Resize the focused pane (default keybinding)
 ALT+SHIFT+(Arrow key)

There’s not a great way to open split panes with different profiles from the keyboard yet, but a decent workaround is to either make a profile that runs the command or put in the command manually. (I’d probably make this a PowerShell function if I wanted to use it a lot…yeah, that happened; here’s the GitHub in case I develop it more.) I would put this in the CurrentUserAllHosts profile version unless you want to keep it separated for some reason. If you create a profile and keep it in your first 9, you can open it with CTRL+SHIFT+<#>. Pretty handy if there are 2 profiles that you need to split panes with frequently. Both of these will open in a new window, which is not that big of a deal. I’d rather deal with that than take hands off the keyboard.

 # Add options as desired and put profile names in quotes if they contain spaces
 # This will open in a new window either way
 wt -p <profile>; split-pane -p <profile>
 # PowerShell function quick version - I might expand this more over time in my GitHub
 # The backticks escape the semicolon and switch so PowerShell passes them to wt
 Function splitpanes ($profile1, $profile2, $type) {
     wt -p $profile1 `; split-pane -p $profile2 `-$type
 }


The documentation is fairly good and a great place to start. It’s not always easy to find exactly what you are looking for though. Here are a few handy links to get started with:

Windows Terminal Startup Settings | Microsoft Docs

Windows Terminal command line arguments | Microsoft Docs

Windows Terminal Actions | Microsoft Docs

Launch Windows Terminal with multiple tabs

cmd | Microsoft Docs

Posted in Blog, Resources

PrintNightmare Scanner – ItWasAllADream

PrintNightmare is causing quite a stir. This writeup from Kevin Beaumont is a great overview and intro if somehow you aren’t familiar with the issue. And the Huntress blog is also a good resource. From being patched in June (but not really, that was another thing) to an OOB July patch that didn’t fully remediate, it’s been quite the adventure in infosec. There has been great work by cube0x0 and gentilkiwi to provide POC code to test systems and validate the July patch, as well as a PowerShell implementation from Caleb Stewart and John Hammond. All kinds of fun. These are all exploit code, which is awesome, but maybe not something you want to run in your org. Enter byt3bl33d3r’s ItWasAllADream scanner. It works well and checks for the vulnerability without exploiting the hosts. Much better for testing. The README is great, but I know in my current state of dumpster fire, there were some brain farts. So I’m writing up a quick guide to not forget. If you want to get some experience with containers, this is good practice with low overhead.

You may need to verify you are running WSL 2 if you want to route this through a WSL distro. Follow the documentation to get up and running with Docker and WSL 2. You may need restarts and to convert WSL 1 distros. Running WSL 1 and Docker will make things cranky, so update before getting started. If you’ve installed Docker via apt, you will need to remove it (and remember it’s not just docker) to use the WSL 2 integration. Verify you have the right Docker by confirming the version. Setting up WSL 2 wasn’t difficult, but it can be a little fidgety.
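For the version check and conversion, this is roughly what it looks like from an elevated PowerShell or CMD prompt (the distro name is a placeholder, use whatever shows up in your list):

```shell
# Check which WSL version each installed distro is using
wsl --list --verbose

# Convert a WSL 1 distro to WSL 2 so the Docker integration works
wsl --set-version <distro> 2
```

The conversion can take a while on a large distro, so don’t panic if it sits there for a bit.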

 docker --version

Microsoft recommends using Windows Terminal for the WSL 2 Docker integration. That may just be to push Terminal, but it’s got some advantages. So at least consider it.

Once you’ve got whatever you will be using for Docker functional, install and run as directed. Super simple.

 git clone
 cd ItWasAllADream #This and next can be combined if desired, make sure you clone it where you want it
 docker build -t itwasalladream . #Don't forget the trailing dot for the build context
 docker run -it itwasalladream -u <user> -p <password> -d <domain> <targetinCIDRnotation>

The output is a little verbose by default and is very clear what is found. This is great work by byt3bl33d3r.

The CSV output is dropped in the working directory in the CONTAINER. This was where I had a total brain fart. I do feel a little better that I’m not the only one based on the issues on Github. So, getting the report out of the container requires using docker cp. See the Docker documentation for details.

 docker cp <containername>:<reportname> .

If you don’t know the container name, get it by listing the containers available.

 docker ps -a

And clean up your scans when you are done.

 docker system prune

As much as I enjoy the generated container names Docker creates, they were a bit long to deal with effectively when using this to really check things. So name your container something useful and copy where you need it.

What is really great about the infosec community: from when the issue/question was posted, it took about 12 hours for details to be provided to the original poster, with links and sample commands.

 docker run --name <shortname> -ti itwasalladream -u <user> -p <password> -d <domain> <targetinCIDR>
 # Get the report name from the output, adjust path to fit your needs
 docker cp <shortname>:<reportname> /mnt/c/Users/<username>/<path>

So that’s ItWasAllADream in a nutshell. An easy-to-use scanner that in my testing has not caused issues and has returned accurate info. I suspect we’ll have a lot of people trying to scan systems who may not use Docker or WSL regularly, and hopefully this will help if they get stuck. And yes, this will probably be me here in a few months when I decide to re-check some things. Thus the writing it down.

I’m seeing a ton of questions about how to implement mitigations, and this testing is really helpful. Right now, it looks like the best option is shutting off the print spooler where that’s an option. Since that’s really impractical in a lot of cases, the GPO disabling inbound remote printing also seems to be effective. Either way, I bet we’re going to be dealing with the fallout for some time.
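For reference, here’s a sketch of the spooler-off mitigation on a single host from an elevated prompt (test before rolling out broadly, and remember this kills local printing too, so only use it where printing isn’t needed):

```shell
# Stop the Print Spooler service and keep it from starting on boot
net stop spooler
sc config spooler start= disabled
```

The GPO route covers the remote vector while leaving local printing alone, so it’s the friendlier option for machines that actually print.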

Posted in Resources

Passing CASP+

I recently passed (as in yesterday) the CompTIA Advanced Security Practitioner (CASP+) exam. There are always questions about what you did to pass and why that cert, so here’s my breakdown of that type of info.

Why CASP+?

An obvious (and frequent) question is CASP+ or CISSP? Both have value, though to be honest, I think CISSP has such a head start and brand power that I’m not sure CASP+ will ever gain that level of popularity. They are sort of “equivalent”, but not really. I thought about doing CompTIA’s CySA+ next too. But for that skillset, I have opted for eLearnSecurity’s Incident Handling and Threat Hunting certs (again, not exactly the same thing, but I wanted the hands-on components of the eLearnSecurity courses and exams). I also wanted a more managerial-focused cert because I felt that was an important aspect of my skillset to develop. And I liked the more technical focus of CASP+ compared to CISSP.

But bottom line, the 5 year experience requirement for CISSP kind of takes it off the table for a bit longer. I know I can do the Associate of CISSP thing, but I don’t feel like that carries much weight (for my situation). I would rather focus on other things until I’m eligible for the CISSP. Plus I should be able to take CISSP at about the time I need to renew CASP+, so that will be a good renewal option.

I’m not going to get into which is the better or harder cert. That’s just asking for aggravation. This cert was the right option for me at this time.

How I Prepped

I was fortunate to be able to take a prep course with Global Knowledge last summer through my employer. I took it as an instructor-led online class and learned a lot. And learned enough to know I wasn’t ready to take the exam yet. The material was covered well, and reviewing my notes from the class was one of the last things I did to prep. I did the labs during the class and then read the provided book afterward. Good overall prep and probably enough for people with more experience than I had at the time. I picked this training for job-related professional development because I thought it made the most sense for the job I have. The certification was in some ways just a bonus.

Based on my assessment that I needed additional prep, I turned to Cybrary. I did the set of labs from Practice Labs – that had 30+ virtual labs covering the content of the exam. That was a LOT of time. I don’t know that it was really necessary for the exam, but it was great for skill development. I also did the CASP+ video course with Jim Hollis. I used that as kind of an audiobook more than a dedicated watch-the-videos-and-take-notes thing. Basically a time-efficient way to cover the material again. I thought the class was good. Not as in-depth as the Global Knowledge course, but a good review of the information. I also did the practice exams available from Kaplan and Practice Labs. Those were huge. Getting used to how the questions are asked is a really important part of prepping for this cert (and CISSP as well, from what I understand). I have a tendency to overthink questions and bring in all kinds of what-ifs, so the practice exams and explanations were really helpful. Plus they work well for review. I have access to the labs and exams because of my TA work, but I would have paid for at least a couple months of access to help prep if I didn’t. I will also say that even though my TA work wasn’t directly related to prepping, it did help with preparation because it’s all professional development.

I also listened to the LinkedIn Learning CASP+ course by Jason Dion – same as the Cybrary course, an audio review. Another way to get exposure to the content. This course’s coverage was probably in between Global Knowledge and Cybrary in terms of depth. It is interesting that different courses focus on different things. I thought it was a good course, and the review questions were a little different. This was also something I was able to access because of my employer. I’m very fortunate to work somewhere that my boss values continued professional development and has some budget to support it.

This certification focuses more on application of concepts than memorization, so prep accordingly. I think the big question I have about prep for this one is how much the Global Knowledge course was “needed” since it was the most expensive piece. I’m really glad I took the course because I learned a lot from the instructor and other students, but it’s unlikely I could have afforded the class on my own. If you look at the costs for Cybrary and LinkedIn Learning, you can get a lot of content for a pretty reasonable price. I am a little biased toward Cybrary since I am a TA with them, but I feel like if you look at the content available, you get a massive amount of stuff for the price. If you can’t afford a year, just getting the premium access for a few months of dedicated prep will serve you well. If I had to choose between Cybrary and LinkedIn Learning, I would opt for Cybrary because of the labs and practice exams. I think combining the Cybrary CASP+ materials and a good CASP+ book would put you in a pretty good position. I used the book from my Global Knowledge course, so I can’t recommend a specific text. Amazon has a couple of options from the publishers you expect to see. But the reviews (grain of salt needed) for both the All-in-One and Sybex are mixed.

The exam itself costs $450ish direct, so probably around $400 with the discount you can usually get. A couple months of Cybrary, a prep book, and the exam, and you are looking at under a grand. That’s not cheap, but hopefully doable for most people looking at this cert. The Global Knowledge course highlights the importance of a training budget at work. It really was good training, but more expensive than I would likely have paid for out-of-pocket.

Timeline

I took the Global Knowledge course about a year ago, so I took my time prepping. I think it can be done more quickly, but I was okay taking longer. I continued my habit of having too many irons in the fire. Working on the AWS pentesting book definitely took some prep time away. As did working on the eLearnSecurity incident handling course. You can argue those also are preparation since they are professional development, but I definitely could have shortened my prep time by focusing purely on CASP+.

In the week leading up to the test, I reviewed my notes from the Global Knowledge course and drilled the practice exams on Cybrary a lot. I spent the morning working then took the test. I did take the test while my area was still under restrictions related to COVID – don’t generally recommend taking a certification exam in the middle of a pandemic, but it was scheduled when it was scheduled.

What Would I Do Differently

If I had the experience and wanted to just get the cert – do one of the video courses, read a book, and prep with the exams. I think depending on reading speed and other demands, you could be ready to go in a couple months (or less). Otherwise, I’m pretty happy with how I prepped. I could have been more focused, but I get so much value out of book club and other things that it’s not worth eliminating those things. I think scheduling the exam is a good idea early in the prep process. Having a deadline helps keep you focused. Given the cost of certification attempts, I’m likely going to continue to take my time preparing. I want to go in prepared and feel like I’ve done what I can to pass on the first attempt.

What’s Next

I’m still horrible about celebrating accomplishments, so I posted on LinkedIn again and will get around to posting on Twitter. I have already started planning out when I’ll get my incident handling course done plus working on the AWS pentesting book. I’ve got an Autopsy training that I picked up when they offered it free that I’m really looking forward to. And I’ve got a couple of really cool Black Hills 4 hour trainings that I need to work through. That sounds like a lot when I write it down…

For today, the day after passing, I’m going to enjoy the accomplishment and be happy with how far I’ve come.