
Workshop Review – DevOps for Hackers with Hands-On Labs w/Ralph May (Black Hills Information Security)

I’ve been wanting to get some exposure to deployment options like Ansible and Terraform, so when a Black Hills Information Security (BHIS) workshop popped up in my LinkedIn feed talking about using both for hackers, I mashed the registration link as fast as I could.

My “why” for taking the workshop was to have a better idea of how I can use Ansible and Terraform to better manage my lab environments. Since I tend to pop up and destroy cloud resources, it made sense to learn more and see if it could help. Plus it’s not going to hurt to know the basics of either one. That the workshop used DigitalOcean was a bonus. It’s nice to get out of the AWS and Azure worlds to see something new.

The TL;DR: if you see a Black Hills Information Security, Wild West Hackin’ Fest, or Active Countermeasures webinar or workshop that covers something you are interested in, sign up. It will be a good use of your time.

Workshop Resources

Workshop YouTube Recording: DevOps for Hackers with Hands-On Labs w/ Ralph May (4-Hour Workshop) – YouTube

Workshop Website (looks like Ralph May turned this into a public Notion page, so I’m linking that rather than the original): DevOps for Hackers with Hands-On Labs w/ Ralph May (4-Hour Workshop)


Workshop Overview

This 4 hour (plus an hour for setup) workshop included 4 labs (Terraform, Ansible, Docker, and C2 deployment). Ralph introduced each topic before walking through the lab. A huge help was that he provided completed lab files. Using those, I was able to keep up with the labs – there’s no way I could have typed fast enough. I might have managed if I were more familiar with the platforms, but this approach worked for me. My plan is to go through the workshop again at my own pace and build the lab files myself, knowing I have functional files to check things against if needed.

The initial hour for setup was helpful since I had a brain fart about unpacking the VM and didn’t put it in a specific folder prior to extracting. The BHIS Discord was very active during the setup time, and everyone I saw having issues was able to get moving in the right direction before things started. I really appreciate this extra time because labs don’t go well when your environment is wonky. The lab setup was same day, which I think may be the more effective method. An earlier workshop sent the lab files out ahead of time, and I think that’s more likely to get put off until it’s time for the workshop. That was also a pretty large VM download, though, so there may have been a need to spread that traffic out.

I think you would get a decent amount out of the workshop just following along and not working the labs during the live portion. I prefer to do what I can hands-on during the live portion so I have a better idea of what I want to go back to.

Presentation slides and lab guides were available for download, and it looks like those will be available on the Notion site for at least a little while. They mentioned Ralph is developing this into a full 16 hour workshop, and I think for anyone who is managing infrastructure for pentesting or red teaming, it could be a good time investment. I could see using this approach to pop up custom infrastructure quickly for each engagement and easily keep things separated out. The BHIS team also built in breaks every hour, so you could have a few minutes to step out for a bio break, check in on work, or wander aimlessly for a bit. That approach is working well for their 4 hour workshops that I’ve been in.

My Takeaways

I wanted to get a good idea of what things were and how they were used – mission accomplished in that regard. These are my brief, extremely high level takeaways. There’s a lot more to it, but these are the things that I want to have stored in my head so I have an idea of what I might want to reference for different projects.

  • Terraform – infrastructure as code, manage infrastructure, fast and consistent, free/open source, great for cloud and API
  • Ansible – infrastructure as code, configuration management, Python and YAML, slower, OS config
  • Docker (this was what was most familiar to me in the workshop) – containers, CI/CD, runs on all the things, application isolation, clean up your images
  • C2 deployment – there are a lot of C2 options available (and a lot of fun logos), calling some just a C2 framework is underselling their capabilities
    • Mythic – Docker(!), cool but there’s a lot going on, need to research more if I want to effectively use this, can be deployed with Ansible
    • I need to look up the ones I’m not familiar with (not being a pentester these aren’t something I can justify a lot of time playing with) to keep up with what’s out there. I need to look at some of these for labs so I’m not just using Metasploit, Empire, etc. because those are the ones I’m most familiar with. But also beware of chasing shiny things.
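To make the Terraform takeaway a little more concrete, here’s a minimal sketch of standing up a single DigitalOcean droplet. This is a hypothetical example, not the workshop’s lab files – the droplet name, region, and size are placeholders:

```hcl
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
    }
  }
}

# Reads the API token from the DIGITALOCEAN_TOKEN environment variable
provider "digitalocean" {}

# One small lab droplet - `terraform destroy` tears it down when you're done
resource "digitalocean_droplet" "lab" {
  name   = "lab-box"
  region = "nyc3"
  size   = "s-1vcpu-1gb"
  image  = "ubuntu-22-04-x64"
}
```

The `terraform plan` / `apply` / `destroy` cycle is what makes this such a good fit for lab environments that get stood up and torn down over and over.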

Post-Workshop To Dos

I want to go back through and do the labs by creating the files myself. Spending that time will help internalize the capabilities of Terraform and Ansible. I’ll probably do this using Digital Ocean initially, but I think the next time I’m building labs in AWS or Azure, I want to at least try setting things up with Terraform or Ansible as appropriate.
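On the Ansible side, a playbook for configuring whatever Terraform stood up might look something like this sketch. The host group and package names are assumptions for illustration, not the workshop’s actual lab files:

```yaml
---
# Hypothetical baseline playbook for freshly created lab hosts
- name: Configure lab hosts
  hosts: lab
  become: true
  tasks:
    - name: Install baseline tooling
      ansible.builtin.apt:
        name:
          - docker.io
          - tmux
        state: present
        update_cache: true
```

Run it with something like `ansible-playbook -i inventory.ini baseline.yml` against an inventory pointing at the droplet’s IP.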

I probably would not go for the 16 hour workshop right now just because what it would cover are not my primary responsibilities. If I were in a role where I could use this approach to be more efficient, I’d be jumping at the opportunity. BHIS and WWHF have some of the most reasonable training rates around. And they are offering even more with a cyberrange as part of their Antisyphon training stuff, so keep an eye on their training schedule.

Wrap Up

The content was well prepared and well presented. Labs worked and had files available so you could keep up if needed. I have an understanding of how Terraform and Ansible can be used. I know where I can go to find out more and ways to practice using them. I wouldn’t even call myself a beginner, but I know enough to learn more. That’s a big part of why I take things like this.

Bottom line, this was a good use of my time. I will continue to take advantage of the training from BHIS/WWHF/ACM as much as I can.


Let’s talk Terminal – Windows Terminal

First off, I really feel like there should be an apostrophe in there somewhere – Windows’ Terminal maybe? Regardless, I recently decided to give Windows Terminal a try after a colleague (thanks Kristy!) mentioned she’s been using it some. And then, I swear, I was seeing it everywhere. I could see some advantages, so I installed it. Now, I think I might have a problem. I wanted a quick reference for myself and thought it would make a decent blog post since I kept sharing these tips with colleagues (whether they wanted to know or not – I get excited/you’re welcome, Chad!).

What is it and why I’m a little obsessed (skip if you’re just here for the tips/tricks)

I figured this was some random third party app before I started looking into it. Nope, it is from Microsoft – so that could be positive or negative. The big “selling points” for me were having multiple tabs and custom themes. Since I sometimes (always) have a questionable number of terminals open between various PowerShell, Command Prompt, and WSL options, being able to easily contain and differentiate them would be nice. And nice it is.

Terminal defaulted to PowerShell for me, which was fine. It will also pull in the other terminals you have, so if you are running PowerShell 7 alongside 5, it’ll show up. As will WSL distros, Azure Cloud Shell, etc. When I got some time to fiddle with it, I realized how well it fits into my workflow. The ability to have profiles for different tasks and access all the options without having a ton of windows open improved my efficiency quite a bit. Not knowing which PowerShell window was IPPSSession versus ExchangeOnline versus general versus whatever made moving between them frustrating. You can change themes in the regular terminals, but it’s kind of a pain. I’m now happily down to usually just Windows Terminal and PowerShell ISE when I need that. Much of the time I’m down to Windows Terminal with multiple tabs.

What makes it powerful is the ability to set profiles, pass some commands when calling profiles, and start with multiple tabs open. You can also specify the path to start in for a profile, which comes in handy. All can have different themes, tab titles, and tab icons. The ability to have clear visual indicators is incredibly helpful, particularly when you might be doing some IR and need to have access to multiple terminal options. For some reason, using the right commands in the right places is more efficient. Who knew? It also lets me more clearly separate which has admin permissions. I’m using different background colors and specific icons to make it easy to get where I need to be to do that next thing. And as silly as it is, opening with the terminals I’m typically in all day every day without having to do anything makes me ridiculously happy. People like to tell me that’s me being efficient, but it feels kind of lazy to me. I guess it’s like writing a function to run a 1-line command in PowerShell – it may only save a few keystrokes each time, but the cumulative savings really adds up.

Set Up Tips

The kind of time-consuming part is getting things set up for effectiveness. A lot of the options can be configured via the Settings GUI – Startup options, Appearance, Profile basics, etc. There are additional color schemes available by searching online, but I’ve been tweaking what’s already there because that’s a rabbit I don’t need to chase right now. Pick your profile name, icon, font, color scheme, background image, etc. to whatever makes you happy. Create custom color schemes in the Color Schemes section and apply them to your profiles to help differentiate them.
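For reference, a profile entry in settings.json ends up looking something like this. This is a hypothetical sketch – the name, paths, and icon are made up, and the color scheme is just the built-in Campbell:

```json
{
    "name": "IR - Admin PowerShell",
    "commandline": "powershell.exe",
    "startingDirectory": "C:\\Cases",
    "colorScheme": "Campbell",
    "tabTitle": "ADMIN",
    "icon": "C:\\icons\\admin.png"
}
```

Each profile entry lives in the `profiles` > `list` array of settings.json, so you can stack up as many of these as you need.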

Pass commands starting the profile

If you look at the profiles, you’ll notice there’s a “Command Line” spot with just the typical cmd.exe, powershell.exe, wsl.exe -d <distro>, etc. there. What is cool/useful is being able to pass commands here. So if you want to always start a profile to connect to a computer remotely because you do this ALL THE TIME, you can:

 #Include -NoProfile if you want to avoid having a profile loaded
 PowerShell.exe -NoExit -Command Enter-PSSession -ComputerName <computername>
 PowerShell.exe -NoExit -Command Connect-IPPSSession -UserPrincipalName <UPN>
 PowerShell.exe -NoExit -Command Connect-ExchangeOnline -UserPrincipalName <UPN>

You might also want to jump straight into Python:

 cmd.exe /k python #Or whatever you start Python with in Command Prompt
 wsl.exe -d <distro> python3 #Or whatever you start Python with in your various WSL distros

This was a game changer because of how “efficient” I like to be – not having the extra step of connecting or whatever is phenomenal. The ability to pass arguments starting profiles gives you a ton of options. You may need to do a little testing to determine if you need to tweak the syntax a bit, but it’s pretty straightforward.

Start with multiple tabs

This part moved Windows Terminal from nice to awesome…because apparently opening the extra tabs is really hard for me. You do need a more recent version as some of the older ones don’t allow for it. Make sure you determine whether you want to use a distro or a profile because that impacts the syntax. You can also use this to specify colors and other things, but I prefer to do that with color schemes.

All you need to do is open up the JSON file with the settings (which will conveniently tell you if you’ve forked and work off an older version while you troubleshoot) and add this line – I put it after the default profile line:

// Put profiles with spaces in (escaped) quotes and set the focus tab as desired, 0 is the default profile
"startupActions": "; new-tab -p <profile> ; new-tab -p <profile>; focus-tab -t 0",

Add as many as you would like and there you go.

Multiple Panes

You can also put things in different panes so you have multiple options visible at the same time. Look through the documentation to see your options. Here are a few handy things:

 # Open vertical or horizontal pane with default profile
 ALT+SHIFT+= (Vertical) ALT+SHIFT+- (Horizontal)
 # Open from profile menu
 ALT+(Click new tab or dropdown to select profile)
 # Move focus between panes
 ALT+(Arrow keys)
 # Resize the focused pane
 ALT+SHIFT+(Arrow keys)

There’s not a great way to open split panes with different profiles from the keyboard yet, but a decent workaround is to either make a profile that runs the command or put in the command manually (I’d probably make this a PowerShell function if I wanted to use it a lot…yeah, that happened – here’s the GitHub in case I develop it more). I would put this in the CurrentUserAllHosts profile version unless you want to keep it separated for some reason. If you create a profile and keep it in your first 9, you can open it with CTRL+SHIFT+<#>. Pretty handy if there are 2 profiles that you need to split panes with frequently. Both of these will open in a new window, which is not that big of a deal. I’d rather deal with that than take my hands off the keyboard.

 # Add options as desired and put profile names in quotes if they contain spaces
 # This will open in a new window either way
 wt -p <profile>; split-pane -p <profile>
 # PowerShell function quick version - I might expand this more over time in my GitHub
 # Backtick-escape the semicolon so PowerShell passes it to wt instead of
 # treating it as a statement separator; $type should be H or V
 Function splitpanes ($profile1, $profile2, $type) {
     wt -p $profile1 `; split-pane -p $profile2 "-$type"
 }
 # Example: splitpanes "Windows PowerShell" Ubuntu H


The documentation is fairly good and a great place to start. It’s not always easy to find exactly what you are looking for though. Here are a few handy links to get started with:

Windows Terminal Startup Settings | Microsoft Docs

Windows Terminal command line arguments | Microsoft Docs

Windows Terminal Actions | Microsoft Docs

Launch Windows Terminal with multiple tabs

cmd | Microsoft Docs


PrintNightmare Scanner – ItWasAllADream

PrintNightmare is causing quite a stir. This writeup from Kevin Beaumont is a great overview and intro if somehow you aren’t familiar with the issue. And the Huntress blog is also a good resource. From being patched in June (but not really – that was another thing) to an out-of-band July patch that didn’t fully remediate, it’s been quite the adventure in infosec. There has been great work by cube0x0 and gentilkiwi to provide POC code to test systems and validate the July patch, as well as a PowerShell implementation from Caleb Stewart and John Hammond. All kinds of fun. These are all exploit code, which is awesome, but maybe not something you want to run in your org. Enter byt3bl33d3r’s ItWasAllADream scanner. It works well and checks for the vulnerability without exploiting the hosts. Much better for testing. The README is great, but I know in my current state of dumpster fire, there were some brain farts. So I’m writing up a quick guide so I don’t forget. If you want to get some experience with containers, this is good practice with low overhead.

You may need to verify you are running WSL 2 if you want to route this through a WSL distro. Follow the documentation to get up and running with Docker and WSL 2. You may need restarts and to convert WSL 1 distros. Running WSL 1 and Docker will make things cranky, so update before getting started. If you’ve installed Docker via apt, you will need to remove it (and remember it’s not just docker) to use the WSL 2 integration. Verify you have the right Docker by confirming the version. Setting up WSL 2 wasn’t difficult, but it can be a little finicky.

 docker --version

Windows recommends using Windows Terminal for the WSL 2 Docker integration. That may just be to push Terminal, but it’s got some advantages. So at least consider it.

Once you’ve got whatever you will be using for Docker functional, install and run as directed. Super simple.

 git clone
 cd ItWasAllADream #This and next can be combined if desired, make sure you clone it where you want it
 docker build -t itwasalladream . #Don't miss the dot - the build context is the current directory
 docker run -it itwasalladream -u <user> -p <password> -d <domain> <targetinCIDRnotation>

The output is a little verbose by default and is very clear what is found. This is great work by byt3bl33d3r.

The CSV output is dropped in the working directory in the CONTAINER. This was where I had a total brain fart. I do feel a little better that I’m not the only one based on the issues on Github. So, getting the report out of the container requires using docker cp. See the Docker documentation for details.

 docker cp <containername>:<reportname> .

If you don’t know the container name, get it by listing the containers available.

 docker ps -a

And clean up your scans when you are done.

 docker system prune

As much as I enjoy the generated container names Docker creates, they were a bit long to deal with effectively when using this to really check things. So name your container something useful and copy where you need it.

What is really great about the infosec community: from when the issue/question was posted, it took about 12 hours for the original poster to get details, links, and sample commands.

 docker run --name <shortname> -ti itwasalladream -u <user> -p <password> -d <domain> <targetinCIDR>
 # Get the report name from the output, adjust path to fit your needs
 docker cp <shortname>:<reportname> /mnt/c/Users/<username>/<path>

So that’s ItWasAllADream in a nutshell. Easy-to-use scanner that in my testing has not caused issues and has returned accurate info. I suspect we’ll have a lot of people trying to scan systems who may not use Docker or WSL regularly, and hopefully this will help if they get stuck. And yes, this will probably be me here in a few months when I decide to re-check some things. Thus writing it down.

I’m seeing a ton of questions about how to implement mitigations, and this testing is really helpful. Right now, it looks like the best option is shutting off the print spooler where that’s an option. Since that’s really impractical in a lot of cases, the GPO disabling inbound remote printing also seems to be effective. Either way, I bet we’re going to be dealing with the fallout for some time.


Passing CASP+

I recently passed (as in yesterday) the CompTIA Advanced Security Practitioner (CASP+) exam. There are always questions about what you did to pass and why that cert, so here’s my breakdown of that type of info.

Why CASP+?

An obvious (and frequent) question is CASP+ or CISSP? Both have value, though to be honest, I think CISSP has such a head start and brand power that I’m not sure CASP+ will ever gain that level of popularity. They are sort of “equivalent”, but not really. I thought about doing CompTIA’s CySA+ next too. But for that skillset, I have opted for eLearnSecurity’s Incident Handling and Threat Hunting certs (again, not exactly the same thing, but I wanted the hands-on components of the eLearnSecurity courses and exams). I also wanted a more managerial-focused cert because I felt that was an important aspect of my skillset to develop, and I liked the more technical focus of CASP+ compared to CISSP.

But bottom line, the 5 year experience requirement for CISSP kind of takes it off the table for a bit longer. I know I can do the Associate of CISSP thing, but I don’t feel like that carries much weight (for my situation). I would rather focus on other things until I’m eligible for the CISSP. Plus I should be able to take CISSP at about the time I need to renew CASP+, so that will be a good renewal option.

I’m not going to get into the which is a better or harder cert. That’s just asking for aggravation. This cert was the right option for me at this time.


I was fortunate to be able to take a prep course with Global Knowledge last summer through my employer. I took it as an instructor-led online class and learned a lot. And learned enough to know I wasn’t ready to take the exam yet. The material was covered well, and reviewing my notes from the class was one of the last things I did to prep. I did the labs during the class and then read the provided book afterward. Good overall prep and probably enough for people with more experience than I had at the time. I picked this training for job-related professional development because I thought it made the most sense for the job I have. The certification was in some ways just a bonus.

Based on my assessment that I needed additional prep, I turned to Cybrary. I did the set of labs from Practice Labs – that had 30+ virtual labs covering the content of the exam. That was a LOT of time. I don’t know that it was really necessary for the exam, but it was great for skill development. I also did the CASP+ video course with Jim Hollis. I used that as kind of an audiobook more than a dedicated watch-the-videos-and-take-notes thing. Basically a time-efficient way to cover the material again. I thought the class was good. Not as in-depth as the Global Knowledge course, but a good review of the information.

I also did the practice exams available from Kaplan and Practice Labs. Those were huge. Getting used to how the questions are asked is a really important part of prepping for this cert (and CISSP as well, from what I understand). I have a tendency to overthink questions and bring in all kinds of what-ifs, so the practice exams and explanations were really helpful. Plus they work well for review. I have access to the labs and exams because of my TA work, but I would have paid for at least a couple months’ access to help prep if I didn’t have it. I will also say that even though my TA work wasn’t directly related to prepping, it did help with preparation since it’s all related professional development.

I also listened to the LinkedIn Learning CASP+ course by Jason Dion – same as the Cybrary course, an audio review. Another way to get exposure to the content. This course’s coverage was probably in between Global Knowledge and Cybrary in terms of depth. It is interesting that different courses focus on different things. I thought it was a good course, and the review questions were a little different. This was also something I was able to access because of my employer. I’m very fortunate to work somewhere that my boss values continued professional development and has some budget to support it.

This certification focuses more on application of concepts than memorization, so prep accordingly. I think the big question I have about prep for this one is how much the Global Knowledge course was “needed” since it was the most expensive piece. I’m really glad I took the course because I learned a lot from the instructor and other students, but it’s unlikely I could have afforded the class on my own. If you look at the costs for Cybrary and LinkedIn Learning, you can get a lot of content for a pretty reasonable price. I am a little biased toward Cybrary since I am a TA with them, but I feel like if you look at the content available, you get a massive amount of stuff for the price. If you can’t afford a year, just getting the premium access for a few months of dedicated prep will serve you well. If I had to choose between Cybrary and LinkedIn Learning, I would opt for Cybrary because of the labs and practice exams. I think combining the Cybrary CASP+ materials and a good CASP+ book would put you in a pretty good position. I used the book from my Global Knowledge course, so I can’t recommend a specific text. Amazon has a couple of options from the publishers you expect to see. But the reviews (grain of salt needed) for both the All-in-One and Sybex are mixed.

The exam itself costs $450ish direct, so probably around $400 with the discount you can usually get. A couple months of Cybrary, a prep book, and the exam, and you are looking at under a grand. That’s not cheap, but hopefully doable for most people looking at this cert. The Global Knowledge course highlights the importance of a training budget at work. It really was good training, but more expensive than I would likely have paid out-of-pocket.


I took the Global Knowledge course about a year ago, so I took my time prepping. I think it can be done more quickly, but I was okay taking longer. I continued my habit of having too many irons in the fire. Working on the AWS pentesting book definitely took some prep time away. As did working on the eLearnSecurity incident handling course. You can argue those also are preparation since they are professional development, but I definitely could have shortened my prep time by focusing purely on CASP.

In the week leading up to the test, I reviewed my notes from the Global Knowledge course and drilled the practice exams on Cybrary a lot. I spent the morning working then took the test. I did take the test while my area was still under restrictions related to COVID – don’t generally recommend taking a certification exam in the middle of a pandemic, but it was scheduled when it was scheduled.

What Would I Do Differently

If I had the experience and wanted to just get the cert – do one of the video courses, read a book, and prep with the exams. I think depending on reading speed and other demands, you could be ready to go in a couple months (or less). Otherwise, I’m pretty happy with how I prepped. I could have been more focused, but I get so much value out of book club and other things that it’s not worth eliminating those things. I think scheduling the exam is a good idea early in the prep process. Having a deadline helps keep you focused. Given the cost of certification attempts, I’m likely going to continue to take my time preparing. I want to go in prepared and feel like I’ve done what I can to pass on the first attempt.

What’s Next

I’m still horrible about celebrating accomplishments, so I posted on LinkedIn again and will get around to posting on Twitter. I have already started planning out when I’ll get my incident handling course done plus working on the AWS pentesting book. I’ve got an Autopsy training that I picked up when they offered it free that I’m really looking forward to. And I’ve got a couple of really cool Black Hills 4 hour trainings that I need to work through. That sounds like a lot when I write it down…

For today, the day after passing, I’m going to enjoy the accomplishment and be happy with how far I’ve come.


Graylog Homelab/POC: Part 2 – Searching, Streams, Alerts, and Dashboards

Now that you have data coming in (if you don’t, see Part 1), it’s time to do something with it. A good place to start is figuring out what the logs are saying. After looking over a couple of common log types, I think it gets to be relatively intuitive. Most vendors will have some sort of explanation of log structure, or you can refer to the RFC when appropriate. For our POC home lab, these resources give a solid overview of the logs we’re bringing in:

Once you have a general idea of what the logs look like, it’s time to start searching for things that might be interesting.


Honestly, if you can Google, you can do basic searches in most log management systems. I’ve found Graylog’s search language to be easy to jump into – typical Boolean search structure with an OR default for multiple words. Even though it’s straightforward, taking the time to read through the Graylog documentation on searching would be worth it. According to the Graylog Searching documentation, Graylog is close to Lucene syntax. Lucene is used in a lot of different places, including Azure search – so a good thing to get comfortable with. Microsoft has a guide for using Lucene with Azure that is pretty thorough. (Note – these are good things to notice to help identify transferable skills.)

  • Search for a keyword anywhere – keyword

  • Search for multiple keywords with OR – keyword1 keyword2

  • Search for exact phrase "keyword1 keyword2"

  • Search for a specific type – type:keyword

  • Search for a specific type or another specific type – type:(keyword1 OR keyword2)

You can also pull in regular expressions. If you aren’t familiar with regular expressions, the short Regex course on Cybrary is a demo heavy introduction (lots of examples, not a lot of lecture – I got a lot out of it) and regexr is a fantastic sandbox to practice in.

Graylog does have a time frame selector that defaults to the last 5 minutes. If you forget to adjust this, you may see nothing. And wonder what’s going on. It’s a handy feature though that quickly allows you to narrow things down.

The search page generates a histogram with the search that you can easily add to a dashboard. I like this feature and it was completely intuitive to understand. You can also do some additional analysis that can be popped into dashboards. One of the options I find nice is the “Quick Values” option – this gives you a pie chart and table of the results (again, easily popped into a dashboard). I found it really easy to search, make a visualization, and send it to a dashboard in Graylog. Easy enough that I was able to point and click through to make some dashboards without looking at the documentation. Looking over the documentation opens up a lot of additional possibilities.


The streams concept in Graylog gave me pause for a moment, but reading about their purpose made them make sense. Of course the documentation is helpful. I basically figured out what they were initially by trying to create an alert and the web console saying I had to set up a stream first. So the process is straightforward, but it’s good to actually understand the architecture as we work on moving from a POC to a production deployment. I look at streams as a way to see just the messages of a certain type that I want to look at. So you could pull out logs from a specific machine or of a certain type. A couple of important things to note from the documentation are that streams are processed in real-time and you can search for complex things using streams because the tag for the stream is applied as the message is processed. These aspects make streams a great way to keep an eye on specific things going on in the environment and quickly find messages of interest.

The easiest way to set up the stream is to set up a rule using the name of the machine sending in the desired logs. You can choose to send messages to multiple streams – for example, having a stream for a specific Windows machine and another tracking all failed logins.

For our homelab POC, we’ll walk through setting up a stream for the Parrot VM. In the Graylog web console, go to Streams then Create Stream.

  • Title: Parrot

  • Description: Logs from Parrot VM

  • Index: Default index set

  • Leave the box for “Remove matches from ‘All messages’ stream” unchecked

  • Save

Now to give the stream a rule to pull in the logs. Click on Manage Rules, then select the Syslog UDP input created earlier. (If you have other things coming in on this input, go to the All Messages stream and copy the message ID for one of the Parrot logs to save a little time. Also make note of the index name.) Pull up a message from the Parrot VM and use that to create a stream rule. (You can select the inverted option to exclude messages matching the rule.)

  • Field: source

  • Type: match exactly

  • Value: parrot

Verify that the message would be routed to the stream, then click I'm done. Once you’re back on the main Streams console, be sure to start the stream (click the Start Stream button).


Now that you’ve got a stream set up, you can create an alert to trigger off the stream. The simplest alert is probably one that triggers when more than X messages occur in Y amount of time. Alerts can be either unresolved (trigger condition is true) or resolved (trigger condition is no longer true). Graylog also provides the option of setting a grace period so a period of time must pass before an alert is re-triggered. This is a high-level overview, so check out the documentation for more depth. You need to set up conditions to trigger alerts – this process is very similar to setting up a stream.

A few alerts to get started with could be triggering when a login failure occurs on Windows (search for event ID 4625) or when you get more than 5 alerts at level 3 or lower from your firewall. In a POC environment, playing around with the different streams and alerts is important. Try adjusting the sensitivity of the triggering conditions to the point where you get alerts when something needs attention. If you notice that you aren’t getting much going on in your lab environment, you can either create traffic that would trigger the alerts (an easy way is intentionally entering incorrect login credentials) or adjust the sensitivity to trigger on normal traffic in your environment.

Steps for setting up an alert:

  • Define the condition: Alerts > Conditions > Add New Condition

    • Choose which stream to alert on.

    • Choose condition type:

      • Field Content Alert Condition: Triggers when at least one message contains the specified value.

        • Somewhat nuclear option – I want to know whenever this happens. For your own sanity, I would use these sparingly.

      • Field Aggregation Alert Condition: Triggers when a computation over the stream is higher/lower than a threshold.

        • Math – Use mean, SD, min, max, or sum to monitor a condition. Good for monitoring performance, or when you want to know when something goes out of range (set conditions for both min and max options).

      • Message Count Alert Condition: More than a specified number of messages in the specified time frame.

        • Frequency count – Let me know when something spikes. Easy to set up for some basic security monitoring.

  • Set up the notification: Alerts > Notifications > Add New Notification

    • Note: The notification will apply to all conditions for a stream.

    • Choose which stream to notify on.

    • Choose notification type (these are the defaults, more can be added):

      • HTTP Alarm Callback: Call an endpoint when the alert is triggered. Sends a POST request to the notification URL.

      • Email Alert Callback: Sends an email; you have to modify the server configuration file with your SMTP settings.

      • For the POC, either an SMTP server or a webpage to post alerts to will work. I found the DigitalOcean walkthroughs useful for setting up a send-only SMTP server and for installing the Apache web server. I think the easiest option is setting up the webpage, but building an SMTP server would definitely be a great learning exercise.
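For the "webpage to post alerts to" option, you don't even need Apache for a first test – a few lines of Python's standard library can stand in as the notification URL. This is a throwaway POC sketch, not a production receiver; the port and path are arbitrary choices on my part:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AlertReceiver(BaseHTTPRequestHandler):
    """Accept the POST the HTTP Alarm Callback sends and print it."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        try:
            payload = json.loads(body)
        except json.JSONDecodeError:
            payload = {"raw": body.decode(errors="replace")}
        print("Alert received:", payload)
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        # Keep stdout limited to alert payloads.
        pass

def serve(port=8000):
    """Run the receiver until interrupted."""
    HTTPServer(("0.0.0.0", port), AlertReceiver).serve_forever()
```

Run `serve()` and point the HTTP Alarm Callback's notification URL at `http://<your-host>:8000/`; triggered alerts will show up on the console.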


(Photo: vehicle gauges, by Craig Adderley)

I’m not going to go into a lot of depth here, but setting up basic dashboards is ridiculously easy. I mentioned earlier being able to point and click through searches to add things to the dashboard.

First, create a dashboard: Dashboards > Create Dashboard. Then add searches or other items by clicking the Add to dashboard button. Once you’ve added widgets, click the Unlock/Edit button on the dashboard to move things around. Learn more about the available widget types in the Graylog documentation.

Once you’ve set up a dashboard, it’s time to add elements. I’ve seen some amazing dashboards, and there is a learning curve – people often identify themselves as search people, dashboard people, etc. when talking about logging. The goal here is to handle the basics and put yourself in a position to develop more in-depth skills as needed.

Some things you might want to track are various Windows events, such as 4625 (failed logon) or 4776 (credential validation, a good place to catch NTLM authentication failures), the number of messages in a stream (dropping to 0, for instance, can mean you’ve run out of storage or a source has stopped sending), or the quick values for a search (such as the number of password resets done in the last day). The one thing I’ve noticed so far is that I’m not able to do some of the more in-depth analysis in Graylog that I could in Splunk. That’s probably because I haven’t gotten deep enough into the search language.


A quick note on roles: the default is Admin (because, of course it is). You can easily add other roles; the other available default role is Reader. What you’ll want to do is create specific roles for your environment, such as Analyst or Manager, and then assign permissions to view or edit the available streams and dashboards. Being able to give managers or other higher-ups a single pane of glass (yeah, I went there) can be really helpful, and by giving them view-only permissions, you make it easy for them to keep an eye on things without getting into the weeds of managing the system. “Single pane of glass” may be one of those eyeroll-eliciting terms, but there are a lot of benefits to making the work you do visible.

What’s Next?

I think one of the most beneficial next steps is building Graylog out yourself, without the training wheels of the VM. The documentation is fairly thorough, but I have run into some permissions issues getting Elasticsearch set up. Some searching should get that straightened out, though you may need to search for the issue under Graylog, Elasticsearch, or MongoDB separately. To get Graylog beyond a small POC, you’ve got to learn how to build it out yourself.

It would also be helpful to learn some of the other options – Splunk, Elastic, LogRhythm, etc. Elastic is next on my list because of the free SIEM option. I think it’s important to get familiar with a variety of options because the solution that works for you now may not be the best option in the future.

Probably the most important thing you can do is spend time with your logs. Get to know what normal looks like. The event codes and other descriptors will eventually start to make sense without having to look up the code – though making a cheat sheet would be a good idea as well.
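As a starting point for that cheat sheet, here’s a handful of common Windows Security event IDs as a small lookup table (descriptions abbreviated – confirm the details against Microsoft’s security auditing reference):

```python
# Starter cheat sheet of common Windows Security log event IDs.
WINDOWS_EVENT_IDS = {
    4624: "Successful logon",
    4625: "Failed logon",
    4720: "User account created",
    4740: "User account locked out",
    4776: "Credential validation attempted (NTLM)",
    1102: "Audit log cleared",
}

def describe(event_id):
    """Look up an event ID, falling back to a generic label."""
    return WINDOWS_EVENT_IDS.get(event_id, f"Unknown event {event_id}")
```

Spending time with your logs is what makes a table like this stick; eventually you’ll reach for the lookup less and less.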

Graylog is a great way to learn the basics of centralized logging. In a small lab environment, you should be able to keep the VM running for quite a while without running out of space. I have found it very quick and easy to set up, which is important when you are trying to develop skills and want to maximize learning time.