
DaC: The Good, The Bad, and The Ugly (HouSecCon 2022)

Here’s a rough summary of my talk at HouSecCon this year…mostly my thoughts on the advantages and disadvantages of detection as code (DaC) and why I think aspects of DaC would benefit most teams.

DaC is part of the move toward “everything as code” (EaC). EaC is pretty hot right now, though low/no-code solutions are gaining traction in some areas. There’s good reason for its popularity: managing everything with code can be very beneficial, and the versioning and consistency you gain are valuable if your org is already managing code. Here’s a general overview of EaC if you aren’t familiar with the concept. While there is a learning curve, the long-term benefits are worth looking into.

DaC in its most literal form means getting your detections into code and managing them through a continuous integration/continuous deployment (CI/CD) pipeline. I can’t say that approach is going to be a great idea for most teams – if your org has a small or one-person team, the overhead of going fully to code (infrastructure, reviews, etc.) is likely not worth it. But if you broaden the definition and look at DaC as a general approach or mindset, the benefits become easier to obtain. Anton Chuvakin advocated for taking a broader approach with several key characteristics. Roughly speaking:

  • Content versioning
  • Testing (QA)
  • Reuse/modularity
  • Cross-tool usage/detection
  • Metrics

If those things lead you to a CI/CD pipeline for detections, cool. If not, you can still benefit. Depending on your toolset, your detections may or may not end up in a programming language, but you can find a way to implement versioning and testing either way. I like taking the continuous improvement mindset and applying it to detections.
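To make the versioning-and-testing idea concrete, here’s a minimal sketch of a detection expressed as plain code. The detection logic, event schema, and field names are all hypothetical – the point is that once the logic is a function, it can live in version control and be unit tested like any other code.

```python
# A hypothetical detection as a plain Python function. Because it is just
# code, it can be version-controlled, reviewed, and unit tested.

def suspicious_powershell(event: dict) -> bool:
    """Flag PowerShell launched with an encoded command."""
    process = event.get("process_name", "").lower()
    args = event.get("command_line", "").lower()
    return process == "powershell.exe" and "-encodedcommand" in args

# With the detection in code, a regression test is one assert away:
assert suspicious_powershell(
    {"process_name": "powershell.exe", "command_line": "-EncodedCommand SQBFAFgA"}
)
assert not suspicious_powershell(
    {"process_name": "notepad.exe", "command_line": "readme.txt"}
)
```

Even if your tooling keeps detections in a vendor query language rather than Python, the same pattern applies: store the rule text in a repo and replay known-good and known-bad events against it.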

A few related concepts can help you implement DaC regardless of how intense you want to be with it. The first is Palantir’s Alerting & Detection Strategy (ADS) Framework. For each detection, the framework specifies:

  • Goal
  • Categorization (to MITRE ATT&CK or whatever you use)
  • Strategy abstract (high-level overview)
  • Technical context (details; a self-contained reference)
  • Blind spots and assumptions (things you know might be an issue)
  • False positives (things you might not be able to suppress – try anyway)
  • Validation (testing)
  • Priority (severity)
  • Response (runbook)
  • Additional resources

So that’s…a lot. It’s valuable documentation, but you run the risk of spending too much time writing and not enough time doing. I think you can streamline the process and be efficient – at minimum, having the categorization, strategy abstract, and response will take you a long way. Detections without supporting documentation increase response time and make onboarding new team members more difficult. This info could live with your runbooks or wherever you keep your documentation. I like GitHub for this because of the versioning and the ability to use Markdown to embed code and diagrams. You could go with Jupyter notebooks, but I’m still figuring out how to make that work well for more than a single person (JupyterHub looks promising, though). The ADS Framework gives you the structure to implement versioning and testing.
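If you do keep detection docs next to the detections themselves, one option is to capture the ADS fields as structured data so they version alongside the code. This is just a sketch – the field names follow the ADS Framework, but the class and example values are my own invention, not part of the framework.

```python
# A hypothetical container for ADS-style documentation, kept in the same
# repo as the detection so it is versioned with the code it describes.

from dataclasses import dataclass, field

@dataclass
class DetectionDoc:
    goal: str
    categorization: str            # e.g. a MITRE ATT&CK technique ID
    strategy_abstract: str
    technical_context: str = ""
    blind_spots: str = ""
    false_positives: str = ""
    validation: str = ""
    priority: str = "medium"
    response: str = ""             # link to or summary of the runbook
    resources: list = field(default_factory=list)

# The streamlined minimum: categorization, strategy abstract, and response.
doc = DetectionDoc(
    goal="Detect encoded PowerShell execution",
    categorization="T1059.001",
    strategy_abstract="Alert on powershell.exe launched with -EncodedCommand",
    response="runbooks/encoded-powershell.md",
)
```

A Markdown template in the repo accomplishes the same thing; structured data just makes it easier to lint for missing fields or generate coverage reports later.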

Next up are the Detection Maturity Levels from Haider Dost/Snowflake. This framework looks specifically at your overall program. It’s important to be realistic about your current capabilities and not try to go from 0 to 90 overnight (something I’m frequently guilty of). The maturity levels are relatively standard: Ad-hoc, Organized, Optimized. The categories are Processes; Data, Tools, and Technology; Capabilities; Coverage; and People. I find this framework quite helpful for identifying areas that need attention. Knowing where you are and where you want to go can help you avoid the reactive posture that often exists in infosec. This Threat Detection Maturity Framework can keep your program focused on appropriate next steps, and its categories address the cross-tool usage/detection and metrics portions of the DaC approach. Understanding your data, tools, and tech is necessary to implement cross-tool functionality and helps facilitate reuse.

The last supporting concept for now is the Detection Development Lifecycle (from Haider Dost, Tammy Truong, and Michele Freschi/Snowflake – they are doing some great work over there). They break the overall lifecycle into Detection Creation (Requirements Gathering, Design, Development, Testing and Deployment) and Detection Maintenance (Monitoring, Continuous Testing). This works nicely with the ADS Framework and establishes a process for putting things into practice over time. I like to include the metrics piece in the Continuous Testing phase – going back to see how many alerts a detection has generated over a given period, and how much value those alerts provided, is important. If nothing else, look at the alerts you have and figure out how to get as much pertinent information as possible into the alert title or the ticket generated. Even an informal “this alert is annoying me because I have to go here to see this” can be an early form of testing if you can adjust the detection to reduce that annoyance.
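The metrics piece of Continuous Testing can start very small. Here’s a rough sketch of the idea: tally alert volume and false-positive rate per detection over a window and surface the noisiest candidates for tuning. The alert records, field names, and threshold are hypothetical – substitute whatever your ticketing or SIEM export actually gives you.

```python
# A hypothetical metrics pass: count alerts per detection and flag the
# ones where most alerts were false positives as tuning candidates.

from collections import Counter

alerts = [
    {"detection": "encoded_powershell", "disposition": "true_positive"},
    {"detection": "impossible_travel", "disposition": "false_positive"},
    {"detection": "impossible_travel", "disposition": "false_positive"},
    {"detection": "impossible_travel", "disposition": "false_positive"},
]

volume = Counter(a["detection"] for a in alerts)
false_positives = Counter(
    a["detection"] for a in alerts if a["disposition"] == "false_positive"
)

# An assumed threshold: mostly-false-positive detections need attention.
tuning_candidates = [
    name for name, total in volume.items()
    if false_positives[name] / total > 0.5
]
print(tuning_candidates)  # -> ['impossible_travel']
```

Even this much tells you where to spend that weekly tuning hour, and rerunning it after a change shows whether the tuning actually helped.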

All of that is great in theory, but to be honest, going all in on DaC is not going to make sense for all, or possibly even most, security teams. If you have a one-person team, there likely aren’t enough hours in the day to implement all of the DaC concepts. But there are pieces (like continuous improvement) that can be implemented. I think most detection engineers have a long and growing list of detection tuning they would like to do. Blocking off even an hour or two a week can add up. Pick the most annoying alert you have and see if you can make it a bit better. I’m not advocating for turning things off (though some detections may not be adding value in your environment) – I’m advocating for making sure your exceptions are in place and for bringing in enough information that, at a glance, you can tell whether an alert needs further investigation or is potentially problematic but okay given the context.

I do think software companies with an existing CI/CD pipeline are best positioned to fully implement DaC, and some security products suit this approach better than others. You can see the FloQast series on implementing DaC here, here, and here (with more coming). Panther is also putting out good info, like this piece on writing detections with Python. The environment and tooling you have in place should inform how fully you implement DaC. I would like to see most teams at least adopting the mindset that DaC supports (continuous improvement, testing, versioning, etc.). But the bottom line is, you have to do what works for you.
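For a flavor of what the fully-in-code end of the spectrum looks like, Panther-style detections are plain Python: a rule() function that returns True when an event should alert, plus an optional title() that builds the alert name. The sketch below follows that general shape, but the event fields and logic are hypothetical – check Panther’s docs for the real schemas and helpers.

```python
# A sketch in the Panther detection style: rule() decides whether to alert,
# title() packs pertinent context into the alert name. Field names are
# hypothetical, not from any real log schema.

def rule(event):
    # Alert when a console login succeeds without MFA.
    return (
        event.get("eventName") == "ConsoleLogin"
        and event.get("mfaUsed") is False
    )

def title(event):
    user = event.get("userName", "unknown user")
    return f"Console login without MFA by {user}"
```

Note how title() does the “get pertinent information into the alert title” work mentioned earlier – the analyst sees who logged in without opening the raw event.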

I think I linked everything I referenced above except for Sigma and the website that will do the rule conversions. Hopefully you’ve come away with at least one piece of DaC you could implement to reduce alert fatigue and improve your detection program.



