AI D&R: AI (in Security) is Dead; Long Live AI (in Security)


Acronym Hell: _DR&AI
EDR, XDR, MDR, NDR, MLDR, ITDR, CDR, TLDR…
Okay, so that last one was a joke, but tl;dr: there are so many acronyms for different flavors of detection and response that even though D&R is important, it can be hell to talk about (let alone manually implement). Modern security programs require the capability to monitor threat signals within the enterprise and respond to/mitigate these threats more quickly than ever before. So, how do we navigate and improve D&R as we—and attackers—continue to advance?

At recent conferences and throughout the space, the question of how to incorporate AI into security processes like D&R has been on everyone’s minds. However, AI has already seen extensive use in D&R tools, and in this article, we’ll discuss some of those historical use cases, some modern approaches, and some future applications of AI that we’re anticipating.
Historical Context
AI has a long history of use in D&R, with two primary approaches: profiling for anomaly detection and improving the accuracy of classical methods like signature detection.
💡 Note: For the purposes of this article, we use the “AI” (Artificial Intelligence) acronym to mean systems that can learn from data and operate autonomously. We are not referencing “data analytics,” which is a separate field where data is automatically analyzed to produce insights.
Profiling for anomaly detection is a simple yet effective technique in which a “profile” of “normal” behaviors and patterns is established and then compared against current behavior. If current behavior deviates significantly from the profile, the system flags it and triggers an alert. Following the alert, the AI is also fed information on whether or not the alert was actually significant, which allows it to continually learn and refine what kinds of deviations are meaningful.
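To make this concrete, here’s a minimal sketch of the profiling pattern in Python, using a rolling baseline of failed-login counts and a z-score threshold (the metric, the threshold, and the feedback step are illustrative assumptions, not any particular product’s implementation):

```python
import statistics

class ProfileAnomalyDetector:
    """Learns a baseline of normal behavior and flags big deviations."""

    def __init__(self, threshold: float = 3.0, warmup: int = 10):
        self.history: list[float] = []   # observed "normal" values
        self.threshold = threshold       # z-score cutoff for alerting
        self.warmup = warmup             # observations before alerting starts

    def observe(self, value: float) -> bool:
        """Return True (alert) if the value deviates from the profile."""
        if len(self.history) < self.warmup:   # still building the profile
            self.history.append(value)
            return False
        mean = statistics.mean(self.history)
        stdev = statistics.stdev(self.history) or 1.0   # avoid divide-by-zero
        if abs(value - mean) / stdev > self.threshold:
            return True                  # significant deviation -> alert
        self.history.append(value)       # fold normal behavior back in
        return False

    def feedback(self, value: float, was_real_threat: bool) -> None:
        """Analyst feedback: a false-positive alert widens the profile."""
        if not was_real_threat:
            self.history.append(value)

# e.g., daily failed-login counts for one user
detector = ProfileAnomalyDetector()
for count in [2, 1, 3, 2, 0, 1, 2, 3, 1, 2, 45]:
    if detector.observe(count):
        print(f"ALERT: {count} failed logins deviates from this user's profile")
```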

Classical methods, like signature detection, were effective but had significant blind spots that could be improved via AI. For example, in a demonstration with McAfee in 2018, classical signature detection could be improved from 90% accuracy to ~100% accuracy by first using threat intelligence to label “good” and “bad” signatures and then feeding the “unknown” results from the classical signature detection process into an AI algorithm. This kind of “amplification effect,” in which different detection capabilities reinforce one another, marked much of AI’s historical use in D&R and set the stage for the approaches we now see in the space.
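A rough sketch of that amplification pattern might look like the following, where threat intelligence handles the knowns and a learned classifier handles only the unknowns (the feature set, model choice, and toy data are our own assumptions, not McAfee’s implementation):

```python
from dataclasses import dataclass
from sklearn.ensemble import RandomForestClassifier

KNOWN_BAD_HASHES = {"deadbeef"}   # from threat intelligence feeds
KNOWN_GOOD_HASHES = {"cafef00d"}

@dataclass
class Sample:
    sha256: str
    features: list  # e.g., [api_call_count, section_entropy]

def signature_verdict(sample: Sample) -> str:
    """Step 1: classical signature matching against threat intel."""
    if sample.sha256 in KNOWN_BAD_HASHES:
        return "bad"
    if sample.sha256 in KNOWN_GOOD_HASHES:
        return "good"
    return "unknown"                     # the classical blind spot

# Step 2: an ML model trained on behavioral features of previously
# labeled samples handles only the "unknown" remainder.
clf = RandomForestClassifier().fit(
    [[120, 7.9], [15, 4.2], [200, 7.5], [9, 3.8]],  # toy feature rows
    ["bad", "good", "bad", "good"],
)

def classify(sample: Sample) -> str:
    verdict = signature_verdict(sample)
    return verdict if verdict != "unknown" else clf.predict([sample.features])[0]

print(classify(Sample("deadbeef", [0, 0])))      # "bad" via signature alone
print(classify(Sample("12345678", [150, 7.8])))  # falls through to the model
```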
High-Level Modern Approaches
With recent advances in AI, particularly through Large Language Models (LLMs), many companies are rushing to apply the latest LLMs to aid detection and response. There are three high-level approaches we’ll discuss: 1) using LLMs to improve threat scores, 2) feeding LLMs large amounts of business data to produce a generalized security model that can be queried in natural language, and 3) using semantic search (encoding text as embeddings and comparing the embeddings’ similarity) to improve malware detection.
Due to the large datasets used to train LLMs, it has become possible to prompt them to analyze a sequence of events or set of logic and produce an “opinion” that can be useful in a threat-scoring context. At a high level, this works by providing the LLM with a sequence of events and, potentially, an analysis of their riskiness. The LLM is then prompted either to “score” the input directly or to provide an analysis that is used to produce a final score indicating the level of risk associated with the sequence of events.
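As a minimal sketch, here’s what that scoring step could look like with the OpenAI Python client (the model choice, prompt wording, and 0 to 100 scale are assumptions for illustration, not any vendor’s actual pipeline):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

events = [
    "03:12 UTC - service account logged in from a new country",
    "03:14 UTC - same account enumerated S3 buckets",
    "03:15 UTC - 40 GB copied to an unrecognized external IP",
]

prompt = (
    "You are a security analyst. Given this sequence of events, "
    "return a risk score from 0 (benign) to 100 (active attack), "
    "followed by a one-sentence justification.\n\n" + "\n".join(events)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
opinion = response.choices[0].message.content
print(opinion)  # e.g., "95 - pattern consistent with data exfiltration"
# The numeric score can then be blended with other detector outputs
# to produce the final threat score.
```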

One of the strengths of LLMs is their ability to ingest large amounts of unstructured data and use it for inference. This enables an approach to D&R that involves feeding business data, like documents and emails, to the LLM as context. An interface is then provided to security analysts, who can work with the model in natural language during their day-to-day activities, getting help with tasks such as reverse engineering code, generating incident reports, and analyzing threat signals.
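In its simplest form, this can be sketched as retrieving relevant internal documents and placing them in the model’s context ahead of the analyst’s question (the documents and the retrieval step here are hypothetical):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def answer_with_context(question: str, documents: list[str]) -> str:
    """Stuff retrieved business documents into the prompt as context."""
    context = "\n---\n".join(documents)
    messages = [
        {"role": "system",
         "content": "Answer the analyst using only the provided context.\n\n"
                    f"Context:\n{context}"},
        {"role": "user", "content": question},
    ]
    return client.chat.completions.create(
        model="gpt-4o-mini", messages=messages,  # illustrative model choice
    ).choices[0].message.content

# Hypothetical documents pulled from an internal store for this incident:
docs = [
    "Runbook: compromised service accounts must be disabled within 15 minutes.",
    "Email 2024-05-02: finance team granted temporary S3 access to vendor X.",
]
print(answer_with_context("Is the S3 access from vendor X expected?", docs))
```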
Finally, LLMs have made it possible to cheaply create vector representations of text data called “embeddings.” The power of these embeddings lies in their ability to provide a convenient method for comparing the semantics of the embedded text. This enables rapid identification of subtle similarities in text that may look different on the surface, a notable improvement over signature-based analysis, which can often be fooled by minor tweaks to the text. These methods are already being implemented IRL by D&R organizations like CrowdStrike to improve the accuracy of malware detection.
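Here’s a minimal example of that comparison: embed two superficially different snippets and measure their cosine similarity (the embedding model and the snippets are illustrative; real malware similarity search operates at much larger scale):

```python
import math
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(text: str) -> list[float]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Two snippets that differ superficially (renamed variables) but share
# the same semantics; a signature match on the raw text would miss this.
original = "for f in os.listdir(path): encrypt(f, key); os.remove(f)"
tweaked  = "for item in os.listdir(d): encrypt(item, k); os.remove(item)"

score = cosine_similarity(embed(original), embed(tweaked))
print(f"semantic similarity: {score:.2f}")  # a high score flags likely reuse
```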
IRL AI D&R
Currently, there are a few notable companies using these new approaches to augment their D&R; so far, the focus has been on the detection rather than the response side. These include Skyhawk Security, a cloud-based D&R platform; Microsoft, particularly their recently released Security Copilot; and CrowdStrike, specifically with their similarity search capabilities.
Skyhawk Security utilizes what they call “attack storylines” to create a timeline of security-sensitive events and assist D&R teams. These storylines help teams understand ongoing attacks and anticipate potential malicious activity. Additionally, they’ve incorporated ChatGPT into their threat detection process, using the LLM to provide another assessment of the storyline and aid in producing their threat score. According to Skyhawk, this has surfaced threats earlier than attack storylines without ChatGPT in 78% of cases.

The widely publicized Microsoft Security Copilot offers a prompt-centric approach to augmenting human analysis, making it easier for security analysts to identify and respond to potential threats. Security Copilot integrates with the rest of your Microsoft tooling to gather information and provide a centralized location to look up details, analyze ongoing incidents, and formulate actionable next steps.
CrowdStrike has also released new functionality that leverages semantic similarity search to improve malware detection. The approach uses embeddings to find reused malware source code that may have been tweaked by the attacker(s).
It’s important to note that while most of these D&R AI tools can be helpful to the human operators performing remediation, the AI itself will not take those actions autonomously (instead, it gives SOC analysts the information they need to go back to the AI and say: “Go do ____ to remediate”). Use cases like these are compelling, and the future of AI in detection and response is even more exciting.
Future for AI D&R
Although the future is uncertain, we predict that a few patterns will emerge. These include the flourishing of GPT plugins, increased usage of LLMs to leverage context, the development of more specialized models, and clever/creative applications of agents.

We’ve already seen the fledgling development of several GPT plugins for common tools such as Splunk. We expect this trend to continue as people discover new ways to apply GPT functionality to enhance existing tools.
In detection, we predict the proliferation of tools like Security Copilot. Businesses generate enormous quantities of data daily, yet only a small portion of that data is currently utilized for detection and response. Recent advances in AI will enable greater use of that context, both to aid detection and to contextualize alerts.
We also anticipate the development of specialized models. Initially, there will be domain-specific security models like Security Copilot. Then, we will see subdomain models that focus on specific areas within security, which will increase accuracy in use cases such as those encountered in the daily work of a SOC or IAM analyst.
Finally, one of the applications we’re most excited to see is the use of agents to implement mini-investigations. We imagine that we’ll see LLM agents capable of performing basic triage with the tools already at security analysts’ disposal but in a 24/7, automated fashion. These agents are likely to follow escalation pathways that will then loop in human analysts at a set point. We will also likely see agents that perform remediation themselves as humans increase trust in these models.
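A skeletal sketch of what such a triage agent’s loop might look like (the tools, escalation threshold, and LLM assessment below are all stubbed-out hypotheticals, not an existing product):

```python
from dataclasses import dataclass

ESCALATION_THRESHOLD = 70  # risk score at which a human is looped in

@dataclass
class Assessment:
    risk: int
    summary: str

# Stub tools standing in for real integrations (SIEM, threat intel, paging).
def pull_related_logs(alert: dict) -> str:
    return f"last 24h of logs for {alert['host']}"

def lookup_threat_intel(indicator: str) -> str:
    return f"{indicator}: seen in 3 recent phishing campaigns"

def llm_assess(alert: dict, evidence: list[str]) -> Assessment:
    # In practice this would be an LLM call that scores and summarizes
    # the evidence; stubbed here so the control flow is runnable.
    return Assessment(risk=85, summary=f"likely credential phishing on {alert['host']}")

def triage(alert: dict) -> None:
    """One automated mini-investigation: gather context, assess, escalate."""
    evidence = [pull_related_logs(alert), lookup_threat_intel(alert["source_ip"])]
    assessment = llm_assess(alert, evidence)
    if assessment.risk >= ESCALATION_THRESHOLD:
        # Escalation pathway: loop in a human at a set point rather than
        # letting the agent remediate on its own.
        print(f"ESCALATE to human analyst: {assessment.summary}")
    else:
        print(f"Auto-closed as benign: {assessment.summary}")

triage({"host": "laptop-042", "source_ip": "203.0.113.7"})
```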
(Acronym) Wrap-up!

In summary, Detection and Response (D&R) is a critical part of modern security programs, and Artificial Intelligence (AI) has a significant historical relationship with the field. New additions, powered by advances in Large Language Models (LLMs), will allow us to build an entirely new generation of tools to improve our analysis, synthesize unstructured data, and search for similarity at scale.
Of course, we would be remiss not to mention the concerns the rapid expansion of AI brings. AI opens up entirely new classes of threat vectors, and we’re already seeing the proliferation of AI-based attacks. We will also need to work on building systems that are worthy of our trust since AI can be unreliable—LLMs, in particular, have had issues with hallucination (i.e., producing incorrect information). Finally, as we ingest increasingly large amounts of data, privacy concerns for businesses and employees will also increase, and these concerns cannot be taken lightly.
Whether we like it or not, enterprises will be pressed to adopt AI in their products and in their workflows. Attackers have already adopted AI into their methodologies, and ignoring this technology in D&R just makes organizations more vulnerable to emerging threats. There are many ways a modern enterprise can integrate AI into threat detection and response—buying tools, doing research, building internally, etc.—but one thing is certain: AI will be a part of the modern threat detection and response toolkit.
To stay up to date with Crosswire as we navigate this evolving field, sign up to receive our email updates below!