AI D&R: AI (in Security) is Dead; Long Live AI (in Security)

Hannah Young
May 31, 2023 · 8 min read

Acronym Hell: _DR&AI

EDR, XDR, MDR, NDR, MLDR, ITDR, CDR, TLDR…

Okay, so that last one was a joke, but tl;dr: there are so many acronyms for different flavors of detection and response that even though D&R is important, it can be hell to talk about (let alone manually implement). Modern security programs require the capability to monitor threat signals within the enterprise and respond to/mitigate these threats more quickly than ever before. So, how do we navigate and improve D&R as we—and attackers—continue to advance?

Comic: an “Easily-Confused Acronyms Cheat Sheet” that defines LASER as “Light Amplification by the Stimulated Emission of Radiation” and riffs on MASER, SONAR, RADAR, and LIDAR as joking variations on the same expansion
Image by xkcd

At recent conferences and throughout the space, the question of how to incorporate AI into security processes like D&R going forward has been on everyone’s minds. However, historically, there has already been extensive use of AI in D&R tools, and in this article, we’ll discuss some of those historical use cases, some modern approaches, and some future applications of AI that we’re anticipating.

Historical Context

AI has a long history of use in D&R, with two primary approaches: profiling for anomaly detection and improving the accuracy of classical methods like signature detection.

💡 Note: For the purposes of this article, we use the “AI” (Artificial Intelligence) acronym to mean systems that can learn from data and operate autonomously. We are not referencing “data analytics,” which is a separate field where data is automatically analyzed to produce insights.

Profiling for anomaly detection is a simple yet effective technique in which a “profile” or set of “normal” behaviors/patterns is established and then compared to current behavior in order to detect anomalies. If there are significant deviations, the current behavior is flagged, and the AI system will trigger an alert. Following the alert, the AI is also fed information on whether or not the alert was actually significant, which allows it to continually learn and refine what kinds of deviations are meaningful.
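To make the profiling idea concrete, here is a minimal sketch (not any vendor's actual implementation) that baselines a user's daily login counts and flags large deviations using a z-score; the feature choice and the 3-sigma threshold are illustrative assumptions.

```python
import statistics

def build_profile(baseline_events):
    """Learn a 'normal' profile from historical counts (e.g. daily logins)."""
    return {
        "mean": statistics.mean(baseline_events),
        "stdev": statistics.stdev(baseline_events),
    }

def is_anomalous(profile, observation, threshold=3.0):
    """Flag observations more than `threshold` standard deviations from the mean."""
    if profile["stdev"] == 0:
        return observation != profile["mean"]
    z = abs(observation - profile["mean"]) / profile["stdev"]
    return z > threshold

# Baseline: a user's typical daily login counts
profile = build_profile([4, 5, 6, 5, 4, 6, 5])
print(is_anomalous(profile, 5))   # a typical day is not flagged
print(is_anomalous(profile, 60))  # a burst of logins is flagged
```

A real system would profile many signals at once (geography, device, access times) and, as described above, feed analyst feedback back in to tune which deviations are treated as meaningful.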

Bart Simpson writing "anomaly detection is not incident detection" on a chalkboard over and over again

Classical methods, like signature detection, were effective but had significant blind spots that AI could help close. For example, in a 2018 demonstration with McAfee, classical signature detection improved from 90% accuracy to ~100% accuracy by first using threat intelligence to label “good” and “bad” signatures and then feeding the “unknown” results from the classical signature detection process into an AI algorithm. This kind of “amplification effect,” different threat capabilities working together, marked much of AI’s historical use in D&R and set the stage for the approaches we now see in the space.
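As a toy illustration of that amplification pattern (this does not reproduce the McAfee pipeline or its accuracy figures), the sketch below resolves known signatures against threat-intelligence lists first and sends only the unknowns to an ML stage; `simple_classifier`, the hashes, and the feature names are all hypothetical stand-ins.

```python
# Known-good/known-bad signature lists, as supplied by threat intelligence
KNOWN_BAD = {"a1f3", "9c2e"}
KNOWN_GOOD = {"7b10", "0d4c"}

def simple_classifier(features):
    # Hypothetical stand-in for a trained model: score suspicious features.
    score = sum(features.get(k, 0) for k in ("packed", "obfuscated", "beacons"))
    return "bad" if score >= 2 else "good"

def triage(sample):
    sig = sample["hash"]
    if sig in KNOWN_BAD:      # classical signature match: fast, cheap
        return "bad"
    if sig in KNOWN_GOOD:
        return "good"
    # Unknown signature: amplify the classical stage with the ML stage
    return simple_classifier(sample["features"])

print(triage({"hash": "a1f3", "features": {}}))  # caught by signature
print(triage({"hash": "ffff", "features": {"packed": 1, "beacons": 1}}))  # caught by model
```

The design point is the ordering: the cheap, high-precision signature lookup handles what it can, and the model only spends effort on the residual “unknown” bucket.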

High-Level Modern Approaches

With recent advances in AI, particularly through Large Language Models (LLMs), many companies are rushing to apply the latest LLMs to aid Detection and Response. There are three high-level approaches we’ll discuss: 1) using LLMs to improve threat scores, 2) feeding LLMs large amounts of business data to produce a generalized security model that can be talked to in natural language, and 3) using semantic search (embeddings to encode semantics and then comparing the similarity of the embeddings) to improve malware detection.

Due to the large datasets used to train LLMs, it has become possible to prompt them to analyze a sequence of events or set of logic and produce an “opinion” which can be useful in a threat-scoring context. At a high level, this works by providing a sequence of events and, potentially, an analysis of their riskiness to the LLM. Then, the LLM is prompted to either “score” the input or provide an analysis which is then used to produce a final score indicating the level of risk associated with the sequence of events.
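A hedged sketch of what that scoring loop might look like: `call_llm` is a hypothetical stand-in for whatever chat-completion API a team uses (stubbed here so the example runs), and the 60/40 blend of LLM opinion and classical heuristic is an illustrative assumption, not a recommended weighting.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; stubbed so the
    # example is self-contained.
    return "7"

def score_event_sequence(events: list[str]) -> float:
    prompt = (
        "Rate the risk of this event sequence from 0 (benign) to 10 "
        "(critical). Reply with a single number.\n" + "\n".join(events)
    )
    raw = call_llm(prompt)
    llm_score = min(max(float(raw), 0.0), 10.0)  # clamp defensively
    # Blend the LLM's "opinion" with a simple classical heuristic
    heuristic = 5.0 if any("failed login" in e for e in events) else 2.0
    return 0.6 * llm_score + 0.4 * heuristic

events = ["failed login x12 from new IP", "MFA push denied", "password reset"]
print(score_event_sequence(events))
```

In production the raw LLM reply would need far more defensive parsing than a single `float()` call, and the final score would typically feed an alerting threshold rather than being surfaced directly.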

Comic: two people discuss how chatterbots are “just clumsily sampling a huge database of lines people have typed,” marveling that computers have mastered chess and desert driving but can’t hold five minutes of normal conversation
Image by xkcd

One of the strengths of LLMs is their ability to ingest large amounts of unstructured data and use it for inference. This enables an approach to D&R that involves feeding business data, like documents and emails, to the LLM as context. An interface is then provided to security analysts, who can work with the model using natural language during their day-to-day activities, providing feedback on tasks such as reverse engineering code, generating reports for incidents, and analyzing threat signals.

Finally, LLMs have made it cheap to create vector representations of text called “embeddings.” The power of these embeddings lies in providing a convenient way to compare the semantics of the embedded text. This enables rapid identification of subtle similarities in text that may look different on the surface, a notable improvement over signature-based analysis, which can often be fooled by minor tweaks to the text. D&R vendors like CrowdStrike have already started implementing these methods to improve the accuracy of malware detection.
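A minimal sketch of the comparison at the heart of semantic search: the three-dimensional vectors below are hand-written toys (real embedding models produce hundreds or thousands of dimensions), but the cosine-similarity mechanics are the same.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: ~1 means similar semantics."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings": an original malware snippet, a lightly tweaked variant,
# and an unrelated benign script.
original = [0.9, 0.1, 0.3]
tweaked = [0.88, 0.15, 0.28]  # renamed variables, same behavior
benign = [0.1, 0.9, 0.2]

print(cosine_similarity(original, tweaked))  # high: same semantics survive tweaks
print(cosine_similarity(original, benign))   # low: genuinely different code
```

This is why embeddings resist the minor-tweak evasion that defeats signatures: renaming variables barely moves the vector, so the tweaked variant still scores as near-identical to the original.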

IRL AI D&R

Currently, a few notable companies use these new approaches to augment their detection and response. Notably, these approaches focus on the detection side rather than the response side. They include Skyhawk Security, a cloud-based D&R platform; Microsoft, particularly their recently released Security Copilot; and CrowdStrike, specifically with their similarity search capabilities.

Skyhawk Security utilizes what they call “attack storylines” to create a timeline of security-sensitive events and assist D&R teams. These storylines aid in the understanding of ongoing attacks and anticipate potential malicious activity. Additionally, they’ve incorporated ChatGPT into their threat detection process, using the LLM to provide an additional assessment of the storyline that feeds into their threat score. According to Skyhawk, this has detected threats earlier than attack storylines without ChatGPT in 78% of cases.

Microsoft Security Hub booth from RSA Conference 2022.
Microsoft Security Copilot at RSAC

The widely publicized Microsoft Security Copilot offers a prompt-centric approach to augmenting human analysis, making it easier for security analysts to identify and respond to potential threats. Security Copilot integrates with the rest of your Microsoft tooling to gather information and provide a centralized location to look up details, analyze ongoing incidents, and formulate actionable next steps.

CrowdStrike has also released new functionality that uses semantic similarity search to improve malware detection, leveraging LLM-era embeddings to find reused malware source code that attackers may have tweaked.

It’s important to note that while most of these D&R AI tools can be helpful to human operators performing remediation, the AI itself will not autonomously take those actions; instead, it gives SOC analysts the information they need to come back and direct it: “Go do ____ to remediate.” Use cases like these are compelling, and the future of AI in detection and response is even more exciting.

Future for AI D&R

Although the future is uncertain, we predict that a few patterns will emerge. These include the flourishing of GPT plugins, increased usage of LLMs to leverage context, the development of more specialized models, and clever/creative applications of agents.

Comic: an AI-generated four-panel meta-comic in which one person announces “Welcome to the future where AI makes everything. Even this comic,” another asks “Is that why it’s not funny?”, and the final panel shows the generation prompt: “meta comic about ai apocalypse, 4-panel, no pants, relatable”

We’ve already seen the fledgling development of several GPT plugins to common tools such as Splunk. We expect that this trend will continue as people discover new ways to apply GPT functionality to enhance our existing tools.

In detection, we predict the proliferation of tools like Security Copilot. Businesses generate enormous quantities of data daily, yet only a small portion of that data is currently utilized for detection and response. These recent advances in AI will enable increased use of context to aid in detection and the contextualizing of alerts.

We also anticipate the development of specialized models. Initially, there will be domain-specific security models like Security Copilot. Then, we will see subdomain models that focus on specific areas within security, which will increase accuracy in use cases such as those encountered in the daily work of a SOC or IAM analyst.

Finally, one of the applications we’re most excited to see is the use of agents to implement mini-investigations. We imagine that we’ll see LLM agents capable of performing basic triage with the tools already at security analysts’ disposal but in a 24/7, automated fashion. These agents are likely to follow escalation pathways that will then loop in human analysts at a set point. We will also likely see agents that perform remediation themselves as humans increase trust in these models.

(Acronym) Wrap-up!


In summary, Detection and Response (D&R) is a critical part of modern security programs, and Artificial Intelligence (AI) has a significant historical relationship with the field. New additions, powered by advances in Large Language Models (LLMs), will allow us to build an entirely new generation of tools to improve our analysis, synthesize unstructured data, and search for similarity at scale.

Of course, we would be remiss not to mention the concerns the rapid expansion of AI brings. AI opens up entirely new classes of threat vectors, and we’re already seeing the proliferation of AI-based attacks. We will also need to work on building systems that are worthy of our trust since AI can be unreliable—LLMs, in particular, have had issues with hallucination (i.e. producing incorrect information). Finally, as we ingest increasingly large amounts of data, privacy concerns for businesses and employees will also increase, and these concerns cannot be taken lightly.

Whether we like it or not, enterprises will be pressed to adopt AI in their products and in their workflows. Attackers have already adopted AI into their methodologies, and ignoring this technology in D&R just makes organizations more vulnerable to emerging threats. There are many ways a modern enterprise can integrate AI into threat detection and response—buying tools, doing research, building internally, etc.—but one thing is certain: AI will be a part of the modern threat detection and response toolkit.

To stay up to date with Crosswire as we navigate this evolving field, sign up to receive our email updates below!
