Context Is the Real Architecture Behind the AI-Powered SOC

Most SOC teams don’t have an AI problem. They have a context problem.
Over the last year, nearly every security conference has showcased some version of the “AI SOC.” Autonomous investigations. Self-driving analysts. LLMs replacing tier‑1 queues. Many teams experimented. Some saw incremental efficiency gains. Most eventually hit the same ceiling and walked away with the same conclusion: the technology isn’t ready.
From an engineering standpoint, that diagnosis is wrong.
Modern LLMs are already capable of complex reasoning across multiple variables. They can correlate signals, evaluate risk, and explain conclusions in plain language. What they cannot do is reason over context that is incomplete, unstructured, or contradictory. The bottleneck is not intelligence. It is architecture.
At Daylight, we’ve found that the word “context” is often used loosely, as if it were a single input you either have or don’t. In reality, context is made of three distinct layers. Each one behaves differently. Each one breaks differently as organizations scale. And each one requires a different process if you expect AI to use it reliably during investigations.
Context Turns Alerts Into Explanations
A raw alert is not an investigation. It is a data point.
Take a simple example: “Failed login attempt on server X.” On its own, that statement is almost meaningless. It could be a mistyped password. It could be brute force. It could be a monitoring glitch.
Now add context. Fifty failed login attempts against a production database within sixty seconds. Followed by a successful login from a known malicious IP. Initiated by an employee account that normally authenticates from a different geography.
Same event stream. Completely different meaning.
Context answers the real investigative questions: who did this, from where, against what asset, under which policy, and in what historical pattern. Without that surrounding information, every alert feels ambiguous. With it, many investigations become deterministic.
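To make the difference concrete, here is a minimal sketch of the same event before and after context is attached. The field names and values are illustrative, not a schema from any particular product:

```python
# Hypothetical raw alert: a single data point, almost meaningless on its own.
raw_alert = {
    "event": "failed_login",
    "server": "db-prod-07",
    "timestamp": "2024-05-14T03:12:09Z",
}

# The same event once the surrounding context is attached.
enriched_alert = {
    **raw_alert,
    "failed_attempts_last_60s": 50,             # volume and velocity
    "followed_by_successful_login": True,        # outcome of the burst
    "source_ip_reputation": "known_malicious",   # threat intel enrichment
    "account": "j.smith",
    "usual_login_geo": "DE",                     # behavioral baseline
    "observed_login_geo": "BR",                  # deviation from baseline
    "asset_tier": "production_database",         # CMDB asset tag
}
```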
But not all context is created equal.
The Three Types of Context Every SOC Must Manage
1. Telemetry Context (Hard Context)
Telemetry context is the machine-generated evidence collected from your environment. Asset tags from the CMDB. User roles and behavioral baselines from identity systems. EDR process trees. Firewall logs. Cloud metadata. Threat intelligence enrichment.
It is structured. It is factual. It is abundant.
And when handled incorrectly, it becomes noise.
In large environments, a single alert can easily accumulate over one hundred fields once SIEM correlations, enrichment services, and identity metadata are layered in. The instinct is understandable: give the model everything and let it figure it out. More data should mean better decisions.
In practice, the opposite happens.
When an LLM is flooded with irrelevant fields, hallucination risk increases. The model may anchor on a secondary signal that looks important but is not. Different runs emphasize different fields. Verdicts become inconsistent. Token consumption increases without improving accuracy. Instead of clarity, you get variability.
Telemetry context rarely suffers from lack of collection. Most enterprises already ingest more data than they can meaningfully analyze. The real challenge is curation.
For each alert type, someone must define which fields actually matter. Which combinations represent risk. Which data sources are irrelevant for this scenario. That definition is not a prompt. It is an architectural decision based on investigative experience.
Without disciplined curation, telemetry context degrades as scale increases. With it, AI reasoning becomes significantly more stable and predictable.
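One way to express that architectural decision is a per-alert-type allowlist applied before anything reaches the model. The alert types and field names below are hypothetical; the point is that curation lives in the pipeline, not in the prompt:

```python
# Hypothetical per-alert-type allowlists: which fields matter for which scenario.
RELEVANT_FIELDS = {
    "failed_login_burst": [
        "account", "source_ip", "source_ip_reputation",
        "failed_attempts_last_60s", "followed_by_successful_login",
        "asset_tier", "usual_login_geo", "observed_login_geo",
    ],
    "malware_detection": [
        "host", "process_tree", "file_hash", "edr_verdict", "asset_tier",
    ],
}

def curate(alert_type: str, enriched_alert: dict) -> dict:
    """Keep only the fields defined as relevant for this alert type."""
    allowed = RELEVANT_FIELDS.get(alert_type, [])
    return {k: v for k, v in enriched_alert.items() if k in allowed}
```

Broad collection stays intact; only the curated slice ever reaches the model for a given alert type.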
2. Organizational Context (Soft Context)
Organizational context is different. It is not generated by machines. It is generated by people.
This is the institutional knowledge that shapes how alerts should be interpreted inside your company. Policies that are enforced socially rather than technically. Exceptions that were approved in hallway conversations. Awareness that a specific executive is under active phishing targeting. Knowledge that only a certain team should ever access a specific production bucket.
This context often lives in Slack threads, internal wiki pages, onboarding documents, and the memories of senior analysts. It evolves over time. It contains exceptions. It may even contradict itself across sources.
There is a persistent myth that you can simply point an LLM at your entire documentation corpus and expect it to “learn” your organization automatically. In reality, models cannot reliably distinguish official policy from personal opinion in a thread. They struggle to differentiate current guidance from outdated documentation. They cannot infer what should happen versus what actually happens in practice.
Organizational context must be deliberately extracted and structured. Policies relevant to investigations need to be clearly defined. Exceptions must be documented explicitly. Contradictions between theory and practice must be resolved.
AI is exceptionally good at applying clear rules consistently. It is not good at inventing policy from ambiguous human conversations. If you want AI to enforce your security posture at scale, humans must first make that posture explicit.
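A sketch of what making the posture explicit can look like: each policy becomes a structured record with its exceptions spelled out, rather than a paragraph buried in a wiki. The schema and example values are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class SecurityPolicy:
    """One explicitly documented rule the AI is allowed to apply."""
    policy_id: str
    statement: str                                        # what should happen
    exceptions: list[str] = field(default_factory=list)   # approved deviations
    owner: str = "security-leadership"
    last_reviewed: str = ""                               # forces periodic revalidation

bucket_policy = SecurityPolicy(
    policy_id="POL-017",
    statement="Only the data-platform team may access the prod-billing bucket.",
    exceptions=["Break-glass access by on-call SRE, logged and time-boxed."],
    last_reviewed="2024-04-01",
)
```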
3. Historical Context (Derived Context)
Historical context is the accumulated memory of past investigations. It is not telemetry, and it is not policy. It is what your team learned the last time a similar situation occurred, whether that situation turned out to be malicious, benign, or a tooling issue.
This context captures lessons such as: the last time these three alerts appeared together, we treated them separately and later realized they were part of a coordinated attack. The last time we followed the standard containment playbook, it failed, and only a manual token revocation stopped the activity. The last time this alert triggered, it was caused by a broken collector and resetting the integration resolved it immediately.
Historical context is about procedural memory. It records what was misunderstood, what signals ended up mattering, which response paths failed, and which actions actually worked. It encodes the outcome of prior investigations so the team does not repeat the same analysis or the same mistakes.
Unfortunately, most SOC workflows are not designed to preserve that learning.
Tickets close with “benign activity” or “no action required.” The reasoning behind the decision is rarely captured in structured form. The analyst may have called the user to confirm travel. They may have recognized a subtle pattern based on prior incidents. None of that context survives beyond the individual.
When analysts rotate shifts or leave the organization, the knowledge leaves with them.
LLMs cannot learn from reasoning that was never recorded. Historical context must be captured intentionally. Analysts need mechanisms to document why a verdict was reached, what signals were decisive, and what would change the outcome in the future.
When structured properly, historical context compounds. Each investigation strengthens the system. Without structure, the SOC operates in a constant state of amnesia.
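Capturing that reasoning does not require heavy tooling. A minimal sketch, assuming a simple keyed store that future investigations can query by alert signature:

```python
from dataclasses import dataclass

@dataclass
class InvestigationRecord:
    """Structured memory of a closed investigation."""
    alert_signature: str            # e.g. a hash of alert type plus key entities
    verdict: str                    # malicious / benign / tooling_issue
    decisive_signals: list[str]     # what actually mattered
    reasoning: str                  # why the verdict was reached
    would_change_verdict: str       # what evidence would flip the conclusion

history: dict[str, list[InvestigationRecord]] = {}

def record_outcome(rec: InvestigationRecord) -> None:
    """Persist the reasoning behind a verdict, not just the label."""
    history.setdefault(rec.alert_signature, []).append(rec)

def similar_cases(alert_signature: str) -> list[InvestigationRecord]:
    """Prior investigations a new alert with the same signature can reference."""
    return history.get(alert_signature, [])
```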
Why Context Breaks as You Scale
As companies grow, alert volume increases faster than security headcount. Two hundred alerts per day become eight hundred. Four analysts become twelve. Complexity multiplies.
Telemetry context expands in volume and diversity. Organizational context fragments across new teams and tools. Historical context degrades because analysts under pressure prioritize closure over documentation.
At that point, leadership inevitably asks whether AI can absorb the load.
AI can reason. What it cannot do is compensate for incomplete inputs.
Many investigations that appear inherently ambiguous are simply missing one of the three context layers. When the relevant telemetry is curated, the policy is clear, and prior patterns are accessible, the number of true gray-area decisions shrinks dramatically.
Complete context does not eliminate uncertainty entirely, but it converts much of what feels like judgment into structured reasoning.
LLMs Change the Investigation Model - If Context Is Prepared
Traditional SOAR automation relied on rigid, pre-defined logic. If condition A and condition B occur within ten minutes, escalate. That model works well for deterministic enrichment steps but fails when attackers change timing, sequence, or tooling.
LLMs introduce something fundamentally different: dynamic reasoning across variables. They can evaluate combinations you did not explicitly encode. They can generalize patterns even when minor details shift.
However, LLMs reason only over what you provide and what they were trained on. If telemetry is noisy, policy is undocumented, and historical reasoning is absent, the model will produce unstable or low-confidence conclusions. If context is curated, structured, and relevant, the same model produces consistent, explainable verdicts.
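The contrast can be sketched in a few lines. The rigid rule encodes one fixed combination; the LLM path assembles the curated context layers and lets the model reason across them. The prompt shape is a placeholder, not a specific vendor API:

```python
# Rigid SOAR-style logic: only the exact encoded combination triggers.
def soar_rule(alert_a: bool, alert_b: bool, minutes_apart: float) -> str:
    return "escalate" if alert_a and alert_b and minutes_apart <= 10 else "ignore"

# LLM-style investigation: dynamic reasoning over curated, structured context.
def build_investigation_prompt(telemetry: dict, policies: list, prior_cases: list) -> str:
    return (
        "You are investigating a security alert.\n"
        f"Curated telemetry: {telemetry}\n"
        f"Applicable policies: {policies}\n"
        f"Similar past investigations: {prior_cases}\n"
        "Return a verdict, the decisive signals, and your reasoning."
    )
```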
The difference is not the intelligence layer. It is the preparation layer.
The Daylight POV: Separate the Context, Separate the Process
Treating context as a single input leads to fragile automation. Each type requires a distinct process and ownership model.
Telemetry context benefits from broad source coverage combined with surgical extraction. Access many systems, but retrieve only the fields relevant to the specific alert. More sources, less data per investigation.
Organizational context requires human ownership. Security leaders must define the policies and exceptions that matter for investigations. Once structured, AI can apply them consistently at scale.
Historical context depends on disciplined documentation. Investigation reasoning should be captured as part of the workflow and indexed in a way that future alerts can reference similar cases automatically.
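Put together, the separation can be as simple as keeping the three layers as distinct inputs with distinct owners, merged only at investigation time. A minimal sketch, with the ownership split expressed in comments:

```python
def assemble_context(curated_telemetry: dict,
                     applicable_policies: list[dict],
                     similar_cases: list[dict]) -> dict:
    """Merge the three separately owned context layers into one investigation packet."""
    return {
        # Owned by detection engineering: many sources, surgical extraction per alert type.
        "telemetry": curated_telemetry,
        # Owned by security leadership: explicit, structured policies and exceptions.
        "organizational": applicable_policies,
        # Owned by the analyst workflow: indexed outcomes of prior investigations.
        "historical": similar_cases,
    }
```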
When these three layers are separated and engineered intentionally, AI stops behaving like a probabilistic assistant and starts operating like a deterministic investigator within defined boundaries.
From Ticket Handling to Context Engineering
If the goal is high levels of autonomous resolution, the focus cannot be limited to model selection or prompt design.
The focus must shift toward context engineering.
Senior analysts should not spend the majority of their time validating repetitive alerts. Their highest leverage contribution is defining which telemetry signals matter, clarifying organizational policy, and documenting investigative reasoning in reusable form.
In other words, building and maintaining the context architecture that enables AI to reason effectively.
AI is ready to reason. The question is whether your SOC has built the context foundation it needs.
Until that foundation exists, automation will always plateau. Once it does, ambiguity decreases, consistency increases, and autonomous investigations move from marketing promise to operational reality.



