Sit with a Tier 1 analyst for an hour and you'll notice something the dashboards don't show. The first alert they touch gets the full ritual: pivot to the EDR, pull the parent process, check the user against the directory, glance at the network telemetry. By the third alert, the ritual is shorter. By the tenth, it's a glance at the source IP and a verdict. The queue is winning.
This isn't about lazy analysts. It's about decay. Every alert in a SOC queue has a context budget: the time available to gather the surrounding evidence before a decision has to be made. That budget is set by the rate of incoming alerts, not by the analyst's discipline. When the rate goes up, the budget per alert goes down. There's no other lever to pull.
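To make the arithmetic concrete, here's a back-of-envelope sketch in Python. The shift length, overhead fraction, and alert rates are illustrative assumptions, not measurements from any deployment.

```python
# Back-of-envelope context budget. All numbers are illustrative assumptions.
SHIFT_SECONDS = 8 * 60 * 60   # one eight-hour shift
OVERHEAD_FRACTION = 0.25      # meetings, handoffs, breaks (assumed)

def context_budget(alerts_per_shift: int) -> float:
    """Seconds available per alert, given the incoming rate."""
    working_seconds = SHIFT_SECONDS * (1 - OVERHEAD_FRACTION)
    return working_seconds / alerts_per_shift

# 120 alerts a shift leaves 180 seconds each; double the rate and the
# budget halves. The rate is the only variable that matters.
for rate in (120, 240, 480):
    print(f"{rate} alerts/shift -> {context_budget(rate):.0f}s per alert")
```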
The 90-second number
We've watched this in eleven SOCs over the last year. The pattern is consistent enough to be boring: past about 90 seconds per alert, the marginal value of additional investigation drops below zero. The analyst is still accumulating fatigue and burning cognitive load, but verdict accuracy has stopped improving and starts to drift: they over-fit to the last alert they saw.
The 90 seconds isn't magic. It's roughly the point at which short-term recall of the previous alert's context begins to interfere with the current one. Below it, you're working. Above it, you're being interrupted by your own recent memory.
Every analyst we've watched gets faster as the queue gets longer. They don't get better.
Why this matters for automation
The standard pitch for triage automation is "do the boring stuff so analysts can focus on the interesting stuff." That framing misses the point. The boring stuff is what preserves the context budget for the interesting stuff. Automating the first 90 seconds of every alert isn't about saving labor. It's about making sure the analyst walks into the queue with a full tank instead of the residue of the last forty triages.
A few things follow from this. The automated layer should resolve the alerts that don't need a human, instead of pre-chewing the ones that do; pre-chewed evidence still consumes context when the human sits down to read it. When a human does review an autonomous decision, they should see the conclusion first, the reasoning second, and the raw evidence third, in that order. And the right metric for the system isn't alerts-per-hour. It's how quickly the analyst can make a meaningful pivot on the cases they actually open.
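To make the middle point concrete, here's a minimal sketch of conclusion-first review in Python. The names and structure are ours for illustration, not any product's API; what matters is the order in which the reviewer encounters the fields.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewPacket:
    """What a human sees when auditing an autonomous decision,
    in the order they should see it."""
    conclusion: str                # e.g. "benign: signed updater, known hash"
    reasoning: list[str]           # the steps that got there, most load-bearing first
    raw_evidence: dict = field(default_factory=dict)  # full telemetry, read last if at all

def render(packet: ReviewPacket) -> str:
    # Conclusion first: the reviewer decides whether to read further at all.
    lines = [f"VERDICT: {packet.conclusion}", ""]
    lines += [f"  - {step}" for step in packet.reasoning]
    lines.append(f"(raw evidence: {len(packet.raw_evidence)} items, expand on demand)")
    return "\n".join(lines)
```

An analyst who trusts the conclusion never pays the context cost of the raw evidence; one who doesn't can drill down, but on their own terms.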
What we built around it
TandemTrace's Tier 1 layer is built on this premise. Every alert hits the autonomous triage first. The ones with a clean verdict (benign, known-false-positive, policy-allowed) close themselves with full evidence on record. The ones that need a human arrive in the analyst's queue with a verdict from those first 90 seconds already attached, and the analyst spends their context budget where it actually matters.
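For the curious, here's roughly how that routing decomposes, as a hypothetical Python sketch of the flow described above. It is not TandemTrace's code, and every name in it is invented.

```python
from dataclasses import dataclass

CLEAN_VERDICTS = {"benign", "known_false_positive", "policy_allowed"}

@dataclass
class Verdict:
    label: str            # one of the clean labels above, or anything else
    reasoning: list[str]  # the steps behind the verdict
    evidence: dict        # everything gathered along the way

def triage(alert_id: str, verdict: Verdict, analyst_queue: list) -> str:
    """Route one alert: clean verdicts close themselves with evidence on
    record; everything else reaches the queue with the verdict attached."""
    if verdict.label in CLEAN_VERDICTS:
        archive(alert_id, verdict)  # closed, but fully auditable
        return "closed"
    analyst_queue.append((alert_id, verdict))  # the human starts from the verdict, not from zero
    return "escalated"

def archive(alert_id: str, verdict: Verdict) -> None:
    # Stand-in for writing the decision and its evidence to the case record.
    print(f"{alert_id}: auto-closed as {verdict.label} "
          f"({len(verdict.evidence)} evidence items on record)")
```

The invariant worth noticing: nothing closes without its evidence on record, and nothing reaches a human without a verdict already attached.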
Measured across deployments, the result isn't a faster queue. It's a shorter queue, and the alerts on it have more signal per item.
If you're a SOC lead and any of this rings true, we'd love to compare notes. Reply to this post by email, or say hello here.