Security Think Tank: Stop buying AI, start buying outcomes
‘AI-powered’ security tools are already everywhere, and 2026 will only make that more obvious. The question I hear from CISOs isn’t “Should we be using artificial intelligence?” so much as “How do I tell the genuine accelerator from the expensive toy?”
The hard truth is that AI has become a serious force multiplier for both sides. In my own work, I’ve been writing about the shift from human-operated intrusion sets to AI-orchestrated campaigns where LLMs are effectively the primary operator and the human becomes merely a prompter and supervisor. Ignoring AI is, of course, a choice, but it is not a neutral one. It means falling behind in a race where the other team is not planning on giving you a break.
At the same time, the market response has been predictable, following the same path as cloud, XDR [Extended Detection and Response], or any number of technologies before it. Everything has an AI badge on it somewhere; it even features right up there in the names of many of the companies. Features that used to be called analytics or correlation have been repackaged as if they were brand new, but if you buy on buzzwords you will end up paying for things you already had. The reality is we never see the boasts ‘computer-controlled’, ‘electronic’ or ‘digital’ anymore, and I fully expect the marketing buzz around ‘AI-powered’ to go the same way. Over the next 12 to 24 months, AI will become a baseline expectation, the silent engine that quietly rewires how technology operates without needing to be explicitly named. Any organisation that goes all-in on marketing itself as ‘AI’ is massively missing the point. The value is not derived from the technology itself, but from the utility it provides.
So what should buyers actually look for?
For me, the first filter is simple. Start with the work, not the model. Ask your own teams where they are drowning. In most organisations, that will be some combination of alert triage, investigation donkey work, vulnerability noise, and reporting. The AI that is worth paying for is the AI that gives you time and clarity back in those workflows.
There are three main categories where I see real value today.
The first is summarisation and explanation. Generative models are very good at turning piles of technical context into something humans can consume. In my own world, we deliberately didn’t start with another chatbot bolted onto the side of the product. We started with the user. That means using generative models behind the scenes to do things like summarise a complex asset risk picture, compress a noisy incident into something an analyst can grasp quickly, or generate executive-ready reporting that a non-specialist can actually understand. No one became an analyst to churn out PowerPoint decks for the C-suite. If AI can take that burden away, that is a genuine win.
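The pre-processing half of that pattern can be sketched in a few lines. This is a minimal illustration, not any vendor’s implementation: the event schema (`host`, `rule`, `severity`) is hypothetical, and the generative model itself is left out — the point is that duplicate alerts are collapsed to counts before the model ever sees them, so it summarises signal rather than volume.

```python
from collections import Counter

def build_incident_prompt(events):
    """Compress a noisy incident into a prompt for a generative model.

    `events` is a list of dicts with 'host', 'rule' and 'severity' keys
    (an assumed schema, for illustration only). Repeated alerts are
    collapsed into counts so the model sees signal, not raw volume.
    """
    counts = Counter((e["host"], e["rule"], e["severity"]) for e in events)
    lines = [
        f"{n}x {rule} on {host} (severity {sev})"
        for (host, rule, sev), n in counts.most_common()
    ]
    return (
        "Summarise this incident for a SOC analyst in three sentences, "
        "leading with the most likely root cause:\n" + "\n".join(lines)
    )

events = [
    {"host": "web-01", "rule": "Suspicious PowerShell", "severity": "high"},
    {"host": "web-01", "rule": "Suspicious PowerShell", "severity": "high"},
    {"host": "db-02", "rule": "Failed logins spike", "severity": "medium"},
]
prompt = build_incident_prompt(events)
print(prompt.splitlines()[1])  # → 2x Suspicious PowerShell on web-01 (severity high)
```

The compiled prompt would then be handed to whatever generative model the product uses; the compression step is where the analyst’s time saving actually comes from.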
The second is navigation. Modern environments generate an absurd amount of telemetry. You have logs, alerts, indicators and assets with thousands of attributes across millions of devices. Historically, actually using that data has required learning a query language or relying on a specialist who has. Large language models are well suited to sit between the user and that data as a translation layer. You should be able to say “show me all Windows Server 2022 systems that aren’t running EDR” or “show me devices that became high risk this week”, or “only show me devices in the US with a risk score above 8.5 in the past month”, and get a sensible answer without learning yet another syntax. You should be able to add context in plain words, “include the switch IP and switch port for each of these devices”, instead of rewriting the whole query.
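The shape of that translation layer is worth making concrete. In this sketch the model’s only job is to turn free text into a structured filter; everything after that is ordinary, auditable code. The asset fields and the filter format are assumptions for illustration — the LLM call itself is replaced by the JSON spec it might plausibly emit.

```python
# A hypothetical asset inventory; field names are assumptions for illustration.
assets = [
    {"os": "Windows Server 2022", "edr": False, "risk": 9.1, "country": "US"},
    {"os": "Windows Server 2022", "edr": True,  "risk": 4.0, "country": "US"},
    {"os": "Ubuntu 22.04",        "edr": False, "risk": 8.7, "country": "DE"},
]

def apply_filter(records, spec):
    """Apply a structured filter of the kind an LLM could be asked to emit
    for 'show me all Windows Server 2022 systems that aren't running EDR'.
    Scalar values mean equality; {'gte': x} means a minimum threshold."""
    def match(r):
        return all(
            r[k] >= v["gte"] if isinstance(v, dict) else r[k] == v
            for k, v in spec.items()
        )
    return [r for r in records if match(r)]

# What the model might return for the natural-language question above:
spec = {"os": "Windows Server 2022", "edr": False}
print(len(apply_filter(assets, spec)))  # → 1

# "devices with a risk score above 8.5" becomes a threshold filter:
print(len(apply_filter(assets, {"risk": {"gte": 8.5}})))  # → 2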
This is a way of unlocking the value of the asset intelligence you already collect, especially if your visibility truly spans all device types from IT to OT, IoT and medical. A natural way to interact with your data is one of the most practical uses of AI in security right now.
The third is prioritisation. This is another area where what our customers ask for and what AI can do line up almost perfectly. When an analyst sits down in front of a console full of alerts, the real question is “Where do I start?” When a vulnerability team is staring at a list of critical CVEs, the real question is “Which ones matter here?” Language models and other AI techniques can look across historical analyst behaviour, peer patterns and the live state of your environment to say “Here are the alerts you should look at first” or “Here are the vulnerabilities most likely to hurt you given your infrastructure”. That is what we are hearing directly from security practitioners. They talk about saving half an hour of firefighting just by getting a sensible starting point instead of a flat list.
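The blend of signals described here can be illustrated with a toy scoring function. The three inputs mirror the ones named in the text — historical analyst behaviour, live asset importance, and peer patterns — but the weights and field names are invented for the sketch, not any product’s real formula.

```python
def priority_score(alert):
    """Blend three illustrative signals into one score. The 0.5/0.3/0.2
    weights are assumptions for this sketch, not a vendor formula:
      - escalation_rate: how often analysts escalated this rule historically
      - asset_criticality: live importance of the affected asset (0-1)
      - peer_rate: how often similar organisations saw a true positive
    """
    return (0.5 * alert["escalation_rate"]
            + 0.3 * alert["asset_criticality"]
            + 0.2 * alert["peer_rate"])

alerts = [
    {"id": "A1", "escalation_rate": 0.1, "asset_criticality": 0.9, "peer_rate": 0.2},
    {"id": "A2", "escalation_rate": 0.8, "asset_criticality": 0.7, "peer_rate": 0.6},
    {"id": "A3", "escalation_rate": 0.3, "asset_criticality": 0.2, "peer_rate": 0.9},
]
ranked = sorted(alerts, key=priority_score, reverse=True)
print([a["id"] for a in ranked])  # → ['A2', 'A3', 'A1']
```

The output is still just an ordering, a “look here first”, which is exactly the half-hour-of-firefighting saving practitioners describe: the analyst gets a starting point, not a verdict.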
Done right, this kind of AI doesn’t take control away from the human. It suggests. It highlights. It nudges you towards the needles in the haystacks. The user still decides whether to shut down a plant or isolate a business-critical system. That balance matters, especially in regulated and safety-critical industries where a bad decision has real-world consequences.
On the flip side, there are areas where I would advise caution.
The first is fully autonomous response. There is currently a lot of justified interest in agentic AI, where systems don’t just answer questions but take actions towards a goal. Used well, these agents can take drudgery away from humans. Used badly, they become an overconfident intern with root access. I’m not saying never let AI take actions. I’m saying you need clear guardrails, least privilege, and human accountability for those actions. Treat an AI agent like a new team member who never sleeps and never gets bored but also never really understands your business. You don’t give that person the keys to production on day one.
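The guardrail principle is simple enough to show in code. This is a minimal sketch of an authorisation gate between the agent and the environment, under assumed action names: the agent proposes, policy decides, anything unknown is denied, and high-impact actions require explicit human sign-off.

```python
# A minimal guardrail sketch: the agent proposes, policy decides.
# Action names and the approval rule are assumptions for illustration.
SAFE_ACTIONS = {"quarantine_file", "block_ip"}          # agent may act alone
APPROVAL_ACTIONS = {"isolate_host", "shutdown_plant"}   # human must sign off

def authorise(action, human_approved=False):
    """Least-privilege gate for agent-proposed actions."""
    if action in SAFE_ACTIONS:
        return True
    if action in APPROVAL_ACTIONS:
        return human_approved  # default answer is "no"
    return False  # anything not explicitly allowed is denied

print(authorise("block_ip"))                            # → True
print(authorise("isolate_host"))                        # → False
print(authorise("isolate_host", human_approved=True))   # → True
```

The deny-by-default branch is the part that matters most: it is the coded equivalent of not giving the new team member the keys to production on day one.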
The second red flag is magical thinking. If a pitch sounds like “buy our AI and you can replace your SOC”, walk away. Any realistic deployment of AI in security over the next few years is going to look like augmentation. Better triage, better correlation, better reporting, better use of scarce expertise. Not a sentient box that does security for you while you focus on the business.
The third is opacity around data and risk. When you are evaluating AI-backed tools, spend at least as much time on the boring questions as on the demo. Where does the data live? What is used for training? How is access controlled? How do you defend the AI component itself against prompt injection, model abuse or poisoning? There is no point buying AI to defend your environment if you have no idea how that AI is itself being defended and governed.
So are AI-backed tools worth it? The answer is yes, for the right problems, with the right questions.
My advice to buyers in 2026 would be to keep it grounded. Start from one or two painful workflows where your team is burning hours. Look for vendors who can show, with your data, that they can give you time back in those use cases. Ask for clear explanations of how the AI is used, what decisions it influences and how you stay in control. Insist on visibility that spans your whole environment, not just a thin slice of IT, because AI is only as good as the data it works from.
We built our security programmes for a world where the attacker was always human. That has already changed. Using AI on defence is no longer optional, but you do get to choose whether you buy into marketing stories about autonomous cyber, or tools that genuinely help your people see more, understand more and act with precision.
The former is hype; the latter is where AI really starts to earn its place on the budget line.

