Security execs should prepare for tough questions on AI in 2026
For the past couple of years, many organisations have comforted themselves with a single slide or paragraph that reads along the lines of "We use artificial intelligence [AI] responsibly." That line might have been enough to get through informal supplier due diligence in 2023, but it won't survive the next serious round of tenders.
Enterprise buyers, particularly in government, defence and critical national infrastructure (CNI), are now using AI heavily themselves. They understand the risk language. They are making connections between AI, data protection, operational resilience and supply chain exposure. Their procurement teams will no longer ask whether you use AI. They will ask how you govern it.
The AI question is changing
In practical terms, the questions in requests for proposals (RFPs) and invitations to tender (ITTs) are already shifting.
Instead of the comfortable "Do you use AI in your services?", you can expect wording more like:
"Please describe your controls for generative AI, including data sovereignty, human oversight, model accountability and compliance with relevant data protection, security and intellectual property obligations."
Beneath that line sit a number of very specific concerns.
Where is client or citizen data going when you use tools such as ChatGPT, Claude or other hosted models?
Which jurisdictions does that data transit or reside in?
How is AI-assisted output checked by humans before it influences a critical decision, a piece of advice, or a safety-related activity?
Who owns and can reuse the prompts and outputs, and how is confidential or classified material protected in that process?
The generic boilerplate no longer answers any of those points. In fact, it advertises that there is no structured governance at all.
The uncomfortable reality in many service providers is that if you strip away the marketing language, most professional services organisations are using AI in a very familiar pattern.
Individual staff have adopted tools to speed up drafting, analysis or coding. Teams share tips informally. Some groups have written local guidance on what is acceptable. A few policies have been updated to mention AI.
What is often missing is evidence
Very few organisations can say with certainty which client engagements involved AI assistance, what categories of data were used in prompts, which models or suppliers were involved, where those suppliers processed and stored the information, and how review and approval of AI output was recorded.
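To make that concrete, here is a minimal sketch in Python of the kind of usage record that would close the evidence gap. The field names and values are illustrative assumptions, not a standard schema; the point is simply that each AI-assisted task leaves an auditable trace.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Hypothetical minimal record of one AI-assisted task on a client engagement.
# All field names are illustrative assumptions, not drawn from any standard.
@dataclass
class AIUsageRecord:
    engagement_id: str           # which client engagement the work belongs to
    tool: str                    # model or hosted service used
    provider_region: str         # jurisdiction where the provider processes data
    data_categories: list[str]   # categories of data included in prompts
    output_use: str              # what the output fed into
    reviewed_by: str             # the human who checked the output
    approved: bool               # outcome of that review
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIUsageRecord(
    engagement_id="ENG-2026-014",
    tool="hosted-llm",
    provider_region="EU",
    data_categories=["client-confidential"],
    output_use="first draft of a risk assessment",
    reviewed_by="j.smith",
    approved=True,
)
print(asdict(record))  # in practice, written to an auditable store, not printed

Even a log as thin as this would let an organisation answer most of the questions above from records rather than from memory.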
From a governance, risk and compliance (GRC) perspective, that is a problem. It touches data protection, information security, records management, professional indemnity, and in some sectors safety and mission assurance. It also follows you into every future tender, because buyers are increasingly asking about past AI-related incidents, near misses and lessons learned.
Why this matters so much in government, defence and CNI
In central and local government, policing and justice, AI is increasingly influencing decisions that affect citizens directly. That might be in triaging cases, prioritising inspections, supporting investigations or shaping policy analysis.
When AI is involved in these processes, public bodies must be able to show lawful basis, transparency, fairness and accountability. That means knowing where AI is used, how it is supervised, and how outputs are challenged or overridden. Suppliers into that space are expected to demonstrate the same discipline.
In the defence and wider national security supply chain, the stakes are even higher. AI is already appearing in logistics optimisation, predictive maintenance, intelligence fusion, training environments and decision support. Here the questions are not just about privacy or intellectual property. They are about reliability under stress, robustness against manipulation, and assurance that sensitive operational data is not leaking into systems outside sovereign or accredited control.
CNI operators have a similar challenge. Many are exploring AI for anomaly detection in OT environments, demand forecasting, and automated response. A failure or misfire here can quickly turn into a service outage, safety incident or environmental impact. Regulators will expect operators and their suppliers to treat AI as a component of operational risk, not a novelty tool.
In all of these sectors, the organisations that cannot explain their AI governance will quietly fall down the scoring matrix.
Turning AI governance into a commercial advantage
The good news is that this picture can be turned around. AI governance, done properly, is not about slowing down or banning innovation. It is about putting enough structure around AI use that you can explain it, defend it and scale it.
A sensible place to start is an AI procurement readiness assessment. At Advent IM, we describe this in very simple terms: can you answer the questions your next major client is going to ask?
That involves mapping where AI is used across your services, identifying which workflows touch client or citizen data, understanding which third-party models or platforms are involved, and documenting how humans supervise, approve or override AI outputs. It also means working out how AI fits into your existing incident response, data breach handling and risk registers.
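One lightweight way to hold that mapping is a simple register of AI use cases that links each workflow to its model, jurisdiction, oversight and risk register entry. The sketch below is purely illustrative; every service name, platform and identifier is invented.

# Hypothetical register of AI use cases; services, models and IDs are invented.
ai_use_register = [
    {
        "service": "bid writing",
        "workflow": "first-draft generation",
        "platform": "third-party hosted LLM",
        "touches_client_data": True,
        "jurisdiction": "EU",
        "human_oversight": "senior reviewer signs off every output",
        "risk_register_id": "RR-041",
    },
    {
        "service": "software delivery",
        "workflow": "code completion",
        "platform": "third-party IDE assistant",
        "touches_client_data": False,
        "jurisdiction": "UK",
        "human_oversight": "standard code review before merge",
        "risk_register_id": "RR-042",
    },
]

# A buyer's question such as "which workflows expose client or citizen data?"
# then becomes a query rather than a scramble:
exposed = [row["workflow"] for row in ai_use_register if row["touches_client_data"]]
print(exposed)  # ['first-draft generation']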
From there, you can develop a short, evidence-based narrative that fits neatly into RFP and ITT responses, backed by policies, process descriptions and example logs. Instead of hand-waving about responsible AI, you can present a clear story about how AI is governed as part of your wider security and GRC framework.
ISO 42001 as the backbone for AI governance
ISO/IEC 42001, the new standard for AI management systems, gives this work structure. It provides a framework for managing AI across its lifecycle, from design and acquisition through to operation, monitoring and retirement.
For organisations that already operate an information security management system (ISMS), quality management system or privacy information management system, 42001 should not feel alien. It can be integrated with existing ISO 27001, 9001 and 27701 arrangements. Roles such as senior information risk owner (SIRO), information asset owner (IAO), data protection officer, heads of service and system owners simply gain clearer responsibilities for AI-related activities.
Aligning with 42001 also signals to clients, regulators and insurers that AI is not being treated informally. It shows that there are defined roles, documented processes, risk assessments, monitoring and continual improvement around AI. Over time, that alignment can be taken further into formal certification for those organisations where it makes commercial sense.
Bringing people, process and assurance together
Policies and frameworks are only part of the picture. The real test is whether people across the organisation understand what is permitted, what is prohibited, and when they need to ask for help.
AI security and governance training is therefore critical. Staff need to understand how to handle prompts that contain personal or sensitive data, how to recognise when AI outputs might be biased or incomplete, and how to record their own oversight. Managers need to know how to approve use cases, sign off risk assessments and respond to incidents involving AI.
Bringing all of this together gives you something very simple but very powerful. When the next RFP or ITT lands with a page of questions about AI, you will not be scrambling for ad hoc answers. You will be able to describe an AI management system that is aligned to recognised standards, integrated with your existing security and GRC practices, and backed by training and evidence.
In a crowded services market, that can be the difference between being seen as an interesting supplier and being trusted with high-value, sensitive work.

