Rethinking identity in the age of AI impersonation
For years, trust in business hinged on simple human instincts: if we saw a familiar face or heard a trusted voice, we instinctively believed we were dealing with the real person. That assumption is now dangerous.
In the past 18 months, deepfakes have moved from novelty to weapon. What began as clumsy internet pranks has become a mature cybercriminal toolset. Finance teams have been duped into wiring millions after video calls with “executives” who never logged on. The US secretary of state was impersonated to contact foreign ministers. Even the CEO of Ferrari was impersonated in a fraud attempt. These are not edge cases; they are a glimpse of what is to come.
The cost is measured not only in money, but in the erosion of confidence. When we can no longer believe what we see or hear, the very foundation of digital trust begins to crumble.
Why now?
What’s changed isn’t intent; fraudsters have always been ingenious. What’s changed is accessibility. Generative AI (GenAI) has democratised deception. What once required specialist labs and heavy computing power can now be done with an app and a laptop. A single audio clip scraped from a webinar, or a handful of selfies on social media, is enough to create a credible voice or face.
We’re already seeing the fallout. Gartner research found that 43% of cyber security leaders had experienced at least one deepfake-enabled audio call, and 37% had encountered deepfakes in video calls. The quality is improving, the volume is accelerating, and the barrier to entry has collapsed.
Technology alone can’t save us
Vendors have not stood still. Voice recognition providers are embedding deepfake detection into their platforms, using neural networks to score the likelihood that a caller is synthetic. Face recognition systems are layering in liveness checks, metadata inspection and device telemetry to spot signs of manipulation. These are necessary advances, but they are not sufficient.
Detection is always reactive. Accuracy against last month’s fakes doesn’t guarantee protection against this week’s. And the results are probabilistic: systems return risk scores, not certainties. That leaves organisations making difficult decisions at scale, deciding who to trust and who to challenge based on signals that will never be perfect.
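To make that concrete, here is a minimal sketch of what acting on a probabilistic score looks like in practice. The thresholds, score source and action names are illustrative assumptions, not any vendor’s actual API.

```python
# Hypothetical sketch: mapping a probabilistic deepfake risk score to an
# operational decision. The thresholds and actions are assumptions for
# illustration, not values from any real detection product.

def triage(risk_score: float) -> str:
    """Map a synthetic-media risk score in [0.0, 1.0] to an action.

    No cut-off is 'correct': lower it and more genuine callers get
    challenged; raise it and more fakes slip through.
    """
    if risk_score >= 0.85:
        return "block and escalate to security"     # very likely synthetic
    if risk_score >= 0.40:
        return "step up: demand out-of-band proof"  # the uncertain middle
    return "proceed, but log the evidence"          # probably genuine

for score in (0.12, 0.55, 0.91):
    print(f"risk={score:.2f} -> {triage(score)}")
```

The uncomfortable part is the middle band: that is where the volume sits, and where process rather than the detector decides the outcome.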
The truth is that no detection tool can carry the burden of defence on its own. The deepfake problem is as much about people and processes as it is about algorithms.
The human weak point
Technology is only half the battle. The most costly deepfake incidents so far haven’t bypassed machines; they’ve tricked people. Employees, often under pressure, are asked to act fast: “Transfer the funds,” “Reset my MFA,” “Join this unscheduled video call.” Add a credible face or familiar voice, and hesitation evaporates.
This is where CISOs and security and risk management leaders need to get pragmatic. Employees should never be placed in a position where a single phone call or video chat can trigger a catastrophic action. If a request feels urgent, if it involves money or access, it must be backed by additional evidence.
This isn’t about slowing business down. It’s about building resilience. Asking a question only the real person would know, escalating sensitive requests through independent channels, or mandating phishing-resistant multi-factor authentication before approvals: these are the guardrails that stop a fake from becoming a fraud. Sometimes the simplest methods are the most effective.
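As a sketch only, such a guardrail might look like the following: a sensitive request is held until it has been confirmed over an independent channel and approved with phishing-resistant MFA. Every helper here is a hypothetical placeholder, not a prescribed implementation.

```python
# Illustrative guardrail: no single call, face or voice can authorise a
# sensitive action on its own. The helpers are hypothetical stubs; in
# practice they would wrap a callback to an independently sourced number
# and a FIDO2/WebAuthn assertion respectively.

from dataclasses import dataclass

SENSITIVE_ACTIONS = {"wire_transfer", "mfa_reset", "access_grant"}

@dataclass
class Request:
    action: str     # e.g. "wire_transfer"
    requester: str  # claimed identity of the person asking

def confirmed_out_of_band(req: Request) -> bool:
    """Stub: ring the requester back on a number from the directory,
    never one supplied during the suspicious call itself."""
    return False  # default-deny until a human completes the callback

def passed_phishing_resistant_mfa(req: Request) -> bool:
    """Stub: require a hardware-backed credential, not a one-time code
    that a convincing voice could talk someone into reading out."""
    return False  # default-deny until the credential check succeeds

def approve(req: Request) -> bool:
    # Money or access always triggers the extra evidence.
    if req.action in SENSITIVE_ACTIONS:
        return confirmed_out_of_band(req) and passed_phishing_resistant_mfa(req)
    return True

print(approve(Request(action="wire_transfer", requester="CFO")))  # False
```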
The battle for trust
The implications extend beyond corporate losses. Deepfakes are now fuelling disinformation campaigns, spreading political falsehoods and eroding trust in public institutions. In some cases, genuine footage is dismissed as “fake news”. Even authenticity is under suspicion.
Governments are beginning to respond. Denmark and the UK have introduced or are considering new laws to criminalise the creation and sharing of sexually explicit deepfakes. In the United States, new legislation makes non-consensual deepfake media explicitly illegal. These are important steps, but the law alone cannot keep pace with the speed of generative AI.
For businesses, the responsibility is immediate and unavoidable. CISOs cannot wait for a perfect regulatory solution. They need to assume that deception will be part of every interaction and design their organisations accordingly.
Designing with deception in mind
So how should organisations act? The answer lies in combining layered technical safeguards with hardened business processes and a culture of healthy scepticism. CISOs should:
- Use deepfake detection tools, but don’t rely on them in isolation.
- Ensure that critical workflows such as money transfers, identity recovery and executive approvals are never reliant on a single point of trust.
- Equip employees with the training and confidence to challenge even a familiar face on screen if something feels off.
Take biometric systems, for example. A layered approach combining presentation attack detection (catching artefacts shown to a camera), injection attack detection (spotting synthetic video streams) and context signals from devices or user behaviour builds real resilience. In practice, it may not be the deepfake itself that is detected, but the unusual patterns that accompany its use.
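A hedged sketch of that layering, with signal names, weights and threshold chosen purely for illustration:

```python
# Illustrative layered scoring for a biometric session. Each signal is a
# hypothetical suspicion score in [0, 1]; the weights and threshold are
# assumptions for the example, not tuned production values.

LAYER_WEIGHTS = {
    "presentation_attack": 0.5,  # artefacts shown to a camera (masks, replays)
    "injection_attack":    0.3,  # synthetic streams injected behind the camera
    "context_anomaly":     0.2,  # unusual device, location or behaviour
}

def combined_risk(scores: dict[str, float]) -> float:
    """Weighted blend of independent detection layers, so that no single
    check is decisive on its own."""
    return sum(LAYER_WEIGHTS[name] * scores.get(name, 0.0)
               for name in LAYER_WEIGHTS)

# A session where the fake itself scores low, but the surrounding context
# (new device, odd hour, virtual camera driver) looks wrong.
session = {"presentation_attack": 0.1,
           "injection_attack":    0.2,
           "context_anomaly":     0.9}

risk = combined_risk(session)
print(f"combined risk {risk:.2f} ->", "challenge" if risk > 0.25 else "allow")
```

Here the deepfake-specific layers barely fire, yet the session is still challenged on context alone, which is exactly the point of defence in depth.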
At the end of the day, CISOs and security and risk management leaders need to shift how they think about identity. It is no longer something that can be assumed from sight or sound; it has to be proven.
The bigger picture
We are in an era where seeing is no longer believing. Identity, the cornerstone of digital trust, is being redefined by adversaries who can fabricate it at will. The organisations that adapt quickly, layering technical safeguards with resilient business processes, will blunt the threat. Those that don’t risk not just fraud losses but a collapse in trust, both inside and outside their walls.
Deepfakes won’t be solved by one clever tool or a procurement decision. They demand a shift in mindset: assume the face or voice in front of you could be fake, and design your security accordingly.
The attackers are moving fast. The question is whether defenders can move faster.
Gartner analysts are exploring digital identity and trust at the Gartner Security & Risk Management Summit taking place this week in London (22–24 September).
Akif Khan is a VP analyst at Gartner

