
Podcast: RSA 2025 – AI's risk surface and the role of the CISO

In this podcast, we talk to Mathieu Gorge, CEO of Vigitrust, about key topics at RSA 2025 in San Francisco.

The impact of artificial intelligence (AI) on compliance loomed large. Gorge discusses its spread in the enterprise and how this affects the potential risk surface for organisations. Meanwhile, he also notes the trend among suppliers towards a more consultative approach based around business outcomes.

Finally, and with regard to the impact of AI on organisations, compliance and their data, he talks about the discussion at RSA around the role of the CISO – chief information security officer – and whether they should be (solely) accountable in the face of risks posed by AI.

What were the key topics of relevance to data, storage and data protection that came up at RSA 2025?

I’ve been going to RSA in the US for about 20 years, and I’ve done a number of them in Europe. And generally speaking, every year there’s one single topic, whether it was blockchain, or orchestration, then last year was about AI deployment, AI adoption.

This year, it was kind of hard to see one single trend. However, what we can say is that based on the talks, and based on what the vendors were doing, compliance is at an all-time high. You could feel the energy, you could feel the innovation in compliance. There were a lot of vendors on the GRC [governance, risk, compliance] front, and there were vendors in specific areas of compliance and data protection.

So, that was interesting to see. The next thing is that we felt, when we were there with some of my colleagues, that at least at the vendor showcase, the narrative had changed. It was more about the business outcome of using the right products.

So, whereas in the past, typically at RSA, it was like pure sales – buy my encryption, because you need encryption; buy my storage solution, because you need proper storage – this year it really felt like a lot of work had been done on the business outcome of choosing solutions. The business outcome being, well, you’ll be more compliant, you’ll be able to demonstrate you’re doing data protection, you’ll be able, at the click of a button, to know where you have data issues and where you don’t.

And then there was also the role of CISOs. CISOs were mentioned quite a bit, and the discussion extended to head of risk and head of compliance, talking about the role of CISOs, especially with regard to AI adoption.

Are the CISOs the right people to be responsible for AI adoption? Are they not busy enough already dealing with data protection? Who else should work with the CISOs? Who else should be looking after AI governance, which was also one of the big themes in organisations? And what does it mean for compliance and for data protection? There were some very interesting talks about that.

Could you expand a little on how vendors are emphasising business outcomes rather than necessarily their functionality or what they are particularly offering?

I felt the vendors were taking a more consultative approach, where you could see that some of them had case studies and whitepapers on the benefits of doing compliance the right way, as opposed to “you have to do compliance, so whether you like it or not, you’re going to have to use us or our competitors”.

It was a case of, we’re now in a state where, with AI adoption, the risk surface is going up tremendously. It reminds me of cloud, where people could buy services and extend the risk surface while bypassing security and compliance.

And we see that happening with AI deployments as well. So, I felt there was a genuine direction from the vendor community and from the speakers to say, “Hey, we’re going to adopt AI, so let’s try to do it the right way without compromising the rest of the security that we’re doing. Let’s try to understand what the right AI governance is for different types of AI deployments. And then let’s focus on how we can manage that in an easier way.”

And then came the question I already mentioned, which was who really should be responsible for that? Is it just the CISO, or is it the CISO and the chief AI officer, or do we need a chief AI security officer? And what does it mean for compliance? Really, one of the key messages is that with AI, you simply have a lot more data and you have less control over the new data being created.

And so you need to have the right frameworks. And while there are already many AI frameworks out there to manage AI deployments and AI in terms of data classification, they’re not always well known. In fact, even some of the CISOs are not necessarily aware of them.

So, I think as an industry we have a duty to show up and make it easier for them to do the right thing, because the risk surface is definitely going up.