
AI for Good: Signal president warns of agentic AI security flaw


The president of secure messaging app Signal has warned of the security implications of agentic AI, where artificial intelligence (AI) can access systems to help people achieve certain tasks.

In the “Delegated decisions, amplified risk” session at the United Nations’ AI for Good summit, Meredith Whittaker spoke about how the security of Signal and other applications can be compromised by agentic AI. She said the industry was spending billions on advancing AI and betting on developing powerful intermediaries.

As an example of the access that AI agents require, she said: “To make a booking for a restaurant, it needs to have access to your browser to search for the restaurant, and it needs to have access to your contact list and your messages so it can message your friends.

“Gaining access to Signal would ultimately undermine our ability at the application layer to provide robust privacy and security.”

Whittaker noted that for AI agents to do their jobs autonomously without user interaction, they require pervasive access at the root level to the user’s IT systems. Such access, as Whittaker pointed out, goes against cyber security best practices “in a way that any security researcher or engineer here knows is exactly the kind of vector where one point of access can lead to a much more sensitive domain of access”.

Another security risk of agentic AI is that old software libraries and system components may not be very secure. “When you give an agentic AI system access to so much of your digital life, this pervasive access creates a serious attack vector [to target] security vulnerabilities,” she warned.

The Signal messaging app, like other applications, runs at the application layer of the operating system, and is specifically designed not to use “root” access, to avoid cyber security risks.

“The Signal messenger app you’re using is built for iOS or Android, or a desktop operating system, but in none of those environments will it have root access to the entire system. It can’t access data in your calendar. It can’t access other things,” said Whittaker.
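To illustrate the sandboxing model Whittaker describes, the short sketch below contrasts an application confined to its declared permissions with an agent granted broad, root-style access. The class and permission names are hypothetical, for illustration only, and do not correspond to any real iOS or Android API.

```python
# Conceptual sketch only: a toy permission broker, not a real mobile OS API.
class PermissionBroker:
    """Grants a process access only to the resources the OS has allowed it."""

    def __init__(self, granted):
        self.granted = set(granted)

    def read(self, resource):
        if resource not in self.granted:
            raise PermissionError(f"access to '{resource}' denied")
        return f"<contents of {resource}>"

# A sandboxed, application-layer messenger holds only the permissions it declared.
messenger = PermissionBroker({"contacts", "notifications"})

# An agent integrated at the operating system level is granted far more.
agent = PermissionBroker({"contacts", "calendar", "browser_history", "messages", "files"})

print(agent.read("calendar"))        # pervasive access succeeds

try:
    messenger.read("calendar")       # the confined app is refused
except PermissionError as err:
    print(err)
```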

“The place where Signal can guarantee the kind of security and privacy that governments, militaries, human rights workers, UN staff and journalists need is in the application layer,” she said.

But AI agents need to work around these security restrictions. “We’re talking about the integration of these agents, often at the operating system level, where they’re being granted permissions up into the application layer,” she warned.
 
For Whittaker, the way agents are being developed should be a concern for anyone whose applications run at the application layer of an operating system, which is the case for the majority of non-system applications.

“I think this is concerning, not just for Signal, but for anyone whose tech exists at the application layer,” she said.

She used Spotify as an example, saying it does not want to give every other company access to all its data. “That’s proprietary information, algorithms it uses to sell you ads. But an agent is now coming in via a promise to curate a playlist and send it to your friends on your messaging app, and the agent now has access to all that data.”

Whittaker also warned governments of the risks they face when deploying an AI system that uses an application programming interface (API) from one of the big tech providers to access geopolitically sensitive information.

“How is it accessing data across your systems? How is it pooling that data? We know that a pool of data is a honeypot and can be a tempting resource,” she said.

AI systems are probabilistic and draw on different sets of training data to derive a plausible answer to a user query.

“AI isn’t a magical thing,” Whittaker added. “It’s a handful of statistical models, and AI agents are usually based on a number of different types of AI models wrapped in some software.”
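As a rough illustration of that description, the sketch below treats an “agent” as ordinary software that asks a stubbed planner model for a list of steps and then executes each one with whatever access it has been given. The model and tool names are placeholder assumptions, not any real framework or vendor API.

```python
# Illustrative sketch of an agent as "models wrapped in some software".
def planner_model(request):
    # In practice a probabilistic language model would decompose the request;
    # here the plan is hard-coded purely for illustration.
    return ["search_browser", "read_contacts", "send_message"]

# Each step delegates to a different model or tool the agent can reach.
TOOLS = {
    "search_browser": lambda: "restaurant found",
    "read_contacts":  lambda: ["alice", "bob"],
    "send_message":   lambda: "invitation sent",
}

def run_agent(request):
    """The 'agent' is plain software: get a plan, then execute each step."""
    return [TOOLS[step]() for step in planner_model(request)]

print(run_agent("book a table and invite my friends"))
```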

She urged delegates considering agentic AI to assess the data access these systems require and to understand how they achieve their results.

The way AI models are trained on enterprise content was the subject of a recent Computer Weekly podcast with Gartner analyst Nader Henein, who discussed the need for access control within the AI engine so it can understand which datasets a user is authorised to see.

Henein warned that unless such access control is built into the AI engine, there is a very real risk it will inadvertently reveal information to people who should not have access to it.

One approach Henein sees as a possible way to avoid internal data leakage is to deploy small language models. Here, different AI models are trained and deployed based on subsets of enterprise data that align with the data access policies for categories of users.
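A minimal sketch of the approach Henein outlines might look like the following, assuming a simple mapping from user roles to models trained only on the data those roles are authorised to see. The role names and routing logic are illustrative assumptions, not Gartner’s or any vendor’s design.

```python
# Each role maps to a small model trained only on data that role may access.
MODELS_BY_ROLE = {
    "hr":      lambda q: f"[HR-only model] answer to: {q}",
    "finance": lambda q: f"[Finance-only model] answer to: {q}",
    "general": lambda q: f"[General model, public data only] answer to: {q}",
}

def answer(query, user_role):
    # The access decision is made before any model sees the query, so a user
    # can never receive an answer derived from data outside their authorised subset.
    model = MODELS_BY_ROLE.get(user_role, MODELS_BY_ROLE["general"])
    return model(query)

print(answer("What is the average salary by department?", "finance"))
print(answer("What is the average salary by department?", "intern"))  # falls back to the general model
```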

Henein said such a policy could be both extremely expensive and extremely complex, but added: “It may also be the way forward for a lot of cases.”

The major AI providers also sell some of this technology to the defence sector. It is something one presenter at the AI for Good conference urged delegates to be wary of.

What these providers do with the data they collect every time an AI API is used is something every enterprise decision-maker and cyber security professional, in both the private and public sectors, needs to consider.