Microsoft says you're asking the wrong question about AI
Summary generated by Good Solutions AI
In summary:
- Microsoft's Ram Shankar Siva Kumar argues that users should focus on trusting AI developers rather than the AI models themselves when evaluating chatbot reliability.
- PCWorld reports that agentic AI, which functions as a digital assistant, is gaining popularity but may lack critical safeguards for widespread deployment.
- The key concern involves unrestricted AI access to users' digital lives, which could cause significant damage without proper security measures from trustworthy developers.
I ask friends all the time whether they trust ChatGPT or Gemini, especially when they tell me they feed these AI chatbots medical test results, deeply personal thoughts, or even sensitive work issues. But after talking with Microsoft at RSAC's 2026 cybersecurity conference, I realized I've been approaching that question all wrong. You probably have, too.
Ram Shankar Siva Kumar, Data Cowboy and AI Red Team Lead at Microsoft, says most people ask how to trust an AI. That is, they ask how to learn enough about it and its inner workings to make a call on its dependability. Their focus is on the model and its code.
Instead, Kumar suggests we should ask: "Do I trust the developer?"
This new approach to trust came out of a quick conversation about agentic AI, which is likely to appeal to most consumers. As Kumar says, this type of AI helps "with the drudgery of life." It's not hard to see the allure of having an AI agent as a digital assistant, able to handle multi-step tasks with little input. But Kumar also expressed concern that most consumers don't know that some AI projects just aren't ready for prime time yet.
For example, if an AI agent has unrestricted access to your entire digital life, you may not realize that safeguards aren't properly in place to prevent major mistakes and possibly irreparable damage. (Kumar and I ended up referring to this as a "YOLO model" for AI.) There's evidence of this in the wild: we've already seen repeated stories of AI agents deleting files they weren't asked to, with a prominent recent example being a Meta exec losing 200 emails to OpenClaw.
Of course, you can approach agentic AI use more carefully, as my colleague Ben Patterson explains. But his suggestions require living and breathing AI far more than most people do; my friends, family, and acquaintances just plop their questions into a chatbot or hit AI summary buttons without forming a game plan first.
What's easier is to simply ask yourself whether you believe the developer of the AI model will deliver on the product's marketing. Many AI development teams can truthfully say a new release is their most advanced model yet. But on an objective level, is it actually advanced in both features and security against exploits?
The answer to this question isn't necessarily to cut yourself off from interesting new tools. Rather, exercise sharp judgment about what and who you trust, and to what degree.