When DevRev co-founder Manoj Agarwal told an audience in London’s Canary Wharf last November that “work is broken”, he captured a frustration that many CIOs recognise.
Years of fixes, plug-ins and new platforms have left behind sprawling software estates, complexity and workflows that rely as much on manual oversight as automation. Enterprise software may be more powerful than ever, but the flow of work remains fragmented and opaque in many organisations.
The suggestion now that agentic AI is a panacea for all ills is understandably being met with some scepticism. And yet forecasts point to rapid expansion over the next few years as organisations continue to increase spending on AI and automation. Global technology investment is expected to rise steadily through the end of the decade, driven in part by AI, according to Forrester.
Gartner predicts that a majority of brands will be using agentic AI in customer interactions within the next few years. At the same time, it has warned that more than 40% of agentic AI projects could be cancelled by the end of 2027 as governance, cost and execution challenges become clearer.
Recent analysis from McKinsey underscores the point. While agentic capabilities are advancing quickly, most organisations remain in experimentation mode, struggling to scale beyond tightly defined pilots without addressing deeper operating-model and data issues.
All of which raises a more fundamental question – is enterprise work itself structurally sound enough to support autonomy at scale?
Ankur Anand, global CIO at Nash Squared, suggests not. He says that when agents are allowed to orchestrate real workflows across systems, they become “brutally honest about what is actually there”, adding: “It will happily follow the process you have designed, and in many enterprises that means it faithfully reproduces the chaos.”
CIOs describe agents stalling on access controls, escalating exceptions that were never formally defined, or producing outputs that reveal how fragmented the underlying data landscape really is. The technology does what it is designed to do. It executes against the rules and information available. Where those rules are unclear or the information incomplete, the weaknesses become hard to ignore.
In practice, this means stalled workflows, duplicated records and unclear ownership surface quickly. Gartner, in its warning about agentic projects being scaled back, suggests that governance and execution challenges have emerged as key causes. Meanwhile, survey data from Camunda indicates that many organisations remain stuck in the pilot phase despite widespread experimentation.
Joe Turner, global director of research at Context, sees a familiar pattern. He compares the current phase of agentic experimentation to the early days of private cloud adoption, when powerful infrastructure was layered onto operating models that had not materially changed. The result, he argues, was predictable: sophisticated platforms built on fuzzy governance and manual ticket queues. “Putting a high-speed engine onto a weak structure,” he adds, “is rarely a recipe for efficiency.”
The parallel extends beyond process design to cost discipline. With private cloud, many organisations discovered they had built flexible environments without the consumption controls to manage them. Agentic AI carries a similar risk. Unbounded model calls across poorly defined workflows can turn into significant token expenditure, particularly when agents are allowed to iterate or escalate repeatedly. As Anand notes, cost itself becomes a diagnostic tool. If the economics don’t hold, it often signals that the workflow being automated was never stable enough to begin with.
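The idea of cost as a diagnostic can be made concrete with a per-run token budget. The sketch below is purely illustrative – the class, limits and step costs are assumptions, not taken from any agent framework or vendor API:

```python
# Minimal sketch of a per-run token budget: record spend and stop the run
# rather than letting an agent iterate or escalate indefinitely.
# All names and numbers here are illustrative assumptions.

class TokenBudgetExceeded(Exception):
    """Raised when an agent run exhausts its token allowance."""

class BudgetedRun:
    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.used = 0

    def charge(self, tokens: int) -> None:
        # Accumulate spend; abort the run once the allowance is exhausted.
        self.used += tokens
        if self.used > self.max_tokens:
            raise TokenBudgetExceeded(
                f"used {self.used} of {self.max_tokens} tokens"
            )

run = BudgetedRun(max_tokens=10_000)
for step_cost in [4_000, 3_500, 2_000]:  # simulated per-step token counts
    run.charge(step_cost)
print(run.used)  # prints 9500
```

Runs that repeatedly trip a guard like this are exactly the signal Anand describes: the workflow, not the model, is the problem.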
The governance challenge runs deeper than cost alone. Agentic systems are not passive tools – they take actions, call APIs and move data across boundaries. That is fine when everything works well, but it also widens the blast radius when something goes wrong.
Sam Sutherland, principal software engineer at tech consultancy Parallax, argues that the engineering discipline required for agentic deployments is often underestimated. “It’s fairly easy to get something working,” he says. “It’s much harder to make it safe, reliable and governable.”
In several projects, he says, teams have deliberately avoided building a single, omnipotent agent. Instead, they design smaller, narrowly scoped agents with tightly defined remits. The aim is containment: limiting access reduces the impact of errors and improves reliability, particularly where long reasoning chains can degrade performance.
Visibility is another concern. Without detailed telemetry showing which tools an agent has called, what decisions it has taken and where it has escalated, systems quickly become opaque – a serious problem for anyone working in a regulated sector.
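What that telemetry might capture can be sketched in a few lines. The event shape and names below are assumptions for illustration only, not any vendor’s audit API:

```python
# Illustrative sketch: wrapping an agent's tool calls so every invocation,
# decision and escalation leaves a timestamped audit record.
# record_event() and its event kinds are assumptions, not a real API.

import json
from datetime import datetime, timezone

audit_log: list[dict] = []

def record_event(kind: str, detail: dict) -> None:
    audit_log.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "kind": kind,        # e.g. "tool_call", "decision", "escalation"
        "detail": detail,
    })

def call_tool(name: str, args: dict) -> dict:
    record_event("tool_call", {"tool": name, "args": args})
    # ... the real tool invocation would happen here ...
    return {"status": "ok"}

call_tool("crm_lookup", {"customer_id": "C-1042"})
record_event("escalation", {"reason": "access denied on billing system"})
print(json.dumps(audit_log, indent=2))
```

Even a trail this simple answers the regulator’s questions – what did the agent call, when, and why did it stop.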
AI trap
Adam Low, chief technology officer at secure communications platform Wire, cautions against what he calls the “AI trap” – deploying agentic systems where more conventional workflow tooling would suffice. Autonomous agents excel in dynamic, unstructured environments; in static, repeatable processes, they can introduce unnecessary complexity. Autonomy expands capability, but it also expands accountability.
If early deployments are exposing weakness, they are also forcing organisations to be clearer about how work gets done. Anand says many enterprise processes were never properly defined end to end, but instead evolved over time, shaped by workarounds and individual judgement. Agents remove that flexibility.
“The value comes when you treat these failures as telemetry,” he says. A stalled action or repeated escalation highlights a gap in data, ownership or governance. For CIOs prepared to confront it, the friction becomes useful because it shows where process discipline is thin. That may help explain why some projects are moving forward while others are being paused.
At Bill.com, a DevRev customer, CIO Steve Januario says five internally built agents are already supporting customers in production. The focus, he says, was not merely on model capability but on the architecture beneath. “You can’t just throw an agent out there,” he adds. “The architecture beneath is what determines whether it adds value or becomes another failed pilot.”
The roll-out took seven weeks from contract to production. The speed worked because teams embedded together and worked through the practical details of integration and control. The contrast is clear. Where autonomy is layered onto unclear workflows, it exposes dysfunction. Where processes are mapped and ownership defined, it can remove friction.
Data flaws
Where there is a problem, data sits at the heart of it. Ravi Malick, global CIO at Box, says many organisations still struggle to create a single, reliable source of truth. Data is spread across multiple applications and varies in quality from team to team. Agents cannot reason effectively if the underlying information is inconsistent or incomplete.
“Businesses need to focus on data unification and curation to ensure agents have the correct, up-to-date context to do the work,” he says.
Malick draws a parallel with earlier cloud migrations. Some companies moved infrastructure without changing how they worked – costs shifted, but processes didn’t. The same risk applies to agentic AI: threading agents into existing silos without redesigning the operating model is unlikely to deliver the expected gains.
Jon Bance, chief operating officer at consultancy Leading Resolutions, sees a similar pattern. Most CIOs he works with are not rushing towards fully autonomous agents; they are focused on cleaning up data, simplifying workflows and reducing operational noise.
“Without stable data foundations, clear guardrails and well-designed workflows, AI agents simply amplify existing problems rather than solve them,” he says.
In early pilots, he argues, weaknesses surface quickly, so the work now is less about expanding autonomy and more about strengthening the basics. That doesn’t mean organisations are stepping back from agentic AI altogether – in many cases, they are becoming more selective.
Readiness
Arthur Hu, senior vice-president and global CIO at Lenovo, says the issue is about readiness. In research the company conducted with IDC, it found that only a minority of organisations report significant agentic usage today, and many expect it will take more than a year before they are ready to scale. The barriers tend to centre on governance maturity, integration complexity and unclear ownership rather than model performance.
Early enthusiasm encouraged broad experimentation. Teams tested agents across multiple functions, often without fully defining where autonomy began and where human oversight ended. That approach is changing – decision rights are being made explicit; audit requirements are being defined earlier; and autonomy is introduced in stages, starting with observation and supervised execution before moving to constrained action.
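The staged approach can be sketched as a simple gate. The level names, allow-list and gate logic below are illustrative assumptions, not taken from any product:

```python
# Sketch of staged autonomy: each agent is assigned a level, and
# actions are gated accordingly. All names here are illustrative.

from enum import IntEnum

class Autonomy(IntEnum):
    OBSERVE = 0      # agent can only watch and report
    SUPERVISED = 1   # agent proposes; a human approves each action
    CONSTRAINED = 2  # agent acts alone, but only within an allow-list

ALLOWED_ACTIONS = {"update_ticket", "send_status_email"}

def gate(level: Autonomy, action: str, human_approved: bool = False) -> bool:
    """Return True if the action may execute at this autonomy level."""
    if level is Autonomy.OBSERVE:
        return False
    if level is Autonomy.SUPERVISED:
        return human_approved
    return action in ALLOWED_ACTIONS  # CONSTRAINED

print(gate(Autonomy.OBSERVE, "update_ticket"))           # False
print(gate(Autonomy.SUPERVISED, "update_ticket", True))  # True
print(gate(Autonomy.CONSTRAINED, "delete_account"))      # False
```

Promoting an agent from one level to the next then becomes an explicit governance decision rather than an accident of deployment.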
The pattern is familiar – a new capability arrives: the first phase is exploration and the second is discipline. For CIOs, it is another wave of investment layered onto an already crowded estate. The risk is obvious – more tooling, more integration, more complexity.
Agentic AI will not fix broken work by itself. It cannot compensate for weak data, unclear ownership or undefined processes. What it can do is expose where work depends on human intervention rather than design. It forces organisations to define what they previously left implicit. That pressure is uncomfortable, but perhaps it is also overdue. DevRev and its customers certainly think so – and by the look of it, they are not alone.