AI’s dumb genius problem | Computer Weekly
The AI debate right now centres almost entirely on models – which LLM is smarter, whether they’ll be commoditised, whether OpenAI or Anthropic or Google wins the arms race. These are real questions. But they aren’t the most important ones. The most important question is what sits between the model and the outcome. And right now, that layer barely exists.
Call it the context engine.
Here’s the problem with a genius in a room. Sam Altman and Dario Amodei have both used some version of this analogy – imagine having 100 brilliant minds working on your hardest problems. It’s a compelling image. But a genius without context is just a smart person working in a vacuum. Hand them a legal brief with no background on the client, the jurisdiction, the negotiating history, the personalities involved – and their output is generic at best. The intelligence is real. The usefulness is limited.
What changes everything isn’t adding more geniuses. It’s the briefing before they walk into the room.
That briefing – the situational awareness, the organisational memory, the understanding of how a particular person or company operates in the world – is what a context engine provides. And it’s almost entirely missing from how most people are using AI today. We’re essentially handing brilliant minds a task with no background and wondering why the outputs feel impressive but vague.
Lessons from Google’s history
Think about how Google evolved. In the early days, the metric everyone tracked was index size – how many websites Google had crawled. More pages meant better search. That was the commodity race, and Google won it. But analysts eventually realised that it didn’t give Google a long-term sustainable advantage. That came from the fact that Google knew you. It understood what you were actually looking for in the context of everything else you’d ever searched for. The index was replicable. The user relationship wasn’t.
We’re in the index phase of AI right now. Everyone is measuring parameters, benchmarks, reasoning scores. These matter. But they aren’t where the lasting value will accumulate. The context layer is.
Consider what context unlocks in practice. A law firm’s AI doesn’t just need to know the law – it needs to know this client’s risk tolerance, this partner’s drafting style, twenty years of case history, and how the opposing firm tends to negotiate. A software team’s AI doesn’t just need to write clean code – it needs to understand the architecture decisions made three years ago, the technical debt the team has chosen to live with, and what “done” means in this organisation. The raw intelligence of the underlying model matters far less than whether it knows where it is.
Here’s why this is also a business story. LLMs, for all their impressiveness, are ultimately replicable. Given enough capital and talent, you can train a competitive model. That’s not a dismissal of what OpenAI, Anthropic, and Google have built – it’s an observation about the nature of the asset. The race between them is real, and the outcome matters. But it is a race.
Why context matters in AI
Context is different. Context requires users and organisations to actively choose to share information – their workflows, their history, their preferences, their institutional knowledge. That act of sharing creates switching costs. Once an organisation’s context lives inside a system, leaving that system means starting over. The context doesn’t transfer. That’s an advantage that compounds over time in a way that model performance alone does not.
This is also why organisational context is more valuable than individual context. An individual user can rebuild their relationship with a new tool relatively quickly. An organisation cannot. The switching cost is institutional – it lives across teams, processes, and years of accumulated knowledge. Whoever captures that first, and earns the trust required to hold it, is sitting on something that looks less like software and more like infrastructure.
The LLM debate will continue. It isn’t unimportant. But the next phase of AI value creation won’t be won by whoever builds the smartest model in isolation. It will be won by whoever figures out how to make these models truly situationally aware – equipped not just with what they’ve learned, but with where they are, who they’re serving, and what actually matters in this specific moment.
The context engine is coming. The question is who builds it, and who owns what it learns.
Judah Taub is the founder and managing partner of Hetz Ventures, an Israeli early-stage venture capital firm specialising in cybersecurity, data, and AI infrastructure.