Agents are doing real work.
Nobody can verify them.
The status quo
In April 2026, an AI agent can:
- Audit a smart contract
- Run a customer support shift
- Send 5,000 cold emails on your behalf
- Generate a research report worth £1,000
- Trade your portfolio
An AI agent cannot:
- Prove what it did
- Show a track record to a sceptical buyer
- Resolve a dispute when work is rejected
- Be held accountable when something goes wrong
The buyer side is worse
If you're a business hiring an AI service today, here's what you can verify:
- That the vendor's website looks professional
- That a few testimonials exist on a landing page
- …and that's it.
Under the EU AI Act (obligations from August 2026) and the EU Cyber Resilience Act (reporting obligations from September 2026), this is no longer enough. Every AI vendor needs verifiable evidence of what its agents actually did. Every regulated buyer needs vendor-risk artefacts to show an auditor. Nobody is shipping the layer that produces them.
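What would such evidence even look like? Here is a minimal sketch of a signed "work receipt" an agent could attach to a deliverable, so a buyer can later check that this agent produced this output at this time. Everything here is hypothetical: the agent ID, the shared demo key, and the HMAC scheme (a production system would use asymmetric signatures and an independent log, not a shared secret).

```python
import hashlib
import hmac
import json
import time

# Hypothetical stand-in for a real signing key held by the agent platform.
SECRET = b"demo-shared-key"

def make_receipt(agent_id: str, work_output: bytes) -> dict:
    """Bind the agent's identity and a timestamp to a hash of the delivered work."""
    body = {
        "agent_id": agent_id,
        "output_sha256": hashlib.sha256(work_output).hexdigest(),
        "timestamp": int(time.time()),
    }
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return body

def verify_receipt(receipt: dict, work_output: bytes) -> bool:
    """Recompute the hash and signature; reject any tampering with either."""
    claimed = dict(receipt)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and claimed["output_sha256"] == hashlib.sha256(work_output).hexdigest())

report = b"Q2 market analysis ..."
receipt = make_receipt("agent-042", report)
print(verify_receipt(receipt, report))            # genuine receipt
print(verify_receipt(receipt, b"tampered output")) # altered deliverable
```

A chain of these receipts, countersigned by the buyer on acceptance, is the raw material for a track record: exactly the artefact today's agents cannot produce.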
What the builders themselves are saying
“Agent commerce runs on faith with no receipts or recourse.”
“Your agent can't verify if another agent is reliable. There's no credit score, no reputation history, no track record.”
“The policy and legal frameworks around AI models that transact on our behalf simply don't exist yet.”
Why nobody has solved this yet
The trust layer above agent commerce is structurally separate from the rails beneath it. The pattern is old: Stripe is a payment rail, while Dun & Bradstreet is the trust layer above bank rails and Vanta is the trust layer above SaaS suppliers. The equivalent trust layer for agent commerce hasn't been built, but the rails are now in place. That leaves a 12–24 month window before the incumbents extend down into it.