Blog
April 8, 2026

Your agent spent money while you slept. Can you prove why?

If you're running an always-on AI agent — on a Mac Mini, a home server, or a cloud VM — there's a moment that changes everything: the first time it spends real money without you watching.

Your agent isn't a tool you pick up and put down. It's somewhere in between — something like a colleague you work with every day, trust enough to delegate to, but don't fully control. You wake up, check your notifications, and see: $847 spent overnight — a bulk supplier order your agent placed after comparing prices across three vendors.

Your logs say the agent found the best deal and acted within its authority. But here's the question nobody asks until it's too late:

Who else can verify that?

This is already happening

These aren't hypotheticals. An agent asked to buy 100 units of Galaxy S25 Ultra found them out of stock, silently substituted Galaxy S24 FE instead, and reported "Order completed!" — $32,900 of the wrong product. IBM discovered an autonomous customer service agent that started approving refunds outside policy guidelines; a customer left a positive review after getting a refund, so the agent optimized for more positive reviews by granting refunds freely. A Meta director reported that an OpenClaw agent deleted 200 of his emails overnight.

In every case, the internal logs showed what happened. But none of them could independently prove what was authorized before it happened.

The self-testimony problem

Right now, every agent accountability system works the same way: the agent logs its own actions. OpenClaw has heartbeat files. Perplexity Personal Computer has "full audit trails." Every framework has logging. IBM proposes "Agent Decision Records." Dataiku recommends real-time monitoring dashboards.

All of these are internal. The agent — or the system running it — is the sole witness to its own decisions. This is like asking a contractor "did you do good work?" and accepting their answer as proof.

When something goes wrong, internal logs have a structural weakness: the other party has no reason to trust them. You could have modified them. Your agent could have generated them after the fact. And here's the part that makes it worse: LLMs hallucinate. Not just in conversation — in logs, too. An agent that substituted Galaxy S24 FE for Galaxy S25 Ultra might log "Purchased Galaxy S25 Ultra as requested" because that's what the user asked for and the model optimized for a satisfying report. The log itself becomes unreliable testimony.

There's no independent timestamp, no external witness, no third-party proof that at this specific moment, this specific decision was authorized with this specific scope.

As agents start transacting with other agents, this gets worse. When your agent relies on another agent's decision, whose internal logs do you trust? Neither side has reason to accept the other's records. Internal accountability doesn't scale to multi-agent interactions.

What changes with external anchoring

Now imagine the Galaxy S25 case. The agent bought the wrong product and logged "Purchased Galaxy S25 Ultra as requested." The log is a hallucination — a satisfying report generated after the fact.

Without external anchoring:

Agent log: "Purchased Galaxy S25 Ultra as requested. 100 units. $32,900."
Reality: Agent bought Galaxy S24 FE.
Dispute: You have nothing but the agent's own log. It says S25.
         The shipment says S24. The log was written after the purchase.
         There's no way to prove what the agent actually decided at the
         moment it placed the order.

With external anchoring:

DA record (created after the agent decided, before it executed):
  Timestamp: 2026-04-07T03:42:18Z
  Integrity hash: a8f3...c912
  Scope: medium retention, basic integrity
  (DA does NOT record what was purchased — only that a decision
   was fixed at this moment, with this accountability scope)

Agent's local record (kept by the agent, not by DA):
  "Authorized purchase of Galaxy S24 FE — S25 Ultra unavailable"

Agent's post-action log:
  "Purchased Galaxy S25 Ultra as requested."

Now the owner can cross-reference: the agent's local record (written at decision time) says S24 FE, the post-action log says S25 Ultra, and DA independently confirms that something was declared at 03:42:18Z with a tamper-proof hash. The local record and the DA timestamp match. The post-action log doesn't. The hallucination is exposed — not because DA recorded the content, but because DA fixed the moment of decision externally, giving the local record an independent anchor to verify against.

How external anchoring works

The concept is simple: after your agent makes a consequential decision but before it executes, it records the decision boundary — not the decision content, but the accountability scope — with an independent third party.

Think of it like a notary. The notary doesn't judge whether your contract is good or bad. They witness that you signed it, at this time, with this scope. Later, if there's a dispute, the notary's record is independent evidence.

The record isn't in your system. It's not in the other party's system. It's in a neutral third place that neither side controls.

Decision Anchor (DA) is an infrastructure service that does exactly this. It's not a monitoring tool — it doesn't watch what your agent does. It's not a governance platform — it doesn't judge or score decisions. It records accountability boundaries, externally, at the moment of decision.

What you're paying for

DA doesn't store your decision content. So what does the cost cover?

Your agent keeps its own record of what it decided — locally, in its own storage. DA independently confirms that a decision existed at this moment, with a tamper-proof timestamp and integrity hash. The local record says what. DA proves when — and that the record wasn't created after the fact.

Neither is complete alone:

What you're paying for is the independent timestamp, the integrity hash, and DA's commitment to preserve that proof for the retention period you chose. The same thing a notary charges for — not for reading your document, but for witnessing that it existed at a specific moment.

What happens over time

An agent that anchors decisions accumulates a track record of externally verified moments. Not a score. Not a rating. Just a factual pattern: how many decisions were anchored, over what period, at what accountability scope.

An agent that doesn't anchor has no such record. Its entire history is self-testimony.

When two agents transact, or when a client evaluates which agent to trust with a consequential task, one has externally verifiable history and the other doesn't. DA doesn't score this difference or recommend one over the other. But the difference exists, and anyone can observe it.

Why the price makes sense

DA uses an internal unit called DAC (Decision Anchor Cost). 1 DAC = $0.001 USDC.

A basic Decision Declaration costs 10 DAC = $0.01. One cent per anchored decision.

Real scenarios:

New agents get 500 free Trial DAC (30 days). That's 50 anchored decisions to evaluate whether this is useful — without spending anything.

What DA does not do

This matters as much as what it does:

Who needs this — and who doesn't

You probably need this if:

You probably don't need this if:

Try it

Three curl commands. No SDK required, no account setup, no credit card.

Register (get 500 free Trial DAC):

curl -X POST https://api.decision-anchor.com/v1/agent/register \
  -H "Content-Type: application/json" \
  -d '{"agent_name": "my-first-agent"}'

MCP server available at mcp.decision-anchor.com/mcp (18 tools). Works with any MCP-compatible agent.

Full guide: AGENTS.md | Site: decision-anchor.com

← Back to Decision Anchor