
Framework

Trust Economy for intention, attention, and governed action

A framework for deciding what deserves trust, reward, escalation, or resistance in AI-shaped systems.

The next trust problem is not just identity. It is action quality over time.

A system can know who is present and still fail to decide whether behavior deserves permission, incentive, trust, or review. That gap is where synthetic engagement, incentive farming, and AI-driven abuse now grow.

Most systems still suffer from event amnesia — they treat a user’s tenth action as if it were their first. Keigen’s framework replaces isolated snapshots with behavioral continuity.

51% of web traffic is now automated. 37% is classified as bad bots.

The Model

Five layers of governed trust

Each layer asks a different question. Together, they form a complete chain from observation to governed action.

What does the actor appear to want, and how credible is that direction of action?

The Intent Layer detects short-term demand for scarce attention. It does not declare final value; it observes whether a user or actor is actively seeking allocation now. Typical signals include dwell time, repeat visits, click sequence, completion behavior, timing rhythm, effort expenditure, and task progression.
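The signals listed above can be blended into a single demand estimate. The sketch below is illustrative only: the signal names, saturation points, and weights are assumptions, not Keigen's actual model.

```python
from dataclasses import dataclass

@dataclass
class IntentSignals:
    """Hypothetical short-term demand signals drawn from the list above."""
    dwell_seconds: float    # time spent on the current surface
    repeat_visits: int      # returns within the observation window
    completion_rate: float  # 0..1, fraction of started tasks finished

def intent_score(s: IntentSignals) -> float:
    """Blend signals into a 0..1 demand estimate (weights are illustrative)."""
    dwell = min(s.dwell_seconds / 120.0, 1.0)   # saturate at two minutes
    visits = min(s.repeat_visits / 5.0, 1.0)    # saturate at five visits
    return round(0.4 * dwell + 0.3 * visits + 0.3 * s.completion_rate, 3)
```

The key property is that the score observes current demand without committing to its value; later layers decide whether it deserves allocation.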

What are they actually spending, sustaining, or signaling through behavior over time?

The Attention Layer captures what the actor is spending, sustaining, or signaling through observable behavior. It transforms fleeting intent into measurable engagement: time invested, depth of interaction, consistency of return, and quality of participation.
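One way to turn "consistency of return" into a number is recency-weighted accumulation, so sustained engagement outweighs a single burst. A minimal sketch, assuming events are (minutes invested, days ago) pairs and a half-life chosen for illustration:

```python
import math

def attention_balance(events, half_life_days: float = 14.0) -> float:
    """Sum time-invested events with exponential recency weighting.

    Each event is (minutes, age_days); an event loses half its weight
    every `half_life_days`, so old bursts fade while steady return holds.
    """
    return sum(
        minutes * math.exp(-math.log(2) * age_days / half_life_days)
        for minutes, age_days in events
    )
```

Under this weighting, ten minutes spent today counts fully, while ten minutes spent one half-life ago counts half.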

Has the pattern of action earned greater permission, confidence, or reward?

The Trust Layer determines whether a demand signal deserves repeated financing. It estimates whether the actor is likely to remain stable, cooperative, authentic, and low-loss over time. Trust is not simply reputation or a moral score. It is a system-level estimate of future allocability — long-term access capital.
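"A system-level estimate of future allocability" can be read as a smoothed probability that the actor's next action is clean. The sketch below uses Laplace smoothing as one hypothetical estimator; the prior and the binary clean/flagged outcome model are assumptions.

```python
def trust_estimate(history: list[bool]) -> float:
    """Estimate the probability the next action is clean.

    `history` holds past outcomes (True = clean, False = flagged).
    Laplace smoothing pulls actors with no history toward 0.5,
    so trust is earned by a pattern of action, not granted at entry.
    """
    clean = sum(history)
    return (clean + 1) / (len(history) + 2)
```

This makes trust cheap to carry but slow to fake: a new actor starts near 0.5, and only a long low-loss record pushes the estimate toward 1.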

What thresholds, exceptions, routing rules, and escalation paths should be applied?

The Policy Layer governs the ecosystem. It decides intent and trust qualification thresholds; when to prompt, review, quarantine, or recover; when to cap issuance; and how aggressive or conservative the system should be. Policy is the civilizational rule layer.
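Thresholds and routing rules of this kind reduce to a small decision table. The cut-off values and action names below are placeholders, not Keigen's published policy:

```python
def route(intent: float, trust: float,
          intent_min: float = 0.3,
          trust_review: float = 0.4,
          trust_grant: float = 0.7) -> str:
    """Map (intent, trust) scores to a governed action.

    The thresholds are policy-owned knobs: tightening them makes the
    system more conservative, loosening them more aggressive.
    """
    if intent < intent_min:
        return "ignore"        # no credible demand to finance
    if trust >= trust_grant:
        return "grant"         # low friction for trustworthy actors
    if trust >= trust_review:
        return "review"        # escalate to a secondary check
    return "quarantine"        # credible demand, untrusted actor
```

Keeping the thresholds as parameters rather than constants is the point: policy can retune them without touching the detection layers beneath.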

How should repeated action be managed across time, incentives, and changing system conditions?

The Governance Layer manages the system over time. It handles temporal pricing — persistence, cooling periods, decay, commitment strength — and risk estimation: fraud probability, bot exposure, coordinated manipulation, one-shot extraction. Governance ensures the rules themselves evolve as conditions change.
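Cooling periods and decay compose naturally: trust holds steady for a grace window, then erodes, so standing must be re-earned after long absence. A minimal sketch with assumed parameter values:

```python
def decayed_trust(trust: float, idle_days: float,
                  cooling_days: float = 7.0,
                  decay_per_day: float = 0.02) -> float:
    """Apply a cooling period, then linear decay, to a trust score.

    Trust is untouched while idle time stays inside the cooling window;
    beyond it, each idle day shaves `decay_per_day` off, floored at zero.
    """
    excess = max(idle_days - cooling_days, 0.0)
    return max(trust - decay_per_day * excess, 0.0)
```

Linear decay is one choice among several; exponential decay or step-down tiers fit the same slot, and governance is the layer that picks and retunes the curve.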

Why Both

Technical controls & economic resistance

AI is collapsing the marginal cost of fraud. The question stops being only “Can we block bots?” and becomes “Can we make abuse less profitable?”

Technical resistance

Detection, review, trust-state decay, quarantine, and escalation. Systems that observe, classify, and respond to behavioral patterns in real time. Discernment at speed.

Economic resistance

Better incentive alignment, more selective review, lower friction for trustworthy users, and higher cost for synthetic engagement. Making abuse less profitable, not just harder. Deterrence through design.
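"Less profitable" is a checkable claim: an attacker's expected profit per campaign can be written down, and economic resistance means pushing it below zero. The formula below is a standard expected-value sketch, not a Keigen metric:

```python
def abuse_expected_profit(payout: float, attempts: int,
                          detect_rate: float,
                          cost_per_attempt: float) -> float:
    """Attacker's expected profit across a campaign of attempts.

    Each attempt pays out only if undetected; every attempt costs
    something regardless. Raising detection or per-attempt cost
    (friction, deposits, review) drives the expectation negative.
    """
    return attempts * ((1 - detect_rate) * payout - cost_per_attempt)
```

Note that detection and cost are substitutes: a system that cannot detect better can still win by making each synthetic attempt more expensive than its expected payout.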


Go Deeper

The AMS Whitepaper (v4)

The full theoretical and technical foundation — the Attention Monetary System, political economy of digital allocation, and the five-layer model in detail.

“The deeper aim is to direct trust and attention toward deserving and proven intent — through dynamic policy and governance while preserving vitality.” — Helen, Founder