

AMS Architecture: Verification Architecture for AI-Mediated Systems

The AMS architecture is a structured way of deciding whether activity is real, legitimate, authorised, and fit to be counted in systems that are increasingly mediated by AI and automated actors.

It progresses from abuse detection to actor classification, then toward authorised delegated actors and policy-bounded accountability for AI-assisted decisions.

Definition

What is verification infrastructure?

Verification infrastructure is the layer of software and policy that decides whether observed behaviour in an AI-mediated system represents genuine commercial intent — and therefore whether value, permission, or resource allocation should move.

It sits above identity and authentication. Identity asks: who is this? Authentication asks: are they who they claim? Verification infrastructure asks a different question: does this pattern of action deserve commercial response?

In environments where signal can be produced by automated agents, AI-mediated workflows, or coordinated incentive farming, the cost of producing fake-but-credible behaviour has collapsed. Traditional trust infrastructure — built before AI mediation — was not designed to address this. Verification infrastructure is.

Keigen's verification infrastructure operates across five layers — Intent, Attention, Trust, Policy, Governance — each addressing a specific failure mode that emerges when participation, traffic, or work delivery becomes machine-mediated. The principle is simple: verification before value release. Trust as outcome, not assumption.
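The "verification before value release" principle can be pictured as an ordered gate: an observed event passes through the five layers in sequence, and value moves only if every layer qualifies it. The sketch below is illustrative Python, not Keigen's implementation; the layer checks, signal names, and thresholds are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """An observed action awaiting verification (illustrative fields only)."""
    actor_id: str
    signals: dict = field(default_factory=dict)

def verify_before_value_release(event, layers):
    """Pass an event through ordered verification layers.

    Value is released only if every layer qualifies the event; the first
    failing layer names the reason. A sketch, not a production design.
    """
    for name, check in layers:
        if not check(event):
            return {"release": False, "failed_layer": name}
    return {"release": True, "failed_layer": None}

# Toy stand-ins for Intent, Attention, Trust, Policy, Governance.
layers = [
    ("intent",     lambda e: e.signals.get("dwell_seconds", 0) > 30),
    ("attention",  lambda e: e.signals.get("repeat_visits", 0) >= 2),
    ("trust",      lambda e: e.signals.get("trust_score", 0.0) >= 0.6),
    ("policy",     lambda e: not e.signals.get("quarantined", False)),
    ("governance", lambda e: e.signals.get("fraud_probability", 1.0) < 0.2),
]

genuine = Event("buyer-1", {"dwell_seconds": 120, "repeat_visits": 3,
                            "trust_score": 0.8, "fraud_probability": 0.05})
bot = Event("bot-9", {"dwell_seconds": 2, "repeat_visits": 40,
                      "trust_score": 0.1, "fraud_probability": 0.9})

print(verify_before_value_release(genuine, layers))  # value released
print(verify_before_value_release(bot, layers))      # stopped at the intent layer
```

The ordering matters: cheap observational checks run first, so most synthetic activity is rejected before the costlier trust and policy evaluations are reached.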

Stat 01: 51% of global web traffic is now automated (Imperva, 2024)

Stat 02: 37% classified as bad bots, designed to mimic human signals (Imperva, 2024)

Stat 03: 5 layers in Keigen's verification infrastructure: Intent, Attention, Trust, Policy, Governance

Stat 04: 4 commercial surfaces verified: intent verification, work verification, activity authenticity, decision auditability

The next trust problem is not just identity. It is action quality over time.

A system can know who is present and still fail to decide whether behaviour deserves permission, incentive, trust, or review. That gap is where synthetic engagement, incentive farming, and AI-driven abuse now grow.

Most systems still suffer from event amnesia: they treat a user's tenth action as if it were their first. Keigen's framework replaces isolated snapshots with behavioural continuity.

51% of web traffic now automated; 37% classified as bad bots.

Source: Imperva / Thales, Sept. 2024.

AMS at a glance

3 actor classes — verified human, authorised delegated actor, unauthorised automation
5 verification layers — Intent, Attention, Trust, Policy, Governance
4 capability areas — Intent Verification, Attention Authenticity, Agent Accountability, Decision Auditability
4-stage process — how the architecture operates in practice, covered below

What this architecture does

From verifying humans to verifying delegated actors

AMS provides a way to answer progressively harder verification questions as systems become more AI-mediated.

Signal

Is this signal real?

Filter manipulated, inflated, or misleading activity before it influences reporting, spend, or operational decisions.

Work

Was this work completed correctly?

Verify that delegated work by offshore teams, vendors, or AI-assisted workflows meets the right actor, scope, and evidence thresholds before it is accepted or paid.

Actor

Is this actor authorised?

As AI agents and delegated automation participate in commercial and operational flows, verify whether they are authorised, operating within scope, and producing accountable outcomes.

Decision

Can this decision withstand review?

Support policy-bounded accountability for AI-assisted decisions by preserving enough evidence for explanation, escalation, and reversal where required.

The Model

Five layers of governed trust

Each layer asks a different question. Together, they form a complete chain from observation to governed action.

What does the actor appear to want, and how credible is that direction of action?

The Intent Layer detects short-term demand for scarce attention. It does not declare final value; it observes whether a user or actor is actively seeking allocation now. Typical signals: dwell time, repeat visits, completion behaviour, and task progression.
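As an illustration of how the listed signals (dwell time, repeat visits, completion behaviour, task progression) might be blended into a single credibility score, consider the sketch below. The weights and saturation points are invented assumptions, not a published Keigen formula.

```python
def intent_score(dwell_seconds, repeat_visits, completion_rate, tasks_progressed):
    """Blend the intent signals named above into one score in [0, 1].

    Weights and caps are illustrative; a real system would calibrate them.
    """
    dwell = min(dwell_seconds / 300.0, 1.0)   # saturate at five minutes
    visits = min(repeat_visits / 5.0, 1.0)    # saturate at five return visits
    tasks = min(tasks_progressed / 4.0, 1.0)  # saturate at four funnel steps
    return round(0.3 * dwell + 0.25 * visits
                 + 0.25 * completion_rate + 0.2 * tasks, 3)

print(intent_score(240, 3, 0.9, 2))  # sustained, progressing actor: high score
print(intent_score(5, 1, 0.0, 0))    # drive-by signal: near zero
```

Capping each signal keeps any one dimension (for example, a bot hammering repeat visits) from dominating the composite score.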

What are they actually spending, sustaining, or signalling through behaviour over time?

The Attention Layer captures what the actor is spending, sustaining, or signalling through observable behaviour. It transforms fleeting intent into measurable engagement: time invested, depth of interaction, consistency of return, and quality of participation.

Has the pattern of action earned greater permission, confidence, or reward?

The Trust Layer determines whether a demand signal deserves repeated financing. It estimates whether the actor is likely to remain stable, cooperative, authentic, and low-loss over time. Trust is not simply reputation or a moral score. It is a system-level estimate of future allocability — long-term access capital.
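One way to make "a system-level estimate of future allocability" concrete is a score that drifts toward neutral during inactivity and updates asymmetrically with verified outcomes. This is a minimal sketch under invented constants (half-life, step sizes), not Keigen's trust model.

```python
def update_trust(prior, outcome_good, elapsed_days, half_life_days=30.0):
    """Update a system-level trust estimate in [0, 1].

    Trust decays toward a neutral 0.5 during inactivity, then moves with
    each verified outcome; losses cost more than gains.
    """
    decay = 0.5 ** (elapsed_days / half_life_days)
    decayed = 0.5 + (prior - 0.5) * decay     # drift toward neutral over time
    step = 0.1 if outcome_good else -0.2      # asymmetric update
    return max(0.0, min(1.0, decayed + step))

t = 0.5
for outcome in [True, True, True]:            # three clean, verified outcomes
    t = update_trust(t, outcome, elapsed_days=1)
print(round(t, 3))                            # trust earned gradually
print(round(update_trust(t, False, elapsed_days=60), 3))  # long gap, bad outcome
```

The asymmetry (gains of 0.1, losses of 0.2) reflects the economic-resistance idea later in the document: trust must be cheap to lose and expensive to rebuild.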

What thresholds, exceptions, routing rules, and escalation paths should be applied?

The Policy Layer governs the ecosystem. It decides intent qualification thresholds, trust qualification thresholds, when to prompt, review, quarantine, or recover, when to cap issuance, and how aggressive or conservative the system should be. Policy is the civilisational rule layer.
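The routing the Policy Layer performs (qualify, prompt, review, quarantine) can be sketched as a small decision function. The thresholds and flag names below are hypothetical placeholders, not Keigen policy defaults.

```python
def policy_decision(intent_score, trust_score, risk_flags,
                    intent_threshold=0.6, trust_threshold=0.5):
    """Route an actor to one of the policy actions the text lists.

    Thresholds and rules are illustrative; a real policy layer would
    tune them per ecosystem and adjust them over time.
    """
    if "coordinated_manipulation" in risk_flags:
        return "quarantine"               # hard stop regardless of scores
    if intent_score < intent_threshold:
        return "hold"                     # not yet qualified: no value release
    if trust_score < trust_threshold:
        return "review"                   # qualified intent, unproven actor
    return "allow"

print(policy_decision(0.8, 0.7, set()))                         # allow
print(policy_decision(0.8, 0.3, set()))                         # review
print(policy_decision(0.9, 0.9, {"coordinated_manipulation"}))  # quarantine
```

Keeping thresholds as parameters rather than constants is what lets the Governance layer make the system more aggressive or conservative without rewriting the rules.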

How should repeated action be managed across time, incentives, and changing system conditions?

Governance manages the system over time. It handles temporal pricing — persistence, cooling periods, decay, commitment strength — and risk estimation: fraud probability, bot exposure, coordinated manipulation, one-shot extraction. Governance ensures the rules themselves evolve as conditions change.
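Temporal pricing (persistence, cooling periods, decay) can be illustrated with a weighting function: a brand-new action counts for little during a cooling period, which blunts one-shot extraction, then its weight decays with age. All constants here are invented for the example.

```python
def effective_weight(base_value, days_since_action,
                     cooling_days=7, half_life_days=30):
    """Temporal pricing sketch: ramp up through a cooling period,
    then decay with a fixed half-life. Constants are illustrative."""
    if days_since_action < cooling_days:
        # Ramp from 0 to full weight: fresh actions cannot be cashed out at once.
        return base_value * days_since_action / cooling_days
    # Past the cooling period, weight halves every half_life_days.
    return base_value * 0.5 ** ((days_since_action - cooling_days) / half_life_days)

print(effective_weight(100, 0))   # brand-new action counts for nothing yet
print(effective_weight(100, 7))   # full weight at the end of the cooling period
print(effective_weight(100, 37))  # one half-life later, half the weight
```

The function is continuous at the cooling boundary, so an actor gains nothing by timing actions to the edge of the window.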

Why Both

Technical controls & economic resistance

AI is collapsing the marginal cost of fraud. The question stops being only “Can we block bots?” and becomes “Can we make abuse less profitable?”

Technical resistance

Real-time detection, trust-state decay, quarantine, and escalation. Systems that observe, classify, and respond to behavioural patterns as they unfold. Discernment at speed.

Economic resistance

Lower friction for trustworthy users, higher cost for synthetic engagement, selective review where it matters. Not just harder to abuse — unprofitable. Force through design.
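The shift from "can we block bots?" to "can we make abuse unprofitable?" is ultimately arithmetic: abuse stops when expected payout falls below expected cost. The numbers below are invented to show the shape of the calculation, not measured figures.

```python
def abuse_expected_value(payout, attempts, detection_rate, cost_per_attempt):
    """Expected profit of an incentive-farming run.

    Economic resistance works when detection and per-attempt friction
    push this below zero. All numbers are illustrative.
    """
    expected_payout = payout * attempts * (1 - detection_rate)
    return expected_payout - cost_per_attempt * attempts

# Without verification: cheap attempts, low detection, abuse is profitable.
print(abuse_expected_value(payout=5.0, attempts=1000,
                           detection_rate=0.05, cost_per_attempt=0.10))
# With verification: higher detection and per-attempt cost, abuse loses money.
print(abuse_expected_value(payout=5.0, attempts=1000,
                           detection_rate=0.90, cost_per_attempt=1.00))
```

Note that neither lever alone needs to be perfect: raising detection and raising per-attempt cost compound, which is why the architecture pairs technical and economic resistance.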

Every month without this layer, wasted budget compounds against reported performance.

From framework to capabilities

Four capabilities the AMS framework enables

Each capability corresponds to a class of verification decisions that systems increasingly need to make as AI mediation deepens.

Intent Verification

Distinguishing genuine, action-worthy intent from noise, low-quality signals, and manufactured demand.

Attention Authenticity

Verifying that engagement, participation, and attributed outcomes reflect real activity rather than gamed or inflated behaviour.

Agent Accountability

Ensuring delegated actors — humans or AI agents — operate within authorised scope and produce outcomes that can be reviewed.

Decision Auditability

Providing enough evidence that AI-assisted decisions can be explained, defended, or reversed when required.

Category boundary

What AMS is — and what it is not

What AMS is not

  • not just bot detection
  • not just identity verification
  • not just fraud scoring
  • not just workflow monitoring

What AMS is

A verification architecture for determining whether actions should count, trigger, pass, or be trusted over time.

Applied across surfaces

Each product implements specific AMS layers

Intent Verification

BuyerRecon

B2B buyer intelligence before form fill.

AMS: Intent (primary), Attention (secondary)

Attention Authenticity

RealBuyerGrowth

E-commerce and retail media campaign integrity, attention authenticity.

AMS: Attention (primary), Trust (secondary)

Agent Accountability

Time-to-Point

Verification of delegated work and execution.

AMS: Trust (primary), Policy (secondary)

Decision Auditability

Fidcern

Gives retail media operators a credible way to demonstrate trustworthy inventory; gives sponsors and brand advertisers independent evidence to assess where campaign spend is going.

AMS: Policy (primary), Governance (secondary)

AMS Core Papers, 2026

Core publications

AMS and BHF

Read the two core Keigen framework papers: the main AMS whitepaper and the BHF companion paper.

Companion paper

AMS Field Theory (BHF)

The operating condition behind governed allocation — substrate, container, and field logic for the AMS system.

“The deeper aim is to direct trust and attention toward deserving and proven intent — through dynamic policy and governance while preserving vitality.” — Helen, Founder

FAQ

Verification infrastructure: questions we hear most

How is verification infrastructure different from authentication or KYC?

Authentication and KYC ask whether a user is who they claim. Verification infrastructure asks whether observed behaviour represents serious commercial intent, regardless of identity status. A fully authenticated, logged-in user can still produce fake commercial signal through automation, incentive farming, or AI-mediated workflows. Verification infrastructure addresses that signal layer specifically. Identity is a prerequisite. Verification is the decision about whether to act.

What is intent verification?

Intent verification is the layer that determines whether observed engagement signals — page visits, content consumption, form submissions, repeat returns — represent genuine, serious buyer behaviour. It is what BuyerRecon performs for high-ticket B2B sales. The question is not "is this a real person" but "is this person showing the behaviour of someone who is actually evaluating a purchase."

What is work verification?

Work verification is the layer that confirms whether delivered work was completed as specified — particularly for distributed teams, offshore workforce, and AI-assisted execution. It is what Time-to-Point performs. Where AI tooling can produce credible-looking deliverables that mask incomplete underlying work, verification infrastructure provides the evidence layer that closes the gap.

What does the AMS framework refer to?

AMS is Keigen's verification infrastructure framework. It operates across five layers — Intent, Attention, Trust, Policy, Governance — designed to verify human and AI-agent participation in commercial systems. Read the AMS Whitepaper for the full architectural specification, and the AMS Field Theory paper for the operating-condition design (Benevolent Holding Field).

What does "verification before value release" mean?

It is the operating principle of Keigen's verification infrastructure: no commercial value — incentive, credit, allocation, payment — should be released until the underlying behaviour has been qualified across the relevant verification layers. This inverts the default pattern in most commercial systems, which release value first and audit later — often after value has already been extracted by automated or coordinated bad-faith actors.

Two concrete examples:

E-commerce promotion (RealBuyerGrowth). When a brand runs a 20%-off new-buyer campaign, the question is not whether the discount was redeemed but whether it reached genuine new buyers — or was siphoned by bot farms cycling fresh accounts to capture the same incentive repeatedly. RealBuyerGrowth performs the buyer-authenticity check before promotional budget is released, not after the leak.

Retail media and sponsor activation (Fidcern). A retail-media network, loyalty programme, or football commercial team needs to know whether attention is genuine before campaign value is released. Fidcern helps the operator verify participation quality, traffic cleanliness, and activation evidence before spend, rewards, or sponsor value are counted. The positioning is empowerment: it gives the operator a stronger evidence base to prove trustworthy inventory to advertisers and commercial partners.

Who uses verification infrastructure?

Keigen works with B2B companies, e-commerce operators, retail and retail-media networks, sponsors, brand advertisers, employers of AI agents, and operators of offshore workforces — across the UK, Japan, and China. Typical use cases: high-ticket B2B sales teams reducing pipeline pollution with intent verification (BuyerRecon); e-commerce operators verifying promotion and growth signal authenticity (RealBuyerGrowth); retail-media networks demonstrating trustworthy inventory plus football sponsors and brand advertisers auditing campaign-driven spend (Fidcern); and operations leaders verifying offshore or AI-assisted work delivery (Time-to-Point).