Signal
Is this signal real?
Filter manipulated, inflated, or misleading activity before it influences reporting, spend, or operational decisions.
AMS Architecture
The AMS architecture is a structured way of deciding whether activity is real, legitimate, authorised, and fit to be counted in systems that are increasingly mediated by AI and automated actors.
It progresses from abuse detection to actor classification, then toward authorised delegated actors and policy-bounded accountability for AI-assisted decisions.
Definition
Verification infrastructure is the layer of software and policy that decides whether observed behaviour in an AI-mediated system represents genuine commercial intent — and therefore whether value, permission, or resource allocation should move.
It sits above identity and authentication. Identity asks: who is this? Authentication asks: are they who they claim? Verification infrastructure asks a different question: does this pattern of action deserve commercial response?
In environments where signal can be produced by automated agents, AI-mediated workflows, or coordinated incentive farming, the cost of producing fake-but-credible behaviour has collapsed. Traditional trust infrastructure — built before AI mediation — was not designed to address this. Verification infrastructure is.
Keigen's verification infrastructure operates across five layers — Intent, Attention, Trust, Policy, Governance — each addressing a specific failure mode that emerges when participation, traffic, or work delivery becomes machine-mediated. The principle is simple: verification before value release. Trust as outcome, not assumption.
Stat 01
of global web traffic is now automated. Imperva, 2024
Stat 02
classified as bad bots — designed to mimic human signals. Imperva, 2024
Stat 03
layers in Keigen's verification infrastructure: Intent, Attention, Trust, Policy, Governance.
Stat 04
commercial surfaces verified: intent verification, work verification, activity authenticity, decision auditability.
A system can know who is present and still fail to decide whether behaviour deserves permission, incentive, trust, or review. That gap is where synthetic engagement, incentive farming, and AI-driven abuse now grow.
Most systems still suffer from event amnesia: they treat a user's tenth action as if it were their first. Keigen's framework replaces isolated snapshots with behavioural continuity.
Source: Imperva / Thales, Sept. 2024.
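The continuity idea can be made concrete with a minimal sketch: instead of scoring each event in isolation, keep an append-only history per actor so the tenth action is evaluated against the previous nine. The class and method names below are illustrative, not part of the AMS specification.

```python
class BehaviourLedger:
    """Minimal sketch of behavioural continuity: per-actor event history
    replaces isolated snapshots. Names here are illustrative only."""

    def __init__(self):
        self._history: dict[str, list] = {}

    def record(self, actor_id: str, event: dict) -> None:
        # Append-only: every observed event extends the actor's history.
        self._history.setdefault(actor_id, []).append(event)

    def event_count(self, actor_id: str) -> int:
        return len(self._history.get(actor_id, []))

    def is_first_seen(self, actor_id: str) -> bool:
        # A continuity-aware system can distinguish a first action
        # from a tenth one before deciding how to respond.
        return actor_id not in self._history
```

A real system would persist this history and attach derived features (recency, consistency, anomaly flags) rather than raw events, but the structural point is the same: the unit of evaluation is the actor's trajectory, not the single event.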
AMS at a glance
What this architecture does
AMS provides a way to answer progressively harder verification questions as systems become more AI-mediated.
Signal
Filter manipulated, inflated, or misleading activity before it influences reporting, spend, or operational decisions.
Work
Verify that delegated work by offshore teams, vendors, or AI-assisted workflows meets the right actor, scope, and evidence thresholds before it is accepted or paid.
Actor
As AI agents and delegated automation participate in commercial and operational flows, verify whether they are authorised, operating within scope, and producing accountable outcomes.
Decision
Support policy-bounded accountability for AI-assisted decisions by preserving enough evidence for explanation, escalation, and reversal where required.
The Model
Each layer asks a different question. Together, they form a complete chain from observation to governed action.
What does the actor appear to want, and how credible is that direction of action?
The Intent Layer detects short-term demand for scarce attention. It does not declare final value; it observes whether a user or actor is actively seeking allocation now. Typical signals: dwell time, repeat visits, completion behaviour, and task progression.
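The signals listed above could be combined into a single credibility score. A minimal sketch, with entirely illustrative weights and saturation points (none of these values come from the AMS specification):

```python
from dataclasses import dataclass

@dataclass
class IntentSignals:
    """Observable short-term signals the Intent Layer might weigh."""
    dwell_seconds: float      # time on commercially relevant pages
    repeat_visits: int        # distinct return sessions in the window
    completion_ratio: float   # 0..1 share of a key task actually finished
    task_progression: int     # ordered funnel steps reached

def intent_score(s: IntentSignals) -> float:
    """Toy credibility score in [0, 1]; weights and caps are illustrative,
    not Keigen's tuned values."""
    dwell = min(s.dwell_seconds / 300.0, 1.0)    # saturate at 5 minutes
    visits = min(s.repeat_visits / 3.0, 1.0)     # saturate at 3 returns
    steps = min(s.task_progression / 4.0, 1.0)   # saturate at 4 steps
    return 0.3 * dwell + 0.2 * visits + 0.3 * s.completion_ratio + 0.2 * steps
```

Note the score expresses direction and credibility of current demand only; whether to act on it is a question for the trust and policy layers.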
What are they actually spending, sustaining, or signalling through behaviour over time?
The Attention Layer captures what the actor is spending, sustaining, or signalling through observable behaviour. It transforms fleeting intent into measurable engagement: time invested, depth of interaction, consistency of return, and quality of participation.
Has the pattern of action earned greater permission, confidence, or reward?
The Trust Layer determines whether a demand signal deserves repeated financing. It estimates whether the actor is likely to remain stable, cooperative, authentic, and low-loss over time. Trust is not simply reputation or a moral score. It is a system-level estimate of future allocability — long-term access capital.
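Treating trust as a running estimate of future allocability, rather than a one-off label, can be sketched as an exponentially weighted update. The blend rule and alpha value are illustrative assumptions, not Keigen's actual model:

```python
def update_trust(prior: float, outcome: float, alpha: float = 0.2) -> float:
    """Blend prior trust with the latest qualified outcome
    (1.0 = clean behaviour, 0.0 = abusive). Alpha is an illustrative
    learning rate, not an AMS-specified constant."""
    return (1 - alpha) * prior + alpha * outcome
```

The structural point is that each new qualified outcome shifts the estimate only partially, so a single clean action cannot instantly buy high trust and a single anomaly does not instantly destroy an established record.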
What thresholds, exceptions, routing rules, and escalation paths should be applied?
The Policy Layer governs the ecosystem. It sets intent and trust qualification thresholds, decides when to prompt, review, quarantine, or recover, when to cap issuance, and how aggressive or conservative the system should be. Policy is the civilisational rule layer.
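At its simplest, a policy layer of this kind reduces to configurable thresholds and routing. The threshold values and route names below are assumptions for illustration, not Keigen's production configuration:

```python
from enum import Enum

class Route(Enum):
    PASS = "pass"
    REVIEW = "review"
    QUARANTINE = "quarantine"

def route(intent: float, trust: float,
          intent_min: float = 0.6, trust_min: float = 0.5) -> Route:
    """Policy-layer routing sketch: thresholds are configuration, not
    hard-coded model output. Values here are illustrative."""
    if intent < intent_min:
        # Weak intent plus weak trust is quarantined; weak intent from
        # an established actor escalates to human review instead.
        return Route.QUARANTINE if trust < trust_min else Route.REVIEW
    # Credible intent: pass if trust is established, otherwise review.
    return Route.PASS if trust >= trust_min else Route.REVIEW
```

Making thresholds and routes explicit configuration is what lets the same detection machinery be run aggressively in one deployment and conservatively in another.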
How should repeated action be managed across time, incentives, and changing system conditions?
Governance manages the system over time. It handles temporal pricing — persistence, cooling periods, decay, commitment strength — and risk estimation: fraud probability, bot exposure, coordinated manipulation, one-shot extraction. Governance ensures the rules themselves evolve as conditions change.
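Temporal pricing, for example decay toward a neutral baseline during inactivity and fixed cooling windows after incidents, can be sketched as follows. Half-life, baseline, and window lengths are illustrative assumptions, not AMS-specified values:

```python
def decay_trust(trust: float, idle_days: float,
                half_life_days: float = 30.0, baseline: float = 0.5) -> float:
    """Decay an inactive actor's trust toward a neutral baseline:
    trust is an estimate of *future* allocability, so it loses force
    when no fresh behaviour is observed. Parameters are illustrative."""
    factor = 0.5 ** (idle_days / half_life_days)
    return baseline + (trust - baseline) * factor

def in_cooling_period(days_since_incident: float,
                      cooling_days: float = 14.0) -> bool:
    """Hold elevated scrutiny for a fixed window after a flagged incident."""
    return days_since_incident < cooling_days
```

The same decay mechanics also make one-shot extraction less attractive: accumulated standing cannot be banked indefinitely and cashed in later.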
Why Both
AI is collapsing the marginal cost of fraud. The question stops being only “Can we block bots?” and becomes “Can we make abuse less profitable?”
Real-time detection, trust-state decay, quarantine, and escalation. Systems that observe, classify, and respond to behavioural patterns as they unfold. Discernment at speed.
Lower friction for trustworthy users, higher cost for synthetic engagement, selective review where it matters. Not just harder to abuse — unprofitable. Force through design.
Every month without this layer, wasted budget compounds against reported performance.
From framework to capabilities
Each capability corresponds to a class of verification decisions that systems increasingly need to make as AI mediation deepens.
Distinguishing genuine, action-worthy intent from noise, low-quality signals, and manufactured demand.
Verifying that engagement, participation, and attributed outcomes reflect real activity rather than gamed or inflated behaviour.
Ensuring delegated actors — humans or AI agents — operate within authorised scope and produce outcomes that can be reviewed.
Providing enough evidence that AI-assisted decisions can be explained, defended, or reversed when required.
Category boundary
What AMS is not
What AMS is
A verification architecture for determining whether actions should count, trigger, pass, or be trusted over time.
Applied across surfaces
Intent Verification
B2B buyer intelligence before form fill.
AMS: Intent (primary), Attention (secondary)
Attention Authenticity
E-commerce and retail media campaign integrity, attention authenticity.
AMS: Attention (primary), Trust (secondary)
Agent Accountability
Verification of delegated work and execution.
AMS: Trust (primary), Policy (secondary)
Decision Auditability
Gives retail media operators a credible way to demonstrate trustworthy inventory; gives sponsors and brand advertisers independent evidence to assess where campaign spend is going.
AMS: Policy (primary), Governance (secondary)
Continue
Core publications
Read the two core Keigen framework papers: the main AMS whitepaper and the BHF companion paper.
Companion paper
The operating condition behind governed allocation — substrate, container, and field logic for the AMS system.
“The deeper aim is to direct trust and attention toward deserving and proven intent — through dynamic policy and governance while preserving vitality.” — Helen, Founder
FAQ
Authentication and KYC ask whether a user is who they claim. Verification infrastructure asks whether observed behaviour represents serious commercial intent, regardless of identity status. A fully authenticated, logged-in user can still produce fake commercial signal — through automation, incentive farming, or AI-mediated workflows. Verification infrastructure addresses that signal layer specifically. Identity is a prerequisite. Verification is the decision about whether to act.
Intent verification is the layer that determines whether observed engagement signals — page visits, content consumption, form submissions, repeat returns — represent genuine, serious buyer behaviour. It is what BuyerRecon performs for high-ticket B2B sales. The question is not "is this a real person" but "is this person showing the behaviour of someone who is actually evaluating a purchase."
Work verification is the layer that confirms whether delivered work was completed as specified — particularly for distributed teams, offshore workforce, and AI-assisted execution. It is what Time-to-Point performs. Where AI tooling can produce credible-looking deliverables that mask incomplete underlying work, verification infrastructure provides the evidence layer that closes the gap.
AMS is Keigen's verification infrastructure framework. It operates across five layers — Intent, Attention, Trust, Policy, Governance — designed to verify human and AI-agent participation in commercial systems. Read the AMS Whitepaper for the full architectural specification, and the AMS Field Theory paper for the operating-condition design (Benevolent Holding Field).
It is the operating principle of Keigen's verification infrastructure: no commercial value — incentive, credit, allocation, payment — should be released until the underlying behaviour has been qualified across the relevant verification layers. This inverts the default pattern in most commercial systems, which release value first and audit later — often after value has already been extracted by automated or coordinated bad-faith actors.
Two concrete examples:
E-commerce promotion (RealBuyerGrowth). When a brand runs a 20%-off new-buyer campaign, the question is not whether the discount was redeemed but whether it reached genuine new buyers — or was siphoned by bot farms cycling fresh accounts to capture the same incentive repeatedly. RealBuyerGrowth performs the buyer-authenticity check before promotional budget is released, not after the leak.
Retail media and sponsor activation (Fidcern). A retail-media network, loyalty programme, or football commercial team needs to know whether attention is genuine before campaign value is released. Fidcern helps the operator verify participation quality, traffic cleanliness, and activation evidence before spend, rewards, or sponsor value are counted. The positioning is empowerment: it gives the operator a stronger evidence base to prove trustworthy inventory to advertisers and commercial partners.
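The release principle behind both examples can be sketched as a simple gate: value moves only when every relevant layer qualifies the behaviour. The specific checks and thresholds below are hypothetical illustrations, not Keigen's production rules:

```python
from typing import Callable, Iterable

def release_value(event: dict, checks: Iterable[Callable[[dict], bool]]) -> bool:
    """Verification before value release: every relevant layer check must
    qualify the behaviour before any incentive, credit, or payment moves."""
    return all(check(event) for check in checks)

# Hypothetical layer checks for a promotion event (thresholds illustrative).
layer_checks = [
    lambda e: e["intent_score"] >= 0.6,     # Intent layer qualified
    lambda e: e["attention_minutes"] >= 2,  # Attention layer qualified
    lambda e: e["trust_score"] >= 0.5,      # Trust layer qualified
]
```

The gate inverts the audit-later default: an unqualified event simply never triggers a payout, rather than being clawed back after the leak.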
Keigen works with B2B companies, e-commerce operators, retail and retail-media networks, sponsors, brand advertisers, employers of AI agents, and operators of offshore workforces — across the UK, Japan, and China. Typical use cases: high-ticket B2B sales teams reducing pipeline pollution with intent verification (BuyerRecon); e-commerce operators verifying promotion and growth signal authenticity (RealBuyerGrowth); retail-media networks demonstrating trustworthy inventory plus football sponsors and brand advertisers auditing campaign-driven spend (Fidcern); and operations leaders verifying offshore or AI-assisted work delivery (Time-to-Point).