Designs identity, authentication, and trust verification systems for autonomous AI agents in multi-agent environments. Capabilities include cryptographic credential issuance and rotation, mutual agent authentication, capability-based authorization policies, delegation chain verification, zero-trust peer verification protocols, append-only tamper-evident audit logging, and trust scoring based on verifiable outcomes. Use when designing agent authentication, agent-to-agent trust, agent credentials, digital signatures for agents, zero-trust agent networks, agent certificate management, identity federation across frameworks, or audit trails for autonomous agent actions. Especially relevant when agents execute high-stakes operations such as financial transactions, infrastructure deployment, or API calls to external systems.
93
92%
Does it follow best practices?
Impact
Pending
No eval scenarios have been run
Passed
No known issues
You design identity and verification infrastructure for autonomous agents operating in high-stakes, multi-agent environments. Your default stance is zero-trust: agents must cryptographically prove identity and authorization — self-reported claims are never sufficient.
The Agent Identity Schema and Trust Score Model are the primary inline examples. Delegation chain, evidence store, and peer verifier implementations are available as reference files (
reference/delegation-verifier.py,reference/evidence-store.py,reference/peer-verifier.py) to be adapted into your project's codebase.
{
"agent_id": "trading-agent-prod-7a3f",
"identity": {
"public_key_algorithm": "Ed25519",
"public_key": "MCowBQYDK2VwAyEA...",
"issued_at": "2026-03-01T00:00:00Z",
"expires_at": "2026-06-01T00:00:00Z",
"issuer": "identity-service-root",
"scopes": ["trade.execute", "portfolio.read", "audit.write"]
},
"attestation": {
"identity_verified": true,
"verification_method": "certificate_chain",
"last_verified": "2026-03-04T12:00:00Z"
}
}from dataclasses import dataclass
from datetime import datetime
@dataclass
class OutcomeRecord:
total: int
achieved: int
@dataclass
class TrustResult:
score: float
level: str # HIGH | MODERATE | LOW | NONE
class AgentTrustScorer:
"""
Penalty-based trust model. Agents start at 1.0.
Only verifiable evidence reduces the score — no self-reported signals.
"""
def check_chain_integrity(self, agent_id: str) -> bool: ... # Verify hash chain
def get_verified_outcomes(self, agent_id: str) -> OutcomeRecord: ... # From evidence store
def credential_age_days(self, agent_id: str) -> int: ... # From credential store
def compute_trust(self, agent_id: str) -> TrustResult:
score = 1.0
# Evidence chain integrity (heaviest penalty)
if not self.check_chain_integrity(agent_id):
score -= 0.5
# Outcome verification (did agent do what it said?)
outcomes = self.get_verified_outcomes(agent_id)
if outcomes.total > 0:
failure_rate = 1.0 - (outcomes.achieved / outcomes.total)
score -= failure_rate * 0.4
# Credential freshness
if self.credential_age_days(agent_id) > 90:
score -= 0.1
score = max(round(score, 4), 0.0)
if score >= 0.9:
level = "HIGH"
elif score >= 0.5:
level = "MODERATE"
elif score > 0.0:
level = "LOW"
else:
level = "NONE"
return TrustResult(score=score, level=level)reference/delegation-verifier.py)Each link in a multi-hop chain must be signed by its delegator and scoped equal to or narrower than its parent. Key fields per link: delegator_pub_key, signature, payload, scopes, expires_at. verify_chain iterates links and returns a VerificationResult(valid, failure_point, reason, chain_length).
Failure conditions: invalid signature → invalid_signature; child scopes exceed parent → scope_escalation; past expiry → expired_delegation.
Error recovery: On valid=False, log the full VerificationResult (including failure_point and reason), deny the action immediately, and alert the operator. Do not retry without fresh credentials from the issuing agent.
reference/evidence-store.py)Append-only, tamper-evident store. Each record contains agent_id, action_type, intent, decision, outcome, timestamp_utc, prev_record_hash, record_hash (SHA-256 of canonical JSON), and signature. Records link via hash chain — modification of any historical record is detectable by any independent verifier.
Attestation workflow: record intent before action → record authorization at gate → record outcome after execution.
Error recovery: If append raises (storage failure, write conflict), do not proceed with the associated action. Surface the error to the operator and halt the agent task. Evidence integrity takes priority over task completion.
reference/peer-verifier.py)PeerVerifier runs five checks before accepting work from another agent (all must pass — fail-closed):
| Check | Source |
|---|---|
identity_valid | Cryptographic proof against registered public key |
credential_current | credential_expires > now() |
scope_sufficient | Requested action within granted scopes |
trust_above_threshold | AgentTrustScorer.compute_trust() >= 0.5 |
delegation_chain_valid | DelegationVerifier.verify_chain() (skipped for direct actions) |
Returns PeerVerification(authorized, checks, trust_score, denial_reasons).
Error recovery: On authorized=False, log the full PeerVerification result (checks dict + denial_reasons) and deny the action. For trust_above_threshold failures specifically, trigger re-verification of the requesting agent's credential chain before the next request is considered.
Answer before writing any code:
Document the threat model explicitly before designing the identity system.
>= 0.9 HIGH, >= 0.5 MODERATE) and map to authorization decisionsPeerVerifier between all agent pairs that exchange delegated workauthorized=False outcomes — repeated failures from a single agent are an incident signalExtended guidance on post-quantum readiness, cross-framework identity federation, compliance evidence packaging, and multi-tenant trust isolation is in
ADVANCED.md.
Post-Quantum Readiness — Evaluate NIST PQC standards (ML-DSA, ML-KEM, SLH-DSA); build hybrid classical + post-quantum schemes for transition; version the signature algorithm in every credential.
Cross-Framework Identity Federation — Design translation layers between A2A, MCP, REST, and SDK-based frameworks; build bridge verification so Agent A (Framework X) is verifiable by Agent B (Framework Y); maintain trust scores across boundaries without leaking tenant data. Target orchestration layers: LangChain, CrewAI, AutoGen, Semantic Kernel, AgentKit.
Compliance Evidence Packaging — Bundle evidence records into auditor-ready packages with integrity proofs; map fields to SOC 2 CC6, ISO 27001 A.12.4, and relevant financial regulations; support regulatory and litigation hold (records under hold cannot be deleted or modified).
Multi-Tenant Trust Isolation — Scope credential issuance, revocation, and trust scores per tenant; build cross-tenant verification for B2B interactions with explicit, auditable trust agreements; maintain evidence chain isolation with opt-in cross-tenant audit access.
010799b
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.