| Passport | Universal identity primitive for models, compute nodes, tools, datasets, and agents. Each passport has a type, metadata, and capabilities. See Passports. |
| Receipt | Cryptographic proof of an AI inference, containing input/output hashes, an Ed25519 signature, and a timestamp. See Receipts. |
| MMR | Merkle Mountain Range — append-only data structure used to batch receipts into epochs with efficient inclusion proofs. See MMR. |
| Epoch | A batch of receipts whose MMR root is anchored to the Solana blockchain as a single transaction. See Epochs. |
| Anchoring | The process of committing an epoch’s MMR root hash to a Solana on-chain account, providing blockchain-level immutability. |
| Session Signer | Server-side Ed25519 keypair that signs every receipt for cryptographic authenticity. See Session Signer. |
| Inclusion Proof | Merkle proof demonstrating that a specific receipt is part of an anchored epoch’s MMR. |
| TrustGate | OpenAI-compatible LLM proxy that routes inference requests via passport matching and generates a receipt for each inference. See TrustGate. |
| MCPGate | MCP tool gateway with 88+ built-in servers, credential management, session budgets, and semantic discovery. See MCPGate. |
| MCP | Model Context Protocol — open standard by Anthropic for connecting AI models to external tools and data sources. |
| Policy | Rules for model matching — defines required capabilities, cost constraints, latency targets, and provider preferences. |
| Matching Engine | The @raijinlabs/passport service that selects the optimal model or compute node based on a policy. |
| Credential Adapter | Pluggable interface for resolving service credentials in MCPGate (EnvVar, Database, or Composite adapters). |
| Session Budget | Per-session resource cap in MCPGate — limits tool calls, cost, or duration with hard/soft enforcement. |
| Agent Orchestrator | Service managing the plan → accomplish → execute → validate lifecycle for autonomous AI agents. |
| Payout | Revenue split calculation among the model provider, compute provider, and platform for each inference. |
| BYOK | Bring Your Own Key — use your own LLM provider API keys through Lucid, paying only infrastructure fees. |
| iGas | Inference gas — Solana fee unit consumed when anchoring inference receipts on-chain. |
| mGas | Memory gas — Solana fee unit consumed when storing agent memory on-chain. |
| LiteLLM | Unified LLM provider interface used by TrustGate to route requests to OpenAI, Anthropic, Google, Mistral, and open-source models. |
| OpenMeter | Usage metering service that tracks API requests, tokens, receipts, and costs in real time. |
| FlowSpec | Visual workflow specification format (n8n-compatible) for defining multi-step AI pipelines. |
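The Receipt, MMR, and Inclusion Proof entries above can be illustrated with a minimal sketch: a batch of receipt hashes is committed to a single root, and a short sibling path proves that any one receipt belongs to the batch. This is a simplified stand-in, assuming a plain binary Merkle tree with SHA-256 rather than a full Merkle Mountain Range, and every name in it is hypothetical, not the Lucid API.

```python
import hashlib

def h(data: bytes) -> bytes:
    # SHA-256, standing in for whatever hash the real system uses.
    return hashlib.sha256(data).digest()

def _next_level(level):
    # Pair adjacent nodes; promote an odd trailing node unchanged.
    nxt = [h(level[i] + level[i + 1]) for i in range(0, len(level) - 1, 2)]
    if len(level) % 2:
        nxt.append(level[-1])
    return nxt

def merkle_root(leaves):
    # Reduce leaf hashes level by level down to a single root.
    level = leaves[:]
    while len(level) > 1:
        level = _next_level(level)
    return level[0]

def inclusion_proof(leaves, index):
    # Collect (sibling_hash, sibling_is_on_left) pairs from leaf to root.
    proof, level, i = [], leaves[:], index
    while len(level) > 1:
        sib = i ^ 1
        if sib < len(level):
            proof.append((level[sib], sib < i))
        level, i = _next_level(level), i // 2
    return proof

def verify(leaf, proof, root):
    # Re-hash up the path and compare against the committed root.
    acc = leaf
    for sibling, on_left in proof:
        acc = h(sibling + acc) if on_left else h(acc + sibling)
    return acc == root

# Hash a batch of hypothetical receipts, commit them to one root
# (the value that would be anchored on-chain), and prove inclusion.
receipts = [h(f"receipt-{n}".encode()) for n in range(5)]
root = merkle_root(receipts)
proof = inclusion_proof(receipts, 3)
```

A verifier that holds only the anchored root can now check `verify(receipts[3], proof, root)` without seeing the other receipts, which is what makes batching into epochs cheap: one on-chain transaction covers arbitrarily many receipts.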
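Similarly, the Policy and Matching Engine entries describe a selection step that can be sketched as a filter-then-rank pass: discard models that fail the policy's hard constraints, then rank the survivors. The field names (`capabilities`, `latency_ms`, `cost_per_1k`) are illustrative assumptions, not the actual @raijinlabs/passport schema.

```python
# Hypothetical policy-matching sketch. Field names are illustrative
# assumptions, not the @raijinlabs/passport schema.

def match(models, policy):
    candidates = [
        m for m in models
        if policy["capabilities"] <= m["capabilities"]     # required capabilities covered
        and m["latency_ms"] <= policy["max_latency_ms"]    # latency target met
        and m["cost_per_1k"] <= policy["max_cost_per_1k"]  # cost ceiling respected
    ]
    # Rank surviving candidates by cost; None if nothing qualifies.
    return min(candidates, key=lambda m: m["cost_per_1k"], default=None)

models = [
    {"name": "model-a", "capabilities": {"chat"}, "latency_ms": 300, "cost_per_1k": 0.5},
    {"name": "model-b", "capabilities": {"chat", "tools"}, "latency_ms": 800, "cost_per_1k": 0.2},
    {"name": "model-c", "capabilities": {"chat", "tools"}, "latency_ms": 400, "cost_per_1k": 0.9},
]
policy = {"capabilities": {"chat", "tools"}, "max_latency_ms": 500, "max_cost_per_1k": 1.0}
best = match(models, policy)  # model-b is too slow, model-a lacks "tools"
```

Treating capabilities and latency as hard filters and cost as the ranking key is one plausible design; a production matcher could equally weight several objectives or honor provider preferences as tie-breakers.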