TrustGate is Lucid's AI gateway: an OpenAI-compatible LLM proxy that handles authentication, quota enforcement, model routing, and metering for every inference request.

## Documentation Index
Fetch the complete documentation index at: https://docs.lucid.foundation/llms.txt
Use this file to discover all available pages before exploring further.
## How Routing Works

### Wildcard Routing
TrustGate uses LiteLLM with wildcard routing (provider/* patterns). Any model from a configured provider works instantly without configuration changes. When a provider releases a new model, it is available immediately.
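The wildcard matching described above can be sketched in a few lines. This is a minimal illustration, not TrustGate's actual implementation; the `ROUTES` list is a hypothetical stand-in for the gateway's provider configuration.

```python
import fnmatch

# Hypothetical provider config: one "provider/*" pattern per configured provider.
ROUTES = ["openai/*", "anthropic/*"]

def matches_route(model: str) -> bool:
    """Return True if some configured wildcard pattern covers the model name."""
    return any(fnmatch.fnmatch(model, pattern) for pattern in ROUTES)

print(matches_route("openai/gpt-4.1"))        # True: any openai/ model matches
print(matches_route("unconfigured/model-x"))  # False: no provider pattern covers it
```

Because the match is on the prefix pattern rather than an explicit model list, a newly released model under a configured provider is routable with no config change.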
### Model Name Resolution
The model router resolves names in this order:

1. Passport lookup: checks `api_model_id` in passport metadata (preferred)
2. Prefixed names: `openai/gpt-4.1` routes directly to the OpenAI provider
3. Backward-compat aliases: unprefixed names like `gpt-4o` map to `openai/gpt-4o`
### Availability Filtering
The `/v1/models` endpoint supports a tri-state availability filter:
| Query | Returns |
|---|---|
| `?available=true` | Only models that can serve inference now |
| `?available=false` | Only models missing compute (useful for debugging) |
| (omitted) | All models regardless of availability |
For self-hosted models (safetensors, gguf), availability requires at least one healthy compute node with a compatible runtime, sufficient VRAM, and a recent heartbeat (within 30s).
## Plan-Based Access

Access to models is governed by tenant plan tiers:

| Plan | Requests/Day | Features |
|---|---|---|
| Free | 1,000 | Basic routing |
| Pro | 50,000 | Streaming, chains, plugins |
| Growth | 500,000 | Custom servers, priority support |
| Internal | Unlimited | All features |
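A daily-quota check against these tiers might look like the sketch below, where `None` encodes the Internal plan's unlimited allowance. The limit map and function name are illustrative, not TrustGate's API.

```python
# Hypothetical limit table mirroring the plan tiers above; None = unlimited.
DAILY_LIMITS = {"free": 1_000, "pro": 50_000, "growth": 500_000, "internal": None}

def within_quota(plan: str, requests_today: int) -> bool:
    """True if the tenant may make another request today."""
    limit = DAILY_LIMITS[plan]
    return limit is None or requests_today < limit

print(within_quota("free", 999))        # True: under the 1,000/day limit
print(within_quota("free", 1_000))      # False: limit reached
print(within_quota("internal", 10**9))  # True: unlimited
```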