Lucid agents can use any hosted LLM provider without managing infrastructure. TrustGate acts as a unified proxy, routing requests to the right provider while automatically generating cryptographic receipts for every call.

## Documentation Index
Fetch the complete documentation index at: https://docs.lucid.foundation/llms.txt
Use this file to discover all available pages before exploring further.
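As a sketch of that discovery step, the snippet below fetches the index and pulls the page URLs out of its markdown links. The link-extraction regex and the sample line are illustrative assumptions about the index's markdown shape, not guarantees about its exact contents.

```python
import re
from urllib.request import urlopen

LLMS_TXT_URL = "https://docs.lucid.foundation/llms.txt"

def extract_page_urls(llms_txt: str) -> list[str]:
    # llms.txt is a markdown index; pull the target URL out of each markdown link.
    return re.findall(r"\]\((https?://[^)\s]+)\)", llms_txt)

def fetch_index(url: str = LLMS_TXT_URL) -> list[str]:
    # Live fetch; requires network access.
    with urlopen(url) as resp:
        return extract_page_urls(resp.read().decode("utf-8"))

# Offline demonstration on a hypothetical index line:
sample = "- [Hosted Models](https://docs.lucid.foundation/hosted-models): provider routing"
print(extract_page_urls(sample))  # ['https://docs.lucid.foundation/hosted-models']
```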
## Supported Providers
TrustGate uses LiteLLM wildcard routing, so any model from a configured provider works immediately: no config changes are needed when providers release new models.

| Provider | Model Pattern | Example |
|---|---|---|
| OpenAI | openai/* | openai/gpt-4.1, openai/gpt-4o |
| Anthropic | anthropic/* | anthropic/claude-sonnet-4-20250514 |
| Google Gemini | gemini/* | gemini/gemini-2.5-pro |
| Mistral | mistral/* | mistral/mistral-large-latest |
| Groq | groq/* | groq/llama-3.1-70b |
| DeepSeek | deepseek/* | deepseek/deepseek-chat |
Hosted models (`format=api`) are always marked as available, since they route through TrustGate and require no dedicated compute.
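Wildcard routing can be illustrated with a small client-side check: a model name routes if it matches any configured pattern, so a brand-new model from an existing provider matches with no config change. The pattern list mirrors the table above; `is_routable` is an illustrative helper, not part of the TrustGate API.

```python
import fnmatch

# Wildcard patterns from the provider table above.
ROUTING_PATTERNS = [
    "openai/*", "anthropic/*", "gemini/*", "mistral/*", "groq/*", "deepseek/*",
]

def is_routable(model: str) -> bool:
    # A model routes through TrustGate if any configured wildcard matches it.
    return any(fnmatch.fnmatch(model, pattern) for pattern in ROUTING_PATTERNS)

print(is_routable("anthropic/claude-sonnet-4-20250514"))  # True
print(is_routable("openai/some-model-released-tomorrow"))  # True: new model, no config change
print(is_routable("unknownprovider/model"))                # False
```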
## Launching an Agent with a Hosted Model
Using the CLI (Path B, no-code):

## Using the API Directly
All hosted models are accessible through the OpenAI-compatible inference endpoint. Bare model names are resolved to their provider-prefixed form (e.g. `gpt-4o` maps to `openai/gpt-4o`).
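A minimal sketch of that resolution plus an OpenAI-compatible request payload is below. The base URL is a placeholder for your deployment, and the rule that bare names resolve to the `openai/` prefix is taken from the mapping described above; `qualify_model` and `build_chat_request` are hypothetical helper names.

```python
import json

# Placeholder host; substitute your TrustGate deployment's endpoint.
BASE_URL = "https://trustgate.example.com/v1/chat/completions"

PROVIDER_PREFIXES = ("openai/", "anthropic/", "gemini/", "mistral/", "groq/", "deepseek/")

def qualify_model(name: str) -> str:
    # Bare names (e.g. "gpt-4o") resolve to the openai/ prefix, as noted above;
    # already-prefixed names pass through untouched.
    if name.startswith(PROVIDER_PREFIXES):
        return name
    return f"openai/{name}"

def build_chat_request(model: str, prompt: str) -> dict:
    # Standard OpenAI-compatible chat-completions payload.
    return {
        "model": qualify_model(model),
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("gpt-4o", "Summarize today's receipts.")
print(payload["model"])  # openai/gpt-4o
print(json.dumps(payload, indent=2))
```

POST this payload (with your API key) to the endpoint as you would to any OpenAI-compatible server.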
## Listing Available Models
The `?available=true` filter returns only models that can serve inference right now. For hosted models this is always true. For self-hosted models (`format=safetensors` or `gguf`), availability depends on healthy compute nodes with compatible hardware and a recent heartbeat (within 30 seconds).