Lucid Agent Runtime
The Lucid Agent Runtime is a pre-built Docker image designed for deploying AI agents without writing any code. It supports any OpenAI-compatible provider and automatically generates receipts when connected to the Lucid API.
Quick Start
To quickly deploy an AI agent using the Lucid Agent Runtime, you can use the following command:
```bash
lucid launch --runtime base --model gpt-4o --prompt "You are a helpful assistant" --target docker
```
Alternatively, you can manually run the Docker container:
```bash
docker run -p 3100:3100 \
  -e LUCID_MODEL=gpt-4o \
  -e LUCID_PROMPT="You are a helpful assistant" \
  -e PROVIDER_URL=https://your-provider-url \
  -e PROVIDER_API_KEY=your-key \
  -e LUCID_API_URL=https://api.lucid.foundation \
  -e LUCID_PASSPORT_ID=your-passport-id \
  ghcr.io/lucid-fdn/agent-runtime:v1.0.0
```
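Once the container is running, a quick smoke test against the built-in endpoints (described under Endpoints below) confirms that inference is wired up; the response shapes here are only sketched:

```bash
# Health check: reports passport, model, and whether receipts are enabled
curl -s http://localhost:3100/health

# Simple inference via /run (request body is { prompt, stream? })
curl -s -X POST http://localhost:3100/run \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Say hello in one sentence."}'
```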
Two Independent Concerns
The Lucid Agent Runtime separates concerns into two main areas:
- Inference (PROVIDER_URL): where LLM calls are directed. Any OpenAI-compatible endpoint works.
- Verification (LUCID_API_URL): receipts, identity, and reputation through the Lucid API.
These two concerns operate independently, allowing you to use any provider while still participating in the verified network.
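In configuration terms, the split is just two environment variables; a minimal sketch (the Ollama URL is one of the provider examples below):

```bash
# Inference: any OpenAI-compatible endpoint (here, a local Ollama server)
export PROVIDER_URL=http://localhost:11434/v1

# Verification: receipts, identity, and reputation via the Lucid API
# (omit this variable to run inference without verification)
export LUCID_API_URL=https://api.lucid.foundation
```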
Inference Providers
You can use any OpenAI-compatible endpoint as your inference provider. Here are some examples:
```bash
# Lucid Cloud (sign up at lucid.foundation)
PROVIDER_URL=<your-lucid-cloud-url>
PROVIDER_API_KEY=lk_...

# Ollama (local, free)
PROVIDER_URL=http://localhost:11434/v1

# LiteLLM (self-hosted proxy, 100+ providers)
PROVIDER_URL=http://localhost:4000

# vLLM (self-hosted GPU)
PROVIDER_URL=http://localhost:8000/v1

# OpenAI direct
PROVIDER_URL=https://api.openai.com/v1
PROVIDER_API_KEY=sk-...
```
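For example, pointing the runtime at a local Ollama server looks roughly like this (a sketch: host.docker.internal assumes Docker Desktop, and the model name is illustrative and must already be pulled in Ollama):

```bash
docker run -p 3100:3100 \
  -e LUCID_MODEL=llama3.1 \
  -e LUCID_PROMPT="You are a helpful assistant" \
  -e PROVIDER_URL=http://host.docker.internal:11434/v1 \
  ghcr.io/lucid-fdn/agent-runtime:v1.0.0
```

No PROVIDER_API_KEY is needed here because Ollama runs locally without authentication.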
Verification (Receipts + Reputation)
To connect to Lucid for verification, use the following setup:
```bash
# Connected to Lucid — receipts flow, reputation builds
LUCID_API_URL=https://api.lucid.foundation

# Not connected — inference works, no verification
# (just don't set LUCID_API_URL)
```
| Setup | Inference | Receipts | Reputation |
|---|---|---|---|
| Provider + Lucid API | Yes | Yes | Yes |
| Provider only | Yes | No | No |
| Lucid API only | No (no provider) | N/A | N/A |
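The provider-only row corresponds to simply leaving LUCID_API_URL unset; a sketch reusing the OpenAI values from the provider examples above:

```bash
# Inference works, but no receipts are generated and no reputation is built
docker run -p 3100:3100 \
  -e LUCID_MODEL=gpt-4o \
  -e LUCID_PROMPT="You are a helpful assistant" \
  -e PROVIDER_URL=https://api.openai.com/v1 \
  -e PROVIDER_API_KEY=sk-... \
  ghcr.io/lucid-fdn/agent-runtime:v1.0.0
```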
Endpoints
The Lucid Agent Runtime provides several endpoints:
| Endpoint | Method | Description |
|---|---|---|
| /health | GET | Health check (returns passport, model, TrustGate status) |
| /run | POST | Simple inference ({ prompt, stream? }) |
| /v1/chat/completions | POST | OpenAI-compatible chat API |
| /.well-known/agent.json | GET | A2A discovery (if LUCID_A2A_ENABLED=true) |
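Because /v1/chat/completions is OpenAI-compatible, existing OpenAI clients and tooling can point at the runtime directly. A curl sketch using the standard OpenAI chat request format (whether the model field overrides LUCID_MODEL is not specified here):

```bash
curl -s -X POST http://localhost:3100/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-4o",
        "messages": [
          {"role": "user", "content": "Explain what a receipt is in one sentence."}
        ]
      }'
```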
Environment Variables
Configure your deployment using the following environment variables:
| Variable | Required | Default | Description |
|---|---|---|---|
| LUCID_MODEL | Yes | gpt-4o | Model identifier |
| LUCID_PROMPT | Yes | Generic | System prompt |
| PROVIDER_URL | Yes | Lucid Cloud | Any OpenAI-compatible inference endpoint |
| PROVIDER_API_KEY | If needed | - | API key for inference provider |
| LUCID_API_URL | Recommended | - | Lucid API for receipts + verification |
| LUCID_PASSPORT_ID | Auto | - | Injected by deployer |
| LUCID_TOOLS | No | - | Comma-separated tool passport IDs |
| LUCID_A2A_ENABLED | No | false | Enable A2A protocol discovery |
| PORT | No | 3100 | Server port |
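For anything beyond a one-off test, it can be tidier to keep these in an env file and pass it to Docker; all values below are illustrative:

```bash
# agent.env (illustrative values)
LUCID_MODEL=gpt-4o
LUCID_PROMPT=You are a helpful assistant
PROVIDER_URL=https://api.openai.com/v1
PROVIDER_API_KEY=sk-...
LUCID_API_URL=https://api.lucid.foundation
LUCID_A2A_ENABLED=true
```

```bash
docker run -p 3100:3100 --env-file agent.env \
  ghcr.io/lucid-fdn/agent-runtime:v1.0.0
```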
What’s Automatic
When LUCID_API_URL is set:
- A cryptographic receipt is generated for every inference call.
- Receipts contribute to the reputation oracle.
- Identity is attached to every receipt.
Always:
- The X-Lucid-Passport-Id header is included in every response.
- A health check is available at /health (indicating receipts: true/false).
- An OpenAI-compatible API is available at /v1/chat/completions.
- Structured error responses are provided.
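The first two points can be observed in a single request by printing response headers on the health check; a sketch:

```bash
# -i includes response headers, so X-Lucid-Passport-Id is visible next to the body
curl -si http://localhost:3100/health
# Expect an X-Lucid-Passport-Id header, plus a body whose receipts field is
# true when LUCID_API_URL is set and false otherwise
```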