Lucid Agent Runtime

The Lucid Agent Runtime is a pre-built Docker image designed for deploying AI agents without writing any code. It supports any OpenAI-compatible provider and automatically generates receipts when connected to the Lucid API.

Quick Start

To quickly deploy an AI agent using the Lucid Agent Runtime, you can use the following command:
lucid launch --runtime base --model gpt-4o --prompt "You are a helpful assistant" --target docker
Alternatively, you can manually run the Docker container:
docker run -p 3100:3100 \
  -e LUCID_MODEL=gpt-4o \
  -e LUCID_PROMPT="You are a helpful assistant" \
  -e PROVIDER_URL=https://your-provider-url \
  -e PROVIDER_API_KEY=your-key \
  -e LUCID_API_URL=https://api.lucid.foundation \
  -e LUCID_PASSPORT_ID=your-passport-id \
  ghcr.io/lucid-fdn/agent-runtime:v1.0.0
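Once the container is up, you can sanity-check the deployment from the host. This is a usage sketch assuming the default port mapping shown above (3100); the prompt text is illustrative:

```shell
# Health check: should return JSON with passport, model, and TrustGate status
curl -s http://localhost:3100/health

# Minimal inference call against the /run endpoint
curl -s -X POST http://localhost:3100/run \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Say hello in one sentence."}'
```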

Two Independent Concerns

The Lucid Agent Runtime separates two independent concerns:
  • Inference (PROVIDER_URL): where LLM calls are directed. Any OpenAI-compatible endpoint works.
  • Verification (LUCID_API_URL): receipts, identity, and reputation through the Lucid API.
Because these operate independently, you can use any provider while still participating in the verified network.

Inference Providers

You can use any OpenAI-compatible endpoint as your inference provider. Here are some examples:
# Lucid Cloud (sign up at lucid.foundation)
PROVIDER_URL=<your-lucid-cloud-url>
PROVIDER_API_KEY=lk_...

# Ollama (local, free)
PROVIDER_URL=http://localhost:11434/v1

# LiteLLM (self-hosted proxy, 100+ providers)
PROVIDER_URL=http://localhost:4000

# vLLM (self-hosted GPU)
PROVIDER_URL=http://localhost:8000/v1

# OpenAI direct
PROVIDER_URL=https://api.openai.com/v1
PROVIDER_API_KEY=sk-...
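As a concrete sketch, wiring the runtime to a local Ollama instance might look like this. The model name `llama3` is illustrative, and `host.docker.internal` is how a container typically reaches the host on Docker Desktop (on Linux, add `--add-host=host.docker.internal:host-gateway`):

```shell
# Run the agent against a local Ollama endpoint; no PROVIDER_API_KEY needed
docker run -p 3100:3100 \
  -e LUCID_MODEL=llama3 \
  -e LUCID_PROMPT="You are a helpful assistant" \
  -e PROVIDER_URL=http://host.docker.internal:11434/v1 \
  ghcr.io/lucid-fdn/agent-runtime:v1.0.0
```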

Verification (Receipts + Reputation)

To connect to Lucid for verification, use the following setup:
# Connected to Lucid — receipts flow, reputation builds
LUCID_API_URL=https://api.lucid.foundation

# Not connected — inference works, no verification
# (just don't set LUCID_API_URL)
| Setup | Inference | Receipts | Reputation |
|---|---|---|---|
| Provider + Lucid API | Yes | Yes | Yes |
| Provider only | Yes | No | No |
| Lucid API only | No (no provider) | N/A | N/A |

Endpoints

The Lucid Agent Runtime provides several endpoints:
| Endpoint | Method | Description |
|---|---|---|
| /health | GET | Health check (returns passport, model, TrustGate status) |
| /run | POST | Simple inference ({ prompt, stream? }) |
| /v1/chat/completions | POST | OpenAI-compatible chat API |
| /.well-known/agent.json | GET | A2A discovery (if LUCID_A2A_ENABLED=true) |
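Because /v1/chat/completions follows the OpenAI chat format, a standard OpenAI-style request body should work. A sketch, assuming the runtime is listening on the default port (the model and message content are illustrative):

```shell
# OpenAI-compatible chat request against the runtime
curl -s -X POST http://localhost:3100/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "What is a receipt in Lucid?"}
    ]
  }'
```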

Environment Variables

Configure your deployment using the following environment variables:
| Variable | Required | Default | Description |
|---|---|---|---|
| LUCID_MODEL | Yes | gpt-4o | Model identifier |
| LUCID_PROMPT | Yes | Generic | System prompt |
| PROVIDER_URL | Yes | Lucid Cloud | Any OpenAI-compatible inference endpoint |
| PROVIDER_API_KEY | If needed | - | API key for the inference provider |
| LUCID_API_URL | Recommended | - | Lucid API for receipts + verification |
| LUCID_PASSPORT_ID | Auto | - | Injected by deployer |
| LUCID_TOOLS | No | - | Comma-separated tool passport IDs |
| LUCID_A2A_ENABLED | No | false | Enable A2A protocol discovery |
| PORT | No | 3100 | Server port |

What’s Automatic

When LUCID_API_URL is set:
  • A cryptographic receipt is generated for every inference call.
  • Receipts contribute to the reputation oracle.
  • Identity is attached to every receipt.
Always:
  • The X-Lucid-Passport-Id header is included in every response.
  • A health check is available at /health (reports receipts: true/false).
  • An OpenAI-compatible API is available at /v1/chat/completions.
  • Structured error responses are provided.
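To see the always-on passport header, inspect response headers with `curl -i`. A sketch, assuming the runtime is running on localhost:3100:

```shell
# Every response carries the X-Lucid-Passport-Id header;
# -s silences progress, -i includes response headers in the output
curl -si http://localhost:3100/health | grep -i 'X-Lucid-Passport-Id'
```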