Lucid provides an OpenAI-compatible chat completions endpoint that routes to 100+ models with policy-based matching.

Chat Completions

const response = await lucid.chat.completions({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Explain quantum computing" }
  ],
  temperature: 0.7,
  max_tokens: 1000
});
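Since the endpoint is OpenAI-compatible, the reply text can be read from `choices[0].message.content`. A minimal sketch, assuming the standard OpenAI response field names (the response object is not spelled out above):

```javascript
// Extract the assistant reply from an OpenAI-compatible response.
// Field names follow the OpenAI chat-completions format and are
// assumptions, not confirmed Lucid-specific names.
function extractText(response) {
  return response.choices?.[0]?.message?.content ?? "";
}

// Illustration with a mock response object:
const mock = {
  choices: [{ message: { role: "assistant", content: "Hello!" } }],
};
console.log(extractText(mock)); // "Hello!"
```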

Policy-Based Matching

Instead of specifying a model directly, let Lucid match the best model for your request:
const match = await lucid.match.match({
  requirements: {
    capabilities: ["chat", "function-calling"],
    maxLatency: 2000,
    costTier: "standard"
  }
});

// Use the matched model
const response = await lucid.chat.completions({
  model: match.passportId,
  messages: [...]
});
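The two calls above can be folded into one helper that matches a model and immediately uses it. The `lucid` client methods are taken from the snippets above; the helper name is illustrative:

```javascript
// Match a model against the given requirements, then send the chat
// request to the matched model's passport ID. A sketch, not an
// official Lucid helper.
async function completeWithMatch(lucid, requirements, messages) {
  const match = await lucid.match.match({ requirements });
  return lucid.chat.completions({ model: match.passportId, messages });
}
```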

Match Explain

Get a detailed explanation of why a model was selected:
const explanation = await lucid.match.explain({
  requirements: { capabilities: ["vision"] }
});

console.log(explanation.reasoning);
console.log(explanation.alternatives);
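If `alternatives` is a list of candidate entries each carrying a `passportId` (an assumption — the docs above do not define its shape), the explanation can be condensed into a single log line:

```javascript
// Build a one-line summary from an explanation object. The
// alternatives entry shape ({ passportId }) is a guess for
// illustration only.
function summarizeExplanation(explanation) {
  const alts = (explanation.alternatives ?? [])
    .map((a) => a.passportId)
    .join(", ");
  return `${explanation.reasoning} (alternatives: ${alts})`;
}

// Illustration with a mock explanation:
const mockExplanation = {
  reasoning: "Best vision support",
  alternatives: [{ passportId: "m2" }],
};
console.log(summarizeExplanation(mockExplanation));
```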

Streaming

const stream = await lucid.chat.completions({
  model: "gpt-4o",
  messages: [...],
  stream: true
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
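The loop above can be wrapped in a helper that accumulates the full reply. The chunk shape follows the OpenAI-compatible delta format used in the loop; `mockStream` is a stand-in for a real response stream:

```javascript
// Accumulate streamed delta content into one string.
async function collectStream(stream) {
  let text = "";
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content || "";
  }
  return text;
}

// Mock stream for illustration; a real call would pass the object
// returned by lucid.chat.completions({ ..., stream: true }).
async function* mockStream() {
  yield { choices: [{ delta: { content: "Hel" } }] };
  yield { choices: [{ delta: { content: "lo" } }] };
}
```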

Receipt Generation

Every inference automatically generates a cryptographic receipt containing:
  • Receipt ID — Unique identifier
  • MMR proof — Merkle Mountain Range inclusion proof
  • Signature — Cryptographic signature from the session signer
  • Epoch — Batch anchor reference
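A quick structural check for these four components might look like the sketch below. The property names (`receiptId`, `mmrProof`, `signature`, `epoch`) are assumptions derived from the list above, not confirmed API names:

```javascript
// Verify that a receipt object carries all four components listed
// above. Property names are illustrative assumptions.
function hasReceiptFields(receipt) {
  return ["receiptId", "mmrProof", "signature", "epoch"].every(
    (key) => key in receipt
  );
}
```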