API Documentation
Provider-native integration patterns for OpenAI, Anthropic, and Google clients with Mnexium routing, header conventions, and parity-safe request shapes.
Choose Your Integration Style
Mnexium supports two integration approaches. Choose based on your needs:
OpenAI Connector (Recommended)
Use the OpenAI SDK for all providers (OpenAI, Claude, Gemini). Same code and same response format; just change the model name.
- Unified API across all providers
- Full mnx support in request body
- Consistent response format
- Lowest integration complexity
Native SDKs
Use each provider's official SDK with their native endpoints and response formats.
- Native SDK features and types
- Provider-specific response formats
- mnx via headers (SDKs strip body params)
- Different base URLs per provider
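Because native SDKs strip unknown body params, mnx options travel as headers instead. The sketch below shows one way to assemble such a header set; note that only x-mnexium-key and the provider key headers appear in this document's examples, while the x-mnx-subject-id and x-mnx-recall names are illustrative assumptions — check the header reference for the actual names.

```typescript
// Sketch: build the header set a native provider SDK would need when
// routed through Mnexium. x-mnx-subject-id and x-mnx-recall are
// hypothetical header names used for illustration only.
function mnxHeaders(opts: {
  mnexiumKey: string;
  providerKey: string;
  subjectId: string;
  recall: boolean;
}): Record<string, string> {
  return {
    "x-mnexium-key": opts.mnexiumKey,
    "x-anthropic-key": opts.providerKey,
    "x-mnx-subject-id": opts.subjectId, // hypothetical header name
    "x-mnx-recall": String(opts.recall), // hypothetical header name
  };
}

// Pass the result wherever the native SDK accepts default headers, e.g.:
// new Anthropic({ baseURL: "<provider-specific Mnexium URL>",
//                 defaultHeaders: mnxHeaders({ ... }) })
```

The helper keeps the mnx-to-header translation in one place, so switching a call between body params (OpenAI connector) and headers (native SDK) touches a single function.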
Code Examples
Use the OpenAI SDK to call any provider through Mnexium's unified endpoint. Just change the model name and pass the appropriate provider key.
| Provider | Header | Example Models |
|---|---|---|
| OpenAI | x-openai-key | gpt-4o, gpt-4o-mini |
| Anthropic | x-anthropic-key | claude-sonnet-4-20250514 |
| Google | x-google-key | gemini-2.0-flash-lite |
import OpenAI from "openai";
const BASE_URL = "https://mnexium.com/api/v1";
// OpenAI client
const openai = new OpenAI({
baseURL: BASE_URL,
defaultHeaders: {
"x-mnexium-key": process.env.MNX_KEY,
"x-openai-key": process.env.OPENAI_API_KEY,
},
});
// Claude client (via OpenAI SDK)
const claude = new OpenAI({
baseURL: BASE_URL,
defaultHeaders: {
"x-mnexium-key": process.env.MNX_KEY,
"x-anthropic-key": process.env.CLAUDE_API_KEY,
},
});
// Gemini client (via OpenAI SDK)
const gemini = new OpenAI({
baseURL: BASE_URL,
defaultHeaders: {
"x-mnexium-key": process.env.MNX_KEY,
"x-google-key": process.env.GEMINI_KEY,
},
});
// All calls use the same API!
const openaiResponse = await openai.chat.completions.create({
model: "gpt-4o",
messages: [{ role: "user", content: "What do you know about me?" }],
mnx: { subject_id: "user_123", recall: true },
});
const claudeResponse = await claude.chat.completions.create({
model: "claude-sonnet-4-20250514",
messages: [{ role: "user", content: "What do you know about me?" }],
mnx: { subject_id: "user_123", recall: true },
});
const geminiResponse = await gemini.chat.completions.create({
model: "gemini-2.0-flash-lite",
messages: [{ role: "user", content: "What do you know about me?" }],
mnx: { subject_id: "user_123", recall: true },
});
Cross-Provider Memory Sharing
Memories learned with one provider are automatically available to all others. Use the same subject_id across providers to share context.
// Learn a fact with OpenAI
await openai.chat.completions.create({
model: "gpt-4o-mini",
messages: [{ role: "user", content: "My favorite color is purple" }],
mnx: { subject_id: "user_123", learn: "force" },
});
// Recall with Claude - it knows the color!
const claudeResponse = await claude.chat.completions.create({
model: "claude-sonnet-4-20250514",
messages: [{ role: "user", content: "What is my favorite color?" }],
mnx: { subject_id: "user_123", recall: true },
});
// Claude responds: "Your favorite color is purple!"
This enables multi-model workflows where each task can use the most appropriate model while keeping user context consistent.
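One way to structure such a workflow is a small routing table that picks a model per task while every request carries the same subject_id. The task names and model assignments below are illustrative assumptions, not Mnexium recommendations; the request shape matches the OpenAI-connector examples above.

```typescript
// Illustrative task-to-model routing. All requests share one subject_id,
// so a fact learned in any step is recallable in every later step,
// regardless of which provider handles it.
const SUBJECT_ID = "user_123";

const MODEL_FOR_TASK: Record<string, string> = {
  quick_reply: "gpt-4o-mini", // cheap and fast
  deep_reasoning: "claude-sonnet-4-20250514",
  summarize: "gemini-2.0-flash-lite",
};

function requestFor(task: string, content: string) {
  // Same OpenAI-connector request shape for every provider;
  // only the model name changes per task.
  return {
    model: MODEL_FOR_TASK[task] ?? "gpt-4o-mini", // fallback model (assumption)
    messages: [{ role: "user" as const, content }],
    mnx: { subject_id: SUBJECT_ID, recall: true },
  };
}

// e.g. claude.chat.completions.create(requestFor("deep_reasoning", "..."))
```

Keeping the mnx block identical across tasks is what makes the memory shared; only the model field varies.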