API Documentation

A quick bootstrap path from API key issuance to your first memory-aware response, using POST /api/v1/chat/completions and the mnx context controls.

Getting Started

Mnexium provides a proxy layer for OpenAI APIs with built-in support for conversation persistence, memory management, and system prompt injection. Use the HTTP API directly with cURL, or install an official SDK.

Installation

Node.js / TypeScript (npm):

```bash
npm install @mnexium/sdk
```

Python (PyPI):

```bash
pip install mnexium
```

No SDK required — you can also call the API directly with cURL or any HTTP client.
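If you prefer Python without the SDK, the endpoint can be called with the standard library alone. This is a minimal sketch, not an official client: `send_chat` is an illustrative helper name, and the header names and URL are taken from the cURL example in this guide.

```python
# Sketch: call POST /api/v1/chat/completions directly from Python using only
# the standard library. send_chat is a hypothetical helper, not part of the
# Mnexium SDK. Keys are read from the environment, mirroring $MNX_KEY and
# $OPENAI_KEY in the cURL example.
import json
import os
import urllib.request


def send_chat(body: dict) -> dict:
    """POST a Chat Completions request through the Mnexium proxy."""
    req = urllib.request.Request(
        "https://www.mnexium.com/api/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "x-mnexium-key": os.environ["MNX_KEY"],
            "x-openai-key": os.environ["OPENAI_KEY"],
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The request body is the same JSON shown in the Quick Example below, so the helper works unchanged for any combination of mnx flags.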

Quick Example

A request to the Chat Completions API with history, memory extraction, and all Mnexium features enabled:

```bash
curl -X POST "https://www.mnexium.com/api/v1/chat/completions" \
  -H "x-mnexium-key: $MNX_KEY" \
  -H "Content-Type: application/json" \
  -H "x-openai-key: $OPENAI_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{ "role": "user", "content": "What IDE should I use?" }],
    "mnx": {
      "subject_id": "user_123",
      "chat_id": "550e8400-e29b-41d4-a716-446655440000",
      "log": true,
      "learn": true,
      "recall": true,
      "history": true
    }
  }'
```

What happens:

  • log: true — Saves this conversation turn to chat history
  • learn: true — LLM analyzes the message and may extract memories (runs asynchronously after the response)
  • recall: true — Injects relevant stored memories into context (e.g., "User prefers dark mode", "User is learning Rust")
  • history: true — Prepends previous messages from this chat_id for context
  • memory_policy — Optional extraction policy override (explicit ID, false to disable, or omitted for scoped defaults)
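When building requests in code, the same flags map one-to-one onto the `mnx` object. A minimal Python sketch (`build_chat_body` is an illustrative helper, not part of the SDK; the flag defaults here are just this example's choices):

```python
# Sketch: assemble the JSON body for POST /api/v1/chat/completions.
# build_chat_body is a hypothetical helper, not an official API; the mnx
# fields mirror the flags described above.
def build_chat_body(model, messages, subject_id, chat_id,
                    log=True, learn=True, recall=True, history=True):
    """Return a Chat Completions payload with Mnexium context controls."""
    return {
        "model": model,
        "messages": messages,
        "mnx": {
            "subject_id": subject_id,  # whose memories to read/write
            "chat_id": chat_id,        # conversation to log into / recall from
            "log": log,
            "learn": learn,
            "recall": recall,
            "history": history,
        },
    }


body = build_chat_body(
    "gpt-4o-mini",
    [{"role": "user", "content": "What IDE should I use?"}],
    subject_id="user_123",
    chat_id="550e8400-e29b-41d4-a716-446655440000",
)
```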

Use learn: "force" to always create a memory, or learn: false to skip memory extraction entirely.
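The three learn modes described above can be shown side by side as `mnx` fragments (values taken directly from the text; the variable names are only for illustration):

```python
# The three learn settings described above, as mnx fragments.
mnx_default = {"subject_id": "user_123", "learn": True}     # LLM decides whether to extract
mnx_forced  = {"subject_id": "user_123", "learn": "force"}  # always create a memory
mnx_off     = {"subject_id": "user_123", "learn": False}    # skip memory extraction entirely
```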

Get Started Repository

Clone our starter repo for working examples in Node.js and Python:

github.com/mariusndini/mnexium-get-started