API Documentation

Responses API reference with Mnexium context orchestration, provider pass-through semantics, and compatibility details for structured response workflows.

Responses API
POST /api/v1/responses

Proxy for OpenAI and Anthropic APIs with Mnexium extensions for history, persistence, and system prompts. Supports GPT-4, Claude, and other models.

Scope: responses:write
Request
bash
curl -X POST "https://www.mnexium.com/api/v1/responses" \
  -H "x-mnexium-key: $MNX_KEY" \
  -H "Content-Type: application/json" \
  -H "x-openai-key: $OPENAI_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "input": "What are some project ideas based on my interests?",
    "mnx": {
      "subject_id": "user_123",
      "chat_id": "550e8400-e29b-41d4-a716-446655440000",
      "log": true,
      "learn": true,
      "recall": true
    }
  }'
mnx Parameters

subject_id (string): User/subject identifier for memory and history.

chat_id (string): Conversation ID (UUID recommended) for history grouping.

log (boolean): Save to chat history. Default: true

learn (boolean | 'force'): Memory extraction: false (never), true (LLM decides), "force" (always). Default: true

recall (boolean): Retrieve stored memories for the subject and include them as context.

history (boolean): Prepend chat history. Default: false

system_prompt (string | boolean): Prompt ID, true (auto-resolve), or false (skip). Default: true

memory_policy (string | boolean): Memory policy ID, false (skip), or omitted/true (auto-resolve). Default: true
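Taken together, these parameters form the mnx object in the request body. A minimal sketch in Python that assembles a payload with the documented defaults (the helper name build_payload is ours, not part of the API; field names and defaults come from the parameter descriptions above):

```python
def build_payload(model, user_input, subject_id, chat_id=None,
                  log=True, learn=True, recall=False, history=False,
                  system_prompt=True, memory_policy=True):
    """Assemble a /api/v1/responses request body with mnx options."""
    mnx = {
        "subject_id": subject_id,
        "log": log,                      # save to chat history (default: true)
        "learn": learn,                  # false | true | "force"
        "recall": recall,               # include stored memories as context
        "history": history,              # prepend chat history (default: false)
        "system_prompt": system_prompt,  # prompt ID, true (auto), or false (skip)
        "memory_policy": memory_policy,  # policy ID, true (auto), or false (skip)
    }
    if chat_id is not None:
        mnx["chat_id"] = chat_id         # UUID recommended for history grouping
    return {"model": model, "input": user_input, "mnx": mnx}

payload = build_payload("gpt-4o-mini",
                        "What are some project ideas based on my interests?",
                        subject_id="user_123", learn="force", recall=True)
```

Send the resulting dict as the JSON body of the POST shown above; only subject_id is required for memory features, the rest fall back to their defaults.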
Response
json
{
  "id": "resp_abc123",
  "object": "response",
  "created_at": 1702847400,
  "output": [
    {
      "type": "message",
      "role": "assistant",
      "content": [
        { "type": "output_text", "text": "Based on your interests in Rust and Python, here are some project ideas..." }
      ]
    }
  ],
  "usage": { "input_tokens": 12, "output_tokens": 45 }
}
Response headers include X-Mnx-Chat-Id and X-Mnx-Subject-Id.
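To continue the same conversation in a follow-up request, reuse the chat id echoed back in those headers. A sketch, assuming the headers arrive as a plain dict (most HTTP clients expose a case-insensitive mapping, so the lowercase normalization here is defensive):

```python
def mnx_ids(headers):
    """Pull the Mnexium chat/subject ids out of response headers."""
    lower = {k.lower(): v for k, v in headers.items()}
    return lower.get("x-mnx-chat-id"), lower.get("x-mnx-subject-id")

chat_id, subject_id = mnx_ids({
    "Content-Type": "application/json",
    "X-Mnx-Chat-Id": "550e8400-e29b-41d4-a716-446655440000",
    "X-Mnx-Subject-Id": "user_123",
})
```

Pass the returned chat_id as mnx.chat_id on the next call to keep history grouped under one conversation.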
Claude (Anthropic) example

Use x-anthropic-key header and a Claude model name.

Request
bash
curl -X POST "https://www.mnexium.com/api/v1/responses" \
  -H "x-mnexium-key: $MNX_KEY" \
  -H "Content-Type: application/json" \
  -H "x-anthropic-key: $ANTHROPIC_KEY" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "input": "What programming language did I say I was learning?",
    "mnx": {
      "subject_id": "user_123",
      "recall": true
    }
  }'
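The only differences from the OpenAI example are the provider key header and the model name. That routing can be sketched as below; note the model-prefix rule is our assumption, inferred from the two examples, not a documented API guarantee:

```python
def provider_header(model, openai_key=None, anthropic_key=None):
    """Pick the upstream provider key header from the model name (assumed convention)."""
    if model.startswith("claude"):
        return {"x-anthropic-key": anthropic_key}
    return {"x-openai-key": openai_key}

headers = {"x-mnexium-key": "mnx_...", "Content-Type": "application/json"}
headers.update(provider_header("claude-sonnet-4-20250514", anthropic_key="sk-ant-..."))
```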
Streaming example

Set "stream": true to receive Server-Sent Events (SSE).

Request
bash
curl -X POST "https://www.mnexium.com/api/v1/responses" \
  -H "x-mnexium-key: $MNX_KEY" \
  -H "Content-Type: application/json" \
  -H "x-openai-key: $OPENAI_KEY" \
  -d '{ "model": "gpt-4o-mini", "input": "What do you remember about me?", "mnx": { "subject_id": "user_123", "recall": true }, "stream": true }'
Response (SSE)
data: {"type":"response.output_text.delta","delta":"Based"}
data: {"type":"response.output_text.delta","delta":" on"}
data: {"type":"response.output_text.delta","delta":" our"}
data: {"type":"response.output_text.delta","delta":" previous"}
data: {"type":"response.output_text.delta","delta":" conversations,"}
data: {"type":"response.output_text.delta","delta":" I know you..."}
data: {"type":"response.completed","response":{...}}
data: [DONE]

Parse each data: line as JSON. Collect delta values to build the full response.
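That accumulation step can be sketched as pure parsing, with no network involved; the event types below match the sample stream above:

```python
import json

def collect_text(sse_lines):
    """Accumulate output_text deltas from an SSE stream into one string."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue                      # skip blank lines / SSE comments
        data = line[len("data: "):]
        if data == "[DONE]":
            break                         # end-of-stream sentinel, not JSON
        event = json.loads(data)
        if event.get("type") == "response.output_text.delta":
            parts.append(event["delta"])
    return "".join(parts)

stream = [
    'data: {"type":"response.output_text.delta","delta":"Based"}',
    'data: {"type":"response.output_text.delta","delta":" on"}',
    'data: {"type":"response.completed","response":{}}',
    'data: [DONE]',
]
# collect_text(stream) -> "Based on"
```

The response.completed event carries the final response object, so a fuller client would capture it there instead of discarding it.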