gpt-5.5
OpenAI
GPT API gateway
Access supported OpenAI GPT models and agent workflows through one OpenAI-compatible gateway with quota control, sticky sessions, billing, and observability.
Model discovery
The catalog is intentionally tight: GPT and Codex routes only, with latency and capability tags exposed before you send traffic.
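The page does not document the catalog's response shape, but filtering an OpenAI-style `/v1/models` listing by capability tag might look like the sketch below. The `tags` field and its values are illustrative assumptions, not a documented part of the API:

```python
# Illustrative only: filter a model catalog by capability tag.
# The "tags" field is an assumed extension; a stock OpenAI-compatible
# /v1/models response carries only id/object/created/owned_by.
catalog = {
    "data": [
        {"id": "gpt-5.5", "tags": ["low-latency", "tools"]},
        {"id": "codex", "tags": ["code"]},
    ]
}

def models_with_tag(catalog: dict, tag: str) -> list[str]:
    """Return ids of models whose assumed 'tags' list contains `tag`."""
    return [m["id"] for m in catalog["data"] if tag in m.get("tags", [])]

print(models_with_tag(catalog, "code"))  # ['codex']
```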
QUICKSTART
The easiest path: give your agent the base URL, API key, and default model. Let it patch the config for you.
1. Register, create a key on /keys, and copy the sk-xxx value once; it is shown only at creation.
2. Copy the Agent Prompt into OpenClaw, Codex, Cursor, or your local runner.
3. Run the curl test and confirm the response returns model, output, and usage.
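The confirmation in step 3 is easy to script. A minimal check of the JSON the curl test returns, using a placeholder payload shaped like the example response on this page rather than a live reply:

```python
import json

# Placeholder payload mirroring this page's example response
# (the values are stand-ins, not real gateway output).
raw = '''{
  "id": "resp_123",
  "model": "gpt-5.5",
  "output": [{"type": "message",
              "content": [{"type": "output_text", "text": "ping"}]}],
  "usage": {"input_tokens": 4, "output_tokens": 1}
}'''

resp = json.loads(raw)
# The quickstart asks you to confirm these three fields are present.
ok = all(k in resp for k in ("model", "output", "usage"))
print("quickstart check passed:", ok)
```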
Set up this project to use xPHANTOM TROUPE as my OpenAI-compatible API provider.
Base URL: https://api.xphantomtroupe.com/v1
API Key: sk-YOUR_KEY_HERE
Default model: gpt-5.5
If this tool has a setup wizard, use its OpenAI-compatible or Custom Endpoint flow.
For OpenClaw, start with onboarding. For Hermes, run hermes model.
Then store the key securely, use gpt-5.5 by default, and run a quick test request.

Codex (config.toml):

```toml
[model_providers.xphantom]
name = "xPHANTOM TROUPE"
base_url = "https://api.xphantomtroupe.com/v1"
env_key = "XPHANTOM_API_KEY"
wire_api = "responses"

[profiles.xphantom]
model_provider = "xphantom"
model = "gpt-5.5"
```

```shell
# then in shell:
export XPHANTOM_API_KEY="sk-YOUR_KEY_HERE"
codex --profile xphantom
```

Generic OpenAI-compatible settings:

Base URL: https://api.xphantomtroupe.com/v1
API Key: sk-YOUR_KEY_HERE
Model: gpt-5.5

JSON config:

```json
{
  "models": [{
    "title": "xPHANTOM gpt-5.5",
    "provider": "openai",
    "model": "gpt-5.5",
    "apiKey": "sk-YOUR_KEY_HERE",
    "apiBase": "https://api.xphantomtroupe.com/v1"
  }]
}
```

Python:

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-YOUR_KEY_HERE",
    base_url="https://api.xphantomtroupe.com/v1",
)
r = client.responses.create(model="gpt-5.5", input="hello")
print(r.output_text)
```

JavaScript:

```javascript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "sk-YOUR_KEY_HERE",
  baseURL: "https://api.xphantomtroupe.com/v1",
});
const r = await client.responses.create({ model: "gpt-5.5", input: "hello" });
console.log(r.output_text);
```

curl:

```shell
curl https://api.xphantomtroupe.com/v1/responses \
  -H "Authorization: Bearer sk-YOUR_KEY_HERE" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-5.5","input":"say ping"}'
```

Example response:

```json
{
  "id": "resp_...",
  "model": "gpt-5.5",
  "output": [{"type": "message", "content": [{"type": "output_text", "text": "ping"}]}],
  "usage": {"input_tokens": ..., "output_tokens": ...}
}
```

Supported agents
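The SDKs expose the convenience field `output_text`, but the raw JSON from curl has to be walked by hand. One way to pull the text out of the `output` array shown above, demonstrated on a placeholder payload rather than a live response:

```python
def extract_output_text(resp: dict) -> str:
    """Concatenate output_text parts from a Responses-style payload."""
    parts = []
    for item in resp.get("output", []):
        if item.get("type") == "message":
            for block in item.get("content", []):
                if block.get("type") == "output_text":
                    parts.append(block.get("text", ""))
    return "".join(parts)

# Placeholder payload shaped like the example response above.
resp = {
    "id": "resp_123",
    "model": "gpt-5.5",
    "output": [{"type": "message",
                "content": [{"type": "output_text", "text": "ping"}]}],
    "usage": {"input_tokens": 4, "output_tokens": 1},
}
print(extract_output_text(resp))  # ping
```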
OpenClaw, Hermes Agent, Codex, local runners, and custom OpenAI-compatible agents can all point at the same API key and base URL.
- OpenAI-compatible base URL: Use xPHANTOM as the OpenAI-compatible backend for planning, code edits, and repo automation.
- GPT · sticky session: Run Hermes sessions through GPT routes with sticky sessions, quota, and balance controls.
- GPT · Codex: Point Codex-style coding flows at GPT and Codex routes without changing your agent surface.
- Local · OpenAI-compatible: Use the same API key for local agent runs, reviews, refactors, and terminal workflows.
- IDE · GPT: Configure xPHANTOM as the OpenAI-compatible backend for IDE chat and inline code work.
- Bring your own agent: Any runner that accepts an OpenAI base URL can join the same quota, billing, and routing layer.

Developer controls
A gateway is useful when you can see what happened, what it cost, and which limits were applied.
- OpenAI-compatible endpoints: `/v1/responses` and `/v1/chat/completions` for GPT models and agent clients.
- Sticky sessions: keep a conversation on the same account, key, or route lane when continuity matters.
- Billing: track spend, token usage, and remaining balance as requests move through the gateway.
- Quotas: set account, project, model, and agent limits before usage runs away.
- Account pooling: balance multiple OpenAI account pools behind one stable gateway.
- Observability: see route status, latency, failover state, and account availability.
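The `usage` block returned with every response is enough for rough client-side spend tracking. A sketch with entirely hypothetical per-token prices; the gateway's real rates come from its pricing plans, not from these numbers:

```python
# Hypothetical prices in HK$ per 1M tokens; substitute your plan's rates.
PRICE_IN_PER_M = 10.0
PRICE_OUT_PER_M = 30.0

def request_cost(usage: dict) -> float:
    """Estimate one request's cost from its usage block."""
    return (usage["input_tokens"] * PRICE_IN_PER_M
            + usage["output_tokens"] * PRICE_OUT_PER_M) / 1_000_000

# Sum costs across a batch of usage blocks pulled from past responses.
total = sum(request_cost(u) for u in [
    {"input_tokens": 1200, "output_tokens": 400},
    {"input_tokens": 800, "output_tokens": 2000},
])
print(f"estimated spend: HK${total:.4f}")
```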
Pricing
Plans are powered by pricing.json, so quotas and prices can be adjusted without touching layout code.

- HK$50/mo
- HK$200/mo
- HK$500/mo
Contact us
The Spider never sleeps. Connect your agent and let the network handle the rest.