gpt-5.5
OpenAI
GPT API gateway
Access supported OpenAI GPT models and agent workflows through one OpenAI-compatible gateway with quota control, sticky sessions, billing, and observability.
Model discovery
The catalog is intentionally tight: GPT and Codex routes only, with latency and capability tags exposed before you send traffic.
OpenAI
OpenAI
Codex
OpenAI
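The catalog above can also be listed over the wire. A minimal stdlib sketch, assuming the gateway implements the standard OpenAI-compatible `/v1/models` list shape (the `parse_model_ids` helper and `BASE_URL` constant are illustrative, not part of the gateway's documented API):

```python
import json
import urllib.request

BASE_URL = "https://api.xphantomtroupe.com/v1"

def parse_model_ids(payload: dict) -> list[str]:
    """Extract model ids from an OpenAI-style /v1/models response body."""
    return [m["id"] for m in payload.get("data", [])]

def list_models(api_key: str) -> list[str]:
    """Fetch the gateway catalog; needs a real key and network access."""
    req = urllib.request.Request(
        f"{BASE_URL}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_model_ids(json.load(resp))
```

Any route id returned here can be used directly as the `model` field in the quickstart examples below.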
QUICKSTART
Three fields: endpoint, key, model. Pick your tool tab below and paste — that's it.
Register, create a key on /keys, and copy the sk-xxx value (it is shown only once).
Pick your tool tab below and paste sk-YOUR_KEY_HERE.
Send a test prompt. You should see a response in seconds.
endpoint https://api.xphantomtroupe.com/v1
api_key sk-YOUR_KEY_HERE
model gpt-5.5
# One-line test:
curl https://api.xphantomtroupe.com/v1/responses \
-H "Authorization: Bearer sk-YOUR_KEY_HERE" \
-H "Content-Type: application/json" \
-d '{"model":"gpt-5.5","input":"ping"}'

[model_providers.xphantom]
name = "xPHANTOM TROUPE"
base_url = "https://api.xphantomtroupe.com/v1"
env_key = "XPHANTOM_API_KEY"
wire_api = "responses"
[profiles.xphantom]
model_provider = "xphantom"
model = "gpt-5.5"
# then in shell:
export XPHANTOM_API_KEY="sk-YOUR_KEY_HERE"
codex --profile xphantom

Base URL: https://api.xphantomtroupe.com/v1
API Key: sk-YOUR_KEY_HERE
Model: gpt-5.5

Base URL: https://api.xphantomtroupe.com/v1
API Key: sk-YOUR_KEY_HERE
Model: gpt-5.5

{
"models": [{
"title": "xPHANTOM gpt-5.5",
"provider": "openai",
"model": "gpt-5.5",
"apiKey": "sk-YOUR_KEY_HERE",
"apiBase": "https://api.xphantomtroupe.com/v1"
}]
}

model:
default: gpt-5.5
provider: custom
base_url: https://api.xphantomtroupe.com/v1
api_key: sk-YOUR_KEY_HERE
api_mode: codex_responses # required for gpt-5.x
fallback_model:
provider: custom
model: gpt-5.4
base_url: https://api.xphantomtroupe.com/v1
api_key: sk-YOUR_KEY_HERE
api_mode: codex_responses

Provider name: xPHANTOM
Base URL: https://api.xphantomtroupe.com/v1
API Key: sk-YOUR_KEY_HERE
Default model: gpt-5.5

API Base URL: https://api.xphantomtroupe.com/v1
API Key: sk-YOUR_KEY_HERE
# Save, then chat using any GPT model from the dropdown

from openai import OpenAI
client = OpenAI(api_key="sk-YOUR_KEY_HERE", base_url="https://api.xphantomtroupe.com/v1")
r = client.responses.create(model="gpt-5.5", input="hello")
print(r.output_text)

import OpenAI from "openai";
const client = new OpenAI({ apiKey: "sk-YOUR_KEY_HERE", baseURL: "https://api.xphantomtroupe.com/v1" });
const r = await client.responses.create({ model: "gpt-5.5", input: "hello" });
console.log(r.output_text);

curl https://api.xphantomtroupe.com/v1/responses \
-H "Authorization: Bearer sk-YOUR_KEY_HERE" \
-H "Content-Type: application/json" \
-d '{"model":"gpt-5.5","input":"say ping"}'

Supported agents
OpenClaw, Hermes Agent, Codex, local runners, and custom OpenAI-compatible agents can all point at the same API key and base URL.
Use xPHANTOM as the OpenAI-compatible backend for planning, code edits, and repo automation.
OpenAI-compatible base URL
Run Hermes sessions through GPT routes with sticky sessions, quota, and balance controls.
GPT · sticky session
Point Codex-style coding flows at GPT and Codex routes without changing your agent surface.
GPT · Codex
Use the same API key for local agent runs, reviews, refactors, and terminal workflows.
Local · OpenAI-compatible
Configure xPHANTOM as the OpenAI-compatible backend for IDE chat and inline code work.
IDE · GPT
Any runner that accepts an OpenAI base URL can join the same quota, billing, and routing layer.
Bring your own agent

Developer controls
A gateway is useful when you can see what happened, what it cost, and which limits were applied.
OpenAI-compatible `/v1/chat/completions` endpoint for GPT models and agent clients.
Keep a conversation on the same account, key, or route lane when continuity matters.
Track spend, token usage, and remaining balance as requests move.
Set account, project, model, and agent limits before usage runs away.
Balance multiple OpenAI account pools behind one stable gateway.
See route status, latency, failover state, and account availability.
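Because the `/v1/chat/completions` route follows the standard OpenAI request shape, any client that can build that JSON body can join the gateway. A minimal stdlib sketch (the `build_chat_request` helper is illustrative; the endpoint path and headers follow the OpenAI convention, and the request should only be sent once you have a real key):

```python
import json
import urllib.request

BASE_URL = "https://api.xphantomtroupe.com/v1"
API_KEY = "sk-YOUR_KEY_HERE"

def build_chat_request(model: str, user_message: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completions request for the gateway."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("gpt-5.5", "ping")
# with urllib.request.urlopen(req) as resp:   # uncomment with a real key
#     print(json.load(resp)["choices"][0]["message"]["content"])
```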
Pricing
Plans are powered by pricing.json, so quota and prices can be adjusted without touching layout code.
HK$50/mo
HK$200/mo
HK$500/mo
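The exact schema of pricing.json is not documented here; purely as an illustration of the config-driven approach, a plan table behind the three tiers above might look like this (all field names and plan names are hypothetical, only the prices come from this page):

```json
{
  "plans": [
    { "name": "basic", "price": "HK$50/mo" },
    { "name": "pro", "price": "HK$200/mo" },
    { "name": "max", "price": "HK$500/mo" }
  ]
}
```

Editing this file would then change quota and prices without touching layout code, as described above.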
Contact us
The Spider never sleeps. Connect your agent and let the network handle the rest.