# QuickSilver Pro vs OpenRouter
For DeepSeek V3, DeepSeek R1, and Qwen3.5-35B-A3B, QuickSilver Pro lists the same models at 20% below OpenRouter's public per-token rates — same OpenAI-compatible API, two-line migration. For closed models (GPT-4, Claude) or the long tail, OpenRouter is still the right tool.
## At a glance
| Feature | QuickSilver Pro | OpenRouter |
|---|---|---|
| Models in catalog | 3 (DeepSeek V3, R1, Qwen3.5-35B-A3B) | 300+ |
| Pricing on shared models | 20% below OpenRouter | Baseline |
| OpenAI-compatible surface | Yes | Yes |
| Streaming / tools / json_schema | Yes | Yes |
| usage.cost on responses | Yes (synthetic) | Yes |
| Per-key monthly spend limits | Yes | Yes |
| Closed models (GPT-4, Claude) | No | Yes |
| Free tier | $1 on signup | Limited free models |
| Minimum top-up | $5 | $10 |
## Pricing (per million tokens, USD)
Public list prices as of April 2026.
| Model | QSP input | QSP output | OpenRouter input | OpenRouter output | Savings |
|---|---|---|---|---|---|
| DeepSeek V3 | $0.24 | $0.70 | $0.30 | $0.88 | ~20% |
| DeepSeek R1 | $0.40 | $1.70 | $0.50 | $2.15 | ~20% |
| Qwen3.5-35B-A3B | $0.13 | $1.00 | Compare at OpenRouter | Compare at OpenRouter | — |
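The per-token rates in the table translate directly into a per-request cost. A minimal sketch using the QSP DeepSeek V3 list prices above (the function name and example token counts are illustrative, not part of either API):

```python
# Estimate a request's cost from the listed QSP DeepSeek V3 rates
# (April 2026 list prices; both rates are USD per million tokens).
INPUT_RATE = 0.24   # USD per 1M input tokens
OUTPUT_RATE = 0.70  # USD per 1M output tokens

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (prompt_tokens * INPUT_RATE + completion_tokens * OUTPUT_RATE) / 1_000_000

# A 2,000-token prompt with a 500-token reply:
# 2000 * 0.24/1e6 + 500 * 0.70/1e6 = 0.00048 + 0.00035 = 0.00083 USD
```

The same arithmetic is what the synthetic `usage.cost` field on responses reflects, just computed server-side from the public rate card.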
## Migration — two lines
```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.quicksilverpro.io/v1",
    api_key=os.environ["QSP_KEY"],
)
r = client.chat.completions.create(
    model="deepseek-v3",
    messages=[{"role": "user", "content": "Hi"}],
)
print(r.choices[0].message.content)
```

## FAQ
### Is QuickSilver Pro cheaper than OpenRouter?
Yes, on the shared open-source models: 20% below OpenRouter's public per-token rates for DeepSeek V3, R1, and Qwen3.5-35B-A3B. See the pricing table above for exact numbers.
### How do I migrate from OpenRouter?
Two lines in your OpenAI SDK setup: change `base_url` from `https://openrouter.ai/api/v1` to `https://api.quicksilverpro.io/v1` and swap the API key. Also drop the `provider/` prefix from model IDs.
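The model-ID change amounts to stripping everything up to the first slash. A tiny illustrative helper (the function name is ours; the QSP-side names are the three listed in this doc, and you should confirm the mapping for your specific OpenRouter IDs):

```python
# Strip an OpenRouter-style "provider/" prefix from a model ID.
# "deepseek/deepseek-v3" -> "deepseek-v3"; IDs without a prefix pass through.
def strip_provider_prefix(model_id: str) -> str:
    return model_id.split("/", 1)[-1]
```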
### When should I stay on OpenRouter?
If your workload needs closed models (GPT-4, Claude, Gemini), open models QuickSilver Pro doesn't carry (Llama, Mistral), or the long tail. QuickSilver Pro serves only three open-source models; OpenRouter serves 300+.
### Same OpenAI features (streaming, tools, JSON schema)?
Yes for the shared models. Streaming, tool / function calling, `json_schema` strict mode, and standard usage accounting all work through the official OpenAI SDK. Each response also returns a synthetic `usage.cost` computed from the public per-token rate.
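Since the surface is OpenAI-compatible, streaming works exactly as it does with the official SDK. A hedged sketch, assuming `client` is the OpenAI client from the migration snippet pointed at `api.quicksilverpro.io/v1` (the function name and prompt are illustrative):

```python
# Stream a chat completion and return the assembled reply.
# `client` is an openai.OpenAI instance configured with the QSP base_url.
def stream_reply(client, prompt: str, model: str = "deepseek-v3") -> str:
    chunks = []
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)  # emit tokens as they arrive
            chunks.append(delta)
    return "".join(chunks)
```

Tool calling and `json_schema` strict mode use the same request shapes as upstream OpenAI, so no code changes are needed beyond the base URL and key.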