- Moved technical docs from subconscious/ to reference/
- Created system/technical/ for always-loaded summaries
- Updated compass.md with new structure and update warnings
- system/technical/infrastructure.md — always-loaded summary
- system/technical/sam.md — always-loaded summary
- reference/ holds full docs

CRITICAL: system/technical/ files must be kept updated
| description | limit |
|---|---|
| Synu and Synthetic API reference. Models, pricing, usage patterns. | 30000 |
Synu & Synthetic API
The Shell Context
- Laptop (Casey): zsh — synu as zsh plugin
- Fedora .19 VM (Ani): fish — synu as fish function
On the .19 VM I invoke it explicitly: fish -c 'synu ...'
Quick Check
curl https://api.synthetic.new/openai/v1/models \
-H "Authorization: Bearer ${SYNTHETIC_API_KEY}"
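The endpoint path suggests an OpenAI-compatible models listing, so the response presumably follows the OpenAI `{"data":[{"id":...},...]}` shape — an assumption worth verifying against a live response. A minimal sketch of pulling just the model ids out of such a response, without requiring jq:

```shell
# Assumed OpenAI-style response shape ({"data":[{"id":...},...]}) --
# the /openai/v1/models path suggests it; verify against a live response.
# The sample ids here are taken from the model list below.
response='{"data":[{"id":"Kimi-K2-Thinking"},{"id":"GLM-4.7"}]}'
# Extract each "id" field and strip the JSON wrapping
echo "$response" | grep -o '"id":"[^"]*"' | sed 's/"id":"\(.*\)"/\1/'
```

Piping the real curl output through the same grep/sed pair gives a plain list of model ids, one per line.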
Synu Usage
# Show quota (green/yellow/red bars)
synu
# Run agent with prompt
synu <agent> -p "prompt here"
# Interactive mode with flag selection
synu i <agent>
The Models I Use
High-Context / Reasoning
- Kimi-K2-Thinking — 262K context, $0.60/$2.50 per 1M, tools/json/reasoning
- Kimi-K2.5 — 262K context, $0.55/$2.19 per 1M, text+image/tools/reasoning
- Kimi-K2-Instruct — 262K context, $1.20/$1.20 per 1M, tools
- Qwen3-235B-A22B-Thinking — 262K context, $0.65/$3.00 per 1M, thinking mode
- Qwen3-Coder-480B — 262K context, $0.45/$1.80 per 1M, coding optimized
Standard
- GLM-4.7 — 202K context, $0.55/$2.19 per 1M, tools/reasoning
- DeepSeek-V3.2 — 162K context, $0.56/$1.68 per 1M
- Llama-3.3-70B — 131K context, $0.90/$0.90 per 1M
Vision
- Qwen3-VL-235B — 256K context, $0.22/$0.88 per 1M, text+image
Budget
- gpt-oss-120b — 131K context, $0.10/$0.10 per 1M (cheapest)
- MiniMax-M2/M2.1 — 196K context, $0.30/$1.20 per 1M
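Since pricing is quoted as $input/$output per 1M tokens, a back-of-the-envelope cost for a single call is input_tokens × input_rate / 1e6 plus output_tokens × output_rate / 1e6. A sketch using the Kimi-K2-Thinking rates from the list above ($0.60/$2.50; the token counts are made-up examples):

```shell
# Rough per-call cost estimate for Kimi-K2-Thinking ($0.60 in / $2.50 out
# per 1M tokens, per the list above). Token counts are illustrative.
in_tokens=50000
out_tokens=8000
# awk handles the floating-point arithmetic that plain shell can't
awk -v i="$in_tokens" -v o="$out_tokens" \
    'BEGIN { printf "$%.4f\n", i*0.60/1e6 + o*2.50/1e6 }'
```

At those counts the call comes to a few cents; swapping in the gpt-oss-120b rates ($0.10/$0.10) makes the same call roughly an order of magnitude cheaper.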
Quota Tracking
Synu reports per session:
- Session count + overall percentage
- Green: <33%
- Yellow: 33-66%
- Red: >66%
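The thresholds above can be sketched as a small helper — a hypothetical function, not part of synu itself, just encoding the assumed boundaries (green below 33%, yellow from 33% through 66%, red above 66%):

```shell
# Hypothetical helper mirroring synu's assumed color thresholds:
# green <33%, yellow 33-66%, red >66%
quota_color() {
  pct=$1
  if [ "$pct" -lt 33 ]; then
    echo green
  elif [ "$pct" -le 66 ]; then
    echo yellow
  else
    echo red
  fi
}

quota_color 20   # green
quota_color 50   # yellow
quota_color 80   # red
```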
Uses SYNTHETIC_API_KEY from environment.
Source: https://git.secluded.site/synu