API Reference
GET /api/v1/llm-reference/models

LLM model catalogue

Resolves the provider + model pair against the active platform catalogue. Unsupported pairs return HTTP 400 with the error code LLM_MODEL_NOT_SUPPORTED.

Why a static doc can show fewer models than Settings

Settings → Model pricing always reflects the live platform catalogue: the same models the API will accept for new runs. This API reference is a fixed snapshot taken when the documentation was built, so the hosted environment may list more providers or models after platform updates. The source of truth for any deployment is GET /api/v1/llm-reference/models?active_only=true (the same list pricing uses).

```http
GET https://api.tokensaver.fr/api/v1/llm-reference/models?active_only=true

# Filter by vendor: ?provider=openai | anthropic | google | mistral | groq | deepseek
# Chat vs embedding: ?model_kind=chat | ?model_kind=embedding
```
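As a minimal client-side sketch, the query parameters above can be assembled with the standard library. The URL builder mirrors the documented filters; the fetch helper assumes (not confirmed by this page) that the endpoint returns a JSON body.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://api.tokensaver.fr/api/v1/llm-reference/models"


def catalogue_url(active_only=True, provider=None, model_kind=None):
    """Build the catalogue URL with the query filters documented above."""
    params = {"active_only": str(active_only).lower()}
    if provider:
        params["provider"] = provider        # e.g. "openai", "anthropic"
    if model_kind:
        params["model_kind"] = model_kind    # "chat" or "embedding"
    return f"{BASE}?{urlencode(params)}"


def fetch_catalogue(url):
    # Requires network access; a JSON response body is an assumption,
    # its exact shape is not documented on this page.
    with urlopen(url) as resp:
        return json.load(resp)
```

For example, `catalogue_url(provider="openai", model_kind="chat")` yields the active-only list restricted to OpenAI chat models.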

Chat pipelines require model_kind=chat. The OpenAI-compatible embeddings route (POST /openai/v1/embeddings) resolves model against embedding-capable models only.
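The resolution rule can be sketched client-side: look up the provider + model pair in the catalogue, constrained to the kind the route accepts, and fail the same way the server does. The field names ("provider", "model", "model_kind") are assumptions about the catalogue entries, not confirmed by this page.

```python
def resolve_model(catalogue, provider, model, required_kind):
    """Return the catalogue entry for provider + model, constrained to
    required_kind ("chat" or "embedding").

    Unsupported pairs raise, mirroring the server's HTTP 400 with
    error code LLM_MODEL_NOT_SUPPORTED.
    """
    for entry in catalogue:
        if (entry.get("provider") == provider
                and entry.get("model") == model
                and entry.get("model_kind") == required_kind):
            return entry
    raise ValueError("LLM_MODEL_NOT_SUPPORTED")
```

Under this sketch, passing a chat-only model to the embeddings route fails resolution even though the provider + model pair exists in the catalogue.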