POST /openai/v1/embeddings
Create embeddings
Gateway to the internal embedding service. model must resolve to an active embedding entry in the model catalogue, not a chat model. Authenticate with the same Bearer TokenSaver key used for chat; quotas apply.
Embedding requests bypass the Cache → RAG → LLM pipeline, but they are still metered and traced like any other API usage.
If model_id is ambiguous across providers, set X-Tokensaver-Provider (same as chat).
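For example, an ambiguous id can be pinned to one provider via the header. A hedged sketch: the provider slug `openai` and the unprefixed model id are illustrative assumptions, not confirmed values:

```shell
# Assumed: "openai" is a valid provider slug, and an unprefixed
# model id is what triggers cross-provider ambiguity.
curl -sS "https://api.tokensaver.fr/openai/v1/embeddings" \
  -H "Authorization: Bearer $TS_KEY" \
  -H "Content-Type: application/json" \
  -H "X-Tokensaver-Provider: openai" \
  -d '{"model":"text-embedding-3-small","input":"Hello world"}'
```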
```bash
curl -sS "https://api.tokensaver.fr/openai/v1/embeddings" \
  -H "Authorization: Bearer $TS_KEY" -H "Content-Type: application/json" \
  -d '{"model":"openai/text-embedding-3-small","input":"Hello world"}'
```

```python
from openai import OpenAI

c = OpenAI(api_key="ts_...", base_url="https://api.tokensaver.fr/openai/v1")
e = c.embeddings.create(model="openai/text-embedding-3-small", input="Hello")
print(len(e.data[0].embedding))
```
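Once vectors come back, the usual next step is ranking texts by cosine similarity. A minimal, standard-library-only sketch; the short sample vectors are illustrative stand-ins, not real model output (text-embedding-3-small returns much longer vectors):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Illustrative vectors only; real embeddings come from e.data[i].embedding.
v1 = [0.1, 0.2, 0.3]
v2 = [0.1, 0.2, 0.3]   # same direction as v1 → similarity close to 1.0
v3 = [-0.3, 0.1, -0.2]  # different direction → lower similarity

print(cosine_similarity(v1, v2))
print(cosine_similarity(v1, v3))
```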
