POST /api/v1/pipelines/run
Run pipeline
Primary native entry point: takes a prompt, provider, model, optional modules (cache, RAG, compression, PII), context layers, and a streaming flag. Same contract as the console pipeline.
Required fields are a text prompt and a catalogue-backed provider/model pair. The optional booleans and thresholds mirror the Pipeline IA settings in the console.
Related
Python SDK: TokenSaver.ask() and run_pipeline() POST this endpoint. See sdk/python/pipelines for parameter tables.
```bash
curl -sS "$API/api/v1/pipelines/run" \
  -H "Authorization: Bearer $TS_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "prompt": "Hello",
        "provider": "openai",
        "model": "gpt-4o",
        "use_cache": true
      }'
```
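For callers scripting this in Python rather than curl, the same request can be assembled as a plain dict before sending. This is a minimal sketch: the field names (`prompt`, `provider`, `model`, `use_cache`) and headers come from the shell example above, while `build_run_request`, the `API` and `TS_KEY` environment variables' defaults, and the fallback base URL are assumptions for illustration, not part of the SDK.

```python
import json
import os

def build_run_request(prompt, provider, model, use_cache=True):
    """Assemble URL, headers, and JSON body for POST /api/v1/pipelines/run.

    Hypothetical helper: the endpoint path and header names mirror the
    curl example; env-var handling is an assumption.
    """
    base = os.environ.get("API", "https://api.example.invalid")
    url = f"{base}/api/v1/pipelines/run"
    headers = {
        "Authorization": f"Bearer {os.environ.get('TS_KEY', '')}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "prompt": prompt,
        "provider": provider,
        "model": model,
        "use_cache": use_cache,
    })
    return url, headers, body

url, headers, body = build_run_request("Hello", "openai", "gpt-4o")
```

The tuple can then be passed to any HTTP client (`requests.post(url, headers=headers, data=body)`, for example); building the payload separately keeps it easy to log or unit-test before sending.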
