API
Cortecs offers a simple, OpenAI-compatible API for serverless inference across multiple providers. It supports two endpoints:
🔁 POST /v1/chat/completions
Submit chat requests using any available model. Supports standard OpenAI parameters like messages, temperature, and max_tokens. Use preference to optimize for speed, cost, or balanced.
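A chat request can be sketched as the following payload. The model id shown is hypothetical, and the preference field is assumed to be a top-level request extension as described above; the base URL and authentication follow the usual OpenAI-compatible pattern with your Cortecs API key.

```python
import json

# Sketch of a POST /v1/chat/completions request body.
# "model" is a placeholder id; "preference" is the Cortecs routing
# hint ("speed", "cost", or "balanced") described in this page.
payload = {
    "model": "example-model",  # hypothetical model id
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7,
    "max_tokens": 256,
    "preference": "balanced",
}

# Serialize for sending with any HTTP client, e.g.:
#   POST <base_url>/v1/chat/completions
#   Authorization: Bearer <your_api_key>
body = json.dumps(payload)
print(body)
```

Because the API is OpenAI-compatible, the official OpenAI SDKs should also work by pointing their base URL at the Cortecs endpoint.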
📦 GET /v1/models
List all available models and their capabilities.
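Since the API is OpenAI-compatible, the models endpoint presumably returns the standard OpenAI list shape. The sketch below parses an assumed example response; the exact capability fields Cortecs includes per model are not specified on this page.

```python
import json

# Hypothetical GET /v1/models response in the OpenAI list format.
sample_response = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "example-model-a", "object": "model"},
    {"id": "example-model-b", "object": "model"}
  ]
}
""")

# Extract the model ids to pass as "model" in chat requests.
model_ids = [m["id"] for m in sample_response["data"]]
print(model_ids)
```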