# API Overview

Cortecs provides an **OpenAI-compatible API** that makes it simple to run serverless inference across multiple providers with no infrastructure setup.

The API supports **three** main capabilities:

#### 🔁 Chat Completions: `POST /v1/chat/completions`

Send chat requests using any available model. Supports standard OpenAI parameters like `messages`, `temperature`, and `max_tokens`.

Use the `preference` parameter to optimize routing for `speed`, `cost`, or `balanced`.
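
Below is a minimal sketch of a chat completion request using the OpenAI Python SDK. The base URL, API key placeholder, model ID, and the placement of `preference` in the request body are illustrative assumptions, not confirmed values from this page.

```python
from openai import OpenAI

# Assumed endpoint and placeholder credentials — check your dashboard for real values.
client = OpenAI(
    base_url="https://api.cortecs.ai/v1",
    api_key="YOUR_CORTECS_API_KEY",
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",  # placeholder model ID
    messages=[{"role": "user", "content": "Summarize the Cortecs API in one sentence."}],
    temperature=0.7,
    max_tokens=200,
    # Assumed: `preference` is sent as an extra field in the request body.
    extra_body={"preference": "speed"},
)

print(response.choices[0].message.content)
```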

#### 🧩 Embeddings: `POST /v1/embeddings`

Generate text embeddings using supported models. The API is compatible with OpenAI’s embedding request format and can be routed across multiple providers using the same preference-based selection.
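
A minimal sketch of an embeddings request in the same OpenAI-compatible format. The model ID is a placeholder, and routing via `preference` is assumed to work the same way as for chat completions.

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.cortecs.ai/v1", api_key="YOUR_CORTECS_API_KEY")

result = client.embeddings.create(
    model="intfloat/e5-large-v2",  # placeholder embedding model ID
    input=["Serverless inference across multiple providers."],
    extra_body={"preference": "cost"},  # assumed: same preference-based routing
)

# Each returned item carries an embedding vector; print its dimensionality.
print(len(result.data[0].embedding))
```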

#### 📦 Model Listing: `GET /v1/models`

Retrieve the full list of available models, including their supported features, costs, context sizes, and providers.
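
A minimal sketch of listing models with the same client. The exact metadata fields (costs, context sizes, providers) returned on each entry are not shown here; only the model IDs are printed.

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.cortecs.ai/v1", api_key="YOUR_CORTECS_API_KEY")

# Iterate over the paginated model list and print each model's identifier.
for model in client.models.list():
    print(model.id)
```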


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.cortecs.ai/api-overview.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
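
A minimal sketch of such a query using the Python `requests` library; the question text is only an example, and the response is treated as plain text.

```python
import requests

question = "Which embedding models are currently supported?"

# Passing the question via `params` URL-encodes it automatically.
resp = requests.get(
    "https://docs.cortecs.ai/api-overview.md",
    params={"ask": question},
    timeout=30,
)

# The response contains a direct answer plus relevant excerpts and sources.
print(resp.text)
```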
