# Quickstart

### 1. Register

Register at [cortecs.ai](https://cortecs.ai) and follow these steps to set up your account:

* Fill out your billing address on the [profile page](https://cortecs.ai/userArea/userProfile) and press **Save**.
* Enter your credit card details.
* Press **Top up** to increase your account balance.

<figure><img src="https://2211217319-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FYGsEKyV2Zq4Q8fEJQT40%2Fuploads%2FhXBR0gR75sCcE5UdghdN%2Fimage.png?alt=media&#x26;token=d6dcfa66-680e-4647-86b3-8120c34461f6" alt="" width="563"><figcaption><p>Profile page after successfully adding 100€ to the account balance</p></figcaption></figure>

{% hint style="warning" %}
If your balance reaches zero, your instances will be discontinued. To avoid this, use **Auto top-up** to set an amount that is automatically transferred when your balance falls below a specified threshold.
{% endhint %}

### 2. Start a model

To start a model, follow these steps:

* Select a model from our [catalog](https://cortecs.ai/models).
* Start the model and wait until the status indicates it is running. This setup process can take a few minutes to complete.

<figure><img src="https://2211217319-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FYGsEKyV2Zq4Q8fEJQT40%2Fuploads%2FCA1i3ZewxmBj35At4rMk%2FBildschirmfoto%202025-01-09%20um%2013.57.21.png?alt=media&#x26;token=4bfa730e-c57b-4515-88f8-dcff0140817a" alt="" width="563"><figcaption><p>Model is up and running.</p></figcaption></figure>

### 3. Query the model

Our endpoints are compatible with the OpenAI API by default. We assume you have either Python or Node.js set up. This example uses *cortecs/phi-4-FP8-Dynamic* but works for all models supported by **cortecs**.

Accessing your model requires an API key, which you can find on your [profile page](https://cortecs.ai/userArea/userProfile). Once you have a key, set your environment variables by running:

```
export OPENAI_API_KEY="<YOUR_CORTECS_API_KEY>"
export OPENAI_BASE_URL="<YOUR_MODEL_URL>"
```
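The client libraries used below read these variables automatically. As a quick sanity check (plain Python, no API call), you can confirm they are visible to your current shell:

```python
import os

# The OpenAI and LangChain clients pick these up automatically;
# this only checks that they are set in the current environment.
for var in ("OPENAI_API_KEY", "OPENAI_BASE_URL"):
    print(f"{var}: {'set' if os.environ.get(var) else 'MISSING'}")
```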

#### OpenAI Client

You can use popular libraries provided by OpenAI for Python or Node.js. First, install the library:

{% tabs %}
{% tab title="Python" %}

```
pip install openai
```

{% endtab %}

{% tab title="Node.js" %}

```
npm install openai
```

{% endtab %}
{% endtabs %}

Query the model by calling the chat completions endpoint. Don't forget to pass your API key and the model URL explicitly if you didn't already set them as environment variables.

{% tabs %}
{% tab title="Python" %}

```python
from openai import OpenAI

openai_api_key = "<API_KEY>"
openai_api_base = "<MODEL_URL>"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

completion = client.chat.completions.create(
    model="cortecs/phi-4-FP8-Dynamic",
    messages=[
        {
            "role": "user",
            "content": "Tell me a joke."
        }
    ]
)

print(completion.choices[0].message)
```

{% endtab %}

{% tab title="Node.js" %}

```javascript
import OpenAI from "openai";

const openai = new OpenAI({
    apiKey: '<API_KEY>',
    baseURL: '<MODEL_URL>'
});

async function main() {
  const completion = await openai.chat.completions.create({
    messages: [
    {
        role: "user",
        content: "Tell me a joke.",
    }],
    model: "cortecs/phi-4-FP8-Dynamic"
  });

  console.log(completion.choices[0].message);
}

main();
```

{% endtab %}
{% endtabs %}
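The `messages` parameter is a plain list of role/content dictionaries, so multi-turn conversations are built by appending each exchange to the history before the next call. A minimal sketch (the `add_turn` helper is illustrative, not part of the OpenAI library):

```python
# Conversation history is just a list of {"role", "content"} dicts;
# append each assistant reply and user follow-up before the next request.
history = [{"role": "user", "content": "Tell me a joke."}]

def add_turn(history, role, content):
    """Return the history extended by one turn (roles: system/user/assistant)."""
    return history + [{"role": role, "content": content}]

history = add_turn(history, "assistant", "Why don't scientists trust atoms? They make up everything.")
history = add_turn(history, "user", "Tell me another one.")

# Pass the full history on the next call so the model sees prior context:
# client.chat.completions.create(model="cortecs/phi-4-FP8-Dynamic", messages=history)
print(len(history))
```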

#### LangChain Client

[LangChain](https://python.langchain.com/v0.2/docs/introduction/) is another powerful Python library for building LLM-based applications. It is popular for more complex use cases. First, install the integration package:

```
pip install langchain-openai
```

Query the model by calling the chat completions endpoint. Don't forget to pass your API key and the model URL explicitly if you didn't already set them as environment variables.

```python
from langchain_openai import ChatOpenAI

# The API key is read from OPENAI_API_KEY unless passed via api_key=...
llm = ChatOpenAI(model='cortecs/phi-4-FP8-Dynamic', base_url='<MODEL_URL>')

res = llm.invoke('Tell me a joke.')
print(res.content)
```

Optionally follow the [LangChain docs](https://python.langchain.com/v0.2/docs/tutorials/#specialized-tasks) to:

* [Build an Extraction Chain](https://python.langchain.com/v0.2/docs/tutorials/extraction/)
* [Generate synthetic data](https://python.langchain.com/v0.2/docs/tutorials/data_generation/)
* [Classify text into labels](https://python.langchain.com/v0.2/docs/tutorials/classification/)
* [Summarize text](https://python.langchain.com/v0.2/docs/tutorials/summarization/)

{% hint style="info" %}
For more advanced use cases and easier instance management, check out our client library [cortecs-py](https://docs.cortecs.ai/dedicated-inference/python-client).
{% endhint %}
