Quickstart

Get up and running in just a few clicks

1. Register

Register at cortecs.ai and follow these steps to set up your account:

  • Fill out your billing address in the profile page and press Save.

  • Enter your credit card details.

  • Increase your account balance by pressing Top up.

Profile page after successfully adding 100€ to the account balance

2. Start a model

To start a model, follow these steps:

  • Select a model from our catalog.

  • Start the model and wait until the status indicates it is running. This setup process can take a few minutes to complete.

Model is up and running.

3. Query the model

Our endpoints are compatible with the OpenAI API by default. We assume you have either Python or Node.js set up. This example uses cortecs/phi-4-FP8-Dynamic but works for all models supported by cortecs.

Accessing your model requires an API key, which you can find on your profile page. Once you have a key, set your environment variables by running:

export OPENAI_API_KEY="<YOUR_CORTECS_API_KEY>"
export OPENAI_BASE_URL="<YOUR_MODEL_URL>"

OpenAI Client

You can use popular libraries provided by OpenAI for Python or Node.js. First, install the library:

pip install openai

Query the model by calling the completion endpoint. Don't forget to pass your API key and the model URL if you didn't already set them as environment variables.

from openai import OpenAI

openai_api_key = "<API_KEY>"
openai_api_base = "<MODEL_URL>"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

completion = client.chat.completions.create(
    model="cortecs/phi-4-FP8-Dynamic",
    messages=[
        {
            "role": "user",
            "content": "Tell me a joke."
        }
    ]
)

print(completion.choices[0].message)
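Each call is stateless: to hold a multi-turn conversation, you resend the whole `messages` list, appending the model's reply before the next user turn. A minimal sketch, where the `history` list and `add_turn` helper are illustrative conveniences, not part of the API:

```python
# Conversation state is just the list of messages sent with each request.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def add_turn(history, role, content):
    """Append one conversation turn; roles are 'user' and 'assistant'."""
    history.append({"role": role, "content": content})
    return history

add_turn(history, "user", "Tell me a joke.")

# Pass messages=history to client.chat.completions.create(...), then record
# the model's answer so the next request sees the full conversation:
# add_turn(history, "assistant", completion.choices[0].message.content)

print(len(history))  # → 2
```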

LangChain Client

LangChain is another powerful Python library for building LLM-based applications. It is popular for more complex use cases.

pip install langchain-openai

Query the model by calling the completion endpoint. Don't forget to pass your API key and the model URL if you didn't already set them as environment variables.

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model_name='cortecs/phi-4-FP8-Dynamic', base_url='<MODEL_URL>')

res = llm.invoke('Tell me a joke.')
print(res.content)

Optionally, follow the LangChain docs for more advanced patterns. For easier instance management, check out our client library cortecs-py.
