LLM Workers

Dedicated infrastructure on the fly

Accessing the API requires API credentials, which you can find on your profile page. Once you have them, set the environment variables by running:

export CORTECS_CLIENT_ID="<YOUR_CLIENT_ID>"
export CORTECS_CLIENT_SECRET="<YOUR_CLIENT_SECRET>"
export OPENAI_API_KEY="<YOUR_CORTECS_API_KEY>"
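If you rely on the environment variables, a quick sanity check before constructing the client can save a confusing authentication error later. A minimal sketch (`check_credentials` is a hypothetical helper, not part of cortecs_py; the variable names are taken from the exports above):

```python
import os

def check_credentials(env=os.environ):
    """Return the names of required credential variables that are unset."""
    required = ["CORTECS_CLIENT_ID", "CORTECS_CLIENT_SECRET", "OPENAI_API_KEY"]
    return [name for name in required if not env.get(name)]

# Example with a fully populated mapping: nothing is missing.
populated = {n: "x" for n in
             ["CORTECS_CLIENT_ID", "CORTECS_CLIENT_SECRET", "OPENAI_API_KEY"]}
print(check_credentials(populated))  # []
```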

Alternatively you can pass them directly to the client:

from cortecs_py import Cortecs

client = Cortecs(client_id="<YOUR_CLIENT_ID>", client_secret="<YOUR_CLIENT_SECRET>")
...

Example

from openai import OpenAI
from cortecs_py import Cortecs

cortecs = Cortecs()
my_model = 'cortecs/phi-4-FP8-Dynamic'

# Start a new instance
my_instance = cortecs.ensure_instance(my_model)
client = OpenAI(base_url=my_instance.base_url)  # api_key is read from OPENAI_API_KEY

completion = client.chat.completions.create(
  model=my_model,
  messages=[
    {"role": "user", "content": "Write a joke about LLMs."}
  ]
)
print(completion.choices[0].message.content)

# Stop the instance when done so it no longer accrues costs
cortecs.stop(my_instance.instance_id)
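Because the instance keeps running until it is stopped, it may be safer to wrap the start/stop lifecycle in a context manager so the `stop` call runs even if a request raises. A minimal sketch, assuming only the `ensure_instance`/`stop` methods and `instance_id`/`base_url` attributes shown above (the `DummyCortecs` classes stand in for a real client purely for illustration, so the snippet runs without credentials):

```python
from contextlib import contextmanager

@contextmanager
def running_instance(cortecs, model):
    """Start a dedicated instance and guarantee it is stopped afterwards."""
    instance = cortecs.ensure_instance(model)
    try:
        yield instance
    finally:
        cortecs.stop(instance.instance_id)

# --- illustration with dummy stand-ins (no credentials needed) ---
class DummyInstance:
    base_url = "http://localhost:8000/v1"
    instance_id = "dummy-1"

class DummyCortecs:
    def __init__(self):
        self.stopped = []
    def ensure_instance(self, model):
        return DummyInstance()
    def stop(self, instance_id):
        self.stopped.append(instance_id)

cortecs = DummyCortecs()
with running_instance(cortecs, "cortecs/phi-4-FP8-Dynamic") as inst:
    print(inst.base_url)  # pass this to OpenAI(base_url=...) as above
print(cortecs.stopped)    # ['dummy-1'] -- stop was called on exit
```

With a real `Cortecs()` client the same `with` block replaces the explicit `ensure_instance`/`stop` pair in the example above.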
