Quickstart
Get up and running in just a few clicks
Register at cortecs.ai and follow these steps to set up your account:
Fill out your billing address in the profile page and press Save.
Enter your credit card details.
Press Top up to increase your account balance.
If your balance reaches zero, your instances will be discontinued. To avoid this, use Auto top-up to set an amount that is automatically transferred when your balance falls below a specified threshold.
To start a model, follow these steps:
Select a model from our catalog.
Start the model and wait until the status indicates it is running. This setup process can take a few minutes to complete.
Our endpoints are compatible with the OpenAI API by default. We assume you have either Python or Node.js set up. This example is based on cortecs/phi-4-FP8-Dynamic but works for all models supported by cortecs.
Accessing your model requires an API key, which you can find on your profile page. Once you have a key, set your environment variables by running:
export OPENAI_API_KEY="<YOUR_CORTECS_API_KEY>"
export OPENAI_BASE_URL="<YOUR_MODEL_URL>"
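If you prefer not to install a client library, the endpoint is plain HTTPS: an OpenAI-compatible server accepts a POST to the /chat/completions route with a Bearer token. A minimal stdlib-only sketch; the build_chat_request helper is ours, not part of the cortecs API:

```python
import json
import os
import urllib.request


def build_chat_request(base_url: str, api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request against the OpenAI-compatible /chat/completions route."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


if __name__ == "__main__":
    # Uses the environment variables exported above.
    req = build_chat_request(
        os.environ["OPENAI_BASE_URL"],
        os.environ["OPENAI_API_KEY"],
        "cortecs/phi-4-FP8-Dynamic",
        "Tell me a joke.",
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
```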
You can use popular libraries provided by OpenAI for Python or Node.js. First, install the library:
pip install openai
Query the model by calling the completion endpoint. Don't forget to pass your API key and the model URL if you didn't set them as environment variables already.
from openai import OpenAI
openai_api_key = "<YOUR_CORTECS_API_KEY>"
openai_api_base = "<YOUR_MODEL_URL>"
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
completion = client.chat.completions.create(
model="cortecs/phi-4-FP8-Dynamic",
messages=[
{
"role": "user",
"content": "Tell me a joke."
}
]
)
print(completion.choices[0].message)
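If you want tokens as they are generated rather than a single response, the same create call supports stream=True, which returns an iterator of chunks whose delta fields carry the incremental text. A sketch; the join_deltas helper is ours, not part of the OpenAI SDK:

```python
def join_deltas(chunks) -> str:
    """Concatenate the incremental delta contents of a streamed completion."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:  # some chunks (e.g. the final one) carry no content
            parts.append(delta)
    return "".join(parts)


if __name__ == "__main__":
    from openai import OpenAI

    # Reads OPENAI_API_KEY / OPENAI_BASE_URL from the environment.
    client = OpenAI()
    stream = client.chat.completions.create(
        model="cortecs/phi-4-FP8-Dynamic",
        messages=[{"role": "user", "content": "Tell me a joke."}],
        stream=True,
    )
    print(join_deltas(stream))
```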
LangChain is another powerful Python library for building LLM-based applications. It is popular for more complex use cases.
pip install langchain-openai
Query the model by calling the completion endpoint. Don't forget to pass your API key and the model URL if you didn't set them as environment variables already.
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model_name='cortecs/phi-4-FP8-Dynamic', base_url='<YOUR_MODEL_URL>')
res = llm.invoke('Tell me a joke.')
print(res.content)
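Beyond a plain string, invoke also accepts a list of (role, content) pairs, which lets you prepend a system prompt. A hedged sketch building on the example above; the make_messages helper is ours, not a LangChain API:

```python
def make_messages(system_prompt: str, user_prompt: str) -> list[tuple[str, str]]:
    """Build the (role, content) pairs that ChatOpenAI.invoke accepts."""
    return [("system", system_prompt), ("user", user_prompt)]


if __name__ == "__main__":
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model_name="cortecs/phi-4-FP8-Dynamic", base_url="<YOUR_MODEL_URL>")
    res = llm.invoke(make_messages("You are a concise comedian.", "Tell me a joke."))
    print(res.content)
```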
Optionally, follow the LangChain docs to: