Starting a Model
You can start models via the cortecs.ai/models page. For direct integration within your code, use the cortecs-py library.
Once the model is running, the assigned model URL can be found in your console.
Using a Model
You can interact with the models using curl, or take advantage of their OpenAI-compatible interface to seamlessly integrate them with popular libraries like OpenAI's Python library or LangChain.
Accessing your model requires an API key, which you get on your profile page.
Instruct Models
Text models typically include a prompt template defined by the model creator to optimize performance. When using the /chat/completions endpoint, this template is automatically applied according to the model tokenizer's configuration.
Curl:

curl <MODEL_URL>/chat/completions \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer <OPENAI_API_KEY>" \
    -d '{
        "model": "<MODEL_HF_NAME>",
        "messages": [
            {
                "role": "user",
                "content": "Tell me a joke."
            }
        ]
    }'
Python (OpenAI client):

from openai import OpenAI

openai_api_key = "<OPENAI_API_KEY>"
openai_api_base = "<MODEL_URL>"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

# Pick the model served at this URL.
models = client.models.list()
model = models.data[0].id

completion = client.chat.completions.create(
    model=model,
    messages=[
        {"role": "user", "content": "Tell me a joke."}
    ],
)
print(completion.choices[0].message)
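Since the endpoint speaks plain OpenAI-style JSON over HTTP, no particular client library is required; the payload below mirrors what the curl example sends. A sketch using only the standard library (the request itself is commented out because it needs a live model URL):

```python
import json
import urllib.request  # used by the commented-out request below

MODEL_URL = "<MODEL_URL>"       # from your console
API_KEY = "<OPENAI_API_KEY>"    # from your profile page

# The same JSON body the curl example sends:
payload = {
    "model": "<MODEL_HF_NAME>",
    "messages": [{"role": "user", "content": "Tell me a joke."}],
}
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}",
}

# With a running model and real values filled in:
# req = urllib.request.Request(
#     f"{MODEL_URL}/chat/completions",
#     data=json.dumps(payload).encode("utf-8"),
#     headers=headers,
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```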
Embedding Models
Models marked with the Embeddings tag can be used to produce text embeddings.
Curl:

curl <MODEL_URL>/embeddings \
    -H "Authorization: Bearer <OPENAI_API_KEY>" \
    -H "Content-Type: application/json" \
    -d '{
        "input": "The food was delicious and the waiter...",
        "model": "<MODEL_NAME>",
        "encoding_format": "float"
    }'
Python (OpenAI client):

from openai import OpenAI

openai_api_key = "<OPENAI_API_KEY>"
openai_api_base = "<MODEL_URL>"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id

response = client.embeddings.create(
    input="The food was delicious and the waiter...",
    model=model,
)
print(response.data[0].embedding)
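Embeddings are typically compared with cosine similarity, which scores how close two texts are in meaning. A small pure-Python helper (the short vectors below are toy stand-ins for real model output, which usually has hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embeddings of three texts:
v1 = [0.1, 0.2, 0.3]
v2 = [0.1, 0.2, 0.3]
v3 = [-0.3, 0.1, -0.2]

print(cosine_similarity(v1, v2))  # identical vectors score ~1.0
print(cosine_similarity(v1, v3))  # dissimilar vectors score lower
```

In practice you would feed `response.data[i].embedding` values from the example above into a helper like this, or into a vector database that performs the same comparison at scale.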
Multimodal Models
Models marked with the Image or Video tag can also accept image or video inputs in addition to textual input. For video input, replace "image_url" with "video_url".
Curl:

curl <MODEL_URL>/chat/completions \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer <OPENAI_API_KEY>" \
    -d '{
        "model": "<MODEL_NAME>",
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": "What'\''s in this image?"
                    },
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
                        }
                    }
                ]
            }
        ],
        "max_tokens": 300
    }'
Python (OpenAI client):

from openai import OpenAI

openai_api_key = "<OPENAI_API_KEY>"
openai_api_base = "<MODEL_URL>"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id

response = client.chat.completions.create(
    model=model,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
                    },
                },
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0])
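The examples above pass a public image URL. The OpenAI-style "image_url" field also accepts a base64 data URL, which is useful for images that are not publicly reachable (whether a given model server accepts data URLs is an assumption to verify for your deployment). A sketch that builds one from a local file (the file name is illustrative):

```python
import base64

def image_to_data_url(path, mime="image/jpeg"):
    """Read a local image and wrap its bytes as a base64 data URL."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# Use the result wherever the examples above use a plain URL, e.g.:
# {"type": "image_url", "image_url": {"url": image_to_data_url("photo.jpg")}}
```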