
Basics



Starting a Model

You can start models via the cortecs.ai/models page. For direct integration within your code, use the cortecs-py library.

Once the model is running, the assigned model URL can be found in your console.

Using a Model

You can interact with the models directly via curl, or take advantage of their OpenAI-compatible interface to integrate them seamlessly with popular libraries such as OpenAI's Python client or LangChain.

Accessing your model requires an API key, which you can find on your profile page.

Instruct Models

Text models typically include a prompt template defined by the model creator to optimize performance. When using the /chat/completions endpoint, this template is automatically applied according to the model tokenizer's configuration.

curl <MODEL_URL>/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <OPENAI_API_KEY>" \
  -d '{
    "model": "<MODEL_HF_NAME>",
    "messages": [
      {
        "role": "user",
        "content": "Tell me a joke."
      }
    ]
  }'
from openai import OpenAI

openai_api_key = "<OPENAI_API_KEY>"
openai_api_base = "<MODEL_URL>"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id

completion = client.chat.completions.create(
    model=model,
    messages=[
        {
            "role": "user",
            "content": "Tell me a joke."
        }
    ]
)

print(completion.choices[0].message)
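The template itself lives in the model's tokenizer configuration, so you normally never apply it by hand. Purely as an illustration (this is a hypothetical ChatML-style template, not the template of any particular model), the server-side formatting does something like:

```python
# Illustrative only: a hypothetical ChatML-style chat template.
# Real templates are defined per model in its tokenizer configuration
# and are applied server-side by the /chat/completions endpoint.

def apply_chat_template(messages):
    """Render a list of chat messages into a single prompt string."""
    prompt = ""
    for message in messages:
        prompt += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"
    # Leave the assistant turn open so the model continues from here.
    prompt += "<|im_start|>assistant\n"
    return prompt

messages = [{"role": "user", "content": "Tell me a joke."}]
print(apply_chat_template(messages))
```

This is why /chat/completions takes a structured messages list rather than a raw prompt: the server guarantees the formatting the model was trained with.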

Embedding Models

Models marked with the Embeddings tag can be used to produce text embeddings.

curl <MODEL_URL>/embeddings \
  -H "Authorization: Bearer <OPENAI_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "input": "The food was delicious and the waiter...",
    "model": "<MODEL_NAME>",
    "encoding_format": "float"
  }'
from openai import OpenAI

openai_api_key = "<OPENAI_API_KEY>"
openai_api_base = "<MODEL_URL>"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id

response = client.embeddings.create(
    input="The food was delicious and the waiter...",
    model=model
)

print(response.data[0].embedding)
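A common next step is to compare embeddings with cosine similarity, e.g. for semantic search. A minimal, dependency-free sketch; the short vectors here are made up for illustration, whereas real embeddings come from the /embeddings response and have many more dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dimensional embeddings for two related texts.
food_review = [0.1, 0.8, 0.2]
restaurant_query = [0.2, 0.7, 0.1]
print(cosine_similarity(food_review, restaurant_query))
```

Values close to 1.0 indicate semantically similar texts; in practice you would embed a query and rank documents by this score.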

Multimodal models

Models marked with the Image or Video tag can also accept image or video inputs in addition to text input.

For video input, replace the "image_url" with "video_url".

curl <MODEL_URL>/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <OPENAI_API_KEY>" \
  -d '{
    "model": "<MODEL_NAME>",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "What'\''s in this image?"
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
            }
          }
        ]
      }
    ],
    "max_tokens": 300
  }'
from openai import OpenAI

openai_api_key = "<OPENAI_API_KEY>"
openai_api_base = "<MODEL_URL>"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id

response = client.chat.completions.create(
    model=model,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
                    },
                },
            ],
        }
    ],
    max_tokens=300,
)

print(response.choices[0])
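Besides public URLs, OpenAI-compatible endpoints commonly accept images inline as base64-encoded data URLs (whether this works depends on the deployed model and server). A sketch of building such a content part; the function name and the fake bytes are for illustration, and in practice you would pass the bytes of a real local image:

```python
import base64

def image_content_part(image_bytes, mime_type="image/jpeg"):
    """Build an OpenAI-style image_url content part from raw image bytes."""
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return {
        "type": "image_url",
        "image_url": {"url": f"data:{mime_type};base64,{b64}"},
    }

# In practice: image_content_part(open("photo.jpg", "rb").read())
part = image_content_part(b"\x89fake-image-bytes")
print(part["image_url"]["url"][:30])
```

The resulting dict can be placed in the "content" list exactly like the "image_url" entries in the examples above.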
