Quickstart

Get up and running in just a few minutes.

1. Register

Register at cortecs.ai and follow these steps to set up your account:

  • Fill out your billing address in the profile page and press Save.

  • Enter your credit card details.

  • Increase your account balance by pressing Top up.

If your balance reaches zero, your instances will be stopped. To avoid this, use Auto top-up to automatically transfer a set amount whenever your balance falls below a specified threshold.

2. Start a model

To start a model, follow these steps:

  • Select a model from our catalog.

  • Ensure the model fits your needs by reviewing the quality assessment.

  • Start the model and wait until the status indicates it is running.

The model initialization can take up to 20 minutes, depending on the model size. To speed up the initialization process, consider using smaller models.
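If you are scripting against the service, the wait-until-running step can be automated by polling. The sketch below is generic: `wait_until_running` and the `get_status` callable are hypothetical stand-ins, since the cortecs dashboard may expose model status differently; a stub simulates the status transitions.

```python
import time

def wait_until_running(get_status, timeout=1200, interval=10):
    """Poll get_status() until it reports 'running' or timeout (seconds) passes.

    get_status is a hypothetical callable returning the model's current
    status string; replace it with however you read status in practice.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status() == "running":
            return True
        time.sleep(interval)
    return False

# Stub: reports 'starting' twice, then 'running'.
states = iter(["starting", "starting", "running"])
print(wait_until_running(lambda: next(states), interval=0))  # -> True
```

A 20-minute default timeout matches the upper bound mentioned above; tune `interval` to how often you want to check.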

3. Query the model

Our endpoints are compatible with the OpenAI API by default. We assume you have either Python or Node.js set up. This example uses meta-llama/Meta-Llama-3.1-8B-Instruct but works for all models supported by cortecs.

Using OpenAI

OpenAI provides Python and Node.js libraries that are compatible with Cortecs endpoints. First, install the library:

pip install openai

Query the model by calling the completion endpoint. Don't forget to set your API key, the model URL, and the model name.

from openai import OpenAI

client = OpenAI(api_key='<API_KEY>',
                base_url='<MODEL_URL>')

completion = client.chat.completions.create(
  model="meta-llama/Meta-Llama-3.1-8B-Instruct",
  messages=[
    {"role": "user", "content": "Tell me a joke."}
  ]
)

print(completion.choices[0].message.content)
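For long responses you can also stream the completion by passing stream=True, in which case the endpoint returns chunks whose choices[0].delta.content holds the next text fragment (or None for bookkeeping chunks). The sketch below shows how the fragments reassemble into the full reply; a stub list stands in for a live stream so it runs without credentials.

```python
def collect_stream(fragments):
    """Join the non-empty text fragments of a streamed chat completion.

    With the OpenAI client you would pass stream=True to
    client.chat.completions.create(...) and feed in
    chunk.choices[0].delta.content for each chunk; here a plain list
    of strings (and None) stands in for that live stream.
    """
    parts = []
    for delta in fragments:
        if delta:  # some chunks carry None instead of text
            parts.append(delta)
    return "".join(parts)

# Stub fragments as the server might send them:
fragments = ["Why did the ", "chicken ", None, "cross the road?"]
print(collect_stream(fragments))  # -> Why did the chicken cross the road?
```
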

Using LangChain

LangChain is a powerful Python library for building LLM-based applications and is recommended for more complex use cases.

pip install langchain-openai

Query the model by calling the completion endpoint. Don't forget to set your API key, the model URL, and the model name.

from langchain_openai import ChatOpenAI

# ChatOpenAI targets the chat completions endpoint, which matches
# instruct/chat models such as Meta-Llama-3.1-8B-Instruct.
llm = ChatOpenAI(api_key='<API_KEY>',
                 base_url='<MODEL_URL>',
                 model='meta-llama/Meta-Llama-3.1-8B-Instruct')

res = llm.invoke('Tell me a joke.')
print(res.content)
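LangChain's main advantage over raw client calls is composing prompts with models. At its core, a prompt template just fills named slots in a string; the stdlib-only sketch below illustrates that idea (LangChain's own PromptTemplate adds validation and composition on top, and `format_prompt` here is an illustrative helper, not a LangChain function).

```python
template = "Tell me a {adjective} joke about {topic}."

def format_prompt(template, **values):
    """Fill named placeholders, mimicking a minimal prompt template."""
    return template.format(**values)

prompt = format_prompt(template, adjective="short", topic="programmers")
print(prompt)  # -> Tell me a short joke about programmers.
```

The formatted string can then be passed to llm.invoke just like the literal prompt above.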

Optionally, follow the LangChain docs to explore more advanced features such as chains and agents.
