Multi-agents

Collective Intelligence, Amplified Performance

Agents with CrewAI

As cortecs is OpenAI-compatible, it works out of the box with CrewAI. Follow the basic example from their docs and put your cortecs credentials into the .env file. As outlined in the complementary LiteLLM docs, prepend 'openai/' to your model name. This tells LiteLLM that you are using an OpenAI-compatible endpoint.

OPENAI_API_KEY=<YOUR_CORTECS_API_KEY>
MODEL=openai/<YOUR_CORTECS_MODEL_NAME> #in HF format
CORTECS_CLIENT_ID=<YOUR_CORTECS_CLIENT_ID>
CORTECS_CLIENT_SECRET=<YOUR_CLIENT_SECRET>
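The 'openai/' prefix is only routing information for LiteLLM; everything after it is the model name in Hugging Face format. A minimal sketch of how that split works (the model name below is a placeholder, not a specific cortecs model):

```python
import os

# Hypothetical value standing in for your real .env entry.
os.environ.setdefault("MODEL", "openai/meta-llama/Llama-3.3-70B-Instruct")

model = os.environ["MODEL"]
# LiteLLM reads the part before the first '/' as the provider;
# 'openai' selects its OpenAI-compatible route.
provider, _, model_name = model.partition("/")
print(provider)    # openai
print(model_name)  # meta-llama/Llama-3.3-70B-Instruct
```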

Dynamic crews

In some cases you might want to start extensive processes with many agents. With dedicated inference you avoid running into request limits.

Adjust the basic example

You can use cortecs-py to start a model and build your agents on top of it. To ensure you are dynamically provisioning your resources and shutting them down as soon as they are not needed, add the following code to the ExampleCrew class.

import os
from typing import Any
from dotenv import load_dotenv

from crewai import Agent, Crew, Process, Task
from crewai.project import agent, crew, task, after_kickoff, CrewBase

from cortecs_py import Cortecs

load_dotenv()

@CrewBase
class ExampleCrew:
    
    def __init__(self) -> None:
        self.start_llm()
    
    def start_llm(self) -> None:
        self.cortecs_client = Cortecs()
        # Strip the LiteLLM provider prefix to get the plain model name.
        self.model = os.environ["MODEL"].removeprefix("openai/")
        
        print(f"Starting model {self.model}...")
        # Provision a dedicated instance serving the model.
        self.instance = self.cortecs_client.ensure_instance(self.model)
        # Point OpenAI-compatible clients at the new instance.
        os.environ["OPENAI_API_BASE"] = self.instance.base_url
    
    @after_kickoff
    def stop_and_delete_llm(self, result: Any) -> Any:
        self.cortecs_client.stop(self.instance.instance_id)
        self.cortecs_client.delete(self.instance.instance_id)
        print(f"Model {self.model} stopped and deleted.")
        return result
        
    # The rest of the ExampleCrew stays the same...

Executing crewai run in your project root will:

  1. Start the model as specified in the .env

  2. Kickoff your crew

  3. Shut down the model as soon as the crew is finished

The full code example is provided on GitHub.
