
Multi-agents

Collective Intelligence, Amplified Performance


Last updated 3 months ago

Multi-agent workflows are systems or processes managed by multiple autonomous agents. These agents can collaborate, communicate, and divide tasks to achieve a shared goal. Each agent is usually designed to specialize in specific tasks, and their collective effort ensures the completion of complex workflows that might be difficult for a single agent to handle.

Some examples of multi-agent workflows include:

  • Business Process Automation: Automating repetitive tasks such as invoice processing, where different agents handle scanning, validation, and data entry.

  • Customer Service: Agents managing inquiries, where one handles general FAQs while another handles account-specific issues.

  • Supply Chain Management: Coordinating multiple agents for inventory tracking, shipment scheduling, and supplier communication.

  • AI Research: Collaboration between agents for data preprocessing, model training, and performance evaluation.

CrewAI is a platform designed to streamline such workflows, while cortecs provides the GPU power needed to run them.

Agents with CrewAI

As cortecs is OpenAI-compatible, it works out of the box with CrewAI. Follow the basic example from their docs and put your cortecs credentials into the .env file. As outlined in the complementary liteLLM docs, prepend 'openai/' to your model's name. This prefix indicates that you are using an OpenAI-compatible endpoint.

OPENAI_API_KEY=<YOUR_CORTECS_API_KEY>
MODEL=openai/<YOUR_CORTECS_MODEL_NAME> # in HF format
CORTECS_CLIENT_ID=<YOUR_CORTECS_CLIENT_ID>
CORTECS_CLIENT_SECRET=<YOUR_CLIENT_SECRET>
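
CrewAI (via liteLLM) uses the full openai/… string to select the OpenAI-compatible route, while cortecs itself expects the bare HF-format model name. A minimal sketch of that split — the model name below is a hypothetical placeholder, not a real cortecs model id:

```python
# MODEL combines a liteLLM provider prefix with an HF-format model id.
model_env = "openai/meta-llama/Llama-3.3-70B-Instruct"  # hypothetical value

# The "openai/" prefix routes requests through liteLLM's OpenAI-compatible path...
provider = model_env.partition("/")[0]
print(provider)  # openai

# ...while cortecs needs the bare HF-format name, without the prefix.
bare_name = model_env.removeprefix("openai/")
print(bare_name)  # meta-llama/Llama-3.3-70B-Instruct
```

This is exactly what the ExampleCrew class below does when it calls removeprefix("openai/") before asking cortecs to start an instance.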

Dynamic crews

In some cases you might want to start extensive processes with many agents. With dedicated inference you avoid running into request limits.

Adjust the basic example

import os
from typing import Any
from dotenv import load_dotenv

from crewai import Agent, Crew, Process, Task
from crewai.project import agent, crew, task, after_kickoff, CrewBase

from cortecs_py import Cortecs

load_dotenv()

@CrewBase
class ExampleCrew:
    
    def __init__(self) -> None:
        self.start_llm()
    
    def start_llm(self) -> None:
        self.cortecs_client = Cortecs()
        self.model = os.environ["MODEL"].removeprefix("openai/")
        
        print(f"Starting model {self.model}...")
        self.instance = self.cortecs_client.ensure_instance(self.model)
        os.environ["OPENAI_API_BASE"] = self.instance.base_url
    
    @after_kickoff
    def stop_and_delete_llm(self, result: Any) -> Any:
        # Release the dedicated instance once the crew has finished.
        self.cortecs_client.stop(self.instance.instance_id)
        self.cortecs_client.delete(self.instance.instance_id)
        print(f"Model {self.model} stopped and deleted.")
        return result
        
    # The rest of the ExampleCrew stays the same...

Executing crewai run in your project root will:

  1. Start the model as specified in the .env

  2. Kickoff your crew

  3. Shut down the model as soon as the crew is finished
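
That start/kickoff/shutdown lifecycle can also be sketched as a context manager, which guarantees the instance is stopped and deleted even if the crew raises. The provisioned wrapper and the stub client below are hypothetical illustrations of the pattern, not part of cortecs-py:

```python
from contextlib import contextmanager

# Hypothetical stub standing in for the cortecs-py client, for illustration only.
class StubCortecs:
    def __init__(self):
        self.calls = []

    def ensure_instance(self, model):
        self.calls.append("start")
        return {"instance_id": "inst-1", "base_url": "https://example.invalid/v1"}

    def stop(self, instance_id):
        self.calls.append("stop")

    def delete(self, instance_id):
        self.calls.append("delete")

@contextmanager
def provisioned(client, model):
    instance = client.ensure_instance(model)      # 1. start the model
    try:
        yield instance                            # 2. run the crew while it is up
    finally:
        client.stop(instance["instance_id"])      # 3. shut it down when finished
        client.delete(instance["instance_id"])

client = StubCortecs()
with provisioned(client, "my-model") as inst:
    pass  # kick off your crew here, pointing OPENAI_API_BASE at inst["base_url"]
print(client.calls)  # ['start', 'stop', 'delete']
```

The try/finally mirrors what the @after_kickoff hook achieves: compute is billed only while the crew is actually running.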

You can use cortecs-py to start a model and build your agents on top of it. To dynamically provision your resources and shut them down as soon as they are no longer needed, add the code above to the ExampleCrew class.

The full code example is provided on GitHub.
