Multi-agent system

Collective Intelligence, Amplified Performance

Agents with CrewAI

As cortecs is OpenAI-compatible, it works out of the box with CrewAI. Follow the basic example from their docs and put your cortecs credentials into the .env file. As outlined in the complementary liteLLM docs, prepend 'openai/' to your model's URL; this tells liteLLM that you are using an OpenAI-compatible endpoint.

OPENAI_API_KEY=<YOUR_CORTECS_API_KEY>
OPENAI_MODEL_NAME=<YOUR_CORTECS_MODEL_NAME>
OPENAI_BASE_URL=openai/<YOUR_CORTECS_MODEL_URL>
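The 'openai/' prefix is the only transformation applied to the model URL. As a minimal illustration (the helper name is ours, not part of cortecs-py or liteLLM):

```python
def to_litellm_base_url(model_url: str) -> str:
    """Prefix a model URL with 'openai/' so liteLLM routes it
    as an OpenAI-compatible endpoint. Idempotent: an already
    prefixed URL is returned unchanged."""
    prefix = "openai/"
    if model_url.startswith(prefix):
        return model_url
    return prefix + model_url
```

For example, `to_litellm_base_url("https://<YOUR_CORTECS_MODEL_URL>")` yields the value you would place in `OPENAI_BASE_URL`.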

Dynamic crews

In some cases you might want to start extensive processes with many agents. With dedicated inference you avoid running into request limits.

Option 1: Adjust agents manually

You can use cortecs-py to start a model and build your agents on top of it.

from crewai import Agent
from langchain_openai import ChatOpenAI
from cortecs_py import Cortecs

client = Cortecs(cortecs_id=<YOUR_ID>, cortecs_secret=<YOUR_SECRET>)
# start a dedicated instance and wait until it is ready
instance_id, llm_info = client.start_and_poll(<YOUR_CORTECS_MODEL_NAME>)
llm = ChatOpenAI(**llm_info)

agent = Agent(llm=llm, ...)

Option 2: DedicatedCrewBase

For convenience you can also use the CrewAI integration. Starting from the CrewAI example, navigate to /src/<example>/crew.py and change the decorator from CrewBase to DedicatedCrewBase. The DedicatedCrewBase automatically starts the LLM based on your .env file. All agents are then linked by default to the model specified in .env.

from cortecs_py.integrations import DedicatedCrewBase, DedicatedCrew

@DedicatedCrewBase  # --> replace CrewBase 
class ExampleCrew():
	...

To make sure that your model is also shut down as soon as the crew is finished, replace Crew with DedicatedCrew. Don't forget to pass the instance_id and the cortecs client to the constructor.

from crewai import Crew, Process
from crewai.project import crew
from cortecs_py.integrations import DedicatedCrewBase, DedicatedCrew

@DedicatedCrewBase
class ExampleCrew():
	...
	
	@crew
	def crew(self) -> Crew:
		return DedicatedCrew(  # --> replace Crew
			instance_id=self.instance_id,  # --> add this parameter
			client=self.client,  # --> add this parameter
			agents=self.agents, 
			tasks=self.tasks, 
			process=Process.sequential,
			verbose=True,
		)

Executing crewai run in your project root will:

  1. start the model as specified in the .env

  2. kickoff your crew

  3. shut down the model as soon as the crew is finished
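The lifecycle above can be sketched as a start/kickoff/shutdown wrapper. This is an illustration of what DedicatedCrew automates, not the actual implementation; the stub client and its method names are assumptions for the sketch:

```python
class StubClient:
    """Stand-in for the cortecs client; real method names may differ."""

    def __init__(self):
        self.events = []

    def start(self):
        self.events.append("start")
        return "instance-1"

    def stop(self, instance_id):
        self.events.append(f"stop:{instance_id}")


def run_with_dedicated_model(client, kickoff):
    """Start a model, run the crew, and always shut the model down."""
    instance_id = client.start()
    try:
        return kickoff()
    finally:
        client.stop(instance_id)  # runs even if kickoff() raises
```

The try/finally ensures the dedicated instance is stopped even when the crew fails, so you never pay for an orphaned model.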

The full code example is provided on GitHub.
