Multi-agent system
Collective Intelligence, Amplified Performance
Agents with CrewAI
As cortecs is OpenAI-compatible, it works out-of-the-box with CrewAI. Follow the basic example from their docs and put your cortecs credentials into the .env file. As outlined in the complementary LiteLLM docs, prepend 'openai/' to your model name. This prefix tells LiteLLM that you are calling an OpenAI-compatible endpoint.
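For reference, a minimal .env could look like the following sketch. The variable names shown here are the standard ones LiteLLM reads for OpenAI-compatible endpoints; the URL and key are placeholders, not actual cortecs values, so check your cortecs dashboard for the real ones:

```
# Placeholder credentials and endpoint for an OpenAI-compatible API
# (use the URL and key from your cortecs dashboard)
OPENAI_API_KEY=your-cortecs-api-key
OPENAI_API_BASE=https://<your-instance>/v1
# Note the 'openai/' prefix: it routes requests through
# LiteLLM's OpenAI-compatible provider
MODEL=openai/<model-name>
```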
Dynamic crews
In some cases you might want to start extensive processes with many agents. With dedicated inference you avoid running into request limits.
Option 1: Adjust agents manually
You can use cortecs-py to start a model and build your agents on top of it.
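A sketch of this manual flow is shown below. It assumes a cortecs-py client class named Cortecs with start/stop methods that return an instance id and base URL; these names, the model placeholder, and the returned fields are assumptions, so check the cortecs-py documentation for the exact API:

```python
from crewai import Agent, Crew, LLM, Task
from cortecs_py import Cortecs  # assumed client name; see the cortecs-py docs

client = Cortecs()  # reads cortecs credentials from the .env file

# Start a dedicated instance; 'start' and the returned fields are assumptions
instance = client.start("<model-name>")

# Point CrewAI's LLM wrapper at the dedicated endpoint; the 'openai/'
# prefix tells LiteLLM this is an OpenAI-compatible endpoint
llm = LLM(model="openai/<model-name>", base_url=instance.base_url)

researcher = Agent(
    role="Researcher",
    goal="Summarize the given topic",
    backstory="An analyst who writes concise summaries.",
    llm=llm,
)
task = Task(
    description="Summarize recent trends in multi-agent systems.",
    expected_output="A short summary.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()

# Shut the dedicated instance down manually when done
client.stop(instance.instance_id)
```

Because the model is started and stopped explicitly, you control exactly how long the dedicated instance runs; the trade-off is that you must remember to stop it yourself.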
Option 2: DedicatedCrewBase
For convenience you can also use the CrewAI integration. Starting from the CrewAI example, navigate to /src/<example>/crew.py and change the decorator from CrewBase to DedicatedCrewBase. The DedicatedCrewBase decorator automatically starts the LLM based on your .env file; all agents are then linked to that model by default. To make sure that your model is also shut down as soon as the crew is finished, replace Crew with DedicatedCrew. Don't forget to pass instance_id and the cortecs client to its constructor.
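Under these assumptions, a modified crew.py could look roughly like the sketch below. The import paths, the self.instance_id and self.client attribute names, and the config keys are assumptions drawn from CrewAI's standard project layout, so verify them against the cortecs-py integration docs:

```python
from crewai import Agent, Process, Task
from crewai.project import agent, crew, task
from cortecs_py.integrations.crewai import (  # assumed import path
    DedicatedCrew,
    DedicatedCrewBase,
)


@DedicatedCrewBase  # starts the LLM defined in .env and links agents to it
class ExampleCrew:
    @agent
    def researcher(self) -> Agent:
        return Agent(config=self.agents_config["researcher"])

    @task
    def research_task(self) -> Task:
        return Task(config=self.tasks_config["research_task"])

    @crew
    def crew(self) -> DedicatedCrew:
        # Passing instance_id and the cortecs client lets DedicatedCrew
        # shut the model down once the crew finishes
        return DedicatedCrew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            instance_id=self.instance_id,  # assumed attribute name
            client=self.client,            # assumed attribute name
        )
```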
Executing crewai run in your project root will:
1. start the model as specified in the .env file
2. kick off your crew
3. shut down the model as soon as the crew is finished
The full code example is provided on GitHub.