Provisioning API
Cortecs lets you provision dedicated LLM instances in three ways:

- Cortecs Web App – an intuitive UI for quick setup
- Provisioning REST API – for automation, scripting, and integration into your workflows
- Python Client – a thin wrapper around the REST API
This page covers the Provisioning API, which lets you start and stop dedicated models programmatically (a minimal sketch follows the list below). Use it to:

- Automate resource allocation
- Reduce costs by shutting down unused instances
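To make the start/stop flow concrete, here is a minimal sketch using Python's `requests` library. Note that the base URL, the `/instances` endpoint paths, the payload fields, the response shape, and the `CORTECS_API_KEY` environment variable are all illustrative assumptions, not the documented API; the authoritative values are given in the sections referenced below.

```python
import os

import requests

# Assumed base URL and auth scheme for illustration only -- substitute the
# values from the authentication and endpoint sections of this documentation.
BASE_URL = "https://api.cortecs.ai/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['CORTECS_API_KEY']}"}


def start_instance(model_id: str) -> dict:
    """Provision a dedicated instance for a model (hypothetical endpoint)."""
    resp = requests.post(
        f"{BASE_URL}/instances",
        headers=HEADERS,
        json={"model": model_id},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # assumed to contain the instance ID and its status


def stop_instance(instance_id: str) -> None:
    """Shut down a running instance so it stops incurring cost (hypothetical endpoint)."""
    resp = requests.delete(
        f"{BASE_URL}/instances/{instance_id}",
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    # Example run: start an instance, then tear it down when no longer needed.
    instance = start_instance("example-model-id")  # placeholder model ID
    print("started:", instance)
    stop_instance(instance["id"])
```

In automated workflows, pairing `start_instance` with a `stop_instance` call in a `finally` block (or a scheduled job) is a simple way to ensure unused instances are not left running.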
Refer to the following sections for authentication, endpoint URLs, and request examples.