Introduction
Maximizing Speed, Minimizing Token Costs
Welcome to the cortecs docs! cortecs makes it easy to run dedicated language models at maximum performance 🚀.
Why Dedicated Inference?
Dedicated inference offers exclusive access to a specific model, ensuring that you are the sole user of the underlying compute resources. This makes it particularly suitable for applications that:
Need guaranteed latency
Run heavy workloads
Send many requests (no request limits)
Require high data security
For fully automated resource management, see cortecs-py.
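To make this concrete, here is a minimal sketch of the lifecycle cortecs-py automates: provision a dedicated instance, query it, and shut it down again. The class and parameter names below (`Cortecs`, `DedicatedLLM`, `model_name`) and the model id are illustrative assumptions, not the confirmed API; see the cortecs-py documentation for the exact interface.

```python
# Hypothetical sketch of automated resource management with cortecs-py.
# Names below are illustrative assumptions -- consult the cortecs-py
# docs for the actual classes and parameters.
from cortecs_py import Cortecs
from cortecs_py.integrations import DedicatedLLM

client = Cortecs()  # assumed to pick up credentials from the environment

# Assumed context manager: provisions a dedicated instance on entry
# and shuts it down again (ending billing) on exit.
with DedicatedLLM(client=client, model_name="meta-llama/Llama-3.1-8B-Instruct") as llm:
    answer = llm.invoke("Summarize why dedicated inference helps latency.")
    print(answer.content)
```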
Which model should I use?
cortecs offers a variety of popular models. Visit our models page to explore the available options; each model comes with detailed information and quality assessments to help you determine whether it meets your requirements. As a rule of thumb, more complex tasks call for larger models, while smaller models respond faster. For most use cases, we recommend models that support 🔵Instant provisioning.
Don't see the model you want? Join our Discord to request or upvote it.
Next steps
Register at cortecs.ai
Follow the quick start (a minimal query example follows this list)
Accomplish complex tasks using cortecs-py
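To get a feel for the quick start, the sketch below sends a single chat request to a provisioned instance, assuming it exposes an OpenAI-compatible endpoint; the base URL, API key, and model id are placeholders, and the quick start shows the exact values for your instance.

```python
# Minimal query against a dedicated instance, assuming an
# OpenAI-compatible endpoint. Base URL, key, and model id are
# placeholders -- take the real values from the quick start.
from openai import OpenAI

client = OpenAI(
    base_url="https://<your-instance>.cortecs.ai/v1",  # placeholder
    api_key="YOUR_CORTECS_API_KEY",                    # placeholder
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model id
    messages=[{"role": "user", "content": "Hello from my dedicated model!"}],
)
print(response.choices[0].message.content)
```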