# Introduction

<figure><img src="https://2211217319-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FYGsEKyV2Zq4Q8fEJQT40%2Fuploads%2FQrkj5yjJGbBBTGGS5DGw%2Fsocial_media_banner_flag.png?alt=media&#x26;token=af54490c-0f53-46c0-8156-e9dba359d963" alt=""><figcaption></figcaption></figure>

Built for developers and teams deploying AI applications, **Cortecs** combines **performance and compliance** in a unified platform. It provides a gateway to run large language models across a sovereign, scalable, and privacy-first network.

Built on the principles of [Sky Computing](https://sigops.org/s/conferences/hotos/2021/papers/hotos21-s02-stoica.pdf), it treats the cloud as a global utility instead of a single-vendor solution. Workloads are dynamically routed across multiple clouds, continuously optimized for speed, cost, and availability.

## Key benefits

Sky Inference brings the vision of Sky Computing into practical use, giving you a simple, unified way to run AI workloads across many clouds.

| Feature                        | Description                                                  |
| ------------------------------ | ------------------------------------------------------------ |
| **Unified API**                | One endpoint to access multiple cloud providers              |
| **Resilient by design**        | If a provider goes down, traffic automatically reroutes      |
| **Compliant by design**        | Fully compliant with GDPR and custom regulatory requirements |
| **Cost and performance aware** | Dynamically optimized routing for latency or cost-efficiency |
| **No subscription**            | Pay only for what you use, with no recurring fees            |
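
The unified API in the table above means every provider is reached through one endpoint. As a minimal sketch, the snippet below builds (but does not send) a chat-style request; the endpoint path, model name, and bearer-token scheme are assumptions for illustration only, so check the [Quick Start](https://docs.cortecs.ai/serverless-inference/quickstart) guide for the actual values.

```python
import json
from urllib import request

# Hypothetical values for illustration -- see the Quick Start guide
# for the real endpoint, model identifiers, and authentication scheme.
API_URL = "https://api.cortecs.ai/v1/chat/completions"
API_KEY = "YOUR_API_KEY"

def build_chat_request(prompt: str, model: str = "example-model") -> request.Request:
    """Build an HTTP request for the unified endpoint (not sent here)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("What is Sky Computing?")
print(req.full_url)      # the single endpoint used for every provider
print(req.get_method())  # POST
```

Whichever provider ultimately serves the model, the request shape and endpoint stay the same; rerouting happens behind the gateway.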

## Key concept

Our managed approach breaks with traditional routers, making Cortecs the only AI gateway that is GDPR-ready by default. Other routers may claim compliance, but it typically covers only their own routing layer, leaving you liable for downstream transfers and triggering **legal reviews for every new model.**

Cortecs eliminates this risk by acting as your primary Data Processor and legally integrating these foundational models under our umbrella as Subprocessors. With just one DPA, you get instant access to the world's best AI models while we absorb the legal overhead and cross-border compliance, turning months of paperwork into immediate, compliant usage.

<figure><img src="https://2211217319-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FYGsEKyV2Zq4Q8fEJQT40%2Fuploads%2FmXWeFejzsW3mRXlXfjk6%2Fcomparison_draft.png?alt=media&#x26;token=d32c1b42-2703-41b2-a945-5650844578c2" alt=""><figcaption></figcaption></figure>

## First steps

Ready to try **Cortecs**? Here's how to get started:

1. Register at [cortecs.ai](https://cortecs.ai)
2. Explore the [Quick Start](https://docs.cortecs.ai/serverless-inference/quickstart) Guide
3. Join the Community:

   * 💬 [Join us on Discord](https://discord.gg/bPFEFcWBhp)
   * 📩 [Contact Support](mailto:support@cortecs.ai)
   * 🔐 [View our Privacy Policy](https://cortecs.ai/privacyPolicy)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.cortecs.ai/introduction.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
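
As a small sketch of the pattern above, the helper below attaches a URL-encoded `ask` parameter to a documentation page URL (the example question is illustrative):

```python
from urllib.parse import urlencode

def build_ask_url(page_url: str, question: str) -> str:
    """Attach the `ask` query parameter to a documentation page URL."""
    return f"{page_url}?{urlencode({'ask': question})}"

url = build_ask_url(
    "https://docs.cortecs.ai/introduction.md",
    "Which models are available?",
)
print(url)
# https://docs.cortecs.ai/introduction.md?ask=Which+models+are+available%3F
```

An HTTP GET on the resulting URL returns the answer together with the supporting excerpts and sources.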
