# Langfuse

[Langfuse](https://langfuse.com/) is an open-source LLM engineering platform that helps teams collaboratively debug, analyze, and iterate on their LLM applications.

All platform features are natively integrated to accelerate the development workflow. Langfuse is open, self-hostable, and extensible.

This guide will walk you through the setup.

{% hint style="info" %}
*Before you begin: Make sure you have generated your Cortecs API key. If not, check out our* [*QuickStart*](https://docs.cortecs.ai/quickstart) *guide.*
{% endhint %}

## 1. Deploy Langfuse

There are multiple ways to deploy Langfuse.

To get started quickly, choose your preferred deployment strategy and follow the instructions in the [documentation](https://langfuse.com/self-hosting).

Once deployment is complete, you’ll need to configure an external LLM provider.

## 2. Connect to Cortecs

Cortecs provides **OpenAI-compatible API endpoints**, making it easy to integrate with Langfuse and many other tools.

### 2.1. Prompt Management

One of the core features of Langfuse is [prompt management](https://langfuse.com/docs/prompt-management/overview), which includes storing, versioning, and retrieving your prompts. To experiment with prompts, Langfuse needs to be connected to an LLM provider such as Cortecs.

To set up the connection, follow these steps in the administration console:

1. Navigate to `Settings -> LLM-Connections`
2. Configure the connection with the following settings:\
   – **LLM adapter:** openai\
   – **API Key:** Your API Key\
   – **API Base URL:** <code class="expression">space.vars.DEFAULT\_API\_BASE\_URL</code>\
   – **Enable default models:** Disabled\
   – **Add custom model name:** <code class="expression">space.vars.DEFAULT\_CHAT\_MODEL</code> (feel free to pick another one from the [catalogue](https://cortecs.ai/serverlessModels))

<figure><img src="https://2211217319-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FYGsEKyV2Zq4Q8fEJQT40%2Fuploads%2FoWXFjlNCJWKcRXITXDni%2Fimage.png?alt=media&#x26;token=bc99b959-cf57-4543-9903-cfb7aeb087c5" alt="" width="375"><figcaption></figcaption></figure>

Once the LLM connection is created, you can experiment with prompts, store versions, and later [retrieve and use them directly in your code](https://langfuse.com/docs/prompt-management/get-started).

<figure><img src="https://2211217319-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FYGsEKyV2Zq4Q8fEJQT40%2Fuploads%2FC6pgBRRJS3vLMovZ1omR%2Fimage.png?alt=media&#x26;token=75071937-53e0-4f39-9ea3-8e4787ab99e5" alt=""><figcaption></figcaption></figure>

```python
from langfuse import get_client

# Reads LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_HOST
# from the environment
langfuse = get_client()

# Fetch the stored prompt by name
prompt = langfuse.get_prompt("Friend")

# Fill in the template variables
compiled_prompt = prompt.compile(name="Freddy")
```
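Langfuse text prompts use double-curly `{{variable}}` placeholders, and `compile` substitutes the values you pass in. As a rough illustration of that substitution (the template string and `compile_template` helper below are made up for this sketch, not part of the Langfuse SDK):

```python
import re

def compile_template(template: str, **variables) -> str:
    """Replace {{variable}} placeholders, mimicking prompt.compile."""
    def substitute(match: re.Match) -> str:
        key = match.group(1).strip()
        # Leave unknown placeholders untouched
        return str(variables.get(key, match.group(0)))
    return re.sub(r"\{\{(.*?)\}\}", substitute, template)

# Hypothetical template, analogous to a stored "Friend" prompt
template = "Hello {{name}}, tell me about {{topic}}."
print(compile_template(template, name="Freddy", topic="jokes"))
# → Hello Freddy, tell me about jokes.
```

In practice you would never reimplement this; it is only meant to show what a compiled prompt looks like before it is sent to the model.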

### 2.2. Tracing

Langfuse provides [tracing capabilities](https://langfuse.com/docs/observability/overview) to monitor and analyze the calls made to an LLM. This feature allows you to gain visibility into model interactions and performance. To integrate tracing, Langfuse offers multiple options, including [drop-in replacements and flexible connection methods](https://langfuse.com/docs/observability/get-started), making it easy to incorporate into your existing codebase.

```python
from langfuse.openai import openai  # drop-in replacement that traces all calls

client = openai.OpenAI(
    api_key="eyJhbG***",
    base_url="https://api.cortecs.ai/v1"
)

completion = client.chat.completions.create(
    model="gpt-5-mini",
    messages=[
        {"role": "system", "content": "You are a professional comedian."},
        {"role": "user", "content": "Tell me a joke."},
    ],
    stream=True,
    extra_body={"preference": "balanced"}
)

for chunk in completion:
    # The final streaming chunk carries no content, so guard against None
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
```
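Streamed responses arrive as small deltas that you typically reassemble into the full completion, for example to log or post-process it. A minimal local sketch of that accumulation, using simulated chunks shaped like the OpenAI streaming objects (no network call involved):

```python
from types import SimpleNamespace

def make_chunk(content):
    """Build an object shaped like an OpenAI streaming chunk."""
    delta = SimpleNamespace(content=content)
    return SimpleNamespace(choices=[SimpleNamespace(delta=delta)])

# Simulated stream; the final chunk carries no content
stream = [make_chunk("Why did "), make_chunk("the chicken..."), make_chunk(None)]

parts = []
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta is not None:
        parts.append(delta)

full_completion = "".join(parts)
print(full_completion)  # → Why did the chicken...
```

The same accumulation loop works unchanged on a real `client.chat.completions.create(..., stream=True)` iterator.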

<figure><img src="https://2211217319-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FYGsEKyV2Zq4Q8fEJQT40%2Fuploads%2FE7vMO4TA9OkXSGOoMH7n%2Fimage.png?alt=media&#x26;token=1d8bccc2-fbb5-4a91-ab45-569e9cf6f636" alt=""><figcaption></figcaption></figure>

Enjoy your **privacy-preserving LLM engineering platform** with the power of Cortecs and Langfuse. Explore other models, tweak the setup to your needs, and join the conversation on our [Discord](https://discord.com/invite/bPFEFcWBhp) to share feedback 👩‍💻👨‍💻
