Audio Inputs

Audio processing enables applications to understand, generate, and reason over audio content using multimodal AI models. It works through the Chat Completions endpoint and supports any model that handles multiple modalities, including audio.

To get a full list of supported models, visit cortecs.ai and filter by the Audio tag.

Note: Audio format support depends on the provider. Check the model documentation for details (a helper sketch for handling multiple formats follows the example below).

from openai import OpenAI
import base64

client = OpenAI(
  base_url="https://api.cortecs.ai/v1",
  api_key="<API_KEY>",
)

# Load and encode audio file
with open("path/to/audio_test.mp3", "rb") as f:
    audio_base64 = base64.b64encode(f.read()).decode('utf-8')

# Send the text prompt and the encoded audio together in one chat message
chat_response = client.chat.completions.create(
    model="gemini-2.5-pro",
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What is this file about?"
            },
            {
                "type": "input_audio",
                "input_audio": {
                    "data": audio_base64,
                    "format": "mp3"
                }
            },
        ]
    }]
)

# Print the model's text reply
print(chat_response.choices[0].message.content)
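
Since format support varies by provider (see the note above), it can help to derive the format field from the file itself rather than hard-coding it. Below is a minimal sketch; encode_audio and its extension-to-format mapping are illustrative assumptions, not part of the cortecs API, so verify the accepted values against your model's documentation.

import base64
from pathlib import Path

# Hypothetical extension-to-format mapping; the accepted values are
# provider-specific, so check the model documentation before relying on it.
AUDIO_FORMATS = {".mp3": "mp3", ".wav": "wav", ".flac": "flac", ".ogg": "ogg"}

def encode_audio(path: str) -> dict:
    """Return an input_audio content part for the given audio file."""
    suffix = Path(path).suffix.lower()
    if suffix not in AUDIO_FORMATS:
        raise ValueError(f"Unsupported audio format: {suffix}")
    data = base64.b64encode(Path(path).read_bytes()).decode("utf-8")
    return {
        "type": "input_audio",
        "input_audio": {"data": data, "format": AUDIO_FORMATS[suffix]},
    }

A call like encode_audio("path/to/audio_test.mp3") can then be dropped directly into the content list of the request above.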

Note: For a dedicated speech-to-text endpoint, see the Audio Transcription page.
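
For comparison, a transcription call through the same OpenAI-compatible client looks roughly like the sketch below. This assumes cortecs.ai exposes the standard /v1/audio/transcriptions route; the model name is a placeholder, so check the Audio Transcription page for supported models.

# Minimal sketch, assuming an OpenAI-compatible /v1/audio/transcriptions
# route; the model name is a placeholder.
with open("path/to/audio_test.mp3", "rb") as f:
    transcription = client.audio.transcriptions.create(
        model="<TRANSCRIPTION_MODEL>",
        file=f,
    )

print(transcription.text)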
