Structured output
Generating structured output, such as JSON, is valuable for several reasons. It allows for standardized data formats that are easy to process, store, and integrate with other systems. Structured responses are particularly useful for tasks such as extracting specific information, feeding data into databases, or interfacing with other applications that require consistent data formats.
Using OpenAI
from openai import OpenAI

client = OpenAI(base_url='<MODEL_URL>')

response = client.chat.completions.create(
    temperature=0,
    model="cortecs/phi-4-FP8-Dynamic",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "You must output a JSON object with a joke and a rating from 1-10 indicating how funny it is."},
        {"role": "user", "content": "Tell me a joke about cats."}
    ]
)
print(response.choices[0].message.content)
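The response content is a JSON string, so downstream code typically parses and validates it before use. A minimal sketch of that step, where the sample string stands in for model output (it is illustrative, not an actual response):

```python
import json

# Illustrative sample of what the model might return, not real output
raw = '{"joke": "Why did the cat sit on the computer? To keep an eye on the mouse.", "rating": 6}'

# json.loads raises json.JSONDecodeError if the model produced malformed JSON
data = json.loads(raw)

# Basic sanity checks before passing the fields downstream
assert isinstance(data.get("joke"), str)
assert isinstance(data.get("rating"), int) and 1 <= data["rating"] <= 10

print(data["joke"], f"(rated {data['rating']}/10)")
```

Wrapping the `json.loads` call in a `try`/`except` and retrying the request is a common pattern when the model occasionally emits invalid JSON.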
Using LangChain
LangChain offers advanced control over the output structure, enabling you to define specific schemas for the data returned by the model. This feature is particularly useful for extracting and processing structured data, such as inserting information into a database or integrating with downstream systems.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(base_url='<MODEL_URL>',
                 model_name='cortecs/phi-4-FP8-Dynamic')

json_schema = {
    "title": "joke",
    "description": "Joke to tell user.",
    "type": "object",
    "properties": {
        "setup": {
            "type": "string",
            "description": "The setup of the joke",
        },
        "punchline": {
            "type": "string",
            "description": "The punchline to the joke",
        },
    },
    "required": ["setup", "punchline"],
}

structured_llm = llm.with_structured_output(json_schema, method="json_mode")
structured_joke = structured_llm.invoke(
    "Tell me a joke about cats, respond in JSON with `setup` and `punchline` keys"
)
print(structured_joke)
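Before inserting the result into a database, it is worth checking that it actually conforms to the declared schema. The `validate` helper below is a hand-rolled sketch, not part of LangChain; it checks the schema's `required` list and property types against a trimmed copy of the schema above (a library such as `jsonschema` would be more robust in production):

```python
# Trimmed copy of the schema declared above, limited to what validation needs
json_schema = {
    "required": ["setup", "punchline"],
    "properties": {
        "setup": {"type": "string"},
        "punchline": {"type": "string"},
    },
}

def validate(obj: dict, schema: dict) -> list[str]:
    """Return a list of problems; an empty list means the object conforms."""
    problems = []
    # Every key in "required" must be present
    for key in schema.get("required", []):
        if key not in obj:
            problems.append(f"missing required key: {key}")
    # Present keys must match their declared JSON Schema type
    type_map = {"string": str, "object": dict, "number": (int, float)}
    for key, spec in schema.get("properties", {}).items():
        expected = type_map.get(spec.get("type"))
        if key in obj and expected and not isinstance(obj[key], expected):
            problems.append(f"{key}: expected {spec['type']}")
    return problems

# Stand-in for a structured_llm.invoke() result
joke = {"setup": "Why don't cats play poker?", "punchline": "Too many cheetahs."}
print(validate(joke, json_schema))  # [] -> conforms
```

Note that `with_structured_output` also accepts a Pydantic model class in place of a raw schema dict, which gives typed attribute access on the returned object.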