- Complete the Quickstart to understand basic operations
- Ensure you have LLM Providers configured
- Have some text to process
## What It Is
- Single entrypoint: `LLMGateway.acreate_structured_output(text, system_prompt, response_model)`
- Returns an instance of your Pydantic `response_model` filled by the LLM
- Backend-agnostic: uses BAML or LiteLLM + Instructor under the hood based on config; your code doesn't change
This function is used by default during cognify via the extractor. The backend switch lives in `cognee/infrastructure/llm/LLMGateway.py`.

## Full Working Example
This simple example uses a basic schema for demonstration. In practice, you can define complex Pydantic models with nested structures, validation rules, and custom types.
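Below is a minimal sketch of such an example. The `Person` schema, the sample text, and the system prompt are illustrative assumptions, and the import path is inferred from the `LLMGateway.py` location noted above.

```python
import asyncio

from pydantic import BaseModel

# Import path inferred from cognee/infrastructure/llm/LLMGateway.py (see above).
from cognee.infrastructure.llm.LLMGateway import LLMGateway


# Step 1: define the schema the LLM should fill. `Person` is a hypothetical example.
class Person(BaseModel):
    name: str
    age: int
    occupation: str


async def main():
    text = "Ada Lovelace, 36, worked as a mathematician."

    # Step 2: a plain instruction telling the model what to extract.
    system_prompt = "Extract the person described in the text into the given schema."

    # Step 3: the single entrypoint; returns a populated Person instance.
    person = await LLMGateway.acreate_structured_output(text, system_prompt, Person)
    print(person.model_dump())


if __name__ == "__main__":
    asyncio.run(main())
```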
## What Just Happened
### Step 1: Define Your Schema
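The schema is an ordinary Pydantic model whose fields the LLM fills. From the sketch above (the fields are illustrative):

```python
from pydantic import BaseModel

class Person(BaseModel):  # hypothetical schema; any Pydantic model works
    name: str
    age: int
    occupation: str
```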
### Step 2: Write a System Prompt
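The system prompt is a plain string describing what to extract; this wording is just an example:

```python
system_prompt = "Extract the person described in the text into the given schema."
```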
### Step 3: Call the LLM
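Then call the entrypoint from async code; the return value is an instance of your schema:

```python
# Inside an async function; returns a Person populated by the LLM.
person = await LLMGateway.acreate_structured_output(text, system_prompt, Person)
```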
A sync variant, `LLMGateway.create_structured_output(...)`, also exists.
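A minimal sketch, reusing the names from the example above:

```python
# Same arguments, no await; usable outside async code.
person = LLMGateway.create_structured_output(text, system_prompt, Person)
```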
## Custom Tasks
This function is often used when creating custom tasks for processing data with structured output. You'll see it in action when we cover custom task creation in a future guide.
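As a preview, here is a hedged sketch of what such a task function might look like; `ExtractedEntities`, the prompt, and the function itself are hypothetical, and wiring it into a cognee pipeline is left to that guide:

```python
from typing import List

from pydantic import BaseModel

from cognee.infrastructure.llm.LLMGateway import LLMGateway


class ExtractedEntities(BaseModel):  # hypothetical task output schema
    entities: List[str]


async def extract_entities_task(text: str) -> ExtractedEntities:
    """A custom-task-style function: raw text in, structured objects out."""
    return await LLMGateway.acreate_structured_output(
        text,
        "List every named entity mentioned in the text.",  # illustrative prompt
        ExtractedEntities,
    )
```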
## Backend Doesn't Matter

The config decides the engine:

- `STRUCTURED_OUTPUT_FRAMEWORK=instructor` → LiteLLM + Instructor
- `STRUCTURED_OUTPUT_FRAMEWORK=baml` → BAML client/registry
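For example, one line in your environment (a sketch, assuming the variable is read from a `.env` file or the shell):

```
# Select the structured-output backend; calling code stays the same.
STRUCTURED_OUTPUT_FRAMEWORK=baml
```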