Structured output backends ensure reliable data extraction from LLM responses. Cognee supports two frameworks that convert LLM text into structured Pydantic models for knowledge graph extraction and other tasks.
New to configuration? See the Setup Configuration Overview for the complete workflow: install extras → create .env → choose providers → handle pruning.

Supported Frameworks

Cognee supports two structured output approaches:
  • LiteLLM + Instructor — provider-agnostic client with Pydantic coercion (default)
  • BAML — DSL-based framework with a type registry and guardrails
Both frameworks produce the same Pydantic-validated outputs, so your application code remains unchanged regardless of which backend you choose.

How It Works

Cognee uses a unified interface that abstracts the underlying framework:
from cognee.infrastructure.llm.LLMGateway import LLMGateway

# response_model is any Pydantic model describing the structure you want back
result = await LLMGateway.acreate_structured_output(text, system_prompt, response_model)
The STRUCTURED_OUTPUT_FRAMEWORK environment variable determines which backend processes your requests, but the API remains identical.
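The dispatch pattern this describes can be sketched with offline stubs. Everything below except the acreate_structured_output signature and the STRUCTURED_OUTPUT_FRAMEWORK variable is a hypothetical stand-in, not Cognee's actual LiteLLM or BAML code:

```python
import asyncio
import os
from dataclasses import dataclass

# One caller-facing entry point; the backend is chosen by an env var.
os.environ.setdefault("STRUCTURED_OUTPUT_FRAMEWORK", "instructor")

@dataclass
class Summary:
    # Stand-in for a Pydantic response model.
    summary: str
    backend: str

def _instructor_backend(text: str, system_prompt: str) -> dict:
    # Stub for the LiteLLM + Instructor path (no network calls here).
    return {"summary": text.strip(), "backend": "instructor"}

def _baml_backend(text: str, system_prompt: str) -> dict:
    # Stub for the BAML path; note it returns the same output shape.
    return {"summary": text.strip(), "backend": "baml"}

_BACKENDS = {"instructor": _instructor_backend, "baml": _baml_backend}

async def acreate_structured_output(text, system_prompt, response_model):
    # The caller's code is identical regardless of the configured backend.
    backend = _BACKENDS[os.environ["STRUCTURED_OUTPUT_FRAMEWORK"]]
    return backend(text, system_prompt) and response_model(**backend(text, system_prompt))

result = asyncio.run(
    acreate_structured_output("Paris is the capital of France.", "Summarize.", Summary)
)
```

Swapping the env var value changes which stub runs, but the call site and the validated result type stay the same, which is the property the unified interface guarantees.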

Configuration

Set the backend in your .env file; instructor is the default:

STRUCTURED_OUTPUT_FRAMEWORK=instructor
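A minimal .env sketch showing both options. The document confirms instructor as the default value; the exact value that selects the BAML backend is an assumption here (shown lowercase by convention), so check Cognee's configuration reference before relying on it:

```
# .env
STRUCTURED_OUTPUT_FRAMEWORK=instructor   # default: LiteLLM + Instructor
# STRUCTURED_OUTPUT_FRAMEWORK=baml       # assumed value for the BAML backend
```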

Important Notes

  • Unified Interface: your application code uses the same acreate_structured_output() call regardless of framework
  • Provider Flexibility: both frameworks support the same LLM providers
  • Output Consistency: both produce identical Pydantic-validated results
  • Performance: framework choice does not significantly affect latency or throughput