LLM (Large Language Model) providers handle text generation, reasoning, and structured output tasks in Cognee. You can choose from cloud providers like OpenAI and Anthropic, or run models locally with Ollama.
New to configuration? See the Setup Configuration Overview for the complete workflow: install extras → create .env → choose providers → handle pruning.

Supported Providers

Cognee supports multiple LLM providers:
  • OpenAI — GPT models via OpenAI API (default)
  • Azure OpenAI — GPT models via Azure OpenAI Service
  • Google Gemini — Gemini models via Google AI
  • Anthropic — Claude models via Anthropic API
  • Ollama — Local models via Ollama
  • Custom — OpenAI-compatible endpoints
LLM/Embedding Configuration: If you configure only the LLM or only embeddings, the other falls back to the OpenAI default. Make sure you have a working OpenAI API key, or configure both LLM and embeddings explicitly to avoid unexpected defaults.
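As a sketch, a `.env` that configures both sides explicitly might look like the following. Only `LLM_API_KEY` and `EMBEDDING_API_KEY` are named elsewhere on this page; the provider/model variable names and values here are illustrative assumptions, so check the provider setup guides for the exact keys your version expects.

```bash
# LLM side (assumed variable names for provider/model selection)
LLM_PROVIDER="openai"
LLM_MODEL="gpt-4o-mini"
LLM_API_KEY="sk-..."

# Embedding side — set these explicitly so embeddings do not
# silently fall back to the OpenAI default
EMBEDDING_PROVIDER="openai"
EMBEDDING_MODEL="text-embedding-3-small"
EMBEDDING_API_KEY="sk-..."
```

Configuring both blocks up front avoids the mixed-provider surprise described above, where one half of the pipeline quietly defaults to OpenAI.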

Configuration

Provider Setup Guides

Advanced Options

Notes

  • If EMBEDDING_API_KEY is not set, Cognee falls back to LLM_API_KEY for embeddings
  • Rate limiting helps manage API usage and costs
  • Structured output frameworks ensure consistent data extraction from LLM responses