Configure Cognee to use your preferred LLM, embedding engine, relational database, vector store, and graph store via environment variables in a local .env file. This section provides beginner-friendly guides for setting up different backends, with detailed technical information available in expandable sections.

What You Can Configure

Cognee uses a flexible architecture that lets you choose the best tools for your needs. We recommend starting with the defaults to get familiar with Cognee, then customizing each component as needed; an illustrative sketch of how these choices map to environment variables follows the list:
  • LLM Providers — Choose from OpenAI, Azure OpenAI, Google Gemini, Anthropic, Ollama, or custom providers for text generation and reasoning tasks
  • Embedding Providers — Select from OpenAI, Azure OpenAI, Google Gemini, Mistral, Ollama, Fastembed, or custom embedding services to create vector representations for semantic search
  • Relational Databases — Use SQLite for local development or Postgres for production to store metadata, documents, and system state
  • Vector Stores — Store embeddings in LanceDB, PGVector, ChromaDB, FalkorDB, or Neptune Analytics for similarity search
  • Graph Stores — Build knowledge graphs with Kuzu, Kuzu-remote, Neo4j, Neptune, or Neptune Analytics to manage relationships and reasoning
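To make the options above concrete, here is a minimal sketch of selecting providers through environment variables set from Python (in practice these values live in your .env file). The variable names are assumptions shown for illustration only; confirm the exact keys in each provider guide below.

```python
import os

# Illustrative only: the variable names below are assumptions; check the
# provider guides for the exact keys your Cognee version expects.
os.environ["LLM_PROVIDER"] = "openai"            # LLM backend
os.environ["LLM_API_KEY"] = "sk-..."             # key for the LLM provider
os.environ["EMBEDDING_PROVIDER"] = "openai"      # embedding backend
os.environ["DB_PROVIDER"] = "sqlite"             # relational database
os.environ["VECTOR_DB_PROVIDER"] = "lancedb"     # vector store
os.environ["GRAPH_DATABASE_PROVIDER"] = "kuzu"   # graph store
```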

Configuration Workflow

  1. Install Cognee with all optional dependencies:
    • Local setup: uv sync --all-extras
    • Library: pip install "cognee[all]"
  2. Create a .env file in your project root if you haven’t already (see Installation for details); a sketch of loading it appears after these steps
  3. Choose your preferred providers and follow the configuration instructions from the guides below
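Once the .env file exists, Cognee reads its settings from the environment. A minimal loading sketch, assuming the python-dotenv package is available (install it with pip install python-dotenv if it is not):

```python
from dotenv import load_dotenv

# Populate the environment from the local .env before Cognee reads its settings.
load_dotenv()

import cognee  # imported after the environment is set up
```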
Configuration Changes: If you have already run Cognee with the default settings and are now changing your configuration (e.g., switching from SQLite to Postgres, or changing vector stores), run Cognee’s pruning operations before the next cognify run to keep the stored data consistent.
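A minimal pruning sketch, assuming the prune helpers exposed by recent Cognee releases (verify the exact calls against your installed version):

```python
import asyncio

import cognee

async def reset_state():
    # Remove previously ingested data and system metadata so the newly
    # configured backends start from a clean slate.
    await cognee.prune.prune_data()
    await cognee.prune.prune_system(metadata=True)

asyncio.run(reset_state())
```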
LLM/Embedding Configuration: If you configure only the LLM or only the embeddings, the other falls back to the OpenAI default. Either make sure you have a working OpenAI API key, or configure both the LLM and the embeddings so you are not relying on an unexpected default.
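For example, when moving both components to a local provider such as Ollama, set the LLM and the embeddings together. The variable names and model names below are illustrative assumptions; check the corresponding provider guide for the exact keys:

```python
import os

# Illustrative: configure BOTH sides when leaving OpenAI, otherwise the
# unconfigured half silently falls back to OpenAI and expects an OpenAI key.
# Variable and model names are assumptions; confirm them in the provider guides.
os.environ["LLM_PROVIDER"] = "ollama"
os.environ["LLM_MODEL"] = "llama3.1"
os.environ["EMBEDDING_PROVIDER"] = "ollama"
os.environ["EMBEDDING_MODEL"] = "nomic-embed-text"
```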