Cognee is configured through environment variables, typically defined in a `.env` file.
This section provides beginner-friendly guides for setting up different backends, with detailed technical information available in expandable sections.
What You Can Configure
Cognee uses a flexible architecture that lets you choose the best tools for your needs. We recommend starting with the defaults to get familiar with Cognee, then customizing each component as needed (a short configuration sketch follows this list):
- LLM Providers — Choose from OpenAI, Azure OpenAI, Google Gemini, Anthropic, Ollama, or custom providers for text generation and reasoning tasks
- Structured Output Backends — Configure LiteLLM + Instructor or BAML for reliable data extraction from LLM responses
- Embedding Providers — Select from OpenAI, Azure OpenAI, Google Gemini, Mistral, Ollama, Fastembed, or custom embedding services to create vector representations for semantic search
- Relational Databases — Use SQLite for local development or Postgres for production to store metadata, documents, and system state
- Vector Stores — Store embeddings in LanceDB, PGVector, ChromaDB, FalkorDB, or Neptune Analytics for similarity search
- Graph Stores — Build knowledge graphs with Kuzu, Kuzu-remote, Neo4j, Neptune, or Neptune Analytics to manage relationships and reasoning
- Dataset Separation & Access Control — Configure dataset-level permissions and isolation
Dataset isolation is not enabled by default; see how to enable it.
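For instance, backend selection is driven by environment variables read at startup. A minimal sketch in Python, using illustrative variable names such as `DB_PROVIDER`, `VECTOR_DB_PROVIDER`, and `GRAPH_DATABASE_PROVIDER` (each provider guide lists the exact keys your version expects):

```python
import os

# Illustrative variable names; the exact keys are documented in each
# provider guide. Set them before importing cognee, or add the
# equivalent KEY=value entries to your .env file, so they are
# picked up at startup.
os.environ.setdefault("DB_PROVIDER", "sqlite")             # relational database
os.environ.setdefault("VECTOR_DB_PROVIDER", "lancedb")     # vector store
os.environ.setdefault("GRAPH_DATABASE_PROVIDER", "kuzu")   # graph store
```

The values shown mirror the local-first options listed above (SQLite, LanceDB, Kuzu); swap them out per component as you follow the guides below.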
Observability & Telemetry
Cognee includes built-in telemetry to help you monitor and debug your knowledge graph operations. You can control telemetry behavior with environment variables:

- `TELEMETRY_DISABLED` (boolean, optional): Set to `true` to disable all telemetry collection (default: `false`); a usage example appears at the end of this section

When telemetry is enabled, it covers:
- Search query performance metrics
- Processing pipeline execution times
- Error rates and debugging information
- System resource usage
Telemetry data helps improve Cognee’s performance and reliability. It’s collected anonymously and doesn’t include your actual data content.
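For example, to opt out programmatically rather than through your `.env` file, set the variable before Cognee is imported. A minimal sketch:

```python
import os

# Set before importing cognee so the setting is seen at startup
# and applies to the whole process.
os.environ["TELEMETRY_DISABLED"] = "true"

import cognee  # telemetry is now disabled for this process
```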
Configuration Workflow
- Install Cognee with all optional dependencies:
  - Local setup: `uv sync --all-extras`
  - Library: `pip install "cognee[all]"`
- Create a `.env` file in your project root (if you haven’t already) — see Installation for details
- Choose your preferred providers and follow the configuration instructions from the guides below; a minimal end-to-end sketch follows this list
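Once the `.env` file is in place, a first run can be as small as the sketch below, which assumes the high-level `add`/`cognify`/`search` API from Cognee's quickstart (confirm the exact signatures against your installed version):

```python
import asyncio
import cognee

async def main():
    # Ingest raw content, build the knowledge graph, then query it.
    await cognee.add("Cognee turns documents into a queryable knowledge graph.")
    await cognee.cognify()
    results = await cognee.search(query_text="What does Cognee do?")
    print(results)

asyncio.run(main())
```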
Configuration Changes: If you’ve already run Cognee with default settings and are now changing your configuration (e.g., switching from SQLite to Postgres, or changing vector stores), you should call pruning operations before the next cognification to ensure data consistency.
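A minimal reset sketch, using the prune helpers from Cognee's quickstart (verify the exact calls against your installed version):

```python
import asyncio
import cognee

async def reset_state():
    # Drop ingested data and the system metadata built under the old
    # configuration so the next cognify() run starts from a clean slate.
    await cognee.prune.prune_data()
    await cognee.prune.prune_system(metadata=True)

asyncio.run(reset_state())
```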
LLM/Embedding Configuration: If you configure only the LLM or only the embeddings, the other falls back to the OpenAI default. Ensure you have a working OpenAI API key, or configure both LLM and embeddings explicitly to avoid unexpected defaults.
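One way to avoid the silent fallback is to set both sides explicitly. A sketch with illustrative values; the variable names (`LLM_PROVIDER`, `EMBEDDING_PROVIDER`, and so on) should be checked against the provider guides for your version:

```python
import os

# Configure both the LLM and the embedding provider explicitly so
# neither silently falls back to the OpenAI default. Variable names
# and values are illustrative; see the provider guides for the exact keys.
os.environ["LLM_PROVIDER"] = "anthropic"
os.environ["LLM_API_KEY"] = "your-llm-api-key"
os.environ["EMBEDDING_PROVIDER"] = "ollama"
os.environ["EMBEDDING_MODEL"] = "nomic-embed-text"
```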