New to configuration? See the Setup Configuration Overview for the complete workflow: install extras → create .env → choose providers → handle pruning.

Supported Providers

Cognee supports multiple embedding providers:

- OpenAI — Text embedding models via the OpenAI API (default)
- Azure OpenAI — Text embedding models via Azure OpenAI Service
- Google Gemini — Embedding models via Google AI
- Mistral — Embedding models via Mistral AI
- Ollama — Local embedding models via Ollama
- Fastembed — CPU-friendly local embeddings
- Custom — OpenAI-compatible embedding endpoints
LLM/Embedding Configuration: If you configure only LLM or only embeddings, the other defaults to OpenAI. Ensure you have a working OpenAI API key, or configure both LLM and embeddings to avoid unexpected defaults.
Configuration
Environment Variables
Set these environment variables in your .env file:

- EMBEDDING_PROVIDER — The provider to use (openai, gemini, mistral, ollama, fastembed, custom)
- EMBEDDING_MODEL — The specific embedding model to use
- EMBEDDING_DIMENSIONS — The vector dimension size (must match your vector store)
- EMBEDDING_API_KEY — Your API key (falls back to LLM_API_KEY if not set)
- EMBEDDING_ENDPOINT — Custom endpoint URL (for Azure, Ollama, or custom providers)
- EMBEDDING_API_VERSION — API version (for Azure OpenAI)
- EMBEDDING_MAX_TOKENS — Maximum tokens per request (optional)
Provider Setup Guides
OpenAI (Default)
OpenAI provides high-quality embeddings with good performance.
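A minimal .env sketch for OpenAI might look like the following. The model name and dimension are illustrative, not values confirmed by this guide — check OpenAI's model list for what your chosen model actually outputs:

```shell
# Illustrative .env fragment for OpenAI embeddings
EMBEDDING_PROVIDER=openai
EMBEDDING_MODEL=text-embedding-3-small   # example model; pick any OpenAI embedding model
EMBEDDING_DIMENSIONS=1536                # must match the model and your vector store schema
EMBEDDING_API_KEY=sk-...                 # omit to fall back to LLM_API_KEY
```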
Azure OpenAI Embeddings
Use Azure OpenAI Service for embeddings with your own deployment.
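The supported provider values listed above do not include a separate azure entry, so Azure is presumably configured through the endpoint and API-version variables. A sketch under that assumption — the deployment name, endpoint, and API version below are placeholders, not verified values:

```shell
# Illustrative Azure OpenAI setup — all values are placeholders for your own deployment
EMBEDDING_PROVIDER=openai
EMBEDDING_MODEL=my-embedding-deployment           # hypothetical deployment name
EMBEDDING_ENDPOINT=https://my-resource.openai.azure.com
EMBEDDING_API_VERSION=2024-02-01                  # check your Azure OpenAI API version
EMBEDDING_API_KEY=...
EMBEDDING_DIMENSIONS=1536                         # match your deployed model's output size
```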
Google Gemini
Use Google’s embedding models for semantic search.
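A sketch of a Gemini .env, assuming text-embedding-004 as the model (an illustrative choice — verify current model names and output sizes in Google's documentation):

```shell
# Illustrative .env fragment for Google Gemini embeddings
EMBEDDING_PROVIDER=gemini
EMBEDDING_MODEL=text-embedding-004   # example Google embedding model
EMBEDDING_DIMENSIONS=768             # must match the model's output size
EMBEDDING_API_KEY=...                # Google AI API key
```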
Mistral
Use Mistral’s embedding models for high-quality vector representations.

Installation: Install the required dependency:
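A sketch of the install step and a matching .env — the dependency name is an assumption (the Mistral Python SDK is published as mistralai), so confirm it against Cognee's extras:

```shell
# Assumed dependency: the Mistral Python SDK
pip install mistralai

# Illustrative .env fragment
EMBEDDING_PROVIDER=mistral
EMBEDDING_MODEL=mistral-embed    # Mistral's embedding model
EMBEDDING_DIMENSIONS=1024        # mistral-embed outputs 1024-dimensional vectors
EMBEDDING_API_KEY=...            # Mistral AI API key
```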
Ollama (Local)
Run embedding models locally with Ollama for privacy and cost control.

Installation: Install Ollama from ollama.ai and pull your desired embedding model:
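For example, pulling a local embedding model and pointing Cognee at it might look like this. The endpoint format is an assumption (Ollama serves on port 11434 by default); nomic-embed-text is one common embedding model choice:

```shell
# Pull a local embedding model (nomic-embed-text is one common choice)
ollama pull nomic-embed-text

# Illustrative .env fragment — endpoint format is an assumption
EMBEDDING_PROVIDER=ollama
EMBEDDING_MODEL=nomic-embed-text
EMBEDDING_DIMENSIONS=768                               # nomic-embed-text outputs 768-d vectors
EMBEDDING_ENDPOINT=http://localhost:11434
HUGGINGFACE_TOKENIZER=nomic-ai/nomic-embed-text-v1.5   # for token counting (see Important Notes)
```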
Fastembed (Local)
Use Fastembed for CPU-friendly local embeddings without GPU requirements.

Installation: Fastembed is included by default with Cognee.

Known Issues:
- As of September 2025, Fastembed requires Python < 3.13 (not compatible with Python 3.13+)
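A sketch of a Fastembed .env — the model name is illustrative (BAAI/bge-small-en-v1.5 is a commonly used Fastembed model), so check which models your Fastembed version ships with:

```shell
# Illustrative .env fragment for local Fastembed embeddings
EMBEDDING_PROVIDER=fastembed
EMBEDDING_MODEL=BAAI/bge-small-en-v1.5   # example Fastembed model
EMBEDDING_DIMENSIONS=384                 # bge-small outputs 384-dimensional vectors
# No API key needed — models run locally on CPU
```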
Custom Providers
Use OpenAI-compatible embedding endpoints from other providers.
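A sketch for a custom endpoint — the model name and URL are hypothetical placeholders for your own service:

```shell
# Illustrative .env fragment for an OpenAI-compatible custom endpoint.
# Note: EMBEDDING_API_KEY does NOT fall back to LLM_API_KEY for custom
# providers (see Important Notes), so set it explicitly.
EMBEDDING_PROVIDER=custom
EMBEDDING_MODEL=my-embedding-model          # hypothetical model name
EMBEDDING_ENDPOINT=https://api.example.com/v1
EMBEDDING_DIMENSIONS=1024                   # whatever your model actually outputs
EMBEDDING_API_KEY=...
```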
Advanced Options
Rate Limiting
Testing and Development
Important Notes
- Dimension Consistency: EMBEDDING_DIMENSIONS must match your vector store collection schema
- API Key Fallback: If EMBEDDING_API_KEY is not set, Cognee uses LLM_API_KEY (except for custom providers)
- Tokenization: For Ollama and Hugging Face models, set HUGGINGFACE_TOKENIZER for proper token counting
- Performance: Local providers (Ollama, Fastembed) are slower but offer privacy and cost benefits