Embedding providers convert text into vector representations that enable semantic search. These vectors capture the meaning of text, allowing Cognee to find conceptually related content even when the wording is different.
New to configuration? See the Setup Configuration Overview for the complete workflow: install extras → create .env → choose providers → handle pruning.

Supported Providers

Cognee supports multiple embedding providers:
  • OpenAI — Text embedding models via OpenAI API (default)
  • Azure OpenAI — Text embedding models via Azure OpenAI Service
  • Google Gemini — Embedding models via Google AI
  • Mistral — Embedding models via Mistral AI
  • Ollama — Local embedding models via Ollama
  • Fastembed — CPU-friendly local embeddings
  • Custom — OpenAI-compatible embedding endpoints
LLM/Embedding Configuration: If you configure only the LLM or only embeddings, the other silently defaults to OpenAI. Either make sure a working OpenAI API key is available, or configure both LLM and embeddings explicitly to avoid unexpected defaults.

Configuration
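As a starting point, a minimal `.env` sketch for the default OpenAI provider. `EMBEDDING_API_KEY`, `LLM_API_KEY`, and `EMBEDDING_DIMENSIONS` are the variables named in the notes on this page; the provider and model variable names (`EMBEDDING_PROVIDER`, `EMBEDDING_MODEL`) and the example values are assumptions — verify them against the settings reference for your Cognee version.

```bash
# LLM settings — embeddings fall back to LLM_API_KEY
# when EMBEDDING_API_KEY is unset (except for custom providers)
LLM_API_KEY="sk-..."

# Embedding settings — EMBEDDING_PROVIDER / EMBEDDING_MODEL
# names and values are assumed, not confirmed by this page
EMBEDDING_PROVIDER="openai"
EMBEDDING_MODEL="text-embedding-3-small"
EMBEDDING_API_KEY="sk-..."
# Must match your vector store collection schema
EMBEDDING_DIMENSIONS=1536
```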

Provider Setup Guides

Advanced Options
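One advanced option called out in the notes below is tokenizer selection for local models. A hedged sketch of an Ollama setup follows; only `HUGGINGFACE_TOKENIZER` is confirmed by this page, and every other variable name, the model, the endpoint, and the dimension count are assumptions to check against your installation.

```bash
# Local embeddings via Ollama — all names except
# HUGGINGFACE_TOKENIZER are assumptions; verify before use
EMBEDDING_PROVIDER="ollama"
EMBEDDING_MODEL="avr/sfr-embedding-mistral:latest"
EMBEDDING_ENDPOINT="http://localhost:11434/api/embeddings"
EMBEDDING_DIMENSIONS=4096
# Ollama and Hugging Face models need an explicit tokenizer
# for accurate token counting
HUGGINGFACE_TOKENIZER="Salesforce/SFR-Embedding-Mistral"
```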

Important Notes

  • Dimension Consistency: EMBEDDING_DIMENSIONS must match your vector store collection schema
  • API Key Fallback: If EMBEDDING_API_KEY is not set, Cognee uses LLM_API_KEY (except for custom providers)
  • Tokenization: For Ollama and Hugging Face models, set HUGGINGFACE_TOKENIZER for proper token counting
  • Performance: Local providers (Ollama, Fastembed) are slower but offer privacy and cost benefits
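The dimension-consistency note above can be made concrete with a small self-check: before writing to the vector store, verify that each embedding's length matches the configured `EMBEDDING_DIMENSIONS`. This is an illustrative helper, not part of Cognee's API.

```python
def check_embedding_dimensions(vector: list[float], expected: int) -> None:
    """Raise early if an embedding doesn't match the configured dimension.

    A mismatch here usually means the vector store collection schema
    was created for a different model's output size.
    """
    if len(vector) != expected:
        raise ValueError(
            f"Embedding has {len(vector)} dimensions, "
            f"but EMBEDDING_DIMENSIONS is {expected}"
        )

# A matching toy vector passes silently; a mismatch raises ValueError
check_embedding_dimensions([0.1, 0.2, 0.3], expected=3)
```

Running this check once at startup (against a single test embedding) catches a misconfigured `EMBEDDING_DIMENSIONS` before any data is ingested.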