Configure LLM providers for text generation and reasoning in Cognee: set variables in `.env` → choose providers → handle pruning.

## Environment Variables
Configure the LLM through these variables in your `.env` file:

- `LLM_PROVIDER`: the provider to use (`openai`, `gemini`, `anthropic`, `ollama`, `custom`)
- `LLM_MODEL`: the specific model to use
- `LLM_API_KEY`: your API key for the provider
- `LLM_ENDPOINT`: custom endpoint URL (for Azure, Ollama, or custom providers)
- `LLM_API_VERSION`: API version (for Azure OpenAI)
- `LLM_MAX_TOKENS`: maximum tokens per request (optional)

## OpenAI (Default)
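A minimal `.env` for the default OpenAI provider might look like the sketch below; the model name and key value are placeholders, not requirements:

```
LLM_PROVIDER="openai"
LLM_MODEL="gpt-4o-mini"
LLM_API_KEY="your-openai-api-key"
```

Since `openai` is the default provider, `LLM_PROVIDER` can usually be omitted here.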
## Azure OpenAI
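Azure deployments additionally use `LLM_ENDPOINT` and `LLM_API_VERSION`. A sketch, assuming LiteLLM-style `azure/` model naming; the deployment name, resource URL, and API version are placeholders you must replace with your Azure resource's values:

```
LLM_PROVIDER="openai"
LLM_MODEL="azure/your-deployment-name"
LLM_ENDPOINT="https://your-resource.openai.azure.com"
LLM_API_VERSION="2024-02-01"
LLM_API_KEY="your-azure-api-key"
```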
## Google Gemini
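A Gemini configuration could look like this; the model string is an illustrative example, and the key is a placeholder:

```
LLM_PROVIDER="gemini"
LLM_MODEL="gemini/gemini-1.5-flash"
LLM_API_KEY="your-gemini-api-key"
```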
## Anthropic
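An Anthropic configuration follows the same pattern; the model name below is one example, and the key is a placeholder:

```
LLM_PROVIDER="anthropic"
LLM_MODEL="claude-3-5-sonnet-20241022"
LLM_API_KEY="your-anthropic-api-key"
```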
## Ollama (Local)
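For a local Ollama server, point `LLM_ENDPOINT` at its OpenAI-compatible API (port 11434 by default). A sketch; the model name is a placeholder, and the dummy `LLM_API_KEY` value is an assumption — some setups require a non-empty key even though Ollama ignores it:

```
LLM_PROVIDER="ollama"
LLM_MODEL="llama3.1"
LLM_ENDPOINT="http://localhost:11434/v1"
LLM_API_KEY="ollama"
```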
> **`HUGGINGFACE_TOKENIZER`**: Ollama currently requires this environment variable to be set even when it is used only as the LLM provider. A fix is in progress.

> **`NoDataError` with mixed providers**: using Ollama as the LLM provider and OpenAI as the embedding provider may fail with a `NoDataError`. Workaround: use the same provider for both.

## Custom Providers
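Any OpenAI-compatible endpoint can be wired in through the `custom` provider. The OpenRouter values below are illustrative placeholders, not an endorsement of a specific service or model string:

```
LLM_PROVIDER="custom"
LLM_MODEL="openrouter/meta-llama/llama-3.1-8b-instruct"
LLM_ENDPOINT="https://openrouter.ai/api/v1"
LLM_API_KEY="your-api-key"
```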
## Rate Limiting

A limit of 60 requests per 60 seconds allows an average rate of ≈ 1 request/second.

If `EMBEDDING_API_KEY` is not set, Cognee falls back to `LLM_API_KEY` for embeddings.
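If your Cognee version exposes rate limiting through environment variables, a configuration matching the 60-requests-per-60-seconds figure above might look like this. The variable names are assumptions; check the configuration reference for your installed version:

```
LLM_RATE_LIMIT_ENABLED=true
LLM_RATE_LIMIT_REQUESTS=60
LLM_RATE_LIMIT_INTERVAL=60
```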