- Complete the Quickstart to understand basic operations
- Install Ollama if using the Ollama options below
After switching to a local provider for the first time, call `cognee.prune.prune_system(metadata=True)` before running cognify to ensure there are no stale vector collections left from the previous (OpenAI) embedding dimensions.

- Ollama (LLM + Embeddings)
- Ollama LLM + Fastembed
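The reset step described above can be sketched in Python. Only the `cognee.prune.prune_system(metadata=True)` call comes from this page; the companion `prune_data` call and the `add`/`cognify` flow around it are an assumed minimal usage, and running it requires cognee installed with a configured provider:

```python
import asyncio
import cognee

async def main():
    # Clear previously ingested data (assumed companion call).
    await cognee.prune.prune_data()
    # From the text: reset system state; metadata=True also drops stored
    # vector collections, so cognify rebuilds them with the new
    # embedding dimensions instead of reusing stale OpenAI-sized ones.
    await cognee.prune.prune_system(metadata=True)

    await cognee.add("Sample text to process with the local provider.")
    await cognee.cognify()

asyncio.run(main())
```

Skipping the prune step after changing embedding providers typically fails at query time, because the old collections were created with a different vector dimension.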
Fully local setup using Ollama for both text generation and embeddings.

Prerequisites: Install Ollama and pull the required models.

.env configuration:
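A minimal sketch of the `.env` file this setup needs. Only `LLM_API_KEY` and `HUGGINGFACE_TOKENIZER` are named on this page; every other variable name, the model choices (`llama3.1`, `nomic-embed-text`), and the endpoint URLs are illustrative assumptions based on Ollama's default local port:

```
# Assumed model pulls (substitute the models your setup requires):
#   ollama pull llama3.1
#   ollama pull nomic-embed-text

LLM_PROVIDER="ollama"
LLM_MODEL="llama3.1"
LLM_ENDPOINT="http://localhost:11434/v1"
LLM_API_KEY="ollama"                      # placeholder; Ollama does not validate it

EMBEDDING_PROVIDER="ollama"
EMBEDDING_MODEL="nomic-embed-text"
EMBEDDING_ENDPOINT="http://localhost:11434/api/embed"
EMBEDDING_DIMENSIONS=768
HUGGINGFACE_TOKENIZER="nomic-ai/nomic-embed-text-v1.5"
```

If you swap the embedding model, update `EMBEDDING_DIMENSIONS` and `HUGGINGFACE_TOKENIZER` to match it, and prune the system again as described above.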
`LLM_API_KEY="ollama"` is a placeholder required by the client library; Ollama itself does not validate it.
`HUGGINGFACE_TOKENIZER` is the HuggingFace repo ID of the tokenizer used for token counting when sending requests to the Ollama embedding endpoint.

LLM Providers
Configure OpenAI, Azure, Gemini, Anthropic, Ollama, or custom LLM providers
Embedding Providers
Set up OpenAI, Mistral, Ollama, Fastembed, or custom embedding services
Setup Configuration
Full configuration reference for all backends