
🚀 Getting Started with Local Models

You’ll need to run the local model on your own machine or use one of the providers that host it.

We had some success with llama3.3 and deepseek, but 7B models did not work well. We recommend llama3.3:70b-instruct-q3_K_M and deepseek-r1:32b via Ollama.

Ollama

Set up Ollama by following the instructions on the Ollama website.

For a quick start, simply run the chat model:

ollama run deepseek-r1:32b

and the embedding model:

ollama run avr/sfr-embedding-mistral:latest
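
Before wiring cognee to Ollama, you may want to confirm the server is reachable and both models are pulled. The sketch below is not part of the official docs; it assumes the default Ollama host (http://localhost:11434) and uses the standard /api/tags listing endpoint with the model names from above.

import json
import urllib.request

OLLAMA_HOST = "http://localhost:11434"  # default Ollama endpoint
EXPECTED = {"deepseek-r1:32b", "avr/sfr-embedding-mistral:latest"}

# /api/tags returns the models available locally.
with urllib.request.urlopen(f"{OLLAMA_HOST}/api/tags") as response:
    tags = json.load(response)

available = {model["name"] for model in tags.get("models", [])}
missing = EXPECTED - available
if missing:
    print(f"Missing models, run `ollama pull <name>` for: {missing}")
else:
    print("Both models are available locally.")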

Set the environment variable in your .env file to use Ollama as the LLM provider:

LLM_PROVIDER="ollama"
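
cognee typically picks the .env file up automatically, but if your entry point does not, a small sketch like this makes the variable visible before cognee is used. Using python-dotenv here is an assumption, not a documented requirement.

from dotenv import load_dotenv

# Load LLM_PROVIDER (and any other variables) from .env into the process
# environment; depending on your cognee version this may happen automatically.
load_dotenv()

import cognee  # cognee now sees LLM_PROVIDER="ollama" in the environment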

Alternatively, you can set the provider in the configuration:

cognee.config.llm_provider = 'ollama'

You can also set the host, model names, and embedding settings:

cognee.config.llm_endpoint = "http://localhost:11434"
cognee.config.llm_model = "ollama/llama3.2"
cognee.embedding_provider = "ollama"
cognee.embedding_model = "avr/sfr-embedding-mistral:latest"
cognee.embedding_dimensions = 4096
cognee.huggingface_tokenizer = "Salesforce/SFR-Embedding-Mistral"
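
Once the provider, endpoint, and models are configured, the usual cognee flow is add, then cognify, then search. The sketch below is a rough outline rather than an official example: it reuses the configuration lines above, and the exact search signature may differ between cognee versions.

import asyncio
import cognee

# Configuration reused from the snippet above.
cognee.config.llm_provider = "ollama"
cognee.config.llm_endpoint = "http://localhost:11434"
cognee.config.llm_model = "ollama/llama3.2"
cognee.embedding_provider = "ollama"
cognee.embedding_model = "avr/sfr-embedding-mistral:latest"
cognee.embedding_dimensions = 4096
cognee.huggingface_tokenizer = "Salesforce/SFR-Embedding-Mistral"

async def main():
    # Ingest a small piece of text, build the knowledge graph, then query it.
    await cognee.add("Cognee builds knowledge graphs from your documents.")
    await cognee.cognify()
    results = await cognee.search("What does cognee do?")  # signature may vary by version
    print(results)

asyncio.run(main())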

Instead of configuring in code, you can also set the equivalent environment variables in your .env file:

EMBEDDING_PROVIDER="ollama"
EMBEDDING_MODEL="avr/sfr-embedding-mistral:latest"
EMBEDDING_ENDPOINT="http://localhost:11434/api/embeddings"
EMBEDDING_DIMENSIONS=4096
HUGGINGFACE_TOKENIZER="Salesforce/SFR-Embedding-Mistral"
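
To sanity-check the embedding setup, you can request a single embedding directly from the endpoint configured above and confirm the vector length matches EMBEDDING_DIMENSIONS. This is a hedged sketch that only uses the Ollama /api/embeddings endpoint and the model name from the .env example; nothing in it is cognee-specific.

import json
import urllib.request

# Same endpoint and model as in the .env example above.
payload = json.dumps({
    "model": "avr/sfr-embedding-mistral:latest",
    "prompt": "hello world",
}).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:11434/api/embeddings",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    embedding = json.load(response)["embedding"]

print(len(embedding))  # expected: 4096, matching EMBEDDING_DIMENSIONS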