Pull a model to run locally, such as `deepseek-r1:32b`, `llama3.3`, or `phi4`, depending on your system resources. For embeddings, pull `nomic-embed-text`, `mxbai-embed-large`, or `sfr-embedding-mistral:latest`. For best results, use `deepseek-r1:32b` or `llama3.3`.
The `[ollama]` extra includes all the necessary dependencies for local model integration.
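Installing an extra uses the standard pip bracket syntax; `your-package` below is a placeholder, since the package name is not shown in this excerpt:

```shell
# Substitute the actual project name for the placeholder.
pip install "your-package[ollama]"
```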
Create a `.env` file in your project directory with the following configuration:
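The configuration block itself is not shown in this excerpt; a minimal sketch follows, using the variable names described below. The endpoint URLs assume Ollama's default port 11434 (adjust the paths to whatever API style your library expects), and `LLM_MODEL` / `EMBEDDING_MODEL` are hypothetical names for the two model variables:

```
# Assumed Ollama defaults -- adjust to your setup.
LLM_ENDPOINT=http://localhost:11434/v1
EMBEDDING_ENDPOINT=http://localhost:11434/v1
LLM_MODEL=YOUR_MODEL
EMBEDDING_MODEL=YOUR_EMBEDDING_MODEL
# Must match your embedding model's output size (e.g. 768 for nomic-embed-text).
EMBEDDING_DIMENSIONS=768
# Hugging Face tokenizer repo matching your embedding model (example value).
HUGGINGFACE_TOKENIZER=nomic-ai/nomic-embed-text-v1.5
```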
Replace `YOUR_MODEL` and `YOUR_EMBEDDING_MODEL` with the actual model names you downloaded.
- `LLM_ENDPOINT`: Points to Ollama's local completion API
- `EMBEDDING_ENDPOINT`: Points to Ollama's local embedding API
- `EMBEDDING_DIMENSIONS`: Must match your embedding model's output dimensions
- `HUGGINGFACE_TOKENIZER`: Specifies the tokenizer for your embedding model

Create `test_ollama.py` with the following content:
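The script's content is not included in this excerpt; a minimal sketch of such a smoke test follows. It assumes Ollama's standard `/api/generate` and `/api/embeddings` endpoints on the default port, and the hypothetical `LLM_MODEL` / `EMBEDDING_MODEL` variable names from the `.env` above:

```python
# test_ollama.py -- minimal connectivity check against a local Ollama server.
# Sketch only: endpoint defaults and model-variable names are assumptions.
import json
import os
import urllib.request

LLM_ENDPOINT = os.getenv("LLM_ENDPOINT", "http://localhost:11434/api/generate")
EMBEDDING_ENDPOINT = os.getenv(
    "EMBEDDING_ENDPOINT", "http://localhost:11434/api/embeddings"
)


def build_payload(model: str, prompt: str) -> dict:
    """Request body shared by Ollama's generate and embeddings APIs."""
    return {"model": model, "prompt": prompt, "stream": False}


def post_json(url: str, payload: dict) -> dict:
    """POST a JSON payload and decode the JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    # LLM_MODEL / EMBEDDING_MODEL are hypothetical env-var names.
    model = os.getenv("LLM_MODEL", "YOUR_MODEL")
    completion = post_json(LLM_ENDPOINT, build_payload(model, "Reply with one word."))
    print("completion:", completion.get("response", "")[:80])

    emb_model = os.getenv("EMBEDDING_MODEL", "YOUR_EMBEDDING_MODEL")
    emb = post_json(EMBEDDING_ENDPOINT, build_payload(emb_model, "hello"))
    print("embedding dimensions:", len(emb.get("embedding", [])))
```

Running `python test_ollama.py` should print a short completion and the embedding dimension count; the latter should equal the `EMBEDDING_DIMENSIONS` value in your `.env`.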