LLM (Large Language Model) providers handle text generation, reasoning, and structured output tasks in Cognee. You can choose from cloud providers like OpenAI and Anthropic, or run models locally with Ollama.
New to configuration? See the Setup Configuration Overview for the complete workflow: install extras → create .env → choose providers → handle pruning.

Supported Providers

Cognee supports multiple LLM providers:
  • OpenAI — GPT models via OpenAI API (default)
  • Azure OpenAI — GPT models via Azure OpenAI Service
  • Google Gemini — Gemini models via Google AI
  • Anthropic — Claude models via Anthropic API
  • AWS Bedrock — Models available via AWS Bedrock
  • Groq — Fast inference via Groq API (via LiteLLM)
  • Ollama — Local models via Ollama
  • LM Studio — Local models via LM Studio
  • HuggingFace — Models via HuggingFace Inference API or Inference Endpoints
  • llama.cpp — Local models via llama-cpp-python (in-process or server mode)
  • Custom — OpenAI-compatible endpoints (like vLLM, OpenRouter, DeepInfra, company-internal)
LLM/Embedding Configuration: If you configure only LLM or only embeddings, the other defaults to OpenAI. Ensure you have a working OpenAI API key, or configure both LLM and embeddings to avoid unexpected defaults.

Configuration

Set these environment variables in your .env file:
  • LLM_PROVIDER — The provider to use (openai, gemini, anthropic, ollama, custom)
  • LLM_MODEL — The specific model to use
  • LLM_API_KEY — Your API key for the provider
  • LLM_ENDPOINT — Custom endpoint URL (for Azure, Ollama, or custom providers)
  • LLM_API_VERSION — API version (for Azure OpenAI)
  • LLM_TEMPERATURE — Sampling temperature for generation (default: 0.0)
  • LLM_MAX_TOKENS — Maximum tokens per request (optional)
  • LLM_INSTRUCTOR_MODE — Structured-output mode override for Instructor-backed LLM calls (optional)
Why do model names have a prefix like gemini/ or openrouter/? Cognee routes all LLM requests through LiteLLM, which uses provider prefixes to identify the correct API endpoint. For example, Google lists its model as gemini-2.0-flash, but in Cognee you must write gemini/gemini-2.0-flash. This prefix tells LiteLLM to use the Gemini API. The same applies to custom providers — openrouter/, hosted_vllm/, lm_studio/, etc. See each provider section below for the correct format.
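To see the prefix routing in isolation, the sketch below calls LiteLLM directly, outside of Cognee; the API keys are placeholders and the models are just examples.

# Illustration only: how LiteLLM (used internally by Cognee) interprets
# provider prefixes. API keys below are placeholders.
import litellm

# The "gemini/" prefix routes the request to the Google Gemini API.
response = litellm.completion(
    model="gemini/gemini-2.0-flash",
    messages=[{"role": "user", "content": "Say hello"}],
    api_key="AIza...",  # placeholder
)

# The "openrouter/" prefix routes to OpenRouter's OpenAI-compatible API.
response = litellm.completion(
    model="openrouter/openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello"}],
    api_key="sk-or-...",  # placeholder
    api_base="https://openrouter.ai/api/v1",
)
print(response.choices[0].message.content)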

Provider Setup Guides

OpenAI is the default provider and works out of the box with minimal configuration.
LLM_PROVIDER="openai"
LLM_MODEL="gpt-4o-mini"
LLM_API_KEY="sk-..."
# Optional overrides
# LLM_ENDPOINT=https://api.openai.com/v1
# LLM_API_VERSION=
# LLM_MAX_TOKENS=16384
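With these variables in your .env file, a typical run looks roughly like the sketch below. It uses Cognee's public add / cognify / search calls; exact signatures may differ slightly between versions, so treat it as a sketch rather than a canonical example.

# Minimal usage sketch once the .env above is in place.
# cognee.add / cognee.cognify / cognee.search are Cognee's core async API;
# check your installed version for exact signatures.
import asyncio
import cognee

async def main():
    await cognee.add("Cognee builds knowledge graphs from your documents.")
    await cognee.cognify()  # runs the LLM-powered extraction pipeline
    results = await cognee.search(query_text="What does Cognee do?")
    print(results)

asyncio.run(main())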
Use Azure OpenAI Service with your own deployment.
LLM_PROVIDER="openai"
LLM_MODEL="azure/gpt-4o-mini"
LLM_ENDPOINT="https://<your-resource>.openai.azure.com/openai/deployments/gpt-4o-mini"
LLM_API_KEY="az-..."
LLM_API_VERSION="2024-12-01-preview"
Use Google’s Gemini models for text generation.
LLM_PROVIDER="gemini"
LLM_MODEL="gemini/gemini-2.0-flash"
LLM_API_KEY="AIza..."
# Optional
# LLM_ENDPOINT=https://generativelanguage.googleapis.com/
# LLM_API_VERSION=v1beta
Use Anthropic’s Claude models for reasoning tasks.
LLM_PROVIDER="anthropic"
LLM_MODEL="claude-3-5-sonnet-20241022"
LLM_API_KEY="sk-ant-..."
Groq provides fast inference for open models. Cognee routes Groq requests through LiteLLM using the groq/ model prefix.
LLM_PROVIDER="custom"
LLM_MODEL="groq/llama-3.3-70b-versatile"
LLM_API_KEY="gsk_..."
Installation: Install the Groq dependency:
pip install cognee[groq]
Popular Groq models (use with the groq/ prefix):
  • groq/llama-3.3-70b-versatile
  • groq/llama3-8b-8192
  • groq/mixtral-8x7b-32768
  • groq/gemma2-9b-it
See the Groq model list for all available models. Your Groq API key can be created in the Groq Console.
No endpoint needed: The LLM_ENDPOINT variable is not required for Groq — LiteLLM resolves the Groq API endpoint automatically from the groq/ prefix.
Use models available on AWS Bedrock for various tasks. For Bedrock, you also need to provide some AWS-specific settings.
LLM_API_KEY="<your_bedrock_api_key>"
LLM_MODEL="eu.amazon.nova-lite-v1:0"
LLM_PROVIDER="bedrock"
LLM_MAX_TOKENS="16384"
AWS_REGION="<your_aws_region>"
AWS_ACCESS_KEY_ID="<your_aws_access_key_id>"
AWS_SECRET_ACCESS_KEY="<your_aws_secret_access_key>"
AWS_SESSION_TOKEN="<your_aws_session_token>"

# Optional parameters
#AWS_BEDROCK_RUNTIME_ENDPOINT="bedrock-runtime.eu-west-1.amazonaws.com"
#AWS_PROFILE_NAME="<path_to_your_aws_credentials_file>"
There are multiple ways of connecting to Bedrock models:
  1. Using an API key and region. Generate your key on AWS and put it in the LLM_API_KEY environment variable.
  2. Using AWS credentials. Specify only AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY; LLM_API_KEY is not needed. If you are using temporary credentials (e.g. an AWS_ACCESS_KEY_ID starting with ASIA...), you must also specify AWS_SESSION_TOKEN.
  3. Using AWS profiles. Create a credentials file (typically ~/.aws/credentials) and store your credentials in it.
Installation: Install the required dependency:
pip install cognee[aws]
Model name: The model name may differ by region (it begins with eu for Europe, us for the US, etc.).
Run models locally with Ollama for privacy and cost control.
LLM_PROVIDER="ollama"
LLM_MODEL="llama3.1:8b"
LLM_ENDPOINT="http://localhost:11434/v1"
LLM_API_KEY="ollama"
LLM_API_KEY="ollama" is a placeholder required by the client library — Ollama itself does not validate it.Installation: Install Ollama from ollama.ai and pull your desired model:
ollama pull llama3.1:8b
Zero-API-key setup: To avoid falling back to OpenAI for embeddings, you must also configure the embedding provider to use a local backend. See the Local Setup guide for a complete .env example using Ollama or Fastembed for both LLM and embeddings.
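Before pointing Cognee at Ollama, you can optionally confirm that the OpenAI-compatible endpoint is reachable. The check below is an illustrative helper, not part of Cognee.

# Optional sanity check: confirm the local Ollama server exposes its
# OpenAI-compatible API before configuring Cognee to use it.
import requests

resp = requests.get("http://localhost:11434/v1/models", timeout=5)
resp.raise_for_status()
models = [m["id"] for m in resp.json().get("data", [])]
print("Models served by Ollama:", models)
# "llama3.1:8b" should appear here if the pull above succeeded.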

Known Issues

  • NoDataError with mixed providers: Using Ollama as LLM and OpenAI as embedding provider may fail with NoDataError. Workaround: configure both LLM and embeddings to the same local provider (see the local setup guide above).
  • Audio transcription is not supported: AudioLoader relies on a Whisper-compatible transcription endpoint. Cognee’s Ollama adapter does not provide one, so audio ingestion will fail when LLM_PROVIDER="ollama".
Use models from HuggingFace via the HuggingFace Inference API (serverless) or dedicated Inference Endpoints.
LLM_PROVIDER="custom"
LLM_MODEL="huggingface/mistralai/Mistral-7B-Instruct-v0.3"
LLM_API_KEY="hf_..."
Installation: Install the HuggingFace extra to enable the HuggingFace tokenizer used for chunking:
pip install cognee[huggingface]
Model names: Use the full HuggingFace model repo ID after the huggingface/ prefix (e.g., huggingface/mistralai/Mixtral-8x7B-Instruct-v0.1). Not all models on HuggingFace support the text generation inference API — check the model card for compatibility. The model is routed through LiteLLM.
Run models locally with LM Studio for privacy and cost control.
LLM_PROVIDER="custom"
LLM_MODEL="lm_studio/magistral-small-2509"
LLM_ENDPOINT="http://127.0.0.1:1234/v1"
LLM_API_KEY="."
LLM_INSTRUCTOR_MODE="json_schema_mode"
Installation: Install LM Studio from lmstudio.ai and download your desired model from LM Studio’s interface. Load your model, start the LM Studio server, and Cognee will be able to connect to it.
Set up instructor mode: LLM_INSTRUCTOR_MODE controls how Cognee asks the model for structured output. LM Studio models often work best with json_schema_mode. For more detail, see LLM Instructor Modes below and Structured Output Backends.
Run models locally using llama-cpp-python for full offline inference. Cognee supports two setup modes:
  • Local mode — Load a .gguf model directly in-process
  • Server mode — Connect to a running llama-cpp-python server over HTTP
Installation: Install the required dependency:
pip install cognee[llama-cpp]
Choosing a mode: Use local mode for the simplest setup with no separate server process. Use server mode if you want to share one model across multiple processes or run the model on another machine.
Load a GGUF model file directly. No server setup required.
LLM_PROVIDER="llama_cpp"
LLAMA_CPP_MODEL_PATH="/path/to/your/model.gguf"

# Optional: context window size (default: 2048)
LLAMA_CPP_N_CTX=4096

# Optional: GPU layers to offload (default: 0 = CPU only, -1 = all layers on GPU)
LLAMA_CPP_N_GPU_LAYERS=35

# Optional: chat format (default: chatml)
LLAMA_CPP_CHAT_FORMAT="chatml"
GPU acceleration: Set LLAMA_CPP_N_GPU_LAYERS=-1 to offload all layers to GPU, or set a positive integer to offload a specific number of layers. Leave it at 0 for CPU-only inference.
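To confirm a GGUF file loads correctly before wiring it into Cognee, a quick standalone check with llama-cpp-python looks roughly like this; the path and parameter values mirror the .env above and are placeholders.

# Optional: verify the GGUF model loads with llama-cpp-python before
# configuring Cognee. Values mirror the environment variables above.
from llama_cpp import Llama

llm = Llama(
    model_path="/path/to/your/model.gguf",
    n_ctx=4096,           # matches LLAMA_CPP_N_CTX
    n_gpu_layers=35,      # matches LLAMA_CPP_N_GPU_LAYERS (0 = CPU only, -1 = all)
    chat_format="chatml", # matches LLAMA_CPP_CHAT_FORMAT
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}]
)
print(out["choices"][0]["message"]["content"])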
Connect to a running llama-cpp-python server. Start the server separately:
python -m llama_cpp.server --model /path/to/your/model.gguf --port 8000
Then configure Cognee to connect to it:
LLM_PROVIDER="llama_cpp"
LLM_ENDPOINT="http://localhost:8000/v1"
LLM_API_KEY="."
LLM_MODEL="your-model-name"
Use OpenAI-compatible endpoints like OpenRouter or other services.
LLM_PROVIDER="custom"
LLM_MODEL="openrouter/google/gemini-2.0-flash-lite-preview-02-05:free"
LLM_ENDPOINT="https://openrouter.ai/api/v1"
LLM_API_KEY="or-..."
# Optional: fallback provider for content policy violations
# FALLBACK_MODEL=openrouter/openai/gpt-4o-mini
# FALLBACK_ENDPOINT=https://openrouter.ai/api/v1
# FALLBACK_API_KEY=or-...
See Fallback Provider in Advanced Options for full details.
Custom Provider Prefixes: When using LLM_PROVIDER="custom", you must include the correct provider prefix in your model name. Cognee forwards requests to LiteLLM, which uses these prefixes to route requests correctly. Common prefixes include:
  • hosted_vllm/ — vLLM servers
  • openrouter/ — OpenRouter
  • lm_studio/ — LM Studio
  • openai/ — OpenAI-compatible APIs
See the LiteLLM providers documentation for the full list of supported prefixes. Below are examples for common providers and patterns:
Use DeepSeek’s models for reasoning and chat via their OpenAI-compatible API.
LLM_PROVIDER="custom"
LLM_MODEL="deepseek/deepseek-chat"
LLM_ENDPOINT="https://api.deepseek.com/v1"
LLM_API_KEY="sk-..."
Get your API key from platform.deepseek.com. The deepseek/ prefix tells LiteLLM to route to the DeepSeek API.
Popular DeepSeek models (use with the deepseek/ prefix):
  • deepseek/deepseek-chat — DeepSeek-V3 (general chat and instruction following)
  • deepseek/deepseek-reasoner — DeepSeek-R1 (chain-of-thought reasoning)
Structured output: DeepSeek’s API is OpenAI-compatible, so the default json_mode for custom providers works well. If you encounter issues with structured output, try setting LLM_INSTRUCTOR_MODE="tool_call".
Use Moonshot AI’s Kimi models via their OpenAI-compatible API.
LLM_PROVIDER="custom"
LLM_MODEL="moonshot/moonshot-v1-32k"
LLM_ENDPOINT="https://api.moonshot.cn/v1"
LLM_API_KEY="sk-..."
Get your API key from platform.moonshot.cn. The moonshot/ prefix tells LiteLLM to route to the Moonshot AI API.
Available Kimi models (use with the moonshot/ prefix):
  • moonshot/moonshot-v1-8k — 8k context window
  • moonshot/moonshot-v1-32k — 32k context window
  • moonshot/moonshot-v1-128k — 128k context window (for long documents)
Use OpenRouter to access hundreds of models from a single API endpoint.
LLM_PROVIDER="custom"
LLM_MODEL="openrouter/deepseek/deepseek-r1"
LLM_ENDPOINT="https://openrouter.ai/api/v1"
LLM_API_KEY="sk-or-..."
Get your API key from openrouter.ai/keys. Browse all available models at openrouter.ai/models — prefix the model slug with openrouter/.
Example models (use with the openrouter/ prefix):
  • openrouter/deepseek/deepseek-r1 — DeepSeek R1 via OpenRouter
  • openrouter/google/gemini-2.0-flash-lite-preview-02-05:free — Free Gemini tier
  • openrouter/openai/gpt-4o-mini — GPT-4o Mini via OpenRouter
Use DeepInfra to access open-source models via their OpenAI-compatible API.
LLM_PROVIDER="custom"
LLM_MODEL="deepinfra/meta-llama/Meta-Llama-3-8B-Instruct"
LLM_ENDPOINT="https://api.deepinfra.com/v1/openai"
LLM_API_KEY="<your-deepinfra-api-key>"
Find your model name in the DeepInfra model catalog. The deepinfra/ prefix tells LiteLLM to route to DeepInfra.
Any internal LLM server that exposes an OpenAI-compatible REST API (e.g., a corporate vLLM deployment, internal TGI server, or private OpenRouter proxy) can be used with the custom provider.
LLM_PROVIDER="custom"
LLM_MODEL="openai/<your-internal-model-name>"
LLM_ENDPOINT="https://llm.internal.example.com/v1"
LLM_API_KEY="<internal-api-key-or-bearer-token>"
The model prefix you use (openai/, hosted_vllm/, etc.) determines which LiteLLM adapter handles the request. For most OpenAI-compatible servers, openai/ works best. Set LLM_API_KEY to whatever bearer token your server requires (use . if no auth is needed).
Use vLLM for high-performance model serving with an OpenAI-compatible API.
LLM_PROVIDER="custom"
LLM_MODEL="hosted_vllm/<your-model-name>"
LLM_ENDPOINT="https://your-vllm-endpoint/v1"
LLM_API_KEY="."
Example with Gemma:
LLM_PROVIDER="custom"
LLM_MODEL="hosted_vllm/gemma-3-12b"
LLM_ENDPOINT="https://your-vllm-endpoint/v1"
LLM_API_KEY="."
Important: The hosted_vllm/ prefix is required for LiteLLM to correctly route requests to your vLLM server. The model name after the prefix should match the model ID returned by your vLLM server’s /v1/models endpoint.
To find the correct model name, query your server's /v1/models endpoint or see the vLLM documentation.
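A quick way to discover the model ID is to query the server yourself. The helper below is an illustrative sketch; it assumes your endpoint requires no authentication and uses a placeholder base URL.

# Optional helper: list the model IDs your vLLM server exposes so you can
# copy the right name into LLM_MODEL (with the hosted_vllm/ prefix).
import requests

BASE_URL = "https://your-vllm-endpoint/v1"  # same host as LLM_ENDPOINT

resp = requests.get(f"{BASE_URL}/models", timeout=10)
resp.raise_for_status()
for model in resp.json().get("data", []):
    print(model["id"])  # e.g. "gemma-3-12b" -> LLM_MODEL="hosted_vllm/gemma-3-12b"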

Advanced Options

When using the Instructor structured-output framework (the default), Cognee instructs the model to return structured data in a specific way. The LLM_INSTRUCTOR_MODE environment variable controls which strategy is used. Each provider has a built-in default that matches its API capabilities. Override it only when the default doesn’t work for your specific model.
Available modes:
Mode | Description | When to use
json_schema_mode | Passes the full JSON Schema of the expected output in the request and enforces strict schema compliance. | OpenAI models that support the response_format / structured-output feature (e.g. GPT-4o). Also works well with Bedrock and some local models.
json_mode | Instructs the model to return any valid JSON object. Instructor then validates and coerces it to the target schema. | Gemini, Ollama, generic/custom endpoints, and any model that supports response_format: json_object but not strict schema enforcement.
anthropic_tools | Uses Anthropic’s native tool-calling API to extract structured data. | Anthropic Claude models only. Leverages first-class tool-use support for reliable extraction.
mistral_tools | Uses Mistral’s native tool-calling API to extract structured data. | Mistral models only. Mirrors the OpenAI function-calling interface provided by Mistral.
tool_call | Uses the generic OpenAI-style function/tool-calling API to define the schema as a callable tool. | OpenAI-compatible APIs that support function calling but not strict JSON schema output.
md_json | Asks the model to return JSON wrapped in a Markdown code block. Instructor extracts the block and validates it. | Models that reliably format code blocks but may not support json_mode (e.g. some self-hosted models).
Per-provider defaults (from source code):
Provider (LLM_PROVIDER) | Default mode
openai (and Azure OpenAI) | json_schema_mode
anthropic | anthropic_tools
gemini | json_mode
bedrock | json_schema_mode
mistral | mistral_tools
ollama | json_mode
custom (generic OpenAI-compatible) | json_mode
Example — override the mode:
LLM_INSTRUCTOR_MODE="json_schema_mode"
Override the default only when the model you are using requires a different mode. For example, LM Studio models typically need json_schema_mode even though the custom provider defaults to json_mode.
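For a feel of what these modes mean in practice, the standalone sketch below uses the Instructor library directly against a local OpenAI-compatible endpoint with Mode.JSON (the equivalent of json_mode). It is illustrative only and does not reflect Cognee's internal wiring; the schema, endpoint, and model name are placeholders.

# Standalone illustration of what an Instructor "mode" means in practice.
# This is not Cognee's internal code; schema, endpoint, and model are examples.
import instructor
from openai import OpenAI
from pydantic import BaseModel

class Entity(BaseModel):
    name: str
    type: str

# Mode.JSON corresponds to json_mode: the model is asked for a plain JSON
# object, which Instructor validates against the Pydantic schema.
client = instructor.from_openai(
    OpenAI(base_url="http://127.0.0.1:1234/v1", api_key="."),
    mode=instructor.Mode.JSON,
)
entity = client.chat.completions.create(
    model="your-local-model",  # placeholder
    response_model=Entity,
    messages=[{"role": "user", "content": "Extract the entity: 'Berlin is a city.'"}],
)
print(entity)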
Control the randomness of LLM responses with the LLM_TEMPERATURE environment variable.
Variable | Default | Description
LLM_TEMPERATURE | 0.0 | Sampling temperature. 0.0 = deterministic / focused output. Higher values (e.g. 0.7–1.0) produce more varied, creative responses.
When to adjust: Cognee’s default of 0.0 is recommended for knowledge-graph extraction because it produces consistent, structured output. Raise the temperature only if you need more variety in generated text (e.g. conversational responses or creative summarisation).
Control client-side throttling for LLM calls to manage API usage and costs.
Rate limiting is disabled by default. You must explicitly set LLM_RATE_LIMIT_ENABLED="true" to activate it.
Defaults (when rate limiting is enabled):
Variable | Default | Meaning
LLM_RATE_LIMIT_ENABLED | false | Off by default — opt-in
LLM_RATE_LIMIT_REQUESTS | 60 | Max requests per interval
LLM_RATE_LIMIT_INTERVAL | 60 | Interval in seconds
The defaults (60 requests / 60 seconds) allow 1 request/second on average. Adjust both values to match your provider’s tier limit.
How it works:
  • Client-side limiter: Cognee paces outbound LLM calls before they reach the provider
  • Moving window: Spreads allowance across the time window for smoother throughput
  • Per-process scope: In-memory limits don’t share across multiple processes/containers
  • Auto-applied: Works with all providers (OpenAI, Gemini, Anthropic, Ollama, Custom)
Sizing guidance: Set LLM_RATE_LIMIT_REQUESTS to your provider’s RPM (requests per minute) limit, and LLM_RATE_LIMIT_INTERVAL to 60. To leave headroom, use ~80–90% of the advertised limit. Check your provider’s dashboard for your current tier limits.
Each cognify() call issues multiple LLM requests (entity extraction, summarization, etc.) per document chunk — plan for several requests per chunk, not one.
Example configurations for common provider tiers: These examples target chat/completions-style LLM endpoints, such as OpenAI models like gpt-4o-mini.
LLM_RATE_LIMIT_ENABLED="true"
LLM_RATE_LIMIT_REQUESTS="450"
LLM_RATE_LIMIT_INTERVAL="60"

LLM_RATE_LIMIT_ENABLED="true"
LLM_RATE_LIMIT_REQUESTS="4500"
LLM_RATE_LIMIT_INTERVAL="60"

LLM_RATE_LIMIT_ENABLED="true"
LLM_RATE_LIMIT_REQUESTS="45"
LLM_RATE_LIMIT_INTERVAL="60"

LLM_RATE_LIMIT_ENABLED="true"
LLM_RATE_LIMIT_REQUESTS="13"
LLM_RATE_LIMIT_INTERVAL="60"

LLM_RATE_LIMIT_ENABLED="true"
LLM_RATE_LIMIT_REQUESTS="60"
LLM_RATE_LIMIT_INTERVAL="60"
Always verify your exact tier limits in your provider’s dashboard — limits vary by model, tier, and region. The examples above are approximations for common tiers and may change.
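To apply the sizing guidance above, a back-of-the-envelope estimate can help. The requests-per-chunk figure in the sketch below is an assumption, not a Cognee constant; measure it for your own pipeline.

# Rough sizing helper (assumptions, not Cognee internals): estimate how long
# a cognify() run takes under a client-side rate limit.
def estimate_run_minutes(num_chunks: int,
                         requests_per_chunk: int = 4,   # assumed; measure for your pipeline
                         rate_limit_requests: int = 60,
                         rate_limit_interval_s: int = 60) -> float:
    total_requests = num_chunks * requests_per_chunk
    requests_per_minute = rate_limit_requests * 60 / rate_limit_interval_s
    return total_requests / requests_per_minute

# e.g. 100 chunks at ~4 requests/chunk under the default 60 req / 60 s limit:
print(estimate_run_minutes(100))  # ~6.7 minutes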
Cognee supports a primary-plus-fallback model configuration that automatically retries a failed request against a secondary provider. This is useful when your primary provider may reject certain content and you want a fallback to handle those cases gracefully.
When the fallback triggers: The fallback is invoked only on content policy violations from the primary provider:
  • ContentFilterFinishReasonError — the provider’s output filter blocked the response
  • ContentPolicyViolationError — the request was rejected for policy reasons
  • InstructorRetryException containing “content management policy”
The fallback does not activate for network errors, rate limits, or authentication failures.
Supported providers: Fallback is available when LLM_PROVIDER is set to openai or custom. Other providers (Anthropic, Gemini, Mistral, Bedrock, Ollama) do not currently support the fallback chain.
Configuration: Set these three variables alongside your primary LLM configuration:
# Primary provider
LLM_PROVIDER="openai"
LLM_MODEL="openai/gpt-4o-mini"
LLM_API_KEY="sk-..."

# Fallback provider (used only on content policy violations)
FALLBACK_MODEL="openrouter/openai/gpt-4o-mini"
FALLBACK_ENDPOINT="https://openrouter.ai/api/v1"
FALLBACK_API_KEY="or-..."
For LLM_PROVIDER="custom", all three fallback variables (FALLBACK_MODEL, FALLBACK_ENDPOINT, FALLBACK_API_KEY) must be set. If any is missing, Cognee raises a ContentPolicyFilterError instead of falling back.For LLM_PROVIDER="openai", only FALLBACK_MODEL and FALLBACK_API_KEY are required. FALLBACK_ENDPOINT is accepted but currently unused for the OpenAI adapter.Variable reference
Variable | Description
FALLBACK_MODEL | Model identifier for the fallback provider (use LiteLLM prefix format, e.g. openrouter/openai/gpt-4o-mini)
FALLBACK_ENDPOINT | Base URL for the fallback provider’s API (required for custom, optional for openai)
FALLBACK_API_KEY | API key for the fallback provider

Notes

  • If EMBEDDING_API_KEY is not set, Cognee falls back to LLM_API_KEY for embeddings
  • Rate limiting helps manage API usage and costs
  • Structured output frameworks ensure consistent data extraction from LLM responses
