# Hello cognee SDK

A quick tutorial to get started with the cognee SDK using different configurations. Check our `.env.template` for all configuration options.
## Install cognee SDK

```bash
uv venv
source .venv/bin/activate
uv pip install cognee
```
## Add your OpenAI key to cognee

Create a `.env` file in your project directory:

```bash
echo 'LLM_API_KEY="your_api_key"' > .env
```
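Cognee reads its configuration from environment variables. The common way to load a `.env` file is the python-dotenv package; if you want zero extra dependencies, a minimal loader might look like the sketch below (`load_env_file` is an illustrative helper, not part of the cognee SDK):

```python
import os
import tempfile

def load_env_file(path: str) -> None:
    """Minimal .env loader: export KEY="value" lines into os.environ.

    Illustrative only; python-dotenv handles comments, quoting, and
    interpolation far more robustly.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip().strip('"').strip("'")

# Demo with a throwaway .env file
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write('LLM_API_KEY="your_api_key"\n')
    env_path = f.name

load_env_file(env_path)
print(os.environ["LLM_API_KEY"])  # your_api_key
```

Run the loader (or `load_dotenv()`) before importing cognee so the key is in place when the SDK initializes.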
## Configure cognee via ENV variables

Cognee supports multiple LLM providers and database configurations. Here are the main options:

### LLM Providers

- OpenAI: Simple setup with `LLM_API_KEY`
- Azure OpenAI: Use `LLM_MODEL="azure/gpt-4o-mini"` with endpoint and API version
- Ollama: Free local LLMs with `LLM_PROVIDER="ollama"`
- OpenRouter: Free models with `LLM_PROVIDER="custom"`
- DeepInfra: Custom provider setup
### Database Options
- Default: SQLite + NetworkX + LanceDB (no setup required)
- Neo4j: For advanced graph operations
- PostgreSQL: For production relational storage
- PGVector: For PostgreSQL-based vector storage
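Since every provider falls back to a documented default, it can be handy to print the stack cognee will actually use. A hedged sketch (the defaults mirror the list above; `active_stack` is an illustrative helper, not a cognee API):

```python
import os

# Defaults documented above: NetworkX + LanceDB + SQLite
DEFAULTS = {
    "GRAPH_DATABASE_PROVIDER": "networkx",
    "VECTOR_DB_PROVIDER": "lancedb",
    "DB_PROVIDER": "sqlite",
}

def active_stack() -> dict:
    """Resolve each provider from the environment, falling back to defaults."""
    return {key: os.environ.get(key, default) for key, default in DEFAULTS.items()}

print(active_stack())
```

With nothing set in the environment this reports the default SQLite + NetworkX + LanceDB stack; exporting any of the variables overrides the corresponding entry.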
## Case 0 - Default cognee

The simplest setup using local storage (no external services required):

```bash
# .env
LLM_API_KEY="your_openai_key"

# Optional: Specify default providers explicitly
GRAPH_DATABASE_PROVIDER="networkx"
VECTOR_DB_PROVIDER="lancedb"
DB_PROVIDER="sqlite"
DB_NAME="cognee_db"
```

Cognee stores files in the `.cognee/` directory by default. Use `DATA_ROOT_DIRECTORY` and `SYSTEM_ROOT_DIRECTORY` to customize storage locations.
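For example, to keep everything under a project-local `storage/` folder, you could create the directories and export the two variables from Python (an illustrative sketch; the `storage/` layout is an assumption, and the variables are set before importing cognee on the conservative assumption that configuration is read at import time):

```python
import os
from pathlib import Path

# Hypothetical layout: cognee data and system files under ./storage
base = Path("storage").resolve()
data_dir = base / "data"
system_dir = base / "system"

for d in (data_dir, system_dir):
    d.mkdir(parents=True, exist_ok=True)

# Export before importing cognee so the SDK sees the custom locations
os.environ["DATA_ROOT_DIRECTORY"] = str(data_dir)
os.environ["SYSTEM_ROOT_DIRECTORY"] = str(system_dir)

print(os.environ["DATA_ROOT_DIRECTORY"])
```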
```python
import asyncio

import cognee

async def main():
    # Ingest a document, build the knowledge graph, then query it
    text = "Cognee is a memory layer for AI agents."
    await cognee.add(text)
    await cognee.cognify()

    results = await cognee.search("What is cognee?")
    print(results[0])

asyncio.run(main())
```
## Case 1 - Using Neo4j

For advanced graph capabilities with Neo4j:

```bash
# .env
LLM_API_KEY="your_openai_key"

# Neo4j configuration
GRAPH_DATABASE_PROVIDER="neo4j"
GRAPH_DATABASE_URL="bolt://localhost:7687"
GRAPH_DATABASE_USERNAME="neo4j"
GRAPH_DATABASE_PASSWORD="your_password"

# Keep other defaults
VECTOR_DB_PROVIDER="lancedb"
DB_PROVIDER="sqlite"
```

Start Neo4j locally:

```bash
docker run -p 7474:7474 -p 7687:7687 -e NEO4J_AUTH=neo4j/your_password neo4j:latest
```
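Before pointing cognee at Neo4j, it can save debugging time to confirm the Bolt port is actually reachable. A small stdlib sketch (`port_open` is an illustrative helper, not part of cognee; once the container is up, check `port_open("localhost", 7687)`):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a throwaway listener instead of a real Neo4j instance
server = socket.socket()
server.bind(("127.0.0.1", 0))  # the OS assigns a free port
server.listen(1)
demo_port = server.getsockname()[1]

print(port_open("127.0.0.1", demo_port))  # True: the listener is up
server.close()
```

A `False` result usually means the container is still starting or the port mapping is wrong.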
## Case 2 - Using PostgreSQL + PGVector

For a production-ready setup with PostgreSQL:

```bash
# .env
LLM_API_KEY="your_openai_key"

# PostgreSQL configuration
DB_PROVIDER="postgres"
DB_NAME="cognee_db"
DB_HOST="127.0.0.1"
DB_PORT="5432"
DB_USERNAME="cognee"
DB_PASSWORD="cognee"

# PGVector for embeddings
VECTOR_DB_PROVIDER="pgvector"

# Graph database
GRAPH_DATABASE_PROVIDER="kuzu"
```
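To sanity-check the connection values cognee will see (for example, before trying them with `psql`), you can assemble them into a standard PostgreSQL URI. A hedged sketch whose defaults mirror the `.env` above (`postgres_dsn` is an illustrative helper, not a cognee API):

```python
import os

def postgres_dsn() -> str:
    """Assemble a libpq-style connection URI from the env vars shown above."""
    return (
        f"postgresql://{os.environ.get('DB_USERNAME', 'cognee')}"
        f":{os.environ.get('DB_PASSWORD', 'cognee')}"
        f"@{os.environ.get('DB_HOST', '127.0.0.1')}"
        f":{os.environ.get('DB_PORT', '5432')}"
        f"/{os.environ.get('DB_NAME', 'cognee_db')}"
    )

print(postgres_dsn())  # postgresql://cognee:cognee@127.0.0.1:5432/cognee_db
```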
## Free Local Setup

Run cognee completely offline with Ollama:

```bash
# .env
LLM_API_KEY="ollama"
LLM_MODEL="llama3.1:8b"
LLM_PROVIDER="ollama"
LLM_ENDPOINT="http://localhost:11434/v1"

EMBEDDING_PROVIDER="ollama"
EMBEDDING_MODEL="avr/sfr-embedding-mistral:latest"
EMBEDDING_ENDPOINT="http://localhost:11434/api/embeddings"
EMBEDDING_DIMENSIONS="4096"
```
Install and start Ollama:

```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull models
ollama pull llama3.1:8b
ollama pull avr/sfr-embedding-mistral:latest
```
## Next Steps

- Explore custom pipelines
- Learn about data ingestion
- Check out search types