# Deploying Cognee
Cognee is a standalone tool that can be deployed on-premises or in the cloud. This guide covers the available deployment options and helps you choose the right approach for your use case.
## Deployment Options Overview

### Local Development & Testing

- MCP Integration - Connect to code editors and IDEs
- Docker - Containerized local deployment

### Cloud & Production Deployments

- Modal - Serverless deployment for easily scalable remote processing
- Helm - Production-ready Kubernetes deployment
- EC2 - Traditional cloud server deployment
## When to Use Each Option

| Option | Best For | Complexity | Scalability |
|---|---|---|---|
| MCP | Code editors, development workflows | Low | Limited |
| Docker | Local development, testing, small deployments | Low | Medium |
| Modal | Serverless processing, auto-scaling workloads | Medium | High |
| Helm/K8s | Production environments, enterprise deployments | High | High |
| EC2 | Traditional server deployments, custom configurations | Medium | Medium |
## Quick Start

Choose your deployment path:

### For Development

Start with MCP integration for code editor workflows:

```shell
docker run --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main
# Connect to your preferred editor
# See the MCP integration guides for specific editors
```
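The exact wiring depends on your editor. As an illustration, many MCP clients are configured with an `mcpServers` JSON entry along these lines — the server name `cognee` and the stdio-style invocation here are assumptions, so check the guide for your specific editor:

```json
{
  "mcpServers": {
    "cognee": {
      "command": "docker",
      "args": ["run", "--env-file", ".env", "--rm", "-i", "cognee/cognee-mcp:main"]
    }
  }
}
```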
### For Production

For most production use cases, we recommend starting with Modal for its simplicity and automatic scaling, then moving to Kubernetes with Helm for enterprise deployments that require more control.
## Docker Deployment

Docker provides an easy way to run Cognee in a containerized environment, well suited to local development and testing.

### Prerequisites

- Docker installed and running
- Docker Compose (for multi-service setup)
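The multi-service setup mentioned above can be sketched with a Compose file that wires Cognee to its backing databases. This is a minimal illustration, not a supported configuration: the `cognee/cognee:main` image tag, the service names, and the credentials are assumptions you should replace with your own values.

```yaml
# docker-compose.yml -- minimal sketch; image tags, names, and credentials are assumptions
services:
  cognee:
    image: cognee/cognee:main
    env_file: .env          # connection settings, see Environment Configuration below
    ports:
      - "8000:8000"
    depends_on:
      - postgres
      - qdrant
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: cognee
  qdrant:
    image: qdrant/qdrant:latest
```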
### Environment Configuration

Configure your deployment using environment variables:

```shell
# Database connections
POSTGRES_URL=postgresql://user:password@localhost:5432/cognee
NEO4J_URL=bolt://localhost:7687

# Vector database
QDRANT_URL=http://localhost:6333

# LLM configuration
OPENAI_API_KEY=your_openai_key
```
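Before launching the container, it can help to sanity-check that the required variables are actually set. Here is a small, hypothetical pre-flight helper (the variable names come from the example above; it is a convenience sketch, not part of Cognee itself):

```python
import os

# Names taken from the environment example above; NEO4J_URL is optional.
REQUIRED = ["POSTGRES_URL", "QDRANT_URL", "OPENAI_API_KEY"]


def missing_settings(env=None):
    """Return the required variable names that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED if not env.get(name)]


# Example: print a warning before starting the deployment
# missing = missing_settings()
# if missing:
#     print("Missing required settings:", ", ".join(missing))
```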
## Architecture Considerations

### Single Instance Deployment

- Docker or EC2 for simple, single-server deployments
- Suitable for development, testing, or small-scale production use
- Limited scalability but easier to manage

### Distributed Deployment

- Modal for serverless, auto-scaling processing
- Kubernetes/Helm for container orchestration
- Better for high-volume production workloads
## Database Requirements

All deployment options rely on the following backing stores:

- PostgreSQL - Primary data storage
- Vector Database - Qdrant, Weaviate, or Pinecone
- Graph Database - Neo4j (optional but recommended)
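The connection strings shown in the environment configuration follow standard URL syntax, so if you ever need to split one into parts (for example, to pass host and port to a tool separately), Python's standard library can do it:

```python
from urllib.parse import urlsplit

url = urlsplit("postgresql://user:password@localhost:5432/cognee")
print(url.scheme)            # postgresql
print(url.hostname)          # localhost
print(url.port)              # 5432
print(url.path.lstrip("/"))  # cognee -> the database name
```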
## Next Steps

1. Choose your deployment method based on your use case
2. Follow the specific deployment guide for detailed instructions
3. Configure your databases according to your requirements
4. Test your deployment with sample data
## Need Help?

- Join our Discord community for deployment support
- Check our Troubleshooting Guide for common issues
- Review the API Reference for configuration options
Ready to deploy? Choose your preferred method above and follow the detailed guides for step-by-step instructions.