Build and run Cognee MCP from source to access advanced customization, multiple transport options, and the latest development features.

Advantages of Local Setup

  • Full Control: Customize server configuration, add providers, and modify behavior
  • Latest Features: Access development features before they reach Docker releases
  • Multiple Transports: Choose stdio, SSE, or HTTP transport modes
  • Development Ready: Debug, modify, and contribute to the codebase

Setup Steps

1. Clone Repository

git clone https://github.com/topoteretes/cognee.git
cd cognee

2. Create Environment File

Create a .env file with your configuration:
LLM_API_KEY="your-openai-api-key"
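
At minimum, set your LLM API key. Below is a sketch of a fuller file; every variable besides LLM_API_KEY is an assumption modeled on common Cognee provider settings, so verify the names against the repository's environment template before relying on them:
LLM_API_KEY="your-openai-api-key"
# Assumed optional overrides -- confirm against the repo's env template:
LLM_PROVIDER="openai"
LLM_MODEL="gpt-4o-mini"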

3. Install Dependencies

# Install the uv package manager (macOS, via Homebrew)
brew install uv
# On Linux, use the standalone installer instead:
# curl -LsSf https://astral.sh/uv/install.sh | sh

# Install project dependencies
cd cognee-mcp
uv sync --dev --all-extras --reinstall

4. Activate and Run

# Activate virtual environment
source .venv/bin/activate

# Run with default stdio transport
python src/server.py
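
To smoke-test the server before wiring up a client, you can launch it under the MCP Inspector, which drives the stdio transport interactively (this assumes Node.js is installed; the server command mirrors the one above):
# Launch the server under the MCP Inspector for interactive testing
npx @modelcontextprotocol/inspector python src/server.py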

Transport Modes

Choose the transport mode based on your client requirements: stdio, HTTP, or SSE.

stdio

Default mode for most MCP clients. The client starts the server as a subprocess and communicates through standard input/output.
python src/server.py
Use this with Cursor, Claude Code, Cline, and Roo Code when running from source.
If you encounter errors on first run, reset your MCP configuration and restart.
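
HTTP and SSE

Network transports for clients that connect over a URL instead of spawning a subprocess. The --transport flag with the http value is confirmed by the API-mode examples below; the sse value is an assumption based on the advertised SSE support, so verify the exact choices for your build:
# HTTP transport: serve MCP over an HTTP endpoint
python src/server.py --transport http

# SSE transport (assumed flag value): serve MCP over Server-Sent Events
python src/server.py --transport sse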

Running in API Mode

To connect the MCP server to an existing Cognee backend instead of running standalone:
# Set the backend API URL
export API_URL=http://localhost:8080

# Optional: Set authentication token if backend requires it
export API_TOKEN=your_backend_token

# Start MCP in HTTP or SSE mode pointing to the backend
python src/server.py --transport http
When API_URL is set, the MCP server acts as an interface to the centralized backend, which lets multiple MCP instances and clients share the same knowledge graph.

You can also pass these settings as command-line arguments:
python src/server.py --transport http --api-url http://localhost:8080 --api-token your_token
Use cases:
  • Team collaboration with shared memory
  • Multiple AI clients accessing consistent data
  • Centralized knowledge graph management
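
For example, each teammate can point a local MCP instance at the same backend using only the flags shown above (the host name is a placeholder to replace with your backend's address):
# Every instance below reads and writes the same shared knowledge graph
python src/server.py --transport http --api-url http://team-server:8080 --api-token "$API_TOKEN"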

Next Steps

After starting the server, configure your AI client to connect to it. See the integrations section for client-specific setup instructions.
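
As a sketch of that client setup for a stdio client such as Cursor: the mcpServers schema is standard MCP client configuration, but the config location and the absolute paths below are placeholders to adjust for your clone.
# Hypothetical example: register the local server in Cursor's MCP config
cat > ~/.cursor/mcp.json <<'EOF'
{
  "mcpServers": {
    "cognee": {
      "command": "/path/to/cognee/cognee-mcp/.venv/bin/python",
      "args": ["/path/to/cognee/cognee-mcp/src/server.py"]
    }
  }
}
EOF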

Need Help?

Join our community to get support and connect with other developers using Cognee MCP.