Build and run Cognee MCP from source to access advanced customization, multiple transport options, and the latest development features.

Advantages of Local Setup

  • Full Control: Customize server configuration, add providers, and modify behavior
  • Latest Features: Access development features before they reach Docker releases
  • Multiple Transports: Choose stdio, SSE, or HTTP transport modes
  • Development Ready: Debug, modify, and contribute to the codebase

Setup Steps

1. Clone Repository

git clone https://github.com/topoteretes/cognee.git
cd cognee

2. Create Environment File

Create a .env file with your configuration:
LLM_API_KEY="your-openai-api-key"
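The file uses simple KEY="value" lines. As a minimal illustration of the format (Cognee loads the file itself; parse_env below is just a hypothetical helper for this sketch):

```python
# Sketch only: parse a .env file's KEY="value" lines into a dict.
# Cognee has its own loading mechanism; this just shows the format.
def parse_env(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blanks, comments, and lines without an assignment
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

print(parse_env('LLM_API_KEY="your-openai-api-key"'))
# {'LLM_API_KEY': 'your-openai-api-key'}
```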

3. Install Dependencies

# Install the uv package manager (macOS via Homebrew; see uv's installation docs for other platforms)
brew install uv

# Install project dependencies
cd cognee-mcp
uv sync --dev --all-extras --reinstall

4. Activate and Run

# Activate virtual environment
source .venv/bin/activate

# Run with default stdio transport
python src/server.py

Running in API Mode

To connect the MCP server to an existing Cognee backend instead of running standalone:
# Set the backend API URL
export API_URL=http://localhost:8080

# Optional: Set authentication token if backend requires it
export API_TOKEN=your_backend_token

# Start MCP in HTTP or SSE mode pointing to the backend
python src/server.py --transport http
When API_URL is set, the MCP server acts as an interface to the centralized backend. This allows multiple MCP instances and clients to share the same knowledge graph. You can also pass these as command-line arguments:
python src/server.py --transport http --api-url http://localhost:8080 --api-token your_token
Use cases:
  • Team collaboration with shared memory
  • Multiple AI clients accessing consistent data
  • Centralized knowledge graph management
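Either the environment variables or the command-line flags work. As a rough sketch of how the two sources might be merged (resolve_api_config is a hypothetical helper, not part of Cognee; in this sketch a CLI flag simply overrides the corresponding environment variable):

```python
import argparse
import os

def resolve_api_config(argv=None):
    # Hypothetical: --api-url/--api-token flags fall back to the
    # API_URL / API_TOKEN environment variables when not given.
    parser = argparse.ArgumentParser()
    parser.add_argument("--api-url", default=os.environ.get("API_URL"))
    parser.add_argument("--api-token", default=os.environ.get("API_TOKEN"))
    args = parser.parse_args(argv)
    return args.api_url, args.api_token

print(resolve_api_config(["--api-url", "http://localhost:8080"]))
```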

Further details

Choose the transport mode based on your client requirements:
Stdio is the default mode for most MCP clients: the client starts the server as a subprocess and communicates with it through standard input/output.
python src/server.py
# or equivalently:
python src/server.py --transport stdio
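From the client's side, the stdio transport boils down to spawning the server as a subprocess and exchanging newline-delimited JSON-RPC messages over its stdin/stdout. A toy sketch (here `cat` stands in for `python src/server.py` and merely echoes the request back, so no real MCP handshake happens):

```python
import json
import subprocess

# Hypothetical stdio exchange: write one JSON-RPC message, read one back.
# `cat` is a placeholder for the actual server command.
request = {"jsonrpc": "2.0", "id": 1, "method": "ping"}
proc = subprocess.run(
    ["cat"],
    input=json.dumps(request) + "\n",
    capture_output=True,
    text=True,
)
print(proc.stdout.strip())
```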
Configure your MCP client to launch the server directly:
{
  "mcpServers": {
    "cognee": {
      "command": "uv",
      "args": [
        "--directory", "/absolute/path/to/cognee-mcp",
        "run", "cognee-mcp"
      ]
    }
  }
}
Replace /absolute/path/to/cognee-mcp with the actual path to your cloned cognee-mcp directory.
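Before restarting your client, it can save a debugging round-trip to confirm that the directory in the config actually exists. A throwaway check (the path below is the placeholder from the snippet above, so this prints False until you substitute your own):

```python
import json
import pathlib

# Pull the "--directory" value out of the MCP client config and check it.
config = json.loads("""
{
  "mcpServers": {
    "cognee": {
      "command": "uv",
      "args": ["--directory", "/absolute/path/to/cognee-mcp", "run", "cognee-mcp"]
    }
  }
}
""")
args = config["mcpServers"]["cognee"]["args"]
directory = args[args.index("--directory") + 1]
print(pathlib.Path(directory).is_dir())  # False for the placeholder path
```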
If you encounter errors on first run, reset your MCP configuration and restart.
All available arguments for python src/server.py:
Argument         Default    Description
--transport      stdio      Transport protocol: stdio, http, or sse
--host           127.0.0.1  Host to bind the server to (HTTP/SSE only)
--port           8000       Port to bind the server to (HTTP/SSE only)
--path           /mcp       URL path for the HTTP endpoint
--log-level      info       Log verbosity: debug, info, warning, or error
--no-migration   off        Skip database migrations on startup
--api-url        (none)     URL of a running Cognee backend (enables API mode)
--api-token      (none)     Auth token for the backend API (if required)
Example with all options:
python src/server.py \
  --transport http \
  --host 0.0.0.0 \
  --port 8000 \
  --log-level debug \
  --api-url http://localhost:8080 \
  --api-token your_token
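Once the server is running in HTTP or SSE mode, a quick way to confirm it is listening is a plain TCP probe; this only checks that the socket is open, not that the MCP protocol is working (host and port match the example above):

```python
import socket

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    # Try to open a TCP connection; success means something is listening.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(is_listening("127.0.0.1", 8000))
```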

Next Steps

After starting the server, configure your AI client to connect to it. See the integrations section for client-specific setup instructions.

Need Help?

Join Our Community

Get support and connect with other developers using Cognee MCP.