Cognee MCP works with AI development tools that support the Model Context Protocol. These clients connect to your Cognee MCP server and provide access to memory management, code intelligence, and data operations through their interfaces. The client configuration is the same whether your MCP server runs in standalone mode or connects to a centralized Cognee backend. The server architecture is transparent to clients.

Available Clients

Configuration Options

Each integration requires a configuration file that tells the client how to connect to your Cognee MCP server. You have two options:

Docker (HTTP Transport)

Use this if you started Cognee MCP with Docker using the quickstart guide. The client connects to the server over HTTP.
{
  "mcpServers": {
    "cognee": {
      "url": "http://localhost:8000/mcp"
    }
  }
}
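Since the snippet above is plain JSON, you can also generate it programmatically. A minimal Python sketch, assuming a hypothetical config file named `mcp.json` in the working directory (real clients each document their own config location, as the integration guides note):

```python
import json
from pathlib import Path

# HTTP-transport entry for a Dockerized Cognee MCP server
# (port 8000 matches the quickstart default).
config = {
    "mcpServers": {
        "cognee": {
            "url": "http://localhost:8000/mcp"
        }
    }
}

# "mcp.json" is a placeholder path -- substitute the location
# your client actually reads its MCP configuration from.
path = Path("mcp.json")
path.write_text(json.dumps(config, indent=2) + "\n")

# Round-trip to confirm the file on disk is valid JSON.
loaded = json.loads(path.read_text())
print(loaded["mcpServers"]["cognee"]["url"])  # http://localhost:8000/mcp
```

Writing the file through `json.dumps` rather than by hand avoids the trailing-comma and quoting mistakes that make clients silently ignore a config.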

Local (stdio Transport)

Use this if you cloned the Cognee repository and run the server from source. The client starts the server as a subprocess and communicates through standard input/output.
{
  "mcpServers": {
    "cognee": {
      "command": "uv",
      "args": [
        "--directory",
        "/absolute/path/to/cognee-mcp",
        "run",
        "cognee-mcp"
      ],
      "env": {
        "LLM_API_KEY": "your-api-key"
      }
    }
  }
}
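If your client already has other MCP servers configured, the Cognee entry should be merged into the existing `mcpServers` object rather than replacing the whole file. A hedged Python sketch of that merge, where the file name `mcp.json` and the pre-existing `other-server` entry are purely illustrative:

```python
import json
from pathlib import Path

# Simulate a client config that already registers another MCP server.
path = Path("mcp.json")
path.write_text(json.dumps({
    "mcpServers": {
        "other-server": {"command": "npx", "args": ["other-server"]}
    }
}, indent=2))

config = json.loads(path.read_text())

# Add the Cognee stdio entry without clobbering existing servers.
config.setdefault("mcpServers", {})["cognee"] = {
    "command": "uv",
    "args": ["--directory", "/absolute/path/to/cognee-mcp", "run", "cognee-mcp"],
    "env": {"LLM_API_KEY": "your-api-key"},
}

path.write_text(json.dumps(config, indent=2) + "\n")
merged = json.loads(path.read_text())
print(sorted(merged["mcpServers"]))  # ['cognee', 'other-server']
```

The `setdefault` call keeps the merge safe even when the file has no `mcpServers` key yet, so the same snippet works for a fresh config.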
The configuration file location and exact format vary by client. See the specific integration guide for your tool.

Next Steps

Choose your client from the cards above to see detailed setup instructions for that specific tool.