Start the Cognee MCP server using Docker to quickly test AI memory integration.

Prerequisites

  • Docker installed and running
  • OpenAI API key

Setup Steps

1. Set Your API Key

export LLM_API_KEY=your_api_key_here

2. Create Environment File

echo "LLM_API_KEY=your_api_key_here" > .env
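As a minor variation, assuming a POSIX shell, the file can also be written with printf and locked down, since it holds a credential:

```shell
# Sketch: write .env and restrict it to the current user.
printf 'LLM_API_KEY=%s\n' "your_api_key_here" > .env
chmod 600 .env   # the file contains a secret
```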

3. Start the Server

docker run -e TRANSPORT_MODE=http --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main
The server starts on port 8000 with HTTP transport mode.

4. Verify the Server

curl http://localhost:8000/health
You should see a healthy response from the server.
Because the container runs with --rm, all data is discarded when the container stops. Use a volume mount for persistent storage.
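For persistence, one sketch is to mount a named volume into the container. The /app/data mount point here is an assumption borrowed from the backend's Compose configuration later in this guide; the MCP image's actual data path may differ:

```shell
# Sketch: same HTTP-mode server, but with a named volume so data survives restarts.
# The /app/data path is an assumption, not confirmed for the cognee-mcp image.
docker volume create cognee_mcp_data
docker run -e TRANSPORT_MODE=http --env-file ./.env \
  -p 8000:8000 \
  -v cognee_mcp_data:/app/data \
  --rm -it cognee/cognee-mcp:main
```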

API Mode (Shared Knowledge Graph)

To connect multiple clients to a shared knowledge graph, run MCP in API mode pointing to a centralized Cognee backend:

1. Start Cognee Backend

First, start a Cognee backend instance:
docker run -e LLM_API_KEY=your_api_key_here -p 8080:8000 --rm -it cognee/cognee:main

2. Start MCP in API Mode

Start the MCP server and point it to the backend:
docker run -e TRANSPORT_MODE=sse -e API_URL=http://localhost:8080 -p 8000:8000 --rm -it cognee/cognee-mcp:main
The container automatically converts localhost to host.docker.internal so the MCP container can reach your host machine. The MCP server now acts as an interface to the shared backend.
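One way to sanity-check the rewrite, assuming the cognee-mcp image includes curl and the backend exposes the same /health path (neither is confirmed here), is to call the backend from inside the running MCP container:

```shell
# Hypothetical check: from inside the MCP container, reach the backend through
# host.docker.internal, the address used after the localhost rewrite.
# "cognee-mcp-container" is a placeholder; find the real name with `docker ps`.
docker exec -it cognee-mcp-container curl http://host.docker.internal:8080/health
```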

3. Connect Additional Clients (Optional)

If you need to support multiple clients, start additional MCP instances on different ports:
docker run -e TRANSPORT_MODE=sse -e API_URL=http://localhost:8080 -p 8001:8000 --rm -it cognee/cognee-mcp:main
Each client connects to its own MCP instance, but all share the same knowledge graph through the backend.
  • The API mode requires SSE or HTTP transport
  • The localhost in API_URL is automatically mapped to work from inside the container
  • Add -e API_TOKEN=your_token if your backend requires authentication
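Putting the notes above together, a sketch of an authenticated API-mode launch looks like this (API_TOKEN is taken from the note above; your_token is a placeholder):

```shell
# Sketch: MCP in API mode against a backend that requires a token.
# localhost in API_URL is rewritten to host.docker.internal inside the container.
docker run -e TRANSPORT_MODE=sse \
  -e API_URL=http://localhost:8080 \
  -e API_TOKEN=your_token \
  -p 8000:8000 --rm -it cognee/cognee-mcp:main
```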

Docker Compose (Production Setup)

For production deployments, use Docker Compose to run the Cognee backend and MCP server together. This avoids localhost mapping issues and uses Docker’s internal DNS for service discovery.
docker-compose.yml
services:
  cognee-backend:
    image: cognee/cognee:main
    container_name: cognee-backend
    restart: unless-stopped
    ports:
      - "8080:8000"
    environment:
      LLM_API_KEY: "${LLM_API_KEY}"
      LLM_PROVIDER: "${LLM_PROVIDER:-openai}"
      LLM_MODEL: "${LLM_MODEL:-gpt-4o-mini}"
    volumes:
      - cognee_data:/app/data
    networks:
      - cognee_internal

  cognee-mcp:
    image: cognee/cognee-mcp:main
    container_name: cognee-mcp
    restart: unless-stopped
    ports:
      - "8000:8000"
    environment:
      TRANSPORT_MODE: "sse"
      API_URL: "http://cognee-backend:8000"
      LLM_API_KEY: "${LLM_API_KEY}"
      LLM_PROVIDER: "${LLM_PROVIDER:-openai}"
      LLM_MODEL: "${LLM_MODEL:-gpt-4o-mini}"
    depends_on:
      - cognee-backend
    networks:
      - cognee_internal

volumes:
  cognee_data:

networks:
  cognee_internal:
    driver: bridge
Networking notes:
  • Use the service name (cognee-backend) as the hostname in API_URL — Docker resolves it automatically within the same network.
  • Use the internal port (8000) in API_URL, not the host-mapped port (8080).
  • If you place a reverse proxy (Nginx, Caddy) in front, you do not need to set a Host: localhost header — the backend accepts requests on any host.
  • Add -e API_TOKEN=your_token to the MCP service if your backend requires authentication.
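With the file above saved as docker-compose.yml and LLM_API_KEY exported (or placed in a .env file next to it), bringing the stack up is a single command. The backend's /health path on the host-mapped port is an assumption mirrored from the MCP server's endpoint:

```shell
# Sketch: start both services in the background and verify them.
docker compose up -d
docker compose ps                      # both containers should show as running
curl http://localhost:8000/health      # MCP server (same check as the quick start)
curl http://localhost:8080/health      # backend via its host-mapped port (assumed path)
```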

Connect to AI Clients

After starting the server, connect it to your AI development tool:

  • Cursor: AI-powered code editor with native MCP support
  • Claude Code: Command-line AI assistant from Anthropic
  • Cline: VS Code extension for AI-assisted development
  • Continue: Open-source AI coding assistant
  • Roo Code: AI-powered development environment

Next Steps

  • Tools Reference: See all available MCP tools and operations
  • Local Setup: Run from source for customization and development