

Start the Cognee MCP server using Docker to quickly test AI memory integration.

Prerequisites

  • Docker installed and running
  • OpenAI API key

Setup Steps

1. Set Your API Key

export LLM_API_KEY=your_api_key_here

2. Create Environment File

echo "LLM_API_KEY=your_api_key_here" > .env
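The --env-file flag passes every KEY=value line in the file into the container's environment, so the file can hold more than one setting. A sketch with placeholder values; LLM_PROVIDER and LLM_MODEL are the same optional variables the Docker Compose section uses:

```shell
# .env — each line becomes an environment variable inside the container
LLM_API_KEY=your_api_key_here
# Optional model settings (same variables as the Compose setup)
LLM_PROVIDER=openai
LLM_MODEL=gpt-4o-mini
```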

3. Start the Server

docker run -e TRANSPORT_MODE=http --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main
The server starts on port 8000 with HTTP transport mode.

4. Verify the Server

curl http://localhost:8000/health
You should see a healthy response from the server.
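Right after docker run, the server may need a few seconds before /health responds. A small retry helper can poll until the endpoint comes up; wait_for below is a hypothetical name, not part of Cognee:

```shell
# wait_for TRIES CMD... — run CMD up to TRIES times, sleeping 1s between
# attempts; succeeds as soon as CMD does, fails if every attempt fails.
wait_for() {
  tries=$1; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    if "$@" >/dev/null 2>&1; then return 0; fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}
```

For example, `wait_for 30 curl -fsS http://localhost:8000/health` blocks for up to 30 seconds until the server is ready.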

Persist Data

Because the commands above run with --rm, the container and any data stored inside its filesystem are removed when it stops. Use a named Docker volume or a bind mount if you want memory to survive restarts.
docker run -e TRANSPORT_MODE=http --env-file ./.env -p 8000:8000 \
  -v cognee_data:/app/data \
  --rm -it cognee/cognee-mcp:main
The host side of the -v flag accepts either:
  • A named Docker volume, such as cognee_data:/app/data (Docker creates the volume on first use)
  • A host directory path, such as "$(pwd)/cognee_data:/app/data"; docker run requires an absolute path, so expand the working directory rather than writing ./cognee_data
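Docker distinguishes the two forms by the host side of the -v argument: a bare name is treated as a named volume, anything containing a slash as a host path. A tiny classifier approximating that rule (volume_kind is an illustrative helper, not a Docker command):

```shell
# volume_kind SPEC — classify the host side of a -v SPEC roughly the way
# docker run does: a bare name is a named volume, anything with a slash is
# a host path (which docker run requires to be absolute).
volume_kind() {
  case "${1%%:*}" in
    */*) echo "host path" ;;
    *)   echo "named volume" ;;
  esac
}

volume_kind cognee_data:/app/data       # named volume
volume_kind /srv/cognee_data:/app/data  # host path
```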

API Mode (Shared Knowledge Graph)

To connect multiple clients to a shared knowledge graph, run the MCP server in API mode, pointed at a centralized Cognee backend:
1. Start Cognee Backend

First, start a Cognee backend instance:
docker run -e LLM_API_KEY=your_api_key_here -p 8080:8000 --rm -it cognee/cognee:main

2. Start MCP in API Mode

Start the MCP server and point it to the backend:
docker run -e TRANSPORT_MODE=http -e API_URL=http://localhost:8080 -p 8000:8000 --rm -it cognee/cognee-mcp:main
The container rewrites localhost in API_URL to host.docker.internal so the MCP container can reach a backend running on your host machine. This works on macOS, Windows, and Linux setups that provide host.docker.internal such as Docker Desktop. On Linux without that support, use --network host or set API_URL to the Docker bridge IP instead. The MCP server now acts as an interface to the shared backend.

3. Connect Additional Clients (Optional)

If you need to support multiple clients, start additional MCP instances on different ports:
docker run -e TRANSPORT_MODE=http -e API_URL=http://localhost:8080 -p 8001:8000 --rm -it cognee/cognee-mcp:main
Each client connects to its own MCP instance, but all share the same knowledge graph through the backend.
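The per-client pattern above can be scripted. This sketch prints one docker run command per port rather than executing anything; mcp_run_cmd is a hypothetical helper, not part of Cognee:

```shell
# mcp_run_cmd HOST_PORT — print the docker run command for one detached MCP
# instance; every instance maps a distinct host port to container port 8000
# and points at the same backend.
mcp_run_cmd() {
  echo "docker run -d --rm -e TRANSPORT_MODE=http -e API_URL=http://localhost:8080 -p $1:8000 cognee/cognee-mcp:main"
}

# One instance per client, e.g. three clients on ports 8001-8003:
for port in 8001 8002 8003; do
  mcp_run_cmd "$port"
done
```

Pipe the output to sh, or run each printed line manually, to start the instances (detached via -d rather than interactive -it, so they can run side by side).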
  • The API mode requires SSE or HTTP transport
  • Add -e API_TOKEN=your_token if your backend requires authentication
  • For backend authentication setup and how to obtain a Bearer token, see Deploy REST API Server

Docker Compose (Production Setup)

For production deployments, use Docker Compose to run the Cognee backend and MCP server together. This avoids localhost mapping issues and uses Docker’s internal DNS for service discovery.
docker-compose.yml
services:
  cognee-backend:
    image: cognee/cognee:main
    container_name: cognee-backend
    restart: unless-stopped
    ports:
      - "8080:8000"
    environment:
      LLM_API_KEY: "${LLM_API_KEY}"
      LLM_PROVIDER: "${LLM_PROVIDER:-openai}"
      LLM_MODEL: "${LLM_MODEL:-gpt-4o-mini}"
    volumes:
      - cognee_data:/app/data
    networks:
      - cognee_internal

  cognee-mcp:
    image: cognee/cognee-mcp:main
    container_name: cognee-mcp
    restart: unless-stopped
    ports:
      - "8000:8000"
    environment:
      TRANSPORT_MODE: "http"
      API_URL: "http://cognee-backend:8000"
      LLM_API_KEY: "${LLM_API_KEY}"
      LLM_PROVIDER: "${LLM_PROVIDER:-openai}"
      LLM_MODEL: "${LLM_MODEL:-gpt-4o-mini}"
    depends_on:
      - cognee-backend
    networks:
      - cognee_internal

volumes:
  cognee_data:

networks:
  cognee_internal:
    driver: bridge
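
The ${LLM_API_KEY}-style references in the file above are substituted from the shell environment or from a .env file next to docker-compose.yml, which docker compose reads automatically. An example .env with placeholder values:

```shell
# .env — read automatically by docker compose for ${VAR} substitution
LLM_API_KEY=your_api_key_here
# Optional; the compose file falls back to these defaults anyway
# via ${LLM_PROVIDER:-openai} and ${LLM_MODEL:-gpt-4o-mini}
LLM_PROVIDER=openai
LLM_MODEL=gpt-4o-mini
```

Bring the stack up with `docker compose up -d` and follow the MCP logs with `docker compose logs -f cognee-mcp`.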
Networking notes:
  • Use the service name (cognee-backend) as the hostname in API_URL — Docker resolves it automatically within the same network.
  • Use the internal port (8000) in API_URL, not the host-mapped port (8080).
  • If you place a reverse proxy (Nginx, Caddy) in front, you do not need to set a Host: localhost header — the backend accepts requests on any host.
  • Add -e API_TOKEN=your_token to the MCP service if your backend requires authentication.

Connect to AI Clients

After starting the server, connect it to your AI development tool:

Cursor

AI-powered code editor with native MCP support

Claude Code

Command-line AI assistant from Anthropic

Codex

OpenAI coding agent with built-in MCP support

Cline

VS Code extension for AI-assisted development

Continue

Open-source AI coding assistant

Roo Code

AI-powered development environment

Next Steps

Tools Reference

See all available MCP tools and operations

Local Setup

Run from source for customization and development