Use the API

Difficulty: Medium

Overview

This tutorial demonstrates how to interact with cognee’s API service using Docker and cURL. You will learn how to:
  • Set up your environment with an API key
  • Start the API service using Docker
  • Authenticate to get an access token
  • Upload a document (in this case, a text file containing Alice in Wonderland)
  • Process the document with cognee’s pipeline
  • Search the processed content with natural language queries

Step 1: Environment Setup

First, create a .env file that contains your API key. cognee uses this key to authenticate with the provider of the large language model (LLM) that powers it.
echo 'LLM_API_KEY="YOUR-KEY"' > .env
This step ensures that cognee can access your LLM credentials.
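Before starting the container, it is worth confirming that the file was written as expected. A minimal sanity check (the key value here is a placeholder; substitute your real key):

```shell
# From Step 1: write the key to .env (placeholder value for illustration)
echo 'LLM_API_KEY="YOUR-KEY"' > .env

# Fail early if the variable is missing, before the file is mounted into Docker
if grep -q '^LLM_API_KEY=' .env; then
  echo "LLM_API_KEY is set"
else
  echo "LLM_API_KEY is missing" >&2
  exit 1
fi
```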

Step 2: Starting the API Service

Pull and run the cognee Docker container:
docker pull cognee/cognee:main
docker run -d -p 8000:8000 --name cognee_container -v $(pwd)/.env:/app/.env cognee/cognee:main
This command pulls the cognee image built from the main branch and starts a container with your .env file mounted at /app/.env.
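The container may take a few seconds before it accepts connections, so a first request fired immediately can fail. A small readiness loop avoids that race (a sketch assuming bash, which provides the /dev/tcp pseudo-device; the port and timeout are assumptions):

```shell
# Poll a TCP port on localhost until it accepts connections, or give up
# after a number of one-second tries (default 30).
wait_for_port() {
  local port=$1 tries=${2:-30}
  for _ in $(seq "$tries"); do
    # bash's /dev/tcp pseudo-device attempts a TCP connect when opened
    if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Example usage once the container is starting:
#   wait_for_port 8000 && echo "API is up"
```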

Step 3: Authentication

Obtain an access token using the default credentials:
access_token=$(curl --location 'http://127.0.0.1:8000/api/v1/auth/login' \
  --form 'username="default_user@example.com"' \
  --form 'password="default_password"' | sed -n 's/.*"access_token":"\([^"]*\)".*/\1/p')

echo "Access Token: $access_token"
The access token will be used to authenticate all subsequent API requests.
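The sed expression above pulls the token field out of the JSON login response. To see how it behaves in isolation, you can run the same pattern against a sample payload (the response shape is an assumption for illustration; jq, if installed, is a more robust alternative for JSON parsing):

```shell
# A sample login response (field names assumed for illustration)
response='{"access_token":"sample-token-123","token_type":"bearer"}'

# Extract the value of "access_token" with the same sed pattern as above
access_token=$(printf '%s' "$response" | sed -n 's/.*"access_token":"\([^"]*\)".*/\1/p')
echo "$access_token"   # prints sample-token-123
```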

Step 4: Uploading the Document

Upload Alice in Wonderland to a dataset called ‘test-dataset’:
curl -s https://raw.githubusercontent.com/topoteretes/cognee/main/examples/data/alice_in_wonderland.txt | \
curl --location 'http://127.0.0.1:8000/api/v1/add' \
  --header "Authorization: Bearer $access_token" \
  --form 'data=@-' \
  --form 'datasetName="test-dataset"'
This uploads the document and associates it with the specified dataset name.
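If you would rather not stream the file through a pipe, the same upload works from a file on disk. A sketch (the curl invocation is shown as a comment since it needs the running service and a valid token; the stand-in file contents are an assumption):

```shell
# Save the document locally first. In the tutorial this would be
# alice_in_wonderland.txt fetched from the cognee repository; here a
# stand-in file is used for illustration.
printf 'Sample document contents\n' > document.txt

# Then upload from disk instead of stdin -- note data=@document.txt:
#   curl --location 'http://127.0.0.1:8000/api/v1/add' \
#     --header "Authorization: Bearer $access_token" \
#     --form 'data=@document.txt' \
#     --form 'datasetName="test-dataset"'

ls -l document.txt
```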

Step 5: Processing the Document

Process the uploaded dataset:
curl --location 'http://127.0.0.1:8000/api/v1/cognify' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer $access_token" \
--data '{
 "datasets": ["test-dataset"]
}'
This step processes the document through cognee’s pipeline, preparing it for querying.
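The request body is plain JSON. If you build it in a script, a quoted heredoc keeps the quoting readable, and validating it with python3 -m json.tool catches malformed payloads before they reach the API (a small sketch):

```shell
# Build the cognify payload with a quoted heredoc (no shell expansion inside)
payload=$(cat <<'EOF'
{
  "datasets": ["test-dataset"]
}
EOF
)

# Validate that the payload is well-formed JSON before sending it
printf '%s' "$payload" | python3 -m json.tool
```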

Step 6: Performing Search Queries

Now you can query the processed dataset. Here are three example queries using the GRAPH_COMPLETION search type:

Query 1: List Important Characters

curl --location 'http://127.0.0.1:8000/api/v1/search' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer $access_token" \
--data '{
 "searchType": "GRAPH_COMPLETION",
 "query": "List me all the important characters in Alice in Wonderland."
}'

Query 2: How Did Alice End Up in Wonderland?

curl --location 'http://127.0.0.1:8000/api/v1/search' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer $access_token" \
--data '{
 "searchType": "GRAPH_COMPLETION",
 "query": "How did Alice end up in Wonderland?"
}'

Query 3: Describe Alice’s Personality

curl --location 'http://127.0.0.1:8000/api/v1/search' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer $access_token" \
--data '{
 "searchType": "GRAPH_COMPLETION",
 "query": "Tell me about Alice'\''s personality."
}'
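Search responses come back as a single line of raw JSON. Piping them through python3 -m json.tool makes them readable; here the technique is applied to a sample response (the response structure is an assumption for illustration):

```shell
# A sample search response (structure assumed for illustration)
sample='{"results":[{"text":"Alice follows the White Rabbit down a rabbit hole."}]}'

# Pretty-print for readability. In the tutorial you would pipe the curl
# output of the /search request instead of this sample string.
printf '%s' "$sample" | python3 -m json.tool
```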

Step 7: Deleting the Document

Finally, remove the uploaded document from the dataset:
curl -s https://raw.githubusercontent.com/topoteretes/cognee/main/examples/data/alice_in_wonderland.txt | \
curl --location --request DELETE 'http://127.0.0.1:8000/api/v1/delete' \
  --header "Authorization: Bearer $access_token" \
  --form 'data=@-' \
  --form 'datasetName="test-dataset"' \
  --form 'mode="hard"'

Conclusion

This tutorial showed you how to use cognee’s API to:
  • Set up your environment
  • Start the cognee service using Docker
  • Authenticate and get an access token
  • Upload and process a document
  • Query the processed content using natural language
  • Delete the document you have worked with
You can adapt these steps for your own use cases, whether you’re building a search application, a knowledge base, or any other system that needs powerful document processing and querying capabilities. Happy coding!