
Evaluating the Retriever

The retriever plays a crucial role in fetching relevant knowledge from structured graph data. A well-optimized retriever ensures that the most contextually relevant nodes and edges are chosen for downstream processing, ultimately improving response quality in your AI system.

This guide walks you through evaluating the effectiveness of cognee's different retrievers.

Step 1: Clone the cognee repo

git clone https://github.com/topoteretes/cognee.git

Step 2: Install with Poetry

Navigate to the cognee repo

cd cognee

Install the dependencies with Poetry

poetry install
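
To confirm the installation succeeded, you can run Python inside the Poetry-managed virtual environment with poetry run. A quick check, assuming the package is importable as cognee after installation:

poetry run python -c "import cognee"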

Step 3: Set configuration

You can override the configuration parameters by setting them in your .env file. For example:

# .env file example
QA_ENGINE=cognee_completion
EVALUATING_CONTEXTS=True
NUMBER_OF_SAMPLES_IN_CORPUS=50
BENCHMARK=HotPotQA

To choose which retriever to evaluate, set the QA_ENGINE parameter. The currently supported options are cognee_completion and cognee_graph_completion.

Ensure that EVALUATING_CONTEXTS=True. This is the default, so you only need to avoid setting it to False.
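
Many dotenv-based setups also read variables that are already present in the process environment, so as an alternative to editing .env you can export the same settings in your shell before running the evaluation. A minimal sketch, assuming the evaluation framework falls back to environment variables when a value is absent from .env:

export QA_ENGINE=cognee_graph_completion
export EVALUATING_CONTEXTS=True
export NUMBER_OF_SAMPLES_IN_CORPUS=50
export BENCHMARK=HotPotQA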

Step 4: Run evaluation

Run the evaluation script

python evals/eval_framework/run_eval.py
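
If you have not activated the Poetry virtual environment, prefix the command with poetry run so it executes with the project's dependencies:

poetry run python evals/eval_framework/run_eval.py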

Step 5: Open the generated dashboard to see the results

The automatically generated dashboard.html contains detailed evaluation results. The quality of retriever output is characterized by the Contextual Relevancy Score and the Context Coverage Score, as described on the corresponding reference page.
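
The dashboard is a plain HTML file, so you can open it directly in a browser. For example, assuming dashboard.html is written to the directory you ran the evaluation from:

open dashboard.html # macOS
xdg-open dashboard.html # Linux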

