Use cognee.search() to ask questions against your processed datasets. This guide shows the basic call and what each parameter does so you know which knob to turn.
Before you start:
- Complete Quickstart to understand basic operations
- Ensure you have LLM Providers configured for LLM-backed search types
- Run cognee.cognify(...) to build the graph before searching
- Keep at least one dataset with read permission for the user running the search
Code in Action
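A minimal sketch of the basic call, assuming an async entry point and the top-level cognee imports from the Quickstart (the SearchType import path can vary between cognee versions):

```python
import asyncio

import cognee
from cognee import SearchType  # in some versions: from cognee.modules.search.types import SearchType


async def main():
    # cognee.add(...) and cognee.cognify(...) must have run before searching.
    results = await cognee.search(
        query_text="What does cognee know about my documents?",
        query_type=SearchType.GRAPH_COMPLETION,  # the default mode
    )
    for result in results:
        print(result)


asyncio.run(main())
```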
SearchType.GRAPH_COMPLETION is the default, so you get an LLM-backed answer plus supporting context as soon as you have data in your graph.

What Just Happened
The search call uses the default SearchType.GRAPH_COMPLETION mode to provide LLM-backed answers with supporting context from your knowledge graph. The results are returned as a list that you can iterate through and process as needed.
Parameters Reference
Most examples below assume you are inside an async function. Import helpers when you need them:
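For example, a typical set of imports for the snippets that follow; the module paths are assumptions and may differ between cognee releases:

```python
import cognee
from cognee import SearchType  # search modes; path may differ in your version
from cognee.modules.engine.models import NodeSet  # assumed path, used for node-set filtering
```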
Core Parameters
- query_text (str, required): The question or phrase you want answered.
- query_type (SearchType, optional, default: SearchType.GRAPH_COMPLETION): Switch search modes without changing your code flow. See Search Types for the complete list.
- top_k (int, optional, default: 10): Cap how many ranked results you want back.
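A small sketch combining these three parameters, assuming the imports above and an async context; the query text is just an illustrative placeholder:

```python
results = await cognee.search(
    query_text="Which products were mentioned in the Q3 report?",  # required
    query_type=SearchType.GRAPH_COMPLETION,                        # default mode
    top_k=5,                                                       # cap the ranked results
)
```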
Prompt & Generation Parameters
- system_prompt_path (str, optional, default: "answer_simple_question.txt"): Point to a prompt file packaged with your project.
- system_prompt (Optional[str]): Inline override for experiments or dynamically generated prompts.
- only_context (bool, optional, default: False): Skip LLM generation and just fetch supporting context chunks.
- use_combined_context (bool, optional, default: False): Collapse results into a single combined response when you query multiple datasets.
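For instance, fetching supporting context only, or swapping in an inline system prompt; the prompt text is a made-up placeholder and the snippet assumes the imports above:

```python
# Skip LLM generation and return only the supporting context chunks.
context_only = await cognee.search(
    query_text="What changed in the pricing model?",
    only_context=True,
)

# Override the packaged prompt file with an inline system prompt.
custom_answer = await cognee.search(
    query_text="What changed in the pricing model?",
    system_prompt="Answer in one sentence and name the dataset you used.",
)
```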
Node Sets & Filtering Parameters
These options filter the graph down to the node sets you care about. In most workflows you set both: keep node_type=NodeSet and pass one or more set names in node_name, the same labels you used when calling cognee.add(..., node_set=[...]).

- node_type (Optional[Type], optional, default: NodeSet): Controls which graph model to search. Leave this as NodeSet unless you've built a custom node model.
- node_name (Optional[List[str]]): Names of the node sets to include. Cognee treats each string as a logical bucket of memories.
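A sketch of that typical pattern, assuming the labels "finance" and "contracts" were previously passed to cognee.add(..., node_set=[...]) and that NodeSet imports from the path shown earlier:

```python
from cognee.modules.engine.models import NodeSet  # assumed import path

results = await cognee.search(
    query_text="Summarise the termination clauses.",
    node_type=NodeSet,                    # keep the default graph model
    node_name=["finance", "contracts"],   # the node_set labels used at add() time
)
```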
Interaction & History Parameters
- save_interaction (bool, optional, default: False): Persist the Q&A as a graph interaction for auditing or later review.
- last_k (Optional[int], optional, default: 1): When using SearchType.FEEDBACK, choose how many recent interactions to update with your feedback.
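A hedged sketch of saving an interaction and then attaching feedback to it with SearchType.FEEDBACK, assuming the imports shown earlier; the feedback wording is illustrative:

```python
# Persist this Q&A as a graph interaction for later review.
answer = await cognee.search(
    query_text="Who approved the vendor contract?",
    save_interaction=True,
)

# Send feedback against the most recent saved interaction.
await cognee.search(
    query_text="The last answer was correct and well sourced.",
    query_type=SearchType.FEEDBACK,
    last_k=1,  # update only the latest interaction
)
```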
Datasets & Users
- datasets (Optional[Union[list[str], str]]): Limit search to dataset names you already know.
- dataset_ids (Optional[Union[list[UUID], UUID]]): Same as datasets, but with explicit UUIDs when names collide.
- user (Optional[User]): Provide a user object when running multi-tenant flows or background jobs.
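For instance, scoping the search to named datasets and running it on behalf of a specific user; get_default_user and its import path are assumptions standing in for however your application resolves users:

```python
from cognee.modules.users.methods import get_default_user  # assumed helper path

user = await get_default_user()

results = await cognee.search(
    query_text="List the open action items.",
    datasets=["meeting_notes", "project_plans"],  # dataset names you already know
    user=user,                                    # multi-tenant flows or background jobs
)
```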
How results are shaped depends on whether backend access control is enabled and whether you request a combined context:

- Access control OFF (ENABLE_BACKEND_ACCESS_CONTROL=false), use_combined_context=False:
  - What gets searched? Cognee does one search across every dataset you passed (and any others you can access) because access control is disabled.
  - What comes back? A Python list, e.g. ['answer 1', 'answer 2'].
- Access control OFF, use_combined_context=True:
  - What gets searched? Same single search as above.
  - What comes back? A CombinedSearchResult. Because no dataset boundaries are enforced, the datasets field holds a placeholder entry labelled "all available datasets".
- Access control ON, use_combined_context=False:
  - What gets searched? Cognee validates each dataset ID, then runs the query separately against each permitted dataset.
  - What comes back? A list of dictionaries, one per dataset, for example:
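A hedged sketch of the shape that per-dataset list might take; the key names below are illustrative and not confirmed by this guide:

```python
[
    {
        "dataset_id": "9f1c...-uuid",
        "dataset_name": "meeting_notes",
        "search_result": ["answer derived from meeting_notes"],
    },
    {
        "dataset_id": "4b7a...-uuid",
        "dataset_name": "project_plans",
        "search_result": ["answer derived from project_plans"],
    },
]
```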
- Access control ON, use_combined_context=True:
  - What gets searched? Same per-dataset queries as the previous case.
  - What comes back? A single CombinedSearchResult where datasets lists each contributing dataset and context aggregates their snippets. Use this when you prefer one blended answer.
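A sketch of consuming a combined result, using only the fields this guide names (datasets and context); attribute-style access and the dataset names are assumptions:

```python
combined = await cognee.search(
    query_text="Summarise everything we know about onboarding.",
    datasets=["hr_docs", "support_tickets"],
    use_combined_context=True,
)

print(combined.context)            # the blended answer/context (assumed attribute access)
for dataset in combined.datasets:  # each contributing dataset
    print(dataset)
```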