A minimal guide to using cognee.search() to ask questions against your processed datasets. This guide shows the basic call and what each parameter does so you know which knob to turn. Before you start:
  • Complete Quickstart to understand basic operations
  • Ensure you have LLM Providers configured for LLM-backed search types
  • Run cognee.cognify(...) to build the graph before searching
  • Keep at least one dataset with read permission for the user running the search

Code in Action

import asyncio
import cognee

async def main():
    # Make sure you've already run cognee.cognify(...) so the graph has content
    answers = await cognee.search(
        query_text="What are the main themes in my data?"
    )
    for answer in answers:
        print(answer)

asyncio.run(main())
SearchType.GRAPH_COMPLETION is the default, so you get an LLM-backed answer plus supporting context as soon as you have data in your graph.

What Just Happened

The search call uses the default SearchType.GRAPH_COMPLETION mode to provide LLM-backed answers with supporting context from your knowledge graph. The results are returned as a list that you can iterate through and process as needed.

Parameters Reference

Most examples below assume you are inside an async function. Import helpers when you need them:
from cognee import SearchType
from cognee.modules.engine.models.node_set import NodeSet

Core Parameters

  • query_text (str, required): The question or phrase you want answered.
    answers = await cognee.search(query_text="Who owns the rollout plan?")
    
  • query_type (SearchType, optional, default: SearchType.GRAPH_COMPLETION): Switch search modes without changing your code flow. See Search Types for the complete list and Retrievers for how each type maps to a retriever.
    await cognee.search(
        query_text="List coding guidelines",
        query_type=SearchType.CODING_RULES,
    )
    
  • top_k (int, optional, default: 10): Cap how many ranked results you want back.
    await cognee.search(query_text="Summaries please", top_k=3)
    
  • system_prompt_path (str, optional, default: "answer_simple_question.txt"): Point to a prompt file packaged with your project.
    await cognee.search(
        query_text="Explain the roadmap in bullet points",
        system_prompt_path="prompts/bullets.txt",
    )
    
  • system_prompt (Optional[str]): Inline override for experiments or dynamically generated prompts.
    await cognee.search(
        query_text="Give me a confident answer",
        system_prompt="Answer succinctly and state confidence at the end.",
    )
    
  • only_context (bool, optional, default: False): Skip LLM generation and just fetch supporting context chunks.
    context = await cognee.search(
        query_text="What did we promise the client?",
        only_context=True,
    )
    
  • wide_search_top_k (int, optional, default: 100): Used by graph-completion retrievers to cap initial candidate retrieval before ranking. Increase for broader recall when the graph is large.
  • triplet_distance_penalty (float, optional, default: 3.5): Penalty applied in graph retrieval ranking; affects how triplet distance influences the final ranking.
  • retriever_specific_config (dict, optional): Per-retriever options. Examples: response_model for typed LLM output; max_iter for GRAPH_COMPLETION_COT; context_extension_rounds for GRAPH_COMPLETION_CONTEXT_EXTENSION. See the API reference for full keys.
  • verbose (bool, optional, default: False): When True, each result includes detailed fields (text_result, context_result, objects_result) where applicable (e.g. when access control is enabled).
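The tuning knobs above can be combined in a single call. The sketch below builds them as keyword arguments; the values are illustrative, not recommended defaults, and max_iter is the GRAPH_COMPLETION_COT option mentioned under retriever_specific_config:

```python
# Illustrative tuning for a large graph; keys mirror the parameters above.
search_kwargs = {
    "wide_search_top_k": 200,         # widen the initial candidate pool
    "triplet_distance_penalty": 2.0,  # lower penalty -> distance weighs less in ranking
    "retriever_specific_config": {"max_iter": 4},  # e.g. for GRAPH_COMPLETION_COT
    "verbose": True,                  # include text/context/objects fields
}

# Later, inside an async function:
# answers = await cognee.search(query_text="...", **search_kwargs)
```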

Node Filtering

These options narrow the search to the node sets you care about. In most workflows you set both: keep node_type=NodeSet and pass one or more set names in node_name, using the same labels you passed to cognee.add(..., node_set=[...]).
  • node_type (Optional[Type], optional, default: NodeSet): Controls which graph model to search. Leave this as NodeSet unless you’ve built a custom node model.
  • node_name (Optional[List[str]]): Names of the node sets to include. Cognee treats each string as a logical bucket of memories.
    await cognee.search(
        query_text="What discounts did TechSupply offer?",
        node_type=NodeSet,
        node_name=["vendor_conversations"],
    )
    
    await cognee.search(
        query_text="Summarize procurement rules",
        node_type=NodeSet,
        node_name=["procurement_policies", "purchase_history"],
    )
    
  • session_id (Optional[str]): Maintain conversation history across searches. Sessions are used only by GRAPH_COMPLETION, RAG_COMPLETION, and TRIPLET_COMPLETION; other modes (CHUNKS, SUMMARIES, etc.) neither read nor write session history, and batch mode does not use the session cache. When you omit session_id and caching is enabled, Cognee uses default_session and still stores the turn. When you reuse the same session_id, Cognee includes previous interactions in the LLM prompt, enabling contextual follow-up questions.
    await cognee.search(
        query_text="Where does Alice live?",
        session_id="conversation_1"
    )
    # Later, same session remembers previous context
    await cognee.search(
        query_text="What does she do for work?",
        session_id="conversation_1"  # "she" refers to Alice
    )
    
    See Sessions Guide for complete examples. To record feedback on answers, use Sessions and the Feedback System (cognee.session.add_feedback / delete_feedback).
  • datasets (Optional[Union[list[str], str]]): Limit search to dataset names you already know.
    await cognee.search(
        query_text="Key risks",
        datasets=["risk_register", "exec_summary"],
    )
    
  • dataset_ids (Optional[Union[list[UUID], UUID]]): Same as datasets, but with explicit UUIDs when names collide.
    from uuid import UUID
    await cognee.search(
        query_text="Customer feedback",
        dataset_ids=[UUID("aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee")],
    )
    
  • user (Optional[User]): Provide a user object when running multi-tenant flows or background jobs.
    from cognee.modules.users.methods import get_user
    user = await get_user(user_id)
    await cognee.search(query_text="Team OKRs", user=user)
    
    When ENABLE_BACKEND_ACCESS_CONTROL=true:
    • Result shape: Searches run only on datasets the user can access. Results are returned as a list of per-dataset objects (dataset_name, dataset_id, search_result). Use verbose=True to include text_result, context_result, and objects_result in each item.
    • If no user is given, get_default_user() is used (created if missing); errors only if this user lacks dataset permissions.
    • If datasets is not set, all datasets readable by the user are searched; errors if none are accessible or if requested datasets are forbidden.
    A PermissionDeniedError is raised unless you search as the same user that added the data, or grant access to the default user.
    When ENABLE_BACKEND_ACCESS_CONTROL=false:
    • Dataset filters (datasets, dataset_ids) are ignored — everything is searched.
    • Results come back as a plain list of answers (e.g. ["answer1", "answer2"]). If only one dataset is searched and the retriever returns a list, Cognee may unwrap one level for backwards compatibility.
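Because the result shape differs between the two modes, a small normalizing helper can keep downstream code identical. This is a hypothetical sketch, not part of cognee; it assumes per-dataset items expose a search_result field as described above, and handles both dict-style and attribute-style items:

```python
def flatten_search_results(results):
    """Flatten both result shapes into one list of answers.

    Plain strings pass through; per-dataset items (hypothetical shape,
    based on the fields documented above) are unwrapped.
    """
    flat = []
    for item in results:
        if isinstance(item, dict) and "search_result" in item:
            value = item["search_result"]
            flat.extend(value if isinstance(value, list) else [value])
        elif hasattr(item, "search_result"):
            value = item.search_result
            flat.extend(value if isinstance(value, list) else [value])
        else:
            flat.append(item)  # plain answer string
    return flat
```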

Additional examples

A full Python script that expands on the previous example is shown below.
  import asyncio
  import cognee

  async def main():
      # Start clean (optional in your app)
      await cognee.prune.prune_data()
      await cognee.prune.prune_system(metadata=True)
      # Prepare knowledge base
      await cognee.add([
          "Alice moved to Paris in 2010. She works as a software engineer.",
          "Bob lives in New York. He is a data scientist.",
          "Alice and Bob met at a conference in 2015."
      ])

      await cognee.cognify()

      # Make sure you've already run cognee.cognify(...) so the graph has content
      answers = await cognee.search(
          query_text="What are the main themes in my data?"
      )
      for answer in answers:
          print(answer)

  asyncio.run(main())
Additional examples about Search Basics are available on our GitHub.