
cognee.search()

async def search(
    query_text: str,
    query_type: SearchType = SearchType.GRAPH_COMPLETION,
    user: Optional[User] = None,
    datasets: Optional[Union[list[str], str]] = None,
    dataset_ids: Optional[Union[list[UUID], UUID]] = None,
    system_prompt_path: str = 'answer_simple_question.txt',
    system_prompt: Optional[str] = None,
    top_k: int = 10,
    node_type: Optional[Type] = NodeSet,
    node_name: Optional[List[str]] = None,
    only_context: bool = False,
    session_id: Optional[str] = None,
    wide_search_top_k: Optional[int] = 100,
    triplet_distance_penalty: Optional[float] = 3.5,
    verbose: bool = False,
    retriever_specific_config: Optional[dict] = None,
) -> List[SearchResult]

Description

Search and query the knowledge graph for insights, information, and connections. This is the final step in the Cognee workflow: it retrieves information from the processed knowledge graph. It supports multiple search modes optimized for different use cases, from simple fact retrieval to complex reasoning and code analysis.

Search Prerequisites (see the sketch after this list):
  • LLM_API_KEY: Required for GRAPH_COMPLETION and RAG_COMPLETION search types
  • Data Added: Must have data previously added via cognee.add()
  • Knowledge Graph Built: Must have processed data via cognee.cognify()
  • Dataset Permissions: User must have 'read' permission on target datasets
  • Vector Database: Must be accessible for semantic search functionality
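
For example, a minimal end-to-end run that satisfies these prerequisites might look like the sketch below (the file path and query are placeholders):

import cognee

# 1. Ingest raw data; LLM_API_KEY must be set for the LLM-backed steps
await cognee.add("path/to/document.txt")  # placeholder path

# 2. Build the knowledge graph from the ingested data
await cognee.cognify()

# 3. Query the processed graph (defaults to GRAPH_COMPLETION)
results = await cognee.search("What is this document about?")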
Search Types & Use Cases:
  • GRAPH_COMPLETION (default, recommended): Natural language Q&A using full graph context and LLM reasoning. Best for: complex questions, analysis, summaries, insights. Returns: conversational AI responses with graph-backed context.
  • RAG_COMPLETION: Traditional RAG using document chunks without graph structure. Best for: direct document retrieval, specific fact-finding. Returns: LLM responses based on relevant text chunks.
  • CHUNKS: Raw text segments that match the query semantically. Best for: finding specific passages, citations, exact content. Returns: ranked list of relevant text chunks with metadata.
  • SUMMARIES: Pre-generated hierarchical summaries of content. Best for: quick overviews, document abstracts, topic summaries. Returns: multi-level summaries from detailed to high-level.
  • CODE: Code-specific search with syntax and semantic understanding. Best for: finding functions, classes, implementation patterns. Returns: structured code information with context and relationships.
  • CYPHER: Direct graph database queries using Cypher syntax. Best for: advanced users, specific graph traversals, debugging. Returns: raw graph query results.
  • FEELING_LUCKY: Intelligently selects and runs the most appropriate search type. Best for: general-purpose queries, or when you're unsure which search type is best. Returns: the results from the automatically selected search type.
  • CHUNKS_LEXICAL: Token-based lexical chunk search (e.g., Jaccard). Best for: exact-term matching, stopword-aware lookups. Returns: ranked text chunks (optionally with scores).
Args:
  query_text: Your question or search query in natural language. Examples:
  • “What are the main themes in this research?”
  • “How do these concepts relate to each other?”
  • “Find information about machine learning algorithms”
  • “What functions handle user authentication?”
  query_type: SearchType enum specifying the search mode. Defaults to GRAPH_COMPLETION for conversational AI responses.
  user: User context for data access permissions. Uses the default user if None.
  datasets: Dataset name(s) to search within. Searches all accessible datasets if None.
  • Single dataset: "research_papers"
  • Multiple datasets: ["docs", "reports", "analysis"]
  • None: Search across all user datasets
  dataset_ids: Alternative to datasets; use specific UUID identifiers. Required for datasets not owned by the user.
  system_prompt_path: Custom system prompt file for LLM-based search types. Defaults to "answer_simple_question.txt".
  system_prompt: Inline system prompt string; overrides system_prompt_path when provided.
  top_k: Maximum number of results to return. Higher values provide more comprehensive but potentially noisier results.
  node_type: Filter results to specific entity types (for advanced filtering).
  node_name: Filter results to specific named entities (for targeted search).
  only_context: If True, return only the retrieved context without LLM completion.
  session_id: Optional session identifier for caching Q&A interactions. Defaults to 'default_session' if None.
  wide_search_top_k: Number of candidates considered in the wide search phase.
  triplet_distance_penalty: Penalty factor for triplet distance in scoring.
  verbose: If True, returns detailed result information, including a graph representation when possible.
  retriever_specific_config: Optional dictionary of additional configuration parameters specific to the retriever being used.
Returns:
  list: Search results in a format determined by query_type:
  • GRAPH_COMPLETION/RAG_COMPLETION: list of conversational AI response strings
  • CHUNKS: list of relevant text passages with source metadata
  • SUMMARIES: list of hierarchical summaries from general to specific
  • CODE: list of structured code information with context
  • FEELING_LUCKY: list of results in the format of the automatically selected search type
Performance & Optimization (see the timing sketch after this list):
  • GRAPH_COMPLETION: Slower but most intelligent, uses LLM + graph context
  • RAG_COMPLETION: Medium speed, uses LLM + document chunks (no graph traversal)
  • CHUNKS: Fastest, pure vector similarity search without LLM
  • SUMMARIES: Fast, returns pre-computed summaries
  • CODE: Medium speed, specialized for code understanding
  • FEELING_LUCKY: Variable speed; uses an LLM to select the most appropriate search type, then runs it
  • top_k: Start with 10, increase for comprehensive analysis (max 100)
  • datasets: Specify datasets to improve speed and relevance
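
As a rough illustration of these trade-offs, you could time a fast retrieval mode against the default LLM-backed mode; absolute numbers will vary with your LLM provider, hardware, and data size, and the query below is a placeholder:

import time
import cognee
from cognee import SearchType

t0 = time.monotonic()
chunks = await cognee.search("vector databases", query_type=SearchType.CHUNKS, top_k=10)
t1 = time.monotonic()
answer = await cognee.search("vector databases", query_type=SearchType.GRAPH_COMPLETION, top_k=10)
t2 = time.monotonic()

# CHUNKS skips the LLM entirely, so it should finish noticeably faster
print(f"CHUNKS: {t1 - t0:.2f}s, GRAPH_COMPLETION: {t2 - t1:.2f}s")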
Next Steps After Search:
  • Use results for further analysis or application integration
  • Combine different search types for comprehensive understanding (see the sketch after this list)
  • Export insights for reporting or downstream processing
  • Iterate with refined queries based on initial results
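
For instance, one way to combine search types, sketched here with placeholder queries: fetch raw passages for citations, then ask for a synthesized answer on the same topic.

import cognee
from cognee import SearchType

# Raw passages for citations
passages = await cognee.search(
    "renewable energy adoption",  # placeholder query
    query_type=SearchType.CHUNKS,
    top_k=5,
)

# Synthesized, graph-backed answer on the same topic
summary = await cognee.search(
    "Summarize the key findings on renewable energy adoption",
    query_type=SearchType.GRAPH_COMPLETION,
)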
Environment Variables (see the configuration sketch after these lists):
Required for LLM-based search types (GRAPH_COMPLETION, RAG_COMPLETION):
  • LLM_API_KEY: API key for your LLM provider
Optional:
  • LLM_PROVIDER, LLM_MODEL: Configure LLM for search responses
  • VECTOR_DB_PROVIDER: Must match what was used during cognify
  • GRAPH_DATABASE_PROVIDER: Must match what was used during cognify
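
A minimal configuration sketch, assuming the variables are set from Python before searching; all values below are placeholders, and the provider names must match your actual deployment:

import os

os.environ["LLM_API_KEY"] = "sk-..."            # required for GRAPH_COMPLETION / RAG_COMPLETION
os.environ["LLM_PROVIDER"] = "openai"           # optional; placeholder value
os.environ["LLM_MODEL"] = "gpt-4o-mini"         # optional; placeholder value
os.environ["VECTOR_DB_PROVIDER"] = "lancedb"    # placeholder; must match what cognify used
os.environ["GRAPH_DATABASE_PROVIDER"] = "kuzu"  # placeholder; must match what cognify used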

Parameters

query_text
str
required
Natural language search query.
query_type
SearchType
default:"SearchType.GRAPH_COMPLETION"
Type of search to perform.
user
Optional[User]
default:"None"
User performing the search.
datasets
Optional[Union[list[str], str]]
default:"None"
Dataset name(s) to search within.
dataset_ids
Optional[Union[list[UUID], UUID]]
default:"None"
Dataset UUID(s) to search within. Required for datasets not owned by the user.
system_prompt_path
str
default:"'answer_simple_question.txt'"
Path to a custom system prompt file.
system_prompt
Optional[str]
default:"None"
Inline system prompt string (overrides system_prompt_path).
top_k
int
default:"10"
Maximum number of results to return.
node_type
Optional[Type]
default:"NodeSet"
Filter results to a specific entity type.
node_name
Optional[List[str]]
default:"None"
Filter results to specific named entities.
only_context
bool
default:"False"
If true, return only the retrieved context without LLM completion.
session_id
Optional[str]
default:"None"
Session ID for conversational context tracking.
wide_search_top_k
Optional[int]
default:"100"
Number of candidates for the wide search phase.
triplet_distance_penalty
Optional[float]
default:"3.5"
Penalty factor for triplet distance in scoring.
verbose
bool
default:"False"
Include detailed retrieval metadata in results.
retriever_specific_config
Optional[dict]
default:"None"
Additional configuration for the selected retriever.

Returns

List[SearchResult]

Examples

import cognee
from cognee import SearchType

# Default graph completion search
results = await cognee.search("What is Cognee?")

# RAG-style search
results = await cognee.search(
    "How does chunking work?",
    query_type=SearchType.RAG_COMPLETION,
)

# Get raw chunks without LLM completion
results = await cognee.search(
    "knowledge graph",
    query_type=SearchType.CHUNKS,
    top_k=5,
)

# Search within specific datasets
results = await cognee.search(
    "deployment options",
    datasets=["infrastructure_docs"],
)

# Context-only mode (no LLM answer)
context = await cognee.search(
    "What are the main entities?",
    only_context=True,
)

# Session-aware conversational search
results = await cognee.search(
    "Tell me more about that",
    session_id="conversation_123",
)
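
# Additional illustrative sketches; the entity names, prompt text,
# and queries below are placeholders, not canonical values.

# Quick overview from pre-computed summaries
summaries = await cognee.search(
    "project overview",
    query_type=SearchType.SUMMARIES,
)

# Let Cognee pick the most appropriate search type
results = await cognee.search(
    "anything interesting about deployments?",
    query_type=SearchType.FEELING_LUCKY,
)

# Target specific named entities (names are placeholders)
results = await cognee.search(
    "How are these related?",
    node_name=["Alice", "Acme Corp"],
)

# Inline system prompt plus detailed retrieval metadata
results = await cognee.search(
    "List the key risks",
    system_prompt="Answer concisely using only the provided context.",
    verbose=True,
)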
See SearchType for all available search modes.