What is search
`search` lets you ask questions over everything you've ingested and cognified. Under the hood, Cognee blends vector similarity, graph structure, and LLM reasoning to return answers with context and provenance.
The big picture
- Dataset-aware: searches run against one or more datasets you can read (requires `ENABLE_BACKEND_ACCESS_CONTROL=true`)
- Multiple modes: from simple chunk lookup to graph-aware Q&A
- Hybrid retrieval: vectors find relevant pieces; graphs provide structure; LLMs compose answers
- Safe by default: permissions are checked before any retrieval
- Observability: telemetry is emitted for query start/completion
Dataset scoping requires specific configuration. See the permissions system documentation for details on access-control requirements and supported database setups.
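Conceptually, dataset scoping is a read-permission filter applied before any retrieval happens, failing closed on anything the user cannot read. A minimal sketch (the function and data shapes here are illustrative, not Cognee's actual API):

```python
# Illustrative sketch of dataset-scoped access control, not Cognee's real API.

class PermissionDeniedError(Exception):
    pass

def resolve_datasets(requested: list[str], readable: set[str]) -> list[str]:
    """Return the requested datasets the user may read; fail closed otherwise."""
    denied = [name for name in requested if name not in readable]
    if denied:
        raise PermissionDeniedError(f"No read access to: {denied}")
    return requested

# A user who can read 'docs' and 'wiki' may search both of them.
print(resolve_datasets(["docs", "wiki"], {"docs", "wiki", "code"}))
```

The key property is that the permission check happens before any vector or graph lookup, so an unauthorized query never touches the data.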
Where search fits
Use `search` after you've run `.add` and `.cognify`. At that point, your dataset has chunks, summaries, embeddings, and a knowledge graph, so queries can leverage both similarity and structure.
How it works (conceptually)
- Scope & permissions: resolve target datasets (by name or id) and enforce read access.
- Mode dispatch: pick a search mode (default: graph-aware completion) and route to its retriever.
- Retrieve → (optional) generate: collect context via vectors and/or graph traversal; some modes then ask an LLM to compose a final answer.
- Return results: depending on mode, answers, chunks/summaries with metadata, graph records, Cypher results, or code contexts.
GRAPH_COMPLETION (default)
Graph-aware question answering.
- What it does: Finds relevant graph triplets using vector hints across indexed fields, resolves them into readable context, and asks an LLM to answer your question grounded in that context.
- Why it’s useful: Combines fuzzy matching (vectors) with precise structure (graph) so answers reflect relationships, not just nearby text.
- Typical output: A natural-language answer with references to the supporting graph context.
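The "vector hints select graph triplets" idea can be shown with a toy example: rank triplets by similarity to the query embedding, then render the winners into readable context lines for the LLM prompt. The embeddings and triplets below are made up for illustration:

```python
# Illustrative sketch of graph-aware retrieval: vector hints rank triplets,
# which are resolved into readable context for an LLM. Not Cognee's internals.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Each triplet carries a (toy, 2-dimensional) embedding of its text.
triplets = [
    (("Cognee", "builds", "knowledge graphs"), [0.9, 0.1]),
    (("Paris", "is capital of", "France"), [0.1, 0.9]),
]

def graph_context(query_vec, top_k=1):
    ranked = sorted(triplets, key=lambda t: cosine(query_vec, t[1]), reverse=True)
    # Resolve triplets into readable lines the LLM can ground its answer in.
    return [" ".join(t[0]) for t in ranked[:top_k]]

print(graph_context([1.0, 0.0]))  # the triplet closest to the query vector
```

In the real mode the resolved context is then passed to an LLM along with the question, so the answer reflects actual graph relationships rather than just nearby text.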
RAG_COMPLETION
Retrieve-then-generate over text chunks.
- What it does: Pulls top-k chunks via vector search, stitches a context window, then asks an LLM to answer.
- When to use: You want fast, text-only RAG without graph structure.
- Output: An LLM answer grounded in retrieved chunks.
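The retrieve-then-generate loop is the classic RAG shape: rank chunks by similarity, stitch the top-k into a context window, and hand it to an LLM. A toy sketch with made-up scores and a stubbed model call:

```python
# Toy retrieve-then-generate sketch: rank chunks by (pretend) similarity,
# stitch a context window, then call a stubbed LLM. Not Cognee's internals.

chunks = {
    "Cognee ingests documents into datasets.": 0.91,
    "cognify builds the knowledge graph.": 0.87,
    "Bananas are yellow.": 0.12,
}

def fake_llm(context: str, question: str) -> str:
    # Stand-in for a real LLM call; just reports how much context it was given.
    return f"Answer grounded in {len(context.splitlines())} chunk(s)"

def rag_answer(question: str, top_k: int = 2) -> str:
    ranked = sorted(chunks, key=chunks.get, reverse=True)[:top_k]  # top-k by score
    context_window = "\n".join(ranked)                             # stitched context
    return fake_llm(context_window, question)

print(rag_answer("What does cognify do?"))
```

Because no graph traversal is involved, this path is typically faster than the graph-aware modes, at the cost of ignoring relationships between chunks.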
CHUNKS
Direct chunk retrieval.
- What it does: Returns the most similar text chunks to your query via vector search.
- When to use: You want raw passages/snippets to display or post-process.
- Output: Chunk objects with metadata.
SUMMARIES
Search over precomputed summaries.
- What it does: Vector search on `TextSummary` content for concise, high-signal hits.
- When to use: You prefer short summaries instead of full chunks.
- Output: Summary objects with provenance.
GRAPH_SUMMARY_COMPLETION
Graph-aware summary answering.
- What it does: Builds graph context like GRAPH_COMPLETION, then condenses it before answering.
- When to use: You want a tighter, summary-first response.
- Output: A concise answer grounded in graph context.
GRAPH_COMPLETION_COT
Chain-of-thought over the graph.
- What it does: Iterative rounds of graph retrieval and LLM reasoning to refine the answer.
- When to use: Complex questions that benefit from stepwise reasoning.
- Output: A refined answer produced through multiple reasoning steps.
GRAPH_COMPLETION_CONTEXT_EXTENSION
Iterative context expansion.
- What it does: Starts with initial graph context, lets the LLM suggest follow-ups, fetches more graph context, repeats.
- When to use: Open-ended queries that need broader exploration.
- Output: An answer assembled after expanding the relevant subgraph.
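The expansion loop resembles a bounded breadth-first walk: start from seed nodes, and each round widen the frontier. In the sketch below, neighbor expansion stands in for the LLM's follow-up suggestions; the graph and loop are illustrative, not Cognee's internals:

```python
# Toy sketch of iterative context extension: start from a seed node and, in
# each round, expand the frontier (standing in for LLM-suggested follow-ups)
# with its graph neighbors. Graph and loop are illustrative only.

graph = {
    "cognee": ["search", "cognify"],
    "search": ["modes"],
    "cognify": ["pipeline"],
    "modes": [],
    "pipeline": [],
}

def extended_context(seed: str, max_rounds: int = 2) -> set[str]:
    context: set[str] = set()
    frontier = [seed]
    for _ in range(max_rounds):
        context.update(frontier)
        # In the real mode an LLM proposes follow-ups; here we take neighbors.
        frontier = [nbr for node in frontier for nbr in graph.get(node, [])]
        if not frontier:
            break
    return context

print(sorted(extended_context("cognee")))
```

The round limit matters: it bounds how far the subgraph grows, trading breadth of exploration for latency.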
NATURAL_LANGUAGE
Natural language → Cypher → execution.
- What it does: Infers a Cypher query from your question using the graph schema, runs it, returns the results.
- When to use: You want structured graph answers without writing Cypher.
- Output: Executed graph results.
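The translation step can be illustrated with a toy template filler; a real implementation would prompt an LLM with the graph schema, whereas this sketch just extracts an entity and emits a hypothetical Cypher query:

```python
# Illustrative sketch of natural-language-to-Cypher. A real implementation
# prompts an LLM with the schema; this toy version fills a template for
# "what is connected to X?" style questions. The query shape is hypothetical.

def question_to_cypher(question: str) -> str:
    entity = question.rstrip("?").split()[-1]  # naive entity extraction
    return (
        f"MATCH (n {{name: '{entity}'}})-[r]->(m) "
        "RETURN type(r) AS relation, m.name AS target"
    )

print(question_to_cypher("What is connected to cognee?"))
```

The generated query is then executed against the graph database, so the results are structured records rather than LLM-composed prose.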
CYPHER
Run Cypher directly.
- What it does: Executes your Cypher query against the graph database.
- When to use: You know the schema and want full control.
- Output: Raw query results.
CODE
Code-focused retrieval.
- What it does: Interprets your intent (files/snippets), searches code embeddings and related graph nodes, and assembles relevant source.
- When to use: You're searching a codebase that has been indexed by Cognee.
- Output: Structured code contexts and related graph information.
FEELING_LUCKY
Automatic mode selection.
- What it does: Uses an LLM to pick the most suitable search mode for your query, then runs it.
- When to use: You’re not sure which mode fits best.
- Output: Results from the selected mode.
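The selection step can be sketched with simple rules standing in for the LLM's judgment; the heuristics below are made up for illustration and are not how Cognee actually classifies queries:

```python
# Toy sketch of automatic mode selection: an LLM (stubbed here with keyword
# rules) picks a search mode, and the query is then routed to that mode.

def pick_mode(query: str) -> str:
    q = query.lower()
    if "match (" in q:                                   # looks like raw Cypher
        return "CYPHER"
    if any(word in q for word in ("function", "class", "bug")):
        return "CODE"                                    # code-flavored question
    if q.endswith("?"):
        return "GRAPH_COMPLETION"                        # default Q&A path
    return "CHUNKS"                                      # fall back to raw passages

print(pick_mode("Where is the retry function defined?"))
```

Once a mode is chosen, the query runs exactly as if you had selected that mode yourself, so the output shape depends on the winner.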
FEEDBACK
Store feedback on recent interactions.
- What it does: Records user feedback on recent answers and links it to the associated graph elements for future tuning.
- When to use: Closing the loop on quality and relevance.
- Output: A feedback record tied to recent interactions.