What is search
Search lets you ask questions over everything you’ve ingested and cognified. Under the hood, Cognee blends vector similarity, graph structure, and LLM reasoning to return answers with context and provenance.
The big picture
- Dataset-aware: searches run against one or more datasets you can read (requires ENABLE_BACKEND_ACCESS_CONTROL=true)
- Multiple modes: from simple chunk lookup to graph-aware Q&A
- Hybrid retrieval: vectors find relevant pieces; graphs provide structure; LLMs compose answers
- Conversational memory: for GRAPH_COMPLETION, RAG_COMPLETION, and TRIPLET_COMPLETION, use session_id to maintain conversation history across searches (requires caching enabled). When caching is on, omitting session_id uses default_session and still stores history. Other search types do not use session history.
- Safe by default: permissions are checked before any retrieval
- Observability: telemetry is emitted for query start/completion
Where search fits
Use search after you’ve run .add and .cognify.
At that point, your dataset has chunks, summaries, embeddings, and a knowledge graph—so queries can leverage both similarity and structure.
How it works (conceptually)
1. Scope & permissions: resolve target datasets (by name or id) and enforce read access.
2. Mode dispatch: pick a search mode (default: graph-aware completion) and route to its retriever.
3. Retrieve → (optional) generate: collect context via vectors and/or graph traversal; some modes then ask an LLM to compose a final answer.
4. Return results: depending on mode, answers, chunks/summaries with metadata, graph records, Cypher results, or code contexts.
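The four steps above can be sketched as plain Python. Every name here is an illustrative stub, not part of Cognee's actual internals; the sketch only shows the control flow:

```python
# Hypothetical sketch of the search flow; all function names are invented
# for illustration and are not Cognee's API.

def resolve_datasets(names, readable):
    """Step 1: scope & permissions; keep only datasets the caller can read."""
    allowed = [n for n in names if n in readable]
    if len(allowed) != len(names):
        raise PermissionError("read access denied for some datasets")
    return allowed

def dispatch_mode(mode, retrievers, default="GRAPH_COMPLETION"):
    """Step 2: mode dispatch; route to the retriever for the chosen mode."""
    return retrievers[mode or default]

def run_search(query, mode, names, readable, retrievers, only_context=False):
    datasets = resolve_datasets(names, readable)      # step 1
    retriever = dispatch_mode(mode, retrievers)       # step 2
    context = retriever["retrieve"](query, datasets)  # step 3: retrieve
    if only_context:
        return context                                # step 4: raw context
    return retriever["generate"](query, context)      # step 3b: optional LLM pass

# Toy retriever: "retrieval" is substring matching, "generation" echoes context.
retrievers = {
    "GRAPH_COMPLETION": {
        "retrieve": lambda q, ds: [c for c in ["alice knows bob", "bob likes go"] if q in c],
        "generate": lambda q, ctx: f"answer({q}) from {len(ctx)} item(s)",
    }
}

print(run_search("bob", None, ["demo"], {"demo"}, retrievers))
```

Passing only_context=True short-circuits the LLM step, mirroring the real pipeline's only_context flag.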
Retrievers
Each search type is handled by a retriever. The pipeline is: get_retrieved_objects → get_context_from_objects → get_completion_from_context (skipped when only_context=True).
| Search type | Retriever |
|---|---|
| GRAPH_COMPLETION | GraphCompletionRetriever |
| RAG_COMPLETION | CompletionRetriever |
| CHUNKS | ChunksRetriever |
| SUMMARIES | SummariesRetriever |
| GRAPH_SUMMARY_COMPLETION | GraphSummaryCompletionRetriever |
| GRAPH_COMPLETION_COT | GraphCompletionCotRetriever |
| GRAPH_COMPLETION_CONTEXT_EXTENSION | GraphCompletionContextExtensionRetriever |
| TRIPLET_COMPLETION | TripletRetriever |
| CHUNKS_LEXICAL | JaccardChunksRetriever |
| CODING_RULES | CodingRulesRetriever |
| TEMPORAL | TemporalRetriever |
| CYPHER | CypherSearchRetriever |
| NATURAL_LANGUAGE | NaturalLanguageRetriever |
Register a custom retriever with use_retriever(SearchType, RetrieverClass); the class must implement the same three-step interface (BaseRetriever). See the API reference for BaseRetriever and register_retriever.
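A custom retriever only needs the three-step interface named above. Here is a minimal self-contained sketch; the method signatures are assumptions, so consult the BaseRetriever API reference for the real ones before registering:

```python
# Illustrative retriever implementing the three-step pipeline:
# get_retrieved_objects -> get_context_from_objects -> get_completion_from_context.
# Signatures are assumed for the example; check the BaseRetriever reference.

class KeywordRetriever:
    def __init__(self, corpus):
        self.corpus = corpus

    def get_retrieved_objects(self, query):
        # Step 1: fetch candidate objects (here: naive keyword matching).
        return [doc for doc in self.corpus if query.lower() in doc.lower()]

    def get_context_from_objects(self, objects):
        # Step 2: resolve objects into a single context string.
        return "\n".join(objects)

    def get_completion_from_context(self, query, context):
        # Step 3: normally an LLM call; stubbed here.
        return f"Q: {query}\nContext:\n{context}"

retriever = KeywordRetriever(["Cognee builds graphs", "Graphs aid search"])
objs = retriever.get_retrieved_objects("graph")
ctx = retriever.get_context_from_objects(objs)
print(retriever.get_completion_from_context("graph", ctx))

# Registration would then look roughly like (requires cognee; shown as a comment):
# use_retriever(SearchType.CHUNKS, KeywordRetriever)
```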
Multi-query (batch)
GraphCompletionRetriever, GraphCompletionCotRetriever, and GraphCompletionContextExtensionRetriever support batch mode: pass query_batch (a non-empty list of strings) instead of query. You get one result per query; session cache is not used in batch mode. The public cognee.search() API accepts only a single query_text; batch is available when you use the retrievers directly (e.g. in custom pipelines).
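Conceptually, batch mode runs the same pipeline once per query and collects one result per entry. A schematic stand-in (plain Python, not the actual retriever code):

```python
# Schematic batch dispatch: one result per query, no session cache involved.

def answer_one(query):
    # Stand-in for a single retrieve -> generate pass.
    return f"answer:{query}"

def answer_batch(query_batch):
    if not query_batch:
        raise ValueError("query_batch must be a non-empty list of strings")
    return [answer_one(q) for q in query_batch]

print(answer_batch(["who is alice?", "where is acme?"]))
```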
GRAPH_COMPLETION (default)
Graph-aware question answering.
- What it does: Finds relevant graph triplets using vector hints across indexed fields, resolves them into readable context, and asks an LLM to answer your question grounded in that context.
- Why it’s useful: Combines fuzzy matching (vectors) with precise structure (graph) so answers reflect relationships, not just nearby text.
- Typical output: A natural-language answer with references to the supporting graph context.
RAG_COMPLETION
Retrieve-then-generate over text chunks.
- What it does: Pulls top-k chunks via vector search, stitches a context window, then asks an LLM to answer.
- When to use: You want fast, text-only RAG without graph structure.
- Output: An LLM answer grounded in retrieved chunks.
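The retrieve-then-generate loop can be illustrated with toy embeddings and a stubbed LLM. The vectors, helper names, and scoring here are invented for the example; Cognee's actual vector store does the real ranking:

```python
import math

def cosine(a, b):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k_chunks(query_vec, chunks, k=2):
    # Vector search: rank chunks by similarity to the query embedding.
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c["vec"]), reverse=True)
    return ranked[:k]

def rag_answer(query, query_vec, chunks, llm=lambda p: f"stub-answer({len(p)} chars)"):
    # Stitch a context window from the top-k chunks, then ask the (stubbed) LLM.
    context = "\n".join(c["text"] for c in top_k_chunks(query_vec, chunks))
    prompt = f"Answer '{query}' using:\n{context}"
    return llm(prompt)

chunks = [
    {"text": "Paris is the capital of France.", "vec": [1.0, 0.0]},
    {"text": "The Alps span several countries.", "vec": [0.0, 1.0]},
    {"text": "France borders Spain.", "vec": [0.9, 0.1]},
]
best = top_k_chunks([1.0, 0.0], chunks, k=2)
print([c["text"] for c in best])
```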
CHUNKS
Direct chunk retrieval.
- What it does: Returns the most similar text chunks to your query via vector search.
- When to use: You want raw passages/snippets to display or post-process.
- Output: Chunk objects with metadata.
SUMMARIES
Search over precomputed summaries.
- What it does: Vector search on TextSummary content for concise, high-signal hits.
- When to use: You prefer short summaries instead of full chunks.
- Output: Summary objects with provenance.
GRAPH_SUMMARY_COMPLETION
Graph-aware summary answering.
- What it does: Builds graph context like GRAPH_COMPLETION, then condenses it before answering.
- When to use: You want a tighter, summary-first response.
- Output: A concise answer grounded in graph context.
GRAPH_COMPLETION_COT
Chain-of-thought over the graph.
- What it does: Iterative rounds of graph retrieval and LLM reasoning to refine the answer.
- When to use: Complex questions that benefit from stepwise reasoning.
- Output: A refined answer produced through multiple reasoning steps.
GRAPH_COMPLETION_CONTEXT_EXTENSION
Iterative context expansion.
- What it does: Starts with initial graph context, lets the LLM suggest follow-ups, fetches more graph context, repeats.
- When to use: Open-ended queries that need broader exploration.
- Output: An answer assembled after expanding the relevant subgraph.
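Iterative expansion can be pictured as repeated neighbor fetches over an adjacency map. The toy graph and loop below are illustrative only; the real retriever asks the LLM which follow-ups to pursue:

```python
# Toy iterative context extension: start from seed nodes, expand neighbors,
# repeat for a few rounds. A real implementation would let the LLM choose
# which frontier nodes to follow instead of expanding all of them.

graph = {
    "alice": ["acme"],
    "acme": ["berlin"],
    "berlin": ["germany"],
}

def extend_context(seeds, rounds=2):
    context = set(seeds)
    frontier = set(seeds)
    for _ in range(rounds):
        frontier = {n for node in frontier for n in graph.get(node, [])} - context
        if not frontier:
            break
        context |= frontier
    return context

print(sorted(extend_context(["alice"])))
```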
NATURAL_LANGUAGE
Natural language to Cypher to execution.
- What it does: Infers a Cypher query from your question using the graph schema, runs it, returns the results.
- When to use: You want structured graph answers without writing Cypher.
- Output: Executed graph results.
CYPHER
Run Cypher directly.
- What it does: Executes your Cypher query against the graph database.
- When to use: You know the schema and want full control.
- Output: Raw query results.
CYPHER and NATURAL_LANGUAGE are disabled when ALLOW_CYPHER_QUERY=false (environment variable).
CODING_RULES
Code-focused retrieval (coding rules / codebase search).
- What it does: Retrieves rules or code context from the coding_agent_rules nodeset and returns structured code information.
- When to use: Codebases or coding guidelines indexed by Cognee (e.g. via memify).
- Output: Structured code contexts and related graph information.
- Prereq: The coding_agent_rules nodeset must be populated (e.g. via memify).
TRIPLET_COMPLETION
Triple-based retrieval with LLM completion (no full graph traversal).
- What it does: Retrieves graph triplets by vector similarity, resolves them to text, and asks an LLM to answer.
- When to use: You want triplet-level context without full graph expansion.
- Output: An LLM answer grounded in retrieved triplets.
- Prereq: Triplet embeddings must exist. Set TRIPLET_EMBEDDING=true before running cognify, or run the memify pipeline create_triplet_embeddings (the retriever uses the Triplet_text collection).
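Resolving triplets into LLM-readable text is essentially string formatting over (subject, predicate, object) tuples. A sketch of that step (the formatting convention is made up; the real retriever pulls triplets from the Triplet_text collection by vector similarity):

```python
# Illustrative triplet-to-text resolution for building LLM context.

def triplets_to_context(triplets):
    return "\n".join(f"{s} --{p}--> {o}" for s, p, o in triplets)

triplets = [
    ("Alice", "works_at", "Acme"),
    ("Acme", "located_in", "Berlin"),
]
print(triplets_to_context(triplets))
```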
CHUNKS_LEXICAL
Lexical (keyword-style) chunk search.
- What it does: Returns chunks that match your query using token-based similarity (e.g. Jaccard), not semantic embeddings.
- When to use: Exact-term or keyword-style lookups; stopword-aware search.
- Output: Ranked text chunks, optionally with scores.
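Jaccard similarity over token sets, as the JaccardChunksRetriever name suggests, can be sketched as follows (the tokenization here is a naive whitespace split for illustration):

```python
def jaccard(a_tokens, b_tokens):
    # |intersection| / |union| of the two token sets.
    a, b = set(a_tokens), set(b_tokens)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def lexical_rank(query, chunks):
    # Score every chunk against the query tokens, highest first.
    q = query.lower().split()
    scored = [(jaccard(q, c.lower().split()), c) for c in chunks]
    return sorted(scored, key=lambda t: t[0], reverse=True)

chunks = ["graph search with cognee", "vector embeddings only", "cognee graph basics"]
print(lexical_rank("cognee graph", chunks)[0][1])
```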
TEMPORAL
Time-aware retrieval.
- What it does: Retrieves and ranks content by temporal relevance (dates, events) and answers with time context.
- When to use: Queries about “before/after X”, “in 2020”, or event timelines.
- Output: An answer grounded in time-filtered graph context. See Time-awareness for setup.
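At its core, time-aware retrieval constrains candidates to a parsed time window before ranking. A toy sketch using the standard library (the events and helper are invented for the example):

```python
from datetime import date

events = [
    {"text": "Product launch", "when": date(2019, 5, 1)},
    {"text": "Series A round", "when": date(2020, 3, 15)},
    {"text": "Office move", "when": date(2021, 8, 2)},
]

def in_window(events, start, end):
    # Keep only events whose date falls inside the query's time window.
    return [e["text"] for e in events if start <= e["when"] <= end]

print(in_window(events, date(2020, 1, 1), date(2020, 12, 31)))
```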
FEELING_LUCKY
Automatic mode selection.
- What it does: Uses an LLM to pick the most suitable search mode for your query, then runs it.
- When to use: You’re not sure which mode fits best.
- Output: Results from the selected mode.
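Automatic mode selection is a classification step in front of dispatch. As a crude stand-in for the LLM classifier, a keyword heuristic (the rules below are entirely invented; the real selector is an LLM, not keyword matching):

```python
# Stand-in for the LLM mode picker: keyword heuristics, illustrative only.

def pick_mode(query):
    q = query.lower()
    if q.startswith("match"):
        return "CYPHER"
    if any(tok in q for tok in ("before", "after", "timeline")):
        return "TEMPORAL"
    return "GRAPH_COMPLETION"

print(pick_mode("What happened before 2020?"))
```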
Feedback is handled via Sessions and the Feedback System: use cognee.session.add_feedback and cognee.session.delete_feedback. See the Sessions Guide and Feedback System for full details.