cognee.memify()

async def memify(
    extraction_tasks: Union[List[Task], List[str]] = None,
    enrichment_tasks: Union[List[Task], List[str]] = None,
    data: Optional[Any] = None,
    dataset: Union[str, UUID] = 'main_dataset',
    user: User = None,
    node_type: Optional[Type] = NodeSet,
    node_name: Optional[List[str]] = None,
    vector_db_config: Optional[dict] = None,
    graph_db_config: Optional[dict] = None,
    run_in_background: bool = False,
)

Description

The enrichment pipeline in Cognee. memify() can work with an already built graph: if no data is provided, the existing knowledge graph is used as input; alternatively, custom data can be supplied and processed with the provided extraction and enrichment tasks. The tasks and data are arranged into a Cognee pipeline that executes graph enrichment or creation. Like cognify(), it analyzes content, extracts entities and relationships, and creates semantic connections for enhanced search and reasoning, but memify() is aimed at refining an existing graph rather than initial ingestion.

Data passed via the data argument is forwarded to the first extraction task as its input; if no data is provided, the whole graph (or the subgraph selected by node_name/node_type) is forwarded instead. With run_in_background=True the call returns immediately and processing continues asynchronously; background mode is recommended for large datasets (>100MB), and the pipeline_run_id in the return value can be used to monitor progress.

Parameters

extraction_tasks
Union[List[Task], List[str]]
default: None
List of Task objects or task names for graph/data extraction.
enrichment_tasks
Union[List[Task], List[str]]
default: None
List of Task objects or task names for graph enrichment.
data
Optional[Any]
default: None
Data to ingest; forwarded to the first extraction task as input. If not provided, the pipeline operates on the existing knowledge graph.
dataset
Union[str, UUID]
default: 'main_dataset'
Dataset name or UUID to operate on.
user
User
default: None
User context for authentication and data access; the default user is used if None.
node_type
Optional[Type]
default: NodeSet
Filter the graph to specific entity types. Applies when no data is provided.
node_name
Optional[List[str]]
default: None
Filter the graph to specific named entities. Applies when no data is provided.
vector_db_config
Optional[dict]
default: None
Override the vector database configuration used for embeddings storage.
graph_db_config
Optional[dict]
default: None
Override the graph database configuration used for relationship storage.
run_in_background
bool
default: False
If True, start processing asynchronously and return immediately; if False, wait for completion. Background mode is recommended for large datasets (>100MB); use the returned pipeline_run_id to monitor progress.
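
Because run_in_background=True returns before processing finishes, callers need their own completion check. A minimal, generic polling helper is sketched below; the get_status callable is a stand-in assumption for whatever status API your Cognee version exposes for a given pipeline_run_id, not a confirmed Cognee function:

```python
import asyncio

async def wait_for_pipeline(get_status, poll_interval: float = 0.5,
                            timeout: float = 60.0) -> str:
    """Poll get_status() until it reports a terminal state or we time out.

    get_status is a hypothetical async callable returning a status string
    such as "running", "completed", or "failed" for one pipeline run.
    """
    elapsed = 0.0
    while elapsed < timeout:
        status = await get_status()
        if status in ("completed", "failed"):
            return status
        await asyncio.sleep(poll_interval)
        elapsed += poll_interval
    raise TimeoutError("pipeline did not finish within the timeout")
```

In practice you would call wait_for_pipeline after a backgrounded memify() run, passing a closure over the returned pipeline_run_id.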

How memify() differs from cognify()

Aspect       cognify()                              memify()
Purpose      Build knowledge graph from raw data    Enrich an existing graph
Input        Raw text/files                         Existing graph or new data
Pipeline     Fixed (chunk → extract → build)        Customizable extraction + enrichment tasks
Use case     Initial processing                     Iterative refinement, entity consolidation

Examples

import cognee

# Enrich existing graph with default tasks
await cognee.memify()

# Enrich a specific dataset
await cognee.memify(dataset="my_dataset")

# Custom extraction and enrichment
from cognee.modules.pipelines import Task

await cognee.memify(
    extraction_tasks=[my_extractor_task],
    enrichment_tasks=[my_enrichment_task],
    dataset="my_dataset",
)

# Restrict enrichment to specific named entities in the graph
await cognee.memify(node_name=["Person", "Organization"])
See the Memify guides for detailed walkthroughs.
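
The custom tasks passed above are built from async callables. As a sketch of what such callables might look like (the function names and data shapes here are illustrative assumptions; only the pattern of an extraction step feeding an enrichment step is implied by the memify() signature):

```python
import asyncio

# Hypothetical extraction callable: receives the raw input data that
# memify() forwards to the first extraction task.
async def extract_entities(data: str) -> list[dict]:
    # Naive extraction: treat capitalized words as entity candidates.
    return [{"name": w} for w in data.split() if w[:1].isupper()]

# Hypothetical enrichment callable: consumes the extraction output.
async def deduplicate_entities(entities: list[dict]) -> list[dict]:
    # Collapse duplicate entity names, keeping the first occurrence.
    seen: dict = {}
    for entity in entities:
        seen.setdefault(entity["name"], entity)
    return list(seen.values())

async def main() -> list[dict]:
    extracted = await extract_entities("Alice met Bob and Alice again")
    return await deduplicate_entities(extracted)

if __name__ == "__main__":
    print(asyncio.run(main()))
```

Callables like these would then be wrapped in Task objects (from cognee.modules.pipelines) and passed as extraction_tasks and enrichment_tasks.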