What You’ll Learn
In this tutorial, you’ll:
- Organize memory with node sets and apply filters during retrieval
- Define your data model using ontology support
- Enhance memory with contextual enrichment layers
- Explore stored knowledge with built-in graph visualization
- Search smarter by combining vector similarity with graph traversal
- Refine results through interactive search and feedback
Example Use Case
In this example, you will use a Cognee-powered Coding Assistant to get context-aware coding help. You can open this example in a Google Colab notebook and run the steps shown below to build your Cognee memory interactively.
Prerequisites
- OpenAI API key (or another supported LLM provider)
Cognee uses OpenAI’s GPT-5 model by default. Note that the OpenAI free tier does not meet the rate-limit requirements. Please refer to our LLM providers documentation to use another provider.
Setup
First, let’s set up the environment and import the necessary modules.
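A minimal setup sketch is shown below. The LLM_API_KEY variable name follows Cognee’s .env convention, but double-check it for your version and provider, and replace the placeholder with your own key:

```python
# Install Cognee first:  pip install cognee
import os

# Cognee reads LLM credentials from environment variables (or a .env file).
# Replace the placeholder with your own key, or configure another provider
# as described in the LLM providers documentation.
os.environ["LLM_API_KEY"] = "your-openai-api-key"

import cognee  # imported after the key is set so the default config picks it up
```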
Utility Functions Setup
Create a utility class to handle file downloads and visualization helpers:
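The notebook bundles these helpers in a small utility module. Here is a minimal sketch, written as plain functions for brevity; the asset URLs, file names, and data folder are placeholders rather than the tutorial’s real assets, and the original implementation may differ:

```python
import json
import urllib.request
from pathlib import Path

DATA_DIR = Path("data")

# Placeholder asset list: names mirror the tutorial, URLs are not the real ones.
REMOTE_ASSETS = {
    "developer_intro.md": "https://example.com/developer_intro.md",
    "human_agent_conversations.json": "https://example.com/human_agent_conversations.json",
    "python_zen_principles.md": "https://example.com/python_zen_principles.md",
    "ontology.owl": "https://example.com/ontology.owl",
}

def download_remote_assets() -> list[Path]:
    """Download each sample file into DATA_DIR, skipping files that already exist."""
    DATA_DIR.mkdir(parents=True, exist_ok=True)
    paths = []
    for filename, url in REMOTE_ASSETS.items():
        target = DATA_DIR / filename
        if not target.exists():  # prevents redundant downloads
            urllib.request.urlretrieve(url, target)
        paths.append(target)
    return paths

def preview_downloaded_assets(max_chars: int = 300) -> None:
    """Print a short summary and a content preview for every downloaded file."""
    for path in sorted(DATA_DIR.iterdir()):
        text = path.read_text(encoding="utf-8", errors="replace")
        print(f"\n=== {path.name} ({path.stat().st_size} bytes) ===")
        if path.suffix == ".json":
            print(f"JSON document with {len(json.loads(text))} top-level entries")
        print(text[:max_chars])
```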
Create Sample Data to Ingest into Memory
In this example, we’ll use a Python developer scenario. The data sources we’ll ingest into Cognee include:
- A short introduction about the developer (developer_intro)
- A conversation between the developer and a coding agent (human_agent_conversations)
- The Zen of Python principles (python_zen_principles)
- A basic ontology file with structured data about common technologies (ontology)
Prepare the Sample Data
Fetch the sample files with the download_remote_assets() function, as shown after this list. This helper:
- Handles multiple file types (JSON, Markdown, ontology)
- Creates the required folders automatically
- Prevents redundant downloads
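A typical call, using the helper sketched above, is just:

```python
# Download the sample files into the data folder
asset_paths = download_remote_assets()
print(f"Downloaded {len(asset_paths)} files")
```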
Review the Structure and Content of Downloaded Data
Next, let’s inspect the data we just downloaded. Use preview_downloaded_assets() to quickly summarize and preview each file’s structure and contents before Cognee processes them.
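For example:

```python
# Summarize and preview each downloaded file before ingestion
preview_downloaded_assets()
```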
Reset Memory and Add Structured Data
Start by resetting Cognee’s memory using prune() to ensure a clean, reproducible run. Then, use add() to load your data into dedicated node sets for organized memory management.
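A minimal sketch of this step, assuming the file paths from the download step and illustrative node-set names; prune_data(), prune_system(), and add() are standard Cognee calls, while the node_set parameter is available in recent releases:

```python
import asyncio
import cognee

async def reset_and_add():
    # Wipe previously stored data and system metadata for a reproducible run
    await cognee.prune.prune_data()
    await cognee.prune.prune_system(metadata=True)

    # Load each source into its own node set (node-set names are illustrative)
    await cognee.add("data/developer_intro.md", node_set=["developer"])
    await cognee.add("data/human_agent_conversations.json", node_set=["conversations"])
    await cognee.add("data/python_zen_principles.md", node_set=["principles"])

asyncio.run(reset_and_add())
```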
Configure the Ontology and Build a Knowledge Graph
Set the ontology file path, then run cognify() to transform all data into a knowledge graph backed by embeddings. Cognee automatically loads the ontology configuration from the ONTOLOGY_FILE_PATH environment variable.
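For example (the ontology path below is a placeholder; set the variable before calling cognify() so Cognee picks it up):

```python
import asyncio
import os
import cognee

# Point Cognee at the downloaded ontology file (placeholder path)
os.environ["ONTOLOGY_FILE_PATH"] = "data/ontology.owl"

# Build the embedding-backed knowledge graph from everything added so far
asyncio.run(cognee.cognify())
```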
Visualize and Inspect the Graph Before and After Enrichment
Generate HTML visualizations of your knowledge graph to see how Cognee processed the data. First, visualize the initial graph structure. Then, use memify() to enhance the knowledge graph, adding deeper semantic connections and improving relationships between concepts. Finally, generate a second visualization to compare the enriched graph with the original.
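A sketch of these three steps, assuming visualize_graph() and memify() are exposed at the package root as in recent Cognee releases; the output file names are illustrative:

```python
import asyncio
import cognee

async def visualize_and_enrich():
    # Snapshot of the graph right after cognify()
    await cognee.visualize_graph("./graph_before_memify.html")

    # Enrich the graph with deeper semantic connections
    await cognee.memify()

    # Second snapshot to compare against the first
    await cognee.visualize_graph("./graph_after_memify.html")

asyncio.run(visualize_and_enrich())
```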
Query Cognee Memory with Natural Language
Run cross-document searches to connect information across multiple data sources. Then, perform filtered searches within specific node sets to focus on targeted context.
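For example (the query text is illustrative, and the SearchType import path and node-set filter arguments can vary between Cognee versions):

```python
import asyncio
import cognee
from cognee import SearchType  # older releases expose this under cognee.modules.search.types

async def query_memory():
    # Cross-document question: the answer is assembled from all ingested sources
    results = await cognee.search(
        query_text="What coding style does the developer prefer, and why?",
        query_type=SearchType.GRAPH_COMPLETION,
    )
    print(results)
    # To focus on targeted context, recent versions also accept node-set filter
    # arguments on search(); check the search reference for your version.

asyncio.run(query_memory())
```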
Provide Interactive Feedback for Continuous Learning
Run a search with save_interaction=True to capture user feedback. Then, use the FEEDBACK query type to refine future retrievals and improve Cognee’s performance over time.
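A sketch of the feedback loop; the queries are illustrative, and the exact search() parameters should be checked against the reference for your Cognee version:

```python
import asyncio
import cognee
from cognee import SearchType

async def feedback_loop():
    # Ask a question and persist the interaction so it can be rated afterwards
    answer = await cognee.search(
        query_text="How should I structure error handling in my CLI tool?",
        query_type=SearchType.GRAPH_COMPLETION,
        save_interaction=True,
    )
    print(answer)

    # Attach feedback to the saved interaction; Cognee uses it to refine
    # future retrievals for similar queries
    await cognee.search(
        query_text="The last answer was helpful and matched my project setup.",
        query_type=SearchType.FEEDBACK,
    )

asyncio.run(feedback_loop())
```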