Anthropic Integration

Anthropic’s Claude models combine strong reasoning with long context windows (up to 200k tokens), making them well suited to complex knowledge graph construction, nuanced relationship extraction, and large-document analysis.

Supported Models

Claude 3.5 Sonnet

Best Performance
  • Highest reasoning capability
  • 200k context window
  • Best for complex analysis
  • Premium pricing

Claude 3 Haiku

Fast & Efficient
  • Fastest response times
  • Most cost-effective
  • Good for simple tasks
  • 200k context window

Claude 3 Opus

Maximum Capability
  • Highest accuracy
  • Best for difficult tasks
  • Largest context understanding
  • Highest cost

Legacy Models

Previous Versions
  • Claude 2.1, Claude 2.0
  • Still supported
  • Lower performance
  • Budget-friendly options

Quick Setup

Step 1: Get API Key

  1. Visit the Anthropic Console at console.anthropic.com
  2. Sign up or log in to your account
  3. Navigate to API Keys section
  4. Create a new API key
Anthropic requires phone verification and may have a waitlist for new accounts.
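Before wiring the key into Cognee, a quick shape check can catch copy-paste mistakes early. Anthropic API keys typically begin with `sk-ant-`; the helper below (`check_anthropic_key` is an illustrative name, not part of cognee) only checks the key's shape, not its validity:

```python
import os
from typing import Optional

def check_anthropic_key(key: Optional[str] = None) -> bool:
    """Rough sanity check on an Anthropic API key's shape (not a validity test)."""
    key = key or os.environ.get("LLM_API_KEY", "")
    # Anthropic keys typically begin with "sk-ant-"; a real validity check
    # requires an actual API call.
    return key.startswith("sk-ant-") and len(key) > 20

print(check_anthropic_key("sk-ant-" + "x" * 30))  # True
print(check_anthropic_key("not-a-key"))           # False
```

A check like this is useful in CI, where a malformed key would otherwise only surface as an opaque API error at runtime.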
Step 2: Configure Cognee

import os
import cognee

# Configure Anthropic as LLM provider
os.environ["LLM_PROVIDER"] = "anthropic"
os.environ["LLM_API_KEY"] = "your-anthropic-api-key"
os.environ["LLM_MODEL"] = "claude-3-5-sonnet-20241022"
Step 3: Test Setup

import asyncio
import cognee

async def test_anthropic():
    # Assumes the environment variables from Step 2 are already set
    await cognee.add("Claude is Anthropic's AI assistant.")
    await cognee.cognify()

    result = await cognee.search("Who created Claude?")
    print(result[0])

asyncio.run(test_anthropic())

Configuration Options

import os

# Claude 3.5 Sonnet (recommended)
os.environ["LLM_MODEL"] = "claude-3-5-sonnet-20241022"

# Claude 3 Haiku (fast & economical)
os.environ["LLM_MODEL"] = "claude-3-haiku-20240307"

# Claude 3 Opus (maximum capability)
os.environ["LLM_MODEL"] = "claude-3-opus-20240229"

Code Examples

import cognee
import os
import asyncio
from cognee import SearchType  # import path may vary by cognee version

async def analyze_document():
    # Configure Claude for document analysis
    os.environ["LLM_PROVIDER"] = "anthropic"
    os.environ["LLM_API_KEY"] = "your-anthropic-api-key"
    os.environ["LLM_MODEL"] = "claude-3-5-sonnet-20241022"
    
    # Analyze a research paper
    document = """
    Recent advances in transformer architectures have revolutionized 
    natural language processing. The attention mechanism allows models 
    to focus on relevant parts of the input, leading to better 
    understanding of context and relationships between concepts.
    """
    
    await cognee.add(document)
    await cognee.cognify()
    
    # Get detailed analysis
    analysis = await cognee.search(
        "What are the key innovations in transformer architectures?",
        query_type=SearchType.GRAPH_COMPLETION
    )
    
    print(analysis[0])

asyncio.run(analyze_document())
import cognee
import os
import asyncio
from cognee import SearchType  # import path may vary by cognee version

async def process_long_document():
    # Configure for long context processing
    os.environ["LLM_PROVIDER"] = "anthropic"
    os.environ["LLM_MODEL"] = "claude-3-5-sonnet-20241022"
    os.environ["LLM_MAX_TOKENS"] = "8000"
    
    # Process a long document (Claude handles up to 200k tokens).
    # Note: extract text from PDFs first; open() reads plain-text files only.
    with open("long_research_paper.txt", "r", encoding="utf-8") as f:
        long_content = f.read()
    
    print(f"Processing document with {len(long_content)} characters...")
    
    await cognee.add(long_content)
    await cognee.cognify()
    
    # Get comprehensive insights
    insights = await cognee.search(
        "What are the main themes and conclusions?",
        query_type=SearchType.SUMMARIES
    )
    
    for insight in insights:
        print(f"Theme: {insight}")

asyncio.run(process_long_document())
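Even with a 200k-token window, it helps to estimate token usage before sending very long documents. The sketch below uses a rough heuristic of ~4 characters per token for English prose (an assumption, not Anthropic's tokenizer) and a simple character-based chunker; both function names are illustrative:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    # For exact counts, use Anthropic's token counting endpoint.
    return max(1, len(text) // 4)

def chunk_text(text: str, max_tokens: int = 180_000) -> list:
    """Split text into pieces that stay under a token budget, leaving
    headroom below Claude's 200k-token context window."""
    max_chars = max_tokens * 4
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

doc = "word " * 100_000  # ~500k characters
print(estimate_tokens(doc), len(chunk_text(doc)))
```

Chunking on character counts can split mid-sentence; for production use, split on paragraph or section boundaries instead.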

Claude Model Features

Long Context Window

All Claude 3 models accept up to 200k tokens of input, so large documents can be ingested without aggressive chunking.

Performance Optimization

Model Selection

Choose Wisely
  • Haiku: Simple entity extraction, high volume
  • Sonnet: Balanced performance for most tasks
  • Opus: Complex reasoning, critical applications
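The guidance above can be sketched as a small helper. The mapping and function names are illustrative (not part of cognee); the model IDs are the ones used earlier on this page:

```python
import os

# Illustrative mapping from task profile to model ID.
CLAUDE_FOR_TASK = {
    "high_volume": "claude-3-haiku-20240307",     # simple extraction, many calls
    "balanced":    "claude-3-5-sonnet-20241022",  # default for most workloads
    "critical":    "claude-3-opus-20240229",      # complex reasoning
}

def select_claude(task_profile: str = "balanced") -> str:
    """Set LLM_MODEL for cognee based on a coarse task profile."""
    model = CLAUDE_FOR_TASK.get(task_profile, CLAUDE_FOR_TASK["balanced"])
    os.environ["LLM_MODEL"] = model
    return model

print(select_claude("high_volume"))  # claude-3-haiku-20240307
```

Unknown profiles fall back to Sonnet, which keeps the default on the balanced cost/quality point.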

Parameter Tuning

Optimize Settings
  • Lower temperature for consistent outputs
  • Adjust max_tokens based on needs
  • Use appropriate top_p values
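A possible way to apply these settings via environment variables. `LLM_MAX_TOKENS` appears earlier on this page; `LLM_TEMPERATURE` and `LLM_TOP_P` are assumptions here, so check which settings your cognee version actually honors:

```python
import os

# LLM_MAX_TOKENS is used earlier on this page; LLM_TEMPERATURE and LLM_TOP_P
# are assumptions -- verify your cognee version reads them before relying on this.
os.environ["LLM_TEMPERATURE"] = "0.1"   # low temperature for consistent extraction
os.environ["LLM_MAX_TOKENS"] = "4000"   # cap output length to control cost
os.environ["LLM_TOP_P"] = "0.9"
```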

Error Handling

import cognee
import asyncio
from anthropic import APIError, RateLimitError

async def robust_anthropic(retries: int = 3):
    try:
        await cognee.add("Your data here")
        await cognee.cognify()

        result = await cognee.search("Your query")
        return result

    except RateLimitError:
        if retries > 0:
            print("Rate limit hit, waiting before retry...")
            await asyncio.sleep(60)
            return await robust_anthropic(retries - 1)
        raise
        
    except APIError as e:
        print(f"Anthropic API error: {e}")
        # Implement fallback logic
        
    except Exception as e:
        print(f"Unexpected error: {e}")

asyncio.run(robust_anthropic())
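An alternative to retrying inside the handler is a bounded exponential-backoff wrapper that can wrap any async call. `with_backoff` and the stub `flaky` below are illustrative helpers; the demo raises a plain `RuntimeError` instead of the SDK's `RateLimitError` so the sketch runs standalone:

```python
import asyncio
import random

async def with_backoff(coro_factory, max_attempts=4, base_delay=1.0,
                       retry_on=(Exception,)):
    """Retry an async operation with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return await coro_factory()
        except retry_on:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            await asyncio.sleep(delay)

# Demo with a stub that fails twice, then succeeds.
calls = {"n": 0}

async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(asyncio.run(with_backoff(flaky, base_delay=0.01)))  # ok
```

In real use, pass `retry_on=(RateLimitError,)` so only rate-limit errors are retried while other API errors fail fast.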

Next Steps