Build Real-Time Knowledge Graphs for AI Agents
⭐ Help us reach more developers and grow the Graphiti community. Star this repo!
Tip
Check out the new MCP server for Graphiti! Give Claude, Cursor, and other MCP clients powerful Knowledge Graph-based memory.
Graphiti is a framework for building and querying temporally-aware knowledge graphs, specifically tailored for AI agents operating in dynamic environments. Unlike traditional retrieval-augmented generation (RAG) methods, Graphiti continuously integrates user interactions, structured and unstructured enterprise data, and external information into a coherent, queryable graph. The framework supports incremental data updates, efficient retrieval, and precise historical queries without requiring complete graph recomputation, making it suitable for developing interactive, context-aware AI applications.
Use Graphiti to:
- Integrate and maintain dynamic user interactions and business data.
- Facilitate state-based reasoning and task automation for agents.
- Query complex, evolving data with semantic, keyword, and graph-based search methods.
A knowledge graph is a network of interconnected facts, such as "Kendra loves Adidas shoes." Each fact is a "triplet" represented by two entities, or nodes ("Kendra", "Adidas shoes"), and their relationship, or edge ("loves"). Knowledge graphs have been explored extensively for information retrieval. What makes Graphiti unique is its ability to autonomously build a knowledge graph while handling changing relationships and maintaining historical context.
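As a purely illustrative sketch (not Graphiti's internal schema), a triplet can be modeled as two nodes joined by a labeled, time-stamped edge:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Node:
    name: str  # an entity, e.g., "Kendra" or "Adidas shoes"

@dataclass
class Edge:
    source: Node        # subject entity
    relation: str       # the relationship, e.g., "loves"
    target: Node        # object entity
    valid_at: datetime  # when the fact became true (temporal context)

fact = Edge(Node("Kendra"), "loves", Node("Adidas shoes"), datetime(2024, 8, 1))
```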
Graphiti powers the core of Zep's memory layer for AI Agents.
Using Graphiti, we've demonstrated Zep is the State of the Art in Agent Memory.
Read our paper: Zep: A Temporal Knowledge Graph Architecture for Agent Memory.
We're excited to open-source Graphiti, believing its potential reaches far beyond AI memory applications.
Traditional RAG approaches often rely on batch processing and static data summarization, making them inefficient for frequently changing data. Graphiti addresses these challenges by providing:
- Real-Time Incremental Updates: Immediate integration of new data episodes without batch recomputation.
- Bi-Temporal Data Model: Explicit tracking of event occurrence and ingestion times, allowing accurate point-in-time queries.
- Efficient Hybrid Retrieval: Combines semantic embeddings, keyword (BM25), and graph traversal to achieve low-latency queries without reliance on LLM summarization.
- Custom Entity Definitions: Flexible ontology creation and support for developer-defined entities through straightforward Pydantic models (see the sketch after this list).
- Scalability: Efficiently manages large datasets with parallel processing, suitable for enterprise environments.
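As an example of custom entity definitions, an entity type is just a Pydantic model. The sketch below is illustrative: the `Customer` fields are invented, and the `entity_types` parameter follows Graphiti's documented `add_episode` API, so check the current docs before relying on it:

```python
from pydantic import BaseModel, Field

class Customer(BaseModel):
    """A customer of the business, extracted from episodes."""
    preferred_brand: str | None = Field(None, description="Brand the customer prefers")

# Illustrative usage: pass custom types when ingesting an episode
# await graphiti.add_episode(..., entity_types={"Customer": Customer})
```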
| Aspect | GraphRAG | Graphiti |
|---|---|---|
| Primary Use | Static document summarization | Dynamic data management |
| Data Handling | Batch-oriented processing | Continuous, incremental updates |
| Knowledge Structure | Entity clusters & community summaries | Episodic data, semantic entities, communities |
| Retrieval Method | Sequential LLM summarization | Hybrid semantic, keyword, and graph-based search |
| Adaptability | Low | High |
| Temporal Handling | Basic timestamp tracking | Explicit bi-temporal tracking |
| Contradiction Handling | LLM-driven summarization judgments | Temporal edge invalidation |
| Query Latency | Seconds to tens of seconds | Typically sub-second latency |
| Custom Entity Types | No | Yes, customizable |
| Scalability | Moderate | High, optimized for large datasets |
Graphiti is specifically designed to address the challenges of dynamic and frequently updated datasets, making it particularly suitable for applications requiring real-time interaction and precise historical queries.
Requirements:
- Python 3.10 or higher
- Neo4j 5.26 / FalkorDB 1.1.2 or higher (serves as the embeddings storage backend)
- OpenAI API key (Graphiti defaults to OpenAI for LLM inference and embedding)
Important
Graphiti works best with LLM services that support Structured Output (such as OpenAI and Gemini). Using other services may result in incorrect output schemas and ingestion failures. This is particularly problematic when using smaller models.
Optional:
- Google Gemini, Anthropic, or Groq API key (for alternative LLM providers)
Tip
The simplest way to install Neo4j is via Neo4j Desktop. It provides a user-friendly interface to manage Neo4j instances and databases. Alternatively, you can use FalkorDB on-premises via Docker and instantly start with the quickstart example:

```bash
docker run -p 6379:6379 -p 3000:3000 -it --rm falkordb/falkordb:latest
```
```bash
pip install graphiti-core
```

or

```bash
uv add graphiti-core
```
If you plan to use FalkorDB as your graph database backend, install with the FalkorDB extra:
```bash
pip install graphiti-core[falkordb]
# or with uv
uv add graphiti-core[falkordb]
```
```bash
# Install with Anthropic support
pip install graphiti-core[anthropic]

# Install with Groq support
pip install graphiti-core[groq]

# Install with Google Gemini support
pip install graphiti-core[google-genai]

# Install with multiple providers
pip install graphiti-core[anthropic,groq,google-genai]

# Install with FalkorDB and LLM providers
pip install graphiti-core[falkordb,anthropic,google-genai]
```
Important
Graphiti defaults to using OpenAI for LLM inference and embedding. Ensure that an `OPENAI_API_KEY` is set in your environment. Support for Anthropic and Groq LLM inference is available, too. Other LLM providers may be supported via OpenAI-compatible APIs.
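For example:

```bash
export OPENAI_API_KEY=your_openai_api_key
```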
For a complete working example, see the Quickstart Example in the examples directory. The quickstart demonstrates:
- Connecting to a Neo4j or FalkorDB database
- Initializing Graphiti indices and constraints
- Adding episodes to the graph (both text and structured JSON)
- Searching for relationships (edges) using hybrid search
- Reranking search results using graph distance
- Searching for nodes using predefined search recipes
The example is fully documented with clear explanations of each functionality and includes a comprehensive README with setup instructions and next steps.
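A condensed sketch of that flow, abbreviated from the quickstart (the connection URI and credentials are placeholders; see the example for the full, authoritative version):

```python
import asyncio
from datetime import datetime, timezone

from graphiti_core import Graphiti
from graphiti_core.nodes import EpisodeType

async def main():
    # Connect to Neo4j (placeholder credentials)
    graphiti = Graphiti("bolt://localhost:7687", "neo4j", "password")
    try:
        # One-time setup of indices and constraints
        await graphiti.build_indices_and_constraints()

        # Ingest a text episode
        await graphiti.add_episode(
            name="episode_0",
            episode_body="Kendra loves Adidas shoes.",
            source=EpisodeType.text,
            source_description="user message",
            reference_time=datetime.now(timezone.utc),
        )

        # Hybrid (semantic + keyword) search over edges, i.e. facts
        results = await graphiti.search("What does Kendra love?")
        for edge in results:
            print(edge.fact)
    finally:
        await graphiti.close()

asyncio.run(main())
```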
The `mcp_server` directory contains a Model Context Protocol (MCP) server implementation for Graphiti. This server allows AI assistants to interact with Graphiti's knowledge graph capabilities through the MCP protocol.
Key features of the MCP server include:
- Episode management (add, retrieve, delete)
- Entity management and relationship handling
- Semantic and hybrid search capabilities
- Group management for organizing related data
- Graph maintenance operations
The MCP server can be deployed using Docker with Neo4j, making it easy to integrate Graphiti into your AI assistant workflows.
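Assuming the `mcp_server` directory ships a Compose file that starts the server alongside Neo4j (the MCP server README is the authoritative reference), deployment would look roughly like:

```bash
cd mcp_server
docker compose up
```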
For detailed setup instructions and usage examples, see the MCP server README.
The `server` directory contains an API service for interacting with the Graphiti API. It is built using FastAPI.
Please see the server README for more information.
In addition to the Neo4j and OpenAI-compatible credentials, Graphiti also has a few optional environment variables. If you are using one of our supported models, such as Anthropic or Voyage models, the necessary environment variables must be set.
Database names are configured directly in the driver constructors:
- Neo4j: Database name defaults to `neo4j` (hardcoded in Neo4jDriver)
- FalkorDB: Database name defaults to `default_db` (hardcoded in FalkorDriver)
As of v0.17.0, if you need to customize your database configuration, you can instantiate a database driver and pass it to the Graphiti constructor using the `graph_driver` parameter.
For Neo4j:

```python
from graphiti_core import Graphiti
from graphiti_core.driver.neo4j_driver import Neo4jDriver

# Create a Neo4j driver with a custom database name
driver = Neo4jDriver(
    uri="bolt://localhost:7687",
    user="neo4j",
    password="password",
    database="my_custom_database"  # Custom database name
)

# Pass the driver to Graphiti
graphiti = Graphiti(graph_driver=driver)
```
For FalkorDB:

```python
from graphiti_core import Graphiti
from graphiti_core.driver.falkordb_driver import FalkorDriver

# Create a FalkorDB driver with a custom database name
driver = FalkorDriver(
    host="localhost",
    port=6379,
    username="falkor_user",      # Optional
    password="falkor_password",  # Optional
    database="my_custom_graph"   # Custom database name
)

# Pass the driver to Graphiti
graphiti = Graphiti(graph_driver=driver)
```
`USE_PARALLEL_RUNTIME` is an optional boolean variable that can be set to true if you wish to enable Neo4j's parallel runtime feature for several of our search queries. Note that this feature is not supported for Neo4j Community edition or for smaller AuraDB instances, so it is off by default.
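To enable it:

```bash
export USE_PARALLEL_RUNTIME=true
```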
Graphiti supports Azure OpenAI for both LLM inference and embeddings. Azure deployments often require different endpoints for LLM and embedding services, and separate deployments for default and small models.
```python
from openai import AsyncAzureOpenAI
from graphiti_core import Graphiti
from graphiti_core.llm_client import LLMConfig, OpenAIClient
from graphiti_core.embedder.openai import OpenAIEmbedder, OpenAIEmbedderConfig
from graphiti_core.cross_encoder.openai_reranker_client import OpenAIRerankerClient

# Azure OpenAI configuration - use separate endpoints for different services
api_key = "<your-api-key>"
api_version = "<your-api-version>"
llm_endpoint = "<your-llm-endpoint>"  # e.g., "https://your-llm-resource.openai.azure.com/"
embedding_endpoint = "<your-embedding-endpoint>"  # e.g., "https://your-embedding-resource.openai.azure.com/"

# Create separate Azure OpenAI clients for different services
llm_client_azure = AsyncAzureOpenAI(
    api_key=api_key,
    api_version=api_version,
    azure_endpoint=llm_endpoint
)

embedding_client_azure = AsyncAzureOpenAI(
    api_key=api_key,
    api_version=api_version,
    azure_endpoint=embedding_endpoint
)

# Create LLM Config with your Azure deployment names
azure_llm_config = LLMConfig(
    small_model="gpt-4.1-nano",
    model="gpt-4.1-mini",
)

# Initialize Graphiti with Azure OpenAI clients
graphiti = Graphiti(
    "bolt://localhost:7687",
    "neo4j",
    "password",
    llm_client=OpenAIClient(
        llm_config=azure_llm_config,
        client=llm_client_azure
    ),
    embedder=OpenAIEmbedder(
        config=OpenAIEmbedderConfig(
            embedding_model="text-embedding-3-small-deployment"  # Your Azure embedding deployment name
        ),
        client=embedding_client_azure
    ),
    cross_encoder=OpenAIRerankerClient(
        llm_config=LLMConfig(
            model=azure_llm_config.small_model  # Use small model for reranking
        ),
        client=llm_client_azure
    )
)

# Now you can use Graphiti with Azure OpenAI
```
Make sure to replace the placeholder values with your actual Azure OpenAI credentials and deployment names that match your Azure OpenAI service configuration.
Graphiti supports Google's Gemini models for LLM inference, embeddings, and cross-encoding/reranking. To use Gemini, you'll need to configure the LLM client, embedder, and the cross-encoder with your Google API key.
Install Graphiti:
```bash
uv add "graphiti-core[google-genai]"

# or

pip install "graphiti-core[google-genai]"
```
```python
from graphiti_core import Graphiti
from graphiti_core.llm_client.gemini_client import GeminiClient, LLMConfig
from graphiti_core.embedder.gemini import GeminiEmbedder, GeminiEmbedderConfig
from graphiti_core.cross_encoder.gemini_reranker_client import GeminiRerankerClient

# Google API key configuration
api_key = "<your-google-api-key>"

# Initialize Graphiti with Gemini clients
graphiti = Graphiti(
    "bolt://localhost:7687",
    "neo4j",
    "password",
    llm_client=GeminiClient(
        config=LLMConfig(
            api_key=api_key,
            model="gemini-2.0-flash"
        )
    ),
    embedder=GeminiEmbedder(
        config=GeminiEmbedderConfig(
            api_key=api_key,
            embedding_model="embedding-001"
        )
    ),
    cross_encoder=GeminiRerankerClient(
        config=LLMConfig(
            api_key=api_key,
            model="gemini-2.5-flash-lite-preview-06-17"
        )
    )
)

# Now you can use Graphiti with Google Gemini for all components
```
The Gemini reranker uses the `gemini-2.5-flash-lite-preview-06-17` model by default, which is optimized for cost-effective, low-latency classification tasks. It uses the same boolean classification approach as the OpenAI reranker, leveraging Gemini's log probabilities feature to rank passage relevance.
Graphiti supports Ollama for running local LLMs and embedding models via Ollama's OpenAI-compatible API. This is ideal for privacy-focused applications or when you want to avoid API costs.
Install the models:

```bash
ollama pull deepseek-r1:7b     # LLM
ollama pull nomic-embed-text   # embeddings
```
```python
from graphiti_core import Graphiti
from graphiti_core.llm_client.config import LLMConfig
from graphiti_core.llm_client.openai_client import OpenAIClient
from graphiti_core.embedder.openai import OpenAIEmbedder, OpenAIEmbedderConfig
from graphiti_core.cross_encoder.openai_reranker_client import OpenAIRerankerClient

# Configure Ollama LLM client
llm_config = LLMConfig(
    api_key="abc",  # Ollama doesn't require a real API key
    model="deepseek-r1:7b",
    small_model="deepseek-r1:7b",
    base_url="http://localhost:11434/v1",  # Ollama provides this port
)

llm_client = OpenAIClient(config=llm_config)

# Initialize Graphiti with Ollama clients
graphiti = Graphiti(
    "bolt://localhost:7687",
    "neo4j",
    "password",
    llm_client=llm_client,
    embedder=OpenAIEmbedder(
        config=OpenAIEmbedderConfig(
            api_key="abc",
            embedding_model="nomic-embed-text",
            embedding_dim=768,
            base_url="http://localhost:11434/v1",
        )
    ),
    cross_encoder=OpenAIRerankerClient(client=llm_client, config=llm_config),
)

# Now you can use Graphiti with local Ollama models
```
Ensure Ollama is running (`ollama serve`) and that you have pulled the models you want to use.
Graphiti collects anonymous usage statistics to help us understand how the framework is being used and improve it for everyone. We believe transparency is important, so here's exactly what we collect and why.
When you initialize a Graphiti instance, we collect:
- Anonymous identifier: A randomly generated UUID stored locally in `~/.cache/graphiti/telemetry_anon_id`
- System information: Operating system, Python version, and system architecture
- Graphiti version: The version you're using
- Configuration choices:
  - LLM provider type (OpenAI, Azure, Anthropic, etc.)
  - Database backend (Neo4j, FalkorDB)
  - Embedder provider (OpenAI, Azure, Voyage, etc.)
We are committed to protecting your privacy. We never collect:
- Personal information or identifiers
- API keys or credentials
- Your actual data, queries, or graph content
- IP addresses or hostnames
- File paths or system-specific information
- Any content from your episodes, nodes, or edges
This information helps us:
- Understand which configurations are most popular to prioritize support and testing
- Identify which LLM and database providers to focus development efforts on
- Track adoption patterns to guide our roadmap
- Ensure compatibility across different Python versions and operating systems
By sharing this anonymous information, you help us make Graphiti better for everyone in the community.
The telemetry code may be found here.
Telemetry is opt-out and can be disabled at any time. To disable telemetry collection:
Option 1: Environment Variable
```bash
export GRAPHITI_TELEMETRY_ENABLED=false
```
Option 2: Set in your shell profile
```bash
# For bash users (~/.bashrc or ~/.bash_profile)
echo 'export GRAPHITI_TELEMETRY_ENABLED=false' >> ~/.bashrc

# For zsh users (~/.zshrc)
echo 'export GRAPHITI_TELEMETRY_ENABLED=false' >> ~/.zshrc
```
Option 3: Set for a specific Python session
```python
import os
os.environ['GRAPHITI_TELEMETRY_ENABLED'] = 'false'

# Then initialize Graphiti as usual
from graphiti_core import Graphiti
graphiti = Graphiti(...)
```
Telemetry is automatically disabled during test runs (when `pytest` is detected).
- Telemetry uses PostHog for anonymous analytics collection
- All telemetry operations are designed to fail silently - they will never interrupt your application or affect Graphiti functionality
- The anonymous ID is stored locally and is not tied to any personal information
Graphiti is under active development. We aim to maintain API stability while working on:
- Supporting custom graph schemas:
  - Allow developers to provide their own defined node and edge classes when ingesting episodes
  - Enable more flexible knowledge representation tailored to specific use cases
- Enhancing retrieval capabilities with more robust and configurable options
- Graphiti MCP Server
- Expanding test coverage to ensure reliability and catch edge cases
We encourage and appreciate all forms of contributions, whether it's code, documentation, addressing GitHub Issues, or answering questions in the Graphiti Discord channel. For detailed guidelines on code contributions, please refer to CONTRIBUTING.
Join the Zep Discord server and make your way to the #Graphiti channel!