Memory for AI Agents; Announcing OpenMemory MCP - local and secure memory management.
Learn more · Join Discord · Demo · OpenMemory
📄 Building Production-Ready AI Agents with Scalable Long-Term Memory →
⚡ +26% Accuracy vs. OpenAI Memory • 🚀 91% Faster • 💰 90% Fewer Tokens
- +26% Accuracy over OpenAI Memory on the LOCOMO benchmark
- 91% Faster Responses than full-context, ensuring low-latency at scale
- 90% Lower Token Usage than full-context, cutting costs without compromise
- Read the full paper
Mem0 ("mem-zero") enhances AI assistants and agents with an intelligent memory layer, enabling personalized AI interactions. It remembers user preferences, adapts to individual needs, and continuously learns over time—ideal for customer support chatbots, AI assistants, and autonomous systems.
Core Capabilities:
- Multi-Level Memory: Seamlessly retains User, Session, and Agent state with adaptive personalization (see the sketch after this list)
- Developer-Friendly: Intuitive API, cross-platform SDKs, and a fully managed service option
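The three levels correspond to separate scopes in the API. A minimal sketch, assuming the self-hosted `Memory` class and the scoping parameters `user_id`, `run_id`, and `agent_id` (the session-level `run_id` semantics are an assumption here; consult the docs for exact behavior):

```python
from mem0 import Memory

m = Memory()

# User-level: persists across every session for this user.
m.add([{"role": "user", "content": "I prefer metric units."}], user_id="alice")

# Session-level: tied to one conversation via run_id (assumed scope name).
m.add([{"role": "user", "content": "This session is about a Kyoto trip."}],
      user_id="alice", run_id="trip-planning-01")

# Agent-level: knowledge the agent accumulates independently of any user.
m.add([{"role": "assistant", "content": "Always confirm dates before booking."}],
      agent_id="travel-bot")
```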
Applications:
- AI Assistants: Consistent, context-rich conversations
- Customer Support: Recall past tickets and user history for tailored help
- Healthcare: Track patient preferences and history for personalized care
- Productivity & Gaming: Adaptive workflows and environments based on user behavior
Choose between our hosted platform and our self-hosted package:
Get up and running in minutes with automatic updates, analytics, and enterprise security.
- Sign up on Mem0 Platform
- Embed the memory layer via SDK or API keys (a minimal sketch follows below)
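With the hosted platform, a managed client stands in for the self-hosted `Memory` class. A minimal sketch, assuming a key from the Mem0 dashboard; treat the exact calls as illustrative and check the API Reference for the current signatures:

```python
from mem0 import MemoryClient

# The hosted client authenticates with your platform API key
# (assumed to come from the Mem0 dashboard).
client = MemoryClient(api_key="your-mem0-api-key")

# Store a conversation turn as memory for a given user.
client.add(
    [{"role": "user", "content": "I prefer concise answers."}],
    user_id="alice",
)

# Retrieve memories relevant to a later query.
results = client.search("How should I phrase my reply?", user_id="alice")
```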
Install the SDK via pip:

```bash
pip install mem0ai
```

Install the SDK via npm:

```bash
npm install mem0ai
```
Mem0 requires an LLM to function, with `gpt-4o-mini` from OpenAI as the default. However, it supports a variety of LLMs; for details, refer to our Supported LLMs documentation.
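To use a different model, the self-hosted `Memory` class accepts a config dict. A minimal sketch, assuming the OpenAI provider with a non-default model; the exact provider names and config keys should be verified against the Supported LLMs documentation:

```python
from mem0 import Memory

# Hedged sketch: override the default gpt-4o-mini via a config dict.
config = {
    "llm": {
        "provider": "openai",   # provider name as assumed here
        "config": {
            "model": "gpt-4o",  # any supported model
            "temperature": 0.1,
        },
    }
}

memory = Memory.from_config(config)
```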
The first step is to instantiate the memory:
```python
from openai import OpenAI
from mem0 import Memory

openai_client = OpenAI()
memory = Memory()

def chat_with_memories(message: str, user_id: str = "default_user") -> str:
    # Retrieve relevant memories
    relevant_memories = memory.search(query=message, user_id=user_id, limit=3)
    memories_str = "\n".join(f"- {entry['memory']}" for entry in relevant_memories["results"])

    # Generate Assistant response
    system_prompt = f"You are a helpful AI. Answer the question based on query and memories.\nUser Memories:\n{memories_str}"
    messages = [{"role": "system", "content": system_prompt}, {"role": "user", "content": message}]
    response = openai_client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    assistant_response = response.choices[0].message.content

    # Create new memories from the conversation
    messages.append({"role": "assistant", "content": assistant_response})
    memory.add(messages, user_id=user_id)

    return assistant_response

def main():
    print("Chat with AI (type 'exit' to quit)")
    while True:
        user_input = input("You: ").strip()
        if user_input.lower() == 'exit':
            print("Goodbye!")
            break
        print(f"AI: {chat_with_memories(user_input)}")

if __name__ == "__main__":
    main()
```
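Memories accumulate as the loop runs: `memory.add` extracts salient facts from each exchange, and `memory.search` retrieves the closest matches for the next turn. The two primitives can also be exercised directly; a short sketch, assuming `OPENAI_API_KEY` is set in the environment (the default setup calls OpenAI for both extraction and embeddings):

```python
from mem0 import Memory

m = Memory()

# Store a fact; Mem0 extracts and indexes the salient memory.
m.add([{"role": "user", "content": "I'm vegetarian and allergic to nuts."}],
      user_id="alice")

# Retrieve memories relevant to a new query.
for hit in m.search("What should Alice order for lunch?", user_id="alice", limit=3)["results"]:
    print(hit["memory"])
```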
For detailed integration steps, see the Quickstart and API Reference.
- ChatGPT with Memory: Personalized chat powered by Mem0 (Live Demo)
- Browser Extension: Store memories across ChatGPT, Perplexity, and Claude (Chrome Extension)
- LangGraph Support: Build a customer bot with LangGraph + Mem0 (Guide)
- CrewAI Integration: Tailor CrewAI outputs with Mem0 (Example)
- Full docs: https://docs.mem0.ai
- Community: Discord · Twitter
- Contact: founders@mem0.ai
We now have a paper you can cite:
```bibtex
@article{mem0,
  title={Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory},
  author={Chhikara, Prateek and Khant, Dev and Aryan, Saket and Singh, Taranjeet and Yadav, Deshraj},
  journal={arXiv preprint arXiv:2504.19413},
  year={2025}
}
```
Apache 2.0. See the LICENSE file for details.