Store millions of text chunks inside ultra-compact MP4 files, index them with local embeddings, and retrieve answers instantly for fully offline RAG with any LLM.


framerecall/FrameRecall


PyPI version · Downloads · License: MIT · Python 3.8+ · Code style: black

Ultra-fast Python toolkit for generating, archiving, and recalling AI memories as video sequences of two-dimensional matrix-barcode frames. The platform supplies meaning-based lookup spanning millions of document fragments, answering in under one second.

🚀 Why FrameRecall?

Transformative Approach

  • Clips as Storage: Archive vast amounts of textual information in a compact .mp4
  • Blazing Access: Retrieve relevant insights within milliseconds using meaning-based queries
  • Superior Compression: Frame encoding significantly lowers data requirements
  • Serverless Design: Operates entirely via standalone files – no backend needed
  • Fully Local: Entire system runs independently once memory footage is created

Streamlined System

  • Tiny Footprint: Core logic spans fewer than 1,000 lines of code
  • Resource-Conscious: Optimised to perform well on standard processors
  • Self-Contained: Entire intelligence archive stored in one clip
  • Remote-Friendly: Media can be delivered directly from online storage
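The "clips as storage" idea above can be sketched in plain Python: each text chunk becomes a two-dimensional bit matrix (a toy stand-in for one video frame), and decoding reverses the mapping. This is a conceptual illustration only, not the FrameRecall API; the frame width and zero-padding scheme are invented for the example.

```python
# Conceptual sketch (not the FrameRecall API): a text chunk round-tripped
# through a 2D bit matrix, the same idea behind packing chunks into frames.

def chunk_to_frame(text: str, width: int = 64) -> list[list[int]]:
    """Turn a UTF-8 chunk into rows of bits (one toy 'frame')."""
    bits = []
    for byte in text.encode("utf-8"):
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    bits.extend([0] * (-len(bits) % width))  # pad to whole rows
    return [bits[i:i + width] for i in range(0, len(bits), width)]

def frame_to_chunk(frame: list[list[int]]) -> str:
    """Recover the original text from the bit matrix."""
    bits = [b for row in frame for b in row]
    data = bytearray()
    for i in range(0, len(bits) - 7, 8):
        data.append(int("".join(map(str, bits[i:i + 8])), 2))
    return data.rstrip(b"\x00").decode("utf-8")

frame = chunk_to_frame("Crucial insight 1")
print(frame_to_chunk(frame))  # -> Crucial insight 1
```

A real encoder would add error correction and compress the frame sequence with a video codec; the round trip above only shows why a video file can act as a text store at all.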

📦 Installation

Quick Install

pip install framerecall

For PDF Support

pip install framerecall PyPDF2

Recommended Setup (Virtual Environment)

```bash
# Create a new project directory
mkdir my-framerecall-project
cd my-framerecall-project

# Create virtual environment
python -m venv venv

# Activate it
# On macOS/Linux:
source venv/bin/activate
# On Windows:
venv\Scripts\activate

# Install framerecall
pip install framerecall

# For PDF support:
pip install PyPDF2
```

🎯 Getting Started Instantly

```python
from framerecall import FrameRecallEncoder, FrameRecallChat

# Construct memory sequence using textual inputs
segments = ["Crucial insight 1", "Crucial insight 2", "Contextual knowledge snippet"]
builder = FrameRecallEncoder()
builder.add_chunks(segments)
builder.build_video("archive.mp4", "archive_index.json")

# Interact with stored intelligence
assistant = FrameRecallChat("archive.mp4", "archive_index.json")
assistant.start_session()
output = assistant.chat("Tell me what's known about past happenings?")
print(output)
```

Constructing Memory from Files

```python
from framerecall import FrameRecallEncoder
import os

# Prepare input texts
assembler = FrameRecallEncoder(chunk_size=512, overlap=50)

# Inject content from directory
for filename in os.listdir("documents"):
    with open(f"documents/{filename}", "r") as document:
        assembler.add_text(document.read(), metadata={"source": filename})

# Generate compressed video sequence
assembler.build_video(
    "knowledge_base.mp4",
    "knowledge_index.json",
    fps=30,          # More chunks processed per second
    frame_size=512   # Expanded resolution accommodates extra information
)
```
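The `chunk_size` and `overlap` parameters above describe a sliding window over the text: each chunk shares `overlap` characters with its predecessor, so a sentence cut at one boundary still appears whole in a neighbouring chunk. A minimal sketch of that windowing (illustrative only, not FrameRecall's internal chunker):

```python
# Toy sliding-window chunker: chunks of `chunk_size` characters that
# overlap their predecessor by `overlap` characters.

def sliding_chunks(text: str, chunk_size: int = 512, overlap: int = 50) -> list[str]:
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]

chunks = sliding_chunks("a" * 1100, chunk_size=512, overlap=50)
print(len(chunks))  # -> 3
```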

Intelligent Lookup & Extraction

```python
from framerecall import FrameRecallRetriever

# Set up fetcher
fetcher = FrameRecallRetriever("knowledge_base.mp4", "knowledge_index.json")

# Contextual discovery
matches = fetcher.search("machine learning algorithms", top_k=5)
for fragment, relevance in matches:
    print(f"Score: {relevance:.3f} | {fragment[:100]}...")

# Retrieve neighbouring fragments
window = fetcher.get_context("explain neural networks", max_tokens=2000)
print(window)
```
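Under the hood, meaning-based search of this kind boils down to embedding the query, scoring it against every stored chunk embedding, and returning the top-k matches. A minimal sketch with toy 3-dimensional vectors standing in for real embeddings (the index contents are invented for the example):

```python
import math

# Cosine similarity between two equal-length vectors
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Score the query vector against every (chunk, vector) pair, highest first
def search(query_vec, index, top_k=2):
    scored = [(chunk, cosine(query_vec, vec)) for chunk, vec in index]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

index = [
    ("gradient descent", [0.9, 0.1, 0.0]),
    ("cooking recipes",  [0.0, 0.2, 0.9]),
    ("neural networks",  [0.8, 0.3, 0.1]),
]
for chunk, score in search([1.0, 0.2, 0.0], index):
    print(f"Score: {score:.3f} | {chunk}")
```

Real retrievers swap the toy vectors for sentence-transformer embeddings and an approximate nearest-neighbour index so the scan stays fast at millions of chunks.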

Conversational Interface

```python
from framerecall import FrameRecallInteractive

# Open real-time discussion UI
interactive = FrameRecallInteractive("knowledge_base.mp4", "knowledge_index.json")
interactive.run()  # Web panel opens at http://localhost:7860
```

Testing with file_chat.py

The `examples/file_chat.py` utility enables thorough experimentation with FrameRecall using your personal data files:

```bash
# Ingest an entire folder of materials
python examples/file_chat.py --input-dir /path/to/documents --provider google

# Load chosen documents
python examples/file_chat.py --files doc1.txt doc2.pdf --provider openai

# Apply H.265 encoding (Docker required)
python examples/file_chat.py --input-dir docs/ --codec h265 --provider google

# Adjust chunking for lengthy inputs
python examples/file_chat.py --files large.pdf --chunk-size 2048 --overlap 32 --provider google

# Resume from previously saved memory
python examples/file_chat.py --load-existing output/my_memory --provider google
```

Full Demo: Converse with a PDF Book

```bash
# 1. Prepare project directory and virtual environment
mkdir book-chat-demo
cd book-chat-demo
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# 2. Install necessary packages
pip install framerecall PyPDF2

# 3. Build book_chat.py
cat > book_chat.py << 'EOF'
from framerecall import FrameRecallEncoder, chat_with_memory
import os

# Path to your document
book_pdf = "book.pdf"  # Replace with your PDF filename

# Encode video from book
encoder = FrameRecallEncoder()
encoder.add_pdf(book_pdf)
encoder.build_video("book_memory.mp4", "book_index.json")

# Initiate interactive Q&A
api_key = os.getenv("OPENAI_API_KEY")  # Optional for model output
chat_with_memory("book_memory.mp4", "book_index.json", api_key=api_key)
EOF

# 4. Launch the assistant
export OPENAI_API_KEY="your-api-key"  # Optional
python book_chat.py
```

🛠️ Extended Setup

Tailored Embeddings

```python
from sentence_transformers import SentenceTransformer
from framerecall import FrameRecallEncoder

# Load alternative semantic model
custom_model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2')
encoder = FrameRecallEncoder(embedding_model=custom_model)
```

Parallelized Workloads

```python
# Accelerate processing with concurrency
encoder = FrameRecallEncoder(n_workers=8)
encoder.add_chunks_parallel(massive_chunk_list)
```

🐛 Debugging Guide

Frequent Pitfalls

ModuleNotFoundError: No module named 'framerecall'

```bash
# Confirm the correct Python interpreter is being used
which python  # Expected to point to your environment

# If incorrect, reactivate the virtual setup:
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

ImportError: PyPDF2 missing for document parsing

pip install PyPDF2

Missing or Invalid OpenAI Token

```bash
# Provide your OpenAI credentials (register at https://platform.openai.com)
export OPENAI_API_KEY="sk-..."  # macOS/Linux
# On Windows:
set OPENAI_API_KEY=sk-...
```

Handling Extensive PDFs

```python
# Reduce segment length for better handling
encoder = FrameRecallEncoder()
encoder.add_pdf("large_book.pdf", chunk_size=400, overlap=50)
```

🤝 Get Involved

We’re excited to collaborate! Refer to our Contribution Manual for full instructions.

```bash
# Execute test suite
pytest tests/

# Execute with coverage reporting
pytest --cov=framerecall tests/

# Apply code styling
black framerecall/
```

🆚 How FrameRecall Compares to Other Technologies

| Capability | FrameRecall | Embedding Stores | Relational Systems |
|---|---|---|---|
| Data Compression | ⭐⭐⭐⭐⭐ | ⭐⭐ | ⭐⭐⭐ |
| Configuration Time | Minimal | Advanced | Moderate |
| Conceptual Matching | ✅ | ✅ | ❌ |
| Disconnected Access | ✅ | ❌ | ✅ |
| Mobility | Standalone File | Hosted | Hosted |
| Throughput Limits | Multi-million | Multi-million | Multi-billion |
| Financial Impact | No Charge | High Fees | Moderate Expense |

🗺️ What’s Coming Next

  • v0.2.0 – International text handling
  • v0.3.0 – On-the-fly memory insertion
  • v0.4.0 – Parallel video segmentation
  • v0.5.0 – Visual and auditory embedding
  • v1.0.0 – Enterprise-grade, stable release

📚 Illustrative Use Cases

Explore the `examples/` folder to discover:

  • Transforming Wikipedia datasets into searchable memories
  • Developing custom insight archives
  • Multilingual capabilities
  • Live content updates
  • Linking with top-tier LLM platforms

🔗 Resources

📄 Usage Rights

Licensed under the MIT License; refer to the LICENSE document for specifics.

Time to redefine how your LLMs recall information: deploy FrameRecall and ignite knowledge! 🚀

