
Rememberizer

Rememberizer is a knowledge enhancement service for AI applications created by SkyDeck AI Inc.

This notebook shows how to retrieve documents from Rememberizer into the Document format that is used downstream.

Preparation

You will need an API key: you can get one after creating a common knowledge at https://rememberizer.ai. Once you have an API key, you must set it as an environment variable REMEMBERIZER_API_KEY or pass it as rememberizer_api_key when initializing RememberizerRetriever.

RememberizerRetriever has these arguments:

  • optional top_k_results: default=10. Use it to limit the number of returned documents.
  • optional rememberizer_api_key: required if you don't set the environment variable REMEMBERIZER_API_KEY.

get_relevant_documents() has one argument, query: free text that is used to find documents in the common knowledge of Rememberizer.ai.
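
For example, here is a minimal sketch of initializing the retriever with the key passed explicitly rather than via the environment variable, and running a query (the key value and query are placeholders):

from langchain_community.retrievers import RememberizerRetriever

# Pass the API key directly instead of relying on REMEMBERIZER_API_KEY
retriever = RememberizerRetriever(
    rememberizer_api_key="YOUR_REMEMBERIZER_API_KEY",  # placeholder value
    top_k_results=3,
)
docs = retriever.get_relevant_documents(query="What is a large language model?")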

Examples

Basic usage

# Setup API key
from getpass import getpass

REMEMBERIZER_API_KEY = getpass()
import os

from langchain_community.retrievers import RememberizerRetriever

os.environ["REMEMBERIZER_API_KEY"] = REMEMBERIZER_API_KEY
retriever = RememberizerRetriever(top_k_results=5)
API Reference: RememberizerRetriever
docs = retriever.get_relevant_documents(query="How does Large Language Models works?")
docs[0].metadata  # meta-information of the Document
{'id': 13646493,
'document_id': '17s3LlMbpkTk0ikvGwV0iLMCj-MNubIaP',
'name': 'What is a large language model (LLM)_ _ Cloudflare.pdf',
'type': 'application/pdf',
'path': '/langchain/What is a large language model (LLM)_ _ Cloudflare.pdf',
'url': 'https://drive.google.com/file/d/17s3LlMbpkTk0ikvGwV0iLMCj-MNubIaP/view',
'size': 337089,
'created_time': '',
'modified_time': '',
'indexed_on': '2024-04-04T03:36:28.886170Z',
'integration': {'id': 347, 'integration_type': 'google_drive'}}
print(docs[0].page_content[:400])  # the content of the Document
before, or contextualized in new ways. on some level they " understand " semantics in that they can associate words and concepts by their meaning, having seen them grouped together in that way millions or billions of times. how developers can quickly start building their own llms to build llm applications, developers need easy access to multiple data sets, and they need places for those data sets
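
On recent LangChain versions, retrievers also implement the Runnable interface, so the same lookup can be written with invoke. A minimal sketch, assuming the retriever defined above:

# Equivalent lookup via the Runnable interface (newer LangChain releases)
docs = retriever.invoke("How do Large Language Models work?")
print(len(docs), docs[0].metadata["name"])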

Usage in a chain

OPENAI_API_KEY = getpass()
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
from langchain.chains import ConversationalRetrievalChain
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model_name="gpt-3.5-turbo")
qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)
questions = [
    "What is RAG?",
    "How does Large Language Models works?",
]
chat_history = []

for question in questions:
    result = qa.invoke({"question": question, "chat_history": chat_history})
    chat_history.append((question, result["answer"]))
    print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")
-> **Question**: What is RAG?

**Answer**: RAG stands for Retrieval-Augmented Generation. It is an AI framework that retrieves facts from an external knowledge base to enhance the responses generated by Large Language Models (LLMs) by providing up-to-date and accurate information. This framework helps users understand the generative process of LLMs and ensures that the model has access to reliable information sources.

-> **Question**: How does Large Language Models works?

**Answer**: Large Language Models (LLMs) work by analyzing massive data sets of language to comprehend and generate human language text. They are built on machine learning, specifically deep learning, which involves training a program to recognize features of data without human intervention. LLMs use neural networks, specifically transformer models, to understand context in human language, making them better at interpreting language even in vague or new contexts. Developers can quickly start building their own LLMs by accessing multiple data sets and using services like Cloudflare's Vectorize and Cloudflare Workers AI platform.
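
ConversationalRetrievalChain is one of the legacy chain helpers; a similar retrieval flow can also be composed with LCEL. The sketch below is illustrative only, assuming the retriever and model defined above (the prompt wording is an assumption, not part of the original example):

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

# Illustrative prompt; adjust the wording to your use case
prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)

def format_docs(docs):
    # Join the retrieved page contents into a single context string
    return "\n\n".join(doc.page_content for doc in docs)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)

rag_chain.invoke("What is RAG?")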
