
MongoDB Atlas

MongoDB Atlas is a fully managed cloud database available in AWS, Azure, and GCP. It now has support for native Vector Search on the MongoDB document data.

Installation and Setup

See detailed configuration instructions.

We need to install the `langchain-mongodb` Python package.

pip install langchain-mongodb

Vector Store

See a usage example.

from langchain_mongodb import MongoDBAtlasVectorSearch

Retrievers

Full Text Search Retriever

The Full Text Search Retriever performs full-text searches using Lucene's standard (BM25) analyzer.

from langchain_mongodb.retrievers import MongoDBAtlasFullTextSearchRetriever
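For intuition about the BM25 ranking mentioned above, here is a small standalone scoring function. This is a generic textbook BM25 sketch (with Lucene-style IDF), not MongoDB's or Lucene's actual implementation; the tokenized corpus and the `k1`/`b` defaults are illustrative.

```python
import math

def bm25_scores(query_terms, docs, k1=1.2, b=0.75):
    """Score each document (a list of tokens) against query_terms with BM25.

    Terms that are rare in the corpus (high IDF) and frequent in a document
    (high TF, saturated by k1) raise that document's score; b normalizes
    for document length relative to the corpus average.
    """
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    scores = []
    for doc in docs:
        dl = len(doc)
        score = 0.0
        for term in set(query_terms):
            df = sum(1 for d in docs if term in d)  # document frequency
            if df == 0:
                continue
            idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
            tf = doc.count(term)
            score += idf * (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * dl / avgdl))
        scores.append(score)
    return scores

docs = [
    "mongodb atlas vector search".split(),
    "full text search with lucene".split(),
    "hybrid search combines both".split(),
]
scores = bm25_scores("lucene search".split(), docs)
```

Only the second document contains both query terms, so it receives the highest score.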

Hybrid Search Retriever

The Hybrid Search Retriever combines vector and full-text searches, weighting the results via the Reciprocal Rank Fusion (RRF) algorithm.

from langchain_mongodb.retrievers import MongoDBAtlasHybridSearchRetriever
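The RRF weighting can be sketched in a few lines of plain Python. This is the generic algorithm, not langchain-mongodb's internals; the constant `k=60` is the value commonly used in the RRF literature, and the document ids are made up for illustration.

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document ids into one.

    Each document's fused score is the sum of 1 / (k + rank) over every
    ranking it appears in (rank is 1-based), so documents ranked highly
    by both the vector and the full-text search rise to the top.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["doc_a", "doc_b", "doc_c"]    # ranking from vector search
fulltext_hits = ["doc_b", "doc_d", "doc_a"]  # ranking from full-text search
fused = reciprocal_rank_fusion([vector_hits, fulltext_hits])
# → ['doc_b', 'doc_a', 'doc_d', 'doc_c']
```

`doc_b` wins because it is ranked near the top of both lists, even though neither search ranked it first.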

Model Caches

MongoDBCache

An abstraction to store a simple cache in MongoDB. This does not use Semantic Caching, nor does it require an index to be made on the collection before generation.

To import this cache:

from langchain_mongodb.cache import MongoDBCache
API Reference: MongoDBCache

To use this cache with your LLMs:

from langchain_core.globals import set_llm_cache

mongodb_atlas_uri = "<YOUR_CONNECTION_STRING>"
COLLECTION_NAME = "<YOUR_CACHE_COLLECTION_NAME>"
DATABASE_NAME = "<YOUR_DATABASE_NAME>"

set_llm_cache(MongoDBCache(
    connection_string=mongodb_atlas_uri,
    collection_name=COLLECTION_NAME,
    database_name=DATABASE_NAME,
))
API Reference: set_llm_cache

MongoDBAtlasSemanticCache

Semantic caching allows users to retrieve cached prompts based on semantic similarity between the user input and previously cached results. Under the hood it uses MongoDB Atlas as both a cache and a vector store. The MongoDBAtlasSemanticCache inherits from MongoDBAtlasVectorSearch and needs an Atlas Vector Search index defined to work. Please look at the usage example on how to set up the index.

To import this cache:

from langchain_mongodb.cache import MongoDBAtlasSemanticCache

To use this cache with your LLMs:

from langchain_core.globals import set_llm_cache

# use any embedding provider...
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings

mongodb_atlas_uri = "<YOUR_CONNECTION_STRING>"
COLLECTION_NAME = "<YOUR_CACHE_COLLECTION_NAME>"
DATABASE_NAME = "<YOUR_DATABASE_NAME>"

set_llm_cache(MongoDBAtlasSemanticCache(
    embedding=FakeEmbeddings(),
    connection_string=mongodb_atlas_uri,
    collection_name=COLLECTION_NAME,
    database_name=DATABASE_NAME,
))
API Reference: set_llm_cache
