
Pinecone Embeddings

Pinecone's inference API can be accessed via PineconeEmbeddings, which provides text embeddings through the Pinecone service. We start by installing the prerequisite libraries:

!pip install -qU "langchain-pinecone>=0.2.0"

Next, we sign up / log in to Pinecone to get our API key:

import os
from getpass import getpass

os.environ["PINECONE_API_KEY"] = os.getenv("PINECONE_API_KEY") or getpass(
    "Enter your Pinecone API key: "
)

Check the documentation for available models. Now we initialize our embedding model like so:

from langchain_pinecone import PineconeEmbeddings

embeddings = PineconeEmbeddings(model="multilingual-e5-large")
API Reference: PineconeEmbeddings

From here we can create embeddings either synchronously or asynchronously; let's start with sync. We embed several texts as document embeddings using embed_documents, and a single text as a query embedding (i.e., what we search with in RAG) using embed_query:

docs = [
    "Apple is a popular fruit known for its sweetness and crisp texture.",
    "The tech company Apple is known for its innovative products like the iPhone.",
    "Many people enjoy eating apples as a healthy snack.",
    "Apple Inc. has revolutionized the tech industry with its sleek designs and user-friendly interfaces.",
    "An apple a day keeps the doctor away, as the saying goes.",
]
doc_embeds = embeddings.embed_documents(docs)
doc_embeds

query = "Tell me about the tech company known as Apple"
query_embed = embeddings.embed_query(query)
query_embed
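Once you have the document and query vectors back, a typical next step in RAG is to rank the documents by cosine similarity to the query. The sketch below illustrates that ranking with small made-up 3-dimensional vectors standing in for doc_embeds and query_embed (real multilingual-e5-large vectors have 1024 dimensions); it uses only the standard library, so it runs without an API key.

```python
from math import sqrt

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine similarity = dot product divided by the product of the norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for doc_embeds and query_embed from the snippet above.
doc_embeds = [
    [0.9, 0.1, 0.0],  # fruit-flavoured document
    [0.1, 0.9, 0.2],  # tech-company document
]
query_embed = [0.0, 1.0, 0.1]  # "tech company" style query

# Score every document against the query and pick the best match.
scores = [cosine_similarity(query_embed, d) for d in doc_embeds]
best = max(range(len(scores)), key=scores.__getitem__)
print(best)  # index of the most similar document
```

In practice you would rarely compute this by hand: you would upsert the document vectors into a Pinecone index and let its query API return the nearest neighbours for you.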
