LocalAI
info
langchain-localai is a third-party integration package for LocalAI. It provides a simple way to use LocalAI services in LangChain.
The source code is available on GitHub.
Let's load the LocalAI Embedding class. To use it, you need a LocalAI service hosted somewhere with embedding models configured. See the documentation at https://localai.io/basics/getting_started/index.html and https://localai.io/features/embeddings/index.html.
%pip install -U langchain-localai
from langchain_localai import LocalAIEmbeddings

embeddings = LocalAIEmbeddings(
    openai_api_base="http://localhost:8080", model="embedding-model-name"
)

text = "This is a test document."

query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
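embed_query returns a single vector (a list of floats), while embed_documents returns one vector per input text. As a quick sanity check, and to show a typical use of the vectors, the sketch below prints the embedding dimensionality and computes a cosine similarity by hand. It assumes the LocalAI service above is reachable and that the model name matches your configuration:

import math

# embed_query -> list[float]; embed_documents -> list[list[float]]
print(len(query_result))  # dimensionality of the embedding vector
print(len(doc_result))    # 1, one vector per input document

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Embedding the same text twice should score near 1.0.
print(cosine_similarity(query_result, doc_result[0]))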
Let's load the LocalAI Embedding class with first-generation models (e.g. text-search-ada-doc-001/text-search-ada-query-001). Note: these models are not recommended; see here.
from langchain_community.embeddings import LocalAIEmbeddings
API Reference: LocalAIEmbeddings
embeddings = LocalAIEmbeddings(
    openai_api_base="http://localhost:8080", model="embedding-model-name"
)

text = "This is a test document."

query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
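The LangChain Embeddings interface also exposes async counterparts, aembed_query and aembed_documents, which are useful when embedding from async code. A minimal sketch, reusing the embeddings object and text from above:

import asyncio

async def main() -> None:
    # Async counterparts of embed_query / embed_documents.
    query_vec = await embeddings.aembed_query(text)
    doc_vecs = await embeddings.aembed_documents([text])
    print(len(query_vec), len(doc_vecs))

asyncio.run(main())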
import os

# If you are behind an explicit proxy, you can use the OPENAI_PROXY
# environment variable to route requests through it.
os.environ["OPENAI_PROXY"] = "http://proxy.yourcompany.com:8080"
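The proxy setting is picked up when the LocalAIEmbeddings instance is constructed, so set the variable first. A minimal sketch (construction-time behavior may vary by version):

import os

from langchain_localai import LocalAIEmbeddings

# Set the proxy before constructing the client so it is picked up.
os.environ["OPENAI_PROXY"] = "http://proxy.yourcompany.com:8080"

embeddings = LocalAIEmbeddings(
    openai_api_base="http://localhost:8080", model="embedding-model-name"
)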
Related
- Embedding model conceptual guide
- Embedding model how-to guides