LlamaIndex is the leading framework for building LLM-powered agents over your data.
LlamaIndex (GPT Index) is a data framework for your LLM application. Building with LlamaIndex typically involves working with LlamaIndex core and a chosen set of integrations (or plugins). There are two ways to start building with LlamaIndex in Python:
- Starter: `llama-index` (`pip install llama-index`). A starter Python package that includes core LlamaIndex as well as a selection of integrations.
- Customized: `llama-index-core` (`pip install llama-index-core`). Install core LlamaIndex and add the LlamaIndex integration packages on LlamaHub that your application requires. There are over 300 LlamaIndex integration packages that work seamlessly with core, allowing you to build with your preferred LLM, embedding, and vector store providers.
The LlamaIndex Python library is namespaced such that import statements that include `core` imply that the core package is being used. In contrast, statements without `core` imply that an integration package is being used.
```python
# typical pattern
from llama_index.core.xxx import ClassABC  # core submodule xxx
from llama_index.xxx.yyy import (
    SubclassABC,
)  # integration yyy for submodule xxx

# concrete example
from llama_index.core.llms import LLM
from llama_index.llms.openai import OpenAI
```
LlamaIndex.TS (TypeScript/JavaScript)
NOTE: This README is not updated as frequently as the documentation. Please check out the documentation above for the latest updates!
- LLMs are a phenomenal piece of technology for knowledge generation and reasoning. They are pre-trained on large amounts of publicly available data.
- How do we best augment LLMs with our own private data?
We need a comprehensive toolkit to help perform this data augmentation for LLMs.
That's where LlamaIndex comes in. LlamaIndex is a "data framework" to help you build LLM apps. It provides the following tools:
- Offers data connectors to ingest your existing data sources and data formats (APIs, PDFs, docs, SQL, etc.).
- Provides ways to structure your data (indices, graphs) so that this data can be easily used with LLMs.
- Provides an advanced retrieval/query interface over your data: feed in any LLM input prompt, get back retrieved context and knowledge-augmented output.
- Allows easy integrations with your outer application framework (e.g. with LangChain, Flask, Docker, ChatGPT, or anything else).
LlamaIndex provides tools for both beginner and advanced users. Our high-level API allows beginners to use LlamaIndex to ingest and query their data in 5 lines of code. Our lower-level APIs allow advanced users to customize and extend any module (data connectors, indices, retrievers, query engines, reranking modules) to fit their needs, as in the sketch below.
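For instance, a minimal sketch of dropping down to the retriever layer to inspect the retrieved context directly, rather than going through a query engine (the data directory and question are placeholders, and `similarity_top_k=3` is an illustrative value):

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("YOUR_DATA_DIRECTORY").load_data()
index = VectorStoreIndex.from_documents(documents)

# drop below the query engine: fetch the raw retrieved context
retriever = index.as_retriever(similarity_top_k=3)
nodes = retriever.retrieve("YOUR_QUESTION")
for node_with_score in nodes:
    # each result pairs a text chunk with its similarity score
    print(node_with_score.score, node_with_score.node.get_content()[:100])
```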
Interested in contributing? Contributions to LlamaIndex core as well as contributing integrations that build on the core are both accepted and highly encouraged! See our Contribution Guide for more details.
New integrations should meaningfully integrate with existing LlamaIndex framework components. At the discretion of LlamaIndex maintainers, some integrations may be declined.
Full documentation can be found here. Please check it out for the most up-to-date tutorials, how-to guides, references, and other resources!
```bash
# custom selection of integrations to work with core
pip install llama-index-core
pip install llama-index-llms-openai
pip install llama-index-llms-replicate
pip install llama-index-embeddings-huggingface
```
Examples are in the `docs/examples` folder. Indices are in the `indices` folder (see list of indices below).
To build a simple vector store index using OpenAI:
importosos.environ["OPENAI_API_KEY"]="YOUR_OPENAI_API_KEY"fromllama_index.coreimportVectorStoreIndex,SimpleDirectoryReaderdocuments=SimpleDirectoryReader("YOUR_DATA_DIRECTORY").load_data()index=VectorStoreIndex.from_documents(documents)
To build a simple vector store index using non-OpenAI LLMs, e.g. Llama 2 hosted on Replicate, where you can easily create a free trial API token:
importosos.environ["REPLICATE_API_TOKEN"]="YOUR_REPLICATE_API_TOKEN"fromllama_index.coreimportSettings,VectorStoreIndex,SimpleDirectoryReaderfromllama_index.embeddings.huggingfaceimportHuggingFaceEmbeddingfromllama_index.llms.replicateimportReplicatefromtransformersimportAutoTokenizer# set the LLMllama2_7b_chat="meta/llama-2-7b-chat:8e6975e5ed6174911a6ff3d60540dfd4844201974602551e10e9e87ab143d81e"Settings.llm=Replicate(model=llama2_7b_chat,temperature=0.01,additional_kwargs={"top_p":1,"max_new_tokens":300},)# set tokenizer to match LLMSettings.tokenizer=AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf")# set the embed modelSettings.embed_model=HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")documents=SimpleDirectoryReader("YOUR_DATA_DIRECTORY").load_data()index=VectorStoreIndex.from_documents(documents,)
To query:
```python
query_engine = index.as_query_engine()
query_engine.query("YOUR_QUESTION")
```
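The call returns a response object whose string form is the synthesized answer, so a typical pattern is to capture and print it; a minimal sketch (the question is a placeholder):

```python
response = query_engine.query("YOUR_QUESTION")
print(response)  # printing the response object renders the generated answer
```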
By default, data is stored in-memory. To persist to disk (under `./storage`):
```python
index.storage_context.persist()
```
To reload from disk:
```python
from llama_index.core import StorageContext, load_index_from_storage

# rebuild storage context
storage_context = StorageContext.from_defaults(persist_dir="./storage")

# load index
index = load_index_from_storage(storage_context)
```
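Both `persist` and `StorageContext.from_defaults` also accept an explicit directory, which helps when one process writes the index and another loads it; a minimal sketch, assuming a hypothetical `./my_storage` path:

```python
# persist to a custom location instead of the default ./storage
index.storage_context.persist(persist_dir="./my_storage")

# later, reload from the same location
storage_context = StorageContext.from_defaults(persist_dir="./my_storage")
index = load_index_from_storage(storage_context)
```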
We use Poetry as the package manager for all Python packages. As a result, the dependencies of each Python package can be found by referencing the `pyproject.toml` file in each package's folder.
cd<desired-package-folder>pip install poetrypoetry install --with dev
By default, `llama-index-core` includes a `_static` folder that contains the NLTK and tiktoken caches included with the package installation. This ensures that you can easily run `llama-index` in environments with restrictive disk-access permissions at runtime.
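A minimal sketch, not part of the library API, for locating the bundled cache folder in an installed environment (the expected subfolder names are assumptions):

```python
import os
import llama_index.core

# the _static folder ships alongside the installed llama_index.core package
static_dir = os.path.join(os.path.dirname(llama_index.core.__file__), "_static")

# assumed contents: cache subfolders such as nltk_cache/ and tiktoken_cache/
print(sorted(os.listdir(static_dir)))
```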
To verify that these files are safe and valid, we use the GitHub `attest-build-provenance` action. This action verifies that the files in the `_static` folder are the same as the files in the `llama-index-core/llama_index/core/_static` folder.
To verify this, you can run the following script (pointing to your installed package):
```bash
#!/bin/bash

STATIC_DIR="venv/lib/python3.13/site-packages/llama_index/core/_static"
REPO="run-llama/llama_index"

find "$STATIC_DIR" -type f | while read -r file; do
    echo "Verifying: $file"
    gh attestation verify "$file" -R "$REPO" || echo "Failed to verify: $file"
done
```
Reference to cite if you use LlamaIndex in a paper:
```bibtex
@software{Liu_LlamaIndex_2022,
    author = {Liu, Jerry},
    doi = {10.5281/zenodo.1234},
    month = {11},
    title = {{LlamaIndex}},
    url = {https://github.com/jerryjliu/llama_index},
    year = {2022}
}
```