Retrieval-augmented generation (RAG) is a technique that enables large language models (LLMs) to retrieve and incorporate new information.[1] With RAG, the LLM consults a specified set of documents before responding to user queries. These documents supplement information from the LLM's pre-existing training data.[2] This allows LLMs to use domain-specific and/or updated information that is not available in the training data.[2] For example, this helps LLM-based chatbots access internal company data or generate responses based on authoritative sources.
RAG improves large language models (LLMs) by incorporating information retrieval before generating responses.[3] Unlike traditional LLMs that rely on static training data, RAG pulls relevant text from databases, uploaded documents, or web sources.[1] According to Ars Technica, "RAG is a way of improving LLM performance, in essence by blending the LLM process with a web search or other document look-up process to help LLMs stick to the facts." This method helps reduce AI hallucinations,[3] which have caused chatbots to describe policies that don't exist or to recommend nonexistent legal cases to lawyers who are looking for citations to support their arguments.[4]
RAG also reduces the need to retrain LLMs with new data, saving on computational and financial costs.[1] Beyond efficiency gains, RAG allows LLMs to include sources in their responses, so users can verify the cited material. This provides greater transparency, as users can cross-check retrieved content to ensure accuracy and relevance.
The term RAG was first introduced in a 2020 research paper.[3]
LLMs can provide incorrect information. For example, when Google first demonstrated its LLM tool "Google Bard", the LLM provided incorrect information about the James Webb Space Telescope. This error contributed to a $100 billion decline in the company's stock value.[4] RAG is used to prevent these errors, but it does not solve all the problems. For example, LLMs can generate misinformation even when pulling from factually correct sources if they misinterpret the context. MIT Technology Review gives the example of an AI-generated response stating, "The United States has had one Muslim president, Barack Hussein Obama." The model retrieved this from an academic book rhetorically titled Barack Hussein Obama: America's First Muslim President? The LLM did not "know" or "understand" the context of the title, generating a false statement.[2]
LLMs with RAG are programmed to prioritize new information. This technique has been called "prompt stuffing." Without prompt stuffing, the LLM's input is generated by a user; with prompt stuffing, additional relevant context is added to this input to guide the model’s response. This approach provides the LLM with key information early in the prompt, encouraging it to prioritize the supplied data over pre-existing training knowledge.[5]
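The following is a minimal sketch of prompt stuffing, assuming the relevant passages have already been retrieved; the function name, prompt wording, and example passage are illustrative assumptions, not any particular vendor's API.

```python
# Minimal sketch of "prompt stuffing": retrieved passages are prepended to the
# user's question so the model is encouraged to answer from the supplied
# context rather than from its training data alone. Wording is illustrative.
def build_stuffed_prompt(user_question: str, retrieved_passages: list[str]) -> str:
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(retrieved_passages))
    return (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {user_question}\nAnswer:"
    )

prompt = build_stuffed_prompt(
    "What is our refund window?",
    ["Refunds are accepted within 30 days of purchase with a receipt."],
)
# `prompt` is what gets sent to the LLM instead of the raw user question.
```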
Retrieval-augmented generation (RAG) enhances large language models (LLMs) by incorporating an information-retrieval mechanism that allows models to access and utilize additional data beyond their original training set. Ars Technica notes that "when new information becomes available, rather than having to retrain the model, all that's needed is to augment the model's external knowledge base with the updated information" ("augmentation").[4] IBM states that "in the generative phase, the LLM draws from the augmented prompt and its internal representation of its training data to synthesize an engaging answer tailored to the user in that instant".[1]

Typically, the data to be referenced is converted into LLM embeddings, numerical representations in the form of a large vector space. RAG can be used on unstructured (usually text), semi-structured, or structured data (for example, knowledge graphs). These embeddings are then stored in a vector database to allow for document retrieval.
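As an illustration of this indexing step, the sketch below converts a few documents into vectors and stores them in an in-memory list standing in for a vector database; the hashed bag-of-words embed() is a toy stand-in for a trained embedding model, and the example documents are made up.

```python
# Toy sketch of the indexing step: convert documents to embedding vectors and
# store them for later retrieval. The hashed bag-of-words embed() keeps the
# example self-contained; real systems use a trained embedding model and a
# dedicated vector database.
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0          # hashing trick: one slot per token
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec          # unit length so dot product = cosine

documents = [
    "The warranty covers manufacturing defects for two years.",
    "Support is available by email on weekdays.",
]
index = [(embed(doc), doc) for doc in documents]  # stand-in for a vector database
```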
Given a user query, a document retriever is first called to select the most relevant documents that will be used to augment the query.[2][3] This relevance comparison can be done using a variety of methods, which depend in part on the type of indexing used.[1]
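Continuing the toy index above, a minimal retriever can score every stored document by cosine similarity with the embedded query and return the top matches; production retrievers typically use approximate nearest-neighbor search rather than this linear scan.

```python
# Minimal retriever over the toy index above: embed the query, rank documents
# by cosine similarity (dot product of unit vectors), return the top-k texts.
def retrieve(query: str, index, k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(index, key=lambda pair: float(q @ pair[0]), reverse=True)
    return [text for _, text in ranked[:k]]

top_docs = retrieve("How long does the warranty last?", index)
```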
The model feeds this relevant retrieved information into the LLM via prompt engineering of the user's original query. Newer implementations (as of 2023) can also incorporate specific augmentation modules with abilities such as expanding queries into multiple domains and using memory and self-improvement to learn from previous retrievals.
Finally, the LLM can generate output based on both the query and the retrieved documents.[2][6] Some models incorporate extra steps to improve output, such as the re-ranking of retrieved information, context selection, and fine-tuning.
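Tying the earlier sketches together, the retrieve-augment-generate flow can be expressed in a few lines; `llm_generate` is a placeholder for whatever model API is actually used, not a real library call.

```python
# End-to-end sketch of the flow described above: retrieval, augmentation of the
# prompt, then generation. Reuses the toy retrieve() and build_stuffed_prompt()
# from the earlier sketches.
def llm_generate(prompt: str) -> str:
    raise NotImplementedError("placeholder: call the LLM of your choice here")

def answer(user_question: str, index) -> str:
    docs = retrieve(user_question, index)                 # retrieval
    prompt = build_stuffed_prompt(user_question, docs)    # augmentation
    return llm_generate(prompt)                           # generation
```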
Improvements to the basic process above can be applied at different stages in the RAG flow.
These methods focus on the encoding of text as either dense or sparse vectors. Sparse vectors, which encode the identity of a word, are typically dictionary-length and contain mostly zeros. Dense vectors, which encode meaning, are more compact and contain fewer zeros. Various enhancements can improve the way similarities are calculated in the vector stores (databases).[7]
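To make the distinction concrete, the sketch below contrasts a dictionary-length sparse count vector with a short dense vector; the vocabulary and the dense values are made-up numbers standing in for a real lexicon and a learned embedding.

```python
# Illustration of sparse versus dense encodings. The sparse vector has one slot
# per vocabulary word and is mostly zeros; the dense vector is short and every
# dimension carries meaning. All values here are illustrative.
import numpy as np

vocabulary = ["refund", "warranty", "email", "defect", "weekday", "receipt"]

def sparse_encode(text: str) -> np.ndarray:
    counts = np.zeros(len(vocabulary))          # dictionary-length, mostly zeros
    for token in text.lower().split():
        if token in vocabulary:
            counts[vocabulary.index(token)] += 1
    return counts

sparse = sparse_encode("warranty covers defect")   # -> [0, 1, 0, 1, 0, 0]
dense = np.array([0.12, -0.83, 0.45, 0.31])        # made-up dense embedding
```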
These methods aim to enhance the quality of document retrieval in vector databases.
By redesigning the language model with the retriever in mind, a network 25 times smaller can achieve perplexity comparable to that of its much larger counterparts.[14] Because it is trained from scratch, this method (Retro) incurs the high cost of training runs that the original RAG scheme avoided. The hypothesis is that by giving domain knowledge during training, Retro needs less focus on the domain and can devote its smaller weight resources only to language semantics.
It has been reported that Retro is not reproducible, so modifications were made to address this. The more reproducible version is called Retro++ and includes in-context RAG.[15]
Chunking involves various strategies for breaking the data into smaller pieces (chunks) that are then converted into vectors, so the retriever can find fine-grained details in them.
Three common types of chunking strategies are fixed-length chunks with overlap, syntax-based chunks (for example, splitting at sentence boundaries), and file-format-based chunks that follow the natural structure of the document; a sketch of the first strategy is shown below.[citation needed]
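The sketch below illustrates fixed-length chunking with overlap; the chunk size and overlap are arbitrary choices for illustration.

```python
# Fixed-length chunking with overlap: consecutive chunks share `overlap`
# characters so text cut at a boundary still appears intact in at least one
# chunk. Size and overlap values are arbitrary.
def chunk_fixed(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    step = size - overlap
    return [text[start:start + size] for start in range(0, len(text), step)]
```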
Sometimes vector database searches can miss key facts needed to answer a user's question. One way to mitigate this is to do a traditional text search, add those results to the text chunks linked to the retrieved vectors from the vector search, and feed the combined hybrid text into the language model for generation.[citation needed]
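A hedged sketch of this hybrid approach follows, reusing the toy retrieve() from the earlier sketches; keyword_search() is a crude stand-in for any traditional full-text search (such as BM25) and simply counts overlapping query terms.

```python
# Hybrid retrieval sketch: run a keyword search and a vector search, merge and
# deduplicate the results, and pass the combined text to the generator.
def keyword_search(query: str, documents: list[str], k: int = 2) -> list[str]:
    terms = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(terms & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def hybrid_retrieve(query: str, documents: list[str], index, k: int = 2) -> list[str]:
    combined = keyword_search(query, documents, k) + retrieve(query, index, k)
    seen, merged = set(), []
    for doc in combined:
        if doc not in seen:                      # deduplicate, preserve order
            seen.add(doc)
            merged.append(doc)
    return merged
```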
RAG systems are commonly evaluated using benchmarks designed to test retrievability, retrieval accuracy, and generative quality. Popular datasets include BEIR, a suite of information retrieval tasks across diverse domains, and Natural Questions or Google QA for open-domain QA.[citation needed]
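As a simple example of how the retrieval side is scored, the sketch below computes recall@k, the fraction of queries for which at least one relevant document appears in the top-k results; benchmark suites such as BEIR also report richer metrics such as nDCG.

```python
# recall@k over a batch of queries: a query counts as a hit if any of its
# relevant document IDs appears among the top-k retrieved IDs.
def recall_at_k(retrieved: list[list[str]], relevant: list[set[str]], k: int) -> float:
    hits = sum(1 for got, rel in zip(retrieved, relevant) if rel & set(got[:k]))
    return hits / len(relevant)

# Example: 1 of 2 queries retrieves a relevant document in its top-2 -> 0.5
score = recall_at_k([["d1", "d7"], ["d3", "d4"]], [{"d7"}, {"d9"}], k=2)
```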
RAG does not prevent hallucinations in LLMs. According to Ars Technica, "It is not a direct solution because the LLM can still hallucinate around the source material in its response."[4]
While RAG improves the accuracy of large language models (LLMs), it does not eliminate all challenges. One limitation is that while RAG reduces the need for frequent model retraining, it does not remove it entirely. Additionally, LLMs may struggle to recognize when they lack sufficient information to provide a reliable response. Without specific training, models may generate answers even when they should indicate uncertainty. According to IBM, this issue can arise when the model lacks the ability to assess its own knowledge limitations.[1]
RAG systems may retrieve factually correct but misleading sources, leading to errors in interpretation. In some cases, an LLM may extract statements from a source without considering its context, resulting in an incorrect conclusion. Additionally, when faced with conflicting information, RAG models may struggle to determine which source is accurate. In the worst case, the model may combine details from multiple sources, producing responses that merge outdated and updated information in a misleading manner. According to the MIT Technology Review, these issues occur because RAG systems may misinterpret the data they retrieve.[2]