intentor/llm-workbench


A RAG-enabled workbench to index files and chat with an LLM about their contents.

How it works

[Overview diagram]

Setup

Download and install Ollama, a framework to interact with LLMs.

After installation, run Ollama:

ollama serve

To configure the model and the application, run the following in a new terminal:

make setup

You can copy sample.env as .env in the app root to customize application settings.
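For example, assuming a POSIX shell, the copy can be done from the app root with:

cp sample.env .env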

Running the app

make

A page to load context files and interact with the LLM will open in your browser.

Features

  • One-shot prompts to the LLM.
  • File indexing for context querying.
  • Prompt tools to assist with prompt construction, context gathering, and response generation.
  • Replaying of a set of prompts, either from the current prompt history or a text file.
  • Displaying of all prompts and responses in the chat container.
  • Downloading of all or only the last chat messages as text or HTML files.

Prompt tools

Tools are used directly in the chat message input box.

Tool | Usage
---- | -----
:<label> | Add a label to a prompt for later reference. Labels should contain only lowercase alphanumeric characters and hyphens.
{response:last} | Replaced by the last response in the chat history.
{response:label:<label>} | Replaced by the labeled response in the chat history.
/context | Query chunks from uploaded files.
/context?top-k=<number> | Set the number of chunks to return.
/context?file="<file name with extension>" | Query chunks only from the specified file.
/rag <prompt> | A shortcut to query the context and ask the LLM to use it to answer the prompt.
/endpoint <url> | Perform a GET to the provided URL.
/echo | Echo the prompt without sending it to the LLM. {response:*} placeholders can be used for replacements.
/template | Get the last response as JSON and apply it to a Jinja based template, allowing custom formatting of the response without relying on the LLM. The JSON data is available in the context variable. Refer to the Template usage section for details.

Prompt construction

:<label> /<tool> <prompt text, can contain {response:*} for replacement>
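For instance, a hypothetical prompt combining a label with the /rag tool might look like:

:project-summary /rag Summarize the main topics covered in the uploaded files.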

Template usage

Given a previous JSON response, it's possible to use the /template tool to create a template that will be processed using the JSON data.

Using the following JSON as the last response in the prompt history:

{"name":"User"}

The prompt below will generate a response using the JSON as input data in the context variable:

/template Name: {{context.name}}

The response to the prompt will be:

Name: User

Quick Cheat Sheet

Setting a variable

{% set variable_name = context %}

Date/time format (from ISO 8601)

{{context.field_date|parse_date|format_date("%d/%m/%y %H:%M:%S")}}

Conditional

{% if context.field_boolean %}Value if True{% else %}Value if False{% endif %}

Loop

 {% for item in context.list %} {{item.field}} {% endfor %}
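As an illustrative combination of the snippets above (the JSON and field names below are hypothetical), assume the last response was:

{"items":[{"name":"Task A","done":true},{"name":"Task B","done":false}]}

The following prompt loops over the list and uses the conditional to print a status for each item:

/template {% for item in context.items %}{{item.name}}: {% if item.done %}done{% else %}pending{% endif %} {% endfor %}

producing a response along the lines of:

Task A: done Task B: pending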

Documentation

Please refer to the Jinja and jinja2_iso8601 documentation for more details on templating.

API mocking

If you want to use API mocking to test context retrieval from endpoints, you can use the JSON Server package.

With Node.js/NPM installed, run the following command to install dependencies:

make setup/server

Add the JSON you want to use as mock data to the db.json file and run the server in a new terminal with the command below:

make run/server

The server will be accessible at http://localhost:3000/, with the root nodes of the JSON file exposed as URL paths (e.g. the demo db.json file has a data root node, which is accessible at http://localhost:3000/data).
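With the mock server running, the /endpoint prompt tool described above can be used to fetch that data into the chat, for example:

/endpoint http://localhost:3000/data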

Changing the model

By default, the workbench uses the Llama3.1 LLM model.

To change the LLM model used by the workbench, update the FROM parameter in the contextualized_assistant.model file to a model available in the Ollama library.
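For example, to switch to another model from the Ollama library (the model name below is only illustrative), the FROM line would read:

FROM mistral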

Using OpenRouter for generation

It's possible to use OpenRouter to request LLM generation from prompts, replacing the default Ollama generator.

To set up OpenRouter, update the config.py settings below (a sketch of the resulting settings follows the list):

  • OPEN_ROUTER_KEY: Enter your OpenRouter API key.
  • MODEL_GENERATOR: Change to OPENROUTER.
  • MODEL_LLM: Enter the model name from OpenRouter.
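A minimal sketch of how these settings might look in config.py (the key and model name are placeholders, and the exact value format is an assumption):

OPEN_ROUTER_KEY = "<your OpenRouter API key>"    # placeholder, do not commit real keys
MODEL_GENERATOR = "OPENROUTER"                   # switch generation from Ollama to OpenRouter
MODEL_LLM = "meta-llama/llama-3.1-70b-instruct"  # example model name from OpenRouter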

Note

The embedding model still requires Ollama.

Known issues

  1. The buttons on the screen are not always disabled during operations. Be aware that clicking different buttons while an action is in progress may lead to unintended consequences.
  2. The download of the chat history may not work on the first attempt.
  3. Complex Excel/.xlsx files may not be loadable due to format incompatibilities with openpyxl.
  4. During replay, scrolling may not be automatic.
