Qiyuan-Ge/OpenAssistant

LLM as Agent


Quick Experience

openassistant app

darkassistant is the new version (tasks are completed by agents):
darkassistant app

Features

Let's ask the assistant first🤔:

(screenshot: the assistant's reply)

This project provides the following features:

1. Rapid Conversion of LLM to Agent🤖

Effortlessly transform your Language Model (LLM) into an Agent.

2. LLM Proficiency Testing Tool🛠️

Explore and evaluate the capabilities of your Language Model through the integrated testing tool.

3. Open Assistant WebUI

Experience the convenience of the Open Assistant Web User Interface (WebUI).

Watch the demo (Vicuna v1.5)

  • YouTube: Watch the video
  • BiliBili: Watch the video

Everyone has their own AI assistant

(four screenshots of the assistant web UI)

Supported models

Models on Hugging Face:

  • vicuna
  • airoboros
  • koala
  • alpaca
  • chatglm
  • chatglm2
  • dolly_v2
  • oasst_pythia
  • oasst_llama
  • tulu
  • stablelm
  • baize
  • rwkv
  • openbuddy
  • phoenix
  • claude
  • mpt-7b-chat
  • mpt-30b-chat
  • mpt-30b-instruct
  • bard
  • billa
  • redpajama-incite
  • h2ogpt
  • Robin
  • snoozy
  • manticore
  • falcon
  • polyglot_changgpt
  • tigerbot
  • xgen
  • internlm-chat
  • starchat
  • baichuan-chat
  • llama-2
  • cutegpt
  • open-orca
  • qwen-7b-chat
  • aquila-chat
  • ...

How to use

1. Installation

git clone https://github.com/Qiyuan-Ge/OpenAssistant.git
cd OpenAssistant
pip install -r requirements.txt
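
Optionally, you can install the requirements inside a dedicated virtual environment to keep dependencies isolated (a minimal sketch, not part of the repository's own instructions):

python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt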

2. Starting the server

First, launch the controller:

python3 -m fastchat.serve.controller

Then, launch the model worker(s):

python3 -m fastchat.serve.multi_model_worker \
    --model-path model_math \
    --model-names "gpt-3.5-turbo,text-davinci-003" \
    --model-path embedding_model_math \
    --model-names "text-embedding-ada-002"
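
If you only need a single chat model, FastChat's plain model worker can be launched instead of the multi-model worker. This is a sketch under assumptions: the Vicuna checkpoint path and the registered model name are examples, not part of this repository:

python3 -m fastchat.serve.model_worker \
    --model-path lmsys/vicuna-7b-v1.5 \
    --model-names "gpt-3.5-turbo"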

Finally, launch the RESTful API server:

python3 -m fastchat.serve.openai_api_server --host 0.0.0.0 --port 6006

You should see terminal output like:

INFO:     Started server process [1301]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:6006 (Press CTRL+C to quit)

See more details in https://github.com/lm-sys/FastChat/blob/main/docs/openai_api.md
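
Once the API server is running, you can sanity-check it against the OpenAI-compatible endpoints before starting the web UI. This is a sketch; the model name in the request must match one of the names registered with --model-names above:

curl http://0.0.0.0:6006/v1/models

curl http://0.0.0.0:6006/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello!"}]}'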

3. Starting the web UI

streamlit run main.py

Then replace the API Base with your API base (in this case, http://0.0.0.0:6006/v1).
(screenshot: setting the API Base in the web UI)
