huggingface/chat-ui

Open source codebase powering the HuggingChat app

A chat interface for LLMs. It is a SvelteKit app and it powers the HuggingChat app on hf.co/chat.

  1. Quickstart
  2. Database Options
  3. Launch
  4. Optional Docker Image
  5. Extra parameters
  6. Building

Note

Chat UI only supports OpenAI-compatible APIs via OPENAI_BASE_URL and the /models endpoint. Provider-specific integrations (the legacy MODELS env var, GGUF discovery, embeddings, web-search helpers, etc.) are removed, but any service that speaks the OpenAI protocol (llama.cpp server, Ollama, OpenRouter, etc.) will work by default.
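
Any endpoint you point Chat UI at should answer the standard OpenAI model-listing call. A quick way to sanity-check a candidate base URL (the token below is a placeholder) is:

curl https://router.huggingface.co/v1/models \
  -H "Authorization: Bearer hf_xxx"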

Note

The old version is still available on the legacy branch.

Quickstart

Chat UI speaks to OpenAI-compatible APIs only. The fastest way to get running is with the Hugging Face Inference Providers router plus your personal Hugging Face access token.

Step 1 – Create .env.local:

OPENAI_BASE_URL=https://router.huggingface.co/v1
OPENAI_API_KEY=hf_************************
# Fill in once you pick a database option below
MONGODB_URL=

OPENAI_API_KEY can come from any OpenAI-compatible endpoint you plan to call. Pick the combo that matches your setup and drop the values into .env.local:

Provider | Example OPENAI_BASE_URL | Example key env
Hugging Face Inference Providers router | https://router.huggingface.co/v1 | OPENAI_API_KEY=hf_xxx (or HF_TOKEN, the legacy alias)
llama.cpp server (llama.cpp --server --api) | http://127.0.0.1:8080/v1 | OPENAI_API_KEY=sk-local-demo (any string works; llama.cpp ignores it)
Ollama (with OpenAI-compatible bridge) | http://127.0.0.1:11434/v1 | OPENAI_API_KEY=ollama
OpenRouter | https://openrouter.ai/api/v1 | OPENAI_API_KEY=sk-or-v1-...
Poe | https://api.poe.com/v1 | OPENAI_API_KEY=pk_...
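
For illustration, a fully local setup against a llama.cpp server could combine the values above in .env.local like this (the key is an arbitrary string because llama.cpp ignores it; the MongoDB URI assumes the local container described below):

OPENAI_BASE_URL=http://127.0.0.1:8080/v1
OPENAI_API_KEY=sk-local-demo
MONGODB_URL=mongodb://localhost:27017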

Check the root .env template for the full list of optional variables you can override.

Step 2 – Choose where MongoDB lives: either provision a managed cluster (for example MongoDB Atlas) or run a local container. Both approaches are described in Database Options. After you have the URI, drop it into MONGODB_URL (and, if desired, set MONGODB_DB_NAME).

Step 3 – Install and launch the dev server:

git clone https://github.com/huggingface/chat-ui
cd chat-ui
npm install
npm run dev -- --open

You now have Chat UI running against the Hugging Face router without needing to host MongoDB yourself.

Database Options

Chat history, users, settings, files, and stats all live in MongoDB. You can point Chat UI at any MongoDB 6/7 deployment.

MongoDB Atlas (managed)

  1. Create a free cluster at mongodb.com.
  2. Add your IP (or 0.0.0.0/0 for development) to the network access list.
  3. Create a database user and copy the connection string.
  4. Paste that string into MONGODB_URL in .env.local. Keep the default MONGODB_DB_NAME=chat-ui or change it per environment.

Atlas keeps MongoDB off your laptop, which is ideal for teams or cloud deployments.
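
The resulting .env.local entries look roughly like this (placeholders shown; use the exact connection string Atlas gives you):

MONGODB_URL=mongodb+srv://<user>:<password>@<cluster>.mongodb.net/?retryWrites=true&w=majority
MONGODB_DB_NAME=chat-ui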

Local MongoDB (container)

If you prefer to run MongoDB locally:

docker run -d -p 27017:27017 --name mongo-chatui mongo:latest

Then set MONGODB_URL=mongodb://localhost:27017 in .env.local. You can also supply MONGO_STORAGE_PATH if you want Chat UI’s fallback in-memory server to persist under a specific folder.
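
For example (the storage path below is an arbitrary choice, and only matters if you rely on the fallback in-memory server instead of a real MongoDB):

MONGODB_URL=mongodb://localhost:27017
# MONGO_STORAGE_PATH=./data/mongo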

Launch

After configuring your environment variables, start Chat UI with:

npm install
npm run dev

The dev server listens on http://localhost:5173 by default. Use npm run build / npm run preview for production builds.

Optional Docker Image

Prefer a containerized setup? You can run everything in one container as long as you supply a MongoDB URI (local or hosted):

docker run \
  -p 3000 \
  -e MONGODB_URL=mongodb://host.docker.internal:27017 \
  -e OPENAI_BASE_URL=https://router.huggingface.co/v1 \
  -e OPENAI_API_KEY=hf_*** \
  -v db:/data \
  ghcr.io/huggingface/chat-ui-db:latest

host.docker.internal lets the container reach a MongoDB instance on your host machine; swap it for your Atlas URI if you use the hosted option. All environment variables accepted in .env.local can be provided as -e flags.
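
For instance, the same container pointed at a hosted Atlas cluster (the connection string is a placeholder) would be:

docker run \
  -p 3000 \
  -e MONGODB_URL="mongodb+srv://<user>:<password>@<cluster>.mongodb.net" \
  -e OPENAI_BASE_URL=https://router.huggingface.co/v1 \
  -e OPENAI_API_KEY=hf_*** \
  ghcr.io/huggingface/chat-ui-db:latest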

Extra parameters

Theming

You can use a few environment variables to customize the look and feel of chat-ui. Their default values are:

PUBLIC_APP_NAME=ChatUI
PUBLIC_APP_ASSETS=chatui
PUBLIC_APP_DESCRIPTION="Making the community's best AI chat models available to everyone."
PUBLIC_APP_DATA_SHARING=

  • PUBLIC_APP_NAME The name used as a title throughout the app.
  • PUBLIC_APP_ASSETS Used to find logos & favicons in static/$PUBLIC_APP_ASSETS; current options are chatui and huggingchat.
  • PUBLIC_APP_DATA_SHARING Can be set to 1 to add a toggle in the user settings that lets your users opt in to data sharing with model creators.
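
For example, a rebranded deployment (the name and description below are made up) might override them like so:

PUBLIC_APP_NAME=AcmeChat
PUBLIC_APP_ASSETS=chatui
PUBLIC_APP_DESCRIPTION="Internal chat assistant for the Acme team."
PUBLIC_APP_DATA_SHARING=1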

Models

This build does not use the MODELS env var or GGUF discovery. Configure models via OPENAI_BASE_URL only; Chat UI will fetch ${OPENAI_BASE_URL}/models and populate the list automatically. Authorization uses OPENAI_API_KEY (preferred). HF_TOKEN remains a legacy alias.
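
If you are unsure which models Chat UI will pick up, query the same /models endpoint shown earlier; an OpenAI-style response is a JSON list of model objects, roughly shaped like this (the model IDs are illustrative), and each id in data is what populates the model list:

{
  "object": "list",
  "data": [
    { "id": "meta-llama/Llama-3.3-70B-Instruct", "object": "model" },
    { "id": "Qwen/Qwen2.5-72B-Instruct", "object": "model" }
  ]
}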

LLM Router (Optional)

Chat UI can perform client-side routing, using katanemo/Arch-Router-1.5B as the routing model, without running a separate router service. The UI exposes a virtual model alias called "Omni" (configurable) that, when selected, chooses the best route/model for each message.

  • Provide a routes policy JSON via LLM_ROUTER_ROUTES_PATH. No sample file ships with this branch, so you must point the variable to a JSON array you create yourself (for example, commit one in your project like config/routes.chat.json; a sketch follows this list). Each route entry needs name, description, primary_model, and optional fallback_models.
  • Configure the Arch router selection endpoint with LLM_ROUTER_ARCH_BASE_URL (OpenAI-compatible /chat/completions) and LLM_ROUTER_ARCH_MODEL (e.g. router/omni). The Arch call reuses OPENAI_API_KEY for auth.
  • Map other to a concrete route via LLM_ROUTER_OTHER_ROUTE (default: casual_conversation). If Arch selection fails, calls fall back to LLM_ROUTER_FALLBACK_MODEL.
  • The selection timeout can be tuned via LLM_ROUTER_ARCH_TIMEOUT_MS (default 10000).
  • Omni alias configuration: PUBLIC_LLM_ROUTER_ALIAS_ID (default omni), PUBLIC_LLM_ROUTER_DISPLAY_NAME (default Omni), and optional PUBLIC_LLM_ROUTER_LOGO_URL.
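
A minimal sketch of such a routes file and the matching variables; the route names, descriptions, model IDs, and the Arch base URL are placeholders you would adapt to your own endpoint.

config/routes.chat.json:

[
  {
    "name": "casual_conversation",
    "description": "Small talk, greetings, and everyday questions",
    "primary_model": "meta-llama/Llama-3.3-70B-Instruct",
    "fallback_models": ["Qwen/Qwen2.5-72B-Instruct"]
  },
  {
    "name": "code_generation",
    "description": "Writing, reviewing, and debugging code",
    "primary_model": "Qwen/Qwen2.5-Coder-32B-Instruct"
  }
]

.env.local:

LLM_ROUTER_ROUTES_PATH=config/routes.chat.json
LLM_ROUTER_ARCH_BASE_URL=https://router.huggingface.co/v1
LLM_ROUTER_ARCH_MODEL=router/omni
LLM_ROUTER_OTHER_ROUTE=casual_conversation
LLM_ROUTER_FALLBACK_MODEL=meta-llama/Llama-3.3-70B-Instruct
LLM_ROUTER_ARCH_TIMEOUT_MS=10000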

When you select Omni in the UI, Chat UI will:

  • Call the Arch endpoint once (non-streaming) to pick the best route for the last turns.
  • Emit RouterMetadata immediately (route and actual model used) so the UI can display it.
  • Stream from the selected model via your configured OPENAI_BASE_URL. On errors, it tries route fallbacks.

Building

To create a production version of your app:

npm run build

You can preview the production build with npm run preview.

To deploy your app, you may need to install an adapter for your target environment.
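
As a sketch, assuming a plain Node server target (other targets use their own adapters; see the SvelteKit docs), the usual pattern is:

npm install -D @sveltejs/adapter-node

and in svelte.config.js:

import adapter from '@sveltejs/adapter-node';

export default {
  kit: {
    adapter: adapter()
  }
};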

