huggingface/inference-playground
---
title: Inference Playground
emoji: 🔋
colorFrom: blue
colorTo: pink
sdk: docker
pinned: false
app_port: 3000
---
This application provides a user interface for interacting with various large language models, leveraging the `@huggingface/inference` library. It lets you easily test and compare models hosted on Hugging Face, connect to different third-party Inference Providers, and even configure your own custom OpenAI-compatible endpoints.

TL;DR: After cloning, run `pnpm i && pnpm run dev --open`
Before you begin, ensure you have the following installed:

- Node.js: Version 20 or later is recommended.
- pnpm: Install it globally via `npm install -g pnpm`.
- Hugging Face Account & Token: You'll need a free Hugging Face account and an access token to interact with models. Generate a token with at least `read` permissions from hf.co/settings/tokens.
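If you're unsure whether your Node.js install is new enough, a small shell check like the following can help. This is a minimal sketch; the `require_node_major` helper name is ours, not part of the project:

```shell
#!/bin/sh
# Check that a Node.js version string (e.g. "v20.11.0") meets a
# required major version. Helper name is illustrative only.
require_node_major() {
  version="$1"          # e.g. output of `node --version`
  required="$2"         # minimum major version, e.g. 20
  major="${version#v}"  # strip the leading "v"
  major="${major%%.*}"  # keep everything before the first dot
  [ "$major" -ge "$required" ]
}

# Compare your local version against the recommended minimum.
if require_node_major "$(node --version 2>/dev/null || echo v0)" 20; then
  echo "Node.js is new enough"
else
  echo "Please install Node.js 20 or later"
fi
```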
Follow these steps to get the Inference Playground running on your local machine:

1. Clone the Repository:

   ```bash
   git clone https://github.com/huggingface/inference-playground.git
   cd inference-playground
   ```

2. Install Dependencies:

   ```bash
   pnpm install
   ```

3. Start the Development Server:

   ```bash
   pnpm run dev
   ```

4. Access the Playground: open your web browser and navigate to `http://localhost:5173` (or the port indicated in your terminal).
- Model Interaction: Chat with a wide range of models available through Hugging Face Inference.
- Provider Support: Connect to third-party inference providers such as Together, Fireworks, and Replicate.
- Custom Endpoints: Add and use your own OpenAI-compatible API endpoints.
- Comparison View: Run prompts against two different models or configurations side-by-side.
- Configuration: Adjust generation parameters like temperature, max tokens, and top-p.
- Session Management: Save and load your conversation setups using Projects and Checkpoints.
- Code Snippets: Generate code snippets for various languages to replicate your inference calls.
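As a rough illustration of the custom-endpoint feature, the sketch below builds the request an OpenAI-compatible chat completions endpoint expects. The `buildChatRequest` helper, the endpoint URL, and the model name are our own illustrative placeholders, not part of the playground's code:

```javascript
// Build the URL and fetch options for an OpenAI-compatible
// chat completion request. Illustrative sketch only.
function buildChatRequest({ baseUrl, apiKey, model, messages, temperature = 0.7, maxTokens = 512 }) {
  return {
    url: `${baseUrl}/chat/completions`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model,
        messages,
        temperature,
        max_tokens: maxTokens,
      }),
    },
  };
}

// Example against a hypothetical local endpoint.
const { url, options } = buildChatRequest({
  baseUrl: "http://localhost:8080/v1", // hypothetical endpoint
  apiKey: "sk-example",                // placeholder key
  model: "my-local-model",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(url); // http://localhost:8080/v1/chat/completions
```

Sending the request would then be a single `fetch(url, options)` call.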
We hope you find the Inference Playground useful for exploring and experimenting with language models!