This repository has code to securely run SLMs (small language models) locally using Node.js (server side) or inside the browser.
code2k13/onnx_javascript_browser_inference
This repository is part of a playbook for experiments on fine-tuning small language models using LoRA, exporting them to ONNX, and running them locally with ONNX-compatible runtimes: JavaScript (Node.js) and WASM (browser).
- Clone the repository.
- From https://huggingface.co/code2022/SmolLM2-135M-Instruct-Paraphrase/tree/main copy `model.onnx` to the `model_files` directory of the repository.
- Run `npm install`.
- Simply run `node app.js`.
This is what you should see
- Simply access `web.html` from a local server (for example, http://localhost:3000/web.html).
This is what you should see
- https://www.kaggle.com/code/finalepoch/fine-tuning-smollm2-135m-for-paraphrasing-tasks
- https://www.kaggle.com/code/finalepoch/smollm-360-lora-onnx-inference
@misc{allal2024SmolLM,
  title={SmolLM - blazingly fast and remarkably powerful},
  author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Leandro von Werra and Thomas Wolf},
  year={2024},
}

