# offline-inference
Here are 4 public repositories matching this topic...
Run large language models like Qwen and LLaMA locally on Android for offline, private, real-time question answering and chat, powered by ONNX Runtime.
android · chatbot · android-app · on-device-ai · mobile-ai · onnx-runtime · huggingface-tokenizers · local-llm · qwen · llama3 · local-llm-integration · offline-inference
- Updated Sep 9, 2025 · Kotlin
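
For illustration, here is a minimal Python sketch of the same idea this Kotlin app implements: running a locally stored ONNX causal LM with ONNX Runtime, with no network access at inference time. The model directory and the session's input names are assumptions that depend on how the model was exported (e.g. with optimum); a real chat loop would also reuse the KV cache instead of doing a single forward pass.

```python
# Minimal offline single-step inference with ONNX Runtime.
# MODEL_DIR is a hypothetical local export; nothing is downloaded here.
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

MODEL_DIR = "./qwen-onnx"  # assumed pre-exported model + tokenizer files

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
session = ort.InferenceSession(f"{MODEL_DIR}/model.onnx")

enc = tokenizer("What is offline inference?", return_tensors="np")
# Input names ("input_ids", "attention_mask") match a typical optimum
# export and may differ for other exports.
logits = session.run(
    None,
    {"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]},
)[0]
next_token = int(np.argmax(logits[0, -1]))  # greedy choice for the next token
print(tokenizer.decode([next_token]))
```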
Run LLMs and SLMs on your hardware and browser.
desktop-app · webgl · electron-app · webgpu · privacy-enhancing-technologies · inference-engine · ondeviceai · ai-lab · on-device-ai · llamacpp · local-ai · ollama · node-llama-cpp · ollama-app · quantizedai · offline-inference · privateai
- Updated Nov 11, 2025 · TypeScript
A comprehensive toolkit for streamlining the offline inference process for LLMs across various models and libraries.
- Updated Feb 14, 2025 · Python
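
As a hedged sketch of the kind of call such a toolkit wraps, the snippet below generates text fully offline with Hugging Face transformers from a pre-downloaded snapshot. The model path is hypothetical; setting HF_HUB_OFFLINE=1 before import simply makes any accidental network access fail fast.

```python
# Fully offline generation with transformers, assuming the weights were
# fetched ahead of time into MODEL_DIR (a placeholder path).
import os
os.environ["HF_HUB_OFFLINE"] = "1"  # must be set before transformers is imported

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "./models/qwen2-0.5b-instruct"  # hypothetical local snapshot

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR)

inputs = tokenizer("Explain offline inference in one sentence.", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```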
A multimodal offline counterfeit-detection system (text + image + table).
docker · ocr · scikit-learn · transformers · tesseract · pytorch · clip · multimodal · offline-inference · counterfeit-detection
- Updated Sep 10, 2025 · Python
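
A minimal sketch of the text-plus-image half of such a pipeline, under the assumption that all models are cached locally: Tesseract OCRs the label text while CLIP scores the photo against reference captions. The paths, captions, and local CLIP snapshot are placeholders, not the repository's actual code.

```python
# Offline multimodal scoring: OCR with Tesseract, image-text similarity with CLIP.
from PIL import Image
import pytesseract
import torch
from transformers import CLIPModel, CLIPProcessor

CLIP_DIR = "./models/clip-vit-base-patch32"  # assumed pre-downloaded snapshot
image = Image.open("product.jpg")            # hypothetical input photo

ocr_text = pytesseract.image_to_string(image, lang="rus+eng")

model = CLIPModel.from_pretrained(CLIP_DIR)
processor = CLIPProcessor.from_pretrained(CLIP_DIR)

captions = ["genuine product packaging", "counterfeit product packaging"]
batch = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**batch).logits_per_image  # similarity of the image to each caption
probs = logits.softmax(dim=-1)[0]

print("OCR text:", ocr_text.strip())
for caption, p in zip(captions, probs):
    print(f"{caption}: {p:.3f}")
```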