# LLM Hub

Local AI Assistant on Android
LLM Hub is an open-source Android app for on-device LLM chat and image generation. It's optimized for mobile usage (CPU/GPU/NPU acceleration) and supports multiple model formats so you can run powerful models locally and privately.

| Tool | Description |
|---|---|
| 💬 Chat | Multi-turn conversations with RAG memory, web search, TTS auto-readout, and multimodal input (text, images, audio) |
| ✍️ Writing Aid | Summarize, expand, rewrite, improve grammar, or generate code from descriptions |
| 🎨 Image Generator | Create images from text prompts using Stable Diffusion 1.5 with swipeable gallery for variations |
| 🌍 Translator | Translate text, images (OCR), and audio across 50+ languages - works offline |
| 🎙️ Transcriber | Convert speech to text with on-device processing |
| 🛡️ Scam Detector | Analyze messages and images for phishing with risk assessment |
- 100% on-device processing - no internet required for inference
- Zero data collection - conversations never leave your device
- No accounts, no tracking - completely private
- Open-source - fully transparent
- GPU/NPU acceleration for fast performance
- Text-to-Speech with auto-readout
- RAG with global memory for enhanced responses
- Import custom models (.task, .litertlm, .mnn, .gguf)
- Direct downloads from HuggingFace
- Interface available in 16 languages
## Quick Start
- Download from Google Play or build from source
- Open Settings → Download Models → Download or Import a model
- Select a model and start chatting or generating images
## Supported Model Families (summary)
- Gemma (LiteRT Task)
- Llama (Task + GGUF variants)
- Phi (LiteRT LM)
- LiquidAI LFM (LFM 2.5 1.2B + LFM VL 1.6B vision-enabled)
- Ministral / Mistral family (GGUF / ONNX)
- IBM Granite (GGUF)
## Model Formats
- Task / LiteRT (.task): MediaPipe/LiteRT optimized models (GPU/NPU capable)
- LiteRT LM (.litertlm): LiteRT language models
- GGUF (.gguf): Quantized models; CPU inference powered by the Nexa SDK. Some vision-capable GGUF models require an additional `mmproj` vision projector file
- ONNX (.onnx): Cross-platform model runtime
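For illustration, routing an imported file to one of the formats above can be as simple as an extension check. This is a hypothetical sketch; the names (`ModelFormat`, `formatFor`) are illustrative and not the app's actual API (see `ModelData.kt` for the real model metadata):

```kotlin
// Hypothetical sketch: map a model file's extension to a runtime format.
// ModelFormat and formatFor are illustrative names, not LLM Hub's real API.
enum class ModelFormat { TASK, LITERT_LM, MNN, GGUF, ONNX }

fun formatFor(fileName: String): ModelFormat? =
    when (fileName.substringAfterLast('.').lowercase()) {
        "task" -> ModelFormat.TASK          // MediaPipe/LiteRT task bundle
        "litertlm" -> ModelFormat.LITERT_LM // LiteRT language model
        "mnn" -> ModelFormat.MNN            // MNN runtime model
        "gguf" -> ModelFormat.GGUF          // Nexa SDK, CPU inference
        "onnx" -> ModelFormat.ONNX          // ONNX runtime
        else -> null                        // unsupported or unknown format
    }

fun main() {
    println(formatFor("gemma-2b.task"))      // TASK
    println(formatFor("llama.Q4_K_M.gguf"))  // GGUF
    println(formatFor("model.bin"))          // null
}
```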
## GGUF Compatibility Notes
- Not all Android devices can load GGUF models in this app.
- GGUF loading/runtime depends on Nexa SDK native libraries and device/ABI support; on unsupported devices, GGUF model loading can fail even if the model file is valid.
- In this app, the GGUF NPU option is intentionally shown only for Snapdragon 8 Gen 4-class devices.
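As a rough illustration of the ABI dependence described above, an app-side pre-check might gate GGUF loading on 64-bit ARM support. This is an assumption-laden sketch (`ggufLikelySupported` is a hypothetical helper); the authoritative checks live in the Nexa SDK's native layer and may differ:

```kotlin
// Illustrative sketch of a device-capability gate for GGUF loading.
// On Android, the ABI list would come from Build.SUPPORTED_ABIS; here it is a
// plain parameter so the function can be exercised off-device.
fun ggufLikelySupported(supportedAbis: List<String>): Boolean =
    // Assumption: the GGUF native libraries target 64-bit ARM devices.
    "arm64-v8a" in supportedAbis

fun main() {
    println(ggufLikelySupported(listOf("arm64-v8a", "armeabi-v7a"))) // true
    println(ggufLikelySupported(listOf("x86", "x86_64")))            // false
}
```

Even when such a pre-check passes, loading can still fail on unsupported devices, as noted above.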
## Importing Models
- Settings → Download Models → Import Model → choose `.task`, `.litertlm`, `.mnn`, `.gguf`, or `.onnx`
- The full model list and download links live in `app/src/.../data/ModelData.kt` (do not exhaustively list variants in the README)
## Technology
- Kotlin + Jetpack Compose (Material 3)
- LLM Runtime: MediaPipe, LiteRT, Nexa SDK
- Image Gen: MNN / Qualcomm QNN
- Quantization: INT4/INT8
## Acknowledgments
- Nexa SDK — GGUF model inference support (credit shown in-app About) ⚡
- Google, Meta, Microsoft, IBM, LiquidAI, Mistral, HuggingFace — model and tooling contributions
## Development Setup
```shell
git clone https://github.com/timmyy123/LLM-Hub.git
cd LLM-Hub
./gradlew assembleDebug
./gradlew installDebug
```

To use private or gated models, add your HuggingFace token to `local.properties` (do NOT commit this file):

```
HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

Save and sync Gradle in Android Studio; the app reads `BuildConfig.HF_TOKEN` at build time.
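The wiring from `local.properties` to `BuildConfig.HF_TOKEN` typically looks like the following Gradle Kotlin DSL fragment. This is a generic sketch, not the project's exact build script; check the module's `build.gradle.kts` for the real configuration:

```kotlin
// Module build.gradle.kts — illustrative sketch, not LLM Hub's exact script.
// Reads HF_TOKEN from local.properties and exposes it as BuildConfig.HF_TOKEN.
import java.util.Properties

val localProps = Properties().apply {
    val f = rootProject.file("local.properties")
    if (f.exists()) f.inputStream().use { load(it) }
}

android {
    buildFeatures { buildConfig = true }
    defaultConfig {
        // Falls back to an empty string when no token is configured.
        buildConfigField(
            "String", "HF_TOKEN",
            "\"${localProps.getProperty("HF_TOKEN", "")}\""
        )
    }
}
```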
## Contributing
- Fork → branch → PR. See CONTRIBUTING.md (or open an issue/discussion if unsure).
## License
- MIT (see LICENSE)
## Support
- Email: timmyboy0623@gmail.com
- Issues & Discussions: GitHub
## Notes

- This README is intentionally concise; consult `ModelData.kt` for exact model variants, sizes, and format details.