Local AI Assistant on Android

timmyy123/LLM-Hub


LLM Hub is an open-source Android app for on-device LLM chat and image generation. It's optimized for mobile usage (CPU/GPU/NPU acceleration) and supports multiple model formats so you can run powerful models locally and privately.

Download

Get it on Google Play

📸 Screenshots

AI Models · AI Features · Chat Interface

🚀 Features

🛠️ Six AI Tools

  • 💬 Chat: Multi-turn conversations with RAG memory, web search, TTS auto-readout, and multimodal input (text, images, audio)
  • ✍️ Writing Aid: Summarize, expand, rewrite, improve grammar, or generate code from descriptions
  • 🎨 Image Generator: Create images from text prompts using Stable Diffusion 1.5, with a swipeable gallery for variations
  • 🌍 Translator: Translate text, images (OCR), and audio across 50+ languages; works offline
  • 🎙️ Transcriber: Convert speech to text with on-device processing
  • 🛡️ Scam Detector: Analyze messages and images for phishing, with risk assessment

🔐 Privacy First

  • 100% on-device processing - no internet required for inference
  • Zero data collection - conversations never leave your device
  • No accounts, no tracking - completely private
  • Open-source - fully transparent

⚡ Advanced Capabilities

  • GPU/NPU acceleration for fast performance
  • Text-to-Speech with auto-readout
  • RAG with global memory for enhanced responses
  • Import custom models (.task, .litertlm, .mnn, .gguf)
  • Direct downloads from HuggingFace
  • Interface localized in 16 languages

Quick Start

  1. Download from Google Play or build from source
  2. Open Settings → Download Models → Download or Import a model
  3. Select a model and start chatting or generating images

Supported Model Families (summary)

  • Gemma (LiteRT Task)
  • Llama (Task + GGUF variants)
  • Phi (LiteRT LM)
  • LiquidAI LFM (LFM 2.5 1.2B + LFM VL 1.6B vision-enabled)
  • Ministral / Mistral family (GGUF / ONNX)
  • IBM Granite (GGUF)

Model Formats

  • Task / LiteRT (.task): MediaPipe/LiteRT optimized models (GPU/NPU capable)
  • LiteRT LM (.litertlm): LiteRT language models
  • GGUF (.gguf): Quantized models; CPU inference powered by Nexa SDK. Some vision-capable GGUF models require an additional mmproj (multimodal projector) file
  • ONNX (.onnx): Cross-platform model runtime
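The format list above maps each file extension to a different runtime. A minimal sketch of that dispatch, in Kotlin; the enum and function names here are illustrative, not the app's actual API:

```kotlin
// Hypothetical sketch: route an imported model file to the runtime that,
// per the format list above, handles its extension. Names are illustrative.
enum class ModelRuntime { MEDIAPIPE_LITERT, LITERT_LM, NEXA_GGUF, ONNX_RUNTIME, MNN }

fun runtimeForFile(fileName: String): ModelRuntime? =
    when (fileName.substringAfterLast('.', "").lowercase()) {
        "task"     -> ModelRuntime.MEDIAPIPE_LITERT  // MediaPipe/LiteRT task bundles
        "litertlm" -> ModelRuntime.LITERT_LM         // LiteRT language models
        "gguf"     -> ModelRuntime.NEXA_GGUF         // quantized models via Nexa SDK
        "onnx"     -> ModelRuntime.ONNX_RUNTIME      // cross-platform ONNX runtime
        "mnn"      -> ModelRuntime.MNN               // MNN models (image generation)
        else       -> null                           // unsupported extension
    }
```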

GGUF Compatibility Notes

  • Not all Android devices can load GGUF models in this app.
  • GGUF loading/runtime depends on Nexa SDK native libraries and device/ABI support; on unsupported devices, GGUF model loading can fail even if the model file is valid.
  • In this app, the GGUF NPU option is intentionally shown only for Snapdragon 8 Gen 4-class devices.
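The kind of device gating described above can be sketched as a pure function so it stays testable off-device. On Android 12+ the SoC name is available via `android.os.Build.SOC_MODEL`; the function and the specific part-number check below are assumptions for illustration, not the app's actual logic:

```kotlin
// Hypothetical sketch: surface the GGUF NPU option only on Snapdragon
// 8 Gen 4-class devices. The caller would pass android.os.Build.SOC_MODEL
// (API 31+). The matched strings are assumptions, not the app's real list.
fun isGgufNpuEligible(socModel: String): Boolean {
    val soc = socModel.uppercase()
    // "SM8750" is understood to be the Snapdragon 8 Gen 4 / 8 Elite part family
    return soc.startsWith("SM8750") || soc.contains("8 ELITE")
}
```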

Importing models

  • Settings → Download Models → Import Model → choose .task, .litertlm, .mnn, .gguf, or .onnx
  • The full model list and download links live in app/src/.../data/ModelData.kt (the README does not exhaustively list variants)

Technology

  • Kotlin + Jetpack Compose (Material 3)
  • LLM Runtime: MediaPipe, LiteRT, Nexa SDK
  • Image Gen: MNN / Qualcomm QNN
  • Quantization: INT4/INT8

Acknowledgments

  • Nexa SDK — GGUF model inference support (credited in the in-app About screen) ⚡
  • Google, Meta, Microsoft, IBM, LiquidAI, Mistral, HuggingFace — model and tooling contributions

Development Setup

Building from source

git clone https://github.com/timmyy123/LLM-Hub.git
cd LLM-Hub
./gradlew assembleDebug
./gradlew installDebug

Setting up Hugging Face Token for Development

To use private or gated models, add your HuggingFace token to local.properties (do NOT commit this file):

HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Save and sync Gradle in Android Studio; the app will read BuildConfig.HF_TOKEN at build time.
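A minimal sketch of how local.properties could be wired into BuildConfig with the Gradle Kotlin DSL; the field name matches the README, but the wiring shown is an assumption about the build script, not a copy of it:

```kotlin
// app/build.gradle.kts (illustrative fragment)
import java.util.Properties

val localProps = Properties().apply {
    val f = rootProject.file("local.properties")
    if (f.exists()) f.inputStream().use { load(it) }
}

android {
    buildFeatures { buildConfig = true }
    defaultConfig {
        // Exposes HF_TOKEN from local.properties as BuildConfig.HF_TOKEN;
        // falls back to an empty string when the key is absent
        buildConfigField(
            "String", "HF_TOKEN",
            "\"${localProps.getProperty("HF_TOKEN", "")}\""
        )
    }
}
```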

Contributing

  • Fork → branch → PR. See CONTRIBUTING.md (or open an issue/discussion if unsure).

License

  • MIT (see LICENSE)

Support

  • Questions or problems? Open an issue or discussion on GitHub.
Notes

  • This README is intentionally concise — consult ModelData.kt for exact model variants, sizes, and format details.

Star History

Star History Chart


