# PocketLLM

🚀 A powerful Flutter-based AI chat application that lets you run LLMs directly on your mobile device or connect to local model servers. Features offline model execution, Ollama/LLMStudio integration, and a beautiful modern UI. Privacy-focused, cross-platform, and fully open source.

PocketLLM is a cross-platform assistant that pairs a Flutter application with a FastAPI backend to deliver secure, low-latency access to large language models. Users can connect their own provider accounts, browse real-time catalogues, import models, and chat across mobile and desktop targets with a shared experience.


## Overview

PocketLLM focuses on three pillars:

  1. Unified catalogue – aggregate models from OpenAI, Groq, OpenRouter, and ImageRouter using official SDKs with per-user API keys.
  2. Bring your own keys – users activate providers securely; secrets are encrypted at rest and never fall back to environment credentials.
  3. Consistent chat experience – Flutter renders the same responsive interface on Android, iOS, macOS, Windows, Linux, and the web.
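
The first pillar amounts to a merge step: each provider's catalogue is fetched with the user's own key and normalised into one list. A minimal sketch of that shape (the field names mirror the `/v1/models` response below; `merge_catalogues` itself is hypothetical, not the backend's actual code):

```python
# Hypothetical sketch of the "unified catalogue" merge: each provider's
# models are fetched with the user's own API key and normalised into a
# single list shaped like the /v1/models response.
def merge_catalogues(per_provider: dict[str, list[dict]]) -> dict:
    models = []
    for provider, entries in per_provider.items():
        for entry in entries:
            models.append({
                "provider": provider,
                "id": entry["id"],
                "name": entry.get("name", entry["id"]),
                "metadata": {"owned_by": entry.get("owned_by", provider)},
            })
    return {
        "models": models,
        "configured_providers": sorted(per_provider),
    }
```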

The backend exposes REST APIs that the Flutter client consumes. A Supabase instance stores provider configurations, encrypted secrets, and chat history.

## Key Features

| Area | Highlights |
| --- | --- |
| Model management | Dynamic `/v1/models` endpoint returns live catalogues with helpful status messaging when keys are missing or filters remove all results. Users can import, favourite, and set defaults. |
| Provider operations | Granular activation flows validate API keys with official SDKs, support base URL overrides, and expose a status dashboard. |
| Chat experience | Streaming responses, Markdown rendering, inline code blocks, and token accounting. |
| Security | Secrets encrypted with Fernet and a project key, strict error messages when configuration is incomplete, and no environment fallback for user operations. |
| Observability | Structured logging across services and catalogue caching with per-provider metrics. |
| Onboarding & referrals | Invite-gated signup, `/v1/waitlist` applications, backend-validated invite codes, and a Flutter referral center for sharing codes and tracking rewards. |
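
As a rough illustration of the security row, a Fernet round-trip using the `cryptography` package (the helper names and key wiring here are assumptions; in the backend the project key would come from the `ENCRYPTION_KEY` setting mentioned in the Quick Start):

```python
from cryptography.fernet import Fernet

# Illustrative only: encrypt a user's provider secret with a project-wide
# Fernet key. In PocketLLM this key would be loaded from ENCRYPTION_KEY
# rather than generated at import time.
project_key = Fernet.generate_key()
fernet = Fernet(project_key)

def encrypt_secret(plaintext: str) -> bytes:
    """Encrypt a secret for storage at rest."""
    return fernet.encrypt(plaintext.encode())

def decrypt_secret(token: bytes) -> str:
    """Decrypt a stored secret back to plaintext."""
    return fernet.decrypt(token).decode()
```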

## Architecture

```
PocketLLM
├── lib/                     # Flutter client (Riverpod, GoRouter, Material 3)
│   ├── component/           # Shared widgets and UI primitives
│   ├── pages/               # Screens including Library, API Keys, Chat
│   └── services/            # State management, API bridges, secure storage
├── pocketllm-backend/       # FastAPI application
│   ├── app/api/             # Versioned routes (/v1)
│   ├── app/services/        # Provider catalogue, auth, jobs, models
│   ├── app/utils/           # Crypto helpers, security utilities
│   └── database/            # Dataclasses mirroring Supabase tables
└── docs/                    # Operational guides and API references
```

## Prerequisites

| Component | Requirement |
| --- | --- |
| Flutter | 3.19.6 (see the `AGENTS.md` setup script) |
| Dart | Included with the Flutter SDK |
| Python | 3.11+ for the FastAPI backend |
| Node / pnpm | Optional, for tooling around Supabase migrations |
| Supabase | Service-role key and project URL configured in `.env` |

## Quick Start

### Backend

```bash
cd pocketllm-backend
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
cp .env.example .env  # configure Supabase credentials and ENCRYPTION_KEY
uvicorn main:app --reload
```

Key endpoint: `GET /v1/models`

```http
GET /v1/models
Authorization: Bearer <JWT>
```

```json
{
  "models": [
    {
      "provider": "openai",
      "id": "gpt-4o",
      "name": "GPT-4 Omni",
      "metadata": {"owned_by": "openai"}
    }
  ],
  "message": null,
  "configured_providers": ["openai"],
  "missing_providers": ["groq", "openrouter", "imagerouter"]
}
```

When no API keys are stored, the endpoint responds with an empty `models` array and a descriptive `message`, enabling the Flutter UI to prompt users to add credentials.
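
That contract can be handled client-side along these lines (a hedged sketch; `catalogue_prompt` is invented here for illustration and is not part of the codebase):

```python
# Hypothetical client-side handling of the /v1/models payload: when the
# catalogue is empty, surface the backend's message, or fall back to a
# prompt that names the missing providers.
def catalogue_prompt(payload: dict) -> str:
    if payload.get("models"):
        return f"{len(payload['models'])} model(s) available"
    if payload.get("message"):
        return payload["message"]
    missing = ", ".join(payload.get("missing_providers", []))
    return f"No models yet - add API keys for: {missing}"
```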

### Flutter Client

```bash
cd ..
flutter pub get
flutter run  # chooses a connected device or emulator
```

The API Keys page surfaces provider status, preview masks, and validation results. The Model Library consumes the unified `/v1/models` response and displays grouped catalogues with filtering options.

## Testing

| Layer | Command |
| --- | --- |
| Flutter | `flutter analyze && flutter test` |
| Backend | `cd pocketllm-backend && pytest` |

Note: some integration suites stub external SDKs; install the `openai`, `groq`, and `openrouter` packages locally for full coverage.
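
One way such a stub can work, sketched with `unittest.mock` (the `validate_key` helper and the client shape are hypothetical, shown only to illustrate the pattern):

```python
from unittest import mock

# Hypothetical illustration of stubbing a provider SDK in tests: the real
# suite can install openai/groq/openrouter, but a stub keeps unit tests
# hermetic when those packages are absent.
def validate_key(client_factory, api_key: str) -> bool:
    """A key is considered valid if the provider lists at least one model."""
    client = client_factory(api_key=api_key)
    return bool(client.models.list())

# Build a stand-in client whose models.list() returns a canned catalogue.
stub = mock.Mock()
stub.models.list.return_value = [{"id": "gpt-4o"}]
```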

## Documentation

## Contributing

Contributions are welcome! Please review `CONTRIBUTING.md` and ensure:

  1. New features include unit or widget tests.
  2. Backend changes run through `pytest` with optional SDKs installed.
  3. Documentation and changelogs reflect API or workflow updates.
  4. Secrets and API keys are never committed.

## License

PocketLLM is released under the MIT License.


Have questions or ideas? Open an issue or join the discussion — we’d love to hear how you are using PocketLLM.
