
local-llm-integration

Here are 29 public repositories matching this topic...

recommendarr

An LLM-driven recommendation system based on your Radarr and Sonarr library or watch-history information.

  • Updated Apr 14, 2025
  • Vue

Run large language models like Qwen and LLaMA locally on Android for offline, private, real-time question answering and chat - powered by ONNX Runtime.

  • Updated Sep 9, 2025
  • Kotlin

🚀 A powerful Flutter-based AI chat application that lets you run LLMs directly on your mobile device or connect to local model servers. Features offline model execution, Ollama/LLMStudio integration, and a beautiful modern UI. Privacy-focused, cross-platform, and fully open source.

  • Updated Nov 19, 2025
  • Dart

A framework for using local LLMs (Qwen2.5-coder 7B) that are fine-tuned using RL to generate, debug, and optimize code solutions through iterative refinement.

  • Updated Mar 14, 2025
  • Python

A fully customizable, super lightweight, cross-platform GenAI-based personal assistant that can be run locally on your private hardware!

  • Updated Mar 15, 2025
  • Python

🖼️ Python Image and 🎥 Video Generator using LLM providers and models — built in Claude Code 💻 CLI, Codex CLI 📝, Gemini CLI 🌌, and others — FREE & Open-Source forever 🚀

  • Updated Nov 28, 2025
  • Python

🤖 An Intelligent Chatbot: Powered by the locally hosted Ollama 3.2 LLM 🧠 and ChromaDB 🗂️, this chatbot offers semantic search 🔍, session-aware responses 🗨️, and an interactive Streamlit interface 🎨 for seamless user interaction. 🚀

  • Updated Dec 12, 2024
  • Python
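The semantic search that a vector store like ChromaDB provides reduces to nearest-neighbour lookup over embedding vectors. A minimal plain-Python sketch of that core idea (the toy 3-dimensional vectors, sample corpus, and `top_k` helper are illustrative assumptions, not code from any listed repository; a real system would embed text with a model):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_vec, docs, k=2):
    """Return the k documents whose embeddings are closest to the query.

    `docs` is a list of (text, embedding) pairs.
    """
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy 3-dimensional "embeddings"; real embeddings come from a model.
corpus = [
    ("refund policy", [0.9, 0.1, 0.0]),
    ("shipping times", [0.1, 0.9, 0.0]),
    ("warranty terms", [0.8, 0.2, 0.1]),
]
print(top_k([1.0, 0.0, 0.0], corpus, k=2))  # → ['refund policy', 'warranty terms']
```

Session-aware chat then simply prepends the retrieved texts, plus prior turns, to the prompt sent to the local model.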

An advanced, fully local, and GPU-accelerated RAG pipeline. Features a sophisticated LLM-based preprocessing engine, state-of-the-art Parent Document Retriever with RAG Fusion, and a modular, Hydra-configurable architecture. Built with LangChain, Ollama, and ChromaDB for 100% private, high-performance document Q&A.

  • Updated Aug 11, 2025
  • Python
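Before any model or GPU is involved, the retrieval half of a RAG pipeline like this rests on two pure steps: splitting documents into overlapping chunks for indexing, and stuffing the retrieved chunks into a context-grounded prompt. A minimal sketch (the chunk size, overlap, and prompt template are illustrative assumptions, not taken from the repository):

```python
def chunk(text, size=200, overlap=40):
    """Split text into overlapping character windows for indexing."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def build_prompt(question, retrieved):
    """Assemble retrieved chunks into a prompt that grounds the answer."""
    context = "\n---\n".join(retrieved)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

pieces = chunk("x" * 500)
print([len(p) for p in pieces])  # → [200, 200, 180]
```

Frameworks such as LangChain wrap these steps (plus embedding, storage, and retrieval) in configurable components, but the data flow is the same.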

An AI-powered assistant to streamline knowledge management, member discovery, and content generation across Telegram and Twitter, while ensuring privacy with local LLM deployment.

  • Updated Mar 24, 2025
  • Python

An autonomous AI agent for intelligently updating, maintaining, and curating a LightRAG knowledge base.

  • Updated Aug 28, 2025
  • Python

This repository contains code to securely run SLMs (small language models) locally using Node.js (server side) or inside the browser.

  • Updated Nov 25, 2025
  • JavaScript

Local-first, AI-assisted API testing generator: turns OpenAPI/Swagger + Gherkin into ready-to-run Postman collections, a unified environment, and Markdown analysis — offline.

  • Updated Oct 5, 2025
  • TypeScript

Python CLI/TUI for intelligent media file organization. Features atomic operations, rollback safety, and integrity checks, with a local LLM workflow for context-aware renaming and categorization from API-sourced metadata.

  • Updated Jul 6, 2025
  • Python

PlantDeck is an offline herbal RAG that indexes your PDF books and monographs, extracts text/images with OCR, and answers questions with page-level citations using a local LLM via Ollama. Runs on your machine; no cloud. Field guide only; not medical advice.

  • Updated Aug 11, 2025
  • Python

**Ask CLI** is a command-line tool for interacting with a local LLM (Large Language Model) server. It allows you to send queries and receive concise command-line responses.

  • Updated Dec 22, 2024
  • Python
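A tool like this is mostly a thin wrapper around a local LLM server's HTTP API. A minimal sketch against Ollama's `/api/generate` endpoint (the port is Ollama's default; the model name and this particular wrapper are assumptions, not the repository's actual code; any Ollama-compatible server would do):

```python
import json
import urllib.request

# Assumes an Ollama-compatible server on localhost:11434; adjust to taste.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt, model="llama3.2"):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt, model="llama3.2"):
    """POST the prompt to the local server and return the generated text."""
    data = json.dumps(build_request(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    import sys
    if len(sys.argv) > 1:
        # e.g.  python ask.py "how do I list open ports?"
        print(ask(" ".join(sys.argv[1:])))
```

Setting `"stream": False` makes the server return one JSON object instead of a stream of chunks, which keeps a concise-answer CLI simple.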

AI-powered code and idea assistant for developers: local-first, doc-aware, and fully test-automated.

  • Updated Aug 9, 2025
  • Python

Local Retrieval-Augmented Generation (RAG) pipeline using LangChain and ChromaDB to query PDF files with LLMs.

  • Updated May 5, 2025
  • Python
