local-llm-integration
Here are 29 public repositories matching this topic...
An LLM driven recommendation system based on Radarr and Sonarr library or watch history information
- Updated Apr 14, 2025 - Vue
Run large language models like Qwen and LLaMA locally on Android for offline, private, real-time question answering and chat - powered by ONNX Runtime.
- Updated Sep 9, 2025 - Kotlin
🚀 A powerful Flutter-based AI chat application that lets you run LLMs directly on your mobile device or connect to local model servers. Features offline model execution, Ollama/LLMStudio integration, and a beautiful modern UI. Privacy-focused, cross-platform, and fully open source.
- Updated Nov 19, 2025 - Dart
Local LLM proxy, DevOps friendly
- Updated Nov 25, 2025 - Go
A framework for using local LLMs (Qwen2.5-coder 7B) that are fine-tuned using RL to generate, debug, and optimize code solutions through iterative refinement.
- Updated Mar 14, 2025 - Python
A fully customizable, super lightweight, cross-platform GenAI-based personal assistant that can run locally on your private hardware!
- Updated Mar 15, 2025 - Python
🖼️ Python Image and 🎥 Video Generator using LLM providers and models — built in Claude Code 💻 CLI, Codex CLI 📝, Gemini CLI 🌌, and others — FREE & Open-Source forever 🚀
- Updated Nov 28, 2025 - Python
🤖 An Intelligent Chatbot: Powered by the locally hosted Llama 3.2 LLM (via Ollama) 🧠 and ChromaDB 🗂️, this chatbot offers semantic search 🔍, session-aware responses 🗨️, and an interactive Streamlit interface 🎨 for seamless user interaction. 🚀
- Updated Dec 12, 2024 - Python
An advanced, fully local, and GPU-accelerated RAG pipeline. Features a sophisticated LLM-based preprocessing engine, state-of-the-art Parent Document Retriever with RAG Fusion, and a modular, Hydra-configurable architecture. Built with LangChain, Ollama, and ChromaDB for 100% private, high-performance document Q&A.
- Updated Aug 11, 2025 - Python
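RAG entries like this one all revolve around the same retrieval step: embed the query, score it against stored chunk embeddings, and hand the top hits to the LLM. A minimal pure-Python sketch of that scoring, using toy bag-of-words vectors as a stand-in for a real embedding model (which a pipeline like this would actually serve via Ollama) — an illustration, not this repo's implementation:

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would call an
    # embedding model (e.g. one served by Ollama) instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Score every stored chunk against the query and keep the top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "Ollama serves local language models over a REST API.",
    "ChromaDB stores embeddings for similarity search.",
    "Hydra lets you compose configuration files.",
]
top = retrieve("How do I search embeddings for similar documents?", chunks, k=1)
```

In a real deployment the `embed` function is replaced by a dense embedding model and the linear scan by a vector store such as ChromaDB, but the ranking logic is the same.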
An AI-powered assistant to streamline knowledge management, member discovery, and content generation across Telegram and Twitter, while ensuring privacy with local LLM deployment.
- Updated Mar 24, 2025 - Python
An autonomous AI agent for intelligently updating, maintaining, and curating a LightRAG knowledge base.
- Updated Aug 28, 2025 - Python
This repository contains code to securely run SLMs (small language models) locally using Node.js (server side) or inside the browser.
- Updated Nov 25, 2025 - JavaScript
Local-first, AI-assisted API testing generator: turns OpenAPI/Swagger + Gherkin into ready-to-run Postman collections, a unified environment, and Markdown analysis — offline.
- Updated Oct 5, 2025 - TypeScript
Python CLI/TUI for intelligent media file organization. Features atomic operations, rollback safety, and integrity checks, with a local LLM workflow for context-aware renaming and categorization from API-sourced metadata.
- Updated Jul 6, 2025 - Python
PlantDeck is an offline herbal RAG that indexes your PDF books and monographs, extracts text/images with OCR, and answers questions with page-level citations using a local LLM via Ollama. Runs on your machine; no cloud. Field guide only; not medical advice.
- Updated Aug 11, 2025 - Python
JV-Archon is my personal offline LLM ecosystem.
- Updated Nov 14, 2025 - Shell
**Ask CLI** is a command-line tool for interacting with a local LLM (Large Language Model) server. It allows you to send queries and receive concise command-line responses.
- Updated Dec 22, 2024 - Python
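A tool like Ask CLI typically wraps a single HTTP call to the local server. A hedged sketch assuming an Ollama-style `/api/generate` endpoint on `localhost:11434` — the endpoint, default port, and model name are assumptions for illustration, not details taken from this repo:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # assumed Ollama default

def build_payload(prompt: str, model: str = "llama3.2") -> dict:
    # Request a single non-streamed completion, nudged toward terse
    # command-line answers by an instruction prepended to the prompt.
    return {
        "model": model,
        "prompt": f"Answer with a concise shell command only.\n\n{prompt}",
        "stream": False,
    }

def ask(prompt: str) -> str:
    # POST the payload and return the model's text response.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # requires a running server
        return json.load(resp)["response"]
```

Swapping the URL or payload shape adapts the same pattern to other local LLM servers that expose an OpenAI-compatible or custom HTTP API.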
WoolyChat - open-source AI chat app for locally hosted Ollama models. Written in Flask/JavaScript.
- Updated Oct 16, 2025 - Python
AI-powered code and idea assistant for developers: local-first, doc-aware, and fully test-automated.
- Updated Aug 9, 2025 - Python
Local Retrieval-Augmented Generation (RAG) pipeline using LangChain and ChromaDB to query PDF files with LLMs.
- Updated May 5, 2025 - Python
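Before a PDF's text can be indexed in a store like ChromaDB, it has to be split into overlapping chunks; LangChain ships splitters for this, but the core sliding-window idea fits in a few lines of plain Python (the chunk size and overlap values below are illustrative defaults, not this repo's settings):

```python
def split_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    # Slide a window of chunk_size characters, stepping by
    # chunk_size - overlap so consecutive chunks share context.
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Each chunk is then embedded and stored; the overlap keeps sentences that straddle a chunk boundary retrievable from at least one side.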