trendy-design/llmchat

Unified interface for AI chat, Agentic workflows and more ...

LLMChat.co is a sophisticated AI-powered chatbot platform that prioritizes privacy while offering powerful research and agentic capabilities. Built as a monorepo with Next.js, TypeScript, and cutting-edge AI technologies, it provides multiple specialized chat modes including Pro Search and Deep Research for in-depth analysis of complex topics.
LLMChat.co stands out with its workflow orchestration system and focus on privacy, storing all user data locally in the browser using IndexedDB, ensuring your conversations never leave your device.
### Advanced Research Modes
- Deep Research: Comprehensive analysis of complex topics with in-depth exploration
- Pro Search: Enhanced search with web integration for real-time information
### Multiple LLM Provider Support
- OpenAI
- Anthropic
- Fireworks
- Together AI
- xAI
### Privacy-Focused
- Local Storage: All user data stored in browser using IndexedDB via Dexie.js
- No Server-Side Storage: Chat history never leaves your device
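The local-first pattern above can be sketched in a few lines. This is a hypothetical in-memory stand-in, not the app's actual schema: the real app persists through Dexie.js over IndexedDB, but the shape of the data and the thread-indexed query are the same idea.

```typescript
// Minimal sketch of a local-first chat store (hypothetical stand-in;
// the real app uses Dexie.js over IndexedDB in the browser).

interface ChatMessage {
  id: number;
  threadId: number;
  role: 'user' | 'assistant';
  content: string;
}

class LocalChatStore {
  private messages: ChatMessage[] = [];
  private nextId = 1;

  // Append a message; with Dexie this would be an IndexedDB write
  add(threadId: number, role: ChatMessage['role'], content: string): ChatMessage {
    const msg: ChatMessage = { id: this.nextId++, threadId, role, content };
    this.messages.push(msg);
    return msg;
  }

  // Fetch one thread's history, like an indexed query on threadId
  byThread(threadId: number): ChatMessage[] {
    return this.messages.filter((m) => m.threadId === threadId);
  }
}

const store = new LocalChatStore();
store.add(1, 'user', 'What is IndexedDB?');
store.add(1, 'assistant', 'A browser-native database.');
store.add(2, 'user', 'Unrelated thread');
```

Because the store lives entirely in the browser, clearing it or exporting it is under the user's control; no chat content is sent to a server.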
### Agentic Capabilities
- Workflow Orchestration: Complex task coordination via custom workflow engine
- Reflective Analysis: Self-improvement through analysis of prior reasoning
- Structured Output: Clean presentation of research findings
LLMChat.co is built as a monorepo with a clear separation of concerns:
```
├── apps/
│   ├── web/               # Next.js web application
│   └── desktop/           # Desktop application
│
└── packages/
    ├── ai/                # AI models and workflow orchestration
    ├── actions/           # Shared actions and API handlers
    ├── common/            # Common utilities and hooks
    ├── orchestrator/      # Workflow engine and task management
    ├── prisma/            # Database schema and client
    ├── shared/            # Shared types and constants
    ├── ui/                # Reusable UI components
    ├── tailwind-config/   # Shared Tailwind configuration
    └── typescript-config/ # Shared TypeScript configuration
```

LLMChat.co's workflow orchestration enables powerful agentic capabilities through a modular, step-by-step approach. Here's how to create a research agent:
First, establish the data structure for events and context:
```typescript
// Define the events emitted by each task
type AgentEvents = {
  taskPlanner: {
    tasks: string[];
    query: string;
  };
  informationGatherer: {
    searchResults: string[];
  };
  informationAnalyzer: {
    analysis: string;
    insights: string[];
  };
  reportGenerator: {
    report: string;
  };
};

// Define the shared context between tasks
type AgentContext = {
  query: string;
  tasks: string[];
  searchResults: string[];
  analysis: string;
  insights: string[];
  report: string;
};
```
Next, set up the event emitter, context, and workflow builder:
```typescript
import { OpenAI } from 'openai';
import { createTask } from 'task';
import { WorkflowBuilder } from './builder';
import { Context } from './context';
import { TypedEventEmitter } from './events';

// Initialize event emitter with proper typing
const events = new TypedEventEmitter<AgentEvents>();

// Create the workflow builder with proper context
const builder = new WorkflowBuilder<AgentEvents, AgentContext>('research-agent', {
  events,
  context: new Context<AgentContext>({
    query: '',
    tasks: [],
    searchResults: [],
    analysis: '',
    insights: [],
    report: '',
  }),
});

// Initialize LLM client
const llm = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
```
Create specialized tasks for each step of the research process:
```typescript
// Task Planner: Breaks down a research query into specific tasks
const taskPlanner = createTask({
  name: 'taskPlanner',
  execute: async ({ context, data }) => {
    const userQuery = data?.query || 'Research the impact of AI on healthcare';

    const planResponse = await llm.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        {
          role: 'system',
          content:
            'You are a task planning assistant that breaks down research queries into specific search tasks.',
        },
        {
          role: 'user',
          content: `Break down this research query into specific search tasks: "${userQuery}". Return a JSON array of tasks.`,
        },
      ],
      response_format: { type: 'json_object' },
    });

    const content = planResponse.choices[0].message.content || '{"tasks": []}';
    const parsedContent = JSON.parse(content);
    const tasks = parsedContent.tasks || [];

    context?.set('query', userQuery);
    context?.set('tasks', tasks);

    return { tasks, query: userQuery };
  },
  route: () => 'informationGatherer',
});
```
```typescript
// Information Gatherer: Searches for information based on tasks
const informationGatherer = createTask({
  name: 'informationGatherer',
  dependencies: ['taskPlanner'],
  execute: async ({ context, data }) => {
    const tasks = data.taskPlanner.tasks;
    const searchResults: string[] = [];

    // Process each task to gather information
    for (const task of tasks) {
      const searchResponse = await llm.chat.completions.create({
        model: 'gpt-4o',
        messages: [
          {
            role: 'system',
            content: 'You are a search engine that returns factual information.',
          },
          {
            role: 'user',
            content: `Search for information about: ${task}. Return relevant facts and data.`,
          },
        ],
      });

      const result = searchResponse.choices[0].message.content || '';
      if (result) {
        searchResults.push(result);
      }
    }

    context?.set('searchResults', searchResults);

    return { searchResults };
  },
  route: () => 'informationAnalyzer',
});
```
```typescript
// Information Analyzer: Analyzes gathered information for insights
const informationAnalyzer = createTask({
  name: 'informationAnalyzer',
  dependencies: ['informationGatherer'],
  execute: async ({ context, data }) => {
    const searchResults = data.informationGatherer.searchResults;
    const query = context?.get('query') || '';

    const analysisResponse = await llm.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        {
          role: 'system',
          content:
            'You are an analytical assistant that identifies patterns and extracts insights from information.',
        },
        {
          role: 'user',
          content: `Analyze the following information regarding "${query}" and provide a coherent analysis with key insights:\n\n${searchResults.join('\n\n')}`,
        },
      ],
      response_format: { type: 'json_object' },
    });

    const content =
      analysisResponse.choices[0].message.content || '{"analysis": "", "insights": []}';
    const parsedContent = JSON.parse(content);
    const analysis = parsedContent.analysis || '';
    const insights = parsedContent.insights || [];

    context?.set('analysis', analysis);
    context?.set('insights', insights);

    return { analysis, insights };
  },
  route: () => 'reportGenerator',
});
```
```typescript
// Report Generator: Creates a comprehensive report
const reportGenerator = createTask({
  name: 'reportGenerator',
  dependencies: ['informationAnalyzer'],
  execute: async ({ context, data }) => {
    const { analysis, insights } = data.informationAnalyzer;
    const { query, searchResults } = context?.getAll() || { query: '', searchResults: [] };

    const reportResponse = await llm.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        {
          role: 'system',
          content:
            'You are a report writing assistant that creates comprehensive, well-structured reports.',
        },
        {
          role: 'user',
          content: `Create a comprehensive report on "${query}" using the following analysis and insights.\n\nAnalysis: ${analysis}\n\nInsights:\n- ${insights.join('\n- ')}\n\nStructure the report with an executive summary, key findings, detailed analysis, and conclusions.`,
        },
      ],
    });

    const report = reportResponse.choices[0].message.content || '';
    context?.set('report', report);

    return { report };
  },
  route: () => 'end',
});
```
Finally, assemble and run the workflow:
```typescript
// Add all tasks to the workflow
builder.addTask(taskPlanner);
builder.addTask(informationGatherer);
builder.addTask(informationAnalyzer);
builder.addTask(reportGenerator);

// Build the workflow
const workflow = builder.build();

// Start the workflow with an initial query
workflow.start('taskPlanner', { query: 'Research the impact of AI on healthcare' });

// Export the workflow for external use
export const researchAgent = workflow;
```
The workflow proceeds through these stages:
- Planning: Breaks down complex questions into specific research tasks
- Information Gathering: Collects relevant data for each task
- Analysis: Synthesizes information and identifies key insights
- Report Generation: Produces a comprehensive, structured response
Each step emits events that can update the UI in real-time, allowing users to see the research process unfold.
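The event mechanism can be sketched with a minimal typed emitter. The real `TypedEventEmitter` lives inside the repo and only its `on`/`emit` surface is assumed here; the `AgentEvents` subset and the listener bodies are illustrative.

```typescript
// Minimal sketch of a typed event emitter like the one the workflow
// uses to stream progress to the UI (only on/emit is assumed).

type Listener<T> = (payload: T) => void;

class TypedEventEmitter<Events extends Record<string, unknown>> {
  private listeners: { [K in keyof Events]?: Listener<Events[K]>[] } = {};

  // Register a listener for one event, with the payload type inferred
  on<K extends keyof Events>(event: K, fn: Listener<Events[K]>): void {
    (this.listeners[event] ??= []).push(fn);
  }

  // Notify every listener registered for this event
  emit<K extends keyof Events>(event: K, payload: Events[K]): void {
    this.listeners[event]?.forEach((fn) => fn(payload));
  }
}

// Subset of the AgentEvents shape from the walkthrough above
type AgentEvents = {
  taskPlanner: { tasks: string[]; query: string };
  reportGenerator: { report: string };
};

const events = new TypedEventEmitter<AgentEvents>();

// A UI layer subscribes to show progress as each stage completes
const log: string[] = [];
events.on('taskPlanner', ({ tasks }) => log.push(`planned ${tasks.length} tasks`));
events.on('reportGenerator', ({ report }) => log.push(`report ready (${report.length} chars)`));

// Tasks emit as they finish; the UI updates immediately
events.emit('taskPlanner', { tasks: ['a', 'b'], query: 'demo' });
events.emit('reportGenerator', { report: 'Executive summary...' });
```

Because the event names and payloads are typed against `AgentEvents`, a subscriber to the wrong event name or payload shape fails at compile time rather than at runtime.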
LLMChat.co prioritizes user privacy by storing all data locally.
- Next.js 14: React framework with server components
- TypeScript: Type-safe development
- Tailwind CSS: Utility-first styling
- Framer Motion: Smooth animations
- Shadcn UI: Component library
- Tiptap: Rich text editor
- Zustand: State management
- Dexie.js: IndexedDB wrapper with a simple, powerful API
- AI SDK: Unified interface for multiple AI providers
- Turborepo: Monorepo management
- Bun: JavaScript runtime and package manager
- ESLint & Prettier: Code quality tools
- Husky: Git hooks
- Ensure you have `bun` installed (recommended) or `yarn`.
- Clone the repository:

```bash
git clone https://github.com/your-repo/llmchat.git
cd llmchat
```

- Install dependencies:

```bash
bun install
# or
yarn install
```

- Start the development server:

```bash
bun dev
# or
yarn dev
```

- Open your browser and navigate to `http://localhost:3000`.