
# Build a Real-time LLM Chat App with Deno

Large Language Models (LLMs) like OpenAI's GPT and Anthropic's Claude are powerful tools for creating intelligent, conversational applications. In this tutorial, we'll build a real-time chat application where AI characters powered by LLMs interact with users in a roleplay game setting.

You can see the code for the finished app on GitHub.

## Deploy your own

Want to skip the tutorial and deploy the finished app right now? Click the button below to instantly deploy your own copy of the complete LLM chat application to Deno Deploy. You'll get a live, working application that you can customize and modify as you learn!

Deploy on Deno

Once you have deployed, add your `OPENAI_API_KEY` or `ANTHROPIC_API_KEY` in the project "Settings".

## Initialize a new project

First, create a new directory for your project and initialize it:

```sh
mkdir deno-llm-chat
cd deno-llm-chat
deno init
```

## Project structure

We'll create a modular structure that separates concerns between LLM integration, game logic, and server management:

```text
├── main.ts                 # Main server entry point
├── main_test.ts            # Test file
├── deno.json               # Deno configuration
├── .env                    # Environment variables (API keys)
├── src/
│   ├── config/
│   │   ├── characters.ts   # Character configurations and presets
│   │   └── scenarios.ts    # Pre-defined scenario templates
│   ├── game/
│   │   ├── GameManager.ts  # Core game logic and state management
│   │   └── Character.ts    # AI character implementation
│   ├── llm/
│   │   └── LLMProvider.ts  # LLM integration layer (OpenAI/Anthropic)
│   └── server/
│       └── WebSocketHandler.ts # Real-time communication
└── static/
    ├── index.html          # Web interface
    ├── app.js              # Frontend JavaScript
    └── style.css           # Application styling
```

## Set up dependencies

Add the required dependencies to your `deno.json`:

deno.json
{"tasks":{"dev":"deno run -A --env-file --watch main.ts","start":"deno run --allow-net --allow-env --allow-read main.ts","test":"deno test --allow-net --allow-env"},"imports":{"@std/assert":"jsr:@std/assert@1","@std/http":"jsr:@std/http@1","@std/uuid":"jsr:@std/uuid@1","@std/json":"jsr:@std/json@1"},"compilerOptions":{"lib":["dom","dom.asynciterable","deno.ns"]}}

## Configure environment variables

Create a `.env` file for your API keys. The application supports both OpenAI and Anthropic. Comment out the configuration that you won't be using with a `#`.

.env
```sh
# Choose one of the following LLM providers:

# OpenAI Configuration
OPENAI_API_KEY=your-openai-api-key-here

# OR Anthropic Configuration
# ANTHROPIC_API_KEY=your-anthropic-api-key-here

# Server Configuration (optional)
PORT=8000
```

You can get API keys from the OpenAI platform (https://platform.openai.com) or the Anthropic Console (https://console.anthropic.com).

## Build the LLM Provider

The core of our application is the LLM provider that handles communication with AI services. Create `src/llm/LLMProvider.ts`:

src/llm/LLMProvider.ts
```ts
export interface LLMConfig {
  provider: "openai" | "anthropic" | "mock";
  apiKey?: string;
  model?: string;
  maxTokens?: number;
  temperature?: number;
}

export class LLMProvider {
  private config: LLMConfig;
  private rateLimitedUntil: number = 0;
  private retryCount: number = 0;
  private maxRetries: number = 3;

  constructor(config?: Partial<LLMConfig>) {
    const apiKey = config?.apiKey ||
      Deno.env.get("OPENAI_API_KEY") ||
      Deno.env.get("ANTHROPIC_API_KEY");

    // Auto-detect provider based on available API keys
    let provider = config?.provider;
    if (!provider && apiKey) {
      if (Deno.env.get("OPENAI_API_KEY")) {
        provider = "openai";
      } else if (Deno.env.get("ANTHROPIC_API_KEY")) {
        provider = "anthropic";
      }
    }

    this.config = {
      provider: provider || "mock",
      model: provider === "anthropic"
        ? "claude-3-haiku-20240307"
        : "gpt-3.5-turbo",
      maxTokens: 150,
      temperature: 0.8,
      ...config,
      apiKey,
    };

    console.log(`LLM Provider initialized: ${this.config.provider}`);
  }

  async generateResponse(prompt: string): Promise<string> {
    // Check rate limiting
    if (this.rateLimitedUntil > Date.now()) {
      console.warn("Rate limited, using mock response");
      return this.mockResponse(prompt);
    }

    try {
      switch (this.config.provider) {
        case "openai":
          return await this.callOpenAI(prompt);
        case "anthropic":
          return await this.callAnthropic(prompt);
        case "mock":
        default:
          return this.mockResponse(prompt);
      }
    } catch (error) {
      console.error("LLM API error:", error);

      if (this.shouldRetry(error)) {
        this.retryCount++;
        if (this.retryCount <= this.maxRetries) {
          console.log(`Retrying... (${this.retryCount}/${this.maxRetries})`);
          await this.delay(1000 * this.retryCount);
          return this.generateResponse(prompt);
        }
      }

      return this.mockResponse(prompt);
    }
  }

  private async callOpenAI(prompt: string): Promise<string> {
    const response = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${this.config.apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: this.config.model,
        messages: [{ role: "user", content: prompt }],
        max_tokens: this.config.maxTokens,
        temperature: this.config.temperature,
      }),
    });

    if (!response.ok) {
      throw new Error(`OpenAI API error: ${response.status}`);
    }

    const data = await response.json();
    this.retryCount = 0; // Reset on success
    return data.choices[0].message.content.trim();
  }

  private async callAnthropic(prompt: string): Promise<string> {
    const response = await fetch("https://api.anthropic.com/v1/messages", {
      method: "POST",
      headers: {
        "x-api-key": this.config.apiKey!,
        "Content-Type": "application/json",
        "anthropic-version": "2023-06-01",
      },
      body: JSON.stringify({
        model: this.config.model,
        max_tokens: this.config.maxTokens,
        messages: [{ role: "user", content: prompt }],
        temperature: this.config.temperature,
      }),
    });

    if (!response.ok) {
      throw new Error(`Anthropic API error: ${response.status}`);
    }

    const data = await response.json();
    this.retryCount = 0; // Reset on success
    return data.content[0].text.trim();
  }

  private mockResponse(prompt: string): string {
    const responses = [
      "I understand! Let me think about this...",
      "That's an interesting approach to the situation.",
      "I see what you're getting at. Here's what I think...",
      "Fascinating! I would approach it this way...",
      "Good point! That gives me an idea...",
    ];
    return responses[Math.floor(Math.random() * responses.length)];
  }

  private shouldRetry(error: any): boolean {
    // Retry on rate limits and temporary server errors
    const errorMessage = error.message?.toLowerCase() || "";
    return errorMessage.includes("rate limit") ||
      errorMessage.includes("429") ||
      errorMessage.includes("500") ||
      errorMessage.includes("502") ||
      errorMessage.includes("503");
  }

  private delay(ms: number): Promise<void> {
    return new Promise((resolve) => setTimeout(resolve, ms));
  }
}
```

In this file we define an LLM provider abstraction, which allows us to easily switch between different LLM APIs or fall back to mock responses for testing. We also add a retry mechanism with increasing delays for handling transient API errors.
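To sanity-check the provider in isolation, you can run a small script like the one below (a hypothetical `check_llm.ts`, not part of the project structure). With no API keys set, it exercises the mock fallback; with a key in `.env`, it makes a real API call:

```ts
// check_llm.ts (hypothetical helper)
// Run with: deno run -A --env-file check_llm.ts
import { LLMProvider } from "./src/llm/LLMProvider.ts";

// Auto-detects "openai" or "anthropic" from env vars, else falls back to "mock"
const llm = new LLMProvider();

const reply = await llm.generateResponse("Greet the party in one sentence.");
console.log(reply);
```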

## Create AI Characters

Characters are the heart of our roleplay application. Create `src/game/Character.ts`:

src/game/Character.ts
```ts
import { LLMProvider } from "../llm/LLMProvider.ts";

export class Character {
  public name: string;
  public class: string;
  public personality: string;
  public conversationHistory: string[] = [];
  private llmProvider: LLMProvider;

  constructor(
    name: string,
    characterClass: string,
    personality: string,
    llmProvider: LLMProvider,
  ) {
    this.name = name;
    this.class = characterClass;
    this.personality = personality;
    this.llmProvider = llmProvider;
  }

  async generateResponse(
    context: string,
    userMessage: string,
  ): Promise<string> {
    // Build the character's prompt with personality and context
    const characterPrompt = `You are ${this.name}, a ${this.class} with this personality: ${this.personality}

Context: ${context}

Recent conversation:
${this.conversationHistory.slice(-3).join("\n")}

User message: ${userMessage}

Respond as ${this.name} in character. Keep responses under 150 words and maintain your personality traits. Be engaging and helpful to advance the roleplay scenario.`
      .trim();

    try {
      const response = await this.llmProvider.generateResponse(characterPrompt);

      // Add to conversation history
      this.conversationHistory.push(`User: ${userMessage}`);
      this.conversationHistory.push(`${this.name}: ${response}`);

      // Keep history manageable
      if (this.conversationHistory.length > 20) {
        this.conversationHistory = this.conversationHistory.slice(-10);
      }

      return response;
    } catch (error) {
      console.error(`Error generating response for ${this.name}:`, error);
      return `*${this.name} seems lost in thought and doesn't respond*`;
    }
  }

  getCharacterInfo() {
    return {
      name: this.name,
      class: this.class,
      personality: this.personality,
    };
  }

  clearHistory() {
    this.conversationHistory = [];
  }
}
```

Here we define the `Character` class, which represents each AI character in the game. This class handles generating responses based on the character's personality and the current game context.
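As a quick illustration of how the class is meant to be used (the `GameManager` in a later section does this for real), you could wire a single character to the provider like this:

```ts
import { Character } from "./src/game/Character.ts";
import { LLMProvider } from "./src/llm/LLMProvider.ts";

const llm = new LLMProvider();

const tharin = new Character(
  "Tharin", // name
  "Fighter", // class
  "Brave and loyal team leader, always ready to protect allies.",
  llm,
);

// context comes from the game state; the second argument is the player's input
const reply = await tharin.generateResponse(
  "The party stands before the gates of a ruined keep.",
  "Tharin, will you take the lead?",
);
console.log(reply);
```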

## Set up character configurations

Create predefined character templates in `src/config/characters.ts`:

src/config/characters.ts
```ts
export interface CharacterConfig {
  name: string;
  class: string;
  personality: string;
  emoji?: string;
  backstory?: string;
}

export const defaultCharacters: CharacterConfig[] = [
  {
    name: "Tharin",
    emoji: "⚔️",
    class: "Fighter",
    personality:
      "Brave and loyal team leader, always ready to protect allies. Takes charge in dangerous situations but listens to party input.",
    backstory: "A former city guard seeking adventure and justice.",
  },
  {
    name: "Lyra",
    emoji: "🔮",
    class: "Wizard",
    personality:
      "Curious and analytical strategist, loves solving puzzles. Uses magic creatively to support the party.",
    backstory: "A scholar of ancient magic seeking forgotten spells.",
  },
  {
    name: "Finn",
    emoji: "🗡️",
    class: "Rogue",
    personality:
      "Witty and sneaky scout, prefers clever solutions. Acts quickly and adapts to what allies need.",
    backstory: "A former street thief now using skills for good.",
  },
];
```

These templates are what the `Character` class will use to instantiate each character with their unique traits. The LLM will use these traits to generate responses that are consistent with each character's personality and backstory.
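For example, turning the presets into live `Character` instances is a simple mapping. This mirrors what `GameManager.startNewGame` does in the next section; note that `emoji` and `backstory` are display-only fields and aren't passed to the constructor:

```ts
import { defaultCharacters } from "./src/config/characters.ts";
import { Character } from "./src/game/Character.ts";
import { LLMProvider } from "./src/llm/LLMProvider.ts";

const llm = new LLMProvider();

// One Character per preset, all sharing a single LLM provider
const party = defaultCharacters.map((config) =>
  new Character(config.name, config.class, config.personality, llm)
);

console.log(party.map((c) => c.getCharacterInfo()));
```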

## Build the Game Manager

The Game Manager coordinates characters and maintains game state. Create `src/game/GameManager.ts`:

src/game/GameManager.ts
```ts
import { Character } from "./Character.ts";
import { LLMProvider } from "../llm/LLMProvider.ts";

export interface GameState {
  id: string;
  gmPrompt: string;
  characters: Character[];
  messages: GameMessage[];
  currentTurn: number;
  isActive: boolean;
  createdAt: Date;
}

export interface GameMessage {
  id: string;
  speaker: string;
  message: string;
  timestamp: Date;
  type: "gm" | "character" | "system";
}

export interface StartGameRequest {
  gmPrompt: string;
  characters: Array<{
    name: string;
    class: string;
    personality: string;
  }>;
}

export class GameManager {
  private games: Map<string, GameState> = new Map();
  private llmProvider: LLMProvider;

  constructor() {
    this.llmProvider = new LLMProvider();
  }

  async startNewGame(
    gmPrompt: string,
    characterConfigs: StartGameRequest["characters"],
  ): Promise<string> {
    const gameId = crypto.randomUUID();

    // Create characters with their LLM personalities
    const characters = characterConfigs.map((config) =>
      new Character(
        config.name,
        config.class,
        config.personality,
        this.llmProvider,
      )
    );

    const gameState: GameState = {
      id: gameId,
      gmPrompt,
      characters,
      messages: [],
      currentTurn: 0,
      isActive: true,
      createdAt: new Date(),
    };

    this.games.set(gameId, gameState);

    // Add initial system message
    this.addMessage(gameId, {
      speaker: "System",
      message: `Game started! Players: ${
        characters.map((c) => c.name).join(", ")
      }`,
      type: "system",
    });

    console.log(`New game started: ${gameId}`);
    return gameId;
  }

  async handlePlayerMessage(
    gameId: string,
    message: string,
  ): Promise<GameMessage[]> {
    const game = this.games.get(gameId);
    if (!game || !game.isActive) {
      throw new Error("Game not found or inactive");
    }

    // Add player message
    this.addMessage(gameId, {
      speaker: "Player",
      message,
      type: "gm",
    });

    // Generate responses from each character
    const responses: GameMessage[] = [];

    for (const character of game.characters) {
      try {
        const context = this.buildContext(game);
        const response = await character.generateResponse(context, message);

        const characterMessage = this.addMessage(gameId, {
          speaker: character.name,
          message: response,
          type: "character",
        });

        responses.push(characterMessage);

        // Small delay between character responses for realism
        await new Promise((resolve) => setTimeout(resolve, 500));
      } catch (error) {
        console.error(`Error getting response from ${character.name}:`, error);
      }
    }

    game.currentTurn++;
    return responses;
  }

  private buildContext(game: GameState): string {
    const recentMessages = game.messages.slice(-5);
    const context = [
      `Scenario: ${game.gmPrompt}`,
      `Current turn: ${game.currentTurn}`,
      "Recent events:",
      ...recentMessages.map((m) => `${m.speaker}: ${m.message}`),
    ].join("\n");
    return context;
  }

  private addMessage(
    gameId: string,
    messageData: Omit<GameMessage, "id" | "timestamp">,
  ): GameMessage {
    const game = this.games.get(gameId);
    if (!game) throw new Error("Game not found");

    const message: GameMessage = {
      id: crypto.randomUUID(),
      timestamp: new Date(),
      ...messageData,
    };

    game.messages.push(message);
    return message;
  }

  getGame(gameId: string): GameState | undefined {
    return this.games.get(gameId);
  }

  getActiveGames(): string[] {
    return Array.from(this.games.entries())
      .filter(([_, game]) => game.isActive)
      .map(([id, _]) => id);
  }

  endGame(gameId: string): boolean {
    const game = this.games.get(gameId);
    if (game) {
      game.isActive = false;
      console.log(`Game ended: ${gameId}`);
      return true;
    }
    return false;
  }
}
```

The game manager handles all game-related logic: starting new games, processing player messages, and managing game state. When a player sends a message, the game manager builds the current context and passes it to each character in turn for response generation.
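Before wiring up WebSockets, you can drive a full round-trip from a plain script. A minimal sketch, which uses mock responses if no API key is configured:

```ts
import { GameManager } from "./src/game/GameManager.ts";
import { defaultCharacters } from "./src/config/characters.ts";

const manager = new GameManager();

const gameId = await manager.startNewGame(
  "The party explores an abandoned lighthouse during a storm.",
  defaultCharacters,
);

// Each character answers in turn; every reply comes back as a GameMessage
const responses = await manager.handlePlayerMessage(
  gameId,
  "What do we see when the door creaks open?",
);

for (const r of responses) {
  console.log(`${r.speaker}: ${r.message}`);
}
```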

## Add WebSocket Support

Real-time communication makes the roleplay experience more engaging. Create `src/server/WebSocketHandler.ts`:

src/server/WebSocketHandler.ts
```ts
import { GameManager } from "../game/GameManager.ts";

export interface WebSocketMessage {
  type: "start_game" | "send_message" | "join_game" | "get_game_state";
  gameId?: string;
  data?: any;
}

export class WebSocketHandler {
  private gameManager: GameManager;
  private connections: Map<string, WebSocket> = new Map();

  constructor(gameManager: GameManager) {
    this.gameManager = gameManager;
  }

  handleConnection(request: Request): Response {
    const { socket, response } = Deno.upgradeWebSocket(request);
    const connectionId = crypto.randomUUID();

    this.connections.set(connectionId, socket);

    socket.onopen = () => {
      console.log(`WebSocket connection opened: ${connectionId}`);
      this.sendMessage(socket, {
        type: "connection",
        data: { connectionId, message: "Connected to LLM Chat server" },
      });
    };

    socket.onmessage = async (event) => {
      try {
        const message: WebSocketMessage = JSON.parse(event.data);
        await this.handleMessage(socket, message);
      } catch (error) {
        console.error("Error handling WebSocket message:", error);
        this.sendError(socket, "Invalid message format");
      }
    };

    socket.onclose = () => {
      console.log(`WebSocket connection closed: ${connectionId}`);
      this.connections.delete(connectionId);
    };

    socket.onerror = (error) => {
      console.error(`WebSocket error for ${connectionId}:`, error);
    };

    return response;
  }

  private async handleMessage(socket: WebSocket, message: WebSocketMessage) {
    switch (message.type) {
      case "start_game":
        await this.handleStartGame(socket, message.data);
        break;
      case "send_message":
        await this.handleSendMessage(socket, message);
        break;
      case "get_game_state":
        await this.handleGetGameState(socket, message.gameId!);
        break;
      default:
        this.sendError(socket, `Unknown message type: ${message.type}`);
    }
  }

  private async handleStartGame(socket: WebSocket, data: any) {
    try {
      const { gmPrompt, characters } = data;
      const gameId = await this.gameManager.startNewGame(gmPrompt, characters);

      this.sendMessage(socket, {
        type: "game_started",
        data: {
          gameId,
          message:
            "Game started successfully! You can now send messages to interact with your characters.",
        },
      });
    } catch (error) {
      this.sendError(socket, `Failed to start game: ${error.message}`);
    }
  }

  private async handleSendMessage(
    socket: WebSocket,
    message: WebSocketMessage,
  ) {
    try {
      const { gameId, data } = message;
      if (!gameId) {
        this.sendError(socket, "Game ID required");
        return;
      }

      const responses = await this.gameManager.handlePlayerMessage(
        gameId,
        data.message,
      );

      this.sendMessage(socket, {
        type: "character_responses",
        data: { gameId, responses },
      });
    } catch (error) {
      this.sendError(socket, `Failed to process message: ${error.message}`);
    }
  }

  private async handleGetGameState(socket: WebSocket, gameId: string) {
    try {
      const game = this.gameManager.getGame(gameId);
      if (!game) {
        this.sendError(socket, "Game not found");
        return;
      }

      this.sendMessage(socket, {
        type: "game_state",
        data: {
          gameId,
          characters: game.characters.map((c) => c.getCharacterInfo()),
          messages: game.messages.slice(-10), // Last 10 messages
          isActive: game.isActive,
        },
      });
    } catch (error) {
      this.sendError(socket, `Failed to get game state: ${error.message}`);
    }
  }

  private sendMessage(socket: WebSocket, message: any) {
    if (socket.readyState === WebSocket.OPEN) {
      socket.send(JSON.stringify(message));
    }
  }

  private sendError(socket: WebSocket, error: string) {
    this.sendMessage(socket, {
      type: "error",
      data: { error },
    });
  }
}
```

Here we set up the WebSocket handler for connections and messages. WebSockets allow real-time, bidirectional communication between the client and server, making them ideal for interactive applications like chat apps or games. We send messages back and forth between the client and server to keep the game state in sync.
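The message protocol the handler expects is plain JSON. Here's a minimal TypeScript sketch of the client side (the full version lives in `app.js`, which we'll add below):

```ts
// Connect to the same host that served the page
const ws = new WebSocket(`ws://${location.host}`);
let gameId: string | undefined;

ws.onopen = () => {
  // Matches the "start_game" case in WebSocketHandler
  ws.send(JSON.stringify({
    type: "start_game",
    data: {
      gmPrompt: "A storm traps the party in a mountain inn.",
      characters: [
        { name: "Lyra", class: "Wizard", personality: "Curious and analytical." },
      ],
    },
  }));
};

ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (msg.type === "game_started") {
    gameId = msg.data.gameId;
    ws.send(JSON.stringify({
      type: "send_message",
      gameId,
      data: { message: "We huddle by the fire. Any ideas?" },
    }));
  } else if (msg.type === "character_responses") {
    for (const r of msg.data.responses) {
      console.log(`${r.speaker}: ${r.message}`);
    }
  }
};
```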

## Create the main server

Now let's tie everything together in `main.ts`:

main.ts
```ts
import { GameManager } from "./src/game/GameManager.ts";
import { WebSocketHandler } from "./src/server/WebSocketHandler.ts";
import { defaultCharacters } from "./src/config/characters.ts";

const gameManager = new GameManager();
const wsHandler = new WebSocketHandler(gameManager);

async function handler(req: Request): Promise<Response> {
  const url = new URL(req.url);

  // Handle WebSocket connections
  if (req.headers.get("upgrade") === "websocket") {
    return wsHandler.handleConnection(req);
  }

  // Serve static files and API endpoints
  switch (url.pathname) {
    case "/":
      return new Response(await getIndexHTML(), {
        headers: { "content-type": "text/html" },
      });

    case "/api/characters":
      return new Response(JSON.stringify(defaultCharacters), {
        headers: { "content-type": "application/json" },
      });

    case "/api/game/start":
      if (req.method === "POST") {
        try {
          const body = await req.json();
          const gameId = await gameManager.startNewGame(
            body.gmPrompt,
            body.characters,
          );
          return new Response(JSON.stringify({ gameId }), {
            headers: { "content-type": "application/json" },
          });
        } catch (error) {
          return new Response(
            JSON.stringify({ error: error.message }),
            {
              status: 400,
              headers: { "content-type": "application/json" },
            },
          );
        }
      }
      break;

    case "/api/game/message":
      if (req.method === "POST") {
        try {
          const body = await req.json();
          const responses = await gameManager.handlePlayerMessage(
            body.gameId,
            body.message,
          );
          return new Response(JSON.stringify({ responses }), {
            headers: { "content-type": "application/json" },
          });
        } catch (error) {
          return new Response(
            JSON.stringify({ error: error.message }),
            {
              status: 400,
              headers: { "content-type": "application/json" },
            },
          );
        }
      }
      break;

    default:
      return new Response("Not Found", { status: 404 });
  }

  return new Response("Method Not Allowed", { status: 405 });
}

async function getIndexHTML(): Promise<string> {
  try {
    return await Deno.readTextFile("./static/index.html");
  } catch {
    // Return a basic HTML template if the file doesn't exist
    return `<!DOCTYPE html>
<html lang="en">
<head>
    <title>LLM Roleplay Chat</title>
</head>
<body>
   <h1>Oops! Something went wrong.</h1>
</body>
</html>`;
  }
}

const port = parseInt(Deno.env.get("PORT") || "8000");

console.log(`🎭 LLM Chat server starting on http://localhost:${port}`);

Deno.serve({ port }, handler);
```

In the `main.ts` file we set up an HTTP server that upgrades WebSocket requests for real-time communication. The HTTP server serves static files and provides REST API endpoints, while the WebSocket handler manages real-time interactions with clients.
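The REST endpoints can also be exercised without a browser. A quick smoke test against a running server, assuming the default port 8000:

```ts
const base = "http://localhost:8000";

// Start a game over plain HTTP (mirrors the WebSocket "start_game" flow)
const startRes = await fetch(`${base}/api/game/start`, {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({
    gmPrompt: "The party wakes in a moonlit forest.",
    characters: [
      { name: "Finn", class: "Rogue", personality: "Witty and sneaky scout." },
    ],
  }),
});
const { gameId } = await startRes.json();

// Send a player message and print the characters' replies
const msgRes = await fetch(`${base}/api/game/message`, {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({ gameId, message: "Finn, scout ahead quietly." }),
});
console.log(await msgRes.json());
```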

## Add a frontend

The frontend of our app will live in the `static` directory. Create an `index.html`, an `app.js`, and a `style.css` file in the `static` directory.

### index.html

We'll create a very basic layout with a textarea to collect the user's scenario input and a section to show the response messages with a text input to send messages. Copy the content from this HTML file into your `index.html`.

### app.js

In `app.js`, we'll add the JavaScript to handle user input and display responses. Copy the content from this JS file into your `app.js`.

### style.css

We'll add some basic styles to make our app look nicer. Copy the content from this CSS file into your `style.css`.

## Run your application

Start your development server:

```sh
deno task dev
```

Your LLM chat application will be available at http://localhost:8000. The application will:

  1. Auto-detect your LLM provider based on available API keys
  2. Fall back to mock responses if no API keys are configured
  3. Handle rate limiting gracefully with retries and fallbacks
  4. Provide real-time interaction through WebSockets

## Deploy your application to the cloud

Now that you have your working LLM chat application, you can deploy it to the cloud with Deno Deploy.

For the best experience, deploy your app directly from GitHub, which will set up automated deployments.

Create a new GitHub repository, then initialize and push your app to GitHub:

```sh
git init -b main
git remote add origin https://github.com/<your_github_username>/<your_repo_name>.git
git add .
git commit -am 'initial commit'
git push -u origin main
```

Once your app is on GitHub, you can deploy it to Deno Deploy.

Don't forget to add your `OPENAI_API_KEY` or `ANTHROPIC_API_KEY` environment variables in the project "Settings".

For a walkthrough of deploying your app, check out the Deno Deploy tutorial.

## Testing

We've provided tests to verify your setup. Copy the `main_test.ts` file to your project directory and run the included tests:

```sh
deno task test
```
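If you want to extend the suite, a small additional test along these lines (an assumption about a useful check, not a copy of the provided file) verifies the mock fallback without needing any API keys:

```ts
import { assertEquals } from "@std/assert";
import { LLMProvider } from "./src/llm/LLMProvider.ts";

Deno.test("mock provider returns a non-empty string", async () => {
  // Force the mock provider so the test never touches a real API
  const llm = new LLMProvider({ provider: "mock" });
  const reply = await llm.generateResponse("Hello!");
  assertEquals(typeof reply, "string");
  assertEquals(reply.length > 0, true);
});
```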

🦕 You now have a working LLM chat application with real-time interaction, rate limiting, and error handling. Next, you can customize it to your own play style! Consider giving the LLM instructions on how to behave in different scenarios, or how to respond to specific user inputs. You can add these to the character configuration files.

You could also consider adding a database to store the conversation history forlong-term character and story development.
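One possible direction is a minimal sketch using Deno KV, Deno's built-in key-value store (this is an illustration, not part of the tutorial code, and may require the `--unstable-kv` flag depending on your Deno version):

```ts
// storage.ts (hypothetical): persist game messages with Deno KV
import type { GameMessage } from "./src/game/GameManager.ts";

const kv = await Deno.openKv();

export async function saveMessage(gameId: string, message: GameMessage) {
  // Keying by timestamp keeps kv.list() in chronological order
  await kv.set(
    ["games", gameId, "messages", message.timestamp.getTime(), message.id],
    message,
  );
}

export async function loadHistory(gameId: string): Promise<GameMessage[]> {
  const history: GameMessage[] = [];
  for await (
    const entry of kv.list<GameMessage>({
      prefix: ["games", gameId, "messages"],
    })
  ) {
    history.push(entry.value);
  }
  return history;
}
```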
