DEV Community

Megan Lee for LogRocket

Posted on • Originally published at blog.logrocket.com
Building an AI agent for your frontend project

Every day, new AI products and tools emerge, making it feel like AI is taking the world by storm — and for good reason. AI assistants can be incredibly useful across various domains, including ecommerce, customer support, media and content creation, marketing, education, and more. The significance and utility of AI are undeniable.

Having expertise in AI in today’s rapidly evolving landscape can give you a huge advantage. The skill to build and ship AI agents is becoming increasingly sought after as the demand for AI-powered solutions continues to grow. The good news is that you don’t need to be an AI/ML expert to build AI agents and products. With the right toolset, building AI agents can be both accessible and enjoyable.

This tutorial will guide you through building AI agents from scratch. We’ll build, deploy, implement, and test a webpage FAQ generator AI agent within a frontend project. This agent will generate a selected number of FAQs based on specified topics and keywords. We’ll also explore how to enhance the accuracy of an AI agent by instructing it to use a predefined set of documents instead of relying solely on web data.

As with any development project, success hinges on choosing the right platform and tools. Here’s our powerful tech stack for building AI agents:

  • OpenAI: The most widely used AI model provider. To follow along with this tutorial, you'll need an OpenAI account
  • BaseAI: A free and open source web AI framework for building serverless AI agents with Node.js and TypeScript
  • Langbase: A powerful serverless platform for building and deploying AI products

The BaseAI and Langbase duo is a powerful, flexible, professional toolset for building and deploying AI agents and products with great DX. Developers can build, mix and match, test, and deploy AI agents, and use them to create powerful AI products quickly, easily, and at low cost. All major LLMs are supported and can be used through one unified API.

Get excited, because by the end of this post, you'll be well on your way to creating your very own AI agents like a pro. Let's get started!

What is an AI agent?

An AI agent is a software program that uses artificial intelligence to perform tasks or make decisions on its own, often interacting with users or systems. It can be a chatbot, virtual assistant, or any tool that learns from data and automates processes, making things easier and faster.

What are the benefits of building your own AI agent?

  • Being able to leverage all the power of LLMs without the limits of third-party tools
  • Flexible customization tailored to your specific project needs/use cases
  • Seamless integration with your workflow and toolset
  • Full control over the processed data
  • Better privacy and security
  • Scalability that meets your changing/growing needs
  • Cost efficiency in the long run
  • Full alignment with your brand identity

An introduction to the BaseAI framework

To use BaseAI efficiently, you need to understand the main functionalities it offers:

  • AI pipes: These are serverless AI agents. They provide a unified API for all LLMs and work with any language and framework
  • AI memory: This is a serverless RAG (Retrieval Augmented Generation) agent that provides long-term memory functionality with the ability to obtain, process, retain, and retrieve information
  • AI tool: This is a function inside your codebase used to perform tasks that the AI model can't handle alone

In this tutorial, we’ll explore the first two: AI pipes and memory.

Getting started

BaseAI works closely with Langbase, which provides a versatile AI Studio for building, testing, and deploying AI agents. The first step is to create a free account with Langbase. Once you have an account, you need to set up two things: your Langbase API key and your OpenAI API key (we'll add both to the project's .env file shortly).

Now you are ready to start using BaseAI!

Let’s create a new Node project:

```shell
mkdir building-ai-agents && cd building-ai-agents
npm init -y
npm install dotenv
```

Now, let’s initialize the new BaseAI project inside:

```shell
npx baseai@latest init
```

Normally, the base project structure looks like this:

```
ROOT (of your app)
├── baseai
|  ├── baseai.config.ts
|  ├── memory
|  ├── pipes
|  └── tools
├── .env (your env file)
└── package.json
```

Right now, your project may differ a bit. You may notice that in your project, the memory, pipes, and tools directories are missing. Don't worry — these are auto-generated when you create at least one memory, pipe, or tool, respectively.

Also, before you start building AI agents, you need to add the Langbase API key and OpenAI API key to the project's .env file. Rename the env.baseai.example file to .env and put the API keys in the appropriate places:

```shell
# !! SERVER SIDE ONLY !!
# Keep all your API keys secret — use only on the server side.

# TODO: ADD: Both in your production and local env files.
# Langbase API key for your User or Org account.
# How to get this API key https://langbase.com/docs/api-reference/api-keys
LANGBASE_API_KEY="YOUR-LANGBASE-KEY"

# TODO: ADD: LOCAL ONLY. Add only to local env files.
# Following keys are needed for local pipe runs. For providers you are using.
# For Langbase, please add the key to your LLM keysets.
# Read more: Langbase LLM Keysets https://langbase.com/docs/features/keysets
OPENAI_API_KEY="YOUR-OPENAI-KEY"
ANTHROPIC_API_KEY=
COHERE_API_KEY=
FIREWORKS_API_KEY=
GOOGLE_API_KEY=
GROQ_API_KEY=
MISTRAL_API_KEY=
PERPLEXITY_API_KEY=
TOGETHER_API_KEY=
XAI_API_KEY=
```

N.B., the baseai.config.ts file provides several configuration settings, one of which is to change the name of your .env file to suit your needs. You can do this by setting the envFilePath property.
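For example, to load keys from a differently named env file, the config might look something like this minimal sketch — keep whatever other settings the init command generated for you, and note that '.env.local' here is just an example value:

```typescript
// baseai.config.ts — minimal sketch; '.env.local' is an example file name.
// Keep the rest of the settings the init command generated for you.
export default {
  // Tell BaseAI to read API keys from this file instead of the default .env
  envFilePath: '.env.local',
};
```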

Building a webpage FAQ generator AI agent locally using BaseAI

In this section, we’ll create your first AI agent — a webpage FAQ generator that generates a specified number of question-answer pairs about specific topics and keywords, with the selected tone.

Creating and configuring an AI pipe

To create a new pipe, run the following:

```shell
npx baseai@latest pipe
```

The CLI will ask you for the name and description of the pipe, and whether it will be public or private. Set the name to "faqs-generator" and the description to "A webpage FAQs generator". Finally, make the pipe private. Once the pipe is created, you can find it in baseai/pipes/faqs-generator.ts. Open it and replace the content with this:

```typescript
import { PipeI } from '@baseai/core';

const pipeFaqsGenerator = (): PipeI => ({
  // Replace with your API key https://langbase.com/docs/api-reference/api-keys
  apiKey: process.env.LANGBASE_API_KEY!,
  name: 'faqs-generator',
  description: 'A webpage FAQs generator',
  status: 'private',
  model: 'openai:gpt-4o-mini',
  stream: true,
  json: false,
  store: true,
  moderate: true,
  top_p: 1,
  max_tokens: 1000,
  temperature: 0.7,
  presence_penalty: 1,
  frequency_penalty: 1,
  stop: [],
  tool_choice: 'auto',
  parallel_tool_calls: true,
  messages: [
    {
      role: 'system',
      content: `You're a helpful AI assistant. Generate {{count}} frequently asked questions (FAQs) about {{topic}} using the keywords {{keywords}}. Each FAQ should consist of a question followed by a concise answer. Ensure the answers are clear, accurate, and helpful for someone who is unfamiliar with the topic. Keep the tone {{tone}}.`
    }
  ],
  variables: [
    { name: 'count', value: '' },
    { name: 'topic', value: '' },
    { name: 'keywords', value: '' },
    { name: 'tone', value: '' }
  ],
  memory: [],
  tools: []
});

export default pipeFaqsGenerator;
```

As you can see, the system prompt has now changed to suit our specific needs for FAQ generation:

```
You're a helpful AI assistant. Generate {{count}} frequently asked questions (FAQs) about {{topic}} using the keywords {{keywords}}. Each FAQ should consist of a question followed by a concise answer. Ensure the answers are clear, accurate, and helpful for someone who is unfamiliar with the topic. Keep the tone {{tone}}.
```

BaseAI allows you to use variables in your prompts. You can turn any text into a variable by putting it between {{ }}. So in our case, we need to create four variables:

  • count: Sets the number of the FAQs we want to be generated
  • topic: Sets the main topic of the FAQs
  • keywords: Adds additional keywords to make the topic more specific
  • tone: Defines the tone of the generated content

These variables are provided when you run the pipe. We’ll explore this in a moment.
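Conceptually, prompt variables are plain string substitution — each {{name}} placeholder in the template is replaced by its value at run time. Here's a small sketch of the idea (not BaseAI's actual implementation):

```typescript
// Conceptual sketch of {{variable}} substitution — not BaseAI's actual code.
type Variable = { name: string; value: string };

function renderPrompt(template: string, variables: Variable[]): string {
  // Replace every occurrence of {{name}} with the matching value
  return variables.reduce(
    (text, v) => text.split(`{{${v.name}}}`).join(v.value),
    template
  );
}

const template = 'Generate {{count}} FAQs about {{topic}}. Keep the tone {{tone}}.';
const prompt = renderPrompt(template, [
  { name: 'count', value: '3' },
  { name: 'topic', value: 'money' },
  { name: 'tone', value: 'informative' }
]);
console.log(prompt); // Generate 3 FAQs about money. Keep the tone informative.
```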

Integrating the pipe

Once we have created the pipe, we need to put it into action. Create an index.ts file in the root and add this content:

```typescript
import 'dotenv/config';
import { Pipe, getRunner } from '@baseai/core';
import pipeFaqsGenerator from './baseai/pipes/faqs-generator';

const pipe = new Pipe(pipeFaqsGenerator());

async function main() {
  const { stream } = await pipe.run({
    messages: [],
    variables: [
      { name: 'count', value: '3' },
      { name: 'topic', value: 'money' },
      { name: 'keywords', value: 'investment' },
      { name: 'tone', value: 'informative' }
    ],
    stream: true
  });

  const runner = getRunner(stream);

  runner.on('connect', () => {
    console.log('Stream started.\n');
  });

  runner.on('content', content => {
    process.stdout.write(content);
  });

  runner.on('end', () => {
    console.log('\nStream ended.');
  });

  runner.on('error', error => {
    console.error('Error:', error);
  });
}

main();
```

Here, we run the pipe with the variables we want to use. We want to stream the response, so we also set the stream property to true. We then extract the stream from the response, turn it into a runner, and use the runner's events to stream the content. Let's try it out.

Running and testing the pipe

To run the pipe, you first need to start the dev server:

```shell
npx baseai@latest dev
```

Then, in a new terminal, run the index.ts file:

```shell
npx tsx index.ts
```

In a moment you should see the streamed content in your CLI. Congratulations! You have just built your first AI agent with ease.

Deploying the FAQ generator AI agent to Langbase

BaseAI gives you the ability to build and test AI agents locally, but to use an agent in production, you need to deploy it to Langbase. Here's how. First, authenticate with your Langbase account:

```shell
npx baseai@latest auth
```

Once you have successfully authenticated, deploying your pipe is a matter of running the following command:

```shell
npx baseai@latest deploy
```

Once deployed, you can access your pipe and explore all its settings and features in the Langbase AI Studio. This gives you much more power to explore and experiment with your AI agent in a user-friendly environment.

Building an AI agent with RAG

The FAQ generator is great for general questions, but what if customers want to ask specific questions about your products or services? For that, you can create a pipe with memory that implements RAG.

What is RAG?

RAG, or Retrieval-Augmented Generation, allows you to chat with your data. Imagine I've read a book and you then ask me questions about it: I would use my memory of the book's content to answer. Similarly, when you ask a RAG AI agent a question, it uses its embedded memory to retrieve the information needed for the answer. This reduces AI hallucinations and produces more accurate, relevant responses. In our project, we're going to create a pipe with memory in which we'll embed a set of documents to be used as a knowledge base.
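To make the retrieval step concrete, here's a toy sketch of what happens under the hood of any RAG system: document chunks are stored with embeddings, the query is embedded too, and the chunk with the highest cosine similarity is used as context for the model. This is an illustration with made-up 2-D embeddings, not BaseAI's actual implementation:

```typescript
// Toy illustration of RAG retrieval — hypothetical data, not the BaseAI API.
type Chunk = { text: string; embedding: number[] };

// Cosine similarity between two vectors
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the chunk most similar to the query embedding
function retrieve(query: number[], chunks: Chunk[]): Chunk {
  return chunks.reduce((best, c) =>
    cosine(query, c.embedding) > cosine(query, best.embedding) ? c : best
  );
}

// Made-up 2-D embeddings for illustration only
const chunks: Chunk[] = [
  { text: 'Refund policy: 30 days.', embedding: [1, 0] },
  { text: 'Shipping takes 3-5 days.', embedding: [0, 1] },
];
const queryEmbedding = [0.9, 0.1]; // embedding of "How do refunds work?"
const context = retrieve(queryEmbedding, chunks).text;
console.log(context); // Refund policy: 30 days.
```

The retrieved chunk is then prepended to the prompt as CONTEXT, which is exactly what the RAG system prompt we'll see below instructs the model to answer from.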

Creating AI memory

To create a new memory, run the following:

```shell
npx baseai@latest memory
```

The CLI will ask you for the memory name and description. You can call it "knowledge-base" and use whatever description you want. Leave the answer for "Do you want to create memory from current project git repository?" as "no". This will create a baseai/memory/knowledge-base directory with an index.ts file inside:

```typescript
import { MemoryI } from '@baseai/core';

const memoryKnowledgeBase = (): MemoryI => ({
  name: 'knowledge-base',
  description: 'My knowledge base',
  git: {
    enabled: false,
    include: ['documents/**/*'],
    gitignore: false,
    deployedAt: '',
    embeddedAt: ''
  }
});

export default memoryKnowledgeBase;
```

The next step is to add your data. Open this tutorial of mine, copy all the text, and put it in a tailwind-libraries.txt file. Next, add the file in baseai/memory/knowledge-base/documents. N.B., Langbase currently supports .txt, .pdf, .md, .csv, and all major plain code files; a single file can be a maximum of 10MB. Now we need to embed the memory to generate embeddings for the documents. To create memory embeddings, run the following:

```shell
npx baseai@latest embed -m knowledge-base
```

Make sure to add OPENAI_API_KEY to the .env file at the root of your project. This is required to generate embeddings for the documents in the memory. BaseAI will generate embeddings for the documents and create a semantic index for search. Now let's create a new pipe and add the memory we've just created to it:

```shell
npx baseai@latest pipe
```

Set the pipe name to "knowledge-base-rag". BaseAI automatically detects when you have memory, so it will ask you which one you want to use in your pipe. Select knowledge-base, and use this for the system prompt:

```
You are a helpful AI assistant. You provide the best, concise, and correct answers to the user's questions.
```

Here is the generated pipe:

```typescript
import { PipeI } from '@baseai/core';
import knowledgeBaseMemory from '../memory/knowledge-base';

const pipeKnowledgeBaseRag = (): PipeI => ({
  // Replace with your API key https://langbase.com/docs/api-reference/api-keys
  apiKey: process.env.LANGBASE_API_KEY!,
  name: 'knowledge-base-rag',
  description: 'A knowledge base with RAG functionality',
  status: 'private',
  model: 'openai:gpt-4o-mini',
  stream: true,
  json: false,
  store: true,
  moderate: true,
  top_p: 1,
  max_tokens: 1000,
  temperature: 0.7,
  presence_penalty: 1,
  frequency_penalty: 1,
  stop: [],
  tool_choice: 'auto',
  parallel_tool_calls: true,
  messages: [
    {
      role: 'system',
      content: `You are a helpful AI assistant. You provide the best, concise, and correct answers to the user's questions.`
    },
    {
      role: 'system',
      name: 'rag',
      content:
        "Below is some CONTEXT for you to answer the questions. ONLY answer from the CONTEXT. CONTEXT consists of multiple information chunks. Each chunk has a source mentioned at the end.\n\nFor each piece of response you provide, cite the source in brackets like so: [1].\n\nAt the end of the answer, always list each source with its corresponding number and provide the document name, like so: [1] Filename.doc.\n\nIf you don't know the answer, just say that you don't know. Ask for more context and better questions if needed."
    }
  ],
  variables: [],
  memory: [knowledgeBaseMemory()],
  tools: []
});

export default pipeKnowledgeBaseRag;
```

BaseAI automatically adds a RAG system prompt that suits most use cases, but you can customize it to your needs. It helps the AI model understand the context of the conversation and generate responses that are relevant, accurate, and grammatically correct. Now, let's test it. Create an index-rag.ts file in the root and add the following content:

```typescript
import 'dotenv/config';
import { Pipe, getRunner } from '@baseai/core';
import pipeKnowledgeBaseRag from './baseai/pipes/knowledge-base-rag';

const pipe = new Pipe(pipeKnowledgeBaseRag());

async function main() {
  const { stream } = await pipe.run({
    messages: [
      {
        role: 'user',
        content: 'Which Tailwind CSS component library provides the most components?'
      }
    ],
    stream: true
  });

  const runner = getRunner(stream);

  runner.on('connect', () => {
    console.log('Stream started.\n');
  });

  runner.on('content', content => {
    process.stdout.write(content);
  });

  runner.on('end', () => {
    console.log('\nStream ended.');
  });

  runner.on('error', error => {
    console.error('Error:', error);
  });
}

main();
```

Now, to run the pipe, make sure the dev server is running. Then run the index-rag.ts file:

```shell
npx tsx index-rag.ts
```

After a moment, you should see something similar in your terminal:

```
**Tailwind Elements** provides the most components, with a huge set of more than 500 UI components. These components range from very simple elements like headings and icons to more complex ones like charts and complete forms, making it suitable for almost any kind of project [1].

Sources:
[1] tailwind-libraries.txt
```

Here, the AI agent uses the provided data to answer the question.

Building a basic AI-powered Next.js app

In this section, we’ll explore a simple example of how you can use AI agents in a Next.js frontend app. Start by running the following:

```shell
npx create-next-app@latest
```

Accept all default settings. When the app is set up, create an actions.ts file in the app directory with the following content:

```typescript
'use server';

export async function generateCompletion(
  count: string,
  topic: string,
  keywords: string,
  tone: string
) {
  const url = 'https://api.langbase.com/v1/pipes/run';
  const apiKey = 'PIPE-API-KEY';

  const data = {
    messages: [],
    variables: [
      { name: 'count', value: count },
      { name: 'topic', value: topic },
      { name: 'keywords', value: keywords },
      { name: 'tone', value: tone }
    ]
  };

  const response = await fetch(url, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`
    },
    body: JSON.stringify(data)
  });

  const resText = await response.json();
  return resText;
}
```

Here, we have a function that calls the deployed pipe and returns the AI completion. You need to replace the PIPE-API-KEY placeholder with your Pipe API key. To get it, open your pipe in Langbase, click on the API tab next to the selected Pipe tab, and copy the API key from there. Now, open page.tsx and replace its contents with the following:

```tsx
'use client';

import { useState } from 'react';
import { generateCompletion } from './actions';

// Shared Tailwind classes for all four text inputs
const inputClasses =
  'w-1/2 m-3 rounded-lg border border-slate-300 bg-slate-200 p-3 text-sm text-slate-800 shadow-md focus:border-blue-600 focus:outline-none focus:ring-1 focus:ring-blue-600 dark:border-slate-200/10 dark:bg-slate-800 dark:text-slate-200 dark:placeholder-slate-400 dark:focus:border-blue-600 sm:text-base';

export default function Home() {
  const [count, setCount] = useState('');
  const [topic, setTopic] = useState('');
  const [keywords, setKeywords] = useState('');
  const [tone, setTone] = useState('');
  const [completion, setCompletion] = useState('');
  const [loading, setLoading] = useState(false);

  const handleGenerateCompletion = async () => {
    setLoading(true);
    const { completion } = await generateCompletion(count, topic, keywords, tone);
    setCompletion(completion);
    setLoading(false);
  };

  return (
    <main className="flex min-h-screen flex-col items-center justify-between p-24">
      <div className="flex flex-col items-center">
        <h1 className="text-4xl font-bold">Generate FAQs</h1>
        <p className="mt-4 text-lg">
          Enter a topic and click the button to generate FAQs using LLM
        </p>
        <input
          type="text"
          placeholder="Enter a topic"
          className={inputClasses}
          value={topic}
          onChange={e => setTopic(e.target.value)}
        />
        <input
          type="text"
          placeholder="Enter keywords"
          className={inputClasses}
          value={keywords}
          onChange={e => setKeywords(e.target.value)}
        />
        <input
          type="text"
          placeholder="Enter a tone"
          className={inputClasses}
          value={tone}
          onChange={e => setTone(e.target.value)}
        />
        <input
          type="text"
          placeholder="Enter a count"
          className={inputClasses}
          value={count}
          onChange={e => setCount(e.target.value)}
        />
        <button
          onClick={handleGenerateCompletion}
          className="inline-flex items-center gap-x-2 m-3 rounded-lg bg-blue-600 px-4 py-2.5 text-center text-base font-medium text-slate-50 hover:bg-blue-800 focus:ring-4 focus:ring-blue-200 dark:focus:ring-blue-900"
        >
          Generate FAQs
          <svg
            xmlns="http://www.w3.org/2000/svg"
            className="h-4 w-4"
            viewBox="0 0 24 24"
            strokeWidth="2"
            stroke="currentColor"
            fill="none"
            strokeLinecap="round"
            strokeLinejoin="round"
          >
            <path stroke="none" d="M0 0h24v24H0z" fill="none"></path>
            <path d="M10 14l11 -11"></path>
            <path d="M21 3l-6.5 18a.55 .55 0 0 1 -1 0l-3.5 -7l-7 -3.5a.55 .55 0 0 1 0 -1l18 -6.5"></path>
          </svg>
        </button>
        {loading && <p className="mt-4">Loading...</p>}
        {completion && (
          <textarea
            readOnly
            value={completion}
            cols={100}
            rows={20}
            className="w-full bg-slate-50 p-10 text-base text-slate-900 focus:outline-none dark:bg-slate-800 dark:text-slate-200 dark:placeholder-slate-400"
          />
        )}
      </div>
    </main>
  );
}
```

Here we created the necessary inputs for the pipe's variables and added a textarea for the AI-generated response. Before running the app, go to the pipe in Langbase and, in the right sidebar in the Meta panel, turn the Stream mode to Off. Now run the app and test it:

```shell
npm run dev
```

Here is what it should look like: [screenshot: FAQ Generator]. Here is a prompt example: [screenshot: FAQ Generator prompt example]. And here is the AI completion response: [screenshot: FAQ Generator's AI response].

Conclusion

In this tutorial, we explored the benefits of building your own AI agents. We did so by building a simple but powerful webpage FAQ generator. We also learned how to add memory to an AI agent to take advantage of RAG. Finally, we integrated the FAQ generator AI agent into a Next.js app. The future belongs to AI, and gaining expertise in this area will offer you a big advantage. To learn more about building AI agents, don't forget to check out the BaseAI and Langbase documentation.


