OpenAI API bindings for Lua
Bindings to the OpenAI HTTP API for Lua. Compatible with any HTTP library that supports LuaSocket's http request interface. Compatible with OpenResty using `lapis.nginx.http`.
AI Generated Disclaimer
Although this library was written by hand, large portions of the documentation, the test suite, and the GitHub workflow were fully generated by GPT-4, by pasting the source code of the entire library in as a prompt and asking it to generate documentation and tests. The final output was edited for clarity and syntax (in cases where GPT-4 struggled with writing MoonScript).
Install using LuaRocks:
```
luarocks install lua-openai
```
Quick usage:

```lua
local openai = require("openai")
local client = openai.new(os.getenv("OPENAI_API_KEY"))

local status, response = client:chat({
  {role = "system", content = "You are a Lua programmer"},
  {role = "user", content = "Write a 'Hello world' program in Lua"}
}, {
  model = "gpt-3.5-turbo", -- this is the default model
  temperature = 0.5
})

if status == 200 then
  -- the JSON response is automatically parsed into a Lua object
  print(response.choices[1].message.content)
end
```
A chat session instance can be created to simplify managing the state of a back and forth conversation with the ChatGPT API. Note that chat state is stored locally in memory, each new message is appended to the list of messages, and the output is automatically appended to the list for the next request.
```lua
local openai = require("openai")
local client = openai.new(os.getenv("OPENAI_API_KEY"))

local chat = client:new_chat_session({
  -- provide an initial set of messages
  messages = {
    {role = "system", content = "You are an artist who likes colors"}
  }
})

-- returns the string response
print(chat:send("List your top 5 favorite colors"))

-- the chat history is sent on subsequent requests to continue the conversation
print(chat:send("Excluding the colors you just listed, tell me your favorite color"))

-- the entire chat history is stored in the messages field
for idx, message in ipairs(chat.messages) do
  print(message.role, message.content)
end

-- You can stream the output by providing a callback as the second argument
-- the full response concatenated is also returned by the function
local response = chat:send("What's the most boring color?", function(chunk)
  io.stdout:write(chunk.content)
  io.stdout:flush()
end)
```
OpenAI allows sending a list of function declarations that the LLM can decide to call based on the prompt. The function calling interface must be used with chat completions and the gpt-4-0613 or gpt-3.5-turbo-0613 models, or later.
See https://github.com/leafo/lua-openai/blob/main/examples/example5.lua for a full example that implements basic math functions to compute the standard deviation of a list of numbers.
Here's a quick example of how to use functions in a chat exchange. First you will need to create a chat session with the `functions` option containing an array of available functions.
The functions are stored in the `functions` field on the chat object. If the functions need to be adjusted for future messages, the field can be modified directly (see the sketch after the example below).
```lua
local chat = client:new_chat_session({
  model = "gpt-3.5-turbo-0613",
  functions = {
    {
      name = "add",
      description = "Add two numbers together",
      parameters = {
        type = "object",
        properties = {
          a = { type = "number" },
          b = { type = "number" }
        }
      }
    }
  }
})
```
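Since `chat.functions` is a plain Lua table, it can be swapped out in place between messages. A minimal sketch (the replacement function here is hypothetical, for illustration only):

```lua
-- hypothetical: replace the available functions before the next message
chat.functions = {
  {
    name = "multiply", -- not part of the library, an illustrative declaration
    description = "Multiply two numbers together",
    parameters = {
      type = "object",
      properties = {
        a = { type = "number" },
        b = { type = "number" }
      }
    }
  }
}
```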
Any prompt you send will be aware of all available functions, and may request any of them to be called. If the response contains a function call request, then an object will be returned instead of the standard string return value.
```lua
local res = chat:send("Using the provided function, calculate the sum of 2923 + 20839")

if type(res) == "table" and res.function_call then
  -- The function_call object has the following fields:
  --   function_call.name --> name of function to be called
  --   function_call.arguments --> A string in JSON format that should match the parameter specification
  -- Note that res may also include a content field if the LLM produced a textual output as well

  local cjson = require "cjson"
  local name = res.function_call.name
  local arguments = cjson.decode(res.function_call.arguments)
  -- ... compute the result and send it back ...
end
```
You can evaluate the requested function & arguments and send the result back to the client so it can resume operation with a `role = "function"` message object:
Since the LLM can hallucinate every part of the function call, you'll want to do robust type validation to ensure that the function name and arguments match what you expect. Assume every stage can fail, including receiving malformed JSON for the arguments.
```lua
local name, arguments = ... -- the name and arguments extracted from above

if name == "add" then
  local value = arguments.a + arguments.b

  -- send the response back to the chat bot using a `role = "function"` message
  local cjson = require "cjson"
  local res = chat:send({
    role = "function",
    name = name,
    content = cjson.encode(value)
  })

  print(res) -- print the final output
else
  error("Unknown function: " .. name)
end
```
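To make the validation advice above concrete, here is a minimal sketch building on the `res` object from the earlier snippet. The helper name and the error-reporting convention are assumptions, not part of the library:

```lua
local cjson = require "cjson"

-- hypothetical helper: verify the decoded arguments have the expected shape
local function valid_add_args(args)
  return type(args) == "table"
    and type(args.a) == "number"
    and type(args.b) == "number"
end

-- cjson.decode raises an error on malformed JSON, so wrap it in pcall
local ok, arguments = pcall(cjson.decode, res.function_call.arguments)

if not ok or not valid_add_args(arguments) then
  -- hypothetical recovery: report the problem back as a function result so the model can retry
  print(chat:send({
    role = "function",
    name = res.function_call.name,
    content = cjson.encode({error = "invalid arguments"})
  }))
end
```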
Under normal circumstances the API will wait until the entire response is available before returning the response. Depending on the prompt this may take some time. The streaming API can be used to read the output one chunk at a time, allowing you to display content in real time as it is generated.
```lua
local openai = require("openai")
local client = openai.new(os.getenv("OPENAI_API_KEY"))

client:chat({
  {role = "system", content = "You work for Streak.Club, a website to track daily creative habits"},
  {role = "user", content = "Who do you work for?"}
}, {
  stream = true
}, function(chunk)
  io.stdout:write(chunk.content)
  io.stdout:flush()
end)

print() -- print a newline
```
The `openai` module returns a table with the following fields:
- `OpenAI`: A client for sending requests to the OpenAI API.
- `new`: An alias to `OpenAI` to create a new instance of the OpenAI client.
- `ChatSession`: A class for managing chat sessions and history with the OpenAI API.
- `VERSION = "1.1.0"`: The current version of the library.
The `OpenAI` class is the client for sending requests to the API. Its constructor, `openai.new(api_key, config)`, takes the following arguments:
- `api_key`: Your OpenAI API key.
- `config`: An optional table of configuration options, with the following shape:
  - `http_provider`: A string specifying the HTTP module name used for requests, or `nil`. If not provided, the library will automatically use `"lapis.nginx.http"` in an ngx environment, or `"ssl.https"` otherwise.
```lua
local openai = require("openai")
local api_key = "your-api-key"
local client = openai.new(api_key)
```
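For example, to pin the HTTP provider explicitly rather than relying on auto-detection, a minimal sketch using the LuaSec module name from the option description above:

```lua
-- force requests to go through LuaSec's ssl.https instead of auto-detection
local client = openai.new(api_key, {
  http_provider = "ssl.https"
})
```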
`client:new_chat_session(...)` creates a new `ChatSession` instance. A chat session is an abstraction over the chat completions API that stores the chat history. You can append new messages to the history and request completions to be generated from it. By default, the completion is appended to the history.
`client:chat(messages, opts, chunk_callback)` sends a request to the /chat/completions endpoint.
- `messages`: An array of message objects.
- `opts`: Additional options for the chat, passed directly to the API (e.g. model, temperature, etc.); see https://platform.openai.com/docs/api-reference/chat
- `chunk_callback`: A function to be called for parsed streaming output when `stream = true` is passed in `opts`.
Returns the HTTP status, the response object, and the output headers. The response object will be decoded from JSON if possible; otherwise the raw string is returned.
Sends a request to the /completions endpoint.
- `prompt`: The prompt for the completion.
- `opts`: Additional options for the completion, passed directly to the API (e.g. model, temperature, etc.); see https://platform.openai.com/docs/api-reference/completions
Returns the HTTP status, the response object, and the output headers. The response object will be decoded from JSON if possible; otherwise the raw string is returned.
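A minimal sketch of a call to this endpoint. The method name `completion` and the model name are assumptions based on the parameter list above, not confirmed by this document:

```lua
-- assumed method name for the /completions endpoint
local status, response = client:completion("Say hello in Lua", {
  model = "gpt-3.5-turbo-instruct" -- assumed completions-capable model
})

if status == 200 then
  -- per the OpenAI completions API, the text lives in choices[1].text
  print(response.choices[1].text)
end
```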
Sends a request to the /embeddings endpoint.
- `input`: A single string or an array of strings.
- `opts`: Additional options for the request, passed directly to the API (e.g. model); see https://platform.openai.com/docs/api-reference/embeddings
Returns the HTTP status, the response object, and the output headers. The response object will be decoded from JSON if possible; otherwise the raw string is returned.
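A minimal sketch; the method name `embedding` and the response shape follow the OpenAI embeddings API and are assumptions, not confirmed by this document:

```lua
-- assumed method name for the /embeddings endpoint
local status, response = client:embedding("Hello world", {
  model = "text-embedding-ada-002"
})

if status == 200 then
  -- per the OpenAI API, the vector lives at response.data[1].embedding
  print(#response.data[1].embedding)
end
```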
This class manages chat sessions and history with the OpenAI API. Typically created with `client:new_chat_session`.
The field `messages` stores an array of chat messages representing the chat history. Each message object must conform to the following structure:
- `role`: A string representing the role of the message sender. It must be one of: "system", "user", "assistant", or "function".
- `content`: A string containing the content of the message.
- `name`: An optional string representing the name of the message sender. If not provided, it should be `nil`.
For example, a valid message object might look like this:
{role="user",content="Tell me a joke",name="John Doe"}Constructor for the ChatSession.
- `client`: An instance of the OpenAI client.
- `opts`: An optional table of options:
  - `messages`: An initial array of chat messages
  - `functions`: A list of function declarations
  - `temperature`: The temperature setting
  - `model`: Which chat completion model to use, e.g. `gpt-4`, `gpt-3.5-turbo`
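Putting the options together, a session with every documented option might be created like this (a minimal sketch via `new_chat_session`; the model, temperature, and prompt values are illustrative):

```lua
local chat = client:new_chat_session({
  model = "gpt-3.5-turbo",
  temperature = 0.7,
  messages = {
    {role = "system", content = "You are a helpful assistant"}
  },
  functions = {} -- optional function declarations, as described above
})
```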
Appends a message to the chat history.
- `m`: A message object.
Returns the last message in the chat history.
Appends a message to the chat history and triggers a completion with `generate_response`, returning the response as a string. On failure, returns `nil`, an error message, and the raw request response.
If the response includes a `function_call`, then the entire message object is returned instead of a string of the content. You can return the result of the function by passing a `role = "function"` message object to the `send` method.
- `message`: A message object or a string.
- `stream_callback`: (optional) A function to enable streaming output.
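Since the method returns `nil` plus an error message on failure, a call can be guarded like this (a minimal sketch):

```lua
local response, err = chat:send("Hello")

if not response then
  print("request failed: " .. tostring(err))
else
  print(response)
end
```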
By providing a `stream_callback`, the request will run in streaming mode. This function receives chunks as they are parsed from the response.
These chunks have the following format:
- `content`: A string containing the text of the assistant's generated response.
For example, a chunk might look like this:
{content="This is a part of the assistant's response.",}Calls the OpenAI API to generate the next response for the stored chat history.Returns the response as a string. On failure, returnsnil, an error message,and the raw request response.
- `append_response`: Whether the response should be appended to the chat history (default: true).
- `stream_callback`: (optional) A function to enable streaming output.
See `chat:send` for details on the `stream_callback`.
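A minimal sketch: add a message to the stored history directly, then ask `generate_response` for the next reply. Inserting into `chat.messages` with `table.insert` is an assumption; the document only states that this field holds the history:

```lua
-- add a user message to the stored history, then generate the reply
table.insert(chat.messages, {role = "user", content = "Tell me a joke"})

local response, err = chat:generate_response(true)
print(response or err)
```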