# MedAouadhi/Polybot

A simple bot server to run in your home network with super easy command definition and a sprinkle of intelligence.
An async bot server with straightforward command definitions and an intelligent chat mode using LLMs (currently OpenAI, with Llama 2 in mind), which can be hosted anywhere.
## Features

- Async server with dynamic IP support: the server's self-signed certificate is periodically monitored and renewed whenever the IP changes.
  - Keeps all access local to your network; no third-party hosting or routing service is needed.
  - Enables async webhooks instead of polling for updates.
- Intuitive and simple addition of commands: just add a new function that returns a string, that's literally it.
- Command mode to serve back the handlers you define.
- Chat mode to ask your large language model, using:
  - OpenAI models (through their API).
  - A self-hosted LLM such as Llama 2 (to come).
## Usage

Adding a command is as easy as annotating the handler function with the `#[handler(cmd = "/my_command")]` attribute. The commands also need to live in a module annotated with `#[bot_commands]`:
```rust
use bot_commands_macro::{bot_commands, handler};

#[bot_commands]
pub mod commands {
    use super::*;
    use polybot::types::BotUserActions;
    use rand::Rng;
    use crate::utils::get_ip;

    #[handler(cmd = "/ip")]
    async fn ip(_user_tx: impl BotUserActions, _: String) -> String {
        if let Ok(ip) = get_ip().await {
            return ip;
        }
        "Error getting the Ip address".to_string()
    }

    #[handler(cmd = "/dice")]
    async fn dice(_: impl BotUserActions, _: String) -> String {
        rand::thread_rng().gen_range(1..=6).to_string()
    }
}
```
```rust
mod bot_commands;

use anyhow::Result;
use bot_commands::commands::MyCommands;
use polybot::polybot::Polybot;
use polybot::telegram::bot::TelegramBot;
use std::error::Error;
use std::time::Duration;
use tracing::info;

// MyCommands is the macro-generated struct that holds the list of commands
// defined in bot_commands.rs and implements the BotCommands trait.
type MyBot = TelegramBot<MyCommands>;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // Configure tracing
    tracing_subscriber::fmt()
        .with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
        .init();

    let config = polybot::utils::get_config("config.toml").await?;
    let telegrambot = Polybot::<MyBot>::new(config)
        .with_webhook_monitoring(Duration::from_secs(60));

    info!("Starting Telegram Bot ...");
    telegrambot.start_loop().await?;
    Ok(())
}
```
Note that you can opt in to or out of webhook monitoring, which periodically checks that the self-signed certificate registered with the bot provider's servers (e.g. Telegram) is still valid, and generates and uploads a new one if the IP has changed.
If you choose to opt out (assuming you have a static IP and already have a certificate), then it's your job to set the webhook manually, e.g.:
```sh
curl -F "url=https://11.22.33.44/" -F "certificate=@YOURPUBLIC.pem" \
    "https://api.telegram.org/bot212132232:12345678912345/setWebhook"
```
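If you still need to create the self-signed certificate for this manual setup, a minimal `openssl` invocation might look like the following sketch. The file names match the config layout used in this README; the IP address in the subject is illustrative and should be your network's public IP:

```shell
# Generate a self-signed certificate valid for one year.
# CN must match the public IP the webhook points at (illustrative here).
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout YOURPRIVATE.key -out YOURPUBLIC.pem \
    -days 365 -subj "/CN=11.22.33.44"
```

The `-nodes` flag leaves the private key unencrypted so the server can load it without a passphrase prompt.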
## Chat mode

Chat mode is simply the LLM request command (if provided) without typing the command prefix each time; once in the mode, you can chat with the LLM just like any normal conversation.
To inform Polybot of that, we just need to tell it about the command that starts the chat mode, the one that exits it, and the LLM request command. We do that by adding boolean attributes in the `handler` procedural macro:
```rust
#[handler(cmd = "/ask", llm_request = true)]
async fn ask(_user_tx: impl BotUserActions, request: String) -> String {
    if request.is_empty() {
        return "Ask something!".to_string();
    }
    if let Ok(agent) = OpenAiModel::try_new() {
        if let Ok(answer) = agent.request(&request).await {
            return answer;
        }
        "Problem getting the agent response".to_string()
    } else {
        "Could not create the llm agent, check the API key".to_string()
    }
}

#[handler(cmd = "/chat", chat_start = true)]
async fn chat(_user_tx: impl BotUserActions, _: String) -> String {
    "Let's chat!".to_string()
}

#[handler(cmd = "/endchat", chat_exit = true)]
async fn endchat(_user_tx: impl BotUserActions, _request: String) -> String {
    "See ya!".to_string()
}
```
## Current commands

- `/ip`: Gives back the current public IPv4 address of the bot's network.
- `/affirm`: Sends back motivational quotes.
- `/dice`: Generates a random number between 1 and 6.
- `/temp [city]`: Gives back the current temperature of any city in the world.
- `/ask [prompt]`: Prompts the LLM agent with any single-shot request.
- `/chat`: Starts chat mode, which will interpret any following messages as prompts.
- `/endchat`: Exits chat mode.
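Commands like `/affirm` are not shown in the snippets above, but they follow the same pattern: a plain function that returns a `String`. The sketch below is a hypothetical, self-contained illustration of that pattern (the quote list and the clock-based selection are assumptions; the real bot would use the `rand` crate and the `#[handler]` attribute):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Hypothetical body of an `/affirm`-style handler: pick one quote and
// return it as a String, which the bot sends back as the reply.
fn affirm() -> String {
    let quotes = [
        "You can do it!",
        "Keep going, you're doing great.",
        "Every day is a fresh start.",
    ];
    // Cheap pseudo-random pick from the clock, to keep this sketch
    // dependency-free; the real handlers use rand::thread_rng().
    let idx = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .subsec_nanos() as usize
        % quotes.len();
    quotes[idx].to_string()
}

fn main() {
    println!("{}", affirm());
}
```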
## Motivation

The main driver for this project was a simple idea: I wanted to SSH into my workstation (installed in my home) from anywhere, without paying for a static IP or a domain, and without using any third-party software. I simply need the public IP address of my home network.
The problem is that the IP address can change at any time, so I needed software running locally in the network that publishes the IP address whenever I ask for it.
Enter social media bots! (just Telegram for now). What better interface than a chat conversation in an app I already use in my day-to-day life? With the bonus of adding more functionality to the bot whenever suitable.
I chose Rust as I am already on its learning journey, and I decided this is the perfect didactic exercise. I initially started this as a Telegram bot server, but then, to further push my understanding of the trait system, I decided to abstract it more to support multiple bots (in theory).
## Requirements

- Make sure to forward port 443 in the settings of your router or firewall. In my case, I forwarded all incoming requests on port 443 to my local port 4443.
- To make use of the LLM logic, you need to run the application with the `OPENAI_API_KEY` environment variable containing your OpenAI token, as Polybot makes use of `llm-chain`.
```sh
export OPENAI_API_KEY="sk-YOUR_OPEN_AI_KEY_HERE"
```
## Setup

First of all, you need to create your own bot with the help of BotFather:

- Send a `/newbot` message to the `BotFather` bot using your normal Telegram account (you can find more information here). This will give you the API token.
- Create a `config.toml` file in the root directory of the project, with this layout:
```toml
[bot]
name = "superbot"
token = "11111111112222222222333333333"

[server]
ip = "0.0.0.0"
port = 4443
privkeyfile = "YOURPRIVATE.key"
pubkeyfile = "YOURPUBLIC.pem"
```
You can also create a background service to run your bot. To do that:

- Create the file `/etc/systemd/system/homebot.service` with the contents of the respective file of this repo (change the paths accordingly).
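For reference, a minimal unit file for a bot like this might look like the sketch below. This is not the repo's actual service file; the paths, user, and service name are illustrative assumptions you should adapt:

```ini
[Unit]
Description=Polybot home bot server
After=network-online.target
Wants=network-online.target

[Service]
# Paths and environment are illustrative; point them at your build and key.
WorkingDirectory=/home/youruser/polybot
ExecStart=/home/youruser/polybot/target/release/polybot
Environment=OPENAI_API_KEY=sk-YOUR_OPEN_AI_KEY_HERE
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After creating the file, run `sudo systemctl daemon-reload` and then `sudo systemctl enable --now homebot.service` to start the bot and have it launch at boot.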