Description
I have some concerns about the way some of this code is implemented.
To name the two I've noticed so far: the llm_math and sql_database chains.
It seems these two will blindly execute whatever code the LLM feeds back to them.
This is a major security risk, since it opens anyone who uses these chains up to remote code execution (the Python one more than the SQL one).
With a MITM attack, anyone can return a piece of code in the reply, pretending it came from the bot. And even without that, a well-crafted prompt can probably make it execute code too, by making the LLM return text that follows the expected prompt pattern but contains custom Python code.
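To make the risk concrete, here is a minimal sketch (this is not the actual LangChain code, just a hypothetical stand-in for a chain that trusts the model's output): anything the "reply" string contains is executed as Python statements, so an injected reply runs with the full privileges of the process.

```python
def run_llm_math(llm_reply: str) -> dict:
    """Hypothetical stand-in for a chain that trusts the model's output."""
    scope: dict = {}
    exec(llm_reply, {}, scope)  # executes whatever the reply contains
    return scope

# The intended use: the model returns a small math snippet.
print(run_llm_math("answer = 37 * 13")["answer"])  # 481

# But a reply injected via MITM or prompt injection runs just as happily:
payload = "import os\nanswer = os.getcwd()  # could be os.system(...)"
print(run_llm_math(payload)["answer"])  # arbitrary code ran
```

The point is that `exec()` makes no distinction between the arithmetic the chain expects and arbitrary statements an attacker supplies.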
I understand that this is in very early beta, but I've already seen this used in different places, due to ChatGPT's popularity.
In any case, it might be beneficial to switch from exec() to eval() for the Python calculator, since eval() only evaluates expressions rather than executing arbitrary statements, which is much closer to what a math calculator needs (though eval() on untrusted input is still not a full sandbox).
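A sketch of what that restriction could look like, assuming the calculator only ever needs to evaluate math expressions. `eval()` rejects statements outright (so payloads like `import os` fail to even parse), and stripping `__builtins__` removes the obvious escape hatches; note this narrows the attack surface but is not a guarantee of safety.

```python
import math

def restricted_eval(expr: str) -> float:
    # Hypothetical calculator helper, not LangChain's implementation.
    # Empty __builtins__ removes open/__import__/etc.; expose math only.
    return eval(expr, {"__builtins__": {}, "math": math}, {})

print(restricted_eval("2 ** 10"))        # 1024
print(restricted_eval("math.sqrt(16)"))  # 4.0

try:
    restricted_eval("import os")  # a statement is a SyntaxError in eval()
except SyntaxError:
    print("statement rejected")
```

A truly robust fix would parse the expression with the `ast` module and whitelist only arithmetic nodes, but even the simple switch above blocks the statement-injection case described here.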