Movatterモバイル変換


[0]ホーム

URL:


Skip to content

Navigation Menu

Search code, repositories, users, issues, pull requests...

Provide feedback

We read every piece of feedback, and take your input very seriously.

Saved searches

Use saved searches to filter your results more quickly

Sign up
#

llm-security

Here are 83 public repositories matching this topic...

llm-app

Ready-to-run cloud templates for RAG, AI pipelines, and enterprise search with live data. 🐳Docker-friendly.⚡Always in sync with Sharepoint, Google Drive, S3, Kafka, PostgreSQL, real-time data APIs, and more.

  • UpdatedMar 18, 2025
  • Jupyter Notebook
giskard

the LLM vulnerability scanner

  • UpdatedMar 17, 2025
  • Python

[CCS'24] A dataset consists of 15,140 ChatGPT prompts from Reddit, Discord, websites, and open-source datasets (including 1,405 jailbreak prompts).

  • UpdatedDec 24, 2024
  • Jupyter Notebook
agentic_securitybeelzebub

An easy-to-use Python framework to generate adversarial jailbreak prompts.

  • UpdatedSep 2, 2024
  • Python

Papers and resources related to the security and privacy of LLMs 🤖

  • UpdatedNov 27, 2024
  • Python
FuzzyAI

A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jailbreaks in their LLM APIs.

  • UpdatedMar 12, 2025
  • Jupyter Notebook

⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs

  • UpdatedJan 31, 2024
  • Python

This repository provides implementation to formalize and benchmark Prompt Injection attacks and defenses

  • UpdatedJan 22, 2025
  • Python

Toolkits to create a human-in-the-loop approval layer to monitor and guide AI agents workflow in real-time.

  • UpdatedNov 28, 2024
  • Svelte

Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to potentially execute offline remote code execution without running any actual code on the victim's machine or thwart LLM-based fraud/moderation systems.

  • UpdatedFeb 14, 2025
  • Python
fast-llm-security-guardrails

The fastest && easiest LLM security guardrails for CX AI Agents and applications.

  • UpdatedMar 7, 2025
  • Python

Whistleblower is a offensive security tool for testing against system prompt leakage and capability discovery of an AI application exposed through API. Built for AI engineers, security researchers and folks who want to know what's going on inside the LLM-based app they use daily

  • UpdatedJul 28, 2024
  • Python

Framework for LLM evaluation, guardrails and security

  • UpdatedSep 9, 2024
  • Python

Improve this page

Add a description, image, and links to thellm-security topic page so that developers can more easily learn about it.

Curate this topic

Add this topic to your repo

To associate your repository with thellm-security topic, visit your repo's landing page and select "manage topics."

Learn more


[8]ページ先頭

©2009-2025 Movatter.jp