ai-transparency
Here are 11 public repositories matching this topic...
AI Chat Watch (AICW) - a free, open-source tool for GEO marketers that tracks what AI models say about brands, products, and companies, and how they say it.
- Updated Oct 30, 2025 - TypeScript
Artificial intelligence involvement index
- Updated Jun 17, 2025
The LLM Unlearning repository is an open-source project dedicated to the concept of unlearning in Large Language Models (LLMs). It aims to address concerns about data privacy and ethical AI by exploring and implementing unlearning techniques that allow models to forget unwanted or sensitive data, helping ensure that AI models comply with privacy requirements.
- Updated Dec 16, 2025 - Python
Code for the paper "ClipMind: A Framework for Auditing Short-Format Video Recommendations Using Multimodal AI Models"
- Updated Jun 26, 2025 - Jupyter Notebook
pRISM is a repository that combines Retrieval-Augmented Generation (RAG) with a multi-LLM voting approach to produce accurate and reliable AI-generated outputs. It integrates multiple language models, including Mistral, Claude 3.5, and OpenAI models, to enhance performance through consensus techniques.
- Updated Jun 20, 2025 - Python
A simple, universal system that labels AI responses so anyone can instantly tell what the output is and how it's meant to be used.
- Updated Nov 15, 2025
Pragmatic Existentialism & Antagonistic Cooperation: A formal theory positing that truth and ethics emerge from the need for coherent systems to overcome shared obstacles, defining belief utility by survival fitness and cooperation.
- Updated Nov 2, 2025
Human-centered AI interview prototype that generates follow-up questions and lets participants rate fairness, relevance, comfort, and trust.
- Updated Dec 15, 2025 - Python
Official website and thought leadership platform for The Human Channel.
- Updated Jun 7, 2025 - TypeScript
🪐 7- Social Buss: A black-box model is an AI or machine learning system whose internal decision-making processes are hidden, exposing only inputs and outputs without revealing how outcomes are derived. Such models can offer high accuracy on complex tasks but pose challenges for interpretability and trust.
- Updated Nov 8, 2025 - Jupyter Notebook
Simple graphics intended to serve as a "this was vibe coded, FYI" label and attribution model.
- Updated Dec 15, 2025 - HTML