# fairness-ml

Here are 224 public repositories matching this topic...

Responsible AI Toolbox is a suite of user interfaces and libraries for exploring and assessing models and data, enabling a better understanding of AI systems. These tools empower developers and stakeholders of AI systems to develop and monitor AI more responsibly and to take better data-driven actions.

  • Updated Feb 6, 2026
  • TypeScript
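As a minimal, illustrative sketch of the kind of assessment such toolkits provide (plain Python, not the Responsible AI Toolbox API), demographic parity difference measures the gap in positive-prediction rates across demographic groups:

```python
# Illustrative demographic parity check -- a hypothetical helper, not any
# listed library's API.
def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rate across groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

y_pred = [1, 0, 1, 1, 0, 0, 1, 0]      # binary predictions
groups = ["a"] * 4 + ["b"] * 4          # group membership per example
print(demographic_parity_difference(y_pred, groups))  # 0.75 - 0.25 = 0.5
```

A value of 0 would mean both groups receive positive predictions at the same rate; real toolkits report many such metrics side by side.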
awesome-imbalanced-learning

😎 Everything about class-imbalanced/long-tail learning: papers, code, frameworks, and libraries

  • Updated Feb 25, 2025
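One common baseline the class-imbalance literature covers is reweighting classes by inverse frequency. A hedged sketch in plain Python (a hypothetical helper, not taken from any listed library):

```python
# Inverse-frequency class weights: weight(c) = n / (k * count(c)),
# so rarer classes get proportionally larger weights.
from collections import Counter

def class_weights(labels):
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

labels = [0] * 90 + [1] * 10   # a 9:1 imbalanced label set
print(class_weights(labels))   # class 1 gets weight 5.0, class 0 about 0.556
```

These weights can be passed to most learners' sample- or class-weight parameters to counteract majority-class dominance.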
synthcity

A library for generating and evaluating synthetic tabular data for privacy, fairness and data augmentation.

  • Updated Feb 11, 2026
  • Python

TensorFlow's Fairness Evaluation and Visualization Toolkit

  • Updated Aug 4, 2025
  • Python

Fair Resource Allocation in Federated Learning (ICLR '20)

  • Updated Dec 2, 2023
  • Python

Code for reproducing our analysis in the paper titled: Image Cropping on Twitter: Fairness Metrics, their Limitations, and the Importance of Representation, Design, and Agency

  • Updated Oct 25, 2021
  • Jupyter Notebook

WEFE: The Word Embeddings Fairness Evaluation Framework, which standardizes bias measurement and mitigation in word embedding models. Please feel welcome to open an issue if you have any questions, or a pull request if you want to contribute to the project!

  • Updated Nov 24, 2025
  • Python
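To illustrate the kind of measurement such a framework standardizes (this is a toy WEAT-style association score in plain Python, not WEFE's actual API), one can compare a word vector's mean cosine similarity to two attribute sets:

```python
# Toy WEAT-style association: how much closer is word vector w to
# attribute set A than to attribute set B? All vectors are toy examples.
import math

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv)

def association(w, attrs_a, attrs_b):
    mean = lambda xs: sum(xs) / len(xs)
    return (mean([cosine(w, a) for a in attrs_a])
            - mean([cosine(w, b) for b in attrs_b]))

w = (1.0, 0.0)          # toy word vector
A = [(1.0, 0.0)]        # toy attribute set A
B = [(0.0, 1.0)]        # toy attribute set B
print(association(w, A, B))  # 1.0: fully associated with A
```

Scores near zero indicate no preferential association; large magnitudes flag a potential embedding bias worth investigating.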

The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness in large-scale machine learning workflows.

  • Updated Dec 19, 2025
  • Scala
deep-explanation-penalization

Code accompanying our papers on the "Generative Distributional Control" framework

  • Updated Dec 7, 2022
  • Python

Train gradient boosting models that are both high-performance *and* fair!

  • Updated Feb 20, 2026
  • C++

Dataset associated with the paper "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation"

  • Updated Mar 2, 2021
fairmodels

Flexible tool for bias detection, visualization, and mitigation

  • Updated Oct 31, 2025
  • R

A Python toolkit for analyzing machine learning models and datasets.

  • Updated Sep 8, 2023
  • Python

Papers and online resources related to machine learning fairness

  • Updated May 11, 2023

Fairness-aware machine learning: bias detection and mitigation for datasets and models.

  • Updated Apr 4, 2025
  • Python

FairPut - Machine Learning Fairness Framework with LightGBM — Explainability, Robustness, Fairness (by @firmai)

  • Updated Oct 20, 2021
  • Jupyter Notebook
