
AI Fairness 360


This extensible open source toolkit can help you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle. We invite you to use and improve it.

   

Not sure what to do first? Start here!

Read More

Learn more about fairness and bias mitigation concepts, terminology, and tools before you begin.



Try a Web Demo

Step through the process of checking and remediating bias in an interactive web demo that shows a sample of capabilities available in this toolkit.



Watch Videos

Watch videos to learn more about AI Fairness 360.



Read a paper

Read a paper describing how we designed AI Fairness 360.



Use Tutorials

Step through a set of in-depth examples that introduce developers to code that checks and mitigates bias in different industry and application domains.



Ask a Question

Join our AIF360 Slack Channel to ask questions, make comments and tell stories about how you use the toolkit.



View Notebooks

Open a directory of Jupyter Notebooks in GitHub that provide working examples of bias detection and mitigation in sample datasets. Then share your own notebooks!



Contribute

You can add new metrics and algorithms in GitHub. Share Jupyter notebooks showcasing how you have examined and mitigated bias in your machine learning application.



Learn how to put this toolkit to work for your application or industry problem. Try these tutorials.

These are ten state-of-the-art bias mitigation algorithms that can address bias throughout AI systems. Add more!

Optimized Pre-processing

Use to mitigate bias in training data. Modifies training data features and labels.



Reweighing

Use to mitigate bias in training data. Modifies the weights of different training examples.
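
For example, here is a minimal sketch of Reweighing applied to AIF360's German credit dataset; it assumes the aif360 package is installed and the dataset's raw files have been downloaded as described in the aif360 documentation:

    from aif360.datasets import GermanDataset
    from aif360.algorithms.preprocessing import Reweighing
    from aif360.metrics import BinaryLabelDatasetMetric

    # GermanDataset encodes 'sex' with privileged value 1 (male) by default.
    dataset = GermanDataset()
    unprivileged, privileged = [{'sex': 0}], [{'sex': 1}]

    # Measure group bias in the original labels.
    before = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unprivileged,
                                      privileged_groups=privileged)
    print('Mean difference before:', before.mean_difference())

    # Learn instance weights that balance favorable outcomes across groups.
    rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
    dataset_transf = rw.fit_transform(dataset)

    after = BinaryLabelDatasetMetric(dataset_transf, unprivileged_groups=unprivileged,
                                     privileged_groups=privileged)
    print('Mean difference after:', after.mean_difference())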



Adversarial Debiasing

Use to mitigate bias in classifiers. Uses adversarial techniques to maximize accuracy and reduce evidence of protected attributes in predictions.
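
A minimal sketch, assuming aif360 is installed with its TensorFlow extra (the implementation uses the TF1-style API) and the German credit data files are available:

    import tensorflow.compat.v1 as tf
    from aif360.datasets import GermanDataset
    from aif360.algorithms.inprocessing import AdversarialDebiasing

    tf.disable_eager_execution()
    sess = tf.Session()

    dataset = GermanDataset()
    train, test = dataset.split([0.7], shuffle=True)

    # With debias=True, an adversary tries to recover the protected
    # attribute from the predictions; the classifier is trained to stay
    # accurate while defeating the adversary.
    model = AdversarialDebiasing(unprivileged_groups=[{'sex': 0}],
                                 privileged_groups=[{'sex': 1}],
                                 scope_name='debiased_classifier',
                                 debias=True,
                                 sess=sess)
    model.fit(train)
    test_pred = model.predict(test)
    sess.close()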



Reject Option Classification

Use to mitigate bias in predictions. Changes predictions from a classifier to make them fairer.
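
A minimal sketch; here `val_true` holds ground-truth labels, and `val_pred`/`test_pred` are assumed to be copies of the corresponding datasets whose scores were filled in by an already-trained classifier:

    from aif360.algorithms.postprocessing import RejectOptionClassification

    roc = RejectOptionClassification(unprivileged_groups=[{'sex': 0}],
                                     privileged_groups=[{'sex': 1}],
                                     metric_name='Statistical parity difference',
                                     metric_ub=0.05, metric_lb=-0.05)

    # Search decision thresholds on validation data, then repair test predictions.
    roc = roc.fit(val_true, val_pred)
    test_fair = roc.predict(test_pred)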



Disparate Impact Remover

Use to mitigate bias in training data. Edits feature values to improve group fairness.
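
A minimal sketch, assuming `train` is a BinaryLabelDataset such as a split of GermanDataset:

    from aif360.algorithms.preprocessing import DisparateImpactRemover

    # repair_level=1.0 fully aligns feature distributions across groups;
    # values between 0 and 1 trade fairness against fidelity to the
    # original feature values.
    di = DisparateImpactRemover(repair_level=1.0, sensitive_attribute='sex')
    train_repaired = di.fit_transform(train)

    # Labels are untouched; train any downstream classifier on the
    # repaired features, typically after dropping the protected attribute.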



Learning Fair Representations

Use to mitigate bias in training data. Learns fair representations by obfuscating information about protected attributes.
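
A minimal sketch, assuming `train` is a BinaryLabelDataset; the hyperparameter values shown are illustrative, not tuned:

    from aif360.algorithms.preprocessing import LFR

    # k sets the number of prototypes; Ax, Ay, Az weight reconstruction
    # quality, prediction accuracy, and group fairness in the objective.
    lfr = LFR(unprivileged_groups=[{'sex': 0}], privileged_groups=[{'sex': 1}],
              k=10, Ax=0.1, Ay=1.0, Az=2.0)
    lfr = lfr.fit(train)
    train_fair = lfr.transform(train)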



Prejudice Remover

Use to mitigate bias in classifiers. Adds a discrimination-aware regularization term to the learning objective.
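
A minimal sketch, with `train` and `test` assumed to be BinaryLabelDataset splits as in the earlier examples:

    from aif360.algorithms.inprocessing import PrejudiceRemover

    # eta is the strength of the fairness regularizer; larger values
    # penalize dependence between predictions and the protected attribute
    # more heavily.
    model = PrejudiceRemover(eta=25.0, sensitive_attr='sex')
    model.fit(train)
    test_pred = model.predict(test)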



Calibrated Equalized Odds Post-processing

Use to mitigate bias in predictions. Optimizes over calibrated classifier score outputs that lead to fair output labels.
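
A minimal sketch; `val_true` holds ground truth, while `val_pred` and `test_pred` are assumed to hold the classifier's calibrated scores for the same instances:

    from aif360.algorithms.postprocessing import CalibratedEqOddsPostprocessing

    # cost_constraint picks the error to equalize across groups:
    # 'fnr', 'fpr', or 'weighted'.
    cpp = CalibratedEqOddsPostprocessing(unprivileged_groups=[{'sex': 0}],
                                         privileged_groups=[{'sex': 1}],
                                         cost_constraint='fnr')
    cpp = cpp.fit(val_true, val_pred)
    test_fair = cpp.predict(test_pred)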



Equalized Odds Post-processing

Use to mitigate bias in predictions. Modifies the predicted labels using an optimization scheme to make predictions fairer.



Meta Fair Classifier

Use to mitigate bias in classifiers. A meta-algorithm that takes a fairness metric as part of its input and returns a classifier optimized for that metric.
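
A minimal sketch; the parameter names follow the aif360 documentation as best recalled and should be treated as assumptions:

    from aif360.algorithms.inprocessing import MetaFairClassifier

    # tau: target fairness level; type: the fairness metric to optimize,
    # e.g. 'fdr' (false discovery rate ratio) or 'sr' (statistical rate).
    model = MetaFairClassifier(tau=0.8, sensitive_attr='sex', type='fdr')
    model.fit(train)
    test_pred = model.predict(test)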



Are individuals treated similarly? Are privileged and unprivileged groups treated similarly? Find out by using metrics like these that measure individual and group fairness.

Statistical Parity Difference

The difference in the rate of favorable outcomes between the unprivileged group and the privileged group.



Equal Opportunity Difference

The difference in true positive rates between the unprivileged and privileged groups.



Average Odds Difference

The average of the difference in false positive rate (false positives / negatives) and the difference in true positive rate (true positives / positives) between the unprivileged and privileged groups.



Disparate Impact

The ratio of the rate of favorable outcomes for the unprivileged group to that of the privileged group.



Theil Index

Measures the inequality in benefit allocation for individuals.



Euclidean Distance

The average Euclidean distance between corresponding samples of two datasets (for example, the original and the transformed training data).



Mahalanobis Distance

The average Mahalanobis distance between corresponding samples of the two datasets.



Manhattan Distance

The average Manhattan distance between corresponding samples of the two datasets.




There are more than 70 metrics in the GitHub repository already. Add new metrics to the repository and use the Slack channel to let the community know about them.
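
As a minimal sketch of how several of the metrics above are computed, assuming `test` holds ground-truth labels and `test_pred` a classifier's predictions for the same instances:

    from aif360.metrics import BinaryLabelDatasetMetric, ClassificationMetric

    u, p = [{'sex': 0}], [{'sex': 1}]

    # Group metrics computed on a single labeled dataset.
    dm = BinaryLabelDatasetMetric(test, unprivileged_groups=u, privileged_groups=p)
    print('Statistical parity difference:', dm.statistical_parity_difference())
    print('Disparate impact:', dm.disparate_impact())

    # Classification metrics compare ground truth against predictions.
    cm = ClassificationMetric(test, test_pred,
                              unprivileged_groups=u, privileged_groups=p)
    print('Equal opportunity difference:', cm.equal_opportunity_difference())
    print('Average odds difference:', cm.average_odds_difference())
    print('Theil index:', cm.theil_index())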

About this site

AI Fairness 360 was created by IBM Research and donated by IBM to the Linux Foundation AI & Data.

Additional research sites that advance other aspects of Trusted AI include:

AI Explainability 360
AI Privacy 360
Adversarial Robustness 360
Uncertainty Quantification 360
AI FactSheets 360


