bandit-algorithms
Here are 96 public repositories matching this topic...
🔬 Research framework for single-player and multi-player 🎰 multi-armed bandit (MAB) algorithms, implementing state-of-the-art algorithms for the single-player (UCB, KL-UCB, Thompson sampling...) and multi-player (MusicalChair, MEGA, rhoRand, MCTop/RandTopM, etc.) settings. Available on PyPI: https://pypi.org/project/SMPyBandits/, with documentation available online.
- Updated Apr 30, 2024 - Jupyter Notebook
A hyperparameter optimization framework, inspired by Optuna.
- Updated Aug 12, 2025 - Go
PyXAB - A Python Library for X-Armed Bandit and Online Blackbox Optimization Algorithms.
- Updated Oct 24, 2024 - Python
Yahoo! news article recommendation system using LinUCB.
- Updated Feb 1, 2018 - Python
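As background for entries like the LinUCB-based recommender above, here is a minimal sketch of the disjoint LinUCB rule (one ridge-regression model per arm, with an optimistic exploration bonus). Class and method names are illustrative, not any repository's API:

```python
import numpy as np

class LinUCB:
    """Disjoint LinUCB sketch: a separate linear reward model per arm."""

    def __init__(self, n_arms, d, alpha=1.0):
        self.alpha = alpha                             # exploration strength
        self.A = [np.eye(d) for _ in range(n_arms)]    # per-arm Gram matrix (ridge prior = I)
        self.b = [np.zeros(d) for _ in range(n_arms)]  # per-arm reward-weighted context sum

    def select(self, x):
        """Pick the arm maximizing estimated reward plus confidence width for context x."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                          # ridge estimate of arm weights
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        """Fold the observed (context, reward) pair into the chosen arm's model."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x
```

After enough updates, the confidence term shrinks for frequently played arms and the policy concentrates on the arm whose linear model predicts the highest reward for the current context.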
Big Data's open seminars: An Interactive Introduction to Reinforcement Learning.
- Updated Jun 7, 2021 - Jupyter Notebook
My solutions to the Yandex Practical Reinforcement Learning course, in PyTorch and TensorFlow.
- Updated Dec 22, 2021 - Jupyter Notebook
Bandit algorithms.
- Updated Oct 12, 2017 - Python
Python implementation of the UCB, EXP3, and epsilon-greedy algorithms.
- Updated Oct 4, 2018 - Python
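Two of the algorithms named in the entry above are short enough to sketch directly. The following is an illustrative stand-alone implementation of UCB1 and epsilon-greedy selection (function names and the incremental-mean update are my own, not taken from any listed repository):

```python
import math
import random

def ucb1(counts, values, t):
    """UCB1: play each arm once, then maximize mean + sqrt(2 ln t / pulls)."""
    for arm, n in enumerate(counts):
        if n == 0:
            return arm  # untried arms get priority
    return max(range(len(counts)),
               key=lambda a: values[a] + math.sqrt(2 * math.log(t) / counts[a]))

def epsilon_greedy(counts, values, epsilon=0.1):
    """Explore uniformly with probability epsilon, otherwise exploit the best mean."""
    if random.random() < epsilon:
        return random.randrange(len(values))
    return max(range(len(values)), key=lambda a: values[a])

def update(counts, values, arm, reward):
    """Incremental running-mean update for the pulled arm."""
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]
```

In a Bernoulli-bandit simulation, both policies gradually shift their pulls toward the arm with the highest empirical mean; UCB1 does so deterministically via its shrinking confidence bonus, epsilon-greedy via a fixed exploration rate.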
Code for our ACML and INTERSPEECH papers: "Speaker Diarization as a Fully Online Bandit Learning Problem in MiniVox".
- Updated Sep 20, 2021 - Cuda
More about the exploration-exploitation tradeoff with harder bandits.
- Updated May 12, 2019 - Jupyter Notebook
Privacy-Preserving Bandits (MLSys'20).
- Updated Dec 8, 2022 - Jupyter Notebook
A curated list of papers about combinatorial multi-armed bandit problems.
- Updated May 10, 2021
A comprehensive Python library implementing a variety of contextual and non-contextual multi-armed bandit algorithms, including LinUCB, Epsilon-Greedy, Upper Confidence Bound (UCB), Thompson Sampling, KernelUCB, NeuralLinearBandit, and DecisionTreeBandit, designed for reinforcement learning applications.
- Updated Dec 31, 2024 - Python
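Of the algorithms the library above lists, Thompson Sampling has perhaps the shortest core. A rough sketch of the Beta-Bernoulli variant, assuming binary rewards (names here are illustrative, not the library's API):

```python
import random

def thompson_select(successes, failures):
    """Draw one sample from each arm's Beta(s+1, f+1) posterior; play the largest draw."""
    samples = [random.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=lambda a: samples[a])
```

Each round, arms with little data have wide posteriors and can win the draw (exploration), while well-observed high-reward arms win most draws (exploitation); the posterior is updated by incrementing the chosen arm's success or failure count.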
Building recommender systems using contextual bandit methods to address the cold-start issue and enable online, real-time learning.
- Updated Jul 1, 2021 - Jupyter Notebook
🐯 REPLICA of "Auction-based combinatorial multi-armed bandit mechanisms with strategic arms".
- Updated Dec 17, 2023 - Python
A collection of interesting papers that I have read so far or want to read. Note that the list is not up to date. Topics: reinforcement learning, deep learning, mathematics, statistics, bandit algorithms, optimization.
- Updated Apr 3, 2025
Personalized and interactive music recommendation with a bandit approach.
- Updated Sep 15, 2019 - Jupyter Notebook
Deep contextual bandits in PyTorch: Neural Bandits, Neural Linear, and Linear Full Posterior Sampling, with comprehensive benchmarking on synthetic and real datasets.
- Updated Jun 29, 2025 - Python
This repository is for learning the most popular MAB and CMAB algorithms and watching how they run. It is a good starting point for those wishing to learn these topics.
- Updated Dec 7, 2021 - Python