TF-Agents: A reliable, scalable, and easy-to-use TensorFlow library for Contextual Bandits and Reinforcement Learning.
Open Bandit Pipeline: a Python library for bandit algorithms and off-policy evaluation
[IJAIT 2021] MABWiser: Contextual Multi-Armed Bandits Library
An easy-to-use reinforcement learning library for research and education.
[AAAI 2024] Mab2Rec: Multi-Armed Bandits Recommender
Multi-armed bandit implementations using the Yahoo! Front Page Today Module User Click Log Dataset
A Pythonic microframework for multi-armed bandit problems
Contextual Bandits in R - simulation and evaluation of Multi-Armed Bandit Policies
Library for multi-armed bandit selection strategies, including efficient deterministic implementations of Thompson sampling and epsilon-greedy.
Simulation code for the paper [Wang2021]: Wenbo Wang, Amir Leshem, Dusit Niyato, and Zhu Han, "Decentralized Learning for Channel Allocation in IoT Networks over Unlicensed Bandwidth as a Contextual Multi-player Multi-armed Bandit Game," to appear in IEEE Transactions on Wireless Communications, 2021.
Python implementations of the UCB, EXP3, and epsilon-greedy algorithms
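For orientation, here is a minimal plain-Python sketch of UCB1, the classic index policy several of the repositories above implement. The `UCB1` class and its `select_arm`/`update` interface are illustrative assumptions, not the API of any listed library.

```python
import math


class UCB1:
    """UCB1: play every arm once, then pick the arm whose empirical mean
    plus a confidence bonus is largest; the bonus shrinks as an arm is pulled."""

    def __init__(self, n_arms):
        self.counts = [0] * n_arms    # pulls per arm
        self.values = [0.0] * n_arms  # running mean reward per arm
        self.total = 0                # total pulls so far

    def select_arm(self):
        # Play any arm that has never been tried.
        for arm, n in enumerate(self.counts):
            if n == 0:
                return arm

        # Otherwise maximize mean + sqrt(2 * ln(t) / n_arm).
        def ucb(a):
            return self.values[a] + math.sqrt(2 * math.log(self.total) / self.counts[a])

        return max(range(len(self.counts)), key=ucb)

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.total += 1
        # Incremental mean update avoids storing reward histories.
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

A simulation loop would repeatedly call `select_arm()`, observe a reward for the chosen arm, and feed it back through `update()`.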
Learning Multi-Armed Bandits by Examples: currently covers MAB, UCB, Boltzmann Exploration, Thompson Sampling, Contextual MAB, and Deep MAB.
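Likewise, a minimal Beta-Bernoulli Thompson sampling sketch, assuming binary (0/1) rewards; the class and method names are again illustrative rather than any repository's actual API.

```python
import random


class BetaBernoulliTS:
    """Thompson sampling for Bernoulli rewards with a Beta(1, 1) prior per arm."""

    def __init__(self, n_arms):
        self.alpha = [1] * n_arms  # 1 + observed successes
        self.beta = [1] * n_arms   # 1 + observed failures

    def select_arm(self):
        # Draw one plausible mean from each arm's posterior and play the best.
        samples = [random.betavariate(a, b) for a, b in zip(self.alpha, self.beta)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, arm, reward):
        # A 0/1 reward updates the Beta posterior's pseudo-counts.
        self.alpha[arm] += reward
        self.beta[arm] += 1 - reward
```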
Software for the experiments reported in the RecSys 2019 paper "A Simple Multi-Armed Nearest-Neighbor Bandit for Interactive Recommendation"
A study of the paper "Neural Thompson Sampling" (published October 2020)
Curated materials from various machine-learning-related summer schools
Bayesian Optimization for Categorical and Continuous Inputs
Online Ranking with Multi-Armed Bandits
Decentralized Intelligent Resource Allocation for LoRaWAN Networks
Implementations of basic concepts under the Reinforcement Learning umbrella. This project is a collection of assignments from CS747: Foundations of Intelligent and Learning Agents (Autumn 2017) at IIT Bombay.