multiRL: Reinforcement Learning Tools for Multi-Armed Bandit
A flexible, general-purpose toolbox for implementing Rescorla-Wagner models
in multi-armed bandit tasks.
As the successor to and functional extension of the 'binaryRL' package,
'multiRL' modularizes the Markov decision process (MDP) into six core
components. This framework lets users construct custom models via
intuitive if-else syntax and define latent learning rules for agents.
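The core idea can be sketched in plain R. The snippet below is an illustrative example, not the multiRL API: a Rescorla-Wagner (delta-rule) update applied in a simulated two-armed bandit, where the `rw_update` function plays the role of a learning rule that a user could swap for a custom one.

```r
# Illustrative sketch (not the multiRL API): Rescorla-Wagner learning
# in a two-armed bandit with a softmax choice policy.
rw_update <- function(value, reward, alpha) {
  pe <- reward - value              # prediction error
  value + alpha * pe                # delta-rule update
}

set.seed(1)
n_trials <- 200
true_p <- c(0.3, 0.7)               # reward probability of each arm
V <- c(0.5, 0.5)                    # initial action values
alpha <- 0.1                        # learning rate
beta <- 5                           # softmax inverse temperature

for (t in seq_len(n_trials)) {
  p_choose <- exp(beta * V) / sum(exp(beta * V))  # softmax policy
  a <- sample(1:2, 1, prob = p_choose)            # choose an arm
  r <- rbinom(1, 1, true_p[a])                    # binary reward
  V[a] <- rw_update(V[a], r, alpha)               # update chosen arm
}
round(V, 2)   # learned values; the richer arm should end up higher
```

Because the update rule is isolated in one function, replacing `rw_update` with an if-else variant (e.g., different behavior for positive vs. negative prediction errors) changes the model without touching the simulation loop.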
For parameter estimation, it provides both likelihood-based
inference (maximum likelihood estimation, MLE, and maximum a posteriori
estimation, MAP) and simulation-based inference (approximate Bayesian
computation, ABC, and recurrent neural networks, RNN), with full support
for parallel processing across subjects.
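Likelihood-based fitting of such models can be sketched without the package: simulate choices from a Rescorla-Wagner agent, then recover its parameters by minimizing the negative log-likelihood with base R's `optim()`. Everything below (parameter values, the logistic/exponential transforms) is an assumption for illustration, not multiRL's interface.

```r
# Schematic MLE sketch, independent of the multiRL API.
negLL <- function(par, choices, rewards) {
  alpha <- plogis(par[1])           # map to (0, 1): learning rate
  beta  <- exp(par[2])              # map to (0, Inf): inverse temperature
  V <- c(0.5, 0.5); ll <- 0
  for (t in seq_along(choices)) {
    z <- beta * V
    # log softmax probability of the observed choice (log-sum-exp form)
    ll <- ll + z[choices[t]] - (max(z) + log(sum(exp(z - max(z)))))
    V[choices[t]] <- V[choices[t]] + alpha * (rewards[t] - V[choices[t]])
  }
  -ll
}

# Simulate an agent with known parameters ...
set.seed(42)
true_alpha <- 0.2; true_beta <- 4
true_p <- c(0.2, 0.8); V <- c(0.5, 0.5)
choices <- rewards <- integer(300)
for (t in 1:300) {
  p <- exp(true_beta * V) / sum(exp(true_beta * V))
  choices[t] <- sample(1:2, 1, prob = p)
  rewards[t] <- rbinom(1, 1, true_p[choices[t]])
  V[choices[t]] <- V[choices[t]] + true_alpha * (rewards[t] - V[choices[t]])
}

# ... then recover them by maximum likelihood.
fit <- optim(c(0, 0), negLL, choices = choices, rewards = rewards)
c(alpha = plogis(fit$par[1]), beta = exp(fit$par[2]))
```

MAP estimation would follow the same pattern with a log-prior added to `negLL`; simulation-based methods (ABC, RNN) instead compare simulated and observed behavior without an explicit likelihood.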
The workflow is highly standardized, featuring four main functions
that strictly follow the four-step protocol (and ten simple rules)
proposed by Wilson & Collins (2019) <doi:10.7554/eLife.49547>.
Beyond the three built-in models (TD, RSTD, and Utility), users
can easily derive new variants by declaring which variables are
treated as free parameters.
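As a hedged illustration of that last idea (again not the multiRL API): a TD model with a single learning rate becomes an RSTD-style model simply by freeing a second learning rate for negative prediction errors, rather than fixing `alpha_neg` equal to `alpha_pos`.

```r
# Illustrative RSTD-style update: separate learning rates for positive
# and negative prediction errors. Fixing alpha_neg = alpha_pos recovers
# the plain TD update; freeing it yields the two-parameter variant.
rstd_update <- function(value, reward, alpha_pos, alpha_neg) {
  pe <- reward - value
  if (pe >= 0) value + alpha_pos * pe else value + alpha_neg * pe
}

rstd_update(0.5, 1, alpha_pos = 0.2, alpha_neg = 0.05)  # 0.6
rstd_update(0.5, 0, alpha_pos = 0.2, alpha_neg = 0.05)  # 0.475
```

A Utility-style variant would follow the same pattern, applying a transform (e.g., a power function) to the reward before the update.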
| Version: | 0.2.3 |
| Depends: | R (≥ 4.1.0) |
| Imports: | methods, utils, Rcpp, compiler, future, doFuture, foreach, doRNG, progressr, ggplot2, scales, grDevices |
| LinkingTo: | Rcpp |
| Suggests: | stats, GenSA, GA, DEoptim, pso, mlrMBO, mlr, ParamHelpers, smoof, lhs, DiceKriging, rgenoud, cmaes, nloptr, abc, tensorflow, keras, reticulate |
| Published: | 2026-01-26 |
| DOI: | 10.32614/CRAN.package.multiRL (may not be active yet) |
| Author: | YuKi [aut, cre], Xinyu [aut] |
| Maintainer: | YuKi <hmz1969a at gmail.com> |
| BugReports: | https://github.com/yuki-961004/multiRL/issues |
| License: | GPL-3 |
| URL: | https://yuki-961004.github.io/multiRL/ |
| NeedsCompilation: | yes |
| CRAN checks: | multiRL results |
Linking:
Please use the canonical form https://CRAN.R-project.org/package=multiRL to link to this page.