binaryRL: Reinforcement Learning Tools for Two-Alternative Forced Choice Tasks
Tools for building reinforcement learning (RL) models
tailored for Two-Alternative Forced Choice (TAFC) tasks,
which are commonly employed in psychological research. These models
build on the foundational principles of model-free reinforcement learning
detailed in Sutton and Barto (2018) <ISBN:0262039249>. The package
allows RL models to be defined intuitively with simple if-else
statements. Our approach to constructing and evaluating these
computational models follows the guidelines proposed in
Wilson & Collins (2019) <doi:10.7554/eLife.49547>. Example
datasets included with the package are sourced from the work of
Mason et al. (2024) <doi:10.3758/s13423-023-02415-x>.
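
The "if-else statements" mentioned above refer to writing the trial-by-trial
value-update rule as ordinary R control flow. The following is a minimal
illustrative sketch of such a rule (a Rescorla-Wagner update for a two-option
task). It is plain R and does not use binaryRL's actual API; the names
rw_update and eta are hypothetical, not package functions.

    # Hypothetical sketch, not binaryRL's API: a Rescorla-Wagner value
    # update for one TAFC trial, expressed as an if-else rule.
    rw_update <- function(value, choice, reward, eta = 0.1) {
      # value:  numeric vector of length 2, current value of each option
      # choice: 1 or 2, the option chosen on this trial
      # reward: observed outcome for the chosen option
      if (choice == 1) {
        value[1] <- value[1] + eta * (reward - value[1])  # update chosen option only
      } else {
        value[2] <- value[2] + eta * (reward - value[2])
      }
      value
    }

    # One trial: option 1 chosen and rewarded
    v <- rw_update(c(0, 0), choice = 1, reward = 1)
    v
    #> [1] 0.1 0.0
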
Version: 0.8.0
Depends: R (≥ 4.0.0)
Imports: future, doFuture, foreach, doRNG, progressr
Suggests: stats, GenSA, GA, DEoptim, mlrMBO, mlr, ParamHelpers, smoof, lhs, pso, cmaes
Published: 2025-05-13
DOI: 10.32614/CRAN.package.binaryRL
Author: YuKi [aut, cre]
Maintainer: YuKi <hmz1969a at gmail.com>
BugReports: https://github.com/yuki-961004/binaryRL/issues
License: GPL-3
URL: https://github.com/yuki-961004/binaryRL
NeedsCompilation: no
Materials: README
CRAN checks: binaryRL results
Please use the canonical form https://CRAN.R-project.org/package=binaryRL to link to this page.