Markov Decision Processes (MDP) Toolbox

Functions for solving discrete-time Markov Decision Processes.
15.1K Downloads
Updated 20 Jan 2015


The MDP toolbox provides functions for solving discrete-time Markov Decision Processes: backwards induction, value iteration, policy iteration, and linear programming algorithms, with some variants.
The functions were developed in MATLAB (note that one of the functions requires the MathWorks Optimization Toolbox) by Iadine Chadès, Marie-Josée Cros, Frédérick Garcia, and Régis Sabbadin of the Biometry and Artificial Intelligence Unit of INRA Toulouse (France).
Toolbox page: http://www.inra.fr/mia/T/MDPtoolbox
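The solvers listed above all revolve around the Bellman optimality update, V(s) <- max over a of [ R(s,a) + discount * sum over s' of P(s'|s,a) * V(s') ]. As a rough illustration, below is a minimal plain-MATLAB value-iteration sketch on a made-up two-state, two-action problem. The P and R arrays are hypothetical and the script does not call the toolbox itself; it uses an S-by-S-by-A transition array and an S-by-A reward matrix, which, as far as I recall, matches the layout the toolbox functions expect (see the toolbox documentation for the authoritative interface).

    % Minimal value-iteration sketch on a hypothetical 2-state, 2-action MDP.
    % P(:,:,a) is the transition matrix under action a (rows sum to 1);
    % R(s,a) is the immediate reward for taking action a in state s.
    P = zeros(2, 2, 2);
    P(:,:,1) = [0.9 0.1; 0.4 0.6];    % transitions under action 1
    P(:,:,2) = [0.2 0.8; 0.05 0.95];  % transitions under action 2
    R = [1 0; -1 2];                  % rewards R(s,a)
    discount = 0.95;                  % discount factor
    epsilon  = 1e-6;                  % convergence threshold

    V = zeros(2, 1);                  % initial value function
    while true
        Q = zeros(2, 2);              % Q(s,a) = R(s,a) + discount * sum_s' P(s'|s,a) V(s')
        for a = 1:2
            Q(:, a) = R(:, a) + discount * P(:,:,a) * V;
        end
        [Vnew, policy] = max(Q, [], 2);   % greedy improvement over actions
        if max(abs(Vnew - V)) < epsilon * (1 - discount) / (2 * discount)
            V = Vnew;
            break                          % standard value-iteration stopping rule
        end
        V = Vnew;
    end
    disp(V);        % (near-)optimal state values
    disp(policy);   % greedy policy: action index per state

With the toolbox installed, the same problem would instead be handed to one of its solvers, e.g. mdp_value_iteration(P, R, discount); the exact input and output arguments are documented on the toolbox page above.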

Cite As

Marie-Josee Cros (2024). Markov Decision Processes (MDP) Toolbox (https://www.mathworks.com/matlabcentral/fileexchange/25786-markov-decision-processes-mdp-toolbox), MATLAB Central File Exchange. Retrieved .

MATLAB Release Compatibility
Created with R2014b
Compatible with any release
Platform Compatibility
Windows macOS Linux
Acknowledgements

Inspired: Betavol(x,R,fig)

Version History
1.6

Add the possibility to download as a toolbox (.mltbx file).

1.5.0.0

Complete the Other Requirements field.

1.4.0.0

Mainly improve documentation (Jan. 2014)

1.3.0.0

Update the zip file!

1.2.0.0

Version 4.0 (October 2012) is fully compatible with GNU Octave (version 3.6); the output of several functions (mdp_relative_value_iteration, mdp_value_iteration and mdp_eval_policy_iterative) was modified.

1.1.0.0

Add all authors' names.

1.0.0.0