Package jmarkov.jmdp

jMDP is used to solve Markov Decision Processes.

Class Summary
CT2DTConverter<S extends State,A extends Action> This class formulates a DTMDP equivalent to a CTMDP.
CTMDP<S extends State,A extends Action> This class represents a continuous time MDP.
CTMDPEv<S extends State,A extends Action,E extends Event> This class represents an Infinite horizon, continuous time Markov Decision Process with events.
CTMDPEvA<S extends State,A extends Action,E extends Event> This class represents an Infinite horizon, continuous time Markov Decision Process with events where actions depend on events.
DTMDP<S extends State,A extends Action> This class represents a discrete time, infinite horizon MDP problem.
DTMDPEv<S extends State,A extends Action,E extends Event> This class represents an infinite horizon, discrete time, Markov Decision Process with events.
DTMDPEvA<S extends State,A extends Action,E extends Event> This class represents an infinite horizon, discrete time, Markov Decision Process with events, where actions depend on events.
FiniteDP<S extends State,A extends Action> This class should ONLY be used in FINITE horizon deterministic problems.
FiniteMDP<S extends State,A extends Action> This class should ONLY be used in FINITE horizon problems.
FiniteMDPEv<S extends State,A extends Action,E extends Event> This class represents a finite horizon discrete time MDP with events.
InfiniteMDP<S extends State,A extends Action> This class is a structural class for infinite horizon MDPs.
MDP<S extends State,A extends Action> This class is the main framework to build a Dynamic Programming Problem.
StochasticShortestPath<S extends StateC,A extends Action> This class represents an infinite horizon stochastic shortest path problem.
 

Package jmarkov.jmdp Description

jMDP is used to solve Markov Decision Processes. See the jMDP manual for details.
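To illustrate the kind of problem these classes solve, the sketch below runs value iteration on a tiny discounted, discrete time MDP. Note that this is a self-contained example of the underlying algorithm, not the jMDP API: the transition matrix `P`, cost matrix `c`, and discount factor are made-up data, and in jMDP itself one would instead extend a class such as DTMDP and consult the jMDP manual for the required methods.

```java
import java.util.Arrays;

/**
 * Value iteration for a 2-state, 2-action discounted cost MDP.
 * Generic illustration only; not the jMDP library API.
 */
public class ValueIterationDemo {

    // P[a][s][t]: probability of moving from state s to state t under action a (assumed data)
    static final double[][][] P = {
        { { 0.9, 0.1 }, { 0.2, 0.8 } },  // action 0
        { { 0.5, 0.5 }, { 0.6, 0.4 } }   // action 1
    };
    // c[s][a]: immediate cost of taking action a in state s (assumed data)
    static final double[][] c = { { 1.0, 3.0 }, { 2.0, 0.5 } };
    static final double GAMMA = 0.9;  // discount factor

    /** Iterates the Bellman operator until successive value vectors differ by at most tol. */
    static double[] solve(double tol) {
        double[] v = new double[2];
        double delta;
        do {
            delta = 0.0;
            double[] next = new double[2];
            for (int s = 0; s < 2; s++) {
                double best = Double.POSITIVE_INFINITY;
                for (int a = 0; a < 2; a++) {
                    // Q(s,a) = c(s,a) + gamma * sum_t P(t | s,a) * v(t)
                    double q = c[s][a];
                    for (int t = 0; t < 2; t++) {
                        q += GAMMA * P[a][s][t] * v[t];
                    }
                    best = Math.min(best, q);  // minimize expected discounted cost
                }
                next[s] = best;
                delta = Math.max(delta, Math.abs(best - v[s]));
            }
            v = next;
        } while (delta > tol);
        return v;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(solve(1e-8)));
    }
}
```

The loop contracts at rate GAMMA, so the returned vector approximates the optimal expected discounted cost from each state; the finite horizon classes (FiniteMDP, FiniteDP) would instead use a single backward induction pass.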