Class Summary

| Class | Description |
|---|---|
| CTMDP<S extends State,A extends Action> | This class represents a continuous time MDP. |
| CTMDPEv<S extends State,A extends Action,E extends Event> | This class represents an infinite horizon, continuous time Markov Decision Process with events. |
| CTMDPEvA<S extends State,A extends Action,E extends Event> | This class represents an infinite horizon, continuous time Markov Decision Process with events, where actions depend on events. |
| DTMDP<S extends State,A extends Action> | This class represents an infinite horizon, discrete time MDP problem. |
| DTMDPEv<S extends State,A extends Action,E extends Event> | This class represents an infinite horizon, discrete time Markov Decision Process with events. |
| DTMDPEvA<S extends State,A extends Action,E extends Event> | This class represents an infinite horizon, discrete time Markov Decision Process with events, where actions depend on events. |
| FiniteDP<S extends State,A extends Action> | This class should ONLY be used in FINITE horizon deterministic problems. |
| FiniteMDP<S extends State,A extends Action> | This class should ONLY be used in FINITE horizon problems (a plain-Java sketch of this kind of problem follows the table). |
| FiniteMDPEv<S extends State,A extends Action,E extends Event> | This class represents a finite horizon, discrete time MDP with events. |
| InfiniteMDP<S extends State,A extends Action> | This class is a structural class and is not intended to be extended. |
| MDP<S extends State,A extends Action> | This class is the main framework to build a Dynamic Programming Problem. |
| StochasticShortestPath<S extends StateC,A extends Action> | This class represents an infinite horizon stochastic shortest path problem. |
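To make the finite horizon case concrete, the following is a minimal, self-contained sketch of backward induction for a tiny finite horizon MDP, the kind of problem `FiniteMDP` is meant to model. It is written in plain Java and does not use the library's API; the class name `FiniteHorizonExample` and all data in it are hypothetical and chosen only for illustration.

```java
import java.util.Arrays;

/**
 * Hypothetical, self-contained sketch of finite horizon backward induction.
 * It does not use the library's classes; it only illustrates the kind of
 * problem that FiniteMDP represents.
 */
public class FiniteHorizonExample {

    // A two-state, two-action MDP solved over a fixed horizon (made-up data).
    static final int STATES = 2;   // states 0 and 1
    static final int ACTIONS = 2;  // actions 0 and 1
    static final int HORIZON = 3;  // decision epochs t = 0 .. HORIZON-1

    // prob[a][i][j]: probability of moving from state i to state j under action a
    static final double[][][] prob = {
            {{0.9, 0.1}, {0.4, 0.6}},   // action 0
            {{0.2, 0.8}, {0.7, 0.3}}    // action 1
    };

    // cost[a][i]: immediate cost of taking action a in state i
    static final double[][] cost = {
            {1.0, 3.0},                 // action 0
            {2.0, 0.5}                  // action 1
    };

    public static void main(String[] args) {
        // value[t][i]: optimal expected cost-to-go from state i at epoch t
        double[][] value = new double[HORIZON + 1][STATES];
        int[][] policy = new int[HORIZON][STATES];
        // Terminal cost is zero, so value[HORIZON] stays all zeros.

        // Backward induction: sweep from the last epoch toward t = 0.
        for (int t = HORIZON - 1; t >= 0; t--) {
            for (int i = 0; i < STATES; i++) {
                double best = Double.POSITIVE_INFINITY;
                int bestAction = -1;
                for (int a = 0; a < ACTIONS; a++) {
                    // Expected cost = immediate cost + expected cost-to-go.
                    double q = cost[a][i];
                    for (int j = 0; j < STATES; j++) {
                        q += prob[a][i][j] * value[t + 1][j];
                    }
                    if (q < best) {
                        best = q;
                        bestAction = a;
                    }
                }
                value[t][i] = best;
                policy[t][i] = bestAction;
            }
        }

        System.out.println("Optimal cost-to-go at t=0: " + Arrays.toString(value[0]));
        System.out.println("Optimal actions at t=0:    " + Arrays.toString(policy[0]));
    }
}
```

In the framework itself, a problem like this would typically be posed by subclassing `FiniteMDP<S,A>` (or one of the other classes above, depending on the time model and horizon) and supplying the transition probabilities, immediate costs, and feasible actions the solver needs, rather than by coding the recursion by hand as done here.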