Uses of Package jmarkov.basic

Packages that use jmarkov.basic

Package | Description
---|---
jmarkov | Provides the basic elements to model continuous-time Markov chains (CTMCs).
jmarkov.basic | This package contains basic elements such as State, Event, and Action that are used in jMarkov and jMDP.
jmarkov.jmdp | jMDP is used to solve Markov Decision Processes (MDPs).
jmarkov.jmdp.solvers | This package contains the framework of solvers used by jMDP to solve Markov Decision Processes.
jmarkov.solvers | Provides classes for customizing the solvers used by jMarkov to compute transient and steady-state probabilities in different models.
jphase | This package provides capabilities for modeling phase-type distributions.

Classes in jmarkov.basic used by jmarkov

Class | Description
---|---
Event | The class Event allows the user to define the implementation of the Events that can alter the States of the Markov chain.
EventsSet | This class represents a set of Events.
JMarkovElement | All the elements in jMarkov implement this interface, so that they can be easily described in the user interface.
State | The class State represents a state in a MarkovProcess or MDP.
States | This interface represents a set of State objects.
StatesSet | This class represents a set of States.
Transitions | This interface represents a set of Transition objects.
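
To make the Event/State relationship above concrete, here is a minimal standalone sketch of a birth-death queue in which two events alter an integer-valued state. It deliberately does not use the jMarkov API: the class BirthDeathSketch, the enum Ev, the record QState, and the methods active, dest, and rate are hypothetical names chosen for illustration, and the rates are made up. In jMarkov itself, the analogous information is expressed through the State and Event subclasses listed above.

```java
// Standalone conceptual sketch (not the jMarkov API; all names are hypothetical).
import java.util.ArrayList;
import java.util.List;

public class BirthDeathSketch {
    enum Ev { ARRIVAL, DEPARTURE }   // events that can alter the state
    record QState(int level) {}      // a state of the chain: number in system

    static final int CAPACITY = 5;
    static final double LAMBDA = 1.0, MU = 2.0;   // made-up rates

    // Whether an event can occur in a given state.
    static boolean active(QState s, Ev e) {
        return switch (e) {
            case ARRIVAL   -> s.level() < CAPACITY;
            case DEPARTURE -> s.level() > 0;
        };
    }

    // The state reached when the event fires, and the rate at which it fires.
    static QState dest(QState s, Ev e) {
        return new QState(s.level() + (e == Ev.ARRIVAL ? 1 : -1));
    }

    static double rate(Ev e) {
        return e == Ev.ARRIVAL ? LAMBDA : MU;
    }

    public static void main(String[] args) {
        QState s = new QState(2);
        List<String> out = new ArrayList<>();
        for (Ev e : Ev.values())
            if (active(s, e))
                out.add(e + " -> " + dest(s, e) + " at rate " + rate(e));
        System.out.println(out);   // the outgoing transitions of the state with level 2
    }
}
```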

Classes in jmarkov.basic used by jmarkov.basic

Class | Description
---|---
Action | This class represents a single Action in a Markov Decision Process (MDP).
Actions | This interface represents a set of Action objects.
DecisionRule | This class represents a deterministic decision rule, which assigns an action to every state.
Event | The class Event allows the user to define the implementation of the Events that can alter the States of the Markov chain.
Events | This class represents a set of Event objects.
EventsSet | This class represents a set of Events.
JMarkovElement | All the elements in jMarkov implement this interface, so that they can be easily described in the user interface.
Policy | Policy is a set of "Decision Rules".
PropertiesAction | This class is an easy way to use an Action that is represented by an integer-valued array.
PropertiesElement | This interface is a wrapper for elements (States, Actions, and Events) that can be represented by an array of integers.
PropertiesEvent | This class is an easy way to use an Event that is represented by an array of integers.
PropertiesState | The states are characterized by an array of integer-valued properties, whose meaning varies from implementation to implementation.
State | The class State represents a state in a MarkovProcess or MDP.
States | This interface represents a set of State objects.
Transition | This class represents a transition to a given state.
Transitions | This interface represents a set of Transition objects.
ValueFunction | This structure matches each state with a double value representing its value function or, in some cases, its steady-state probability.
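
The Properties* classes and the DecisionRule/ValueFunction structures listed above are essentially containers for simple data. The standalone sketch below illustrates the underlying ideas: a state characterized by an array of integer-valued properties, a deterministic decision rule that assigns one action to every state, and a value function that matches each state with a double. It is not the jMarkov API; PropState, Act, and the other names are hypothetical, and the numbers are made up.

```java
// Standalone conceptual sketch (not the jMarkov API; all names are hypothetical).
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class MdpStructuresSketch {

    // A state described only by an int array of properties, e.g. (inventory, backlog).
    record PropState(int[] props) {
        @Override public boolean equals(Object o) {
            return o instanceof PropState p && Arrays.equals(props, p.props);
        }
        @Override public int hashCode() { return Arrays.hashCode(props); }
        @Override public String toString() { return Arrays.toString(props); }
    }

    enum Act { ORDER, WAIT }   // the available actions

    public static void main(String[] args) {
        PropState low  = new PropState(new int[] {0, 1});
        PropState high = new PropState(new int[] {5, 0});

        // Deterministic decision rule: exactly one action per state.
        Map<PropState, Act> decisionRule = new HashMap<>();
        decisionRule.put(low,  Act.ORDER);
        decisionRule.put(high, Act.WAIT);

        // Value function: each state matched with a double.
        Map<PropState, Double> valueFunction = new HashMap<>();
        valueFunction.put(low,  -4.2);
        valueFunction.put(high,  1.7);

        for (PropState s : decisionRule.keySet())
            System.out.println(s + " -> " + decisionRule.get(s)
                    + " (value " + valueFunction.get(s) + ")");
    }
}
```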

Classes in jmarkov.basic used by jmarkov.jmdp

Class | Description
---|---
Action | This class represents a single Action in a Markov Decision Process (MDP).
Actions | This interface represents a set of Action objects.
Event | The class Event allows the user to define the implementation of the Events that can alter the States of the Markov chain.
Events | This class represents a set of Event objects.
Policy | Policy is a set of "Decision Rules".
Solution | This class represents the joint information of a value function and a policy, which together summarize the solution to a problem.
State | The class State represents a state in a MarkovProcess or MDP.
StateC | A State used to model shortest-path problems.
StateEvent | This class represents a compound state consisting of a state and an event.
States | This interface represents a set of State objects.
StatesSet | This class represents a set of States.
ValueFunction | This structure matches each state with a double value representing its value function or, in some cases, its steady-state probability.
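
The Solution class above bundles exactly the two objects that an MDP solver produces: a value function and a policy. As a rough standalone illustration (again not the jMarkov API, with made-up transition probabilities and rewards), a single discounted Bellman update over a two-state, two-action toy problem yields both pieces at once: an updated value function and a greedy decision rule.

```java
// Standalone conceptual sketch (not the jMarkov API; all data here is made up).
import java.util.HashMap;
import java.util.Map;

public class BellmanSketch {
    public static void main(String[] args) {
        String[] states  = {"s0", "s1"};
        String[] actions = {"a0", "a1"};
        double gamma = 0.9;   // discount factor

        // reward[state][action] and prob[state][action][nextState]
        double[][] reward = {{1.0, 0.0}, {0.0, 2.0}};
        double[][][] prob = {
            {{0.8, 0.2}, {0.1, 0.9}},
            {{0.5, 0.5}, {0.0, 1.0}}
        };

        double[] value = {0.0, 0.0};                 // current value function
        double[] newValue = new double[2];           // updated value function
        Map<String, String> decisionRule = new HashMap<>();

        // One Bellman update: for each state, take the best one-step action.
        for (int s = 0; s < 2; s++) {
            double best = Double.NEGATIVE_INFINITY;
            int bestA = 0;
            for (int a = 0; a < 2; a++) {
                double q = reward[s][a];
                for (int t = 0; t < 2; t++)
                    q += gamma * prob[s][a][t] * value[t];
                if (q > best) { best = q; bestA = a; }
            }
            newValue[s] = best;
            decisionRule.put(states[s], actions[bestA]);   // greedy action
        }

        System.out.println("value function: " + java.util.Arrays.toString(newValue));
        System.out.println("decision rule:  " + decisionRule);
    }
}
```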

Classes in jmarkov.basic used by jmarkov.jmdp.solvers

Class | Description
---|---
Action | This class represents a single Action in a Markov Decision Process (MDP).
DecisionRule | This class represents a deterministic decision rule, which assigns an action to every state.
JMarkovElement | All the elements in jMarkov implement this interface, so that they can be easily described in the user interface.
Policy | Policy is a set of "Decision Rules".
Solution | This class represents the joint information of a value function and a policy, which together summarize the solution to a problem.
State | The class State represents a state in a MarkovProcess or MDP.
StateC | A State used to model shortest-path problems.
ValueFunction | This structure matches each state with a double value representing its value function or, in some cases, its steady-state probability.

Classes in jmarkov.basic used by jmarkov.solvers

Class | Description
---|---
JMarkovElement | All the elements in jMarkov implement this interface, so that they can be easily described in the user interface.
State | The class State represents a state in a MarkovProcess or MDP.

Classes in jmarkov.basic used by jphase

Class | Description
---|---
JMarkovElement | All the elements in jMarkov implement this interface, so that they can be easily described in the user interface.