From cb1ffad4c9ba35f205cb0a5a42972035db03753e Mon Sep 17 00:00:00 2001
From: hannah
Date: Wed, 1 Jul 2020 00:13:18 +0200
Subject: [PATCH] small corrections

---
 doc/source/doc/building_dtmcs.rst |  2 +-
 doc/source/doc/building_mas.rst   | 10 ++++++----
 doc/source/doc/building_mdps.rst  |  8 ++++----
 3 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/doc/source/doc/building_dtmcs.rst b/doc/source/doc/building_dtmcs.rst
index a2dc7bf..c39350a 100644
--- a/doc/source/doc/building_dtmcs.rst
+++ b/doc/source/doc/building_dtmcs.rst
@@ -67,7 +67,7 @@ Instead of using the SparseMatrixBuilder, a sparse matrix can be build from a nu
 Labeling
 ====================
-States can be labelled with sets of propositions, for example state 0 can be labelled with 'init'.
+States can be labeled with sets of propositions, for example state 0 can be labeled with 'init'.
 In order to specify the state labeling we create an empty labeling for the given number of states and add the labels to the labeling::

 >>> state_labeling = stormpy.storage.StateLabeling(13)
diff --git a/doc/source/doc/building_mas.rst b/doc/source/doc/building_mas.rst
index f9dae29..8906559 100644
--- a/doc/source/doc/building_mas.rst
+++ b/doc/source/doc/building_mas.rst
@@ -7,7 +7,9 @@ Background
 =====================

 We already saw the process of building CTMCs with MDPs via Stormpy in :doc:`building_ctmcs` and :doc:`building_mdps`.
-In this section, we build a small Markarov automata with five states from which the first four are markovian.
+Markov automata have two kinds of states: probabilistic states, which behave like the states of an MDP, and markovian states, which behave like the states of a CTMC.
+
+In this section, we build a small MA with five states, of which the first four are markovian.
 Since we covered labeling and exit rates already in the previous examples we omit the description of these components.
 The full example can be found here:
@@ -20,7 +22,7 @@ First, we import Stormpy::
 Transition Matrix
 ==================
 In :doc:`building_mdps`, we used the SparseMatrixBuilder to create a matrix with a custom row grouping.
-In this example, we use the numpy library:
+In this example, we use the numpy library.

 In the beginning, we create a numpy array that will be used to build the transition matrix
 of our model.::
@@ -55,8 +57,8 @@ When building the matrix we define a custom row grouping by passing a list conta
 Markovian States
 ==================
-Feature of MA: Markovian states.
-Those are represented by a BitVector that containing the respective states::
+In order to define which states have only one probability distribution over the successor states,
+we build a BitVector that contains the respective markovian states::

 >>> markovian_states = stormpy.BitVector(5, [1, 2, 3, 4])
diff --git a/doc/source/doc/building_mdps.rst b/doc/source/doc/building_mdps.rst
index 232b514..3f07e6b 100644
--- a/doc/source/doc/building_mdps.rst
+++ b/doc/source/doc/building_mdps.rst
@@ -16,7 +16,7 @@ First, we import Stormpy::
 Transition Matrix
 =====================
-Since we want build a nondeterminstic model, we create a transition matrix with a custom row group for each state::
+Since we want to build a nondeterministic model, we create a transition matrix with a custom row group for each state::

 >>> builder = stormpy.SparseMatrixBuilder(rows=0, columns=0, entries=0, force_dimensions=False, has_custom_row_grouping=True, row_groups=0)
@@ -64,8 +64,7 @@ Labeling
 ========
 We have seen the construction of a state labeling in previous examples. Therefore we omit the description here.
 Instead we focus on the choices.
-Since in state 0 a nondeterministic choice over two actions is available
-The number of choices is 14.
+Since in state 0 a nondeterministic choice over two actions is available, the number of choices is 14.
 To distinguish those we can define a choice labeling::

 >>> choice_labeling = stormpy.storage.ChoiceLabeling(14)
@@ -87,7 +86,8 @@ Recall that those actions where defined in row one and two of the transition mat
 Reward models
 ==================
-Reward models, length of vector coincides with number of choices::
+
+In this reward model, the length of the vector coincides with the number of choices::

 >>> reward_models = {}
 >>> action_reward = [0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
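
The final hunk ends inside the reward-model snippet, so for review context, here is where a per-choice vector like action_reward typically ends up in Stormpy. This is a minimal sketch only: it assumes the transition_matrix, state_labeling and choice_labeling objects built earlier in building_mdps.rst, and the dictionary key 'action_rewards' is an illustrative name, not the one used in the documentation::

 >>> # Sketch: wrap the per-choice rewards into a stormpy reward model.
 >>> # The vector has one entry per choice (14 here), matching the number
 >>> # of rows of the transition matrix built above.
 >>> reward_models['action_rewards'] = stormpy.SparseRewardModel(
 ...     optional_state_action_reward_vector=action_reward)
 >>> # Collect the components and construct the nondeterministic model.
 >>> components = stormpy.SparseModelComponents(
 ...     transition_matrix=transition_matrix,
 ...     state_labeling=state_labeling,
 ...     reward_models=reward_models)
 >>> components.choice_labeling = choice_labeling
 >>> mdp = stormpy.storage.SparseMdp(components)

The dictionary key is only a label; expected-reward properties later refer to the reward model by that name.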