Sebastian Junges
4 years ago
42 changed files with 3458 additions and 1468 deletions
- 2    CHANGELOG.md
- 45   binder/Dockerfile
- 22   doc/source/conf.py
- 196  doc/source/doc/analysis.ipynb
- 69   doc/source/doc/analysis.rst
- 136  doc/source/doc/building_models.ipynb
- 44   doc/source/doc/building_models.rst
- 132  doc/source/doc/dfts.ipynb
- 51   doc/source/doc/dfts.rst
- 221  doc/source/doc/engines.ipynb
- 82   doc/source/doc/engines.rst
- 203  doc/source/doc/exploration.ipynb
- 113  doc/source/doc/exploration.rst
- 260  doc/source/doc/gspns.ipynb
- 84   doc/source/doc/gspns.rst
- 189  doc/source/doc/models/building_ctmcs.ipynb
- 103  doc/source/doc/models/building_ctmcs.rst
- 328  doc/source/doc/models/building_dtmcs.ipynb
- 151  doc/source/doc/models/building_dtmcs.rst
- 211  doc/source/doc/models/building_mas.ipynb
- 120  doc/source/doc/models/building_mas.rst
- 309  doc/source/doc/models/building_mdps.ipynb
- 151  doc/source/doc/models/building_mdps.rst
- 174  doc/source/doc/parametric_models.ipynb
- 67   doc/source/doc/parametric_models.rst
- 133  doc/source/doc/reward_models.ipynb
- 64   doc/source/doc/reward_models.rst
- 202  doc/source/doc/schedulers.ipynb
- 99   doc/source/doc/schedulers.rst
- 167  doc/source/doc/shortest_paths.ipynb
- 63   doc/source/doc/shortest_paths.rst
- 480  doc/source/getting_started.ipynb
- 188  doc/source/getting_started.rst
- 7    examples/highlevel_models/01-highlevel-models.py
- 5    lib/stormpy/examples/files/mdp/maze_2.nm
- 2    setup.cfg
- 3    setup.py
- 3    src/logic/formulae.cpp
- 2    src/storage/expressions.cpp
- 22   src/storage/jani.cpp
- 17   src/storage/prism.cpp
- 6    travis/build-helper.sh
binder/Dockerfile: @@ -0,0 +1,45 @@ |
|||
FROM movesrwth/stormpy:1.6.2 |
|||
MAINTAINER Matthias Volk <matthias.volk@cs.rwth-aachen.de> |
|||
|
|||
|
|||
########## |
|||
# Create user |
|||
########## |
|||
|
|||
ARG NB_USER=jovyan |
|||
ARG NB_UID=1000 |
|||
ENV USER ${NB_USER} |
|||
ENV NB_UID ${NB_UID} |
|||
ENV HOME /home/${NB_USER} |
|||
|
|||
RUN adduser --disabled-password \ |
|||
--gecos "Default user" \ |
|||
--uid ${NB_UID} \ |
|||
${NB_USER} |
|||
|
|||
# Change the owner of the virtual environment |
|||
WORKDIR /opt |
|||
USER root |
|||
RUN chown -R ${NB_UID} venv |
|||
USER ${NB_USER} |
|||
|
|||
WORKDIR ${HOME} |
|||
# Add missing path |
|||
ENV PATH="$HOME/.local/bin:$PATH" |
|||
|
|||
|
|||
########## |
|||
# Install dependencies |
|||
########## |
|||
|
|||
RUN pip install --no-cache-dir notebook==5.7.9 |
|||
RUN pip install --no-cache-dir numpy==1.19.0 |
|||
|
|||
########## |
|||
# Copy files for notebooks |
|||
########## |
|||
|
|||
RUN mkdir notebooks |
|||
COPY doc/source/*.ipynb notebooks/ |
|||
COPY doc/source/doc/*.ipynb notebooks/doc/ |
|||
COPY doc/source/doc/models/*.ipynb notebooks/doc/models/ |
doc/source/doc/analysis.ipynb: @@ -0,0 +1,196 @@ |
|||
{ |
|||
"cells": [ |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"# Analysis" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Background\n", |
|||
"\n", |
|||
"Storm supports various model checking approaches that we consider in this section on analysis.\n", |
|||
"\n", |
|||
"As always:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> import stormpy\n", |
|||
">>> import stormpy.examples\n", |
|||
">>> import stormpy.examples.files\n", |
|||
">>> path = stormpy.examples.files.prism_dtmc_die\n", |
|||
">>> prism_program = stormpy.parse_prism_program(path)\n", |
|||
">>> formula_str = \"P=? [F s=7 & d=2]\"\n", |
|||
">>> properties = stormpy.parse_properties(formula_str, prism_program)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Qualitative Analysis" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Adapting the model checking engine\n", |
|||
"\n", |
|||
"[02-analysis.py](https://github.com/moves-rwth/stormpy/blob/master/examples/analysis/02-analysis.py)\n", |
|||
"\n", |
|||
"Instead of using the sparse representation, models can also be built symbolically:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> model = stormpy.build_symbolic_model(prism_program, properties)\n", |
|||
">>> result = stormpy.model_checking(model, properties[0])" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"To access the result, the result has to be filtered:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> filter = stormpy.create_filter_initial_states_symbolic(model)\n", |
|||
">>> result.filter(filter)\n", |
|||
">>> assert result.min == result.max" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Then, result.min (or result.max) contains the result. Notice that if there are multiple initial states, result.min will not be equal to result.max.\n", |
|||
"\n", |
|||
"Besides this analysis on the DD, there are approaches that combine both representation.\n", |
|||
"Stormpy does support them, but we have not yet documented them." |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Adapting model checking algorithms\n", |
|||
"\n", |
|||
"[03-analysis.py](https://github.com/moves-rwth/stormpy/blob/master/examples/analysis/03-analysis.py)\n", |
|||
"\n", |
|||
"Reconsider the model checking example from the getting started guide:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> model = stormpy.build_model(prism_program, properties)\n", |
|||
">>> result = stormpy.model_checking(model, properties[0])" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"We can also vary the model checking algorithm:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> env = stormpy.Environment()\n", |
|||
">>> env.solver_environment.set_linear_equation_solver_type(stormpy.EquationSolverType.native)\n", |
|||
">>> env.solver_environment.native_solver_environment.method = stormpy.NativeLinearEquationSolverMethod.power_iteration\n", |
|||
">>> result = stormpy.model_checking(model, properties[0], environment=env)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Finally, we allow to change some parameters of the algorithms. E.g., in iterative approaches,\n", |
|||
"we allow to change the number of iterations:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> env.solver_environment.native_solver_environment.maximum_iterations = 3\n", |
|||
">>> result = stormpy.model_checking(model, properties[0])" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Notice that by setting such parameters, the result may be off from the actual model checking algorithm.\n", |
|||
"\n", |
|||
"Environments can be used likewise for symbolic model checking. See the example for more information." |
|||
] |
|||
} |
|||
], |
|||
"metadata": { |
|||
"date": 1598178167.1206837, |
|||
"filename": "analysis.rst", |
|||
"kernelspec": { |
|||
"display_name": "Python 3", |
|||
"language": "python", |
|||
"name": "python3" |
|||
}, |
|||
"language_info": { |
|||
"codemirror_mode": { |
|||
"name": "ipython", |
|||
"version": 3 |
|||
}, |
|||
"file_extension": ".py", |
|||
"mimetype": "text/x-python", |
|||
"name": "python", |
|||
"nbconvert_exporter": "python", |
|||
"pygments_lexer": "ipython3", |
|||
"version": "3.8.2" |
|||
}, |
|||
"title": "Analysis" |
|||
}, |
|||
"nbformat": 4, |
|||
"nbformat_minor": 4 |
|||
} |
doc/source/doc/analysis.rst: @@ -1,69 +0,0 @@ |
|||
*************** |
|||
Analysis |
|||
*************** |
|||
|
|||
Background |
|||
===================== |
|||
|
|||
Storm supports various model checking approaches that we consider in this section on analysis. |
|||
|
|||
As always:: |
|||
|
|||
>>> import stormpy |
|||
>>> import stormpy.examples |
|||
>>> import stormpy.examples.files |
|||
>>> path = stormpy.examples.files.prism_dtmc_die |
|||
>>> prism_program = stormpy.parse_prism_program(path) |
|||
>>> formula_str = "P=? [F s=7 & d=2]" |
|||
>>> properties = stormpy.parse_properties(formula_str, prism_program) |
|||
|
|||
|
|||
Qualitative Analysis |
|||
====================== |
|||
|
|||
|
|||
Adapting the model checking engine |
|||
================================== |
|||
.. seealso:: `02-analysis.py <https://github.com/moves-rwth/stormpy/blob/master/examples/analysis/02-analysis.py>`_ |
|||
|
|||
Instead of using the sparse representation, models can also be built symbolically:: |
|||
|
|||
>>> model = stormpy.build_symbolic_model(prism_program, properties) |
|||
>>> result = stormpy.model_checking(model, properties[0]) |
|||
|
|||
To access the result, it first has to be filtered:: |
|||
|
|||
>>> filter = stormpy.create_filter_initial_states_symbolic(model) |
|||
>>> result.filter(filter) |
|||
>>> assert result.min == result.max |
|||
|
|||
Then, result.min (or result.max) contains the result. Notice that if there are multiple initial states, result.min and result.max may differ. |
|||
|
|||
Besides this analysis on the DD, there are approaches that combine both representations. |
|||
Stormpy does support them, but we have not yet documented them. |
|||
|
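A first impression of such a combined approach can be obtained from the hybrid engine described in the Engines section; a minimal sketch, reusing the symbolic model and the properties built above (output omitted)::

>>> hybrid_result = stormpy.check_model_hybrid(model, properties[0])
>>> initial_filter = stormpy.create_filter_initial_states_symbolic(model)
>>> hybrid_result.filter(initial_filter)
>>> print(hybrid_result)
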
|||
Adapting model checking algorithms |
|||
================================== |
|||
.. seealso:: `03-analysis.py <https://github.com/moves-rwth/stormpy/blob/master/examples/analysis/03-analysis.py>`_ |
|||
|
|||
Reconsider the model checking example from the getting started guide:: |
|||
|
|||
>>> model = stormpy.build_model(prism_program, properties) |
|||
>>> result = stormpy.model_checking(model, properties[0]) |
|||
|
|||
We can also vary the model checking algorithm:: |
|||
|
|||
>>> env = stormpy.Environment() |
|||
>>> env.solver_environment.set_linear_equation_solver_type(stormpy.EquationSolverType.native) |
|||
>>> env.solver_environment.native_solver_environment.method = stormpy.NativeLinearEquationSolverMethod.power_iteration |
|||
>>> result = stormpy.model_checking(model, properties[0], environment=env) |
|||
|
|||
Finally, we allow changing some parameters of the algorithms. E.g., in iterative approaches, |
|||
we allow changing the number of iterations:: |
|||
|
|||
>>> env.solver_environment.native_solver_environment.maximum_iterations = 3 |
|||
>>> result = stormpy.model_checking(model, properties[0], environment=env) |
|||
|
|||
Notice that by setting such parameters, the obtained result may deviate from the true value, e.g., when the iteration limit is hit before the solver has converged. |
|||
|
|||
Environments can be used likewise for symbolic model checking. See the example for more information. |
doc/source/doc/building_models.ipynb: @@ -0,0 +1,136 @@ |
|||
{ |
|||
"cells": [ |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"# Building Models" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Background\n", |
|||
"\n", |
|||
"Storm supports a wide range of formalisms. Stormpy can be used to build models from some of these formalisms.\n", |
|||
"Moreover, during construction, various options can be set. This document yields information about the most important options." |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Building different formalisms\n", |
|||
"\n", |
|||
"We use some standard examples:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> import stormpy.examples\n", |
|||
">>> import stormpy.examples.files" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Storm supports the explicit DRN format.\n", |
|||
"From this, models can be built directly:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> path = stormpy.examples.files.drn_ctmc_dft\n", |
|||
">>> model = stormpy.build_model_from_drn(path)\n", |
|||
">>> print(model.model_type)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"And the same for parametric models:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> path = stormpy.examples.files.drn_pdtmc_die\n", |
|||
">>> model = stormpy.build_parametric_model_from_drn(path)\n", |
|||
">>> print(model.model_type)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Another option are JANI descriptions. These are another high-level description format.\n", |
|||
"Building models from JANI is done in two steps. First the Jani-description is parsed, and then the model is built from this description:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> path = stormpy.examples.files.jani_dtmc_die\n", |
|||
">>> jani_program, properties = stormpy.parse_jani_model(path)\n", |
|||
">>> model = stormpy.build_model(jani_program)\n", |
|||
">>> print(model.model_type)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Notice that parsing JANI files also returns properties. In JANI, properties can be embedded in the model file." |
|||
] |
|||
} |
|||
], |
|||
"metadata": { |
|||
"date": 1598188121.587518, |
|||
"filename": "building_models.rst", |
|||
"kernelspec": { |
|||
"display_name": "Python 3", |
|||
"language": "python", |
|||
"name": "python3" |
|||
}, |
|||
"language_info": { |
|||
"codemirror_mode": { |
|||
"name": "ipython", |
|||
"version": 3 |
|||
}, |
|||
"file_extension": ".py", |
|||
"mimetype": "text/x-python", |
|||
"name": "python", |
|||
"nbconvert_exporter": "python", |
|||
"pygments_lexer": "ipython3", |
|||
"version": "3.8.2" |
|||
}, |
|||
"title": "Building Models" |
|||
}, |
|||
"nbformat": 4, |
|||
"nbformat_minor": 4 |
|||
} |
doc/source/doc/building_models.rst: @@ -1,44 +0,0 @@ |
|||
*************** |
|||
Building Models |
|||
*************** |
|||
|
|||
Background |
|||
===================== |
|||
|
|||
Storm supports a wide range of formalisms. Stormpy can be used to build models from some of these formalisms. |
|||
Moreover, during construction, various options can be set. This document describes the most important options. |
|||
|
|||
|
|||
Building different formalisms |
|||
=============================== |
|||
|
|||
We use some standard examples:: |
|||
|
|||
>>> import stormpy.examples |
|||
>>> import stormpy.examples.files |
|||
|
|||
Storm supports the explicit DRN format. |
|||
From this, models can be built directly:: |
|||
|
|||
>>> path = stormpy.examples.files.drn_ctmc_dft |
|||
>>> model = stormpy.build_model_from_drn(path) |
|||
>>> print(model.model_type) |
|||
ModelType.CTMC |
|||
|
|||
And the same for parametric models:: |
|||
|
|||
>>> path = stormpy.examples.files.drn_pdtmc_die |
|||
>>> model = stormpy.build_parametric_model_from_drn(path) |
|||
>>> print(model.model_type) |
|||
ModelType.DTMC |
|||
|
|||
Another option is to use JANI descriptions, another high-level description format. |
|||
Building models from JANI is done in two steps. First, the JANI description is parsed, and then the model is built from this description:: |
|||
|
|||
>>> path = stormpy.examples.files.jani_dtmc_die |
|||
>>> jani_program, properties = stormpy.parse_jani_model(path) |
|||
>>> model = stormpy.build_model(jani_program) |
|||
>>> print(model.model_type) |
|||
ModelType.DTMC |
|||
|
|||
Notice that parsing JANI files also returns properties. In JANI, properties can be embedded in the model file. |
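
For illustration, the returned properties can be inspected directly; which properties are printed depends on the JANI file (a minimal sketch)::

>>> for prop in properties:
...     print(prop)
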
doc/source/doc/dfts.ipynb: @@ -0,0 +1,132 @@ |
|||
{ |
|||
"cells": [ |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"# Dynamic Fault Trees" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Building DFTs\n", |
|||
"\n", |
|||
"[01-dfts.py](https://github.com/moves-rwth/stormpy/blob/master/examples/dfts/01-dfts.py)\n", |
|||
"\n", |
|||
"Dynamic fault trees can be loaded from either the Galileo format or from a custom JSON form.\n", |
|||
"A file containing the DFT in the Galileo format can be loaded via `load_dft_galileo_file(path)`.\n", |
|||
"The custom JSON format can be loaded from a file via `load_dft_json_file(path)` or directly from a string via `load_dft_json_string(path)`.\n", |
|||
"We start by loading a simple DFT containing an AND gate from JSON:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> import stormpy\n", |
|||
">>> import stormpy.dft\n", |
|||
">>> import stormpy.examples\n", |
|||
">>> import stormpy.examples.files\n", |
|||
">>> path_json = stormpy.examples.files.dft_json_and\n", |
|||
">>> dft_small = stormpy.dft.load_dft_json_file(path_json)\n", |
|||
">>> print(dft_small)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Next we load a more complex DFT from the Galileo format:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> path_galileo = stormpy.examples.files.dft_galileo_hecs\n", |
|||
">>> dft = stormpy.dft.load_dft_galileo_file(path_galileo)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"After loading the DFT, we can display some common statistics about the model:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> print(\"DFT with {} elements.\".format(dft.nr_elements()))\n", |
|||
">>> print(\"DFT has {} BEs and {} dynamic elements.\".format(dft.nr_be(), dft.nr_dynamic()))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Analyzing DFTs\n", |
|||
"\n", |
|||
"[01-dfts.py](https://github.com/moves-rwth/stormpy/blob/master/examples/dfts/01-dfts.py)\n", |
|||
"\n", |
|||
"The next step is to analyze the DFT via `analyze_dft(dft, formula)`.\n", |
|||
"Here we can use all standard properties as described in [Building properties](../getting_started.ipynb#building-properties).\n", |
|||
"In our example we compute the Mean-time-to-failure (MTTF):" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> formula_str = \"T=? [ F \\\"failed\\\" ]\"\n", |
|||
">>> formulas = stormpy.parse_properties(formula_str)\n", |
|||
">>> results = stormpy.dft.analyze_dft(dft, [formulas[0].raw_formula])\n", |
|||
">>> result = results[0]\n", |
|||
">>> print(\"MTTF: {:.2f}\".format(result))" |
|||
] |
|||
} |
|||
], |
|||
"metadata": { |
|||
"date": 1598178167.1422036, |
|||
"filename": "dfts.rst", |
|||
"kernelspec": { |
|||
"display_name": "Python 3", |
|||
"language": "python", |
|||
"name": "python3" |
|||
}, |
|||
"language_info": { |
|||
"codemirror_mode": { |
|||
"name": "ipython", |
|||
"version": 3 |
|||
}, |
|||
"file_extension": ".py", |
|||
"mimetype": "text/x-python", |
|||
"name": "python", |
|||
"nbconvert_exporter": "python", |
|||
"pygments_lexer": "ipython3", |
|||
"version": "3.8.2" |
|||
}, |
|||
"title": "Dynamic Fault Trees" |
|||
}, |
|||
"nbformat": 4, |
|||
"nbformat_minor": 4 |
|||
} |
doc/source/doc/dfts.rst: @@ -1,51 +0,0 @@ |
|||
******************* |
|||
Dynamic Fault Trees |
|||
******************* |
|||
|
|||
|
|||
Building DFTs |
|||
============= |
|||
.. seealso:: `01-dfts.py <https://github.com/moves-rwth/stormpy/blob/master/examples/dfts/01-dfts.py>`_ |
|||
|
|||
Dynamic fault trees can be loaded from either the Galileo format or a custom JSON format. |
|||
A file containing the DFT in the Galileo format can be loaded via ``load_dft_galileo_file(path)``. |
|||
The custom JSON format can be loaded from a file via ``load_dft_json_file(path)`` or directly from a string via ``load_dft_json_string(json_string)``. |
|||
We start by loading a simple DFT containing an AND gate from JSON:: |
|||
|
|||
>>> import stormpy |
|||
>>> import stormpy.dft |
|||
>>> import stormpy.examples |
|||
>>> import stormpy.examples.files |
|||
>>> path_json = stormpy.examples.files.dft_json_and |
|||
>>> dft_small = stormpy.dft.load_dft_json_file(path_json) |
|||
>>> print(dft_small) |
|||
Top level index: 2, Nr BEs2 |
|||
|
|||
Next we load a more complex DFT from the Galileo format:: |
|||
|
|||
>>> path_galileo = stormpy.examples.files.dft_galileo_hecs |
|||
>>> dft = stormpy.dft.load_dft_galileo_file(path_galileo) |
|||
|
|||
After loading the DFT, we can display some common statistics about the model:: |
|||
|
|||
>>> print("DFT with {} elements.".format(dft.nr_elements())) |
|||
DFT with 23 elements. |
|||
>>> print("DFT has {} BEs and {} dynamic elements.".format(dft.nr_be(), dft.nr_dynamic())) |
|||
DFT has 13 BEs and 2 dynamic elements. |
|||
|
|||
|
|||
Analyzing DFTs |
|||
============== |
|||
.. seealso:: `01-dfts.py <https://github.com/moves-rwth/stormpy/blob/master/examples/dfts/01-dfts.py>`_ |
|||
|
|||
The next step is to analyze the DFT via ``analyze_dft(dft, formula)``. |
|||
Here we can use all standard properties as described in :ref:`getting-started-building-properties`. |
|||
In our example we compute the `Mean-time-to-failure (MTTF)`:: |
|||
|
|||
>>> formula_str = "T=? [ F \"failed\" ]" |
|||
>>> formulas = stormpy.parse_properties(formula_str) |
|||
>>> results = stormpy.dft.analyze_dft(dft, [formulas[0].raw_formula]) |
|||
>>> result = results[0] |
|||
>>> print("MTTF: {:.2f}".format(result)) |
|||
MTTF: 363.89 |
|||
|
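Other standard properties are analyzed in the same way. For instance, a sketch of computing the probability that the DFT fails within a (here arbitrarily chosen) time bound of 1.0, with the concrete value omitted::

>>> formula_str_bounded = "P=? [ F<=1.0 \"failed\" ]"
>>> formulas_bounded = stormpy.parse_properties(formula_str_bounded)
>>> results_bounded = stormpy.dft.analyze_dft(dft, [formulas_bounded[0].raw_formula])
>>> print("P(failed within 1.0): {:.4f}".format(results_bounded[0]))
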
doc/source/doc/engines.ipynb: @@ -0,0 +1,221 @@ |
|||
{ |
|||
"cells": [ |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"# Engines" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Background\n", |
|||
"\n", |
|||
"Storm supports different engines for building and checking a model. A detailed comparison of the different engines provided in Storm can be found on the [Storm website](http://www.stormchecker.org/documentation/usage/engines.html)." |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Sparse engine\n", |
|||
"\n", |
|||
"In all of the examples so far we used the default sparse engine:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": {}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> import stormpy.examples\n", |
|||
">>> import stormpy.examples.files\n", |
|||
">>> prism_program = stormpy.parse_prism_program(stormpy.examples.files.prism_dtmc_die)\n", |
|||
">>> properties = stormpy.parse_properties('P=? [F \"one\"]', prism_program)\n", |
|||
">>> sparse_model = stormpy.build_sparse_model(prism_program, properties)\n", |
|||
">>> print(type(sparse_model))\n" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": {}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> print(\"Number of states: {}\".format(sparse_model.nr_states))\n", |
|||
"\n" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": {}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> print(\"Number of transitions: {}\".format(sparse_model.nr_transitions))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": {}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> sparse_result = stormpy.check_model_sparse(sparse_model, properties[0])\n", |
|||
">>> initial_state = sparse_model.initial_states[0]\n", |
|||
">>> print(sparse_result.at(initial_state))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Symbolic engine\n", |
|||
"\n", |
|||
"Instead of using the sparse engine, one can also use a symbolic representation in terms of binary decision diagrams (BDDs).\n", |
|||
"To use the symbolic (dd) engine, we use the symbolic versions for the building and model checking:\n" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": {}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> symbolic_model = stormpy.build_symbolic_model(prism_program, properties)\n", |
|||
">>> print(type(symbolic_model))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": {}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> print(\"Number of states: {}\".format(symbolic_model.nr_states))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": {}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> print(\"Number of transitions: {}\".format(symbolic_model.nr_transitions))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": {}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> symbolic_result = stormpy.check_model_dd(symbolic_model, properties[0])\n", |
|||
">>> print(symbolic_result)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"We can also filter the computed results and only consider the initial states:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": {}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> filter = stormpy.create_filter_initial_states_symbolic(symbolic_model)\n", |
|||
">>> symbolic_result.filter(filter)\n", |
|||
">>> print(symbolic_result.min)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"It is also possible to first build the model symbolically and then transform it into a sparse model:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": {}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> print(type(symbolic_model))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": {}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> transformed_model = stormpy.transform_to_sparse_model(symbolic_model)\n", |
|||
">>> print(type(transformed_model))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Hybrid engine\n", |
|||
"\n", |
|||
"A third possibility is to use the hybrid engine, a combination of sparse and dd engines.\n", |
|||
"It first builds the model symbolically.\n", |
|||
"The actual model checking is then performed with the engine which is deemed most suitable for the given task.\n" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": {}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> print(type(symbolic_model))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": {}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> hybrid_result = stormpy.check_model_hybrid(symbolic_model, properties[0])\n", |
|||
">>> filter = stormpy.create_filter_initial_states_symbolic(symbolic_model)\n", |
|||
">>> hybrid_result.filter(filter)\n", |
|||
">>> print(hybrid_result)" |
|||
] |
|||
} |
|||
], |
|||
"metadata": { |
|||
"date": 1598178167.148, |
|||
"filename": "engines.rst", |
|||
"kernelspec": { |
|||
"display_name": "Python 3", |
|||
"language": "python", |
|||
"name": "python3" |
|||
}, |
|||
"language_info": { |
|||
"codemirror_mode": { |
|||
"name": "ipython", |
|||
"version": 3 |
|||
}, |
|||
"file_extension": ".py", |
|||
"mimetype": "text/x-python", |
|||
"name": "python", |
|||
"nbconvert_exporter": "python", |
|||
"pygments_lexer": "ipython3", |
|||
"version": "3.8.2" |
|||
}, |
|||
"title": "Engines" |
|||
}, |
|||
"nbformat": 4, |
|||
"nbformat_minor": 4 |
|||
} |
doc/source/doc/engines.rst: @@ -1,82 +0,0 @@ |
|||
*************** |
|||
Engines |
|||
*************** |
|||
|
|||
Background |
|||
===================== |
|||
|
|||
Storm supports different engines for building and checking a model. A detailed comparison of the different engines provided in Storm can be found on the `Storm website <http://www.stormchecker.org/documentation/usage/engines.html>`_. |
|||
|
|||
|
|||
Sparse engine |
|||
=============================== |
|||
|
|||
In all of the examples so far we used the default sparse engine: |
|||
|
|||
>>> import stormpy.examples |
|||
>>> import stormpy.examples.files |
|||
>>> prism_program = stormpy.parse_prism_program(stormpy.examples.files.prism_dtmc_die) |
|||
>>> properties = stormpy.parse_properties('P=? [F "one"]', prism_program) |
|||
>>> sparse_model = stormpy.build_sparse_model(prism_program, properties) |
|||
>>> print(type(sparse_model)) |
|||
<class 'stormpy.storage.storage.SparseDtmc'> |
|||
>>> print("Number of states: {}".format(sparse_model.nr_states)) |
|||
Number of states: 13 |
|||
>>> print("Number of transitions: {}".format(sparse_model.nr_transitions)) |
|||
Number of transitions: 20 |
|||
|
|||
The model checking was also done in the sparse engine: |
|||
|
|||
>>> sparse_result = stormpy.check_model_sparse(sparse_model, properties[0]) |
|||
>>> initial_state = sparse_model.initial_states[0] |
|||
>>> print(sparse_result.at(initial_state)) |
|||
0.16666666666666666 |
|||
|
|||
|
|||
Symbolic engine |
|||
=============================== |
|||
|
|||
Instead of using the sparse engine, one can also use a symbolic representation in terms of `binary decision diagrams (BDDs)`. |
|||
To use the symbolic (dd) engine, we use the symbolic versions for the building and model checking: |
|||
|
|||
>>> symbolic_model = stormpy.build_symbolic_model(prism_program, properties) |
|||
>>> print(type(symbolic_model)) |
|||
<class 'stormpy.storage.storage.SymbolicSylvanDtmc'> |
|||
>>> print("Number of states: {}".format(symbolic_model.nr_states)) |
|||
Number of states: 13 |
|||
>>> print("Number of transitions: {}".format(symbolic_model.nr_transitions)) |
|||
Number of transitions: 20 |
|||
>>> symbolic_result = stormpy.check_model_dd(symbolic_model, properties[0]) |
|||
>>> print(symbolic_result) |
|||
[0, 1] (range) |
|||
|
|||
We can also filter the computed results and only consider the initial states: |
|||
|
|||
>>> filter = stormpy.create_filter_initial_states_symbolic(symbolic_model) |
|||
>>> symbolic_result.filter(filter) |
|||
>>> print(symbolic_result.min) |
|||
0.16666650772094727 |
|||
|
|||
It is also possible to first build the model symbolically and then transform it into a sparse model: |
|||
|
|||
>>> print(type(symbolic_model)) |
|||
<class 'stormpy.storage.storage.SymbolicSylvanDtmc'> |
|||
>>> transformed_model = stormpy.transform_to_sparse_model(symbolic_model) |
|||
>>> print(type(transformed_model)) |
|||
<class 'stormpy.storage.storage.SparseDtmc'> |
|||
|
|||
|
|||
Hybrid engine |
|||
=============================== |
|||
|
|||
A third possibility is to use the hybrid engine, a combination of sparse and dd engines. |
|||
It first builds the model symbolically. |
|||
The actual model checking is then performed with the engine which is deemed most suitable for the given task. |
|||
|
|||
>>> print(type(symbolic_model)) |
|||
<class 'stormpy.storage.storage.SymbolicSylvanDtmc'> |
|||
>>> hybrid_result = stormpy.check_model_hybrid(symbolic_model, properties[0]) |
|||
>>> filter = stormpy.create_filter_initial_states_symbolic(symbolic_model) |
|||
>>> hybrid_result.filter(filter) |
|||
>>> print(hybrid_result) |
|||
0.166667 |
doc/source/doc/exploration.ipynb: @@ -0,0 +1,203 @@ |
|||
{ |
|||
"cells": [ |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"# Exploring Models" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Background\n", |
|||
"\n", |
|||
"Often, stormpy is used as a testbed for new algorithms.\n", |
|||
"An essential step is to transfer the (low-level) descriptions of an MDP or other state-based model into\n", |
|||
"an own algorithm. In this section, we discuss some of the functionality." |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Reading MDPs\n", |
|||
"\n", |
|||
"[01-exploration.py](https://github.com/moves-rwth/stormpy/blob/master/examples/exploration/01-exploration.py)\n", |
|||
"\n", |
|||
"In [Getting Started](../getting_started.ipynb), we briefly iterated over a DTMC. In this section, we explore an MDP:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false, |
|||
"scrolled": true |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> import doctest\n", |
|||
">>> doctest.ELLIPSIS_MARKER = '-etc-' \n", |
|||
">>> import stormpy\n", |
|||
">>> import stormpy.examples\n", |
|||
">>> import stormpy.examples.files\n", |
|||
">>> program = stormpy.parse_prism_program(stormpy.examples.files.prism_mdp_maze)\n", |
|||
">>> prop = \"R=? [F \\\"goal\\\"]\"\n", |
|||
"\n", |
|||
">>> properties = stormpy.parse_properties_for_prism_program(prop, program, None)\n", |
|||
">>> model = stormpy.build_model(program, properties)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"The iteration over the model is as before, but now, for every action, we can have several transitions:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false, |
|||
"scrolled": true |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> for state in model.states:\n", |
|||
"... if state.id in model.initial_states:\n", |
|||
"... print(\"State {} is initial\".format(state.id))\n", |
|||
"... for action in state.actions:\n", |
|||
"... for transition in action.transitions:\n", |
|||
"... print(\"From state {} by action {}, with probability {}, go to state {}\".format(state, action, transition.value(), transition.column))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Internally, storm can hold hints to the origin of the actions, which may be helpful to give meaning and for debugging.\n", |
|||
"As the availability and the encoding of this data depends on the input model, we discuss these features in highlevel_models.\n", |
|||
"\n", |
|||
"Storm currently supports deterministic rewards on states or actions. More information can be found in [Reward Models](reward_models.ipynb)." |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Reading POMDPs\n", |
|||
"\n", |
|||
"[02-exploration.py](https://github.com/moves-rwth/stormpy/blob/master/examples/exploration/02-exploration.py)\n", |
|||
"\n", |
|||
"Internally, POMDPs extend MDPs. Thus, iterating over the MDP is done as before.\n", |
|||
"\n" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"scrolled": true |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> import stormpy\n", |
|||
">>> import stormpy.examples\n", |
|||
">>> import stormpy.examples.files\n", |
|||
">>> program = stormpy.parse_prism_program(stormpy.examples.files.prism_pomdp_maze)\n", |
|||
">>> prop = \"R=? [F \\\"goal\\\"]\"\n", |
|||
">>> properties = stormpy.parse_properties_for_prism_program(prop, program, None)\n", |
|||
">>> model = stormpy.build_model(program, properties)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Indeed, all that changed in the code above is the example we use.\n", |
|||
"And, that the model type now is a POMDP:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false, |
|||
"scrolled": true |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> print(model.model_type)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Additionally, POMDPs have a set of observations, which are internally just numbered by an integer from 0 to the number of observations -1" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false, |
|||
"scrolled": true |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> print(model.nr_observations)\n", |
|||
">>> for state in model.states:\n", |
|||
"... print(\"State {} has observation id {}\".format(state.id, model.observations[state.id]))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Sorting states\n", |
|||
"\n", |
|||
"[03-exploration.py](https://github.com/moves-rwth/stormpy/blob/master/examples/exploration/03-exploration.py)\n", |
|||
"\n", |
|||
"Often, one may sort the states according to the graph structure.\n", |
|||
"Storm supports some of these sorting algorithms, e.g., topological sort." |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Reading MAs\n", |
|||
"\n", |
|||
"To be continued…" |
|||
] |
|||
} |
|||
], |
|||
"metadata": { |
|||
"date": 1598178167.1595793, |
|||
"filename": "exploration.rst", |
|||
"kernelspec": { |
|||
"display_name": "Python 3", |
|||
"language": "python", |
|||
"name": "python3" |
|||
}, |
|||
"language_info": { |
|||
"codemirror_mode": { |
|||
"name": "ipython", |
|||
"version": 3 |
|||
}, |
|||
"file_extension": ".py", |
|||
"mimetype": "text/x-python", |
|||
"name": "python", |
|||
"nbconvert_exporter": "python", |
|||
"pygments_lexer": "ipython3", |
|||
"version": "3.8.2" |
|||
}, |
|||
"title": "Exploring Models" |
|||
}, |
|||
"nbformat": 4, |
|||
"nbformat_minor": 4 |
|||
} |
doc/source/doc/exploration.rst: @@ -1,113 +0,0 @@ |
|||
**************** |
|||
Exploring Models |
|||
**************** |
|||
|
|||
Background |
|||
===================== |
|||
|
|||
Often, stormpy is used as a testbed for new algorithms. |
|||
An essential step is to transfer the (low-level) descriptions of an MDP or other state-based model into |
|||
one's own algorithm. In this section, we discuss some of the functionality. |
|||
|
|||
Reading MDPs |
|||
===================== |
|||
|
|||
.. seealso:: `01-exploration.py <https://github.com/moves-rwth/stormpy/blob/master/examples/exploration/01-exploration.py>`_ |
|||
|
|||
In :doc:`../getting_started`, we briefly iterated over a DTMC. In this section, we explore an MDP:: |
|||
|
|||
>>> import doctest |
|||
>>> doctest.ELLIPSIS_MARKER = '-etc-' # doctest:+ELLIPSIS |
|||
>>> import stormpy |
|||
>>> import stormpy.examples |
|||
>>> import stormpy.examples.files |
|||
>>> program = stormpy.parse_prism_program(stormpy.examples.files.prism_mdp_maze) |
|||
>>> prop = "R=? [F \"goal\"]" |
|||
|
|||
>>> properties = stormpy.parse_properties_for_prism_program(prop, program, None) |
|||
>>> model = stormpy.build_model(program, properties) |
|||
|
|||
The iteration over the model is as before, but now, for every action, we can have several transitions:: |
|||
|
|||
>>> for state in model.states: |
|||
... if state.id in model.initial_states: |
|||
... print("State {} is initial".format(state.id)) |
|||
... for action in state.actions: |
|||
... for transition in action.transitions: |
|||
... print("From state {} by action {}, with probability {}, go to state {}".format(state, action, transition.value(), transition.column)) |
|||
-etc- |
|||
|
|||
The output (omitted for brevity) contains sentences like:: |
|||
|
|||
From state 1 by action 0, with probability 1.0, go to state 2 |
|||
From state 1 by action 1, with probability 1.0, go to state 1 |
|||
|
|||
|
|||
|
|||
Internally, storm can hold hints to the origin of the actions, which may be helpful to give meaning and for debugging. |
|||
As the availability and the encoding of this data depends on the input model, we discuss these features in :doc:`highlevel_models`. |
|||
|
|||
|
|||
Storm currently supports deterministic rewards on states or actions. More information can be found in :doc:`reward_models`. |
|||
|
|||
|
|||
Reading POMDPs |
|||
====================== |
|||
.. seealso:: `02-exploration.py <https://github.com/moves-rwth/stormpy/blob/master/examples/exploration/02-exploration.py>`_ |
|||
|
|||
|
|||
Internally, POMDPs extend MDPs. Thus, iterating over the MDP is done as before. |
|||
|
|||
>>> import stormpy |
|||
>>> import stormpy.examples |
|||
>>> import stormpy.examples.files |
|||
>>> program = stormpy.parse_prism_program(stormpy.examples.files.prism_pomdp_maze) |
|||
>>> prop = "R=? [F \"goal\"]" |
|||
>>> properties = stormpy.parse_properties_for_prism_program(prop, program, None) |
|||
>>> model = stormpy.build_model(program, properties) |
|||
|
|||
Indeed, all that changed in the code above is the example we use. |
|||
Additionally, the model type is now a POMDP:: |
|||
|
|||
>>> print(model.model_type) |
|||
ModelType.POMDP |
|||
|
|||
Additionally, POMDPs have a set of observations, which are internally just numbered by an integer from 0 to the number of observations -1 :: |
|||
|
|||
>>> print(model.nr_observations) |
|||
8 |
|||
>>> for state in model.states: |
|||
... print("State {} has observation id {}".format(state.id, model.observations[state.id])) |
|||
State 0 has observation id 6 |
|||
State 1 has observation id 1 |
|||
State 2 has observation id 4 |
|||
State 3 has observation id 7 |
|||
State 4 has observation id 4 |
|||
State 5 has observation id 3 |
|||
State 6 has observation id 0 |
|||
State 7 has observation id 0 |
|||
State 8 has observation id 0 |
|||
State 9 has observation id 0 |
|||
State 10 has observation id 0 |
|||
State 11 has observation id 0 |
|||
State 12 has observation id 2 |
|||
State 13 has observation id 2 |
|||
State 14 has observation id 5 |
|||
|
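For illustration, the observation ids can be used to group states, using only the API shown above (a small sketch)::

>>> from collections import defaultdict
>>> states_per_observation = defaultdict(list)
>>> for state in model.states:
...     states_per_observation[model.observations[state.id]].append(state.id)
>>> print(states_per_observation[0])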
|||
|
|||
Sorting states |
|||
============== |
|||
.. seealso:: `03-exploration.py <https://github.com/moves-rwth/stormpy/blob/master/examples/exploration/03-exploration.py>`_ |
|||
|
|||
|
|||
Often, one may sort the states according to the graph structure. |
|||
Storm supports some of these sorting algorithms, e.g., topological sort. |
|||
|
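As a purely illustrative sketch (not Storm's built-in routine), a reverse depth-first postorder can be computed with the iteration API shown earlier; on models with cycles this only approximates a topological order::

>>> successors = {}
>>> for state in model.states:
...     successors[state.id] = [t.column for action in state.actions for t in action.transitions]
>>> visited, order = set(), []
>>> def visit(s):
...     if s not in visited:
...         visited.add(s)
...         for succ in successors[s]:
...             visit(succ)
...         order.append(s)
>>> for init in model.initial_states:
...     visit(init)
>>> order.reverse()
>>> print(order[:5])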
|||
|
|||
|
|||
|
|||
|
|||
Reading MAs |
|||
====================== |
|||
|
|||
To be continued... |
doc/source/doc/gspns.ipynb: @@ -0,0 +1,260 @@ |
|||
{ |
|||
"cells": [ |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"# Generalized Stochastic Petri Nets" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Loading GSPNs\n", |
|||
"\n", |
|||
"[01-gspns.py](https://github.com/moves-rwth/stormpy/blob/master/examples/gspns/01-gspns.py)\n", |
|||
"\n", |
|||
"Generalized stochastic Petri nets can be given either in the PNPRO format or in the PNML format.\n", |
|||
"We start by loading a GSPN stored in the PNML format:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> import stormpy\n", |
|||
">>> import stormpy.gspn\n", |
|||
">>> import stormpy.examples\n", |
|||
">>> import stormpy.examples.files\n", |
|||
"\n", |
|||
">>> import_path = stormpy.examples.files.gspn_pnml_simple\n", |
|||
">>> gspn_parser = stormpy.gspn.GSPNParser()\n", |
|||
">>> gspn = gspn_parser.parse(import_path)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"After loading, we can display some properties of the GSPN:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> print(\"Name of GSPN: {}.\".format(gspn.get_name()))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": {}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> print(\"Number of places: {}.\".format(gspn.get_number_of_places()))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": {}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> print(\"Number of immediate transitions: {}.\".format(gspn.get_number_of_immediate_transitions()))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": {}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> print(\"Number of timed transitions: {}.\".format(gspn.get_number_of_timed_transitions()))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Building GSPNs\n", |
|||
"\n", |
|||
"[02-gspns.py](https://github.com/moves-rwth/stormpy/blob/master/examples/gspns/02-gspns.py)\n", |
|||
"\n", |
|||
"In the following, we describe how to construct GSPNs via the `GSPNBuilder`.\n", |
|||
"First, we create an instance of the `GSPNBuilder` and set the name of the GSPN:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> builder = stormpy.gspn.GSPNBuilder()\n", |
|||
">>> builder.set_name(\"my_gspn\")" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"In the next step, we add an immediate transition to the GSPN.\n", |
|||
"Additionally, we define the position of the transition by setting its layout information:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> it_1 = builder.add_immediate_transition(1, 0.0, \"it_1\")\n", |
|||
">>> it_layout = stormpy.gspn.LayoutInfo(1.5, 2.0)\n", |
|||
">>> builder.set_transition_layout_info(it_1, it_layout)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"We add a timed transition and set its layout information:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> tt_1 = builder.add_timed_transition(0, 0.4, \"tt_1\")\n", |
|||
">>> tt_layout = stormpy.gspn.LayoutInfo(12.5, 2.0)\n", |
|||
">>> builder.set_transition_layout_info(tt_1, tt_layout)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Next, we add two places to the GSPN and set their layouts:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> place_1 = builder.add_place(1, 1, \"place_1\")\n", |
|||
">>> p1_layout = stormpy.gspn.LayoutInfo(6.5, 2.0)\n", |
|||
">>> builder.set_place_layout_info(place_1, p1_layout)\n", |
|||
"\n", |
|||
">>> place_2 = builder.add_place(1, 0, \"place_2\")\n", |
|||
">>> p2_layout = stormpy.gspn.LayoutInfo(18.5, 2.0)\n", |
|||
">>> builder.set_place_layout_info(place_2, p2_layout)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Places and transitions can be linked by input, output and inhibition arcs.\n", |
|||
"We add the arcs of our GSPN as follows:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> builder.add_output_arc(it_1, place_1)\n", |
|||
">>> builder.add_inhibition_arc(place_1, it_1)\n", |
|||
">>> builder.add_input_arc(place_1, tt_1)\n", |
|||
">>> builder.add_output_arc(tt_1, place_2)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"We can now build the GSPN:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> gspn = builder.build_gspn()" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"After building, we export the GSPN.\n", |
|||
"GSPNs can be saved in the PNPRO format via `export_gspn_pnpro_file(path)` and in the PNML format via `export_gspn_pnml_file(path)`.\n", |
|||
"We export the GSPN into the PNPRO format:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> export_path = stormpy.examples.files.gspn_pnpro_simple\n", |
|||
">>> gspn.export_gspn_pnpro_file(export_path)" |
|||
] |
|||
} |
|||
], |
|||
"metadata": { |
|||
"date": 1598178167.1731207, |
|||
"filename": "gspns.rst", |
|||
"kernelspec": { |
|||
"display_name": "Python 3", |
|||
"language": "python", |
|||
"name": "python3" |
|||
}, |
|||
"language_info": { |
|||
"codemirror_mode": { |
|||
"name": "ipython", |
|||
"version": 3 |
|||
}, |
|||
"file_extension": ".py", |
|||
"mimetype": "text/x-python", |
|||
"name": "python", |
|||
"nbconvert_exporter": "python", |
|||
"pygments_lexer": "ipython3", |
|||
"version": "3.8.2" |
|||
}, |
|||
"title": "Generalized Stochastic Petri Nets" |
|||
}, |
|||
"nbformat": 4, |
|||
"nbformat_minor": 4 |
|||
} |
doc/source/doc/gspns.rst: @@ -1,84 +0,0 @@ |
|||
********************************** |
|||
Generalized Stochastic Petri Nets |
|||
********************************** |
|||
|
|||
Loading GSPNs |
|||
============== |
|||
.. seealso:: `01-gspns.py <https://github.com/moves-rwth/stormpy/blob/master/examples/gspns/01-gspns.py>`_ |
|||
|
|||
|
|||
Generalized stochastic Petri nets can be given either in the PNPRO format or in the PNML format. |
|||
We start by loading a GSPN stored in the PNML format:: |
|||
|
|||
>>> import stormpy |
|||
>>> import stormpy.gspn |
|||
>>> import stormpy.examples |
|||
>>> import stormpy.examples.files |
|||
|
|||
>>> import_path = stormpy.examples.files.gspn_pnml_simple |
|||
>>> gspn_parser = stormpy.gspn.GSPNParser() |
|||
>>> gspn = gspn_parser.parse(import_path) |
|||
|
|||
After loading, we can display some properties of the GSPN:: |
|||
|
|||
>>> print("Name of GSPN: {}.".format(gspn.get_name())) |
|||
Name of GSPN: simple_gspn. |
|||
>>> print("Number of places: {}.".format(gspn.get_number_of_places())) |
|||
Number of places: 4. |
|||
>>> print("Number of immediate transitions: {}.".format(gspn.get_number_of_immediate_transitions())) |
|||
Number of immediate transitions: 3. |
|||
>>> print("Number of timed transitions: {}.".format(gspn.get_number_of_timed_transitions())) |
|||
Number of timed transitions: 2. |
|||
|
|||
Building GSPNs |
|||
============================= |
|||
.. seealso:: `02-gspns.py <https://github.com/moves-rwth/stormpy/blob/master/examples/gspns/02-gspns.py>`_ |
|||
|
|||
In the following, we describe how to construct GSPNs via the ``GSPNBuilder``. |
|||
First, we create an instance of the ``GSPNBuilder`` and set the name of the GSPN:: |
|||
|
|||
>>> builder = stormpy.gspn.GSPNBuilder() |
|||
>>> builder.set_name("my_gspn") |
|||
|
|||
In the next step, we add an immediate transition to the GSPN. |
|||
Additionally, we define the position of the transition by setting its layout information:: |
|||
|
|||
>>> it_1 = builder.add_immediate_transition(1, 0.0, "it_1") |
|||
>>> it_layout = stormpy.gspn.LayoutInfo(1.5, 2.0) |
|||
>>> builder.set_transition_layout_info(it_1, it_layout) |
|||
|
|||
We add a timed transition and set its layout information:: |
|||
|
|||
>>> tt_1 = builder.add_timed_transition(0, 0.4, "tt_1") |
|||
>>> tt_layout = stormpy.gspn.LayoutInfo(12.5, 2.0) |
|||
>>> builder.set_transition_layout_info(tt_1, tt_layout) |
|||
|
|||
Next, we add two places to the GSPN and set their layouts:: |
|||
|
|||
>>> place_1 = builder.add_place(1, 1, "place_1") |
|||
>>> p1_layout = stormpy.gspn.LayoutInfo(6.5, 2.0) |
|||
>>> builder.set_place_layout_info(place_1, p1_layout) |
|||
|
|||
>>> place_2 = builder.add_place(1, 0, "place_2") |
|||
>>> p2_layout = stormpy.gspn.LayoutInfo(18.5, 2.0) |
|||
>>> builder.set_place_layout_info(place_2, p2_layout) |
|||
|
|||
Places and transitions can be linked by input, output and inhibition arcs. |
|||
We add the arcs of our GSPN as follows:: |
|||
|
|||
>>> builder.add_output_arc(it_1, place_1) |
|||
>>> builder.add_inhibition_arc(place_1, it_1) |
|||
>>> builder.add_input_arc(place_1, tt_1) |
|||
>>> builder.add_output_arc(tt_1, place_2) |
|||
|
|||
We can now build the GSPN:: |
|||
|
|||
>>> gspn = builder.build_gspn() |
|||
|
|||
After building, we export the GSPN. |
|||
GSPNs can be saved in the PNPRO format via ``export_gspn_pnpro_file(path)`` and in the PNML format via ``export_gspn_pnml_file(path)``. |
|||
We export the GSPN into the PNPRO format:: |
|||
|
|||
>>> export_path = stormpy.examples.files.gspn_pnpro_simple |
|||
>>> gspn.export_gspn_pnpro_file(export_path) |
|||
|
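The PNML export mentioned above works analogously; the file name here is only illustrative::

>>> gspn.export_gspn_pnml_file("my_gspn.pnml")
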
doc/source/doc/models/building_ctmcs.ipynb: @@ -0,0 +1,189 @@ |
|||
{ |
|||
"cells": [ |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"# Continuous-time Markov chains (CTMCs)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Background\n", |
|||
"\n", |
|||
"In this section, we explain how Stormpy can be used to build a simple CTMC.\n", |
|||
"Building CTMCs works similar to building DTMCs as in [Discrete-time Markov chains (DTMCs)](building_dtmcs.ipynb), but additionally every state is equipped with an exit rate.\n", |
|||
"\n", |
|||
"[01-building-ctmcs.py](https://github.com/moves-rwth/stormpy/blob/master/examples/building_ctmcs/01-building-ctmcs.py)\n", |
|||
"\n", |
|||
"First, we import Stormpy:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> import stormpy" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Transition Matrix\n", |
|||
"\n", |
|||
"In this example, we build the transition matrix using a numpy array" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": {}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> import numpy as np\n", |
|||
">>> transitions = np.array([\n", |
|||
"... [0, 1.5, 0, 0],\n", |
|||
"... [3, 0, 1.5, 0],\n", |
|||
"... [0, 3, 0, 1.5],\n", |
|||
"... [0, 0, 3, 0], ], dtype='float64')" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"The following function call returns a sparse matrix with default row groups:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> transition_matrix = stormpy.build_sparse_matrix(transitions)\n", |
|||
">>> print(transition_matrix) " |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Labeling\n", |
|||
"\n", |
|||
"The state labeling is created analogously to the previous example in [building DTMCs](building_dtmcs.ipynb#labeling):" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> state_labeling = stormpy.storage.StateLabeling(4)\n", |
|||
">>> state_labels = {'empty', 'init', 'deadlock', 'full'}\n", |
|||
">>> for label in state_labels:\n", |
|||
"... state_labeling.add_label(label)\n", |
|||
">>> state_labeling.add_label_to_state('init', 0)\n", |
|||
">>> state_labeling.add_label_to_state('empty', 0)\n", |
|||
">>> state_labeling.add_label_to_state('full', 3)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Exit Rates\n", |
|||
"\n", |
|||
"Lastly, we initialize a list to equip every state with an exit rate:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> exit_rates = [1.5, 4.5, 4.5, 3.0]" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Building the Model\n", |
|||
"\n", |
|||
"Now, we can collect all components, including the choice labeling and the exit rates.\n", |
|||
"To let the transition values be interpreted as rates we set rate_transitions to True:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> components = stormpy.SparseModelComponents(transition_matrix=transition_matrix, state_labeling=state_labeling, rate_transitions=True)\n", |
|||
">>> components.exit_rates = exit_rates" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"And finally, we can build the model:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> ctmc = stormpy.storage.SparseCtmc(components)\n", |
|||
">>> print(ctmc) " |
|||
] |
|||
} |
|||
], |
|||
"metadata": { |
|||
"date": 1598178167.1853151, |
|||
"filename": "building_ctmcs.rst", |
|||
"kernelspec": { |
|||
"display_name": "Python 3", |
|||
"language": "python", |
|||
"name": "python3" |
|||
}, |
|||
"language_info": { |
|||
"codemirror_mode": { |
|||
"name": "ipython", |
|||
"version": 3 |
|||
}, |
|||
"file_extension": ".py", |
|||
"mimetype": "text/x-python", |
|||
"name": "python", |
|||
"nbconvert_exporter": "python", |
|||
"pygments_lexer": "ipython3", |
|||
"version": "3.8.2" |
|||
}, |
|||
"title": "Continuous-time Markov chains (CTMCs)" |
|||
}, |
|||
"nbformat": 4, |
|||
"nbformat_minor": 4 |
|||
} |
@ -1,103 +0,0 @@ |
|||
************************************** |
|||
Continuous-time Markov chains (CTMCs) |
|||
************************************** |
|||
|
|||
|
|||
.. check if the following doctest should be run (and hide it in Sphinx) |
|||
>>> # Skip tests if numpy is not available |
|||
>>> import pytest |
|||
>>> try: |
|||
... import numpy as np |
|||
... except ModuleNotFoundError: |
|||
... np = None |
|||
>>> if np is None: |
|||
... pytest.skip("skipping the doctest below since it's not going to work.") |
|||
|
|||
|
|||
Background |
|||
===================== |
|||
|
|||
In this section, we explain how Stormpy can be used to build a simple CTMC. |
|||
Building CTMCs works similarly to building DTMCs as in :doc:`building_dtmcs`, but additionally every state is equipped with an exit rate. |
|||
|
|||
.. seealso:: `01-building-ctmcs.py <https://github.com/moves-rwth/stormpy/blob/master/examples/building_ctmcs/01-building-ctmcs.py>`_ |
|||
|
|||
First, we import Stormpy:: |
|||
|
|||
>>> import stormpy |
|||
|
|||
Transition Matrix |
|||
===================== |
|||
In this example, we build the transition matrix using a numpy array:: |
|||
|
|||
|
|||
|
|||
>>> import numpy as np |
|||
|
|||
>>> transitions = np.array([ |
|||
... [0, 1.5, 0, 0], |
|||
... [3, 0, 1.5, 0], |
|||
... [0, 3, 0, 1.5], |
|||
... [0, 0, 3, 0], ], dtype='float64') |
|||
|
|||
The following function call returns a sparse matrix with default row groups:: |
|||
|
|||
>>> transition_matrix = stormpy.build_sparse_matrix(transitions) |
|||
>>> print(transition_matrix) # doctest: +SKIP |
|||
0 1 2 3 |
|||
---- group 0/3 ---- |
|||
0 ( 0 1.5 0 0 ) 0 |
|||
---- group 1/3 ---- |
|||
1 ( 3 0 1.5 0 ) 1 |
|||
---- group 2/3 ---- |
|||
2 ( 0 3 0 1.5 ) 2 |
|||
---- group 3/3 ---- |
|||
3 ( 0 0 3 0 ) 3 |
|||
0 1 2 3 |
|||
|
|||
|
|||
Labeling |
|||
================ |
|||
The state labeling is created analogously to the previous example in :ref:`building DTMCs<doc/models/building_dtmcs:Labeling>`:: |
|||
|
|||
>>> state_labeling = stormpy.storage.StateLabeling(4) |
|||
>>> state_labels = {'empty', 'init', 'deadlock', 'full'} |
|||
>>> for label in state_labels: |
|||
... state_labeling.add_label(label) |
|||
>>> state_labeling.add_label_to_state('init', 0) |
|||
>>> state_labeling.add_label_to_state('empty', 0) |
|||
>>> state_labeling.add_label_to_state('full', 3) |
|||
|
|||
Exit Rates |
|||
==================== |
|||
Lastly, we initialize a list to equip every state with an exit rate:: |
|||
|
|||
>>> exit_rates = [1.5, 4.5, 4.5, 3.0] |
|||
|
|||
Building the Model |
|||
==================== |
|||
|
|||
Now, we can collect all components, including the exit rates. |
|||
To let the transition values be interpreted as rates, we set `rate_transitions` to `True`:: |
|||
|
|||
>>> components = stormpy.SparseModelComponents(transition_matrix=transition_matrix, state_labeling=state_labeling, rate_transitions=True) |
|||
>>> components.exit_rates = exit_rates |
|||
|
|||
And finally, we can build the model:: |
|||
|
|||
>>> ctmc = stormpy.storage.SparseCtmc(components) |
|||
>>> print(ctmc) # doctest: +SKIP |
|||
-------------------------------------------------------------- |
|||
Model type: CTMC (sparse) |
|||
States: 4 |
|||
Transitions: 6 |
|||
Reward Models: none |
|||
State Labels: 4 labels |
|||
* init -> 1 item(s) |
|||
* empty -> 1 item(s) |
|||
* deadlock -> 0 item(s) |
|||
* full -> 1 item(s) |
|||
Choice Labels: none |
|||
-------------------------------------------------------------- |
|||
|
|||
|
@ -0,0 +1,328 @@ |
|||
{ |
|||
"cells": [ |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"# Discrete-time Markov chains (DTMCs)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Background\n", |
|||
"\n", |
|||
"As described in [Getting Started](../../getting_started.ipynb),\n", |
|||
"Storm can be used to translate a model description e.g. in form of a prism file into a Markov chain.\n", |
|||
"\n", |
|||
"Here, we use Stormpy to create the components for a model and build a DTMC directly from these components without parsing a model description.\n", |
|||
"We consider the previous example of the Knuth-Yao die.\n", |
|||
"\n", |
|||
"[01-building-dtmcs.py](https://github.com/moves-rwth/stormpy/blob/master/examples/building_dtmcs/01-building-dtmcs.py)\n", |
|||
"\n", |
|||
"In the following we create the transition matrix, the state labeling and the reward models of a DTMC.\n", |
|||
"First, we import stormpy:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> import stormpy" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Transition Matrix\n", |
|||
"\n", |
|||
"We begin by creating the matrix representing the transitions in the model in terms of probabilities.\n", |
|||
"For constructing the transition matrix, we use the SparseMatrixBuilder:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> builder = stormpy.SparseMatrixBuilder(rows = 0, columns = 0, entries = 0, force_dimensions = False, has_custom_row_grouping = False)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Here, we start with an empty matrix to later insert more entries.\n", |
|||
"If the number of rows, columns and entries is known, the matrix can be constructed using these values.\n", |
|||
"\n", |
|||
"For DTMCs each state has at most one outgoing probability distribution.\n", |
|||
"Thus, we create a matrix with trivial row grouping where each group contains one row representing the state action.\n", |
|||
"In [Markov decision processes (MDPs)](building_mdps.ipynb) we will revisit the example of the die, but extend the model with nondeterministic choice.\n", |
|||
"\n", |
|||
"We specify the transitions of the model by adding values to the matrix where the column represents the target state.\n", |
|||
"All transitions are equipped with a probability defined by the value:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> builder.add_next_value(row = 0, column = 1, value = 0.5)\n", |
|||
">>> builder.add_next_value(0, 2, 0.5)\n", |
|||
">>> builder.add_next_value(1, 3, 0.5)\n", |
|||
">>> builder.add_next_value(1, 4, 0.5)\n", |
|||
">>> builder.add_next_value(2, 5, 0.5)\n", |
|||
">>> builder.add_next_value(2, 6, 0.5)\n", |
|||
">>> builder.add_next_value(3, 7, 0.5)\n", |
|||
">>> builder.add_next_value(3, 1, 0.5)\n", |
|||
">>> builder.add_next_value(4, 8, 0.5)\n", |
|||
">>> builder.add_next_value(4, 9, 0.5)\n", |
|||
">>> builder.add_next_value(5, 10, 0.5)\n", |
|||
">>> builder.add_next_value(5, 11, 0.5)\n", |
|||
">>> builder.add_next_value(6, 2, 0.5)\n", |
|||
">>> builder.add_next_value(6, 12, 0.5)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Lastly, we add a self-loop with probability one to the final states:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> for s in range(7,13):\n", |
|||
"... builder.add_next_value(s, s, 1)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Finally, we can build the matrix:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> transition_matrix = builder.build()" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"It should be noted that entries can only be inserted in ascending order, i.e. row by row and column by column.\n", |
|||
"Stormpy provides the possibility to build a sparse matrix using the numpy library ([https://numpy.org/](https://numpy.org/) )\n", |
|||
"Instead of using the SparseMatrixBuilder, a sparse matrix can be build from a numpy array via the method stormpy.build_sparse_matrix.\n", |
|||
"An example is given in [building CTMCs](building_ctmcs.ipynb#transition-matrix)." |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Labeling\n", |
|||
"\n", |
|||
"States can be labeled with sets of propositions, for example state 0 can be labeled with “init”.\n", |
|||
"In order to specify the state labeling we create an empty labeling for the given number of states and add the labels to the labeling:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> state_labeling = stormpy.storage.StateLabeling(13)\n", |
|||
"\n", |
|||
">>> labels = {'init', 'one', 'two', 'three', 'four', 'five', 'six', 'done', 'deadlock'}\n", |
|||
">>> for label in labels:\n", |
|||
"... state_labeling.add_label(label)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Labels can be asociated with states. As an example, we label the state 0 with “init”:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> state_labeling.add_label_to_state('init', 0)\n", |
|||
">>> print(state_labeling.get_states('init'))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Next, we set the associations between the remaining labels and states.:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> state_labeling.add_label_to_state('one', 7)\n", |
|||
">>> state_labeling.add_label_to_state('two', 8)\n", |
|||
">>> state_labeling.add_label_to_state('three', 9)\n", |
|||
">>> state_labeling.add_label_to_state('four', 10)\n", |
|||
">>> state_labeling.add_label_to_state('five', 11)\n", |
|||
">>> state_labeling.add_label_to_state('six', 12)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"To set the same label for multiple states, we can use a BitVector representation for the set of states:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> state_labeling.set_states('done', stormpy.BitVector(13, [7, 8, 9, 10, 11, 12]))\n", |
|||
">>> print(state_labeling) " |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Defining a choice labeling is possible in a similar way." |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Reward Models\n", |
|||
"\n", |
|||
"Stormpy supports multiple reward models such as state rewards, state-action rewards and as transition rewards.\n", |
|||
"In this example, the actions of states which satisfy s<7 acquire a reward of 1.0.\n", |
|||
"\n", |
|||
"The state-action rewards are represented by a vector, which is associated to a reward model named “coin_flips”:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> reward_models = {}\n", |
|||
">>> action_reward = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]\n", |
|||
">>> reward_models['coin_flips'] = stormpy.SparseRewardModel(optional_state_action_reward_vector = action_reward)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Building the Model\n", |
|||
"\n", |
|||
"Next, we collect all components:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> components = stormpy.SparseModelComponents(transition_matrix=transition_matrix, state_labeling=state_labeling, reward_models=reward_models)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"And finally, we can build the DTMC:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> dtmc = stormpy.storage.SparseDtmc(components)\n", |
|||
">>> print(dtmc) " |
|||
] |
|||
} |
|||
], |
|||
"metadata": { |
|||
"date": 1598178167.203723, |
|||
"filename": "building_dtmcs.rst", |
|||
"kernelspec": { |
|||
"display_name": "Python 3", |
|||
"language": "python", |
|||
"name": "python3" |
|||
}, |
|||
"language_info": { |
|||
"codemirror_mode": { |
|||
"name": "ipython", |
|||
"version": 3 |
|||
}, |
|||
"file_extension": ".py", |
|||
"mimetype": "text/x-python", |
|||
"name": "python", |
|||
"nbconvert_exporter": "python", |
|||
"pygments_lexer": "ipython3", |
|||
"version": "3.8.2" |
|||
}, |
|||
"title": "Discrete-time Markov chains (DTMCs)" |
|||
}, |
|||
"nbformat": 4, |
|||
"nbformat_minor": 4 |
|||
} |
@ -1,151 +0,0 @@ |
|||
************************************ |
|||
Discrete-time Markov chains (DTMCs) |
|||
************************************ |
|||
|
|||
|
|||
Background |
|||
===================== |
|||
As described in :doc:`../../getting_started`, |
|||
Storm can be used to translate a model description, e.g. in the form of a prism file, into a Markov chain. |
|||
|
|||
Here, we use Stormpy to create the components for a model and build a DTMC directly from these components without parsing a model description. |
|||
We consider the previous example of the Knuth-Yao die. |
|||
|
|||
.. seealso:: `01-building-dtmcs.py <https://github.com/moves-rwth/stormpy/blob/master/examples/building_dtmcs/01-building-dtmcs.py>`_ |
|||
|
|||
In the following we create the transition matrix, the state labeling and the reward models of a DTMC. |
|||
First, we import stormpy:: |
|||
|
|||
>>> import stormpy |
|||
|
|||
Transition Matrix |
|||
===================== |
|||
We begin by creating the matrix representing the transitions in the model in terms of probabilities. |
|||
For constructing the transition matrix, we use the SparseMatrixBuilder:: |
|||
|
|||
>>> builder = stormpy.SparseMatrixBuilder(rows = 0, columns = 0, entries = 0, force_dimensions = False, has_custom_row_grouping = False) |
|||
|
|||
Here, we start with an empty matrix to later insert more entries. |
|||
If the number of rows, columns and entries is known, the matrix can be constructed using these values. |
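As a minimal sketch, a pre-sized builder for the die model could look as follows (the dimensions match the model built below; passing `force_dimensions=True` to enforce them is an assumption here)::

>>> # Hypothetical: pre-size the builder when 13 states and 20 entries are known upfront
>>> sized_builder = stormpy.SparseMatrixBuilder(rows = 13, columns = 13, entries = 20, force_dimensions = True, has_custom_row_grouping = False)  # doctest: +SKIP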
|||
|
|||
For DTMCs each state has at most one outgoing probability distribution. |
|||
Thus, we create a matrix with trivial row grouping where each group contains one row representing the state action. |
|||
In :doc:`building_mdps` we will revisit the example of the die, but extend the model with nondeterministic choice. |
|||
|
|||
We specify the transitions of the model by adding values to the matrix where the column represents the target state. |
|||
All transitions are equipped with a probability defined by the value:: |
|||
|
|||
>>> builder.add_next_value(row = 0, column = 1, value = 0.5) |
|||
>>> builder.add_next_value(0, 2, 0.5) |
|||
>>> builder.add_next_value(1, 3, 0.5) |
|||
>>> builder.add_next_value(1, 4, 0.5) |
|||
>>> builder.add_next_value(2, 5, 0.5) |
|||
>>> builder.add_next_value(2, 6, 0.5) |
|||
>>> builder.add_next_value(3, 7, 0.5) |
|||
>>> builder.add_next_value(3, 1, 0.5) |
|||
>>> builder.add_next_value(4, 8, 0.5) |
|||
>>> builder.add_next_value(4, 9, 0.5) |
|||
>>> builder.add_next_value(5, 10, 0.5) |
|||
>>> builder.add_next_value(5, 11, 0.5) |
|||
>>> builder.add_next_value(6, 2, 0.5) |
|||
>>> builder.add_next_value(6, 12, 0.5) |
|||
|
|||
Lastly, we add a self-loop with probability one to the final states:: |
|||
|
|||
>>> for s in range(7,13): |
|||
... builder.add_next_value(s, s, 1) |
|||
|
|||
|
|||
Finally, we can build the matrix:: |
|||
|
|||
>>> transition_matrix = builder.build() |
|||
|
|||
It should be noted that entries can only be inserted in ascending order, i.e. row by row and column by column. |
|||
Stormpy provides the possibility to build a sparse matrix using the numpy library (https://numpy.org/). |
|||
Instead of using the SparseMatrixBuilder, a sparse matrix can be built from a numpy array via the method `stormpy.build_sparse_matrix`. |
|||
An example is given in :ref:`building CTMCs <doc/models/building_ctmcs:Transition Matrix>`. |
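As a small sketch of that route (the matrix values below are made up for illustration, and the snippet assumes numpy is installed)::

>>> import numpy as np  # doctest: +SKIP
>>> dense = np.array([[0, 1.0], [1.0, 0]], dtype='float64')  # doctest: +SKIP
>>> small_matrix = stormpy.build_sparse_matrix(dense)  # doctest: +SKIP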
|||
|
|||
Labeling |
|||
==================== |
|||
|
|||
States can be labeled with sets of propositions; for example, state 0 can be labeled with "init". |
|||
In order to specify the state labeling we create an empty labeling for the given number of states and add the labels to the labeling:: |
|||
|
|||
>>> state_labeling = stormpy.storage.StateLabeling(13) |
|||
|
|||
>>> labels = {'init', 'one', 'two', 'three', 'four', 'five', 'six', 'done', 'deadlock'} |
|||
>>> for label in labels: |
|||
... state_labeling.add_label(label) |
|||
|
|||
|
|||
Labels can be associated with states. As an example, we label state 0 with "init":: |
|||
|
|||
>>> state_labeling.add_label_to_state('init', 0) |
|||
>>> print(state_labeling.get_states('init')) |
|||
bit vector(1/13) [0 ] |
|||
|
|||
Next, we set the associations between the remaining labels and states:: |
|||
|
|||
>>> state_labeling.add_label_to_state('one', 7) |
|||
>>> state_labeling.add_label_to_state('two', 8) |
|||
>>> state_labeling.add_label_to_state('three', 9) |
|||
>>> state_labeling.add_label_to_state('four', 10) |
|||
>>> state_labeling.add_label_to_state('five', 11) |
|||
>>> state_labeling.add_label_to_state('six', 12) |
|||
|
|||
To set the same label for multiple states, we can use a BitVector representation for the set of states:: |
|||
|
|||
>>> state_labeling.set_states('done', stormpy.BitVector(13, [7, 8, 9, 10, 11, 12])) |
|||
>>> print(state_labeling) # doctest: +SKIP |
|||
9 labels |
|||
* one -> 1 item(s) |
|||
* four -> 1 item(s) |
|||
* done -> 6 item(s) |
|||
* three -> 1 item(s) |
|||
* init -> 1 item(s) |
|||
* two -> 1 item(s) |
|||
* six -> 1 item(s) |
|||
* deadlock -> 0 item(s) |
|||
* five -> 1 item(s) |
|||
|
|||
Defining a choice labeling is possible in a similar way. |
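A minimal sketch of such a choice labeling (the label name 'flip' is purely illustrative; the DTMC above has 13 choices, one per state)::

>>> choice_labeling = stormpy.storage.ChoiceLabeling(13)  # doctest: +SKIP
>>> choice_labeling.add_label('flip')  # doctest: +SKIP
>>> choice_labeling.add_label_to_choice('flip', 0)  # doctest: +SKIP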
|||
|
|||
Reward Models |
|||
==================== |
|||
Stormpy supports multiple reward models such as state rewards, state-action rewards, and transition rewards. |
|||
In this example, the actions of states which satisfy `s<7` acquire a reward of 1.0. |
|||
|
|||
The state-action rewards are represented by a vector, which is associated with a reward model named "coin_flips":: |
|||
|
|||
>>> reward_models = {} |
|||
>>> action_reward = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] |
|||
>>> reward_models['coin_flips'] = stormpy.SparseRewardModel(optional_state_action_reward_vector = action_reward) |
|||
|
|||
Building the Model |
|||
==================== |
|||
|
|||
Next, we collect all components:: |
|||
|
|||
>>> components = stormpy.SparseModelComponents(transition_matrix=transition_matrix, state_labeling=state_labeling, reward_models=reward_models) |
|||
|
|||
And finally, we can build the DTMC:: |
|||
|
|||
>>> dtmc = stormpy.storage.SparseDtmc(components) |
|||
>>> print(dtmc) # doctest: +SKIP |
|||
-------------------------------------------------------------- |
|||
Model type: DTMC (sparse) |
|||
States: 13 |
|||
Transitions: 20 |
|||
Reward Models: coin_flips |
|||
State Labels: 9 labels |
|||
* three -> 1 item(s) |
|||
* six -> 1 item(s) |
|||
* done -> 6 item(s) |
|||
* four -> 1 item(s) |
|||
* five -> 1 item(s) |
|||
* deadlock -> 0 item(s) |
|||
* init -> 1 item(s) |
|||
* two -> 1 item(s) |
|||
* one -> 1 item(s) |
|||
Choice Labels: none |
|||
-------------------------------------------------------------- |
@ -0,0 +1,211 @@ |
|||
{ |
|||
"cells": [ |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"# Markov automata (MAs)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Background\n", |
|||
"\n", |
|||
"We already saw the process of building [CTMCs](building_ctmcs.ipynb) and [MDPs](building_mdps.ipynb) via Stormpy.\n", |
|||
"\n", |
|||
"Markov automata use states that are probabilistic, i.e. like the states of an MDP, or Markovian, i.e. like the states of a CTMC.\n", |
|||
"\n", |
|||
"In this section, we build a small MA with five states from which the first four are Markovian.\n", |
|||
"Since we covered labeling and exit rates already in the previous examples we omit the description of these components.\n", |
|||
"The full example can be found here:\n", |
|||
"\n", |
|||
"[01-building-mas.py](https://github.com/moves-rwth/stormpy/blob/master/examples/building_mas/01-building-mas.py)\n", |
|||
"\n", |
|||
"First, we import Stormpy:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> import stormpy" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Transition Matrix\n", |
|||
"\n", |
|||
"For [building MDPS](building_mdps.ipynb#transition-matrix), we used the SparseMatrixBuilder to create a matrix with a custom row grouping.\n", |
|||
"In this example, we use the numpy library.\n", |
|||
"\n", |
|||
"In the beginning, we create a numpy array that will be used to build the transition matrix of our model.:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> import numpy as np\n", |
|||
">>> transitions = np.array([\n", |
|||
"... [0, 1, 0, 0, 0],\n", |
|||
"... [0.8, 0, 0.2, 0, 0],\n", |
|||
"... [0.9, 0, 0, 0.1, 0],\n", |
|||
"... [0, 0, 0, 0, 1],\n", |
|||
"... [0, 0, 0, 1, 0],\n", |
|||
"... [0, 0, 0, 0, 1]], dtype='float64')" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"When building the matrix we define a custom row grouping by passing a list containing the starting row of each row group in ascending order:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> transition_matrix = stormpy.build_sparse_matrix(transitions, [0, 2, 3, 4, 5])\n", |
|||
">>> print(transition_matrix) " |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": { |
|||
"nbsphinx": "hidden" |
|||
}, |
|||
"source": [ |
|||
"## Labeling and Exit Rates" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"nbsphinx": "hidden" |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
"\n", |
|||
">>> state_labeling = stormpy.storage.StateLabeling(5)\n", |
|||
">>> state_labels = {'init', 'deadlock'}\n", |
|||
">>> for label in state_labels:\n", |
|||
"... state_labeling.add_label(label)\n", |
|||
">>> state_labeling.add_label_to_state('init', 0)\n", |
|||
"\n", |
|||
">>> choice_labeling = stormpy.storage.ChoiceLabeling(6)\n", |
|||
">>> choice_labels = {'alpha', 'beta'}\n", |
|||
">>> for label in choice_labels:\n", |
|||
"... choice_labeling.add_label(label)\n", |
|||
">>> choice_labeling.add_label_to_choice('alpha', 0)\n", |
|||
">>> choice_labeling.add_label_to_choice('beta', 1)\n", |
|||
"\n", |
|||
">>> exit_rates = [0.0, 10.0, 12.0, 1.0, 1.0]" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Markovian States\n", |
|||
"\n", |
|||
"In order to define which states have only one probability distribution over the successor states,\n", |
|||
"we build a BitVector that contains the respective Markovian states:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> markovian_states = stormpy.BitVector(5, [1, 2, 3, 4])" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Building the Model\n", |
|||
"\n", |
|||
"Now, we can collect all components:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> components = stormpy.SparseModelComponents(transition_matrix=transition_matrix, state_labeling=state_labeling, markovian_states=markovian_states)\n", |
|||
">>> components.choice_labeling = choice_labeling\n", |
|||
">>> components.exit_rates = exit_rates" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Finally, we can build the model:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> ma = stormpy.storage.SparseMA(components)\n", |
|||
">>> print(ma) " |
|||
] |
|||
} |
|||
], |
|||
"metadata": { |
|||
"celltoolbar": "Edit Metadata", |
|||
"date": 1598178167.2185411, |
|||
"filename": "building_mas.rst", |
|||
"kernelspec": { |
|||
"display_name": "Python 3", |
|||
"language": "python", |
|||
"name": "python3" |
|||
}, |
|||
"language_info": { |
|||
"codemirror_mode": { |
|||
"name": "ipython", |
|||
"version": 3 |
|||
}, |
|||
"file_extension": ".py", |
|||
"mimetype": "text/x-python", |
|||
"name": "python", |
|||
"nbconvert_exporter": "python", |
|||
"pygments_lexer": "ipython3", |
|||
"version": "3.8.2" |
|||
}, |
|||
"title": "Markov automata (MAs)" |
|||
}, |
|||
"nbformat": 4, |
|||
"nbformat_minor": 4 |
|||
} |
@ -1,120 +0,0 @@ |
|||
************************************** |
|||
Markov automata (MAs) |
|||
************************************** |
|||
|
|||
|
|||
.. check if the following doctest should be run (and hide it in Sphinx) |
|||
>>> # Skip tests if numpy is not available |
|||
>>> import pytest |
|||
>>> try: |
|||
... import numpy as np |
|||
... except ModuleNotFoundError: |
|||
... np = None |
|||
>>> if np is None: |
|||
... pytest.skip("skipping the doctest below since it's not going to work.") |
|||
|
|||
|
|||
Background |
|||
===================== |
|||
|
|||
We already saw the process of building :doc:`CTMCs <building_ctmcs>` and :doc:`MDPs <building_mdps>` via Stormpy. |
|||
|
|||
Markov automata use states that are probabilistic, i.e. like the states of an MDP, or Markovian, i.e. like the states of a CTMC. |
|||
|
|||
In this section, we build a small MA with five states, of which the first four are Markovian. |
|||
Since we covered labeling and exit rates already in the previous examples, we omit the description of these components. |
|||
The full example can be found here: |
|||
|
|||
.. seealso:: `01-building-mas.py <https://github.com/moves-rwth/stormpy/blob/master/examples/building_mas/01-building-mas.py>`_ |
|||
|
|||
First, we import Stormpy:: |
|||
|
|||
>>> import stormpy |
|||
|
|||
Transition Matrix |
|||
================== |
|||
For :ref:`building MDPs <doc/models/building_mdps:Transition Matrix>`, we used the `SparseMatrixBuilder` to create a matrix with a custom row grouping. |
|||
In this example, we use the numpy library. |
|||
|
|||
In the beginning, we create a numpy array that will be used to build the transition matrix of our model:: |
|||
|
|||
>>> import numpy as np |
|||
>>> transitions = np.array([ |
|||
... [0, 1, 0, 0, 0], |
|||
... [0.8, 0, 0.2, 0, 0], |
|||
... [0.9, 0, 0, 0.1, 0], |
|||
... [0, 0, 0, 0, 1], |
|||
... [0, 0, 0, 1, 0], |
|||
... [0, 0, 0, 0, 1]], dtype='float64') |
|||
|
|||
When building the matrix we define a custom row grouping by passing a list containing the starting row of each row group in ascending order:: |
|||
|
|||
>>> transition_matrix = stormpy.build_sparse_matrix(transitions, [0, 2, 3, 4, 5]) |
|||
>>> print(transition_matrix) # doctest: +SKIP |
|||
|
|||
0 1 2 3 4 |
|||
---- group 0/4 ---- |
|||
0 ( 0 1 0 0 0 ) 0 |
|||
1 ( 0.8 0 0.2 0 0 ) 1 |
|||
---- group 1/4 ---- |
|||
2 ( 0.9 0 0 0.1 0 ) 2 |
|||
---- group 2/4 ---- |
|||
3 ( 0 0 0 0 1 ) 3 |
|||
---- group 3/4 ---- |
|||
4 ( 0 0 0 1 0 ) 4 |
|||
---- group 4/4 ---- |
|||
5 ( 0 0 0 0 1 ) 5 |
|||
0 1 2 3 4 |
|||
|
|||
|
|||
Markovian States |
|||
================== |
|||
In order to define which states have only one probability distribution over the successor states, |
|||
we build a BitVector that contains the respective Markovian states:: |
|||
|
|||
>>> markovian_states = stormpy.BitVector(5, [1, 2, 3, 4]) |
|||
|
|||
Building the Model |
|||
==================== |
|||
|
|||
.. testsetup:: |
|||
# Not displayed in documentation but needed for doctests |
|||
|
|||
>>> state_labeling = stormpy.storage.StateLabeling(5) |
|||
>>> state_labels = {'init', 'deadlock'} |
|||
>>> for label in state_labels: |
|||
... state_labeling.add_label(label) |
|||
>>> state_labeling.add_label_to_state('init', 0) |
|||
>>> choice_labeling = stormpy.storage.ChoiceLabeling(6) |
|||
>>> choice_labels = {'alpha', 'beta'} |
|||
>>> for label in choice_labels: |
|||
... choice_labeling.add_label(label) |
|||
>>> choice_labeling.add_label_to_choice('alpha', 0) |
|||
>>> choice_labeling.add_label_to_choice('beta', 1) |
|||
>>> exit_rates = [0.0, 10.0, 12.0, 1.0, 1.0] |
|||
|
|||
Now, we can collect all components:: |
|||
|
|||
>>> components = stormpy.SparseModelComponents(transition_matrix=transition_matrix, state_labeling=state_labeling, markovian_states=markovian_states) |
|||
>>> components.choice_labeling = choice_labeling |
|||
>>> components.exit_rates = exit_rates |
|||
|
|||
Finally, we can build the model:: |
|||
|
|||
>>> ma = stormpy.storage.SparseMA(components) |
|||
>>> print(ma) # doctest: +SKIP |
|||
-------------------------------------------------------------- |
|||
Model type: Markov Automaton (sparse) |
|||
States: 5 |
|||
Transitions: 8 |
|||
Choices: 6 |
|||
Markovian St.: 4 |
|||
Max. Rate.: 12 |
|||
Reward Models: none |
|||
State Labels: 2 labels |
|||
* deadlock -> 0 item(s) |
|||
* init -> 1 item(s) |
|||
Choice Labels: 2 labels |
|||
* alpha -> 1 item(s) |
|||
* beta -> 1 item(s) |
|||
-------------------------------------------------------------- |
@ -0,0 +1,309 @@ |
|||
{ |
|||
"cells": [ |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"# Markov decision processes (MDPs)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Background\n", |
|||
"\n", |
|||
"In [Discrete-time Markov chains (DTMCs)](building_dtmcs.ipynb) we modelled Knuth-Yao’s model of a fair die by the means of a DTMC.\n", |
|||
"In the following we extend this model with nondeterministic choice by building a Markov decision process.\n", |
|||
"\n", |
|||
"[01-building-mdps.py](https://github.com/moves-rwth/stormpy/blob/master/examples/building_mdps/01-building-mdps.py)\n", |
|||
"\n", |
|||
"First, we import Stormpy:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> import stormpy" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Transition Matrix\n", |
|||
"\n", |
|||
"Since we want to build a nondeterminstic model, we create a transition matrix with a custom row group for each state:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> builder = stormpy.SparseMatrixBuilder(rows=0, columns=0, entries=0, force_dimensions=False, has_custom_row_grouping=True, row_groups=0)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"We need more than one row for the transitions starting in state 0 because a nondeterministic choice over the actions is available.\n", |
|||
"Therefore, we start a new group that will contain the rows representing actions of state 0.\n", |
|||
"Note that the row group needs to be added before any entries are added to the group:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> builder.new_row_group(0)\n", |
|||
">>> builder.add_next_value(0, 1, 0.5)\n", |
|||
">>> builder.add_next_value(0, 2, 0.5)\n", |
|||
">>> builder.add_next_value(1, 1, 0.2)\n", |
|||
">>> builder.add_next_value(1, 2, 0.8)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"In this example, we have two nondeterministic choices in state 0.\n", |
|||
"With choice 0 we have probability 0.5 to got to state 1 and probability 0.5 to got to state 2.\n", |
|||
"With choice 1 we got to state 1 with probability 0.2 and go to state 2 with probability 0.8.\n", |
|||
"\n", |
|||
"For the remaining states, we need to specify the starting rows of each row group:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> builder.new_row_group(2)\n", |
|||
">>> builder.add_next_value(2, 3, 0.5)\n", |
|||
">>> builder.add_next_value(2, 4, 0.5)\n", |
|||
">>> builder.new_row_group(3)\n", |
|||
">>> builder.add_next_value(3, 5, 0.5)\n", |
|||
">>> builder.add_next_value(3, 6, 0.5)\n", |
|||
">>> builder.new_row_group(4)\n", |
|||
">>> builder.add_next_value(4, 7, 0.5)\n", |
|||
">>> builder.add_next_value(4, 1, 0.5)\n", |
|||
">>> builder.new_row_group(5)\n", |
|||
">>> builder.add_next_value(5, 8, 0.5)\n", |
|||
">>> builder.add_next_value(5, 9, 0.5)\n", |
|||
">>> builder.new_row_group(6)\n", |
|||
">>> builder.add_next_value(6, 10, 0.5)\n", |
|||
">>> builder.add_next_value(6, 11, 0.5)\n", |
|||
">>> builder.new_row_group(7)\n", |
|||
">>> builder.add_next_value(7, 2, 0.5)\n", |
|||
">>> builder.add_next_value(7, 12, 0.5)\n", |
|||
"\n", |
|||
">>> for s in range(8, 14):\n", |
|||
"... builder.new_row_group(s)\n", |
|||
"... builder.add_next_value(s, s - 1, 1)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Finally, we build the transition matrix:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> transition_matrix = builder.build()" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Labeling\n", |
|||
"\n", |
|||
"We have seen the construction of a state labeling in previous examples. Therefore we omit the description here\n", |
|||
"Instead, we focus on the choices.\n", |
|||
"Since in state 0 a nondeterministic choice over two actions is available, the number of choices is 14.\n", |
|||
"To distinguish those we can define a choice labeling:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"nbsphinx": "hidden" |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> state_labeling = stormpy.storage.StateLabeling(13)\n", |
|||
">>> labels = {'init', 'one', 'two', 'three', 'four', 'five', 'six', 'done', 'deadlock'}\n", |
|||
">>> for label in labels:\n", |
|||
"... state_labeling.add_label(label)\n", |
|||
"\n", |
|||
">>> state_labeling.add_label_to_state('init', 0)\n", |
|||
">>> state_labeling.add_label_to_state('one', 7)\n", |
|||
">>> state_labeling.add_label_to_state('two', 8)\n", |
|||
">>> state_labeling.add_label_to_state('three', 9)\n", |
|||
">>> state_labeling.add_label_to_state('four', 10)\n", |
|||
">>> state_labeling.add_label_to_state('five', 11)\n", |
|||
">>> state_labeling.add_label_to_state('six', 12)\n", |
|||
">>> state_labeling.set_states('done', stormpy.BitVector(13, [7, 8, 9, 10, 11, 12]))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> choice_labeling = stormpy.storage.ChoiceLabeling(14)\n", |
|||
">>> choice_labels = {'a', 'b'}\n", |
|||
"\n", |
|||
">>> for label in choice_labels:\n", |
|||
"... choice_labeling.add_label(label)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"We assign the label ‘a’ to the first action of state 0 and ‘b’ to the second.\n", |
|||
"Recall that those actions where defined in row one and two of the transition matrix respectively:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> choice_labeling.add_label_to_choice('a', 0)\n", |
|||
">>> choice_labeling.add_label_to_choice('b', 1)\n", |
|||
">>> print(choice_labeling) " |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Reward models\n", |
|||
"\n", |
|||
"In this reward model the length of the action rewards coincides with the number of choices:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> reward_models = {}\n", |
|||
">>> action_reward = [0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]\n", |
|||
">>> reward_models['coin_flips'] = stormpy.SparseRewardModel(optional_state_action_reward_vector=action_reward)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Building the Model\n", |
|||
"\n", |
|||
"We collect the components:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> components = stormpy.SparseModelComponents(transition_matrix=transition_matrix, state_labeling=state_labeling, reward_models=reward_models, rate_transitions=False)\n", |
|||
">>> components.choice_labeling = choice_labeling" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"We build the model:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> mdp = stormpy.storage.SparseMdp(components)\n", |
|||
">>> print(mdp) " |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Partially observable Markov decision process (POMDPs)\n", |
|||
"\n", |
|||
"To build a partially observable Markov decision process (POMDP),\n", |
|||
"components.observations can be set to a list of numbers that defines the status of the observables in each state." |
|||
] |
|||
} |
|||
], |
|||
"metadata": { |
|||
"celltoolbar": "Edit Metadata", |
|||
"date": 1598178167.234528, |
|||
"filename": "building_mdps.rst", |
|||
"kernelspec": { |
|||
"display_name": "Python 3", |
|||
"language": "python", |
|||
"name": "python3" |
|||
}, |
|||
"language_info": { |
|||
"codemirror_mode": { |
|||
"name": "ipython", |
|||
"version": 3 |
|||
}, |
|||
"file_extension": ".py", |
|||
"mimetype": "text/x-python", |
|||
"name": "python", |
|||
"nbconvert_exporter": "python", |
|||
"pygments_lexer": "ipython3", |
|||
"version": "3.8.2" |
|||
}, |
|||
"title": "Markov decision processes (MDPs)" |
|||
}, |
|||
"nbformat": 4, |
|||
"nbformat_minor": 4 |
|||
} |
@ -1,151 +0,0 @@ |
|||
*********************************************** |
|||
Markov decision processes (MDPs) |
|||
*********************************************** |
|||
|
|||
Background |
|||
===================== |
|||
|
|||
In :doc:`building_dtmcs`, we modelled Knuth-Yao's model of a fair die by means of a DTMC. |
|||
In the following we extend this model with nondeterministic choice by building a Markov decision process. |
|||
|
|||
.. seealso:: `01-building-mdps.py <https://github.com/moves-rwth/stormpy/blob/master/examples/building_mdps/01-building-mdps.py>`_ |
|||
|
|||
First, we import Stormpy:: |
|||
|
|||
>>> import stormpy |
|||
|
|||
Transition Matrix |
|||
===================== |
|||
Since we want to build a nondeterministic model, we create a transition matrix with a custom row group for each state:: |
|||
|
|||
>>> builder = stormpy.SparseMatrixBuilder(rows=0, columns=0, entries=0, force_dimensions=False, has_custom_row_grouping=True, row_groups=0) |
|||
|
|||
We need more than one row for the transitions starting in state 0 because a nondeterministic choice over the actions is available. |
|||
Therefore, we start a new group that will contain the rows representing actions of state 0. |
|||
Note that the row group needs to be added before any entries are added to the group:: |
|||
|
|||
>>> builder.new_row_group(0) |
|||
>>> builder.add_next_value(0, 1, 0.5) |
|||
>>> builder.add_next_value(0, 2, 0.5) |
|||
>>> builder.add_next_value(1, 1, 0.2) |
|||
>>> builder.add_next_value(1, 2, 0.8) |
|||
|
|||
In this example, we have two nondeterministic choices in state 0. |
|||
With choice `0`, we go to state 1 with probability 0.5 and to state 2 with probability 0.5. |
|||
With choice `1`, we go to state 1 with probability 0.2 and to state 2 with probability 0.8. |
|||
|
|||
For the remaining states, we need to specify the starting rows of each row group:: |
|||
|
|||
>>> builder.new_row_group(2) |
|||
>>> builder.add_next_value(2, 3, 0.5) |
|||
>>> builder.add_next_value(2, 4, 0.5) |
|||
>>> builder.new_row_group(3) |
|||
>>> builder.add_next_value(3, 5, 0.5) |
|||
>>> builder.add_next_value(3, 6, 0.5) |
|||
>>> builder.new_row_group(4) |
|||
>>> builder.add_next_value(4, 7, 0.5) |
|||
>>> builder.add_next_value(4, 1, 0.5) |
|||
>>> builder.new_row_group(5) |
|||
>>> builder.add_next_value(5, 8, 0.5) |
|||
>>> builder.add_next_value(5, 9, 0.5) |
|||
>>> builder.new_row_group(6) |
|||
>>> builder.add_next_value(6, 10, 0.5) |
|||
>>> builder.add_next_value(6, 11, 0.5) |
|||
>>> builder.new_row_group(7) |
|||
>>> builder.add_next_value(7, 2, 0.5) |
|||
>>> builder.add_next_value(7, 12, 0.5) |
|||
|
|||
>>> for s in range(8, 14): |
|||
... builder.new_row_group(s) |
|||
... builder.add_next_value(s, s - 1, 1) |
|||
|
|||
Finally, we build the transition matrix:: |
|||
|
|||
>>> transition_matrix = builder.build() |
|||
|
|||
Labeling |
|||
================ |
|||
We have seen the construction of a state labeling in previous examples. Therefore we omit the description here. |
|||
|
|||
Instead, we focus on the choices. |
|||
Since in state 0 a nondeterministic choice over two actions is available, the number of choices is 14. |
|||
To distinguish those we can define a choice labeling:: |
|||
|
|||
>>> choice_labeling = stormpy.storage.ChoiceLabeling(14) |
|||
>>> choice_labels = {'a', 'b'} |
|||
|
|||
>>> for label in choice_labels: |
|||
... choice_labeling.add_label(label) |
|||
|
|||
We assign the label 'a' to the first action of state 0 and 'b' to the second. |
|||
Recall that those actions were defined in rows one and two of the transition matrix, respectively:: |
|||
|
|||
>>> choice_labeling.add_label_to_choice('a', 0) |
|||
>>> choice_labeling.add_label_to_choice('b', 1) |
|||
>>> print(choice_labeling) # doctest: +SKIP |
|||
Choice 2 labels |
|||
* a -> 1 item(s) |
|||
* b -> 1 item(s) |
|||
|
|||
|
|||
Reward models |
|||
================== |
|||
|
|||
In this reward model the length of the action rewards coincides with the number of choices:: |
|||
|
|||
>>> reward_models = {} |
|||
>>> action_reward = [0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] |
|||
>>> reward_models['coin_flips'] = stormpy.SparseRewardModel(optional_state_action_reward_vector=action_reward) |
|||
|
|||
Building the Model |
|||
==================== |
|||
|
|||
.. testsetup:: |
|||
|
|||
# Not displayed in documentation but needed for doctests |
|||
>>> state_labeling = stormpy.storage.StateLabeling(13) |
|||
>>> labels = {'init', 'one', 'two', 'three', 'four', 'five', 'six', 'done', 'deadlock'} |
|||
>>> for label in labels: |
|||
... state_labeling.add_label(label) |
|||
>>> state_labeling.add_label_to_state('init', 0) |
|||
>>> state_labeling.add_label_to_state('one', 7) |
|||
>>> state_labeling.add_label_to_state('two', 8) |
|||
>>> state_labeling.add_label_to_state('three', 9) |
|||
>>> state_labeling.add_label_to_state('four', 10) |
|||
>>> state_labeling.add_label_to_state('five', 11) |
|||
>>> state_labeling.add_label_to_state('six', 12) |
|||
>>> state_labeling.set_states('done', stormpy.BitVector(13, [7, 8, 9, 10, 11, 12])) |
|||
|
|||
We collect the components:: |
|||
|
|||
>>> components = stormpy.SparseModelComponents(transition_matrix=transition_matrix, state_labeling=state_labeling, reward_models=reward_models, rate_transitions=False) |
|||
>>> components.choice_labeling = choice_labeling |
|||
|
|||
We build the model:: |
|||
|
|||
>>> mdp = stormpy.storage.SparseMdp(components) |
|||
>>> print(mdp) # doctest: +SKIP |
|||
Model type: MDP (sparse) |
|||
States: 13 |
|||
Transitions: 22 |
|||
Choices: 14 |
|||
Reward Models: coin_flips |
|||
State Labels: 9 labels |
|||
* one -> 1 item(s) |
|||
* six -> 1 item(s) |
|||
* three -> 1 item(s) |
|||
* four -> 1 item(s) |
|||
* done -> 6 item(s) |
|||
* init -> 1 item(s) |
|||
* five -> 1 item(s) |
|||
* deadlock -> 0 item(s) |
|||
* two -> 1 item(s) |
|||
Choice Labels: 2 labels |
|||
* a -> 1 item(s) |
|||
* b -> 1 item(s) |
|||
|
|||
Partially observable Markov decision process (POMDPs) |
|||
======================================================== |
|||
|
|||
To build a partially observable Markov decision process (POMDP), |
|||
`components.observations` can be set to a list of numbers that defines the status of the observables in each state. |
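A minimal sketch, reusing the components from above (the observation ids and the use of `stormpy.storage.SparsePomdp` are assumptions for illustration, not taken from the example files)::

>>> # Hypothetical: one observation id per state of the 13-state die MDP
>>> components.observations = [0, 1, 1, 2, 2, 3, 3, 4, 4, 4, 4, 4, 4]  # doctest: +SKIP
>>> pomdp = stormpy.storage.SparsePomdp(components)  # doctest: +SKIP
>>> print(pomdp)  # doctest: +SKIP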
@ -0,0 +1,174 @@ |
|||
{ |
|||
"cells": [ |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"# Parametric Models" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Instantiating parametric models\n", |
|||
"\n", |
|||
"[01-parametric-models.py](https://github.com/moves-rwth/stormpy/blob/master/examples//parametric_models/01-parametric-models.py)\n", |
|||
"\n", |
|||
"Input formats such as prism allow to specify programs with open constants. We refer to these open constants as parameters.\n", |
|||
"If the constants only influence the probabilities or rates, but not the topology of the underlying model, we can build these models as parametric models:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> import stormpy\n", |
|||
">>> import stormpy.examples\n", |
|||
">>> import stormpy.examples.files\n", |
|||
">>> path = stormpy.examples.files.prism_dtmc_die\n", |
|||
">>> prism_program = stormpy.parse_prism_program(path)\n", |
|||
">>> formula_str = \"P=? [F s=7 & d=2]\"\n", |
|||
">>> properties = stormpy.parse_properties(formula_str, prism_program)\n", |
|||
">>> model = stormpy.build_parametric_model(prism_program, properties)\n", |
|||
">>> parameters = model.collect_probability_parameters()\n", |
|||
">>> for x in parameters:\n", |
|||
"... print(x)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"In order to obtain a standard DTMC, MDP or other Markov model, we need to instantiate these models by means of a model instantiator:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> import stormpy.pars\n", |
|||
">>> instantiator = stormpy.pars.PDtmcInstantiator(model)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Before we obtain an instantiated model, we need to map parameters to values: We build such a dictionary as follows:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> point = dict()\n", |
|||
">>> for x in parameters:\n", |
|||
"... print(x.name)\n", |
|||
"... point[x] = 0.4\n", |
|||
">>> instantiated_model = instantiator.instantiate(point)\n", |
|||
">>> result = stormpy.model_checking(instantiated_model, properties[0])" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Initial states and labels are set as for the parameter-free case." |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Checking parametric models\n", |
|||
"\n", |
|||
"[02-parametric-models.py](https://github.com/moves-rwth/stormpy/blob/master/examples//parametric_models/02-parametric-models.py)\n", |
|||
"\n", |
|||
"It is also possible to check the parametric model directly, similar as before in [Checking properties](../getting_started.ipynb#checking-properties):" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> result = stormpy.model_checking(model, properties[0])\n", |
|||
">>> initial_state = model.initial_states[0]\n", |
|||
">>> func = result.at(initial_state)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"We collect the constraints ensuring that underlying model is well-formed and the graph structure does not change:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> collector = stormpy.ConstraintCollector(model)\n", |
|||
">>> for formula in collector.wellformed_constraints:\n", |
|||
"... print(formula)\n", |
|||
">>> for formula in collector.graph_preserving_constraints:\n", |
|||
"... print(formula)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Collecting information about the parametric models\n", |
|||
"\n", |
|||
"[03-parametric-models.py](https://github.com/moves-rwth/stormpy/blob/master/examples//parametric_models/03-parametric-models.py)\n", |
|||
"\n", |
|||
"This example shows three implementations to obtain the number of transitions with probability one in a parametric model." |
|||
] |
|||
} |
|||
], |
|||
"metadata": { |
|||
"date": 1598178167.2485256, |
|||
"filename": "parametric_models.rst", |
|||
"kernelspec": { |
|||
"display_name": "Python 3", |
|||
"language": "python", |
|||
"name": "python3" |
|||
}, |
|||
"language_info": { |
|||
"codemirror_mode": { |
|||
"name": "ipython", |
|||
"version": 3 |
|||
}, |
|||
"file_extension": ".py", |
|||
"mimetype": "text/x-python", |
|||
"name": "python", |
|||
"nbconvert_exporter": "python", |
|||
"pygments_lexer": "ipython3", |
|||
"version": "3.8.2" |
|||
}, |
|||
"title": "Parametric Models" |
|||
}, |
|||
"nbformat": 4, |
|||
"nbformat_minor": 4 |
|||
} |
@ -1,67 +0,0 @@ |
|||
***************** |
|||
Parametric Models |
|||
***************** |
|||
|
|||
|
|||
|
|||
Instantiating parametric models |
|||
=============================== |
|||
.. seealso:: `01-parametric-models.py <https://github.com/moves-rwth/stormpy/blob/master/examples//parametric_models/01-parametric-models.py>`_ |
|||
|
|||
Input formats such as prism allow specifying programs with open constants. We refer to these open constants as parameters. |
|||
If the constants only influence the probabilities or rates, but not the topology of the underlying model, we can build these models as parametric models:: |
|||
|
|||
>>> import stormpy |
|||
>>> import stormpy.examples |
|||
>>> import stormpy.examples.files |
|||
>>> path = stormpy.examples.files.prism_dtmc_die |
|||
>>> prism_program = stormpy.parse_prism_program(path) |
|||
>>> formula_str = "P=? [F s=7 & d=2]" |
|||
>>> properties = stormpy.parse_properties(formula_str, prism_program) |
|||
>>> model = stormpy.build_parametric_model(prism_program, properties) |
|||
>>> parameters = model.collect_probability_parameters() |
|||
>>> for x in parameters: |
|||
... print(x) |
|||
|
|||
In order to obtain a standard DTMC, MDP or other Markov model, we need to instantiate these models by means of a model instantiator:: |
|||
|
|||
>>> import stormpy.pars |
|||
>>> instantiator = stormpy.pars.PDtmcInstantiator(model) |
|||
|
|||
Before we obtain an instantiated model, we need to map parameters to values. We build such a dictionary as follows:: |
|||
|
|||
>>> point = dict() |
|||
>>> for x in parameters: |
|||
... print(x.name) |
|||
... point[x] = 0.4 |
|||
>>> instantiated_model = instantiator.instantiate(point) |
|||
>>> result = stormpy.model_checking(instantiated_model, properties[0]) |
|||
|
|||
Initial states and labels are set as for the parameter-free case. |
|||
|
|||
|
|||
Checking parametric models |
|||
========================== |
|||
.. seealso:: `02-parametric-models.py <https://github.com/moves-rwth/stormpy/blob/master/examples//parametric_models/02-parametric-models.py>`_ |
|||
|
|||
It is also possible to check the parametric model directly, similarly to before in :ref:`getting-started-checking-properties`:: |
|||
|
|||
>>> result = stormpy.model_checking(model, properties[0]) |
|||
>>> initial_state = model.initial_states[0] |
|||
>>> func = result.at(initial_state) |
|||
|
|||
We collect the constraints ensuring that the underlying model is well-formed and that the graph structure does not change:: |
|||
|
|||
>>> collector = stormpy.ConstraintCollector(model) |
|||
>>> for formula in collector.wellformed_constraints: |
|||
... print(formula) |
|||
>>> for formula in collector.graph_preserving_constraints: |
|||
... print(formula) |
|||
|
|||
Collecting information about the parametric models |
|||
================================================== |
|||
.. seealso:: `03-parametric-models.py <https://github.com/moves-rwth/stormpy/blob/master/examples//parametric_models/03-parametric-models.py>`_ |
|||
|
|||
This example shows three ways to obtain the number of transitions with probability one in a parametric model. |
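
As a rough sketch (not one of the three implementations from the example file), one can reuse the state/action/transition iteration from the getting-started guide and test each parametric transition value for being the constant one; here we simply compare its string representation, assuming the constant one prints as ``1``::

>>> count = 0
>>> for state in model.states:
...     for action in state.actions:
...         for transition in action.transitions:
...             if str(transition.value()) == "1":
...                 count += 1
>>> print("Transitions with probability one: {}".format(count))
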
|||
|
|||
|
@ -0,0 +1,133 @@ |
|||
{ |
|||
"cells": [ |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"# Reward Models\n", |
|||
"\n", |
|||
"In [Getting Started](../getting_started.ipynb), we mainly looked at probabilities in the Markov models and properties that refer to these probabilities.\n", |
|||
"In this section, we discuss reward models." |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Exploring reward models\n", |
|||
"\n", |
|||
"[01-reward-models.py](https://github.com/moves-rwth/stormpy/blob/master/examples/reward_models/01-reward-models.py)\n", |
|||
"\n", |
|||
"We consider the die again, but with another property which talks about the expected reward:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> import stormpy\n", |
|||
">>> import stormpy.examples\n", |
|||
">>> import stormpy.examples.files\n", |
|||
">>> program = stormpy.parse_prism_program(stormpy.examples.files.prism_dtmc_die)\n", |
|||
">>> prop = \"R=? [F \\\"done\\\"]\"\n", |
|||
"\n", |
|||
">>> properties = stormpy.parse_properties(prop, program, None)\n", |
|||
">>> model = stormpy.build_model(program, properties)\n", |
|||
">>> assert len(model.reward_models) == 1" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"The model now has a reward model, as the property talks about rewards.\n", |
|||
"When [Building Models](building_models.ipynb) from explicit sources, the reward model is always included if it is defined in the source.\n", |
|||
"We can do model checking analogous to probabilities:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> initial_state = model.initial_states[0]\n", |
|||
">>> result = stormpy.model_checking(model, properties[0])\n", |
|||
">>> print(\"Result: {}\".format(round(result.at(initial_state), 6)))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"The reward model has a name which we can obtain as follows:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> reward_model_name = list(model.reward_models.keys())[0]\n", |
|||
">>> print(reward_model_name)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"We discuss later how to work with multiple reward models.\n", |
|||
"Rewards come in multiple fashions, as state rewards, state-action rewards and as transition rewards.\n", |
|||
"In this example, we only have state-action rewards. These rewards are a vector, over which we can trivially iterate:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> assert not model.reward_models[reward_model_name].has_state_rewards\n", |
|||
">>> assert model.reward_models[reward_model_name].has_state_action_rewards\n", |
|||
">>> assert not model.reward_models[reward_model_name].has_transition_rewards\n", |
|||
">>> for reward in model.reward_models[reward_model_name].state_action_rewards:\n", |
|||
"... print(reward)" |
|||
] |
|||
} |
|||
], |
|||
"metadata": { |
|||
"date": 1598188121.7157953, |
|||
"filename": "reward_models.rst", |
|||
"kernelspec": { |
|||
"display_name": "Python 3", |
|||
"language": "python", |
|||
"name": "python3" |
|||
}, |
|||
"language_info": { |
|||
"codemirror_mode": { |
|||
"name": "ipython", |
|||
"version": 3 |
|||
}, |
|||
"file_extension": ".py", |
|||
"mimetype": "text/x-python", |
|||
"name": "python", |
|||
"nbconvert_exporter": "python", |
|||
"pygments_lexer": "ipython3", |
|||
"version": "3.8.2" |
|||
}, |
|||
"title": "Reward Models" |
|||
}, |
|||
"nbformat": 4, |
|||
"nbformat_minor": 4 |
|||
} |
@ -1,64 +0,0 @@ |
|||
************** |
|||
Reward Models |
|||
************** |
|||
|
|||
In :doc:`../getting_started`, we mainly looked at probabilities in the Markov models and properties that refer to these probabilities. |
|||
In this section, we discuss reward models. |
|||
|
|||
Exploring reward models |
|||
------------------------ |
|||
.. seealso:: `01-reward-models.py <https://github.com/moves-rwth/stormpy/blob/master/examples/reward_models/01-reward-models.py>`_ |
|||
|
|||
We consider the die again, but with another property which talks about the expected reward:: |
|||
|
|||
>>> import stormpy |
|||
>>> import stormpy.examples |
|||
>>> import stormpy.examples.files |
|||
>>> program = stormpy.parse_prism_program(stormpy.examples.files.prism_dtmc_die) |
|||
>>> prop = "R=? [F \"done\"]" |
|||
|
|||
>>> properties = stormpy.parse_properties(prop, program, None) |
|||
>>> model = stormpy.build_model(program, properties) |
|||
>>> assert len(model.reward_models) == 1 |
|||
|
|||
The model now has a reward model, as the property talks about rewards. |
|||
When :doc:`building_models` from explicit sources, the reward model is always included if it is defined in the source. |
|||
We can do model checking analogously to probabilities:: |
|||
|
|||
|
|||
>>> initial_state = model.initial_states[0] |
|||
>>> result = stormpy.model_checking(model, properties[0]) |
|||
>>> print("Result: {}".format(round(result.at(initial_state), 6))) |
|||
Result: 3.666667 |
|||
|
|||
|
|||
The reward model has a name which we can obtain as follows:: |
|||
|
|||
>>> reward_model_name = list(model.reward_models.keys())[0] |
|||
>>> print(reward_model_name) |
|||
coin_flips |
|||
|
|||
We discuss later how to work with multiple reward models. |
|||
Rewards come in multiple fashions: as state rewards, state-action rewards, and transition rewards. |
|||
In this example, we only have state-action rewards. These rewards are a vector, over which we can trivially iterate:: |
|||
|
|||
>>> assert not model.reward_models[reward_model_name].has_state_rewards |
|||
>>> assert model.reward_models[reward_model_name].has_state_action_rewards |
|||
>>> assert not model.reward_models[reward_model_name].has_transition_rewards |
|||
>>> for reward in model.reward_models[reward_model_name].state_action_rewards: |
|||
... print(reward) |
|||
1.0 |
|||
1.0 |
|||
1.0 |
|||
1.0 |
|||
1.0 |
|||
1.0 |
|||
1.0 |
|||
0.0 |
|||
0.0 |
|||
0.0 |
|||
0.0 |
|||
0.0 |
|||
0.0 |
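
Since the die model is a DTMC, every state has exactly one action, so the i-th entry of this vector belongs to state i (for models with non-determinism, the vector is indexed by state-action pairs instead). A small sketch that pairs the rewards with their state indices::

>>> rewards = model.reward_models[reward_model_name].state_action_rewards
>>> for state_index, reward in enumerate(rewards):
...     print("state {}: reward {}".format(state_index, reward))
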
|||
|
|||
|
@ -0,0 +1,202 @@ |
|||
{ |
|||
"cells": [ |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"# Working with Schedulers\n", |
|||
"\n", |
|||
"In non-deterministic models the notion of a scheduler (or policy) is important.\n", |
|||
"The scheduler determines which action to take at each state.\n", |
|||
"\n", |
|||
"For a given reachability property, Storm can return the scheduler realizing the resulting probability." |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Examining Schedulers for MDPs\n", |
|||
"\n", |
|||
"[01-schedulers.py](https://github.com/moves-rwth/stormpy/blob/master/examples/schedulers/01-schedulers.py)\n", |
|||
"\n", |
|||
"As in [Getting Started](../getting_started.ipynb), we import some required modules and build a model from the example files:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> import stormpy\n", |
|||
">>> import stormpy.core\n", |
|||
">>> import stormpy.examples\n", |
|||
">>> import stormpy.examples.files\n", |
|||
"\n", |
|||
">>> path = stormpy.examples.files.prism_mdp_coin_2_2\n", |
|||
">>> formula_str = \"Pmin=? [F \\\"finished\\\" & \\\"all_coins_equal_1\\\"]\"\n", |
|||
">>> program = stormpy.parse_prism_program(path)\n", |
|||
">>> formulas = stormpy.parse_properties(formula_str, program)\n", |
|||
">>> model = stormpy.build_model(program, formulas)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Next we check the model and make sure to extract the scheduler:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> result = stormpy.model_checking(model, formulas[0], extract_scheduler=True)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"The result then contains the scheduler we want:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> assert result.has_scheduler\n", |
|||
">>> scheduler = result.scheduler\n", |
|||
">>> assert scheduler.memoryless\n", |
|||
">>> assert scheduler.deterministic\n", |
|||
">>> print(scheduler)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"To get the information which action the scheduler chooses in which state, we can simply iterate over the states:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> for state in model.states:\n", |
|||
"... choice = scheduler.get_choice(state)\n", |
|||
"... action = choice.get_deterministic_choice()\n", |
|||
"... print(\"In state {} choose action {}\".format(state, action))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Examining Schedulers for Markov automata\n", |
|||
"\n", |
|||
"[02-schedulers.py](https://github.com/moves-rwth/stormpy/blob/master/examples/schedulers/02-schedulers.py)\n", |
|||
"\n", |
|||
"Currently there is no support yet for scheduler extraction on MAs.\n", |
|||
"However, if the timing information is not relevant for the property, we can circumvent this lack by first transforming the MA to an MDP.\n", |
|||
"\n", |
|||
"We build the model as before:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> path = stormpy.examples.files.prism_ma_simple\n", |
|||
">>> formula_str = \"Tmin=? [ F s=4 ]\"\n", |
|||
"\n", |
|||
">>> program = stormpy.parse_prism_program(path, False, True)\n", |
|||
">>> formulas = stormpy.parse_properties_for_prism_program(formula_str, program)\n", |
|||
">>> ma = stormpy.build_model(program, formulas)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Next we transform the continuous-time model into a discrete-time model.\n", |
|||
"Note that all timing information is lost at this point:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> mdp, mdp_formulas = stormpy.transform_to_discrete_time_model(ma, formulas)\n", |
|||
">>> assert mdp.model_type == stormpy.ModelType.MDP" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"After the transformation we have obtained an MDP where we can extract the scheduler as shown before:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> result = stormpy.model_checking(mdp, mdp_formulas[0], extract_scheduler=True)\n", |
|||
">>> scheduler = result.scheduler\n", |
|||
">>> print(scheduler)\n" |
|||
] |
|||
} |
|||
], |
|||
"metadata": { |
|||
"date": 1598178167.268541, |
|||
"filename": "schedulers.rst", |
|||
"kernelspec": { |
|||
"display_name": "Python 3", |
|||
"language": "python", |
|||
"name": "python3" |
|||
}, |
|||
"language_info": { |
|||
"codemirror_mode": { |
|||
"name": "ipython", |
|||
"version": 3 |
|||
}, |
|||
"file_extension": ".py", |
|||
"mimetype": "text/x-python", |
|||
"name": "python", |
|||
"nbconvert_exporter": "python", |
|||
"pygments_lexer": "ipython3", |
|||
"version": "3.8.2" |
|||
}, |
|||
"title": "Working with Schedulers" |
|||
}, |
|||
"nbformat": 4, |
|||
"nbformat_minor": 4 |
|||
} |
@ -1,99 +0,0 @@ |
|||
*********************** |
|||
Working with Schedulers |
|||
*********************** |
|||
|
|||
In non-deterministic models the notion of a scheduler (or policy) is important. |
|||
The scheduler determines which action to take at each state. |
|||
|
|||
For a given reachability property, Storm can return the scheduler realizing the resulting probability. |
|||
|
|||
Examining Schedulers for MDPs |
|||
============================= |
|||
|
|||
.. seealso:: `01-schedulers.py <https://github.com/moves-rwth/stormpy/blob/master/examples/schedulers/01-schedulers.py>`_ |
|||
|
|||
As in :doc:`../getting_started`, we import some required modules and build a model from the example files:: |
|||
|
|||
>>> import stormpy |
|||
>>> import stormpy.core |
|||
>>> import stormpy.examples |
|||
>>> import stormpy.examples.files |
|||
|
|||
>>> path = stormpy.examples.files.prism_mdp_coin_2_2 |
|||
>>> formula_str = "Pmin=? [F \"finished\" & \"all_coins_equal_1\"]" |
|||
>>> program = stormpy.parse_prism_program(path) |
|||
>>> formulas = stormpy.parse_properties(formula_str, program) |
|||
>>> model = stormpy.build_model(program, formulas) |
|||
|
|||
Next we check the model and make sure to extract the scheduler:: |
|||
|
|||
>>> result = stormpy.model_checking(model, formulas[0], extract_scheduler=True) |
|||
|
|||
The result then contains the scheduler we want:: |
|||
|
|||
>>> assert result.has_scheduler |
|||
>>> scheduler = result.scheduler |
|||
>>> assert scheduler.memoryless |
|||
>>> assert scheduler.deterministic |
|||
>>> print(scheduler) |
|||
___________________________________________________________________ |
|||
Fully defined memoryless deterministic scheduler: |
|||
model state: choice(s) |
|||
0 0 |
|||
1 0 |
|||
2 1 |
|||
3 0 |
|||
-etc- |
|||
|
|||
To see which action the scheduler chooses in each state, we can simply iterate over the states:: |
|||
|
|||
>>> for state in model.states: |
|||
... choice = scheduler.get_choice(state) |
|||
... action = choice.get_deterministic_choice() |
|||
... print("In state {} choose action {}".format(state, action)) |
|||
In state 0 choose action 0 |
|||
In state 1 choose action 0 |
|||
In state 2 choose action 1 |
|||
In state 3 choose action 0 |
|||
In state 4 choose action 0 |
|||
In state 5 choose action 0 |
|||
-etc- |
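
If we also want the Markov chain induced by this scheduler, stormpy MDPs provide an ``apply_scheduler`` method; the following sketch assumes that this method is available in the installed stormpy version::

>>> induced = model.apply_scheduler(scheduler)
>>> print(induced.model_type)
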
|||
|
|||
|
|||
Examining Schedulers for Markov automata |
|||
======================================== |
|||
|
|||
.. seealso:: `02-schedulers.py <https://github.com/moves-rwth/stormpy/blob/master/examples/schedulers/02-schedulers.py>`_ |
|||
|
|||
Currently, there is no support for scheduler extraction on MAs. |
|||
However, if the timing information is not relevant for the property, we can circumvent this limitation by first transforming the MA into an MDP. |
|||
|
|||
We build the model as before:: |
|||
|
|||
>>> path = stormpy.examples.files.prism_ma_simple |
|||
>>> formula_str = "Tmin=? [ F s=4 ]" |
|||
|
|||
>>> program = stormpy.parse_prism_program(path, False, True) |
|||
>>> formulas = stormpy.parse_properties_for_prism_program(formula_str, program) |
|||
>>> ma = stormpy.build_model(program, formulas) |
|||
|
|||
Next we transform the continuous-time model into a discrete-time model. |
|||
Note that all timing information is lost at this point:: |
|||
|
|||
>>> mdp, mdp_formulas = stormpy.transform_to_discrete_time_model(ma, formulas) |
|||
>>> assert mdp.model_type == stormpy.ModelType.MDP |
|||
|
|||
After the transformation, we have obtained an MDP on which we can extract the scheduler as shown before:: |
|||
|
|||
>>> result = stormpy.model_checking(mdp, mdp_formulas[0], extract_scheduler=True) |
|||
>>> scheduler = result.scheduler |
|||
>>> print(scheduler) |
|||
___________________________________________________________________ |
|||
Fully defined memoryless deterministic scheduler: |
|||
model state: choice(s) |
|||
0 1 |
|||
1 0 |
|||
2 0 |
|||
3 0 |
|||
4 0 |
|||
-etc- |
@ -0,0 +1,167 @@ |
|||
{ |
|||
"cells": [ |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"# Working with Shortest Paths\n", |
|||
"\n", |
|||
"Storm can enumerate the most probable paths of a model, leading from the initial state to a defined set of target states, which we refer to as shortest paths.\n", |
|||
"In particular, the model states visited along those paths are available as sets and can be accumulated, yielding a *sub-model*." |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Background\n", |
|||
"\n", |
|||
"The underlying implementation uses the *recursive enumeration algorithm* [[JM1999]](#jm1999), substituting distance for probability – which is why we refer to the most probable paths as the *shortest* paths.\n", |
|||
"\n", |
|||
"This algorithm computes the shortest paths recursively and in order, i.e., to find the 7th shortest path, the 1st through 6th shortest paths are computed as precursors. The next (i.e., 8th shortest) path can then be computed efficiently.\n", |
|||
"\n", |
|||
"It is crucial to note that *any* path is eligible, including those that (repeatedly) traverse loops (i.e., *non-simple* paths). This is a common case in practice: Often a large number of similar paths that differ only in the order and number of loop traversals occur successively in the sequence of shortest paths. (For applications that are only interested in simple paths, this is rather unfortunate.)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## Examining Shortest Paths\n", |
|||
"\n", |
|||
"[01-shortest-paths.py](https://github.com/moves-rwth/stormpy/blob/master/examples/shortest_paths/01-shortest-paths.py)\n", |
|||
"\n", |
|||
"As in [Getting Started](../getting_started.ipynb), we import some required modules and build a model from the example files:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> import stormpy.examples\n", |
|||
">>> import stormpy.examples.files\n", |
|||
">>> path = stormpy.examples.files.prism_dtmc_die\n", |
|||
">>> prism_program = stormpy.parse_prism_program(path)\n", |
|||
">>> model = stormpy.build_model(prism_program)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"We also import the `ShortestPathsGenerator`:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> from stormpy.utility import ShortestPathsGenerator" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"and choose a target state (by its ID) to which we want to compute the shortest paths:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> state_id = 8" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"It is also possible to specify a set of target states (as a list, e.g., `[8, 10, 11]`) or a label in the model if applicable (e.g., `\"observe0Greater1\"`).\n", |
|||
"For simplicity, we will stick to using a single state for now.\n", |
|||
"\n", |
|||
"We initialize a `ShortestPathsGenerator` instance:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> spg = ShortestPathsGenerator(model, state_id)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Now we can query the k-shortest path by index. Note that 1-based indices are used, so that the 3rd shortest path indeed corresponds to index `k=3`.\n", |
|||
"Let us inspect the first three shortest paths:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> for k in range(1, 4):\n", |
|||
"... path = spg.get_path_as_list(k)\n", |
|||
"... distance = spg.get_distance(k)\n", |
|||
"... print(\"{}-shortest path to state #{}: {}, with distance {}\".format(k, state_id, path, distance))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"As you can see, the distance (i.e., probability of the path) is also available.\n", |
|||
"Note that the paths are displayed as a backward-traversal from the target to the initial state.\n", |
|||
"\n", |
|||
"<a id='jm1999'></a>\n", |
|||
"\\[JM1999\\] Víctor M. Jiménez, Andrés Marzal. [Computing the K Shortest Paths: A New Algorithm and an Experimental Comparison](https://scholar.google.com/scholar?q=Computing+the+k+shortest+paths%3A+A+new+algorithm+and+an+experimental+comparison). International Workshop on Algorithm Engineering, 1999" |
|||
] |
|||
} |
|||
], |
|||
"metadata": { |
|||
"date": 1598178167.2826114, |
|||
"filename": "shortest_paths.rst", |
|||
"kernelspec": { |
|||
"display_name": "Python 3", |
|||
"language": "python", |
|||
"name": "python3" |
|||
}, |
|||
"language_info": { |
|||
"codemirror_mode": { |
|||
"name": "ipython", |
|||
"version": 3 |
|||
}, |
|||
"file_extension": ".py", |
|||
"mimetype": "text/x-python", |
|||
"name": "python", |
|||
"nbconvert_exporter": "python", |
|||
"pygments_lexer": "ipython3", |
|||
"version": "3.8.2" |
|||
}, |
|||
"title": "Working with Shortest Paths" |
|||
}, |
|||
"nbformat": 4, |
|||
"nbformat_minor": 4 |
|||
} |
@ -1,63 +0,0 @@ |
|||
*************************** |
|||
Working with Shortest Paths |
|||
*************************** |
|||
|
|||
Storm can enumerate the most probable paths of a model, leading from the initial state to a defined set of target states, which we refer to as shortest paths. |
|||
In particular, the model states visited along those paths are available as sets and can be accumulated, yielding a *sub-model*. |
|||
|
|||
Background |
|||
========== |
|||
|
|||
The underlying implementation uses the *recursive enumeration algorithm* [JM1999]_, with probability taking the role of the distance measure – which is why we refer to the most probable paths as the *shortest* paths. |
|||
|
|||
This algorithm computes the shortest paths recursively and in order, i.e., to find the 7th shortest path, the 1st through 6th shortest paths are computed as precursors. The next (i.e., 8th shortest) path can then be computed efficiently. |
|||
|
|||
It is crucial to note that *any* path is eligible, including those that (repeatedly) traverse loops (i.e., *non-simple* paths). This is a common case in practice: Often a large number of similar paths that differ only in the order and number of loop traversals occur successively in the sequence of shortest paths. (For applications that are only interested in simple paths, this is rather unfortunate.) |
|||
|
|||
|
|||
Examining Shortest Paths |
|||
======================== |
|||
|
|||
.. seealso:: `01-shortest-paths.py <https://github.com/moves-rwth/stormpy/blob/master/examples/shortest_paths/01-shortest-paths.py>`_ |
|||
|
|||
As in :doc:`../getting_started`, we import some required modules and build a model from the example files:: |
|||
|
|||
>>> import stormpy |
>>> import stormpy.examples |
|||
>>> import stormpy.examples.files |
|||
>>> path = stormpy.examples.files.prism_dtmc_die |
|||
>>> prism_program = stormpy.parse_prism_program(path) |
|||
>>> model = stormpy.build_model(prism_program) |
|||
|
|||
|
|||
We also import the ``ShortestPathsGenerator``:: |
|||
|
|||
>>> from stormpy.utility import ShortestPathsGenerator |
|||
|
|||
and choose a target state (by its ID) to which we want to compute the shortest paths:: |
|||
|
|||
>>> state_id = 8 |
|||
|
|||
It is also possible to specify a set of target states (as a list, e.g., ``[8, 10, 11]``) or a label in the model if applicable (e.g., ``"observe0Greater1"``). |
|||
For simplicity, we will stick to using a single state for now. |
|||
|
|||
We initialize a ``ShortestPathsGenerator`` instance:: |
|||
|
|||
>>> spg = ShortestPathsGenerator(model, state_id) |
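
A generator for several target states at once can be constructed analogously; as a sketch (the ids below are arbitrary states of the die model)::

>>> spg_multi = ShortestPathsGenerator(model, [8, 10, 11])
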
|||
|
|||
Now we can query the `k`-th shortest path by its index. Note that 1-based indices are used, so that the 3rd shortest path indeed corresponds to index ``k=3``. |
|||
Let us inspect the first three shortest paths:: |
|||
|
|||
>>> for k in range(1, 4): |
|||
... path = spg.get_path_as_list(k) |
|||
... distance = spg.get_distance(k) |
|||
... print("{}-shortest path to state #{}: {}, with distance {}".format(k, state_id, path, distance)) |
|||
1-shortest path to state #8: [8, 4, 1, 0], with distance 0.125 |
|||
2-shortest path to state #8: [8, 4, 1, 3, 1, 0], with distance 0.03125 |
|||
3-shortest path to state #8: [8, 4, 1, 3, 1, 3, 1, 0], with distance 0.0078125 |
|||
|
|||
As you can see, the distance (i.e., probability of the path) is also available. |
|||
Note that the paths are displayed as a backward-traversal from the target to the initial state. |
|||
|
|||
.. Yeah, sorry about that. Would be more user-friendly to (un-)reverse it |
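
If the forward order (from the initial state to the target) is preferred, the returned list can simply be reversed in Python::

>>> forward_path = list(reversed(spg.get_path_as_list(1)))
>>> print(forward_path)
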
|||
|
|||
.. [JM1999] Víctor M. Jiménez, Andrés Marzal. `Computing the K Shortest Paths: A New Algorithm and an Experimental Comparison <https://scholar.google.com/scholar?q=Computing+the+k+shortest+paths%3A+A+new+algorithm+and+an+experimental+comparison>`_. International Workshop on Algorithm Engineering, 1999 |
@ -0,0 +1,480 @@ |
|||
{ |
|||
"cells": [ |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"# Getting Started\n", |
|||
"\n", |
|||
"Before starting with this guide, one should follow the instructions for [Installation](installation.ipynb)." |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"## A Quick Tour through Stormpy\n", |
|||
"\n", |
|||
"This guide is intended for people which have a basic understanding of probabilistic models and their verification. More details and further pointers to literature can be found on the\n", |
|||
"[Storm website](http://www.stormchecker.org/).\n", |
|||
"While we assume some very basic programming concepts, we refrain from using more advanced concepts of python throughout the guide.\n", |
|||
"\n", |
|||
"We start with a selection of high-level constructs in stormpy, and go into more details afterwards. More in-depth examples can be found in the [Advanced Examples](advanced_topics.ipynb).\n", |
|||
"\n", |
|||
"The code examples are also given in the [examples/](https://github.com/moves-rwth/stormpy/blob/master/examples/) folder. These boxes throughout the text will tell you which example contains the code discussed.\n", |
|||
"\n", |
|||
"We start by launching the python 3 interpreter:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"source": [ |
|||
"```\n", |
|||
"$ python3\n", |
|||
"```" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"First we import stormpy:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> import stormpy" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"### Building models\n", |
|||
"\n", |
|||
"[01-getting-started.py](https://github.com/moves-rwth/stormpy/blob/master/examples/01-getting-started.py)\n", |
|||
"\n", |
|||
"There are several ways to create a Markov chain.\n", |
|||
"One of the easiest is to parse a description of such a Markov chain and to let Storm build the chain.\n", |
|||
"\n", |
|||
"Here, we build a Markov chain from a prism program.\n", |
|||
"Stormpy comes with a small set of examples, which we use here:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> import stormpy.examples\n", |
|||
">>> import stormpy.examples.files" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"With this, we can now import the path of our prism file:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> path = stormpy.examples.files.prism_dtmc_die\n", |
|||
">>> prism_program = stormpy.parse_prism_program(path)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"The `prism_program` can be translated into a Markov chain:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> model = stormpy.build_model(prism_program)\n", |
|||
">>> print(\"Number of states: {}\".format(model.nr_states))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": {}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> print(\"Number of transitions: {}\".format(model.nr_transitions))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"This tells us that the model has 13 states and 20 transitions.\n", |
|||
"\n", |
|||
"Moreover, initial states and deadlocks are indicated with a labelling function. We can see the labels present in the model by:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> print(\"Labels: {}\".format(model.labeling.get_labels()))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"We will investigate ways to examine the model in more detail later in [Investigating the model](#getting-started-investigating-the-model).\n", |
|||
"\n", |
|||
"\n" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"### Building properties\n", |
|||
"\n", |
|||
"[02-getting-started.py](https://github.com/moves-rwth/stormpy/blob/master/examples/02-getting-started.py)\n", |
|||
"\n", |
|||
"Storm takes properties in the prism-property format.\n", |
|||
"To express that one is interested in the reachability of any state where the prism program variable `s` is 2, one would formulate:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"source": [ |
|||
"```\n", |
|||
"P=? [F s=2]\n", |
|||
"```" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Stormpy can be used to parse this. As the variables in the property refer to a program, the program has to be passed as an additional parameter:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> formula_str = \"P=? [F s=2]\"\n", |
|||
">>> properties = stormpy.parse_properties(formula_str, prism_program)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Notice that properties is now a list of properties containing a single element.\n", |
|||
"\n", |
|||
"However, if we build the model as before, then the appropriate information that the variable `s=2` in some states is not present.\n", |
|||
"In order to label the states accordingly, we should notify Storm upon building the model that we would like to preserve given properties.\n", |
|||
"Storm will then add the labels accordingly:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> model = stormpy.build_model(prism_program, properties)\n", |
|||
">>> print(\"Labels in the model: {}\".format(sorted(model.labeling.get_labels())))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Model building however now behaves slightly different: Only the properties passed are preserved, which means that model building might skip parts of the model.\n", |
|||
"In particular, to check the probability of eventually reaching a state `x` where `s=2`, successor states of `x` are not relevant:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> print(\"Number of states: {}\".format(model.nr_states))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"If we consider another property, however, such as:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"source": [ |
|||
"```\n", |
|||
"P=? [F s=7 & d=2]\n", |
|||
"```" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"then Storm is only skipping exploration of successors of the particular state `y` where `s=7` and `d=2`. In this model, state `y` has a self-loop, so effectively, the whole model is explored.\n", |
|||
"\n", |
|||
"\n", |
|||
"<a id='getting-started-checking-properties'></a>" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"### Checking properties\n", |
|||
"\n", |
|||
"[03-getting-started.py](https://github.com/moves-rwth/stormpy/blob/master/examples/03-getting-started.py)\n", |
|||
"\n", |
|||
"The last lesson taught us to construct properties and models with matching state labels.\n", |
|||
"Now default checking routines are just a simple command away:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> properties = stormpy.parse_properties(formula_str, prism_program)\n", |
|||
">>> model = stormpy.build_model(prism_program, properties)\n", |
|||
">>> result = stormpy.model_checking(model, properties[0])" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"The result may contain information about all states.\n", |
|||
"Instead, we can iterate over the results:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> assert result.result_for_all_states\n", |
|||
">>> for x in result.get_values():\n", |
|||
"... pass # do something with x" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"#### Results for all states\n", |
|||
"\n", |
|||
"Some model checking algorithms do not provide results for all states. In those cases, the result is not valid for all states, and to iterate over them, a different method is required. We will explain this later.\n", |
|||
"\n", |
|||
"A good way to get the result for the initial states is as follows:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> initial_state = model.initial_states[0]\n", |
|||
">>> print(result.at(initial_state))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"\n", |
|||
"<a id='getting-started-investigating-the-model'></a>" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"### Investigating the model\n", |
|||
"\n", |
|||
"[04-getting-started.py](https://github.com/moves-rwth/stormpy/blob/master/examples/04-getting-started.py)\n", |
|||
"\n", |
|||
"One powerful part of the Storm model checker is to quickly create the Markov chain from higher-order descriptions, as seen above:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> path = stormpy.examples.files.prism_dtmc_die\n", |
|||
">>> prism_program = stormpy.parse_prism_program(path)\n", |
|||
">>> model = stormpy.build_model(prism_program)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"In this example, we will exploit this, and explore the underlying Markov chain of the model.\n", |
|||
"The most basic question might be what the type of the constructed model is:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> print(model.model_type)" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"We can also directly explore the underlying state space/matrix.\n", |
|||
"Notice that this code can be applied to both deterministic and non-deterministic models:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> for state in model.states:\n", |
|||
"... for action in state.actions:\n", |
|||
"... for transition in action.transitions:\n", |
|||
"... print(\"From state {}, with probability {}, go to state {}\".format(state, transition.value(), transition.column))" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"Let us go into some more details. For DTMCs, each state has (at most) one outgoing probability distribution.\n", |
|||
"Thus:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> for state in model.states:\n", |
|||
"... assert len(state.actions) <= 1" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "markdown", |
|||
"metadata": {}, |
|||
"source": [ |
|||
"We can also check if a state is indeed an initial state. Notice that `model.initial_states` contains state ids, not states.:" |
|||
] |
|||
}, |
|||
{ |
|||
"cell_type": "code", |
|||
"execution_count": null, |
|||
"metadata": { |
|||
"hide-output": false |
|||
}, |
|||
"outputs": [], |
|||
"source": [ |
|||
">>> for state in model.states:\n", |
|||
"... if state.id in model.initial_states:\n", |
|||
"... pass" |
|||
] |
|||
} |
|||
], |
|||
"metadata": { |
|||
"date": 1598188121.7690735, |
|||
"filename": "getting_started.rst", |
|||
"kernelspec": { |
|||
"display_name": "Python 3", |
|||
"language": "python", |
|||
"name": "python3" |
|||
}, |
|||
"language_info": { |
|||
"codemirror_mode": { |
|||
"name": "ipython", |
|||
"version": 3 |
|||
}, |
|||
"file_extension": ".py", |
|||
"mimetype": "text/x-python", |
|||
"name": "python", |
|||
"nbconvert_exporter": "python", |
|||
"pygments_lexer": "ipython3", |
|||
"version": "3.8.2" |
|||
}, |
|||
"title": "Getting Started" |
|||
}, |
|||
"nbformat": 4, |
|||
"nbformat_minor": 4 |
|||
} |
@ -1,188 +0,0 @@ |
|||
**************************** |
|||
Getting Started |
|||
**************************** |
|||
|
|||
Before starting with this guide, one should follow the instructions for :doc:`installation`. |
|||
|
|||
A Quick Tour through Stormpy |
|||
================================ |
|||
|
|||
This guide is intended for people who have a basic understanding of probabilistic models and their verification. More details and further pointers to literature can be found on the |
|||
`Storm website <http://www.stormchecker.org/>`_. |
|||
While we assume some very basic programming concepts, we refrain from using more advanced concepts of python throughout the guide. |
|||
|
|||
We start with a selection of high-level constructs in stormpy, and go into more detail afterwards. More in-depth examples can be found in the :doc:`advanced_topics`. |
|||
|
|||
.. seealso:: The code examples are also given in the `examples/ <https://github.com/moves-rwth/stormpy/blob/master/examples/>`_ folder. These boxes throughout the text will tell you which example contains the code discussed. |
|||
|
|||
We start by launching the python 3 interpreter:: |
|||
|
|||
$ python3 |
|||
|
|||
First we import stormpy:: |
|||
|
|||
>>> import stormpy |
|||
|
|||
Building models |
|||
------------------------------------------------ |
|||
.. seealso:: `01-getting-started.py <https://github.com/moves-rwth/stormpy/blob/master/examples/01-getting-started.py>`_ |
|||
|
|||
There are several ways to create a Markov chain. |
|||
One of the easiest is to parse a description of such a Markov chain and to let Storm build the chain. |
|||
|
|||
Here, we build a Markov chain from a prism program. |
|||
Stormpy comes with a small set of examples, which we use here:: |
|||
|
|||
>>> import stormpy.examples |
|||
>>> import stormpy.examples.files |
|||
|
|||
With this, we can now import the path of our prism file:: |
|||
|
|||
>>> path = stormpy.examples.files.prism_dtmc_die |
|||
>>> prism_program = stormpy.parse_prism_program(path) |
|||
|
|||
The ``prism_program`` can be translated into a Markov chain:: |
|||
|
|||
>>> model = stormpy.build_model(prism_program) |
|||
>>> print("Number of states: {}".format(model.nr_states)) |
|||
Number of states: 13 |
|||
>>> print("Number of transitions: {}".format(model.nr_transitions)) |
|||
Number of transitions: 20 |
|||
|
|||
This tells us that the model has 13 states and 20 transitions. |
|||
|
|||
Moreover, initial states and deadlocks are indicated with a labelling function. We can see the labels present in the model by:: |
|||
|
|||
>>> print("Labels: {}".format(model.labeling.get_labels())) |
|||
Labels: ... |
|||
|
|||
We will investigate ways to examine the model in more detail later in :ref:`getting-started-investigating-the-model`. |
|||
|
|||
.. _getting-started-building-properties: |
|||
|
|||
Building properties |
|||
-------------------------- |
|||
.. seealso:: `02-getting-started.py <https://github.com/moves-rwth/stormpy/blob/master/examples/02-getting-started.py>`_ |
|||
|
|||
Storm takes properties in the prism-property format. |
|||
To express that one is interested in the reachability of any state where the prism program variable ``s`` is 2, one would formulate:: |
|||
|
|||
P=? [F s=2] |
|||
|
|||
Stormpy can be used to parse this. As the variables in the property refer to a program, the program has to be passed as an additional parameter:: |
|||
|
|||
>>> formula_str = "P=? [F s=2]" |
|||
>>> properties = stormpy.parse_properties(formula_str, prism_program) |
|||
|
|||
Notice that ``properties`` is now a list of properties containing a single element. |
|||
|
|||
However, if we build the model as before, then the information that the variable ``s=2`` holds in some states is not present. |
|||
In order to label the states accordingly, we should notify Storm upon building the model that we would like to preserve given properties. |
|||
Storm will then add the labels accordingly:: |
|||
|
|||
>>> model = stormpy.build_model(prism_program, properties) |
|||
>>> print("Labels in the model: {}".format(sorted(model.labeling.get_labels()))) |
|||
Labels in the model: ['(s = 2)', 'deadlock', 'init'] |
|||
|
|||
Model building, however, now behaves slightly differently: only the properties passed are preserved, which means that model building might skip parts of the model. |
|||
In particular, to check the probability of eventually reaching a state ``x`` where ``s=2``, successor states of ``x`` are not relevant:: |
|||
|
|||
>>> print("Number of states: {}".format(model.nr_states)) |
|||
Number of states: 8 |
|||
|
|||
If we consider another property, however, such as:: |
|||
|
|||
P=? [F s=7 & d=2] |
|||
|
|||
then Storm only skips the exploration of successors of the particular state ``y`` where ``s=7`` and ``d=2``. In this model, state ``y`` has a self-loop, so effectively, the whole model is explored. |
|||
|
|||
.. _getting-started-checking-properties: |
|||
|
|||
Checking properties |
|||
------------------------------------ |
|||
.. seealso:: `03-getting-started.py <https://github.com/moves-rwth/stormpy/blob/master/examples/03-getting-started.py>`_ |
|||
|
|||
The last lesson taught us to construct properties and models with matching state labels. |
|||
Now default checking routines are just a simple command away:: |
|||
|
|||
>>> properties = stormpy.parse_properties(formula_str, prism_program) |
|||
>>> model = stormpy.build_model(prism_program, properties) |
|||
>>> result = stormpy.model_checking(model, properties[0]) |
|||
|
|||
The result may contain information about all states. |
|||
Instead, we can iterate over the results:: |
|||
|
|||
>>> assert result.result_for_all_states |
|||
>>> for x in result.get_values(): |
|||
... pass # do something with x |
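
For instance, the values can be paired with the corresponding state ids; for this explicitly built model, the i-th value belongs to the state with id i::

>>> for state_id, value in enumerate(result.get_values()):
...     print("Probability in state {}: {}".format(state_id, value))
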
|||
|
|||
|
|||
.. topic:: Results for all states |
|||
|
|||
Some model checking algorithms do not provide results for all states. In those cases, the result is not valid for all states, and to iterate over them, a different method is required. We will explain this later. |
|||
|
|||
A good way to get the result for the initial states is as follows:: |
|||
|
|||
>>> initial_state = model.initial_states[0] |
|||
>>> print(result.at(initial_state)) |
|||
0.5 |
|||
|
|||
.. _getting-started-investigating-the-model: |
|||
|
|||
Investigating the model |
|||
------------------------------------- |
|||
.. seealso:: `04-getting-started.py <https://github.com/moves-rwth/stormpy/blob/master/examples/04-getting-started.py>`_ |
|||
|
|||
One powerful feature of the Storm model checker is its ability to quickly create the Markov chain from high-level descriptions, as seen above:: |
|||
|
|||
>>> path = stormpy.examples.files.prism_dtmc_die |
|||
>>> prism_program = stormpy.parse_prism_program(path) |
|||
>>> model = stormpy.build_model(prism_program) |
|||
|
|||
In this example, we will exploit this, and explore the underlying Markov chain of the model. |
|||
The most basic question might be what the type of the constructed model is:: |
|||
|
|||
>>> print(model.model_type) |
|||
ModelType.DTMC |
|||
|
|||
We can also directly explore the underlying state space/matrix. |
|||
Notice that this code can be applied to both deterministic and non-deterministic models:: |
|||
|
|||
>>> for state in model.states: |
|||
... for action in state.actions: |
|||
... for transition in action.transitions: |
|||
... print("From state {}, with probability {}, go to state {}".format(state, transition.value(), transition.column)) |
|||
From state 0, with probability 0.5, go to state 1 |
|||
From state 0, with probability 0.5, go to state 2 |
|||
From state 1, with probability 0.5, go to state 3 |
|||
From state 1, with probability 0.5, go to state 4 |
|||
From state 2, with probability 0.5, go to state 5 |
|||
From state 2, with probability 0.5, go to state 6 |
|||
From state 3, with probability 0.5, go to state 1 |
|||
From state 3, with probability 0.5, go to state 7 |
|||
From state 4, with probability 0.5, go to state 8 |
|||
From state 4, with probability 0.5, go to state 9 |
|||
From state 5, with probability 0.5, go to state 10 |
|||
From state 5, with probability 0.5, go to state 11 |
|||
From state 6, with probability 0.5, go to state 2 |
|||
From state 6, with probability 0.5, go to state 12 |
|||
From state 7, with probability 1.0, go to state 7 |
|||
From state 8, with probability 1.0, go to state 8 |
|||
From state 9, with probability 1.0, go to state 9 |
|||
From state 10, with probability 1.0, go to state 10 |
|||
From state 11, with probability 1.0, go to state 11 |
|||
From state 12, with probability 1.0, go to state 12 |
|||
|
|||
Let us go into some more details. For DTMCs, each state has (at most) one outgoing probability distribution. |
|||
Thus:: |
|||
|
|||
>>> for state in model.states: |
|||
... assert len(state.actions) <= 1 |
|||
|
|||
|
|||
We can also check if a state is indeed an initial state. Notice that ``model.initial_states`` contains state ids, not states:: |
|||
|
|||
>>> for state in model.states: |
|||
... if state.id in model.initial_states: |
|||
... pass |
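
In the same manner, we can inspect the labels attached to an individual state; the sketch below assumes the ``labels`` attribute of a state object, which recent stormpy versions provide::

>>> for state in model.states:
...     if state.id in model.initial_states:
...         print("State {} is initial and carries labels {}".format(state.id, state.labels))
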
|||
|