
replaced rst files

refactoring

Authored by hannah 5 years ago, committed by Matthias Volk
commit 9bc48bbf3a
No known key found for this signature in database. GPG Key ID: 83A57678F739FCD3
  1. doc/source/api/core.ipynb (1375 lines changed)
  2. doc/source/api/core.rst (7 lines changed)
  3. doc/source/api/dft.ipynb (99 lines changed)
  4. doc/source/api/dft.rst (7 lines changed)
  5. doc/source/api/exceptions.ipynb (25 lines changed)
  6. doc/source/api/exceptions.rst (7 lines changed)
  7. doc/source/api/gspn.ipynb (384 lines changed)
  8. doc/source/api/gspn.rst (7 lines changed)
  9. doc/source/api/info.ipynb (34 lines changed)
  10. doc/source/api/info.rst (7 lines changed)
  11. doc/source/api/logic.ipynb (101 lines changed)
  12. doc/source/api/logic.rst (7 lines changed)
  13. doc/source/api/pars.ipynb (110 lines changed)
  14. doc/source/api/pars.rst (7 lines changed)
  15. doc/source/api/storage.ipynb (759 lines changed)
  16. doc/source/api/storage.rst (7 lines changed)
  17. doc/source/api/utility.ipynb (53 lines changed)
  18. doc/source/api/utility.rst (7 lines changed)
  19. doc/source/doc/.ipynb_checkpoints/analysis-checkpoint.ipynb (196 lines changed)
  20. doc/source/doc/.ipynb_checkpoints/building_models-checkpoint.ipynb (138 lines changed)
  21. doc/source/doc/.ipynb_checkpoints/dfts-checkpoint.ipynb (132 lines changed)
  22. doc/source/doc/.ipynb_checkpoints/exploration-checkpoint.ipynb (285 lines changed)
  23. doc/source/doc/.ipynb_checkpoints/gspns-checkpoint.ipynb (260 lines changed)
  24. doc/source/doc/.ipynb_checkpoints/parametric_models-checkpoint.ipynb (174 lines changed)
  25. doc/source/doc/.ipynb_checkpoints/reward_models-checkpoint.ipynb (133 lines changed)
  26. doc/source/doc/.ipynb_checkpoints/schedulers-checkpoint.ipynb (202 lines changed)
  27. doc/source/doc/.ipynb_checkpoints/shortest_paths-checkpoint.ipynb (167 lines changed)
  28. doc/source/doc/analysis.ipynb (196 lines changed)
  29. doc/source/doc/analysis.rst (69 lines changed)
  30. doc/source/doc/building_models.ipynb (11 lines changed)
  31. doc/source/doc/dfts.ipynb (132 lines changed)
  32. doc/source/doc/dfts.rst (51 lines changed)
  33. doc/source/doc/engines.ipynb (109 lines changed)
  34. doc/source/doc/engines.rst (82 lines changed)
  35. doc/source/doc/exploration.ipynb (285 lines changed)
  36. doc/source/doc/exploration.rst (113 lines changed)
  37. doc/source/doc/gspns.ipynb (260 lines changed)
  38. doc/source/doc/gspns.rst (84 lines changed)
  39. doc/source/doc/models/.ipynb_checkpoints/building_ctmcs-checkpoint.ipynb (189 lines changed)
  40. doc/source/doc/models/.ipynb_checkpoints/building_dtmcs-checkpoint.ipynb (328 lines changed)
  41. doc/source/doc/models/.ipynb_checkpoints/building_mas-checkpoint.ipynb (211 lines changed)
  42. doc/source/doc/models/.ipynb_checkpoints/building_mdps-checkpoint.ipynb (309 lines changed)
  43. doc/source/doc/models/building_ctmcs.ipynb (189 lines changed)
  44. doc/source/doc/models/building_ctmcs.rst (103 lines changed)
  45. doc/source/doc/models/building_dtmcs.ipynb (328 lines changed)
  46. doc/source/doc/models/building_dtmcs.rst (151 lines changed)
  47. doc/source/doc/models/building_mas.ipynb (211 lines changed)
  48. doc/source/doc/models/building_mas.rst (120 lines changed)
  49. doc/source/doc/models/building_mdps.ipynb (309 lines changed)
  50. doc/source/doc/models/building_mdps.rst (151 lines changed)
  51. doc/source/doc/parametric_models.ipynb (174 lines changed)
  52. doc/source/doc/parametric_models.rst (67 lines changed)
  53. doc/source/doc/reward_models.ipynb (133 lines changed)
  54. doc/source/doc/reward_models.rst (64 lines changed)
  55. doc/source/doc/schedulers.ipynb (202 lines changed)
  56. doc/source/doc/schedulers.rst (99 lines changed)
  57. doc/source/doc/shortest_paths.ipynb (167 lines changed)
  58. doc/source/doc/shortest_paths.rst (63 lines changed)
  59. doc/source/getting_started.ipynb (481 lines changed)
  60. doc/source/getting_started.rst (188 lines changed)

1375
doc/source/api/core.ipynb
File diff suppressed because it is too large

7
doc/source/api/core.rst

@@ -1,7 +0,0 @@
Stormpy.core
**************************
.. automodule:: stormpy
    :members:
    :undoc-members:
    :imported-members:

99
doc/source/api/dft.ipynb

@@ -0,0 +1,99 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Stormpy.dft\n",
"\n",
"class DFTDynamic Fault Tree\n",
"\n",
"can_have_nondeterminismself: stormpy.dft.dft.DFTboolWhether the model can contain non-deterministic choices\n",
"\n",
"get_elementself: stormpy.dft.dft.DFTindex: intstormpy.dft.dft.DFTElementGet DFT element at index\n",
"\n",
"get_element_by_nameself: stormpy.dft.dft.DFTname: strstormpy.dft.dft.DFTElementGet DFT element by name\n",
"\n",
"modularisationself: stormpy.dft.dft.DFTList[stormpy.dft.dft.DFT]Split DFT into independent modules\n",
"\n",
"nr_beself: stormpy.dft.dft.DFTintNumber of basic elements\n",
"\n",
"nr_dynamicself: stormpy.dft.dft.DFTintNumber of dynamic elements\n",
"\n",
"nr_elementsself: stormpy.dft.dft.DFTintTotal number of elements\n",
"\n",
"symmetriesself: stormpy.dft.dft.DFTstorm::storage::DFTIndependentSymmetriesCompute symmetries in DFT\n",
"\n",
"property top_level_elementGet top level element\n",
"\n",
"class DFTElementDFT element\n",
"\n",
"property idId\n",
"\n",
"property nameName\n",
"\n",
"class DFTSymmetriesSymmetries in DFT\n",
"\n",
"property groupsSymmetry groups\n",
"\n",
"class ParametricDFTParametric DFT\n",
"\n",
"can_have_nondeterminismself: stormpy.dft.dft.ParametricDFTboolWhether the model can contain non-deterministic choices\n",
"\n",
"get_elementself: stormpy.dft.dft.ParametricDFTindex: intstormpy.dft.dft.ParametricDFTElementGet DFT element at index\n",
"\n",
"get_element_by_nameself: stormpy.dft.dft.ParametricDFTname: strstormpy.dft.dft.ParametricDFTElementGet DFT element by name\n",
"\n",
"modularisationself: stormpy.dft.dft.ParametricDFTList[stormpy.dft.dft.ParametricDFT]Split DFT into independent modules\n",
"\n",
"nr_beself: stormpy.dft.dft.ParametricDFTintNumber of basic elements\n",
"\n",
"nr_dynamicself: stormpy.dft.dft.ParametricDFTintNumber of dynamic elements\n",
"\n",
"nr_elementsself: stormpy.dft.dft.ParametricDFTintTotal number of elements\n",
"\n",
"symmetriesself: stormpy.dft.dft.ParametricDFTstorm::storage::DFTIndependentSymmetriesCompute symmetries in DFT\n",
"\n",
"property top_level_elementGet top level element\n",
"\n",
"class ParametricDFTElementParametric DFT element\n",
"\n",
"property idId\n",
"\n",
"property nameName\n",
"\n",
"analyze_dftdft: stormpy.dft.dft.DFTproperties: List[stormpy.logic.logic.Formula]symred: bool = Trueallow_modularisation: bool = Falserelevant_events: Set[int] = set()dc_for_relevant: bool = FalseList[float]Analyze the DFT\n",
"\n",
"compute_dependency_conflictsdft: stormpy.dft.dft.DFTuse_smt: bool = Falsesolver_timeout: float = 0boolSet conflicts between FDEPs. Is used in analysis.\n",
"\n",
"compute_relevant_eventsdft: stormpy.dft.dft.DFTproperties: List[stormpy.logic.logic.Formula]additional_relevant_names: List[str] = []Set[int]Compute relevant event ids from properties and additional relevant names\n",
"\n",
"export_dft_json_filedft: stormpy.dft.dft.DFTpath: strNoneExport DFT to JSON file\n",
"\n",
"export_dft_json_stringdft: stormpy.dft.dft.DFTstrExport DFT to JSON string\n",
"\n",
"is_well_formeddft: stormpy.dft.dft.DFTcheck_valid_for_analysis: bool = TrueTuple[bool, str]Check whether DFT is well-formed.\n",
"\n",
"load_dft_galileo_filepath: strstormpy.dft.dft.DFTLoad DFT from Galileo file\n",
"\n",
"load_dft_json_filepath: strstormpy.dft.dft.DFTLoad DFT from JSON file\n",
"\n",
"load_dft_json_stringjson_string: strstormpy.dft.dft.DFTLoad DFT from JSON string\n",
"\n",
"transform_dftdft: stormpy.dft.dft.DFTunique_constant_be: boolbinary_fdeps: boolstormpy.dft.dft.DFTApply transformations on DFT"
]
}
],
"metadata": {
"date": 1598178166.5329514,
"filename": "dft.rst",
"kernelspec": {
"display_name": "Python",
"language": "python3",
"name": "python3"
},
"title": "Stormpy.dft"
},
"nbformat": 4,
"nbformat_minor": 4
}
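The listing above documents the DFT loading and analysis entry points. As a minimal sketch of how they fit together (assuming a Galileo file named example.dft exists; stormpy.parse_properties comes from the core module and is an assumption here, not part of this file):

    import stormpy
    import stormpy.dft

    # Load a DFT from a Galileo file (load_dft_json_file works analogously).
    dft = stormpy.dft.load_dft_galileo_file("example.dft")  # hypothetical path
    print(dft.nr_elements(), "elements,", dft.nr_be(), "basic elements")

    # Check well-formedness before running an analysis.
    well_formed, message = stormpy.dft.is_well_formed(dft)

    # Expected time until the top level element fails.
    properties = stormpy.parse_properties('T=? [ F "failed" ]')
    results = stormpy.dft.analyze_dft(dft, [p.raw_formula for p in properties])
    print(results[0])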

7
doc/source/api/dft.rst

@@ -1,7 +0,0 @@
Stormpy.dft
**************************
.. automodule:: stormpy.dft
    :members:
    :undoc-members:
    :imported-members:

25
doc/source/api/exceptions.ipynb

@@ -0,0 +1,25 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Stormpy.exceptions\n",
"\n",
"exception StormErrormessageBase class for exceptions in Storm."
]
}
],
"metadata": {
"date": 1598178166.53555,
"filename": "exceptions.rst",
"kernelspec": {
"display_name": "Python",
"language": "python3",
"name": "python3"
},
"title": "Stormpy.exceptions"
},
"nbformat": 4,
"nbformat_minor": 4
}
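Since StormError is the single base class documented above, a short sketch of catching it (the input path is hypothetical):

    import stormpy
    from stormpy.exceptions import StormError

    try:
        # Storm-side failures surface in Python as StormError.
        program = stormpy.parse_prism_program("does_not_exist.prism")  # hypothetical path
    except StormError as error:
        print("Storm reported an error:", error)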

7
doc/source/api/exceptions.rst

@@ -1,7 +0,0 @@
Stormpy.exceptions
**************************
.. automodule:: stormpy.exceptions
    :members:
    :undoc-members:
    :imported-members:

384
doc/source/api/gspn.ipynb

@@ -0,0 +1,384 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Stormpy.gspn\n",
"\n",
"class GSPNGeneralized Stochastic Petri Net\n",
"\n",
"export_gspn_pnml_fileself: stormpy.gspn.gspn.GSPNfilepath: strNoneExport GSPN to PNML file\n",
"\n",
"export_gspn_pnpro_fileself: stormpy.gspn.gspn.GSPNfilepath: strNoneExport GSPN to PNPRO file\n",
"\n",
"get_immediate_transitionself: stormpy.gspn.gspn.GSPNname: strstorm::gspn::ImmediateTransition<double>Returns the immediate transition with the corresponding name\n",
"\n",
"get_immediate_transitionsself: stormpy.gspn.gspn.GSPNList[storm::gspn::ImmediateTransition<double>]> Returns the immediate transitions of this GSPN.\n",
"\n",
"\n",
"\n",
"<dl style='margin: 20px 0;'>\n",
"<dt>Return type</dt>\n",
"<dd>\n",
"list[stormpy.ImmediateTransition]\n",
"\n",
"</dd>\n",
"\n",
"</dl>\n",
"\n",
"get_initial_markingself: stormpy.gspn.gspn.GSPNarg0: Dict[int, int]arg1: intstorm::gspn::MarkingComputes the initial marking of this GSPN\n",
"\n",
"get_nameself: stormpy.gspn.gspn.GSPNstrGet name of GSPN\n",
"\n",
"get_number_of_immediate_transitionsself: stormpy.gspn.gspn.GSPNintGet the number of immediate transitions in this GSPN\n",
"\n",
"get_number_of_placesself: stormpy.gspn.gspn.GSPNintGet the number of places in this GSPN\n",
"\n",
"get_number_of_timed_transitionsself: stormpy.gspn.gspn.GSPNintGet the number of timed transitions in this GSPN\n",
"\n",
"get_partitionsself: stormpy.gspn.gspn.GSPNList[storm::gspn::TransitionPartition]Get the partitions of this GSPN\n",
"\n",
"get_place*args**kwargsOverloaded function.\n",
"\n",
"1. get_place(self: stormpy.gspn.gspn.GSPN, id: int) -> storm::gspn::Place \n",
" \n",
" \n",
" Returns the place with the corresponding id. \n",
" \n",
" \n",
" <dl style='margin: 20px 0;'>\n",
" <dt>param uint64_t id</dt>\n",
" <dd>\n",
" The ID of the place. \n",
" </dd>\n",
" <dt>rtype</dt>\n",
" <dd>\n",
" stormpy.Place \n",
" </dd>\n",
" \n",
" </dl>\n",
" \n",
" \n",
"1. get_place(self: stormpy.gspn.gspn.GSPN, name: str) -> storm::gspn::Place \n",
"\n",
"\n",
"get_placesself: stormpy.gspn.gspn.GSPNList[storm::gspn::Place]> Returns a vector of the places of this GSPN.\n",
"\n",
"\n",
"\n",
"<dl style='margin: 20px 0;'>\n",
"<dt>Return type</dt>\n",
"<dd>\n",
"list[stormpy.Place]\n",
"\n",
"</dd>\n",
"\n",
"</dl>\n",
"\n",
"get_timed_transitionself: stormpy.gspn.gspn.GSPNname: strstorm::gspn::TimedTransition<double>Returns the timed transition with the corresponding name\n",
"\n",
"get_timed_transitionsself: stormpy.gspn.gspn.GSPNList[storm::gspn::TimedTransition<double>]> Returns a vector of the timed transitions of this GSPN.\n",
"\n",
"\n",
"\n",
"<dl style='margin: 20px 0;'>\n",
"<dt>Return type</dt>\n",
"<dd>\n",
"list[stormpy.TimedTransition]\n",
"\n",
"</dd>\n",
"\n",
"</dl>\n",
"\n",
"get_transitionself: stormpy.gspn.gspn.GSPNname: strstorm::gspn::TransitionReturns the transition with the corresponding name\n",
"\n",
"immediate_transition_id_to_transition_idarg0: intintis_validself: stormpy.gspn.gspn.GSPNboolPerform some checks\n",
"\n",
"set_nameself: stormpy.gspn.gspn.GSPNarg0: strNoneSet name of GSPN\n",
"\n",
"timed_transition_id_to_transition_idarg0: intinttransition_id_to_immediate_transition_idarg0: intinttransition_id_to_timed_transition_idarg0: intintclass GSPNBuilderGeneralized Stochastic Petri Net Builder\n",
"\n",
"add_immediate_transitionself: stormpy.gspn.gspn.GSPNBuilderpriority: int = 0weight: float = 0name: str = ''intAdd an immediate transition to the GSPN\n",
"\n",
"add_inhibition_arc*args**kwargsOverloaded function.\n",
"\n",
"1. add_inhibition_arc(self: stormpy.gspn.gspn.GSPNBuilder, from: int, to: int, multiplicity: int=1) -> None \n",
" \n",
" \n",
" Add an new inhibition arc from a place to a transition. \n",
" \n",
" \n",
" <dl style='margin: 20px 0;'>\n",
" <dt>param from</dt>\n",
" <dd>\n",
" The ID or name of the place from which the arc is originating. \n",
" </dd>\n",
" <dt>type from</dt>\n",
" <dd>\n",
" uint_64_t or str \n",
" </dd>\n",
" <dt>param to</dt>\n",
" <dd>\n",
" The ID or name of the transition to which the arc goes to. \n",
" </dd>\n",
" <dt>type to</dt>\n",
" <dd>\n",
" uint_64_t or str \n",
" </dd>\n",
" <dt>param uint64_t multiplicity</dt>\n",
" <dd>\n",
" The multiplicity of the arc, default = 1. \n",
" </dd>\n",
" \n",
" </dl>\n",
" \n",
" \n",
"1. add_inhibition_arc(self: stormpy.gspn.gspn.GSPNBuilder, from: str, to: str, multiplicity: int=1) -> None \n",
"\n",
"\n",
"add_input_arc*args**kwargsOverloaded function.\n",
"\n",
"1. add_input_arc(self: stormpy.gspn.gspn.GSPNBuilder, from: int, to: int, multiplicity: int=1) -> None \n",
" \n",
" \n",
" Add a new input arc from a place to a transition \n",
" \n",
" \n",
" <dl style='margin: 20px 0;'>\n",
" <dt>param from</dt>\n",
" <dd>\n",
" The ID or name of the place from which the arc is originating. \n",
" </dd>\n",
" <dt>type from</dt>\n",
" <dd>\n",
" uint_64_t or str \n",
" </dd>\n",
" <dt>param uint_64_t to</dt>\n",
" <dd>\n",
" The ID or name of the transition to which the arc goes to. \n",
" </dd>\n",
" <dt>type from</dt>\n",
" <dd>\n",
" uint_64_t or str \n",
" </dd>\n",
" <dt>param uint64_t multiplicity</dt>\n",
" <dd>\n",
" The multiplicity of the arc, default = 1. \n",
" </dd>\n",
" \n",
" </dl>\n",
" \n",
" \n",
"1. add_input_arc(self: stormpy.gspn.gspn.GSPNBuilder, from: str, to: str, multiplicity: int=1) -> None \n",
"\n",
"\n",
"add_normal_arcself: stormpy.gspn.gspn.GSPNBuilderfrom: strto: strmultiplicity: int=1None> Add an arc from a named element to a named element.\n",
"Can be both input or output arc, but not an inhibition arc.\n",
"Convenience function for textual format parsers.\n",
"\n",
"\n",
"\n",
"<dl style='margin: 20px 0;'>\n",
"<dt>Parameters</dt>\n",
"<dd>\n",
"- from (str) – Source element in the GSPN from where this arc starts. \n",
"- to (str) – Target element in the GSPN where this arc ends. \n",
"- multiplicity (uint64_t) – Multiplicity of the arc, default = 1. \n",
"\n",
"\n",
"</dd>\n",
"\n",
"</dl>\n",
"\n",
"add_output_arc*args**kwargsOverloaded function.\n",
"\n",
"1. add_output_arc(self: stormpy.gspn.gspn.GSPNBuilder, from: int, to: int, multiplicity: int=1) -> None \n",
" \n",
" \n",
" Add an new output arc from a transition to a place. \n",
" \n",
" \n",
" <dl style='margin: 20px 0;'>\n",
" <dt>param from</dt>\n",
" <dd>\n",
" The ID or name of the transition from which the arc is originating. \n",
" </dd>\n",
" <dt>type from</dt>\n",
" <dd>\n",
" uint_64_t or str \n",
" </dd>\n",
" <dt>param to</dt>\n",
" <dd>\n",
" The ID or name of the place to which the arc goes to. \n",
" </dd>\n",
" <dt>type to</dt>\n",
" <dd>\n",
" uint_64_t or str \n",
" </dd>\n",
" <dt>param uint64_t multiplicity</dt>\n",
" <dd>\n",
" The multiplicity of the arc, default = 1. \n",
" </dd>\n",
" \n",
" </dl>\n",
" \n",
" \n",
"1. add_output_arc(self: stormpy.gspn.gspn.GSPNBuilder, from: str, to: str, multiplicity: int) -> None \n",
"\n",
"\n",
"add_placeself: stormpy.gspn.gspn.GSPNBuildercapacity: Optional[int] = 1initial_tokens: int = 0name: str = ''intAdd a place to the GSPN\n",
"\n",
"add_timed_transition*args**kwargsOverloaded function.\n",
"\n",
"1. add_timed_transition(self: stormpy.gspn.gspn.GSPNBuilder, priority: int, rate: float, name: str=’’) -> int \n",
"\n",
"\n",
"Add a timed transition to the GSPN\n",
"\n",
"1. add_timed_transition(self: stormpy.gspn.gspn.GSPNBuilder, priority: int, rate: float, num_servers: Optional[int], name: str=’’) -> int \n",
"\n",
"\n",
"build_gspnself: stormpy.gspn.gspn.GSPNBuilderexpression_manager: stormpy.storage.storage.ExpressionManager = Noneconstants_substitution: Dict[stormpy.storage.storage.Variable, stormpy.storage.storage.Expression] = {}storm::gspn::GSPNConstruct GSPN\n",
"\n",
"set_nameself: stormpy.gspn.gspn.GSPNBuildername: strNoneSet name of GSPN\n",
"\n",
"set_place_layout_infoself: stormpy.gspn.gspn.GSPNBuilderplace_id: intlayout_info: storm::gspn::LayoutInfoNone> Set place layout information.\n",
"\n",
"\n",
"\n",
"<dl style='margin: 20px 0;'>\n",
"<dt>Parameters</dt>\n",
"<dd>\n",
"- id (uint64_t) – The ID of the place. \n",
"- layout_info (stormpy.LayoutInfo) – The layout information. \n",
"\n",
"\n",
"</dd>\n",
"\n",
"</dl>\n",
"\n",
"set_transition_layout_infoself: stormpy.gspn.gspn.GSPNBuildertransition_id: intlayout_info: storm::gspn::LayoutInfoNone> Set transition layout information.\n",
"\n",
"\n",
"\n",
"<dl style='margin: 20px 0;'>\n",
"<dt>Parameters</dt>\n",
"<dd>\n",
"- id (uint64_t) – The ID of the transition. \n",
"- layout_info (stormpy.LayoutInfo) – The layout information. \n",
"\n",
"\n",
"</dd>\n",
"\n",
"</dl>\n",
"\n",
"class GSPNParserparseself: stormpy.gspn.gspn.GSPNParserfilename: strconstant_definitions: str = ''stormpy.gspn.gspn.GSPNclass GSPNToJaniBuilderbuildself: stormpy.gspn.gspn.GSPNToJaniBuilderautomaton_name: str = 'gspn_automaton'stormpy.storage.storage.JaniModelBuild Jani model from GSPN\n",
"\n",
"create_deadlock_propertiesself: stormpy.gspn.gspn.GSPNToJaniBuilderjani_model: stormpy.storage.storage.JaniModelList[stormpy.core.Property]Create standard properties for deadlocks\n",
"\n",
"class ImmediateTransitionImmediateTransition in a GSPN\n",
"\n",
"get_weightself: stormpy.gspn.gspn.ImmediateTransitionfloatGet weight of this transition\n",
"\n",
"no_weight_attachedself: stormpy.gspn.gspn.ImmediateTransitionboolTrue iff no weight is attached\n",
"\n",
"set_weightself: stormpy.gspn.gspn.ImmediateTransitionweight: floatNoneSet weight of this transition\n",
"\n",
"class LayoutInfoproperty rotationproperty xproperty yclass PlacePlace in a GSPN\n",
"\n",
"get_capacityself: stormpy.gspn.gspn.PlaceintGet the capacity of tokens of this place\n",
"\n",
"get_idself: stormpy.gspn.gspn.PlaceintGet the id of this place\n",
"\n",
"get_nameself: stormpy.gspn.gspn.PlacestrGet name of this place\n",
"\n",
"get_number_of_initial_tokensself: stormpy.gspn.gspn.PlaceintGet the number of initial tokens of this place\n",
"\n",
"has_restricted_capacityself: stormpy.gspn.gspn.PlaceboolIs capacity of this place restricted\n",
"\n",
"set_capacityself: stormpy.gspn.gspn.Placecap: Optional[int]NoneSet the capacity of tokens of this place\n",
"\n",
"set_nameself: stormpy.gspn.gspn.Placename: strNoneSet name of this place\n",
"\n",
"set_number_of_initial_tokensself: stormpy.gspn.gspn.Placetokens: intNoneSet the number of initial tokens of this place\n",
"\n",
"class TimedTransitionTimedTransition in a GSPN\n",
"\n",
"get_number_of_serversself: stormpy.gspn.gspn.TimedTransitionintGet number of servers\n",
"\n",
"get_rateself: stormpy.gspn.gspn.TimedTransitionfloatGet rate of this transition\n",
"\n",
"has_infinite_server_semanticsself: stormpy.gspn.gspn.TimedTransitionboolGet semantics of this transition\n",
"\n",
"has_k_server_semanticsself: stormpy.gspn.gspn.TimedTransitionboolGet semantics of this transition\n",
"\n",
"has_single_server_semanticsself: stormpy.gspn.gspn.TimedTransitionboolGet semantics of this transition\n",
"\n",
"set_infinite_server_semanticsself: stormpy.gspn.gspn.TimedTransitionNoneSet semantics of this transition\n",
"\n",
"set_k_server_semanticsself: stormpy.gspn.gspn.TimedTransitionk: intNoneSet semantics of this transition\n",
"\n",
"set_rateself: stormpy.gspn.gspn.TimedTransitionrate: floatNoneSet rate of this transition\n",
"\n",
"set_single_server_semanticsself: stormpy.gspn.gspn.TimedTransitionNoneSet semantics of this transition\n",
"\n",
"class TransitionTransition in a GSPN\n",
"\n",
"exists_inhibition_arcself: stormpy.gspn.gspn.Transitionplace: stormpy.gspn.gspn.PlaceboolCheck whether the given place is connected to this transition via an inhibition arc.\n",
"\n",
"exists_input_arcself: stormpy.gspn.gspn.Transitionplace: stormpy.gspn.gspn.PlaceboolCheck whether the given place is connected to this transition via an input arc.\n",
"\n",
"exists_output_arcself: stormpy.gspn.gspn.Transitionplace: stormpy.gspn.gspn.PlaceboolCheck whether the given place is connected to this transition via an output arc.\n",
"\n",
"fireself: stormpy.gspn.gspn.Transitionmarking: storm::gspn::Markingstorm::gspn::MarkingFire the transition if possible.\n",
"\n",
"get_idself: stormpy.gspn.gspn.TransitionintGet id of this transition\n",
"\n",
"get_inhibition_arc_multiplicityself: stormpy.gspn.gspn.Transitionplace: stormpy.gspn.gspn.PlaceintReturns the corresponding multiplicity.\n",
"\n",
"get_inhibition_placesself: stormpy.gspn.gspn.TransitionDict[int, int]get_input_arc_multiplicityself: stormpy.gspn.gspn.Transitionplace: stormpy.gspn.gspn.PlaceintReturns the corresponding multiplicity.\n",
"\n",
"get_input_placesself: stormpy.gspn.gspn.TransitionDict[int, int]get_nameself: stormpy.gspn.gspn.TransitionstrGet name of this transition\n",
"\n",
"get_output_arc_multiplicityself: stormpy.gspn.gspn.Transitionplace: stormpy.gspn.gspn.PlaceintReturns the corresponding multiplicity.\n",
"\n",
"get_output_placesself: stormpy.gspn.gspn.TransitionDict[int, int]get_priorityself: stormpy.gspn.gspn.TransitionintGet priority of this transition\n",
"\n",
"is_enabledself: stormpy.gspn.gspn.Transitionmarking: storm::gspn::MarkingboolCheck if the given marking enables the transition.\n",
"\n",
"remove_inhibition_arcself: stormpy.gspn.gspn.Transitionplace: stormpy.gspn.gspn.PlaceboolRemove an inhibition arc connected to a given place.\n",
"\n",
"remove_input_arcself: stormpy.gspn.gspn.Transitionplace: stormpy.gspn.gspn.PlaceboolRemove an input arc connected to a given place.\n",
"\n",
"remove_output_arcself: stormpy.gspn.gspn.Transitionplace: stormpy.gspn.gspn.PlaceboolRemove an output arc connected to a given place.\n",
"\n",
"set_inhibition_arc_multiplicityself: stormpy.gspn.gspn.Transitionplace: stormpy.gspn.gspn.Placemultiplicity: intNoneSet the multiplicity of the inhibition arc originating from the place.\n",
"\n",
"set_input_arc_multiplicityself: stormpy.gspn.gspn.Transitionplace: stormpy.gspn.gspn.Placemultiplicity: intNoneSet the multiplicity of the input arc originating from the place.\n",
"\n",
"set_nameself: stormpy.gspn.gspn.Transitionname: strNoneSet name of this transition\n",
"\n",
"set_output_arc_multiplicityself: stormpy.gspn.gspn.Transitionplace: stormpy.gspn.gspn.Placemultiplicity: intNoneSet the multiplicity of the output arc going to the place.\n",
"\n",
"set_priorityself: stormpy.gspn.gspn.Transitionpriority: intNoneSet priority of this transition\n",
"\n",
"class TransitionPartitionnr_transitionsself: stormpy.gspn.gspn.TransitionPartitionintGet number of transitions\n",
"\n",
"property priorityproperty transitions"
]
}
],
"metadata": {
"date": 1598178166.7294815,
"filename": "gspn.rst",
"kernelspec": {
"display_name": "Python",
"language": "python3",
"name": "python3"
},
"title": "Stormpy.gspn"
},
"nbformat": 4,
"nbformat_minor": 4
}
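A minimal sketch of the GSPNBuilder API documented above (place and transition names are made up for illustration; the export path is hypothetical):

    import stormpy.gspn

    builder = stormpy.gspn.GSPNBuilder()
    builder.set_name("example_gspn")

    # Two places: one initially marked, one empty.
    p_source = builder.add_place(initial_tokens=1, name="source")
    p_target = builder.add_place(initial_tokens=0, name="target")

    # A timed transition that moves the token from source to target.
    t_move = builder.add_timed_transition(0, 2.0, "move")
    builder.add_input_arc(p_source, t_move)
    builder.add_output_arc(t_move, p_target)

    gspn = builder.build_gspn()
    print(gspn.get_number_of_places(), gspn.get_number_of_timed_transitions())
    gspn.export_gspn_pnml_file("example.pnml")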

7
doc/source/api/gspn.rst

@@ -1,7 +0,0 @@
Stormpy.gspn
**************************
.. automodule:: stormpy.gspn
    :members:
    :undoc-members:
    :imported-members:

34
doc/source/api/info.ipynb

@@ -0,0 +1,34 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Stormpy.info\n",
"\n",
"class VersionVersion information for Storm\n",
"\n",
"build_info = \"Compiled on Linux 5.4.0-31-generic using gcc 9.3.0 with flags ' -std=c++14 -O3 -DNDEBUG -fprefetch-loop-arrays -flto -flto-partition=none -march=native -fomit-frame-pointer'\"development = Truelong = 'Version 1.6.1 (dev) (+ 121 commits) build from revision 8d9e2a92f03b713aee2a4b6b65737cc5c8c54856 (clean)'major = 1minor = 6patch = 1short = '1.6.1 (dev)'storm_exact_use_clnCheck if exact arithmetic in Storm uses CLN.\n",
":return: True if exact arithmetic uses CLN.\n",
"\n",
"storm_ratfunc_use_clnCheck if rational functions in Storm use CLN.\n",
":return: True iff rational functions use CLN.\n",
"\n",
"storm_versionGet storm version.\n",
":return: Storm version"
]
}
],
"metadata": {
"date": 1598178166.7365491,
"filename": "info.rst",
"kernelspec": {
"display_name": "Python",
"language": "python3",
"name": "python3"
},
"title": "Stormpy.info"
},
"nbformat": 4,
"nbformat_minor": 4
}
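A small sketch querying the version information listed above (assuming the Version attributes are accessible as class attributes, as shown in the listing):

    import stormpy.info

    print("Storm version:", stormpy.info.storm_version())
    print("Exact arithmetic uses CLN:", stormpy.info.storm_exact_use_cln())
    print("Rational functions use CLN:", stormpy.info.storm_ratfunc_use_cln())

    version = stormpy.info.Version
    print(version.major, version.minor, version.patch, version.short)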

7
doc/source/api/info.rst

@@ -1,7 +0,0 @@
Stormpy.info
**************************
.. automodule:: stormpy.info
    :members:
    :undoc-members:
    :imported-members:

101
doc/source/api/logic.ipynb

@@ -0,0 +1,101 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Stormpy.logic\n",
"\n",
"class AtomicExpressionFormulaFormula with an atomic expression\n",
"\n",
"class AtomicLabelFormulaFormula with an atomic label\n",
"\n",
"class BinaryPathFormulaPath formula with two operands\n",
"\n",
"property left_subformulaproperty right_subformulaclass BinaryStateFormulaState formula with two operands\n",
"\n",
"class BooleanBinaryStateFormulaBoolean binary state formula\n",
"\n",
"class BooleanLiteralFormulaFormula with a boolean literal\n",
"\n",
"class BoundedUntilFormulaUntil Formula with either a step or a time bound.\n",
"\n",
"class ComparisonTypeGEQ = ComparisonType.GEQGREATER = ComparisonType.GREATERLEQ = ComparisonType.LEQLESS = ComparisonType.LESSclass ConditionalFormulaFormula with the right hand side being a condition.\n",
"\n",
"class CumulativeRewardFormulaSummed rewards over a the paths\n",
"\n",
"class EventuallyFormulaFormula for eventually\n",
"\n",
"class FormulaGeneric Storm Formula\n",
"\n",
"cloneself: stormpy.logic.logic.Formulastormpy.logic.logic.Formulaproperty is_probability_operatoris it a probability operator\n",
"\n",
"property is_reward_operatoris it a reward operator\n",
"\n",
"substituteself: stormpy.logic.logic.Formulaconstants_map: Dict[stormpy.storage.storage.Variable, stormpy.storage.storage.Expression]stormpy.logic.logic.FormulaSubstitute variables\n",
"\n",
"substitute_labels_by_labelsself: stormpy.logic.logic.Formulareplacements: Dict[str, str]stormpy.logic.logic.Formulasubstitute label occurences\n",
"\n",
"class GloballyFormulaFormula for globally\n",
"\n",
"class InstantaneousRewardFormulaInstantaneous reward\n",
"\n",
"class LongRunAvarageOperatorLong run average operator\n",
"\n",
"class LongRunAverageRewardFormulaLong run average reward\n",
"\n",
"class OperatorFormulaOperator formula\n",
"\n",
"property comparison_typeComparison type of bound\n",
"\n",
"property has_boundFlag if formula is bounded\n",
"\n",
"property has_optimality_typeFlag if an optimality type is present\n",
"\n",
"property optimality_typeFlag for the optimality type\n",
"\n",
"remove_boundself: stormpy.logic.logic.OperatorFormulaNoneremove_optimality_typeself: stormpy.logic.logic.OperatorFormulaNoneremove the optimality type\n",
"\n",
"set_boundself: stormpy.logic.logic.OperatorFormulacomparison_type: stormpy.logic.logic.ComparisonTypebound: stormpy.storage.storage.ExpressionNoneSet bound\n",
"\n",
"set_optimality_typeself: stormpy.logic.logic.OperatorFormulanew_optimality_type: stormpy.core.OptimizationDirectionNoneset the optimality type (use remove optimiality type for clearing)\n",
"\n",
"property thresholdThreshold of bound (currently only applicable to rational expressions)\n",
"\n",
"property threshold_exprclass PathFormulaFormula about the probability of a set of paths in an automaton\n",
"\n",
"class ProbabilityOperatorProbability operator\n",
"\n",
"class RewardOperatorReward operator\n",
"\n",
"has_reward_nameself: stormpy.logic.logic.RewardOperatorboolproperty reward_nameclass StateFormulaFormula about a state of an automaton\n",
"\n",
"class TimeOperatorThe time operator\n",
"\n",
"class UnaryBooleanStateFormulaUnary boolean state formula\n",
"\n",
"class UnaryPathFormulaPath formula with one operand\n",
"\n",
"property is_bounded_until_formulaproperty is_eventually_formulaproperty is_until_formulaproperty subformulathe subformula\n",
"\n",
"class UnaryStateFormulaState formula with one operand\n",
"\n",
"property subformulathe subformula\n",
"\n",
"class UntilFormulaPath Formula for unbounded until"
]
}
],
"metadata": {
"date": 1598178166.754715,
"filename": "logic.rst",
"kernelspec": {
"display_name": "Python",
"language": "python3",
"name": "python3"
},
"title": "Stormpy.logic"
},
"nbformat": 4,
"nbformat_minor": 4
}
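A hedged sketch of working with the formula classes above; obtaining a formula via stormpy.parse_properties is an assumption taken from the core API, not from this file:

    import stormpy

    # The raw_formula of a parsed property is a stormpy.logic Formula.
    prop = stormpy.parse_properties('P=? [ F "goal" ]')[0]
    formula = prop.raw_formula

    print(formula.is_probability_operator)  # True for a ProbabilityOperator
    print(formula.has_bound)                # whether a bound such as P<0.5 is attached

    # Rename labels without rebuilding the formula.
    renamed = formula.substitute_labels_by_labels({"goal": "target"})
    print(renamed)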

7
doc/source/api/logic.rst

@@ -1,7 +0,0 @@
Stormpy.logic
**************************
.. automodule:: stormpy.logic
    :members:
    :undoc-members:
    :imported-members:

110
doc/source/api/pars.ipynb

@@ -0,0 +1,110 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Stormpy.pars\n",
"\n",
"class DtmcParameterLiftingModelCheckerRegion model checker for DTMCs\n",
"\n",
"get_bound_all_statesself: stormpy.pars.pars.DtmcParameterLiftingModelCheckerenvironment: stormpy.core.Environmentregion: stormpy.pars.pars.ParameterRegionmaximise: bool = Truestormpy.core.ExplicitQuantitativeCheckResultGet bound\n",
"\n",
"class MdpParameterLiftingModelCheckerRegion model checker for MPDs\n",
"\n",
"get_bound_all_statesself: stormpy.pars.pars.MdpParameterLiftingModelCheckerenvironment: stormpy.core.Environmentregion: stormpy.pars.pars.ParameterRegionmaximise: bool = Truestormpy.core.ExplicitQuantitativeCheckResultGet bound\n",
"\n",
"class ModelInstantiatormodelClass for instantiating models.\n",
"\n",
"instantiatevaluationInstantiate model with given valuation.\n",
":param valuation: Valuation from parameter to value.\n",
":return: Instantiated model.\n",
"\n",
"class ModelTypeType of the model\n",
"\n",
"CTMC = ModelType.CTMCDTMC = ModelType.DTMCMA = ModelType.MAMDP = ModelType.MDPPOMDP = ModelType.POMDPclass PCtmcInstantiatorInstantiate PCTMCs to CTMCs\n",
"\n",
"instantiateself: stormpy.pars.pars.PCtmcInstantiatorarg0: Dict[pycarl.core.Variable, pycarl.cln.cln.Rational]stormpy.storage.storage.SparseCtmcInstantiate model with given parameter values\n",
"\n",
"class PDtmcExactInstantiationCheckerInstantiate pDTMCs to exact DTMCs and immediately check\n",
"\n",
"checkself: stormpy.pars.pars.PDtmcExactInstantiationCheckerenv: stormpy.core.Environmentinstantiation: Dict[pycarl.core.Variable, pycarl.cln.cln.Rational]stormpy.core._CheckResultset_graph_preservingself: stormpy.pars.pars._PDtmcExactInstantiationCheckerBasevalue: boolNoneclass PDtmcInstantiationCheckerInstantiate pDTMCs to DTMCs and immediately check\n",
"\n",
"checkself: stormpy.pars.pars.PDtmcInstantiationCheckerenv: stormpy.core.Environmentinstantiation: Dict[pycarl.core.Variable, pycarl.cln.cln.Rational]stormpy.core._CheckResultset_graph_preservingself: stormpy.pars.pars._PDtmcInstantiationCheckerBasevalue: boolNoneclass PDtmcInstantiatorInstantiate PDTMCs to DTMCs\n",
"\n",
"instantiateself: stormpy.pars.pars.PDtmcInstantiatorarg0: Dict[pycarl.core.Variable, pycarl.cln.cln.Rational]stormpy.storage.storage.SparseDtmcInstantiate model with given parameter values\n",
"\n",
"class PMaInstantiatorInstantiate PMAs to MAs\n",
"\n",
"instantiateself: stormpy.pars.pars.PMaInstantiatorarg0: Dict[pycarl.core.Variable, pycarl.cln.cln.Rational]stormpy.storage.storage.SparseMAInstantiate model with given parameter values\n",
"\n",
"class PMdpExactInstantiationCheckerInstantiate PMDP to exact MDPs and immediately check\n",
"\n",
"checkself: stormpy.pars.pars.PMdpExactInstantiationCheckerenv: stormpy.core.Environmentinstantiation: Dict[pycarl.core.Variable, pycarl.cln.cln.Rational]stormpy.core._CheckResultset_graph_preservingself: stormpy.pars.pars._PMdpExactInstantiationCheckerBasevalue: boolNoneclass PMdpInstantiationCheckerInstantiate PMDP to MDPs and immediately check\n",
"\n",
"checkself: stormpy.pars.pars.PMdpInstantiationCheckerenv: stormpy.core.Environmentinstantiation: Dict[pycarl.core.Variable, pycarl.cln.cln.Rational]stormpy.core._CheckResultset_graph_preservingself: stormpy.pars.pars._PMdpInstantiationCheckerBasevalue: boolNoneclass PMdpInstantiatorInstantiate PMDPs to MDPs\n",
"\n",
"instantiateself: stormpy.pars.pars.PMdpInstantiatorarg0: Dict[pycarl.core.Variable, pycarl.cln.cln.Rational]stormpy.storage.storage.SparseMdpInstantiate model with given parameter values\n",
"\n",
"class ParameterRegionParameter region\n",
"\n",
"property areaGet area\n",
"\n",
"create_from_stringregion_string: strvariables: Set[pycarl.core.Variable]stormpy.pars.pars.ParameterRegionCreate region from string\n",
"\n",
"class PartialPCtmcInstantiatorInstantiate PCTMCs to CTMCs\n",
"\n",
"instantiateself: stormpy.pars.pars.PartialPCtmcInstantiatorarg0: Dict[pycarl.core.Variable, pycarl.cln.cln.Rational]stormpy.storage.storage.SparseParametricCtmcInstantiate model with given parameter values\n",
"\n",
"class PartialPDtmcInstantiatorInstantiate PDTMCs to DTMCs\n",
"\n",
"instantiateself: stormpy.pars.pars.PartialPDtmcInstantiatorarg0: Dict[pycarl.core.Variable, pycarl.cln.cln.Rational]stormpy.storage.storage.SparseParametricDtmcInstantiate model with given parameter values\n",
"\n",
"class PartialPMaInstantiatorInstantiate PMAs to MAs\n",
"\n",
"instantiateself: stormpy.pars.pars.PartialPMaInstantiatorarg0: Dict[pycarl.core.Variable, pycarl.cln.cln.Rational]stormpy.storage.storage.SparseParametricMAInstantiate model with given parameter values\n",
"\n",
"class PartialPMdpInstantiatorInstantiate PMDPs to MDPs\n",
"\n",
"instantiateself: stormpy.pars.pars.PartialPMdpInstantiatorarg0: Dict[pycarl.core.Variable, pycarl.cln.cln.Rational]stormpy.storage.storage.SparseParametricMdpInstantiate model with given parameter values\n",
"\n",
"class RegionModelCheckerRegion model checker via paramater lifting\n",
"\n",
"check_regionself: stormpy.pars.pars.RegionModelCheckerenvironment: stormpy.core.Environmentregion: stormpy.pars.pars.ParameterRegionhypothesis: stormpy.pars.pars.RegionResultHypothesis = RegionResultHypothesis.UNKNOWNinitialResult: stormpy.pars.pars.RegionResult = RegionResult.UNKNOWNsampleVertices: bool = Falsestormpy.pars.pars.RegionResultCheck region\n",
"\n",
"get_boundself: stormpy.pars.pars.RegionModelCheckerenvironment: stormpy.core.Environmentregion: stormpy.pars.pars.ParameterRegionmaximise: bool = Truepycarl.cln.cln.FactorizedRationalFunctionGet bound\n",
"\n",
"get_split_suggestionself: stormpy.pars.pars.RegionModelCheckerDict[pycarl.core.Variable, float]Get estimate\n",
"\n",
"specifyself: stormpy.pars.pars.RegionModelCheckerenvironment: stormpy.core.Environmentmodel: stormpy.storage.storage._SparseParametricModelformula: stormpy.logic.logic.Formulagenerate_splitting_estimate: bool = Falseallow_model_simplification: bool = TrueNonespecify arguments\n",
"\n",
"class RegionResultTypes of region check results\n",
"\n",
"ALLSAT = RegionResult.ALLSATALLVIOLATED = RegionResult.ALLVIOLATEDCENTERSAT = RegionResult.CENTERSATCENTERVIOLATED = RegionResult.CENTERVIOLATEDEXISTSBOTH = RegionResult.EXISTSBOTHEXISTSSAT = RegionResult.EXISTSSATEXISTSVIOLATED = RegionResult.EXISTSVIOLATEDUNKNOWN = RegionResult.UNKNOWNclass RegionResultHypothesisHypothesis for the result of a parameter region\n",
"\n",
"ALLSAT = RegionResultHypothesis.ALLSATALLVIOLATED = RegionResultHypothesis.ALLVIOLATEDUNKNOWN = RegionResultHypothesis.UNKNOWNexception StormErrormessageBase class for exceptions in Storm.\n",
"\n",
"create_region_checkerenvironment: stormpy.core.Environmentmodel: stormpy.storage.storage._SparseParametricModelformula: stormpy.logic.logic.Formulagenerate_splitting_estimate: bool = Falseallow_model_simplification: bool = Truestormpy.pars.pars.RegionModelCheckerCreate region checker\n",
"\n",
"gather_derivativesmodel: stormpy.storage.storage._SparseParametricModelvar: pycarl.core.VariableSet[pycarl.cln.cln.FactorizedPolynomial]Gather all derivatives of transition probabilities\n",
"\n",
"simplify_modelmodelformulaSimplify parametric model preserving the given formula by eliminating states with constant outgoing probabilities.\n",
":param model: Model.\n",
":param formula: Formula.\n",
":return: Tuple of simplified model and simplified formula."
]
}
],
"metadata": {
"date": 1598178166.7897925,
"filename": "pars.rst",
"kernelspec": {
"display_name": "Python",
"language": "python3",
"name": "python3"
},
"title": "Stormpy.pars"
},
"nbformat": 4,
"nbformat_minor": 4
}
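A minimal sketch of instantiating a parametric DTMC with PDtmcInstantiator. Building the parametric model and collecting its parameters relies on core/storage functions (parse_prism_program, build_parametric_model, collect_probability_parameters, RationalRF) that are assumptions here, not documented in this file:

    import stormpy
    import stormpy.pars

    # Build a parametric DTMC (model construction is assumed, see note above).
    program = stormpy.parse_prism_program("parametric_die.pm")  # hypothetical path
    properties = stormpy.parse_properties('P=? [ F "one" ]', program)
    model = stormpy.build_parametric_model(program, properties)

    # Fix every parameter to 0.4 and obtain a concrete DTMC.
    parameters = model.collect_probability_parameters()
    point = {p: stormpy.RationalRF(0.4) for p in parameters}

    instantiator = stormpy.pars.PDtmcInstantiator(model)
    concrete_dtmc = instantiator.instantiate(point)
    print(concrete_dtmc.model_type)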

7
doc/source/api/pars.rst

@@ -1,7 +0,0 @@
Stormpy.pars
**************************
.. automodule:: stormpy.pars
    :members:
    :undoc-members:
    :imported-members:

759
doc/source/api/storage.ipynb

@@ -0,0 +1,759 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Stormpy.storage\n",
"\n",
"class Bdd_SylvanBdd\n",
"\n",
"to_expressionself: stormpy.storage.storage.Bdd_Sylvanexpression_manager: storm::expressions::ExpressionManagerTuple[List[storm::expressions::Expression], Dict[int, storm::expressions::Variable]]class BitVectorgetself: stormpy.storage.storage.BitVectorindex: intboolload_from_stringdescription: strstormpy.storage.storage.BitVectornumber_of_set_bitsself: stormpy.storage.storage.BitVectorintsetself: stormpy.storage.storage.BitVectorindex: intvalue: bool = TrueNoneSet\n",
"\n",
"sizeself: stormpy.storage.storage.BitVectorintstore_as_stringself: stormpy.storage.storage.BitVectorstrclass ChoiceLabelingLabeling for choices\n",
"\n",
"add_label_to_choiceself: stormpy.storage.storage.ChoiceLabelinglabel: strstate: intNoneAdds a label to a given choice\n",
"\n",
"get_choicesself: stormpy.storage.storage.ChoiceLabelinglabel: strstormpy.storage.storage.BitVectorGet all choices which have the given label\n",
"\n",
"get_labels_of_choiceself: stormpy.storage.storage.ChoiceLabelingchoice: intSet[str]Get labels of the given choice\n",
"\n",
"set_choicesself: stormpy.storage.storage.ChoiceLabelinglabel: strchoices: stormpy.storage.storage.BitVectorNoneAdd a label to a the given choices\n",
"\n",
"class ChoiceOriginsThis class represents the origin of choices of a model in terms of the input model spec.\n",
"\n",
"as_jani_choice_originsself: stormpy.storage.storage.ChoiceOriginsstorm::storage::sparse::JaniChoiceOriginsas_prism_choice_originsself: stormpy.storage.storage.ChoiceOriginsstorm::storage::sparse::PrismChoiceOriginsget_choice_infoself: stormpy.storage.storage.ChoiceOriginsidentifier: intstrhuman readable string\n",
"\n",
"get_identifier_infoself: stormpy.storage.storage.ChoiceOriginsidentifier: intstrhuman readable string\n",
"\n",
"get_number_of_identifiersself: stormpy.storage.storage.ChoiceOriginsintthe number of considered identifier\n",
"\n",
"is_jani_choice_originsself: stormpy.storage.storage.ChoiceOriginsboolis_prism_choice_originsself: stormpy.storage.storage.ChoiceOriginsboolclass DdManager_Sylvanget_meta_variableself: stormpy.storage.storage.DdManager_Sylvanexpression_variable: storm::expressions::Variablestormpy.storage.storage.DdMetaVariable_Sylvanclass DdMetaVariableTypeBitvector = DdMetaVariableType.BitvectorBool = DdMetaVariableType.BoolInt = DdMetaVariableType.Intclass DdMetaVariable_Sylvancompute_indicesself: stormpy.storage.storage.DdMetaVariable_Sylvansorted: bool = TrueList[int]property lowest_valueproperty nameproperty typeclass Dd_SylvanDd\n",
"\n",
"property dd_managerget the manager\n",
"\n",
"property meta_variablesthe contained meta variables\n",
"\n",
"property node_countget node count\n",
"\n",
"class DistributionDoubleFinite Support Distribution\n",
"\n",
"class ExpressionHolds an expression\n",
"\n",
"Andarg0: stormpy.storage.storage.Expressionarg1: stormpy.storage.storage.Expressionstormpy.storage.storage.ExpressionConjunctionarg0: List[stormpy.storage.storage.Expression]stormpy.storage.storage.ExpressionDisjunctionarg0: List[stormpy.storage.storage.Expression]stormpy.storage.storage.ExpressionDividearg0: stormpy.storage.storage.Expressionarg1: stormpy.storage.storage.Expressionstormpy.storage.storage.ExpressionEqarg0: stormpy.storage.storage.Expressionarg1: stormpy.storage.storage.Expressionstormpy.storage.storage.ExpressionGeqarg0: stormpy.storage.storage.Expressionarg1: stormpy.storage.storage.Expressionstormpy.storage.storage.ExpressionGreaterarg0: stormpy.storage.storage.Expressionarg1: stormpy.storage.storage.Expressionstormpy.storage.storage.ExpressionIffarg0: stormpy.storage.storage.Expressionarg1: stormpy.storage.storage.Expressionstormpy.storage.storage.ExpressionImpliesarg0: stormpy.storage.storage.Expressionarg1: stormpy.storage.storage.Expressionstormpy.storage.storage.ExpressionLeqarg0: stormpy.storage.storage.Expressionarg1: stormpy.storage.storage.Expressionstormpy.storage.storage.ExpressionLessarg0: stormpy.storage.storage.Expressionarg1: stormpy.storage.storage.Expressionstormpy.storage.storage.ExpressionMinusarg0: stormpy.storage.storage.Expressionarg1: stormpy.storage.storage.Expressionstormpy.storage.storage.ExpressionModuloarg0: stormpy.storage.storage.Expressionarg1: stormpy.storage.storage.Expressionstormpy.storage.storage.ExpressionMultiplyarg0: stormpy.storage.storage.Expressionarg1: stormpy.storage.storage.Expressionstormpy.storage.storage.ExpressionNeqarg0: stormpy.storage.storage.Expressionarg1: stormpy.storage.storage.Expressionstormpy.storage.storage.ExpressionOrarg0: stormpy.storage.storage.Expressionarg1: stormpy.storage.storage.Expressionstormpy.storage.storage.ExpressionPlusarg0: stormpy.storage.storage.Expressionarg1: stormpy.storage.storage.Expressionstormpy.storage.storage.Expressionproperty arityThe arity of the expression\n",
"\n",
"contains_variableself: stormpy.storage.storage.Expressionvariables: Set[stormpy.storage.storage.Variable]boolCheck if the expression contains any of the given variables.\n",
"\n",
"contains_variablesself: stormpy.storage.storage.ExpressionboolCheck if the expression contains variables.\n",
"\n",
"evaluate_as_boolself: stormpy.storage.storage.ExpressionboolGet the boolean value this expression evaluates to\n",
"\n",
"evaluate_as_doubleself: stormpy.storage.storage.ExpressionfloatGet the double value this expression evaluates to\n",
"\n",
"evaluate_as_intself: stormpy.storage.storage.ExpressionintGet the integer value this expression evaluates to\n",
"\n",
"evaluate_as_rationalself: stormpy.storage.storage.Expression__gmp_expr<__mpq_struct [1], __mpq_struct [1]>Get the rational number this expression evaluates to\n",
"\n",
"get_operandself: stormpy.storage.storage.ExpressionoperandIndex: intstormpy.storage.storage.ExpressionGet the operand at the given index\n",
"\n",
"get_variablesself: stormpy.storage.storage.ExpressionSet[stormpy.storage.storage.Variable]Get the variables\n",
"\n",
"has_boolean_typeself: stormpy.storage.storage.ExpressionboolCheck if the expression is a boolean\n",
"\n",
"has_integer_typeself: stormpy.storage.storage.ExpressionboolCheck if the expression is an integer\n",
"\n",
"has_rational_typeself: stormpy.storage.storage.ExpressionboolCheck if the expression is a rational\n",
"\n",
"identifierself: stormpy.storage.storage.ExpressionstrRetrieves the identifier associated with this expression if this expression is a variable\n",
"\n",
"property is_function_applicationTrue iff the expression is a function application (of any sort\n",
"\n",
"is_literalself: stormpy.storage.storage.ExpressionboolCheck if the expression is a literal\n",
"\n",
"is_variableself: stormpy.storage.storage.ExpressionboolCheck if the expression is a variable\n",
"\n",
"property managerGet the manager\n",
"\n",
"property operatorThe operator of the expression (if it is a function application)\n",
"\n",
"simplifyself: stormpy.storage.storage.Expressionstormpy.storage.storage.ExpressionSimplify expression\n",
"\n",
"substituteself: stormpy.storage.storage.Expressionsubstitution_map: Dict[stormpy.storage.storage.Variable, stormpy.storage.storage.Expression]stormpy.storage.storage.Expressionproperty typeGet the Type\n",
"\n",
"class ExpressionManagerManages variables for expressions\n",
"\n",
"create_booleanself: stormpy.storage.storage.ExpressionManagerboolean: boolstorm::expressions::ExpressionCreate expression from boolean\n",
"\n",
"create_boolean_variableself: stormpy.storage.storage.ExpressionManagername: strauxiliary: bool = Falsestorm::expressions::Variablecreate Boolean variable\n",
"\n",
"create_integerself: stormpy.storage.storage.ExpressionManagerinteger: intstorm::expressions::ExpressionCreate expression from integer number\n",
"\n",
"create_integer_variableself: stormpy.storage.storage.ExpressionManagername: strauxiliary: bool = Falsestorm::expressions::Variablecreate Integer variable\n",
"\n",
"create_rationalself: stormpy.storage.storage.ExpressionManager, rational: __gmp_expr<__mpq_struct [1], __mpq_struct [1]>storm::expressions::ExpressionCreate expression from rational number\n",
"\n",
"create_rational_variableself: stormpy.storage.storage.ExpressionManagername: strauxiliary: bool = Falsestorm::expressions::Variablecreate Rational variable\n",
"\n",
"get_variableself: stormpy.storage.storage.ExpressionManagername: strstorm::expressions::Variableget variably by name\n",
"\n",
"class ExpressionParserParser for storm-expressions\n",
"\n",
"parseself: stormpy.storage.storage.ExpressionParserstring: strignore_error: bool = Falsestormpy.storage.storage.Expressionparse\n",
"\n",
"set_identifier_mappingself: stormpy.storage.storage.ExpressionParserarg0: Dict[str, stormpy.storage.storage.Expression]Nonesets identifiers\n",
"\n",
"class ExpressionTypeThe type of an expression\n",
"\n",
"property is_booleanproperty is_integerproperty is_rationalclass ItemLabelingLabeling\n",
"\n",
"add_labelself: stormpy.storage.storage.ItemLabelinglabel: strNoneAdd label\n",
"\n",
"contains_labelself: stormpy.storage.storage.ItemLabelinglabel: strboolCheck if the given label is contained in the labeling\n",
"\n",
"get_labelsself: stormpy.storage.storage.ItemLabelingSet[str]Get all labels\n",
"\n",
"class JaniAssignmentJani Assignment\n",
"\n",
"property expressionclass JaniAutomatonA Jani Automation\n",
"\n",
"add_edgeself: stormpy.storage.storage.JaniAutomatonedge: storm::jani::EdgeNoneadd_initial_locationself: stormpy.storage.storage.JaniAutomatonindex: intNoneadd_locationself: stormpy.storage.storage.JaniAutomatonlocation: storm::jani::Locationintadds a new location, returns the index\n",
"\n",
"property edgesget edges\n",
"\n",
"property initial_location_indicesproperty initial_states_restrictioninitial state restriction\n",
"\n",
"property location_variableproperty locationsproperty nameproperty variablesclass JaniBoundedIntegerVariableA Bounded Integer\n",
"\n",
"class JaniChoiceOriginsThis class represents for each choice the origin in the jani spec.\n",
"\n",
"get_edge_index_setself: stormpy.storage.storage.JaniChoiceOriginschoice_index: intstormpy.core.FlatSetreturns the set of edges that induced the choice\n",
"\n",
"property modelretrieves the associated JANI model\n",
"\n",
"class JaniConstantA Constant in JANI\n",
"\n",
"property definedis constant defined by some expression\n",
"\n",
"property expression_variableexpression variable for this constant\n",
"\n",
"property namename of constant\n",
"\n",
"property typetype of constant\n",
"\n",
"class JaniEdgeA Jani Edge\n",
"\n",
"property action_indexaction index\n",
"\n",
"property colorcolor for the edge\n",
"\n",
"property destinationsedge destinations\n",
"\n",
"property guardedge guard\n",
"\n",
"has_silent_actionself: stormpy.storage.storage.JaniEdgeboolIs the edge labelled with the silent action\n",
"\n",
"property nr_destinationsnr edge destinations\n",
"\n",
"property rateedge rate\n",
"\n",
"property source_location_indexindex for source location\n",
"\n",
"substituteself: stormpy.storage.storage.JaniEdge, mapping: Dict[storm::expressions::Variable, storm::expressions::Expression]Noneproperty template_edgetemplate edge\n",
"\n",
"class JaniEdgeDestinationDestination in Jani\n",
"\n",
"property assignmentsproperty probabilityproperty target_location_indexclass JaniInformationObjectAn object holding information about a JANI model\n",
"\n",
"property avg_var_domain_sizeproperty model_typeproperty nr_automataproperty nr_edgesproperty nr_variablesproperty state_domain_sizeclass JaniLocationA Location in JANI\n",
"\n",
"property assignmentslocation assignments\n",
"\n",
"property namename of the location\n",
"\n",
"class JaniLocationExpanderA transformer for Jani expanding variables into locations\n",
"\n",
"get_resultself: stormpy.storage.storage.JaniLocationExpanderstormpy.storage.storage.JaniModeltransformself: stormpy.storage.storage.JaniLocationExpanderautomaton_name: strvariable_name: strNoneclass JaniModelA Jani Model\n",
"\n",
"add_automatonself: stormpy.storage.storage.JaniModelautomaton: storm::jani::Automatonintadd an automaton (with a unique name)\n",
"\n",
"property automataget automata\n",
"\n",
"check_validself: stormpy.storage.storage.JaniModelNoneSome basic checks to ensure validity\n",
"\n",
"property constantsget constants\n",
"\n",
"decode_automaton_and_edge_indexarg0: intTuple[int, int]get edge and automaton from edge/automaton index\n",
"\n",
"define_constantsself: stormpy.storage.storage.JaniModel, map: Dict[storm::expressions::Variable, storm::expressions::Expression]stormpy.storage.storage.JaniModeldefine constants with a mapping from the corresponding expression variables to expressions\n",
"\n",
"encode_automaton_and_edge_indexarg0: intarg1: intintget edge/automaton-index\n",
"\n",
"property expression_managerget expression manager\n",
"\n",
"finalizeself: stormpy.storage.storage.JaniModelNonefinalizes the model. After this action, be careful changing the data structure.\n",
"\n",
"flatten_compositionself: stormpy.storage.storage.JaniModelsmt_solver_factory: stormpy.utility.utility.SmtSolverFactory=<stormpy.utility.utility.SmtSolverFactory object at 0x7fd42a716670>stormpy.storage.storage.JaniModelget_automatonself: stormpy.storage.storage.JaniModelname: strstorm::jani::Automatonget_automaton_indexself: stormpy.storage.storage.JaniModelname: strintget index for automaton name\n",
"\n",
"get_constantself: stormpy.storage.storage.JaniModelname: strstorm::jani::Constantget constant by name\n",
"\n",
"property global_variableshas_standard_compositionself: stormpy.storage.storage.JaniModelboolis the composition the standard composition\n",
"\n",
"property has_undefined_constantsFlag if program has undefined constants\n",
"\n",
"property initial_states_restrictioninitial states restriction\n",
"\n",
"make_standard_compliantself: stormpy.storage.storage.JaniModelNonemake standard JANI compliant\n",
"\n",
"property model_typeModel type\n",
"\n",
"property namemodel name\n",
"\n",
"remove_constantself: stormpy.storage.storage.JaniModelconstant_name: strNoneremove a constant. Make sure the constant does not appear in the model.\n",
"\n",
"replace_automatonself: stormpy.storage.storage.JaniModelindex: intnew_automaton: storm::jani::AutomatonNonereplace automaton at index\n",
"\n",
"restrict_edgesself: stormpy.storage.storage.JaniModeledge_set: stormpy.core.FlatSetstormpy.storage.storage.JaniModelrestrict model to edges given by set\n",
"\n",
"set_model_typeself: stormpy.storage.storage.JaniModelarg0: stormpy.core.JaniModelTypeNoneSets (only) the model type\n",
"\n",
"set_standard_system_compositionself: stormpy.storage.storage.JaniModelNonesets the composition to the standard composition\n",
"\n",
"substitute_constantsself: stormpy.storage.storage.JaniModelstormpy.storage.storage.JaniModelsubstitute constants\n",
"\n",
"substitute_functionsself: stormpy.storage.storage.JaniModelNonesubstitute functions\n",
"\n",
"to_dotself: stormpy.storage.storage.JaniModelstrproperty undefined_constants_are_graph_preservingFlag if the undefined constants do not change the graph structure\n",
"\n",
"class JaniOrderedAssignmentsSet of assignments\n",
"\n",
"addself: stormpy.storage.storage.JaniOrderedAssignmentsnew_assignment: storm::jani::Assignmentadd_to_existing: bool=Falseboolcloneself: stormpy.storage.storage.JaniOrderedAssignmentsstormpy.storage.storage.JaniOrderedAssignmentsclone assignments (performs a deep copy)\n",
"\n",
"substituteself: stormpy.storage.storage.JaniOrderedAssignments, substitution_map: Dict[storm::expressions::Variable, storm::expressions::Expression]Nonesubstitute in rhs according to given substitution map\n",
"\n",
"class JaniScopeChangerA transformer for Jani changing variables from local to global and vice versa\n",
"\n",
"make_variables_localself: stormpy.storage.storage.JaniScopeChangermodel: stormpy.storage.storage.JaniModelproperties: List[stormpy.core.Property] = []stormpy.storage.storage.JaniModelclass JaniTemplateEdgeTemplate edge, internal data structure for edges\n",
"\n",
"add_destinationself: stormpy.storage.storage.JaniTemplateEdgearg0: storm::jani::TemplateEdgeDestinationNoneproperty assignmentsproperty destinationsproperty guardclass JaniTemplateEdgeDestinationTemplate edge destination, internal data structure for edge destinations\n",
"\n",
"property assignmentsclass JaniVariableA Variable in JANI\n",
"\n",
"property expression_variableexpression variable for this variable\n",
"\n",
"property namename of constant\n",
"\n",
"class JaniVariableSetJani Set of Variables\n",
"\n",
"add_bounded_integer_variableself: stormpy.storage.storage.JaniVariableSetvariable: storm::jani::BoundedIntegerVariablestorm::jani::BoundedIntegerVariableadd_variableself: stormpy.storage.storage.JaniVariableSetarg0: storm::jani::VariableNoneemptyself: stormpy.storage.storage.JaniVariableSetboolis there a variable in the set?\n",
"\n",
"get_variable_by_expr_variableself: stormpy.storage.storage.JaniVariableSetarg0: storm::expressions::Variablestorm::jani::Variableget_variable_by_nameself: stormpy.storage.storage.JaniVariableSetarg0: strstorm::jani::Variableclass ModelTypeType of the model\n",
"\n",
"CTMC = ModelType.CTMCDTMC = ModelType.DTMCMA = ModelType.MAMDP = ModelType.MDPPOMDP = ModelType.POMDPclass OperatorTypeType of an operator (of any sort)\n",
"\n",
"And = OperatorType.AndCeil = OperatorType.CeilDivide = OperatorType.DivideEqual = OperatorType.EqualFloor = OperatorType.FloorGreater = OperatorType.GreaterGreaterOrEqual = OperatorType.GreaterOrEqualIff = OperatorType.IffImplies = OperatorType.ImpliesIte = OperatorType.IteLess = OperatorType.LessLessOrEqual = OperatorType.LessOrEqualMax = OperatorType.MaxMin = OperatorType.MinMinus = OperatorType.MinusModulo = OperatorType.ModuloNot = OperatorType.NotNotEqual = OperatorType.NotEqualOr = OperatorType.OrPlus = OperatorType.PlusPower = OperatorType.PowerTimes = OperatorType.TimesXor = OperatorType.Xorclass ParametricSparseMatrixParametric sparse matrix\n",
"\n",
"get_rowself: stormpy.storage.storage.ParametricSparseMatrixrow: intstorm::storage::SparseMatrix<carl::RationalFunction<carl::FactorizedPolynomial<carl::MultivariatePolynomial<cln::cl_RA, carl::MonomialComparator<&carl::Monomial::compareGradedLexical, true>, carl::StdMultivariatePolynomialPolicies<carl::NoReasons, carl::NoAllocator> > >, true> >::rowsGet row\n",
"\n",
"get_row_group_endself: stormpy.storage.storage.ParametricSparseMatrixarg0: intintget_row_group_startself: stormpy.storage.storage.ParametricSparseMatrixarg0: intintget_rowsself: stormpy.storage.storage.ParametricSparseMatrixrow_start: introw_end: intstorm::storage::SparseMatrix<carl::RationalFunction<carl::FactorizedPolynomial<carl::MultivariatePolynomial<cln::cl_RA, carl::MonomialComparator<&carl::Monomial::compareGradedLexical, true>, carl::StdMultivariatePolynomialPolicies<carl::NoReasons, carl::NoAllocator> > >, true> >::rowsGet rows from start to end\n",
"\n",
"property has_trivial_row_groupingTrivial row grouping\n",
"\n",
"property nr_columnsNumber of columns\n",
"\n",
"property nr_entriesNumber of non-zero entries\n",
"\n",
"property nr_rowsNumber of rows\n",
"\n",
"print_rowself: stormpy.storage.storage.ParametricSparseMatrixrow: intstrPrint row\n",
"\n",
"row_iterself: stormpy.storage.storage.ParametricSparseMatrixrow_start: introw_end: intiteratorGet iterator from start to end\n",
"\n",
"submatrixself: stormpy.storage.storage.ParametricSparseMatrixrow_constraint: stormpy.storage.storage.BitVectorcolumn_constraint: stormpy.storage.storage.BitVectorinsert_diagonal_entries: bool = Falsestormpy.storage.storage.ParametricSparseMatrixGet submatrix\n",
"\n",
"class ParametricSparseMatrixBuilderBuilder of parametric sparse matrix\n",
"\n",
"add_next_valueself: stormpy.storage.storage.ParametricSparseMatrixBuilderrow: intcolumn: intvalue: carl::RationalFunction<carl::FactorizedPolynomial<carl::MultivariatePolynomial<cln::cl_RAcarl::MonomialComparator<&carl::Monomial::compareGradedLexicaltrue>carl::StdMultivariatePolynomialPolicies<carl::NoReasonscarl::NoAllocator> > >true>NoneSets the matrix entry at the given row and column to the given value. After all entries have been added,\n",
"calling function build() is mandatory.\n",
"\n",
"Note: this is a linear setter. That is, it must be called consecutively for each entry, row by row and\n",
"column by column. As multiple entries per column are admitted, consecutive calls to this method are\n",
"admitted to mention the same row-column-pair. If rows are skipped entirely, the corresponding rows are\n",
"treated as empty. If these constraints are not met, an exception is thrown.\n",
"\n",
"\n",
"<dl style='margin: 20px 0;'>\n",
"<dt>Parameters</dt>\n",
"<dd>\n",
"- row (double) – The row in which the matrix entry is to be set \n",
"- column (double) – The column in which the matrix entry is to be set \n",
"- value ([RationalFunction](core#stormpy.RationalFunction)) – The value that is to be set at the specified row and column \n",
"\n",
"\n",
"</dd>\n",
"\n",
"</dl>\n",
"\n",
"buildself: stormpy.storage.storage.ParametricSparseMatrixBuilderoverridden_row_count: int=0overridden_column_count: int=0overridden-row_group_count: int=0storm::storage::SparseMatrix<carl::RationalFunction<carl::FactorizedPolynomial<carl::MultivariatePolynomial<cln::cl_RA, carl::MonomialComparator<&carl::Monomial::compareGradedLexical, true>, carl::StdMultivariatePolynomialPolicies<carl::NoReasons, carl::NoAllocator> > >, true> >Finalize the sparse matrix\n",
"\n",
"get_current_row_group_countself: stormpy.storage.storage.ParametricSparseMatrixBuilderintGet the current row group count\n",
"\n",
"get_last_columnself: stormpy.storage.storage.ParametricSparseMatrixBuilderintthe most recently used column\n",
"\n",
"get_last_rowself: stormpy.storage.storage.ParametricSparseMatrixBuilderintGet the most recently used row\n",
"\n",
"new_row_groupself: stormpy.storage.storage.ParametricSparseMatrixBuilderstarting_row: intNoneStart a new row group in the matrix\n",
"\n",
"replace_columnsself: stormpy.storage.storage.ParametricSparseMatrixBuilderreplacements: List[int]offset: intNoneReplaces all columns with id >= offset according to replacements.\n",
"Every state with id offset+i is replaced by the id in replacements[i]. Afterwards the columns are sorted.\n",
"\n",
"\n",
"<dl style='margin: 20px 0;'>\n",
"<dt>Parameters</dt>\n",
"<dd>\n",
"- const& replacements (std::vector<double>) – replacements Mapping indicating the replacements from offset+i -> value of i \n",
"- offset (int) – Offset to add to each id in vector index. \n",
"\n",
"\n",
"</dd>\n",
"\n",
"</dl>\n",
"\n",
"class ParametricSparseMatrixEntryEntry of parametric sparse matrix\n",
"\n",
"property columnColumn\n",
"\n",
"set_valueself: stormpy.storage.storage.ParametricSparseMatrixEntryvalue: carl::RationalFunction<carl::FactorizedPolynomial<carl::MultivariatePolynomial<cln::cl_RAcarl::MonomialComparator<&carl::Monomial::compareGradedLexicaltrue>carl::StdMultivariatePolynomialPolicies<carl::NoReasonscarl::NoAllocator> > >true>NoneSet value\n",
"\n",
"valueself: stormpy.storage.storage.ParametricSparseMatrixEntrycarl::RationalFunction<carl::FactorizedPolynomial<carl::MultivariatePolynomial<cln::cl_RA, carl::MonomialComparator<&carl::Monomial::compareGradedLexical, true>, carl::StdMultivariatePolynomialPolicies<carl::NoReasons, carl::NoAllocator> > >, true>Value\n",
"\n",
"class ParametricSparseMatrixRowsSet of rows in a parametric sparse matrix\n",
"\n",
"class PrismAssignmentAn assignment in prism\n",
"\n",
"property expressionExpression for the update\n",
"\n",
"property variableVariable that is updated\n",
"\n",
"class PrismBooleanVariableA program boolean variable in a Prism program\n",
"\n",
"class PrismChoiceOriginsThis class represents for each choice the set of prism commands that induced the choice.\n",
"\n",
"get_command_setself: stormpy.storage.storage.PrismChoiceOriginschoice_index: intstormpy.core.FlatSetReturns the set of prism commands that induced the choice\n",
"\n",
"property programretrieves the associated Prism program\n",
"\n",
"class PrismCommandA command in a Prism program\n",
"\n",
"property global_indexGet global index\n",
"\n",
"property guard_expressionGet guard expression\n",
"\n",
"property updatesUpdates in the command\n",
"\n",
"class PrismConstantA constant in a Prism program\n",
"\n",
"property definedIs the constant defined?\n",
"\n",
"property definitionDefining expression\n",
"\n",
"property expression_variableExpression variable\n",
"\n",
"property nameConstant name\n",
"\n",
"property typeThe type of the constant\n",
"\n",
"class PrismIntegerVariableA program integer variable in a Prism program\n",
"\n",
"property lower_bound_expressionThe the lower bound expression of this integer variable\n",
"\n",
"property upper_bound_expressionThe the upper bound expression of this integer variable\n",
"\n",
"class PrismLabelA label in prism\n",
"\n",
"property expressionproperty nameclass PrismModelTypeType of the prism model\n",
"\n",
"CTMC = PrismModelType.CTMCCTMDP = PrismModelType.CTMDPDTMC = PrismModelType.DTMCMA = PrismModelType.MAMDP = PrismModelType.MDPUNDEFINED = PrismModelType.UNDEFINEDclass PrismModuleA module in a Prism program\n",
"\n",
"property boolean_variablesAll boolean Variables of this module\n",
"\n",
"property commandsCommands in the module\n",
"\n",
"get_boolean_variableself: stormpy.storage.storage.PrismModulevariable_name: strstorm::prism::BooleanVariableget_integer_variableself: stormpy.storage.storage.PrismModulevariable_name: strstorm::prism::IntegerVariableproperty integer_variablesAll integer Variables of this module\n",
"\n",
"property nameName of the module\n",
"\n",
"class PrismProgramA Prism Program\n",
"\n",
"property constantsGet Program Constants\n",
"\n",
"define_constantsself: stormpy.storage.storage.PrismProgram, arg0: Dict[storm::expressions::Variable, storm::expressions::Expression]stormpy.storage.storage.PrismProgramDefine constants\n",
"\n",
"property expression_managerGet the expression manager for expressions in this program\n",
"\n",
"flattenself: stormpy.storage.storage.PrismProgramsmt_factory: stormpy.utility.utility.SmtSolverFactory=<stormpy.utility.utility.SmtSolverFactory object at 0x7fd42a7e1730>stormpy.storage.storage.PrismProgramPut program into a single module\n",
"\n",
"get_constantself: stormpy.storage.storage.PrismProgramname: strstorm::prism::Constantget_label_expressionself: stormpy.storage.storage.PrismProgramlabel: strstorm::expressions::ExpressionGet the expression of the given label.\n",
"\n",
"get_moduleself: stormpy.storage.storage.PrismProgrammodule_name: strstorm::prism::Moduleproperty hasUndefinedConstantsDoes the program have undefined constants?\n",
"\n",
"property has_undefined_constantsFlag if program has undefined constants\n",
"\n",
"property isDeterministicModelDoes the program describe a deterministic model?\n",
"\n",
"property labelsGet all labels in the program\n",
"\n",
"property model_typeModel type\n",
"\n",
"property modulesModules in the program\n",
"\n",
"property nr_modulesNumber of modules\n",
"\n",
"restrict_commandsself: stormpy.storage.storage.PrismProgramarg0: stormpy.core.FlatSetstormpy.storage.storage.PrismProgramRestrict commands\n",
"\n",
"property reward_modelsThe defined reward models\n",
"\n",
"simplifyself: stormpy.storage.storage.PrismProgramstormpy.storage.storage.PrismProgramSimplify\n",
"\n",
"substitute_constantsself: stormpy.storage.storage.PrismProgramstormpy.storage.storage.PrismProgramSubstitute constants within program\n",
"\n",
"substitute_formulasself: stormpy.storage.storage.PrismProgramstormpy.storage.storage.PrismProgramSubstitute formulas within program\n",
"\n",
"to_janiself: stormpy.storage.storage.PrismProgramproperties: List[stormpy.core.Property]all_variables_global: bool = Truesuffix: str = ''Tuple[storm::jani::Model, List[stormpy.core.Property]]Transform to Jani program\n",
"\n",
"property undefined_constants_are_graph_preservingFlag if the undefined constants do not change the graph structure\n",
"\n",
"used_constantsself: stormpy.storage.storage.PrismProgramList[storm::prism::Constant]Compute Used Constants\n",
"\n",
"property variablesGet all Expression Variables used by the program\n",
"\n",
"class PrismRewardModelReward declaration in prism\n",
"\n",
"property nameget name of the reward model\n",
"\n",
"class PrismUpdateAn update in a Prism command\n",
"\n",
"property assignmentsAssignments in the update\n",
"\n",
"property probability_expressionThe probability expression for this update\n",
"\n",
"class PrismVariableA program variable in a Prism program\n",
"\n",
"property expression_variableThe expression variable corresponding to the variable\n",
"\n",
"property initial_value_expressionThe expression represented the initial value of the variable\n",
"\n",
"property nameVariable name\n",
"\n",
"class SchedulerChoiceDoubleA choice of a finite memory scheduler\n",
"\n",
"property definedIs the choice defined by the scheduler?\n",
"\n",
"property deterministicIs the choice deterministic (given by a Dirac distribution)?\n",
"\n",
"get_choiceself: stormpy.storage.storage.SchedulerChoiceDoublestorm::storage::Distribution<double, unsigned long>Get the distribution over the actions\n",
"\n",
"get_deterministic_choiceself: stormpy.storage.storage.SchedulerChoiceDoubleintGet the deterministic choice\n",
"\n",
"class SchedulerDoubleA Finite Memory Scheduler\n",
"\n",
"compute_action_supportself: stormpy.storage.storage.SchedulerDoublenondeterministic_choice_indices: List[int]stormpy.storage.storage.BitVectorproperty deterministicIs the scheduler deterministic?\n",
"\n",
"get_choiceself: stormpy.storage.storage.SchedulerDoublestate_index: intmemory_index: int = 0storm::storage::SchedulerChoice<double>property memory_sizeHow much memory does the scheduler take?\n",
"\n",
"property memorylessIs the scheduler memoryless?\n",
"\n",
"property partialIs the scheduler partial?\n",
"\n",
"class SparseCtmcCTMC in sparse representation\n",
"\n",
"property exit_ratesclass SparseDtmcDTMC in sparse representation\n",
"\n",
"class SparseMAMA in sparse representation\n",
"\n",
"apply_schedulerself: stormpy.storage.storage.SparseMAscheduler: storm::storage::Scheduler<double>drop_unreachable_states: bool=Truestormpy.storage.storage._SparseModelapply scheduler\n",
"\n",
"convert_to_ctmcself: stormpy.storage.storage.SparseMAstormpy.storage.storage.SparseCtmcConvert the MA into a CTMC.\n",
"\n",
"property convertible_to_ctmcCheck whether the MA can be converted into a CTMC.\n",
"\n",
"property exit_ratesproperty markovian_statesproperty nondeterministic_choice_indicesclass SparseMatrixSparse matrix\n",
"\n",
"get_rowself: stormpy.storage.storage.SparseMatrixrow: intstorm::storage::SparseMatrix<double>::rowsGet row\n",
"\n",
"get_row_group_endself: stormpy.storage.storage.SparseMatrixarg0: intintget_row_group_startself: stormpy.storage.storage.SparseMatrixarg0: intintget_rowsself: stormpy.storage.storage.SparseMatrixrow_start: introw_end: intstorm::storage::SparseMatrix<double>::rowsGet rows from start to end\n",
"\n",
"property has_trivial_row_groupingTrivial row grouping\n",
"\n",
"property nr_columnsNumber of columns\n",
"\n",
"property nr_entriesNumber of non-zero entries\n",
"\n",
"property nr_rowsNumber of rows\n",
"\n",
"print_rowself: stormpy.storage.storage.SparseMatrixrow: intstrPrint rows from start to end\n",
"\n",
"row_iterself: stormpy.storage.storage.SparseMatrixrow_start: introw_end: intiteratorGet iterator from start to end\n",
"\n",
"submatrixself: stormpy.storage.storage.SparseMatrixrow_constraint: stormpy.storage.storage.BitVectorcolumn_constraint: stormpy.storage.storage.BitVectorinsert_diagonal_entries: bool = Falsestormpy.storage.storage.SparseMatrixGet submatrix\n",
"\n",
"class SparseMatrixBuilderBuilder of sparse matrix\n",
"\n",
"add_next_valueself: stormpy.storage.storage.SparseMatrixBuilderrow: intcolumn: intvalue: floatNoneSets the matrix entry at the given row and column to the given value. After all entries have been added,\n",
"calling function build() is mandatory.\n",
"\n",
"Note: this is a linear setter. That is, it must be called consecutively for each entry, row by row and\n",
"column by column. As multiple entries per column are admitted, consecutive calls to this method are\n",
"admitted to mention the same row-column-pair. If rows are skipped entirely, the corresponding rows are\n",
"treated as empty. If these constraints are not met, an exception is thrown.\n",
"\n",
"\n",
"<dl style='margin: 20px 0;'>\n",
"<dt>Parameters</dt>\n",
"<dd>\n",
"- row (double) – The row in which the matrix entry is to be set \n",
"- column (double) – The column in which the matrix entry is to be set \n",
"- value (double) – The value that is to be set at the specified row and column \n",
"\n",
"\n",
"</dd>\n",
"\n",
"</dl>\n",
"\n",
"buildself: stormpy.storage.storage.SparseMatrixBuilderoverridden_row_count: int=0overridden_column_count: int=0overridden-row_group_count: int=0storm::storage::SparseMatrix<double>Finalize the sparse matrix\n",
"\n",
"get_current_row_group_countself: stormpy.storage.storage.SparseMatrixBuilderintGet the current row group count\n",
"\n",
"get_last_columnself: stormpy.storage.storage.SparseMatrixBuilderintthe most recently used column\n",
"\n",
"get_last_rowself: stormpy.storage.storage.SparseMatrixBuilderintGet the most recently used row\n",
"\n",
"new_row_groupself: stormpy.storage.storage.SparseMatrixBuilderstarting_row: intNoneStart a new row group in the matrix\n",
"\n",
"replace_columnsself: stormpy.storage.storage.SparseMatrixBuilderreplacements: List[int]offset: intNoneReplaces all columns with id >= offset according to replacements.\n",
"Every state with id offset+i is replaced by the id in replacements[i]. Afterwards the columns are sorted.\n",
"\n",
"\n",
"<dl style='margin: 20px 0;'>\n",
"<dt>Parameters</dt>\n",
"<dd>\n",
"- const& replacements (std::vector<double>) – replacements Mapping indicating the replacements from offset+i -> value of i \n",
"- offset (int) – Offset to add to each id in vector index. \n",
"\n",
"\n",
"</dd>\n",
"\n",
"</dl>\n",
"\n",
"class SparseMatrixEntryEntry of sparse matrix\n",
"\n",
"property columnColumn\n",
"\n",
"set_valueself: stormpy.storage.storage.SparseMatrixEntryvalue: floatNoneSet value\n",
"\n",
"valueself: stormpy.storage.storage.SparseMatrixEntryfloatValue\n",
"\n",
"class SparseMatrixRowsSet of rows in a sparse matrix\n",
"\n",
"class SparseMdpMDP in sparse representation\n",
"\n",
"apply_schedulerself: stormpy.storage.storage.SparseMdpscheduler: storm::storage::Scheduler<double>drop_unreachable_states: bool=Truestormpy.storage.storage._SparseModelapply scheduler\n",
"\n",
"get_choice_indexself: stormpy.storage.storage.SparseMdpstate: intaction_offset: intintgets the choice index for the offset action from the given state.\n",
"\n",
"get_nr_available_actionsself: stormpy.storage.storage.SparseMdpstate: intintproperty nondeterministic_choice_indicesclass SparseModelActionAction for state in sparse model\n",
"\n",
"property idId\n",
"\n",
"property transitionsGet transitions\n",
"\n",
"class SparseModelActionsActions for state in sparse model\n",
"\n",
"class SparseModelComponentsComponents required for building a sparse model\n",
"\n",
"property choice_labelingA list that stores a labeling for each choice\n",
"\n",
"property choice_originsStores for each choice from which parts of the input model description it originates\n",
"\n",
"property exit_ratesThe exit rate for each state. Must be given for CTMCs and MAs, if rate_transitions is false. Otherwise, it is optional.\n",
"\n",
"property markovian_statesA list that stores which states are Markovian (only for Markov Automata)\n",
"\n",
"property observability_classesThe POMDP observations\n",
"\n",
"property player1_matrixMatrix of player 1 choices (needed for stochastic two player games\n",
"\n",
"property rate_transitionsTrue iff the transition values (for Markovian choices) are interpreted as rates\n",
"\n",
"property reward_modelsReward models associated with the model\n",
"\n",
"property state_labelingThe state labeling\n",
"\n",
"property state_valuationsA list that stores for each state to which variable valuation it belongs\n",
"\n",
"property transition_matrixThe transition matrix\n",
"\n",
"class SparseModelStateState in sparse model\n",
"\n",
"property actionsGet actions\n",
"\n",
"property idId\n",
"\n",
"property labelsLabels\n",
"\n",
"class SparseModelStatesStates in sparse model\n",
"\n",
"class SparseParametricCtmcpCTMC in sparse representation\n",
"\n",
"class SparseParametricDtmcpDTMC in sparse representation\n",
"\n",
"class SparseParametricMApMA in sparse representation\n",
"\n",
"apply_schedulerself: stormpy.storage.storage.SparseParametricMAscheduler: storm::storage::Scheduler<carl::RationalFunction<carl::FactorizedPolynomial<carl::MultivariatePolynomial<cln::cl_RAcarl::MonomialComparator<&carl::Monomial::compareGradedLexicaltrue>carl::StdMultivariatePolynomialPolicies<carl::NoReasonscarl::NoAllocator> > >true> >drop_unreachable_states: bool=Truestormpy.storage.storage._SparseParametricModelapply scheduler\n",
"\n",
"property nondeterministic_choice_indicesclass SparseParametricMdppMDP in sparse representation\n",
"\n",
"apply_schedulerself: stormpy.storage.storage.SparseParametricMdpscheduler: storm::storage::Scheduler<carl::RationalFunction<carl::FactorizedPolynomial<carl::MultivariatePolynomial<cln::cl_RAcarl::MonomialComparator<&carl::Monomial::compareGradedLexicaltrue>carl::StdMultivariatePolynomialPolicies<carl::NoReasonscarl::NoAllocator> > >true> >drop_unreachable_states: bool=Truestormpy.storage.storage._SparseParametricModelapply scheduler\n",
"\n",
"property nondeterministic_choice_indicesclass SparseParametricModelActionAction for state in sparse parametric model\n",
"\n",
"property idId\n",
"\n",
"property transitionsGet transitions\n",
"\n",
"class SparseParametricModelActionsActions for state in sparse parametric model\n",
"\n",
"class SparseParametricModelStateState in sparse parametric model\n",
"\n",
"property actionsGet actions\n",
"\n",
"property idId\n",
"\n",
"property labelsLabels\n",
"\n",
"class SparseParametricModelStatesStates in sparse parametric model\n",
"\n",
"class SparseParametricPomdpPOMDP in sparse representation\n",
"\n",
"get_observationself: stormpy.storage.storage.SparseParametricPomdpstate: intintproperty nr_observationsproperty observationsclass SparseParametricRewardModelReward structure for parametric sparse models\n",
"\n",
"get_state_action_rewardself: stormpy.storage.storage.SparseParametricRewardModelarg0: intcarl::RationalFunction<carl::FactorizedPolynomial<carl::MultivariatePolynomial<cln::cl_RA, carl::MonomialComparator<&carl::Monomial::compareGradedLexical, true>, carl::StdMultivariatePolynomialPolicies<carl::NoReasons, carl::NoAllocator> > >, true>get_state_rewardself: stormpy.storage.storage.SparseParametricRewardModelarg0: intcarl::RationalFunction<carl::FactorizedPolynomial<carl::MultivariatePolynomial<cln::cl_RA, carl::MonomialComparator<&carl::Monomial::compareGradedLexical, true>, carl::StdMultivariatePolynomialPolicies<carl::NoReasons, carl::NoAllocator> > >, true>property has_state_action_rewardsproperty has_state_rewardsproperty has_transition_rewardsreduce_to_state_based_rewardsself: stormpy.storage.storage.SparseParametricRewardModeltransition_matrix: storm::storage::SparseMatrix<carl::RationalFunction<carl::FactorizedPolynomial<carl::MultivariatePolynomial<cln::cl_RAcarl::MonomialComparator<&carl::Monomial::compareGradedLexicaltrue>carl::StdMultivariatePolynomialPolicies<carl::NoReasonscarl::NoAllocator> > >true> >only_state_rewards: boolNoneReduce to state-based rewards\n",
"\n",
"property state_action_rewardsproperty state_rewardsproperty transition_rewardsclass SparsePomdpPOMDP in sparse representation\n",
"\n",
"get_observationself: stormpy.storage.storage.SparsePomdpstate: intintproperty nr_observationsproperty observationsclass SparseRewardModelReward structure for sparse models\n",
"\n",
"get_state_action_rewardself: stormpy.storage.storage.SparseRewardModelarg0: intfloatget_state_rewardself: stormpy.storage.storage.SparseRewardModelarg0: intfloatget_zero_reward_statesself: stormpy.storage.storage.SparseRewardModeltransition_matrix: storm::storage::SparseMatrix<double>stormpy.storage.storage.BitVectorget states where all rewards are zero\n",
"\n",
"property has_state_action_rewardsproperty has_state_rewardsproperty has_transition_rewardsreduce_to_state_based_rewardsself: stormpy.storage.storage.SparseRewardModeltransition_matrix: storm::storage::SparseMatrix<double>only_state_rewards: boolNoneReduce to state-based rewards\n",
"\n",
"property state_action_rewardsproperty state_rewardsproperty transition_rewardsclass StateLabelingLabeling for states\n",
"\n",
"add_label_to_stateself: stormpy.storage.storage.StateLabelinglabel: strstate: intNoneAdd label to state\n",
"\n",
"get_labels_of_stateself: stormpy.storage.storage.StateLabelingstate: intSet[str]Get labels of given state\n",
"\n",
"get_statesself: stormpy.storage.storage.StateLabelinglabel: strstormpy.storage.storage.BitVectorGet all states which have the given label\n",
"\n",
"has_state_labelself: stormpy.storage.storage.StateLabelinglabel: strstate: intboolCheck if the given state has the given label\n",
"\n",
"set_statesself: stormpy.storage.storage.StateLabelinglabel: strstates: stormpy.storage.storage.BitVectorNoneAdd a label to the given states\n",
"\n",
"class StateValuationValuations for explicit states\n",
"\n",
"get_boolean_valueself: stormpy.storage.storage.StateValuationstate: intvariable: storm::expressions::Variableboolget_integer_valueself: stormpy.storage.storage.StateValuationstate: intvariable: storm::expressions::Variableintget_jsonself: stormpy.storage.storage.StateValuationstate: intselected_variables: Optional[Set[storm::expressions::Variable]]=Nonestrget_nr_of_statesself: stormpy.storage.storage.StateValuationintget_rational_valueself: stormpy.storage.storage.StateValuationstate: intvariable: storm::expressions::Variable__gmp_expr<__mpq_struct [1], __mpq_struct [1]>get_stringself: stormpy.storage.storage.StateValuationstate: intpretty: bool=Trueselected_variables: Optional[Set[storm::expressions::Variable]]=Nonestrclass StateValuationsBuilderadd_stateself: stormpy.storage.storage.StateValuationsBuilder, state: int, boolean_values: List[bool]=[], integer_values: List[int]=[], rational_values: List[__gmp_expr<__mpq_struct [1], __mpq_struct [1]>]=[]NoneAdds a new state, no more variables should be added afterwards\n",
"\n",
"add_variableself: stormpy.storage.storage.StateValuationsBuildervariable: storm::expressions::VariableNoneAdds a new variable\n",
"\n",
"buildself: stormpy.storage.storage.StateValuationsBuilderarg0: intstormpy.storage.storage.StateValuationCreates the finalized state valuations object\n",
"\n",
"class SymbolicSylvanCtmcCTMC in symbolic representation\n",
"\n",
"class SymbolicSylvanDtmcDTMC in symbolic representation\n",
"\n",
"class SymbolicSylvanMAMA in symbolic representation\n",
"\n",
"class SymbolicSylvanMdpMDP in symbolic representation\n",
"\n",
"class SymbolicSylvanParametricCtmcpCTMC in symbolic representation\n",
"\n",
"class SymbolicSylvanParametricDtmcpDTMC in symbolic representation\n",
"\n",
"class SymbolicSylvanParametricMApMA in symbolic representation\n",
"\n",
"class SymbolicSylvanParametricMdppMDP in symbolic representation\n",
"\n",
"class SymbolicSylvanParametricRewardModelReward structure for parametric symbolic models\n",
"\n",
"property has_state_action_rewardsproperty has_state_rewardsproperty has_transition_rewardsclass SymbolicSylvanRewardModelReward structure for symbolic models\n",
"\n",
"property has_state_action_rewardsproperty has_state_rewardsproperty has_transition_rewardsclass VariableRepresents a variable\n",
"\n",
"get_expressionself: stormpy.storage.storage.Variablestorm::expressions::ExpressionGet expression from variable\n",
"\n",
"has_bitvector_typeself: stormpy.storage.storage.VariableboolCheck if the variable is of bitvector type\n",
"\n",
"has_boolean_typeself: stormpy.storage.storage.VariableboolCheck if the variable is of boolean type\n",
"\n",
"has_integer_typeself: stormpy.storage.storage.VariableboolCheck if the variable is of integer type\n",
"\n",
"has_numerical_typeself: stormpy.storage.storage.VariableboolCheck if the variable is of numerical type\n",
"\n",
"has_rational_typeself: stormpy.storage.storage.VariableboolCheck if the variable is of rational type\n",
"\n",
"property nameVariable name\n",
"\n",
"build_parametric_sparse_matrixarrayrow_group_indices=[]Build a sparse matrix from numpy array.\n",
"\n",
"\n",
"<dl style='margin: 20px 0;'>\n",
"<dt>Parameters</dt>\n",
"<dd>\n",
"- array (numpy) – The array. \n",
"- row_group_indices (List[double]) – List containing the starting row of each row group in ascending order. \n",
"\n",
"\n",
"</dd>\n",
"<dt>Returns</dt>\n",
"<dd>\n",
"Parametric sparse matrix.\n",
"\n",
"</dd>\n",
"\n",
"</dl>\n",
"\n",
"build_sparse_matrixarrayrow_group_indices=[]Build a sparse matrix from numpy array.\n",
"\n",
"\n",
"<dl style='margin: 20px 0;'>\n",
"<dt>Parameters</dt>\n",
"<dd>\n",
"- array (numpy) – The array. \n",
"- row_group_indices (List[double]) – List containing the starting row of each row group in ascending order. \n",
"\n",
"\n",
"</dd>\n",
"<dt>Returns</dt>\n",
"<dd>\n",
"Sparse matrix.\n",
"\n",
"</dd>\n",
"\n",
"</dl>\n",
"\n",
"collect_informationarg0: stormpy.storage.storage.JaniModelstormpy.storage.storage.JaniInformationObjecteliminate_reward_accumulationsmodel: stormpy.storage.storage.JaniModelproperties: List[stormpy.core.Property]List[stormpy.core.Property]Eliminate reward accumulations"
]
}
],
"metadata": {
"date": 1598178167.095977,
"filename": "storage.rst",
"kernelspec": {
"display_name": "Python",
"language": "python3",
"name": "python3"
},
"title": "Stormpy.storage"
},
"nbformat": 4,
"nbformat_minor": 4
}

7
doc/source/api/storage.rst

@ -1,7 +0,0 @@
Stormpy.storage
**************************
.. automodule:: stormpy.storage
:members:
:undoc-members:
:imported-members:

53
doc/source/api/utility.ipynb

@ -0,0 +1,53 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Stormpy.utility\n",
"\n",
"class MatrixFormatI_Minus_P = MatrixFormat.I_Minus_PStraight = MatrixFormat.Straightclass ModelReferenceLightweight Wrapper around results\n",
"\n",
"get_boolean_valueself: stormpy.utility.utility.ModelReferencevariable: storm::expressions::Variableboolget a value for a boolean variable\n",
"\n",
"get_integer_valueself: stormpy.utility.utility.ModelReferencevariable: storm::expressions::Variableintget a value for an integer variable\n",
"\n",
"get_rational_valueself: stormpy.utility.utility.ModelReferencevariable: storm::expressions::Variablefloatget a value (as double) for an rational variable\n",
"\n",
"class Pathproperty distanceproperty predecessorKproperty predecessorNodeclass ShortestPathsGeneratorget_distanceself: stormpy.utility.utility.ShortestPathsGeneratork: intfloatget_path_as_listself: stormpy.utility.utility.ShortestPathsGeneratork: intList[int]get_statesself: stormpy.utility.utility.ShortestPathsGeneratork: intstorm::storage::BitVectorclass SmtCheckResultResult type\n",
"\n",
"Sat = SmtCheckResult.SatUnknown = SmtCheckResult.UnknownUnsat = SmtCheckResult.Unsatclass SmtSolverGeneric Storm SmtSolver Wrapper\n",
"\n",
"addself: stormpy.utility.utility.SmtSolverarg0: storm::expressions::ExpressionNoneaddconstraint\n",
"\n",
"checkself: stormpy.utility.utility.SmtSolverstormpy.utility.utility.SmtCheckResultcheck\n",
"\n",
"property modelget the model\n",
"\n",
"popself: stormpy.utility.utility.SmtSolverlevels: intNonepop\n",
"\n",
"pushself: stormpy.utility.utility.SmtSolverNonepush\n",
"\n",
"resetself: stormpy.utility.utility.SmtSolverNonereset\n",
"\n",
"class SmtSolverFactoryFactory for creating SMT Solvers\n",
"\n",
"class Z3SmtSolverz3 API for storm smtsolver wrapper\n",
"\n",
"class millisecondscountself: stormpy.utility.utility.millisecondsint"
]
}
],
"metadata": {
"date": 1598178167.1070933,
"filename": "utility.rst",
"kernelspec": {
"display_name": "Python",
"language": "python3",
"name": "python3"
},
"title": "Stormpy.utility"
},
"nbformat": 4,
"nbformat_minor": 4
}

7
doc/source/api/utility.rst

@ -1,7 +0,0 @@
Stormpy.utility
**************************
.. automodule:: stormpy.utility
:members:
:undoc-members:
:imported-members:

196
doc/source/doc/.ipynb_checkpoints/analysis-checkpoint.ipynb

@ -0,0 +1,196 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Analysis"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Background\n",
"\n",
"Storm supports various model checking approaches that we consider in this section on analysis.\n",
"\n",
"As always:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy\n",
">>> import stormpy.examples\n",
">>> import stormpy.examples.files\n",
">>> path = stormpy.examples.files.prism_dtmc_die\n",
">>> prism_program = stormpy.parse_prism_program(path)\n",
">>> formula_str = \"P=? [F s=7 & d=2]\"\n",
">>> properties = stormpy.parse_properties(formula_str, prism_program)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Qualitative Analysis"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Adapting the model checking engine\n",
"\n",
"[02-analysis.py](https://github.com/moves-rwth/stormpy/blob/master/examples/analysis/02-analysis.py)\n",
"\n",
"Instead of using the sparse representation, models can also be built symbolically:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> model = stormpy.build_symbolic_model(prism_program, properties)\n",
">>> result = stormpy.model_checking(model, properties[0])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To access the result, the result has to be filtered:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> filter = stormpy.create_filter_initial_states_symbolic(model)\n",
">>> result.filter(filter)\n",
">>> assert result.min == result.max"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then, result.min (or result.max) contains the result. Notice that if there are multiple initial states, result.min will not be equal to result.max.\n",
"\n",
"Besides this analysis on the DD, there are approaches that combine both representation.\n",
"Stormpy does support them, but we have not yet documented them."
]
},
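{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, with a single initial state the value can be printed directly (a small check reusing the filtered result from above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> print(result.min)"
]
},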
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Adapting model checking algorithms\n",
"\n",
"[03-analysis.py](https://github.com/moves-rwth/stormpy/blob/master/examples/analysis/03-analysis.py)\n",
"\n",
"Reconsider the model checking example from the getting started guide:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> model = stormpy.build_model(prism_program, properties)\n",
">>> result = stormpy.model_checking(model, properties[0])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also vary the model checking algorithm:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> env = stormpy.Environment()\n",
">>> env.solver_environment.set_linear_equation_solver_type(stormpy.EquationSolverType.native)\n",
">>> env.solver_environment.native_solver_environment.method = stormpy.NativeLinearEquationSolverMethod.power_iteration\n",
">>> result = stormpy.model_checking(model, properties[0], environment=env)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we allow to change some parameters of the algorithms. E.g., in iterative approaches,\n",
"we allow to change the number of iterations:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> env.solver_environment.native_solver_environment.maximum_iterations = 3\n",
">>> result = stormpy.model_checking(model, properties[0])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice that by setting such parameters, the result may be off from the actual model checking algorithm.\n",
"\n",
"Environments can be used likewise for symbolic model checking. See the example for more information."
]
}
],
"metadata": {
"date": 1598178167.1206837,
"filename": "analysis.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Analysis"
},
"nbformat": 4,
"nbformat_minor": 4
}

138
doc/source/doc/.ipynb_checkpoints/building_models-checkpoint.ipynb

@ -0,0 +1,138 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Building Models\n",
"\n",
"[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/m-hannah/stormpy/master?filepath=notebooks%2Fbuilding_models.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Background\n",
"\n",
"Storm supports a wide range of formalisms. Stormpy can be used to build models from some of these formalisms.\n",
"Moreover, during construction, various options can be set. This document yields information about the most important options."
]
},
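{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a first taste of such options, the following sketch builds a sparse model while additionally storing the state valuations. It assumes that stormpy.BuilderOptions and stormpy.build_sparse_model_with_options are available, as in recent stormpy releases:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy\n",
">>> import stormpy.examples\n",
">>> import stormpy.examples.files\n",
">>> prism_program = stormpy.parse_prism_program(stormpy.examples.files.prism_dtmc_die)\n",
">>> options = stormpy.BuilderOptions()\n",
">>> options.set_build_state_valuations()\n",
">>> model = stormpy.build_sparse_model_with_options(prism_program, options)\n",
">>> print(model.model_type)"
]
},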
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Building different formalisms\n",
"\n",
"We use some standard examples:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy.examples\n",
">>> import stormpy.examples.files"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Storm supports the explicit DRN format.\n",
"From this, models can be built directly:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> path = stormpy.examples.files.drn_ctmc_dft\n",
">>> model = stormpy.build_model_from_drn(path)\n",
">>> print(model.model_type)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And the same for parametric models:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> path = stormpy.examples.files.drn_pdtmc_die\n",
">>> model = stormpy.build_parametric_model_from_drn(path)\n",
">>> print(model.model_type)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Another option are JANI descriptions. These are another high-level description format.\n",
"Building models from JANI is done in two steps. First the Jani-description is parsed, and then the model is built from this description:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> path = stormpy.examples.files.jani_dtmc_die\n",
">>> jani_program, properties = stormpy.parse_jani_model(path)\n",
">>> model = stormpy.build_model(jani_program)\n",
">>> print(model.model_type)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice that parsing JANI files also returns properties. In JANI, properties can be embedded in the model file."
]
}
],
"metadata": {
"date": 1598188121.587518,
"filename": "building_models.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Building Models"
},
"nbformat": 4,
"nbformat_minor": 4
}

132
doc/source/doc/.ipynb_checkpoints/dfts-checkpoint.ipynb

@ -0,0 +1,132 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Dynamic Fault Trees"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Building DFTs\n",
"\n",
"[01-dfts.py](https://github.com/moves-rwth/stormpy/blob/master/examples/dfts/01-dfts.py)\n",
"\n",
"Dynamic fault trees can be loaded from either the Galileo format or from a custom JSON form.\n",
"A file containing the DFT in the Galileo format can be loaded via `load_dft_galileo_file(path)`.\n",
"The custom JSON format can be loaded from a file via `load_dft_json_file(path)` or directly from a string via `load_dft_json_string(path)`.\n",
"We start by loading a simple DFT containing an AND gate from JSON:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy\n",
">>> import stormpy.dft\n",
">>> import stormpy.examples\n",
">>> import stormpy.examples.files\n",
">>> path_json = stormpy.examples.files.dft_json_and\n",
">>> dft_small = stormpy.dft.load_dft_json_file(path_json)\n",
">>> print(dft_small)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next we load a more complex DFT from the Galileo format:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> path_galileo = stormpy.examples.files.dft_galileo_hecs\n",
">>> dft = stormpy.dft.load_dft_galileo_file(path_galileo)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After loading the DFT, we can display some common statistics about the model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> print(\"DFT with {} elements.\".format(dft.nr_elements()))\n",
">>> print(\"DFT has {} BEs and {} dynamic elements.\".format(dft.nr_be(), dft.nr_dynamic()))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Analyzing DFTs\n",
"\n",
"[01-dfts.py](https://github.com/moves-rwth/stormpy/blob/master/examples/dfts/01-dfts.py)\n",
"\n",
"The next step is to analyze the DFT via `analyze_dft(dft, formula)`.\n",
"Here we can use all standard properties as described in [Building properties](../getting_started.ipynb#getting-started-building-properties).\n",
"In our example we compute the Mean-time-to-failure (MTTF):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> formula_str = \"T=? [ F \\\"failed\\\" ]\"\n",
">>> formulas = stormpy.parse_properties(formula_str)\n",
">>> results = stormpy.dft.analyze_dft(dft, [formulas[0].raw_formula])\n",
">>> result = results[0]\n",
">>> print(\"MTTF: {:.2f}\".format(result))"
]
}
],
"metadata": {
"date": 1598178167.1422036,
"filename": "dfts.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Dynamic Fault Trees"
},
"nbformat": 4,
"nbformat_minor": 4
}

285
doc/source/doc/.ipynb_checkpoints/exploration-checkpoint.ipynb

@ -0,0 +1,285 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Exploring Models"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Background\n",
"\n",
"Often, stormpy is used as a testbed for new algorithms.\n",
"An essential step is to transfer the (low-level) descriptions of an MDP or other state-based model into\n",
"an own algorithm. In this section, we discuss some of the functionality."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reading MDPs\n",
"\n",
"[01-exploration.py](https://github.com/moves-rwth/stormpy/blob/master/examples/exploration/01-exploration.py)\n",
"\n",
"In [Getting Started](../getting_started.ipynb), we briefly iterated over a DTMC. In this section, we explore an MDP:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"hide-output": false,
"scrolled": true
},
"outputs": [],
"source": [
">>> import doctest\n",
">>> doctest.ELLIPSIS_MARKER = '-etc-' \n",
">>> import stormpy\n",
">>> import stormpy.examples\n",
">>> import stormpy.examples.files\n",
">>> program = stormpy.parse_prism_program(stormpy.examples.files.prism_mdp_maze)\n",
">>> prop = \"R=? [F \\\"goal\\\"]\"\n",
"\n",
">>> properties = stormpy.parse_properties_for_prism_program(prop, program, None)\n",
">>> model = stormpy.build_model(program, properties)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The iteration over the model is as before, but now, for every action, we can have several transitions:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"hide-output": false,
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"State 0 is initial\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 1\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 2\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 3\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 4\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 5\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 6\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 7\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 8\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 9\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 10\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 11\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 12\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 13\n",
"From state 1 by action 0, with probability 1.0, go to state 2\n",
"From state 1 by action 1, with probability 1.0, go to state 1\n",
"From state 1 by action 2, with probability 1.0, go to state 1\n",
"From state 1 by action 3, with probability 1.0, go to state 6\n",
"From state 2 by action 0, with probability 1.0, go to state 3\n",
"From state 2 by action 1, with probability 1.0, go to state 1\n",
"From state 2 by action 2, with probability 1.0, go to state 2\n",
"From state 2 by action 3, with probability 1.0, go to state 2\n",
"From state 3 by action 0, with probability 1.0, go to state 4\n",
"From state 3 by action 1, with probability 1.0, go to state 2\n",
"From state 3 by action 2, with probability 1.0, go to state 3\n",
"From state 3 by action 3, with probability 1.0, go to state 7\n",
"From state 4 by action 0, with probability 1.0, go to state 5\n",
"From state 4 by action 1, with probability 1.0, go to state 3\n",
"From state 4 by action 2, with probability 1.0, go to state 4\n",
"From state 4 by action 3, with probability 1.0, go to state 4\n",
"From state 5 by action 0, with probability 1.0, go to state 5\n",
"From state 5 by action 1, with probability 1.0, go to state 4\n",
"From state 5 by action 2, with probability 1.0, go to state 5\n",
"From state 5 by action 3, with probability 1.0, go to state 8\n",
"From state 6 by action 0, with probability 1.0, go to state 6\n",
"From state 6 by action 1, with probability 1.0, go to state 6\n",
"From state 6 by action 2, with probability 1.0, go to state 1\n",
"From state 6 by action 3, with probability 1.0, go to state 9\n",
"From state 7 by action 0, with probability 1.0, go to state 7\n",
"From state 7 by action 1, with probability 1.0, go to state 7\n",
"From state 7 by action 2, with probability 1.0, go to state 3\n",
"From state 7 by action 3, with probability 1.0, go to state 10\n",
"From state 8 by action 0, with probability 1.0, go to state 8\n",
"From state 8 by action 1, with probability 1.0, go to state 8\n",
"From state 8 by action 2, with probability 1.0, go to state 5\n",
"From state 8 by action 3, with probability 1.0, go to state 11\n",
"From state 9 by action 0, with probability 1.0, go to state 9\n",
"From state 9 by action 1, with probability 1.0, go to state 9\n",
"From state 9 by action 2, with probability 1.0, go to state 6\n",
"From state 9 by action 3, with probability 1.0, go to state 12\n",
"From state 10 by action 0, with probability 1.0, go to state 10\n",
"From state 10 by action 1, with probability 1.0, go to state 10\n",
"From state 10 by action 2, with probability 1.0, go to state 7\n",
"From state 10 by action 3, with probability 1.0, go to state 14\n",
"From state 11 by action 0, with probability 1.0, go to state 10\n",
"From state 11 by action 1, with probability 1.0, go to state 10\n",
"From state 11 by action 2, with probability 1.0, go to state 8\n",
"From state 11 by action 3, with probability 1.0, go to state 13\n",
"From state 12 by action 0, with probability 1.0, go to state 12\n",
"From state 12 by action 1, with probability 1.0, go to state 12\n",
"From state 12 by action 2, with probability 1.0, go to state 9\n",
"From state 12 by action 3, with probability 1.0, go to state 12\n",
"From state 13 by action 0, with probability 1.0, go to state 13\n",
"From state 13 by action 1, with probability 1.0, go to state 13\n",
"From state 13 by action 2, with probability 1.0, go to state 11\n",
"From state 13 by action 3, with probability 1.0, go to state 13\n",
"From state 14 by action 0, with probability 1.0, go to state 14\n"
]
}
],
"source": [
">>> for state in model.states:\n",
"... if state.id in model.initial_states:\n",
"... print(\"State {} is initial\".format(state.id))\n",
"... for action in state.actions:\n",
"... for transition in action.transitions:\n",
"... print(\"From state {} by action {}, with probability {}, go to state {}\".format(state, action, transition.value(), transition.column))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Internally, storm can hold hints to the origin of the actions, which may be helpful to give meaning and for debugging.\n",
"As the availability and the encoding of this data depends on the input model, we discuss these features in highlevel_models.\n",
"\n",
"Storm currently supports deterministic rewards on states or actions. More information can be found in [Reward Models](reward_models.ipynb)."
]
},
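{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sketch of what such origin information looks like for PRISM input: if the model is built with choice origins enabled, every choice can be traced back to the set of PRISM commands that induced it. The builder options used below are assumptions based on the stormpy API reference:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> options = stormpy.BuilderOptions([p.raw_formula for p in properties])\n",
">>> options.set_build_with_choice_origins()\n",
">>> model_with_origins = stormpy.build_sparse_model_with_options(program, options)\n",
">>> print(model_with_origins.choice_origins.get_command_set(0))"
]
},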
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reading POMDPs\n",
"\n",
"[02-exploration.py](https://github.com/moves-rwth/stormpy/blob/master/examples/exploration/02-exploration.py)\n",
"\n",
"Internally, POMDPs extend MDPs. Thus, iterating over the MDP is done as before.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
">>> import stormpy\n",
">>> import stormpy.examples\n",
">>> import stormpy.examples.files\n",
">>> program = stormpy.parse_prism_program(stormpy.examples.files.prism_pomdp_maze)\n",
">>> prop = \"R=? [F \\\"goal\\\"]\"\n",
">>> properties = stormpy.parse_properties_for_prism_program(prop, program, None)\n",
">>> model = stormpy.build_model(program, properties)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Indeed, all that changed in the code above is the example we use.\n",
"And, that the model type now is a POMDP:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"hide-output": false,
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"ModelType.MDP\n"
]
}
],
"source": [
">>> print(model.model_type)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Additionally, POMDPs have a set of observations, which are internally just numbered by an integer from 0 to the number of observations -1"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false,
"scrolled": true
},
"outputs": [],
"source": [
">>> print(model.nr_observations)\n",
">>> for state in model.states:\n",
"... print(\"State {} has observation id {}\".format(state.id, model.observations[state.id]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Sorting states\n",
"\n",
"[03-exploration.py](https://github.com/moves-rwth/stormpy/blob/master/examples/exploration/03-exploration.py)\n",
"\n",
"Often, one may sort the states according to the graph structure.\n",
"Storm supports some of these sorting algorithms, e.g., topological sort."
]
},
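{
"cell_type": "markdown",
"metadata": {},
"source": [
"A worked example is not included here yet. As a minimal sketch, the graph structure that such a sorting routine needs can be extracted from the sparse model as a successor map, using only the iteration shown above (plain Python, no additional stormpy functionality assumed):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> successors = {}\n",
">>> for state in model.states:\n",
"...     successors[state.id] = set()\n",
"...     for action in state.actions:\n",
"...         for transition in action.transitions:\n",
"...             successors[state.id].add(transition.column)\n",
">>> print(successors[0])"
]
},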
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reading MAs\n",
"\n",
"To be continued…"
]
}
],
"metadata": {
"date": 1598178167.1595793,
"filename": "exploration.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Exploring Models"
},
"nbformat": 4,
"nbformat_minor": 4
}

260
doc/source/doc/.ipynb_checkpoints/gspns-checkpoint.ipynb

@ -0,0 +1,260 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Generalized Stochastic Petri Nets"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Loading GSPNs\n",
"\n",
"[01-gspns.py](https://github.com/moves-rwth/stormpy/blob/master/examples/gspns/01-gspns.py)\n",
"\n",
"Generalized stochastic Petri nets can be given either in the PNPRO format or in the PNML format.\n",
"We start by loading a GSPN stored in the PNML format:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy\n",
">>> import stormpy.gspn\n",
">>> import stormpy.examples\n",
">>> import stormpy.examples.files\n",
"\n",
">>> import_path = stormpy.examples.files.gspn_pnml_simple\n",
">>> gspn_parser = stormpy.gspn.GSPNParser()\n",
">>> gspn = gspn_parser.parse(import_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After loading, we can display some properties of the GSPN:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> print(\"Name of GSPN: {}.\".format(gspn.get_name()))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
">>> print(\"Number of places: {}.\".format(gspn.get_number_of_places()))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
">>> print(\"Number of immediate transitions: {}.\".format(gspn.get_number_of_immediate_transitions()))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
">>> print(\"Number of timed transitions: {}.\".format(gspn.get_number_of_timed_transitions()))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Building GSPNs\n",
"\n",
"[02-gspns.py](https://github.com/moves-rwth/stormpy/blob/master/examples/gspns/02-gspns.py)\n",
"\n",
"In the following, we describe how to construct GSPNs via the `GSPNBuilder`.\n",
"First, we create an instance of the `GSPNBuilder` and set the name of the GSPN:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> builder = stormpy.gspn.GSPNBuilder()\n",
">>> builder.set_name(\"my_gspn\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the next step, we add an immediate transition to the GSPN.\n",
"Additionally, we define the position of the transition by setting its layout information:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> it_1 = builder.add_immediate_transition(1, 0.0, \"it_1\")\n",
">>> it_layout = stormpy.gspn.LayoutInfo(1.5, 2.0)\n",
">>> builder.set_transition_layout_info(it_1, it_layout)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We add a timed transition and set its layout information:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> tt_1 = builder.add_timed_transition(0, 0.4, \"tt_1\")\n",
">>> tt_layout = stormpy.gspn.LayoutInfo(12.5, 2.0)\n",
">>> builder.set_transition_layout_info(tt_1, tt_layout)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we add two places to the GSPN and set their layouts:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> place_1 = builder.add_place(1, 1, \"place_1\")\n",
">>> p1_layout = stormpy.gspn.LayoutInfo(6.5, 2.0)\n",
">>> builder.set_place_layout_info(place_1, p1_layout)\n",
"\n",
">>> place_2 = builder.add_place(1, 0, \"place_2\")\n",
">>> p2_layout = stormpy.gspn.LayoutInfo(18.5, 2.0)\n",
">>> builder.set_place_layout_info(place_2, p2_layout)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Places and transitions can be linked by input, output and inhibition arcs.\n",
"We add the arcs of our GSPN as follows:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> builder.add_output_arc(it_1, place_1)\n",
">>> builder.add_inhibition_arc(place_1, it_1)\n",
">>> builder.add_input_arc(place_1, tt_1)\n",
">>> builder.add_output_arc(tt_1, place_2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now build the GSPN:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> gspn = builder.build_gspn()"
]
},
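{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before exporting, we can quickly check the result using the same accessors as in the loading example above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> print(\"Name of GSPN: {}.\".format(gspn.get_name()))\n",
">>> print(\"Number of places: {}.\".format(gspn.get_number_of_places()))\n",
">>> print(\"Number of immediate transitions: {}.\".format(gspn.get_number_of_immediate_transitions()))\n",
">>> print(\"Number of timed transitions: {}.\".format(gspn.get_number_of_timed_transitions()))"
]
},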
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After building, we export the GSPN.\n",
"GSPNs can be saved in the PNPRO format via `export_gspn_pnpro_file(path)` and in the PNML format via `export_gspn_pnml_file(path)`.\n",
"We export the GSPN into the PNPRO format:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> export_path = stormpy.examples.files.gspn_pnpro_simple\n",
">>> gspn.export_gspn_pnpro_file(export_path)"
]
}
],
"metadata": {
"date": 1598178167.1731207,
"filename": "gspns.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Generalized Stochastic Petri Nets"
},
"nbformat": 4,
"nbformat_minor": 4
}

174
doc/source/doc/.ipynb_checkpoints/parametric_models-checkpoint.ipynb

@ -0,0 +1,174 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Parametric Models"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiating parametric models\n",
"\n",
"[01-parametric-models.py](https://github.com/moves-rwth/stormpy/blob/master/examples//parametric_models/01-parametric-models.py)\n",
"\n",
"Input formats such as prism allow to specify programs with open constants. We refer to these open constants as parameters.\n",
"If the constants only influence the probabilities or rates, but not the topology of the underlying model, we can build these models as parametric models:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy\n",
">>> import stormpy.examples\n",
">>> import stormpy.examples.files\n",
">>> path = stormpy.examples.files.prism_dtmc_die\n",
">>> prism_program = stormpy.parse_prism_program(path)\n",
">>> formula_str = \"P=? [F s=7 & d=2]\"\n",
">>> properties = stormpy.parse_properties(formula_str, prism_program)\n",
">>> model = stormpy.build_parametric_model(prism_program, properties)\n",
">>> parameters = model.collect_probability_parameters()\n",
">>> for x in parameters:\n",
"... print(x)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In order to obtain a standard DTMC, MDP or other Markov model, we need to instantiate these models by means of a model instantiator:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy.pars\n",
">>> instantiator = stormpy.pars.PDtmcInstantiator(model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before we obtain an instantiated model, we need to map parameters to values: We build such a dictionary as follows:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> point = dict()\n",
">>> for x in parameters:\n",
"... print(x.name)\n",
"... point[x] = 0.4\n",
">>> instantiated_model = instantiator.instantiate(point)\n",
">>> result = stormpy.model_checking(instantiated_model, properties[0])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Initial states and labels are set as for the parameter-free case."
]
},
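{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, the value computed on the instantiated model can be read off at its initial state (a small sketch reusing the objects from above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> initial_state = instantiated_model.initial_states[0]\n",
">>> print(result.at(initial_state))"
]
},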
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Checking parametric models\n",
"\n",
"[02-parametric-models.py](https://github.com/moves-rwth/stormpy/blob/master/examples//parametric_models/02-parametric-models.py)\n",
"\n",
"It is also possible to check the parametric model directly, similar as before in [Checking properties](../getting_started.ipynb#getting-started-checking-properties):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> result = stormpy.model_checking(model, properties[0])\n",
">>> initial_state = model.initial_states[0]\n",
">>> func = result.at(initial_state)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We collect the constraints ensuring that underlying model is well-formed and the graph structure does not change:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> collector = stormpy.ConstraintCollector(model)\n",
">>> for formula in collector.wellformed_constraints:\n",
"... print(formula)\n",
">>> for formula in collector.graph_preserving_constraints:\n",
"... print(formula)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Collecting information about the parametric models\n",
"\n",
"[03-parametric-models.py](https://github.com/moves-rwth/stormpy/blob/master/examples//parametric_models/03-parametric-models.py)\n",
"\n",
"This example shows three implementations to obtain the number of transitions with probability one in a parametric model."
]
}
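,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough illustration (a simplified sketch, not one of the three implementations from the example script), such transitions can be counted by iterating over states, actions and transitions. Comparing the printed value against `1` is only a crude stand-in for a proper rational-function comparison:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> count = 0\n",
">>> for state in model.states:\n",
"...     for action in state.actions:\n",
"...         for transition in action.transitions:\n",
"...             # crude check whether the parametric transition value is the constant one\n",
"...             if str(transition.value()) == \"1\":\n",
"...                 count += 1\n",
">>> print(\"Transitions with probability one: {}\".format(count))"
]
}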
],
"metadata": {
"date": 1598178167.2485256,
"filename": "parametric_models.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Parametric Models"
},
"nbformat": 4,
"nbformat_minor": 4
}

133
doc/source/doc/.ipynb_checkpoints/reward_models-checkpoint.ipynb

@ -0,0 +1,133 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Reward Models\n",
"\n",
"In [Getting Started](../getting_started.ipynb), we mainly looked at probabilities in the Markov models and properties that refer to these probabilities.\n",
"In this section, we discuss reward models."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exploring reward models\n",
"\n",
"[01-reward-models.py](https://github.com/moves-rwth/stormpy/blob/master/examples/reward_models/01-reward-models.py)\n",
"\n",
"We consider the die again, but with another property which talks about the expected reward:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy\n",
">>> import stormpy.examples\n",
">>> import stormpy.examples.files\n",
">>> program = stormpy.parse_prism_program(stormpy.examples.files.prism_dtmc_die)\n",
">>> prop = \"R=? [F \\\"done\\\"]\"\n",
"\n",
">>> properties = stormpy.parse_properties(prop, program, None)\n",
">>> model = stormpy.build_model(program, properties)\n",
">>> assert len(model.reward_models) == 1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The model now has a reward model, as the property talks about rewards.\n",
"When [Building Models](building_models.ipynb) from explicit sources, the reward model is always included if it is defined in the source.\n",
"We can do model checking analogous to probabilities:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> initial_state = model.initial_states[0]\n",
">>> result = stormpy.model_checking(model, properties[0])\n",
">>> print(\"Result: {}\".format(round(result.at(initial_state), 6)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The reward model has a name which we can obtain as follows:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> reward_model_name = list(model.reward_models.keys())[0]\n",
">>> print(reward_model_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We discuss later how to work with multiple reward models.\n",
"Rewards come in multiple fashions, as state rewards, state-action rewards and as transition rewards.\n",
"In this example, we only have state-action rewards. These rewards are a vector, over which we can trivially iterate:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> assert not model.reward_models[reward_model_name].has_state_rewards\n",
">>> assert model.reward_models[reward_model_name].has_state_action_rewards\n",
">>> assert not model.reward_models[reward_model_name].has_transition_rewards\n",
">>> for reward in model.reward_models[reward_model_name].state_action_rewards:\n",
"... print(reward)"
]
}
],
"metadata": {
"date": 1598188121.7157953,
"filename": "reward_models.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Reward Models"
},
"nbformat": 4,
"nbformat_minor": 4
}

202
doc/source/doc/.ipynb_checkpoints/schedulers-checkpoint.ipynb

@ -0,0 +1,202 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Working with Schedulers\n",
"\n",
"In non-deterministic models the notion of a scheduler (or policy) is important.\n",
"The scheduler determines which action to take at each state.\n",
"\n",
"For a given reachability property, Storm can return the scheduler realizing the resulting probability."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Examining Schedulers for MDPs\n",
"\n",
"[01-schedulers.py](https://github.com/moves-rwth/stormpy/blob/master/examples/schedulers/01-schedulers.py)\n",
"\n",
"As in [Getting Started](../getting_started.ipynb), we import some required modules and build a model from the example files:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy\n",
">>> import stormpy.core\n",
">>> import stormpy.examples\n",
">>> import stormpy.examples.files\n",
"\n",
">>> path = stormpy.examples.files.prism_mdp_coin_2_2\n",
">>> formula_str = \"Pmin=? [F \\\"finished\\\" & \\\"all_coins_equal_1\\\"]\"\n",
">>> program = stormpy.parse_prism_program(path)\n",
">>> formulas = stormpy.parse_properties(formula_str, program)\n",
">>> model = stormpy.build_model(program, formulas)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next we check the model and make sure to extract the scheduler:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> result = stormpy.model_checking(model, formulas[0], extract_scheduler=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The result then contains the scheduler we want:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> assert result.has_scheduler\n",
">>> scheduler = result.scheduler\n",
">>> assert scheduler.memoryless\n",
">>> assert scheduler.deterministic\n",
">>> print(scheduler)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To get the information which action the scheduler chooses in which state, we can simply iterate over the states:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> for state in model.states:\n",
"... choice = scheduler.get_choice(state)\n",
"... action = choice.get_deterministic_choice()\n",
"... print(\"In state {} choose action {}\".format(state, action))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Examining Schedulers for Markov automata\n",
"\n",
"[02-schedulers.py](https://github.com/moves-rwth/stormpy/blob/master/examples/schedulers/02-schedulers.py)\n",
"\n",
"Currently there is no support yet for scheduler extraction on MAs.\n",
"However, if the timing information is not relevant for the property, we can circumvent this lack by first transforming the MA to an MDP.\n",
"\n",
"We build the model as before:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> path = stormpy.examples.files.prism_ma_simple\n",
">>> formula_str = \"Tmin=? [ F s=4 ]\"\n",
"\n",
">>> program = stormpy.parse_prism_program(path, False, True)\n",
">>> formulas = stormpy.parse_properties_for_prism_program(formula_str, program)\n",
">>> ma = stormpy.build_model(program, formulas)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next we transform the continuous-time model into a discrete-time model.\n",
"Note that all timing information is lost at this point:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> mdp, mdp_formulas = stormpy.transform_to_discrete_time_model(ma, formulas)\n",
">>> assert mdp.model_type == stormpy.ModelType.MDP"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After the transformation we have obtained an MDP where we can extract the scheduler as shown before:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> result = stormpy.model_checking(mdp, mdp_formulas[0], extract_scheduler=True)\n",
">>> scheduler = result.scheduler\n",
">>> print(scheduler)\n"
]
}
],
"metadata": {
"date": 1598178167.268541,
"filename": "schedulers.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Working with Schedulers"
},
"nbformat": 4,
"nbformat_minor": 4
}

167
doc/source/doc/.ipynb_checkpoints/shortest_paths-checkpoint.ipynb

@ -0,0 +1,167 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Working with Shortest Paths\n",
"\n",
"Storm can enumerate the most probable paths of a model, leading from the initial state to a defined set of target states, which we refer to as shortest paths.\n",
"In particular, the model states visited along those paths are available as sets and can be accumulated, yielding a *sub-model*."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Background\n",
"\n",
"The underlying implementation uses the *recursive enumeration algorithm* [[JM1999]](#jm1999), substituting distance for probability – which is why we refer to the most probable paths as the *shortest* paths.\n",
"\n",
"This algorithm computes the shortest paths recursively and in order, i.e., to find the 7th shortest path, the 1st through 6th shortest paths are computed as precursors. The next (i.e., 8th shortest) path can then be computed efficiently.\n",
"\n",
"It is crucial to note that *any* path is eligible, including those that (repeatedly) traverse loops (i.e., *non-simple* paths). This is a common case in practice: Often a large number of similar paths that differ only in the order and number of loop traversals occur successively in the sequence of shortest paths. (For applications that are only interested in simple paths, this is rather unfortunate.)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Examining Shortest Paths\n",
"\n",
"[01-shortest-paths.py](https://github.com/moves-rwth/stormpy/blob/master/examples/shortest_paths/01-shortest-paths.py)\n",
"\n",
"As in [Getting Started](../getting_started.ipynb), we import some required modules and build a model from the example files:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy.examples\n",
">>> import stormpy.examples.files\n",
">>> path = stormpy.examples.files.prism_dtmc_die\n",
">>> prism_program = stormpy.parse_prism_program(path)\n",
">>> model = stormpy.build_model(prism_program)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We also import the `ShortestPathsGenerator`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> from stormpy.utility import ShortestPathsGenerator"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"and choose a target state (by its ID) to which we want to compute the shortest paths:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> state_id = 8"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It is also possible to specify a set of target states (as a list, e.g., `[8, 10, 11]`) or a label in the model if applicable (e.g., `\"observe0Greater1\"`).\n",
"For simplicity, we will stick to using a single state for now.\n",
"\n",
"We initialize a `ShortestPathsGenerator` instance:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> spg = ShortestPathsGenerator(model, state_id)"
]
},
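{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a brief sketch of the variants mentioned above, a generator for a set of target states can be constructed in the same way (here we simply reuse three state IDs of the die model; they are assumed to exist in the model at hand):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> spg_multi = ShortestPathsGenerator(model, [8, 10, 11])\n",
">>> print(spg_multi.get_distance(1))"
]
},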
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can query the k-shortest path by index. Note that 1-based indices are used, so that the 3rd shortest path indeed corresponds to index `k=3`.\n",
"Let us inspect the first three shortest paths:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> for k in range(1, 4):\n",
"... path = spg.get_path_as_list(k)\n",
"... distance = spg.get_distance(k)\n",
"... print(\"{}-shortest path to state #{}: {}, with distance {}\".format(k, state_id, path, distance))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, the distance (i.e., probability of the path) is also available.\n",
"Note that the paths are displayed as a backward-traversal from the target to the initial state.\n",
"\n",
"<a id='jm1999'></a>\n",
"\\[JM1999\\] Víctor M. Jiménez, Andrés Marzal. [Computing the K Shortest Paths: A New Algorithm and an Experimental Comparison](https://scholar.google.com/scholar?q=Computing+the+k+shortest+paths%3A+A+new+algorithm+and+an+experimental+comparison). International Workshop on Algorithm Engineering, 1999"
]
}
],
"metadata": {
"date": 1598178167.2826114,
"filename": "shortest_paths.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Working with Shortest Paths"
},
"nbformat": 4,
"nbformat_minor": 4
}

196
doc/source/doc/analysis.ipynb

@ -0,0 +1,196 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Analysis"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Background\n",
"\n",
"Storm supports various model checking approaches that we consider in this section on analysis.\n",
"\n",
"As always:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy\n",
">>> import stormpy.examples\n",
">>> import stormpy.examples.files\n",
">>> path = stormpy.examples.files.prism_dtmc_die\n",
">>> prism_program = stormpy.parse_prism_program(path)\n",
">>> formula_str = \"P=? [F s=7 & d=2]\"\n",
">>> properties = stormpy.parse_properties(formula_str, prism_program)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Qualitative Analysis"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Adapting the model checking engine\n",
"\n",
"[02-analysis.py](https://github.com/moves-rwth/stormpy/blob/master/examples/analysis/02-analysis.py)\n",
"\n",
"Instead of using the sparse representation, models can also be built symbolically:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> model = stormpy.build_symbolic_model(prism_program, properties)\n",
">>> result = stormpy.model_checking(model, properties[0])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To access the result, the result has to be filtered:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> filter = stormpy.create_filter_initial_states_symbolic(model)\n",
">>> result.filter(filter)\n",
">>> assert result.min == result.max"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then, result.min (or result.max) contains the result. Notice that if there are multiple initial states, result.min will not be equal to result.max.\n",
"\n",
"Besides this analysis on the DD, there are approaches that combine both representation.\n",
"Stormpy does support them, but we have not yet documented them."
]
},
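{
"cell_type": "markdown",
"metadata": {},
"source": [
"For illustration, the value can then be read off directly (the die model has a single initial state, so `result.min` and `result.max` coincide here):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> print(result.min)"
]
},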
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Adapting model checking algorithms\n",
"\n",
"[03-analysis.py](https://github.com/moves-rwth/stormpy/blob/master/examples/analysis/03-analysis.py)\n",
"\n",
"Reconsider the model checking example from the getting started guide:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> model = stormpy.build_model(prism_program, properties)\n",
">>> result = stormpy.model_checking(model, properties[0])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also vary the model checking algorithm:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> env = stormpy.Environment()\n",
">>> env.solver_environment.set_linear_equation_solver_type(stormpy.EquationSolverType.native)\n",
">>> env.solver_environment.native_solver_environment.method = stormpy.NativeLinearEquationSolverMethod.power_iteration\n",
">>> result = stormpy.model_checking(model, properties[0], environment=env)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we allow to change some parameters of the algorithms. E.g., in iterative approaches,\n",
"we allow to change the number of iterations:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> env.solver_environment.native_solver_environment.maximum_iterations = 3\n",
">>> result = stormpy.model_checking(model, properties[0])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice that by setting such parameters, the result may be off from the actual model checking algorithm.\n",
"\n",
"Environments can be used likewise for symbolic model checking. See the example for more information."
]
}
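,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a small sketch of this remark, the same `Environment` can be passed when checking the symbolically built model from above; whether a particular solver setting is honored then depends on the chosen engine:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> symbolic_model = stormpy.build_symbolic_model(prism_program, properties)\n",
">>> symbolic_result = stormpy.model_checking(symbolic_model, properties[0], environment=env)"
]
}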
],
"metadata": {
"date": 1598178167.1206837,
"filename": "analysis.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Analysis"
},
"nbformat": 4,
"nbformat_minor": 4
}

69
doc/source/doc/analysis.rst

@ -1,69 +0,0 @@
***************
Analysis
***************
Background
=====================
Storm supports various model checking approaches that we consider in this section on analysis.
As always::
>>> import stormpy
>>> import stormpy.examples
>>> import stormpy.examples.files
>>> path = stormpy.examples.files.prism_dtmc_die
>>> prism_program = stormpy.parse_prism_program(path)
>>> formula_str = "P=? [F s=7 & d=2]"
>>> properties = stormpy.parse_properties(formula_str, prism_program)
Qualitative Analysis
======================
Adapting the model checking engine
==================================
.. seealso:: `02-analysis.py <https://github.com/moves-rwth/stormpy/blob/master/examples/analysis/02-analysis.py>`_
Instead of using the sparse representation, models can also be built symbolically::
>>> model = stormpy.build_symbolic_model(prism_program, properties)
>>> result = stormpy.model_checking(model, properties[0])
To access the result, the result has to be filtered::
>>> filter = stormpy.create_filter_initial_states_symbolic(model)
>>> result.filter(filter)
>>> assert result.min == result.max
Then, result.min (or result.max) contains the result. Notice that if there are multiple initial states, result.min will not be equal to result.max.
Besides this analysis on the DD, there are approaches that combine both representation.
Stormpy does support them, but we have not yet documented them.
Adapting model checking algorithms
==================================
.. seealso:: `03-analysis.py <https://github.com/moves-rwth/stormpy/blob/master/examples/analysis/03-analysis.py>`_
Reconsider the model checking example from the getting started guide::
>>> model = stormpy.build_model(prism_program, properties)
>>> result = stormpy.model_checking(model, properties[0])
We can also vary the model checking algorithm::
>>> env = stormpy.Environment()
>>> env.solver_environment.set_linear_equation_solver_type(stormpy.EquationSolverType.native)
>>> env.solver_environment.native_solver_environment.method = stormpy.NativeLinearEquationSolverMethod.power_iteration
>>> result = stormpy.model_checking(model, properties[0], environment=env)
Finally, we allow to change some parameters of the algorithms. E.g., in iterative approaches,
we allow to change the number of iterations::
>>> env.solver_environment.native_solver_environment.maximum_iterations = 3
>>> result = stormpy.model_checking(model, properties[0])
Notice that by setting such parameters, the result may be off from the actual model checking algorithm.
Environments can be used likewise for symbolic model checking. See the example for more information.

11
doc/source/doc/building_models.ipynb

@ -4,13 +4,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Building Models"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Building Models\n",
"\n",
"[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/m-hannah/stormpy/master?filepath=notebooks%2Fbuilding_models.ipynb)"
]
},
@ -117,7 +112,7 @@
}
],
"metadata": {
"date": 1596309564.4717214,
"date": 1598188121.587518,
"filename": "building_models.rst",
"kernelspec": {
"display_name": "Python 3",

132
doc/source/doc/dfts.ipynb

@ -0,0 +1,132 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Dynamic Fault Trees"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Building DFTs\n",
"\n",
"[01-dfts.py](https://github.com/moves-rwth/stormpy/blob/master/examples/dfts/01-dfts.py)\n",
"\n",
"Dynamic fault trees can be loaded from either the Galileo format or from a custom JSON form.\n",
"A file containing the DFT in the Galileo format can be loaded via `load_dft_galileo_file(path)`.\n",
"The custom JSON format can be loaded from a file via `load_dft_json_file(path)` or directly from a string via `load_dft_json_string(path)`.\n",
"We start by loading a simple DFT containing an AND gate from JSON:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy\n",
">>> import stormpy.dft\n",
">>> import stormpy.examples\n",
">>> import stormpy.examples.files\n",
">>> path_json = stormpy.examples.files.dft_json_and\n",
">>> dft_small = stormpy.dft.load_dft_json_file(path_json)\n",
">>> print(dft_small)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next we load a more complex DFT from the Galileo format:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> path_galileo = stormpy.examples.files.dft_galileo_hecs\n",
">>> dft = stormpy.dft.load_dft_galileo_file(path_galileo)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After loading the DFT, we can display some common statistics about the model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> print(\"DFT with {} elements.\".format(dft.nr_elements()))\n",
">>> print(\"DFT has {} BEs and {} dynamic elements.\".format(dft.nr_be(), dft.nr_dynamic()))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Analyzing DFTs\n",
"\n",
"[01-dfts.py](https://github.com/moves-rwth/stormpy/blob/master/examples/dfts/01-dfts.py)\n",
"\n",
"The next step is to analyze the DFT via `analyze_dft(dft, formula)`.\n",
"Here we can use all standard properties as described in [Building properties](../getting_started.ipynb#getting-started-building-properties).\n",
"In our example we compute the Mean-time-to-failure (MTTF):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> formula_str = \"T=? [ F \\\"failed\\\" ]\"\n",
">>> formulas = stormpy.parse_properties(formula_str)\n",
">>> results = stormpy.dft.analyze_dft(dft, [formulas[0].raw_formula])\n",
">>> result = results[0]\n",
">>> print(\"MTTF: {:.2f}\".format(result))"
]
}
],
"metadata": {
"date": 1598178167.1422036,
"filename": "dfts.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Dynamic Fault Trees"
},
"nbformat": 4,
"nbformat_minor": 4
}

51
doc/source/doc/dfts.rst

@ -1,51 +0,0 @@
*******************
Dynamic Fault Trees
*******************
Building DFTs
=============
.. seealso:: `01-dfts.py <https://github.com/moves-rwth/stormpy/blob/master/examples/dfts/01-dfts.py>`_
Dynamic fault trees can be loaded from either the Galileo format or from a custom JSON form.
A file containing the DFT in the Galileo format can be loaded via ``load_dft_galileo_file(path)``.
The custom JSON format can be loaded from a file via ``load_dft_json_file(path)`` or directly from a string via ``load_dft_json_string(path)``.
We start by loading a simple DFT containing an AND gate from JSON::
>>> import stormpy
>>> import stormpy.dft
>>> import stormpy.examples
>>> import stormpy.examples.files
>>> path_json = stormpy.examples.files.dft_json_and
>>> dft_small = stormpy.dft.load_dft_json_file(path_json)
>>> print(dft_small)
Top level index: 2, Nr BEs2
Next we load a more complex DFT from the Galileo format::
>>> path_galileo = stormpy.examples.files.dft_galileo_hecs
>>> dft = stormpy.dft.load_dft_galileo_file(path_galileo)
After loading the DFT, we can display some common statistics about the model::
>>> print("DFT with {} elements.".format(dft.nr_elements()))
DFT with 23 elements.
>>> print("DFT has {} BEs and {} dynamic elements.".format(dft.nr_be(), dft.nr_dynamic()))
DFT has 13 BEs and 2 dynamic elements.
Analyzing DFTs
==============
.. seealso:: `01-dfts.py <https://github.com/moves-rwth/stormpy/blob/master/examples/dfts/01-dfts.py>`_
The next step is to analyze the DFT via ``analyze_dft(dft, formula)``.
Here we can use all standard properties as described in :ref:`getting-started-building-properties`.
In our example we compute the `Mean-time-to-failure (MTTF)`::
>>> formula_str = "T=? [ F \"failed\" ]"
>>> formulas = stormpy.parse_properties(formula_str)
>>> results = stormpy.dft.analyze_dft(dft, [formulas[0].raw_formula])
>>> result = results[0]
>>> print("MTTF: {:.2f}".format(result))
MTTF: 363.89

109
doc/source/doc/engines.ipynb

@ -0,0 +1,109 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Engines"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Background\n",
"\n",
"Storm supports different engines for building and checking a model. A detailed comparison of the different engines provided in Storm can be found on the [Storm website](http://www.stormchecker.org/documentation/usage/engines.html)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Sparse engine\n",
"\n",
"In all of the examples so far we used the default sparse engine:\n",
"\n",
">>> import stormpy.examples\n",
">>> import stormpy.examples.files\n",
" >>> prism_program = stormpy.parse_prism_program(stormpy.examples.files.prism_dtmc_die)\n",
">>> properties = stormpy.parse_properties('P=? [F \"one\"]', prism_program)\n",
">>> sparse_model = stormpy.build_sparse_model(prism_program, properties)\n",
">>> print(type(sparse_model))\n",
"<class 'stormpy.storage.storage.SparseDtmc'>\n",
">>> print(\"Number of states: {}\".format(sparse_model.nr_states))\n",
"Number of states: 13\n",
">>> print(\"Number of transitions: {}\".format(sparse_model.nr_transitions))\n",
"Number of transitions: 20The model checking was also done in the sparse engine:\n",
"\n",
">>> sparse_result = stormpy.check_model_sparse(sparse_model, properties[0])\n",
">>> initial_state = sparse_model.initial_states[0]\n",
">>> print(sparse_result.at(initial_state))\n",
"0.16666666666666666"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Symbolic engine\n",
"\n",
"Instead of using the sparse engine, one can also use a symbolic representation in terms of binary decision diagrams (BDDs).\n",
"To use the symbolic (dd) engine, we use the symbolic versions for the building and model checking:\n",
"\n",
">>> symbolic_model = stormpy.build_symbolic_model(prism_program, properties)\n",
">>> print(type(symbolic_model))\n",
"<class 'stormpy.storage.storage.SymbolicSylvanDtmc'>\n",
">>> print(\"Number of states: {}\".format(symbolic_model.nr_states))\n",
"Number of states: 13\n",
">>> print(\"Number of transitions: {}\".format(symbolic_model.nr_transitions))\n",
"Number of transitions: 20\n",
">>> symbolic_result = stormpy.check_model_dd(symbolic_model, properties[0])\n",
">>> print(symbolic_result)\n",
"[0, 1] (range)We can also filter the computed results and only consider the initial states:\n",
"\n",
">>> filter = stormpy.create_filter_initial_states_symbolic(symbolic_model)\n",
">>> symbolic_result.filter(filter)\n",
">>> print(symbolic_result.min)\n",
"0.16666650772094727It is also possible to first build the model symbolically and then transform it into a sparse model:\n",
"\n",
">>> print(type(symbolic_model))\n",
"<class 'stormpy.storage.storage.SymbolicSylvanDtmc'>\n",
">>> transformed_model = stormpy.transform_to_sparse_model(symbolic_model)\n",
">>> print(type(transformed_model))\n",
"<class 'stormpy.storage.storage.SparseDtmc'>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Hybrid engine\n",
"\n",
"A third possibility is to use the hybrid engine, a combination of sparse and dd engines.\n",
"It first builds the model symbolically.\n",
"The actual model checking is then performed with the engine which is deemed most suitable for the given task.\n",
"\n",
">>> print(type(symbolic_model))\n",
"<class 'stormpy.storage.storage.SymbolicSylvanDtmc'>\n",
">>> hybrid_result = stormpy.check_model_hybrid(symbolic_model, properties[0])\n",
">>> filter = stormpy.create_filter_initial_states_symbolic(symbolic_model)\n",
">>> hybrid_result.filter(filter)\n",
">>> print(hybrid_result)\n",
"0.166667"
]
}
],
"metadata": {
"date": 1598178167.148,
"filename": "engines.rst",
"kernelspec": {
"display_name": "Python",
"language": "python3",
"name": "python3"
},
"title": "Engines"
},
"nbformat": 4,
"nbformat_minor": 4
}

82
doc/source/doc/engines.rst

@ -1,82 +0,0 @@
***************
Engines
***************
Background
=====================
Storm supports different engines for building and checking a model. A detailed comparison of the different engines provided in Storm can be found on the `Storm website <http://www.stormchecker.org/documentation/usage/engines.html>`_.
Sparse engine
===============================
In all of the examples so far we used the default sparse engine:
>>> import stormpy.examples
>>> import stormpy.examples.files
>>> prism_program = stormpy.parse_prism_program(stormpy.examples.files.prism_dtmc_die)
>>> properties = stormpy.parse_properties('P=? [F "one"]', prism_program)
>>> sparse_model = stormpy.build_sparse_model(prism_program, properties)
>>> print(type(sparse_model))
<class 'stormpy.storage.storage.SparseDtmc'>
>>> print("Number of states: {}".format(sparse_model.nr_states))
Number of states: 13
>>> print("Number of transitions: {}".format(sparse_model.nr_transitions))
Number of transitions: 20
The model checking was also done in the sparse engine:
>>> sparse_result = stormpy.check_model_sparse(sparse_model, properties[0])
>>> initial_state = sparse_model.initial_states[0]
>>> print(sparse_result.at(initial_state))
0.16666666666666666
Symbolic engine
===============================
Instead of using the sparse engine, one can also use a symbolic representation in terms of `binary decision diagrams (BDDs)`.
To use the symbolic (dd) engine, we use the symbolic versions for the building and model checking:
>>> symbolic_model = stormpy.build_symbolic_model(prism_program, properties)
>>> print(type(symbolic_model))
<class 'stormpy.storage.storage.SymbolicSylvanDtmc'>
>>> print("Number of states: {}".format(symbolic_model.nr_states))
Number of states: 13
>>> print("Number of transitions: {}".format(symbolic_model.nr_transitions))
Number of transitions: 20
>>> symbolic_result = stormpy.check_model_dd(symbolic_model, properties[0])
>>> print(symbolic_result)
[0, 1] (range)
We can also filter the computed results and only consider the initial states:
>>> filter = stormpy.create_filter_initial_states_symbolic(symbolic_model)
>>> symbolic_result.filter(filter)
>>> print(symbolic_result.min)
0.16666650772094727
It is also possible to first build the model symbolically and then transform it into a sparse model:
>>> print(type(symbolic_model))
<class 'stormpy.storage.storage.SymbolicSylvanDtmc'>
>>> transformed_model = stormpy.transform_to_sparse_model(symbolic_model)
>>> print(type(transformed_model))
<class 'stormpy.storage.storage.SparseDtmc'>
Hybrid engine
===============================
A third possibility is to use the hybrid engine, a combination of sparse and dd engines.
It first builds the model symbolically.
The actual model checking is then performed with the engine which is deemed most suitable for the given task.
>>> print(type(symbolic_model))
<class 'stormpy.storage.storage.SymbolicSylvanDtmc'>
>>> hybrid_result = stormpy.check_model_hybrid(symbolic_model, properties[0])
>>> filter = stormpy.create_filter_initial_states_symbolic(symbolic_model)
>>> hybrid_result.filter(filter)
>>> print(hybrid_result)
0.166667

285
doc/source/doc/exploration.ipynb

@ -0,0 +1,285 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Exploring Models"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Background\n",
"\n",
"Often, stormpy is used as a testbed for new algorithms.\n",
"An essential step is to transfer the (low-level) descriptions of an MDP or other state-based model into\n",
"an own algorithm. In this section, we discuss some of the functionality."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reading MDPs\n",
"\n",
"[01-exploration.py](https://github.com/moves-rwth/stormpy/blob/master/examples/exploration/01-exploration.py)\n",
"\n",
"In [Getting Started](../getting_started.ipynb), we briefly iterated over a DTMC. In this section, we explore an MDP:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"hide-output": false,
"scrolled": true
},
"outputs": [],
"source": [
">>> import doctest\n",
">>> doctest.ELLIPSIS_MARKER = '-etc-' \n",
">>> import stormpy\n",
">>> import stormpy.examples\n",
">>> import stormpy.examples.files\n",
">>> program = stormpy.parse_prism_program(stormpy.examples.files.prism_mdp_maze)\n",
">>> prop = \"R=? [F \\\"goal\\\"]\"\n",
"\n",
">>> properties = stormpy.parse_properties_for_prism_program(prop, program, None)\n",
">>> model = stormpy.build_model(program, properties)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The iteration over the model is as before, but now, for every action, we can have several transitions:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"hide-output": false,
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"State 0 is initial\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 1\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 2\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 3\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 4\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 5\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 6\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 7\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 8\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 9\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 10\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 11\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 12\n",
"From state 0 by action 0, with probability 0.07692307692307693, go to state 13\n",
"From state 1 by action 0, with probability 1.0, go to state 2\n",
"From state 1 by action 1, with probability 1.0, go to state 1\n",
"From state 1 by action 2, with probability 1.0, go to state 1\n",
"From state 1 by action 3, with probability 1.0, go to state 6\n",
"From state 2 by action 0, with probability 1.0, go to state 3\n",
"From state 2 by action 1, with probability 1.0, go to state 1\n",
"From state 2 by action 2, with probability 1.0, go to state 2\n",
"From state 2 by action 3, with probability 1.0, go to state 2\n",
"From state 3 by action 0, with probability 1.0, go to state 4\n",
"From state 3 by action 1, with probability 1.0, go to state 2\n",
"From state 3 by action 2, with probability 1.0, go to state 3\n",
"From state 3 by action 3, with probability 1.0, go to state 7\n",
"From state 4 by action 0, with probability 1.0, go to state 5\n",
"From state 4 by action 1, with probability 1.0, go to state 3\n",
"From state 4 by action 2, with probability 1.0, go to state 4\n",
"From state 4 by action 3, with probability 1.0, go to state 4\n",
"From state 5 by action 0, with probability 1.0, go to state 5\n",
"From state 5 by action 1, with probability 1.0, go to state 4\n",
"From state 5 by action 2, with probability 1.0, go to state 5\n",
"From state 5 by action 3, with probability 1.0, go to state 8\n",
"From state 6 by action 0, with probability 1.0, go to state 6\n",
"From state 6 by action 1, with probability 1.0, go to state 6\n",
"From state 6 by action 2, with probability 1.0, go to state 1\n",
"From state 6 by action 3, with probability 1.0, go to state 9\n",
"From state 7 by action 0, with probability 1.0, go to state 7\n",
"From state 7 by action 1, with probability 1.0, go to state 7\n",
"From state 7 by action 2, with probability 1.0, go to state 3\n",
"From state 7 by action 3, with probability 1.0, go to state 10\n",
"From state 8 by action 0, with probability 1.0, go to state 8\n",
"From state 8 by action 1, with probability 1.0, go to state 8\n",
"From state 8 by action 2, with probability 1.0, go to state 5\n",
"From state 8 by action 3, with probability 1.0, go to state 11\n",
"From state 9 by action 0, with probability 1.0, go to state 9\n",
"From state 9 by action 1, with probability 1.0, go to state 9\n",
"From state 9 by action 2, with probability 1.0, go to state 6\n",
"From state 9 by action 3, with probability 1.0, go to state 12\n",
"From state 10 by action 0, with probability 1.0, go to state 10\n",
"From state 10 by action 1, with probability 1.0, go to state 10\n",
"From state 10 by action 2, with probability 1.0, go to state 7\n",
"From state 10 by action 3, with probability 1.0, go to state 14\n",
"From state 11 by action 0, with probability 1.0, go to state 10\n",
"From state 11 by action 1, with probability 1.0, go to state 10\n",
"From state 11 by action 2, with probability 1.0, go to state 8\n",
"From state 11 by action 3, with probability 1.0, go to state 13\n",
"From state 12 by action 0, with probability 1.0, go to state 12\n",
"From state 12 by action 1, with probability 1.0, go to state 12\n",
"From state 12 by action 2, with probability 1.0, go to state 9\n",
"From state 12 by action 3, with probability 1.0, go to state 12\n",
"From state 13 by action 0, with probability 1.0, go to state 13\n",
"From state 13 by action 1, with probability 1.0, go to state 13\n",
"From state 13 by action 2, with probability 1.0, go to state 11\n",
"From state 13 by action 3, with probability 1.0, go to state 13\n",
"From state 14 by action 0, with probability 1.0, go to state 14\n"
]
}
],
"source": [
">>> for state in model.states:\n",
"... if state.id in model.initial_states:\n",
"... print(\"State {} is initial\".format(state.id))\n",
"... for action in state.actions:\n",
"... for transition in action.transitions:\n",
"... print(\"From state {} by action {}, with probability {}, go to state {}\".format(state, action, transition.value(), transition.column))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Internally, storm can hold hints to the origin of the actions, which may be helpful to give meaning and for debugging.\n",
"As the availability and the encoding of this data depends on the input model, we discuss these features in highlevel_models.\n",
"\n",
"Storm currently supports deterministic rewards on states or actions. More information can be found in [Reward Models](reward_models.ipynb)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reading POMDPs\n",
"\n",
"[02-exploration.py](https://github.com/moves-rwth/stormpy/blob/master/examples/exploration/02-exploration.py)\n",
"\n",
"Internally, POMDPs extend MDPs. Thus, iterating over the MDP is done as before.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
">>> import stormpy\n",
">>> import stormpy.examples\n",
">>> import stormpy.examples.files\n",
">>> program = stormpy.parse_prism_program(stormpy.examples.files.prism_pomdp_maze)\n",
">>> prop = \"R=? [F \\\"goal\\\"]\"\n",
">>> properties = stormpy.parse_properties_for_prism_program(prop, program, None)\n",
">>> model = stormpy.build_model(program, properties)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Indeed, all that changed in the code above is the example we use.\n",
"And, that the model type now is a POMDP:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"hide-output": false,
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"ModelType.MDP\n"
]
}
],
"source": [
">>> print(model.model_type)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Additionally, POMDPs have a set of observations, which are internally just numbered by an integer from 0 to the number of observations -1"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false,
"scrolled": true
},
"outputs": [],
"source": [
">>> print(model.nr_observations)\n",
">>> for state in model.states:\n",
"... print(\"State {} has observation id {}\".format(state.id, model.observations[state.id]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Sorting states\n",
"\n",
"[03-exploration.py](https://github.com/moves-rwth/stormpy/blob/master/examples/exploration/03-exploration.py)\n",
"\n",
"Often, one may sort the states according to the graph structure.\n",
"Storm supports some of these sorting algorithms, e.g., topological sort."
]
},
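{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough sketch (not Storm's built-in routine), one can collect the successor structure with the iteration API from above and derive an ordering, e.g., a DFS post-order. On acyclic models, reversing this order yields a topological order; for models with cycles it is only a heuristic, whereas Storm's own sorting also handles strongly connected components:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> # collect the successor sets of every state\n",
">>> successors = {state.id: set() for state in model.states}\n",
">>> for state in model.states:\n",
"...     for action in state.actions:\n",
"...         for transition in action.transitions:\n",
"...             successors[state.id].add(transition.column)\n",
">>> visited = set()\n",
">>> order = []\n",
">>> def dfs(s):\n",
"...     # depth-first search, appending states in post-order\n",
"...     visited.add(s)\n",
"...     for succ in sorted(successors[s]):\n",
"...         if succ not in visited:\n",
"...             dfs(succ)\n",
"...     order.append(s)\n",
">>> for init in model.initial_states:\n",
"...     if init not in visited:\n",
"...         dfs(init)\n",
">>> print(order)"
]
},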
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reading MAs\n",
"\n",
"To be continued…"
]
}
],
"metadata": {
"date": 1598178167.1595793,
"filename": "exploration.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Exploring Models"
},
"nbformat": 4,
"nbformat_minor": 4
}

113
doc/source/doc/exploration.rst

@ -1,113 +0,0 @@
****************
Exploring Models
****************
Background
=====================
Often, stormpy is used as a testbed for new algorithms.
An essential step is to transfer the (low-level) descriptions of an MDP or other state-based model into
an own algorithm. In this section, we discuss some of the functionality.
Reading MDPs
=====================
.. seealso:: `01-exploration.py <https://github.com/moves-rwth/stormpy/blob/master/examples/exploration/01-exploration.py>`_
In :doc:`../getting_started`, we briefly iterated over a DTMC. In this section, we explore an MDP::
>>> import doctest
>>> doctest.ELLIPSIS_MARKER = '-etc-' # doctest:+ELLIPSIS
>>> import stormpy
>>> import stormpy.examples
>>> import stormpy.examples.files
>>> program = stormpy.parse_prism_program(stormpy.examples.files.prism_mdp_maze)
>>> prop = "R=? [F \"goal\"]"
>>> properties = stormpy.parse_properties_for_prism_program(prop, program, None)
>>> model = stormpy.build_model(program, properties)
The iteration over the model is as before, but now, for every action, we can have several transitions::
>>> for state in model.states:
... if state.id in model.initial_states:
... print("State {} is initial".format(state.id))
... for action in state.actions:
... for transition in action.transitions:
... print("From state {} by action {}, with probability {}, go to state {}".format(state, action, transition.value(), transition.column))
-etc-
The output (omitted for brevity) contains sentences like::
From state 1 by action 0, with probability 1.0, go to state 2
From state 1 by action 1, with probability 1.0, go to state 1
Internally, storm can hold hints to the origin of the actions, which may be helpful to give meaning and for debugging.
As the availability and the encoding of this data depends on the input model, we discuss these features in :doc:`highlevel_models`.
Storm currently supports deterministic rewards on states or actions. More information can be found in :doc:`reward_models`.
Reading POMDPs
======================
.. seealso:: `02-exploration.py <https://github.com/moves-rwth/stormpy/blob/master/examples/exploration/02-exploration.py>`_
Internally, POMDPs extend MDPs. Thus, iterating over the MDP is done as before.
>>> import stormpy
>>> import stormpy.examples
>>> import stormpy.examples.files
>>> program = stormpy.parse_prism_program(stormpy.examples.files.prism_pomdp_maze)
>>> prop = "R=? [F \"goal\"]"
>>> properties = stormpy.parse_properties_for_prism_program(prop, program, None)
>>> model = stormpy.build_model(program, properties)
Indeed, all that changed in the code above is the example we use.
And, that the model type now is a POMDP::
>>> print(model.model_type)
ModelType.POMDP
Additionally, POMDPs have a set of observations, which are internally just numbered by an integer from 0 to the number of observations -1 ::
>>> print(model.nr_observations)
8
>>> for state in model.states:
... print("State {} has observation id {}".format(state.id, model.observations[state.id]))
State 0 has observation id 6
State 1 has observation id 1
State 2 has observation id 4
State 3 has observation id 7
State 4 has observation id 4
State 5 has observation id 3
State 6 has observation id 0
State 7 has observation id 0
State 8 has observation id 0
State 9 has observation id 0
State 10 has observation id 0
State 11 has observation id 0
State 12 has observation id 2
State 13 has observation id 2
State 14 has observation id 5
Sorting states
==============
.. seealso:: `03-exploration.py <https://github.com/moves-rwth/stormpy/blob/master/examples/exploration/03-exploration.py>`_
Often, one may sort the states according to the graph structure.
Storm supports some of these sorting algorithms, e.g., topological sort.
Reading MAs
======================
To be continued...

260
doc/source/doc/gspns.ipynb

@ -0,0 +1,260 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Generalized Stochastic Petri Nets"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Loading GSPNs\n",
"\n",
"[01-gspns.py](https://github.com/moves-rwth/stormpy/blob/master/examples/gspns/01-gspns.py)\n",
"\n",
"Generalized stochastic Petri nets can be given either in the PNPRO format or in the PNML format.\n",
"We start by loading a GSPN stored in the PNML format:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy\n",
">>> import stormpy.gspn\n",
">>> import stormpy.examples\n",
">>> import stormpy.examples.files\n",
"\n",
">>> import_path = stormpy.examples.files.gspn_pnml_simple\n",
">>> gspn_parser = stormpy.gspn.GSPNParser()\n",
">>> gspn = gspn_parser.parse(import_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After loading, we can display some properties of the GSPN:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> print(\"Name of GSPN: {}.\".format(gspn.get_name()))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
">>> print(\"Number of places: {}.\".format(gspn.get_number_of_places()))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
">>> print(\"Number of immediate transitions: {}.\".format(gspn.get_number_of_immediate_transitions()))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
">>> print(\"Number of timed transitions: {}.\".format(gspn.get_number_of_timed_transitions()))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Building GSPNs\n",
"\n",
"[02-gspns.py](https://github.com/moves-rwth/stormpy/blob/master/examples/gspns/02-gspns.py)\n",
"\n",
"In the following, we describe how to construct GSPNs via the `GSPNBuilder`.\n",
"First, we create an instance of the `GSPNBuilder` and set the name of the GSPN:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> builder = stormpy.gspn.GSPNBuilder()\n",
">>> builder.set_name(\"my_gspn\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the next step, we add an immediate transition to the GSPN.\n",
"Additionally, we define the position of the transition by setting its layout information:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> it_1 = builder.add_immediate_transition(1, 0.0, \"it_1\")\n",
">>> it_layout = stormpy.gspn.LayoutInfo(1.5, 2.0)\n",
">>> builder.set_transition_layout_info(it_1, it_layout)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We add a timed transition and set its layout information:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> tt_1 = builder.add_timed_transition(0, 0.4, \"tt_1\")\n",
">>> tt_layout = stormpy.gspn.LayoutInfo(12.5, 2.0)\n",
">>> builder.set_transition_layout_info(tt_1, tt_layout)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we add two places to the GSPN and set their layouts:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> place_1 = builder.add_place(1, 1, \"place_1\")\n",
">>> p1_layout = stormpy.gspn.LayoutInfo(6.5, 2.0)\n",
">>> builder.set_place_layout_info(place_1, p1_layout)\n",
"\n",
">>> place_2 = builder.add_place(1, 0, \"place_2\")\n",
">>> p2_layout = stormpy.gspn.LayoutInfo(18.5, 2.0)\n",
">>> builder.set_place_layout_info(place_2, p2_layout)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Places and transitions can be linked by input, output and inhibition arcs.\n",
"We add the arcs of our GSPN as follows:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> builder.add_output_arc(it_1, place_1)\n",
">>> builder.add_inhibition_arc(place_1, it_1)\n",
">>> builder.add_input_arc(place_1, tt_1)\n",
">>> builder.add_output_arc(tt_1, place_2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now build the GSPN:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> gspn = builder.build_gspn()"
]
},
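{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, we can query the same statistics for the freshly built GSPN as we did for the loaded one above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> print(\"Name of GSPN: {}.\".format(gspn.get_name()))\n",
">>> print(\"Number of places: {}.\".format(gspn.get_number_of_places()))\n",
">>> print(\"Number of immediate transitions: {}.\".format(gspn.get_number_of_immediate_transitions()))\n",
">>> print(\"Number of timed transitions: {}.\".format(gspn.get_number_of_timed_transitions()))"
]
},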
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After building, we export the GSPN.\n",
"GSPNs can be saved in the PNPRO format via `export_gspn_pnpro_file(path)` and in the PNML format via `export_gspn_pnml_file(path)`.\n",
"We export the GSPN into the PNPRO format:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> export_path = stormpy.examples.files.gspn_pnpro_simple\n",
">>> gspn.export_gspn_pnpro_file(export_path)"
]
}
],
"metadata": {
"date": 1598178167.1731207,
"filename": "gspns.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Generalized Stochastic Petri Nets"
},
"nbformat": 4,
"nbformat_minor": 4
}

84
doc/source/doc/gspns.rst

@ -1,84 +0,0 @@
**********************************
Generalized Stochastic Petri Nets
**********************************
Loading GSPNs
==============
.. seealso:: `01-gspns.py <https://github.com/moves-rwth/stormpy/blob/master/examples/gspns/01-gspns.py>`_
Generalized stochastic Petri nets can be given either in the PNPRO format or in the PNML format.
We start by loading a GSPN stored in the PNML format::
>>> import stormpy
>>> import stormpy.gspn
>>> import stormpy.examples
>>> import stormpy.examples.files
>>> import_path = stormpy.examples.files.gspn_pnml_simple
>>> gspn_parser = stormpy.gspn.GSPNParser()
>>> gspn = gspn_parser.parse(import_path)
After loading, we can display some properties of the GSPN::
>>> print("Name of GSPN: {}.".format(gspn.get_name()))
Name of GSPN: simple_gspn.
>>> print("Number of places: {}.".format(gspn.get_number_of_places()))
Number of places: 4.
>>> print("Number of immediate transitions: {}.".format(gspn.get_number_of_immediate_transitions()))
Number of immediate transitions: 3.
>>> print("Number of timed transitions: {}.".format(gspn.get_number_of_timed_transitions()))
Number of timed transitions: 2.
Building GSPNs
=============================
.. seealso:: `02-gspns.py <https://github.com/moves-rwth/stormpy/blob/master/examples/gspns/02-gspns.py>`_
In the following, we describe how to construct GSPNs via the ``GSPNBuilder``.
First, we create an instance of the ``GSPNBuilder`` and set the name of the GSPN::
>>> builder = stormpy.gspn.GSPNBuilder()
>>> builder.set_name("my_gspn")
In the next step, we add an immediate transition to the GSPN.
Additionally, we define the position of the transition by setting its layout information::
>>> it_1 = builder.add_immediate_transition(1, 0.0, "it_1")
>>> it_layout = stormpy.gspn.LayoutInfo(1.5, 2.0)
>>> builder.set_transition_layout_info(it_1, it_layout)
We add a timed transition and set its layout information::
>>> tt_1 = builder.add_timed_transition(0, 0.4, "tt_1")
>>> tt_layout = stormpy.gspn.LayoutInfo(12.5, 2.0)
>>> builder.set_transition_layout_info(tt_1, tt_layout)
Next, we add two places to the GSPN and set their layouts::
>>> place_1 = builder.add_place(1, 1, "place_1")
>>> p1_layout = stormpy.gspn.LayoutInfo(6.5, 2.0)
>>> builder.set_place_layout_info(place_1, p1_layout)
>>> place_2 = builder.add_place(1, 0, "place_2")
>>> p2_layout = stormpy.gspn.LayoutInfo(18.5, 2.0)
>>> builder.set_place_layout_info(place_2, p2_layout)
Places and transitions can be linked by input, output and inhibition arcs.
We add the arcs of our GSPN as follows::
>>> builder.add_output_arc(it_1, place_1)
>>> builder.add_inhibition_arc(place_1, it_1)
>>> builder.add_input_arc(place_1, tt_1)
>>> builder.add_output_arc(tt_1, place_2)
We can now build the GSPN::
>>> gspn = builder.build_gspn()
After building, we export the GSPN.
GSPNs can be saved in the PNPRO format via ``export_gspn_pnpro_file(path)`` and in the PNML format via ``export_gspn_pnml_file(path)``.
We export the GSPN into the PNPRO format::
>>> export_path = stormpy.examples.files.gspn_pnpro_simple
>>> gspn.export_gspn_pnpro_file(export_path)

189
doc/source/doc/models/.ipynb_checkpoints/building_ctmcs-checkpoint.ipynb

@ -0,0 +1,189 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Continuous-time Markov chains (CTMCs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Background\n",
"\n",
"In this section, we explain how Stormpy can be used to build a simple CTMC.\n",
"Building CTMCs works similar to building DTMCs as in [Discrete-time Markov chains (DTMCs)](building_dtmcs.ipynb), but additionally every state is equipped with an exit rate.\n",
"\n",
"[01-building-ctmcs.py](https://github.com/moves-rwth/stormpy/blob/master/examples/building_ctmcs/01-building-ctmcs.py)\n",
"\n",
"First, we import Stormpy:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Transition Matrix\n",
"\n",
"In this example, we build the transition matrix using a numpy array"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
">>> import numpy as np\n",
">>> transitions = np.array([\n",
"... [0, 1.5, 0, 0],\n",
"... [3, 0, 1.5, 0],\n",
"... [0, 3, 0, 1.5],\n",
"... [0, 0, 3, 0], ], dtype='float64')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following function call returns a sparse matrix with default row groups:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> transition_matrix = stormpy.build_sparse_matrix(transitions)\n",
">>> print(transition_matrix) "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Labeling\n",
"\n",
"The state labeling is created analogously to the previous example in [building DTMCs](building_dtmcs.ipynb#labeling):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> state_labeling = stormpy.storage.StateLabeling(4)\n",
">>> state_labels = {'empty', 'init', 'deadlock', 'full'}\n",
">>> for label in state_labels:\n",
"... state_labeling.add_label(label)\n",
">>> state_labeling.add_label_to_state('init', 0)\n",
">>> state_labeling.add_label_to_state('empty', 0)\n",
">>> state_labeling.add_label_to_state('full', 3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exit Rates\n",
"\n",
"Lastly, we initialize a list to equip every state with an exit rate:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> exit_rates = [1.5, 4.5, 4.5, 3.0]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Building the Model\n",
"\n",
"Now, we can collect all components, including the choice labeling and the exit rates.\n",
"To let the transition values be interpreted as rates we set rate_transitions to True:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> components = stormpy.SparseModelComponents(transition_matrix=transition_matrix, state_labeling=state_labeling, rate_transitions=True)\n",
">>> components.exit_rates = exit_rates"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And finally, we can build the model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> ctmc = stormpy.storage.SparseCtmc(components)\n",
">>> print(ctmc) "
]
}
],
"metadata": {
"date": 1598178167.1853151,
"filename": "building_ctmcs.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Continuous-time Markov chains (CTMCs)"
},
"nbformat": 4,
"nbformat_minor": 4
}

328
doc/source/doc/models/.ipynb_checkpoints/building_dtmcs-checkpoint.ipynb

@ -0,0 +1,328 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Discrete-time Markov chains (DTMCs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Background\n",
"\n",
"As described in [Getting Started](../../getting_started.ipynb),\n",
"Storm can be used to translate a model description e.g. in form of a prism file into a Markov chain.\n",
"\n",
"Here, we use Stormpy to create the components for a model and build a DTMC directly from these components without parsing a model description.\n",
"We consider the previous example of the Knuth-Yao die.\n",
"\n",
"[01-building-dtmcs.py](https://github.com/moves-rwth/stormpy/blob/master/examples/building_dtmcs/01-building-dtmcs.py)\n",
"\n",
"In the following we create the transition matrix, the state labeling and the reward models of a DTMC.\n",
"First, we import stormpy:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Transition Matrix\n",
"\n",
"We begin by creating the matrix representing the transitions in the model in terms of probabilities.\n",
"For constructing the transition matrix, we use the SparseMatrixBuilder:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> builder = stormpy.SparseMatrixBuilder(rows = 0, columns = 0, entries = 0, force_dimensions = False, has_custom_row_grouping = False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here, we start with an empty matrix to later insert more entries.\n",
"If the number of rows, columns and entries is known, the matrix can be constructed using these values.\n",
"\n",
"For DTMCs each state has at most one outgoing probability distribution.\n",
"Thus, we create a matrix with trivial row grouping where each group contains one row representing the state action.\n",
"In [Markov decision processes (MDPs)](building_mdps.ipynb) we will revisit the example of the die, but extend the model with nondeterministic choice.\n",
"\n",
"We specify the transitions of the model by adding values to the matrix where the column represents the target state.\n",
"All transitions are equipped with a probability defined by the value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> builder.add_next_value(row = 0, column = 1, value = 0.5)\n",
">>> builder.add_next_value(0, 2, 0.5)\n",
">>> builder.add_next_value(1, 3, 0.5)\n",
">>> builder.add_next_value(1, 4, 0.5)\n",
">>> builder.add_next_value(2, 5, 0.5)\n",
">>> builder.add_next_value(2, 6, 0.5)\n",
">>> builder.add_next_value(3, 7, 0.5)\n",
">>> builder.add_next_value(3, 1, 0.5)\n",
">>> builder.add_next_value(4, 8, 0.5)\n",
">>> builder.add_next_value(4, 9, 0.5)\n",
">>> builder.add_next_value(5, 10, 0.5)\n",
">>> builder.add_next_value(5, 11, 0.5)\n",
">>> builder.add_next_value(6, 2, 0.5)\n",
">>> builder.add_next_value(6, 12, 0.5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Lastly, we add a self-loop with probability one to the final states:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> for s in range(7,13):\n",
"... builder.add_next_value(s, s, 1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we can build the matrix:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> transition_matrix = builder.build()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It should be noted that entries can only be inserted in ascending order, i.e. row by row and column by column.\n",
"Stormpy provides the possibility to build a sparse matrix using the numpy library ([https://numpy.org/](https://numpy.org/) )\n",
"Instead of using the SparseMatrixBuilder, a sparse matrix can be build from a numpy array via the method stormpy.build_sparse_matrix.\n",
"An example is given in [building CTMCs](building_ctmcs.ipynb#transition-matrix)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Labeling\n",
"\n",
"States can be labeled with sets of propositions, for example state 0 can be labeled with “init”.\n",
"In order to specify the state labeling we create an empty labeling for the given number of states and add the labels to the labeling:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> state_labeling = stormpy.storage.StateLabeling(13)\n",
"\n",
">>> labels = {'init', 'one', 'two', 'three', 'four', 'five', 'six', 'done', 'deadlock'}\n",
">>> for label in labels:\n",
"... state_labeling.add_label(label)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Labels can be asociated with states. As an example, we label the state 0 with “init”:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> state_labeling.add_label_to_state('init', 0)\n",
">>> print(state_labeling.get_states('init'))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we set the associations between the remaining labels and states.:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> state_labeling.add_label_to_state('one', 7)\n",
">>> state_labeling.add_label_to_state('two', 8)\n",
">>> state_labeling.add_label_to_state('three', 9)\n",
">>> state_labeling.add_label_to_state('four', 10)\n",
">>> state_labeling.add_label_to_state('five', 11)\n",
">>> state_labeling.add_label_to_state('six', 12)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To set the same label for multiple states, we can use a BitVector representation for the set of states:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> state_labeling.set_states('done', stormpy.BitVector(13, [7, 8, 9, 10, 11, 12]))\n",
">>> print(state_labeling) "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Defining a choice labeling is possible in a similar way."
]
},
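{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, a minimal sketch for this DTMC could look as follows (the label name 'flip' is made up for illustration):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> # Illustrative sketch: this DTMC has one choice per state, i.e. 13 choices in total\n",
">>> choice_labeling = stormpy.storage.ChoiceLabeling(13)\n",
">>> choice_labeling.add_label('flip')\n",
">>> choice_labeling.add_label_to_choice('flip', 0)"
]
},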
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reward Models\n",
"\n",
"Stormpy supports multiple reward models such as state rewards, state-action rewards and as transition rewards.\n",
"In this example, the actions of states which satisfy s<7 acquire a reward of 1.0.\n",
"\n",
"The state-action rewards are represented by a vector, which is associated to a reward model named “coin_flips”:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> reward_models = {}\n",
">>> action_reward = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]\n",
">>> reward_models['coin_flips'] = stormpy.SparseRewardModel(optional_state_action_reward_vector = action_reward)"
]
},
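{
"cell_type": "markdown",
"metadata": {},
"source": [
"A state reward model can be added analogously. The following sketch assumes that stormpy.SparseRewardModel also accepts an optional_state_reward_vector argument; the reward model name and the values are made up for illustration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> # Illustrative sketch (assumed constructor argument): reward 1.0 for every final state\n",
">>> state_reward = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]\n",
">>> reward_models['die_finished'] = stormpy.SparseRewardModel(optional_state_reward_vector=state_reward)"
]
},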
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Building the Model\n",
"\n",
"Next, we collect all components:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> components = stormpy.SparseModelComponents(transition_matrix=transition_matrix, state_labeling=state_labeling, reward_models=reward_models)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And finally, we can build the DTMC:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> dtmc = stormpy.storage.SparseDtmc(components)\n",
">>> print(dtmc) "
]
}
],
"metadata": {
"date": 1598178167.203723,
"filename": "building_dtmcs.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Discrete-time Markov chains (DTMCs)"
},
"nbformat": 4,
"nbformat_minor": 4
}

211
doc/source/doc/models/.ipynb_checkpoints/building_mas-checkpoint.ipynb

@ -0,0 +1,211 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Markov automata (MAs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Background\n",
"\n",
"We already saw the process of building [CTMCs](building_ctmcs.ipynb) and [MDPs](building_mdps.ipynb) via Stormpy.\n",
"\n",
"Markov automata use states that are probabilistic, i.e. like the states of an MDP, or Markovian, i.e. like the states of a CTMC.\n",
"\n",
"In this section, we build a small MA with five states from which the first four are Markovian.\n",
"Since we covered labeling and exit rates already in the previous examples we omit the description of these components.\n",
"The full example can be found here:\n",
"\n",
"[01-building-mas.py](https://github.com/moves-rwth/stormpy/blob/master/examples/building_mas/01-building-mas.py)\n",
"\n",
"First, we import Stormpy:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Transition Matrix\n",
"\n",
"For [building MDPS](building_mdps.ipynb#transition-matrix), we used the SparseMatrixBuilder to create a matrix with a custom row grouping.\n",
"In this example, we use the numpy library.\n",
"\n",
"In the beginning, we create a numpy array that will be used to build the transition matrix of our model.:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import numpy as np\n",
">>> transitions = np.array([\n",
"... [0, 1, 0, 0, 0],\n",
"... [0.8, 0, 0.2, 0, 0],\n",
"... [0.9, 0, 0, 0.1, 0],\n",
"... [0, 0, 0, 0, 1],\n",
"... [0, 0, 0, 1, 0],\n",
"... [0, 0, 0, 0, 1]], dtype='float64')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When building the matrix we define a custom row grouping by passing a list containing the starting row of each row group in ascending order:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> transition_matrix = stormpy.build_sparse_matrix(transitions, [0, 2, 3, 4, 5])\n",
">>> print(transition_matrix) "
]
},
{
"cell_type": "markdown",
"metadata": {
"nbsphinx": "hidden"
},
"source": [
"## Labeling and Exit Rates"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"nbsphinx": "hidden"
},
"outputs": [],
"source": [
"\n",
">>> state_labeling = stormpy.storage.StateLabeling(5)\n",
">>> state_labels = {'init', 'deadlock'}\n",
">>> for label in state_labels:\n",
"... state_labeling.add_label(label)\n",
">>> state_labeling.add_label_to_state('init', 0)\n",
"\n",
">>> choice_labeling = stormpy.storage.ChoiceLabeling(6)\n",
">>> choice_labels = {'alpha', 'beta'}\n",
">>> for label in choice_labels:\n",
"... choice_labeling.add_label(label)\n",
">>> choice_labeling.add_label_to_choice('alpha', 0)\n",
">>> choice_labeling.add_label_to_choice('beta', 1)\n",
"\n",
">>> exit_rates = [0.0, 10.0, 12.0, 1.0, 1.0]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Markovian States\n",
"\n",
"In order to define which states have only one probability distribution over the successor states,\n",
"we build a BitVector that contains the respective Markovian states:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> markovian_states = stormpy.BitVector(5, [1, 2, 3, 4])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Building the Model\n",
"\n",
"Now, we can collect all components:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> components = stormpy.SparseModelComponents(transition_matrix=transition_matrix, state_labeling=state_labeling, markovian_states=markovian_states)\n",
">>> components.choice_labeling = choice_labeling\n",
">>> components.exit_rates = exit_rates"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we can build the model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> ma = stormpy.storage.SparseMA(components)\n",
">>> print(ma) "
]
}
],
"metadata": {
"celltoolbar": "Edit Metadata",
"date": 1598178167.2185411,
"filename": "building_mas.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Markov automata (MAs)"
},
"nbformat": 4,
"nbformat_minor": 4
}

309
doc/source/doc/models/.ipynb_checkpoints/building_mdps-checkpoint.ipynb

@ -0,0 +1,309 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Markov decision processes (MDPs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Background\n",
"\n",
"In [Discrete-time Markov chains (DTMCs)](building_dtmcs.ipynb) we modelled Knuth-Yao’s model of a fair die by the means of a DTMC.\n",
"In the following we extend this model with nondeterministic choice by building a Markov decision process.\n",
"\n",
"[01-building-mdps.py](https://github.com/moves-rwth/stormpy/blob/master/examples/building_mdps/01-building-mdps.py)\n",
"\n",
"First, we import Stormpy:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Transition Matrix\n",
"\n",
"Since we want to build a nondeterminstic model, we create a transition matrix with a custom row group for each state:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> builder = stormpy.SparseMatrixBuilder(rows=0, columns=0, entries=0, force_dimensions=False, has_custom_row_grouping=True, row_groups=0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We need more than one row for the transitions starting in state 0 because a nondeterministic choice over the actions is available.\n",
"Therefore, we start a new group that will contain the rows representing actions of state 0.\n",
"Note that the row group needs to be added before any entries are added to the group:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> builder.new_row_group(0)\n",
">>> builder.add_next_value(0, 1, 0.5)\n",
">>> builder.add_next_value(0, 2, 0.5)\n",
">>> builder.add_next_value(1, 1, 0.2)\n",
">>> builder.add_next_value(1, 2, 0.8)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example, we have two nondeterministic choices in state 0.\n",
"With choice 0 we have probability 0.5 to got to state 1 and probability 0.5 to got to state 2.\n",
"With choice 1 we got to state 1 with probability 0.2 and go to state 2 with probability 0.8.\n",
"\n",
"For the remaining states, we need to specify the starting rows of each row group:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> builder.new_row_group(2)\n",
">>> builder.add_next_value(2, 3, 0.5)\n",
">>> builder.add_next_value(2, 4, 0.5)\n",
">>> builder.new_row_group(3)\n",
">>> builder.add_next_value(3, 5, 0.5)\n",
">>> builder.add_next_value(3, 6, 0.5)\n",
">>> builder.new_row_group(4)\n",
">>> builder.add_next_value(4, 7, 0.5)\n",
">>> builder.add_next_value(4, 1, 0.5)\n",
">>> builder.new_row_group(5)\n",
">>> builder.add_next_value(5, 8, 0.5)\n",
">>> builder.add_next_value(5, 9, 0.5)\n",
">>> builder.new_row_group(6)\n",
">>> builder.add_next_value(6, 10, 0.5)\n",
">>> builder.add_next_value(6, 11, 0.5)\n",
">>> builder.new_row_group(7)\n",
">>> builder.add_next_value(7, 2, 0.5)\n",
">>> builder.add_next_value(7, 12, 0.5)\n",
"\n",
">>> for s in range(8, 14):\n",
"... builder.new_row_group(s)\n",
"... builder.add_next_value(s, s - 1, 1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we build the transition matrix:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> transition_matrix = builder.build()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Labeling\n",
"\n",
"We have seen the construction of a state labeling in previous examples. Therefore we omit the description here\n",
"Instead, we focus on the choices.\n",
"Since in state 0 a nondeterministic choice over two actions is available, the number of choices is 14.\n",
"To distinguish those we can define a choice labeling:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"nbsphinx": "hidden"
},
"outputs": [],
"source": [
">>> state_labeling = stormpy.storage.StateLabeling(13)\n",
">>> labels = {'init', 'one', 'two', 'three', 'four', 'five', 'six', 'done', 'deadlock'}\n",
">>> for label in labels:\n",
"... state_labeling.add_label(label)\n",
"\n",
">>> state_labeling.add_label_to_state('init', 0)\n",
">>> state_labeling.add_label_to_state('one', 7)\n",
">>> state_labeling.add_label_to_state('two', 8)\n",
">>> state_labeling.add_label_to_state('three', 9)\n",
">>> state_labeling.add_label_to_state('four', 10)\n",
">>> state_labeling.add_label_to_state('five', 11)\n",
">>> state_labeling.add_label_to_state('six', 12)\n",
">>> state_labeling.set_states('done', stormpy.BitVector(13, [7, 8, 9, 10, 11, 12]))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> choice_labeling = stormpy.storage.ChoiceLabeling(14)\n",
">>> choice_labels = {'a', 'b'}\n",
"\n",
">>> for label in choice_labels:\n",
"... choice_labeling.add_label(label)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We assign the label ‘a’ to the first action of state 0 and ‘b’ to the second.\n",
"Recall that those actions where defined in row one and two of the transition matrix respectively:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> choice_labeling.add_label_to_choice('a', 0)\n",
">>> choice_labeling.add_label_to_choice('b', 1)\n",
">>> print(choice_labeling) "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reward models\n",
"\n",
"In this reward model the length of the action rewards coincides with the number of choices:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> reward_models = {}\n",
">>> action_reward = [0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]\n",
">>> reward_models['coin_flips'] = stormpy.SparseRewardModel(optional_state_action_reward_vector=action_reward)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Building the Model\n",
"\n",
"We collect the components:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> components = stormpy.SparseModelComponents(transition_matrix=transition_matrix, state_labeling=state_labeling, reward_models=reward_models, rate_transitions=False)\n",
">>> components.choice_labeling = choice_labeling"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We build the model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> mdp = stormpy.storage.SparseMdp(components)\n",
">>> print(mdp) "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Partially observable Markov decision process (POMDPs)\n",
"\n",
"To build a partially observable Markov decision process (POMDP),\n",
"components.observations can be set to a list of numbers that defines the status of the observables in each state."
]
}
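{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch, every one of the 13 states is assigned an observation index before building the POMDP. The observation values and the use of stormpy.storage.SparsePomdp below are illustrative assumptions, not part of the example script:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> # Illustrative sketch: observation 0 for the intermediate states, observation 1 for the final states\n",
">>> components.observations = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]\n",
">>> pomdp = stormpy.storage.SparsePomdp(components)"
]
}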
],
"metadata": {
"celltoolbar": "Edit Metadata",
"date": 1598178167.234528,
"filename": "building_mdps.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Markov decision processes (MDPs)"
},
"nbformat": 4,
"nbformat_minor": 4
}

189
doc/source/doc/models/building_ctmcs.ipynb

@ -0,0 +1,189 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Continuous-time Markov chains (CTMCs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Background\n",
"\n",
"In this section, we explain how Stormpy can be used to build a simple CTMC.\n",
"Building CTMCs works similar to building DTMCs as in [Discrete-time Markov chains (DTMCs)](building_dtmcs.ipynb), but additionally every state is equipped with an exit rate.\n",
"\n",
"[01-building-ctmcs.py](https://github.com/moves-rwth/stormpy/blob/master/examples/building_ctmcs/01-building-ctmcs.py)\n",
"\n",
"First, we import Stormpy:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Transition Matrix\n",
"\n",
"In this example, we build the transition matrix using a numpy array"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
">>> import numpy as np\n",
">>> transitions = np.array([\n",
"... [0, 1.5, 0, 0],\n",
"... [3, 0, 1.5, 0],\n",
"... [0, 3, 0, 1.5],\n",
"... [0, 0, 3, 0], ], dtype='float64')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following function call returns a sparse matrix with default row groups:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> transition_matrix = stormpy.build_sparse_matrix(transitions)\n",
">>> print(transition_matrix) "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Labeling\n",
"\n",
"The state labeling is created analogously to the previous example in [building DTMCs](building_dtmcs.ipynb#labeling):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> state_labeling = stormpy.storage.StateLabeling(4)\n",
">>> state_labels = {'empty', 'init', 'deadlock', 'full'}\n",
">>> for label in state_labels:\n",
"... state_labeling.add_label(label)\n",
">>> state_labeling.add_label_to_state('init', 0)\n",
">>> state_labeling.add_label_to_state('empty', 0)\n",
">>> state_labeling.add_label_to_state('full', 3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exit Rates\n",
"\n",
"Lastly, we initialize a list to equip every state with an exit rate:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> exit_rates = [1.5, 4.5, 4.5, 3.0]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Building the Model\n",
"\n",
"Now, we can collect all components, including the choice labeling and the exit rates.\n",
"To let the transition values be interpreted as rates we set rate_transitions to True:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> components = stormpy.SparseModelComponents(transition_matrix=transition_matrix, state_labeling=state_labeling, rate_transitions=True)\n",
">>> components.exit_rates = exit_rates"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And finally, we can build the model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> ctmc = stormpy.storage.SparseCtmc(components)\n",
">>> print(ctmc) "
]
}
],
"metadata": {
"date": 1598178167.1853151,
"filename": "building_ctmcs.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Continuous-time Markov chains (CTMCs)"
},
"nbformat": 4,
"nbformat_minor": 4
}

103
doc/source/doc/models/building_ctmcs.rst

@ -1,103 +0,0 @@
**************************************
Continuous-time Markov chains (CTMCs)
**************************************
.. check if the following doctest should be run (and hide it in Sphinx)
>>> # Skip tests if numpy is not available
>>> import pytest
>>> try:
... import numpy as np
... except ModuleNotFoundError:
... np = None
>>> if np is None:
... pytest.skip("skipping the doctest below since it's not going to work.")
Background
=====================
In this section, we explain how Stormpy can be used to build a simple CTMC.
Building CTMCs works similar to building DTMCs as in :doc:`building_dtmcs`, but additionally every state is equipped with an exit rate.
.. seealso:: `01-building-ctmcs.py <https://github.com/moves-rwth/stormpy/blob/master/examples/building_ctmcs/01-building-ctmcs.py>`_
First, we import Stormpy::
>>> import stormpy
Transition Matrix
=====================
In this example, we build the transition matrix using a numpy array
>>> import numpy as np
>>> transitions = np.array([
... [0, 1.5, 0, 0],
... [3, 0, 1.5, 0],
... [0, 3, 0, 1.5],
... [0, 0, 3, 0], ], dtype='float64')
The following function call returns a sparse matrix with default row groups::
>>> transition_matrix = stormpy.build_sparse_matrix(transitions)
>>> print(transition_matrix) # doctest: +SKIP
0 1 2 3
---- group 0/3 ----
0 ( 0 1.5 0 0 ) 0
---- group 1/3 ----
1 ( 3 0 1.5 0 ) 1
---- group 2/3 ----
2 ( 0 3 0 1.5 ) 2
---- group 3/3 ----
3 ( 0 0 3 0 ) 3
0 1 2 3
Labeling
================
The state labeling is created analogously to the previous example in :ref:`building DTMCs<doc/models/building_dtmcs:Labeling>`::
>>> state_labeling = stormpy.storage.StateLabeling(4)
>>> state_labels = {'empty', 'init', 'deadlock', 'full'}
>>> for label in state_labels:
... state_labeling.add_label(label)
>>> state_labeling.add_label_to_state('init', 0)
>>> state_labeling.add_label_to_state('empty', 0)
>>> state_labeling.add_label_to_state('full', 3)
Exit Rates
====================
Lastly, we initialize a list to equip every state with an exit rate::
>>> exit_rates = [1.5, 4.5, 4.5, 3.0]
Building the Model
====================
Now, we can collect all components, including the choice labeling and the exit rates.
To let the transition values be interpreted as rates we set `rate_transitions` to `True`::
>>> components = stormpy.SparseModelComponents(transition_matrix=transition_matrix, state_labeling=state_labeling, rate_transitions=True)
>>> components.exit_rates = exit_rates
And finally, we can build the model::
>>> ctmc = stormpy.storage.SparseCtmc(components)
>>> print(ctmc) # doctest: +SKIP
--------------------------------------------------------------
Model type: CTMC (sparse)
States: 4
Transitions: 6
Reward Models: none
State Labels: 4 labels
* init -> 1 item(s)
* empty -> 1 item(s)
* deadlock -> 0 item(s)
* full -> 1 item(s)
Choice Labels: none
--------------------------------------------------------------

328
doc/source/doc/models/building_dtmcs.ipynb

@ -0,0 +1,328 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Discrete-time Markov chains (DTMCs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Background\n",
"\n",
"As described in [Getting Started](../../getting_started.ipynb),\n",
"Storm can be used to translate a model description e.g. in form of a prism file into a Markov chain.\n",
"\n",
"Here, we use Stormpy to create the components for a model and build a DTMC directly from these components without parsing a model description.\n",
"We consider the previous example of the Knuth-Yao die.\n",
"\n",
"[01-building-dtmcs.py](https://github.com/moves-rwth/stormpy/blob/master/examples/building_dtmcs/01-building-dtmcs.py)\n",
"\n",
"In the following we create the transition matrix, the state labeling and the reward models of a DTMC.\n",
"First, we import stormpy:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Transition Matrix\n",
"\n",
"We begin by creating the matrix representing the transitions in the model in terms of probabilities.\n",
"For constructing the transition matrix, we use the SparseMatrixBuilder:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> builder = stormpy.SparseMatrixBuilder(rows = 0, columns = 0, entries = 0, force_dimensions = False, has_custom_row_grouping = False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here, we start with an empty matrix to later insert more entries.\n",
"If the number of rows, columns and entries is known, the matrix can be constructed using these values.\n",
"\n",
"For DTMCs each state has at most one outgoing probability distribution.\n",
"Thus, we create a matrix with trivial row grouping where each group contains one row representing the state action.\n",
"In [Markov decision processes (MDPs)](building_mdps.ipynb) we will revisit the example of the die, but extend the model with nondeterministic choice.\n",
"\n",
"We specify the transitions of the model by adding values to the matrix where the column represents the target state.\n",
"All transitions are equipped with a probability defined by the value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> builder.add_next_value(row = 0, column = 1, value = 0.5)\n",
">>> builder.add_next_value(0, 2, 0.5)\n",
">>> builder.add_next_value(1, 3, 0.5)\n",
">>> builder.add_next_value(1, 4, 0.5)\n",
">>> builder.add_next_value(2, 5, 0.5)\n",
">>> builder.add_next_value(2, 6, 0.5)\n",
">>> builder.add_next_value(3, 7, 0.5)\n",
">>> builder.add_next_value(3, 1, 0.5)\n",
">>> builder.add_next_value(4, 8, 0.5)\n",
">>> builder.add_next_value(4, 9, 0.5)\n",
">>> builder.add_next_value(5, 10, 0.5)\n",
">>> builder.add_next_value(5, 11, 0.5)\n",
">>> builder.add_next_value(6, 2, 0.5)\n",
">>> builder.add_next_value(6, 12, 0.5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Lastly, we add a self-loop with probability one to the final states:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> for s in range(7,13):\n",
"... builder.add_next_value(s, s, 1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we can build the matrix:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> transition_matrix = builder.build()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It should be noted that entries can only be inserted in ascending order, i.e. row by row and column by column.\n",
"Stormpy provides the possibility to build a sparse matrix using the numpy library ([https://numpy.org/](https://numpy.org/) )\n",
"Instead of using the SparseMatrixBuilder, a sparse matrix can be build from a numpy array via the method stormpy.build_sparse_matrix.\n",
"An example is given in [building CTMCs](building_ctmcs.ipynb#transition-matrix)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Labeling\n",
"\n",
"States can be labeled with sets of propositions, for example state 0 can be labeled with “init”.\n",
"In order to specify the state labeling we create an empty labeling for the given number of states and add the labels to the labeling:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> state_labeling = stormpy.storage.StateLabeling(13)\n",
"\n",
">>> labels = {'init', 'one', 'two', 'three', 'four', 'five', 'six', 'done', 'deadlock'}\n",
">>> for label in labels:\n",
"... state_labeling.add_label(label)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Labels can be asociated with states. As an example, we label the state 0 with “init”:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> state_labeling.add_label_to_state('init', 0)\n",
">>> print(state_labeling.get_states('init'))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we set the associations between the remaining labels and states.:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> state_labeling.add_label_to_state('one', 7)\n",
">>> state_labeling.add_label_to_state('two', 8)\n",
">>> state_labeling.add_label_to_state('three', 9)\n",
">>> state_labeling.add_label_to_state('four', 10)\n",
">>> state_labeling.add_label_to_state('five', 11)\n",
">>> state_labeling.add_label_to_state('six', 12)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To set the same label for multiple states, we can use a BitVector representation for the set of states:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> state_labeling.set_states('done', stormpy.BitVector(13, [7, 8, 9, 10, 11, 12]))\n",
">>> print(state_labeling) "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Defining a choice labeling is possible in a similar way."
]
},
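{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, a minimal sketch for this DTMC could look as follows (the label name 'flip' is made up for illustration):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> # Illustrative sketch: this DTMC has one choice per state, i.e. 13 choices in total\n",
">>> choice_labeling = stormpy.storage.ChoiceLabeling(13)\n",
">>> choice_labeling.add_label('flip')\n",
">>> choice_labeling.add_label_to_choice('flip', 0)"
]
},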
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reward Models\n",
"\n",
"Stormpy supports multiple reward models such as state rewards, state-action rewards and as transition rewards.\n",
"In this example, the actions of states which satisfy s<7 acquire a reward of 1.0.\n",
"\n",
"The state-action rewards are represented by a vector, which is associated to a reward model named “coin_flips”:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> reward_models = {}\n",
">>> action_reward = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]\n",
">>> reward_models['coin_flips'] = stormpy.SparseRewardModel(optional_state_action_reward_vector = action_reward)"
]
},
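{
"cell_type": "markdown",
"metadata": {},
"source": [
"A state reward model can be added analogously. The following sketch assumes that stormpy.SparseRewardModel also accepts an optional_state_reward_vector argument; the reward model name and the values are made up for illustration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> # Illustrative sketch (assumed constructor argument): reward 1.0 for every final state\n",
">>> state_reward = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]\n",
">>> reward_models['die_finished'] = stormpy.SparseRewardModel(optional_state_reward_vector=state_reward)"
]
},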
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Building the Model\n",
"\n",
"Next, we collect all components:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> components = stormpy.SparseModelComponents(transition_matrix=transition_matrix, state_labeling=state_labeling, reward_models=reward_models)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And finally, we can build the DTMC:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> dtmc = stormpy.storage.SparseDtmc(components)\n",
">>> print(dtmc) "
]
}
],
"metadata": {
"date": 1598178167.203723,
"filename": "building_dtmcs.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Discrete-time Markov chains (DTMCs)"
},
"nbformat": 4,
"nbformat_minor": 4
}

151
doc/source/doc/models/building_dtmcs.rst

@ -1,151 +0,0 @@
************************************
Discrete-time Markov chains (DTMCs)
************************************
Background
=====================
As described in :doc:`../../getting_started`,
Storm can be used to translate a model description e.g. in form of a prism file into a Markov chain.
Here, we use Stormpy to create the components for a model and build a DTMC directly from these components without parsing a model description.
We consider the previous example of the Knuth-Yao die.
.. seealso:: `01-building-dtmcs.py <https://github.com/moves-rwth/stormpy/blob/master/examples/building_dtmcs/01-building-dtmcs.py>`_
In the following we create the transition matrix, the state labeling and the reward models of a DTMC.
First, we import stormpy::
>>> import stormpy
Transition Matrix
=====================
We begin by creating the matrix representing the transitions in the model in terms of probabilities.
For constructing the transition matrix, we use the SparseMatrixBuilder::
>>> builder = stormpy.SparseMatrixBuilder(rows = 0, columns = 0, entries = 0, force_dimensions = False, has_custom_row_grouping = False)
Here, we start with an empty matrix to later insert more entries.
If the number of rows, columns and entries is known, the matrix can be constructed using these values.
For DTMCs each state has at most one outgoing probability distribution.
Thus, we create a matrix with trivial row grouping where each group contains one row representing the state action.
In :doc:`building_mdps` we will revisit the example of the die, but extend the model with nondeterministic choice.
We specify the transitions of the model by adding values to the matrix where the column represents the target state.
All transitions are equipped with a probability defined by the value::
>>> builder.add_next_value(row = 0, column = 1, value = 0.5)
>>> builder.add_next_value(0, 2, 0.5)
>>> builder.add_next_value(1, 3, 0.5)
>>> builder.add_next_value(1, 4, 0.5)
>>> builder.add_next_value(2, 5, 0.5)
>>> builder.add_next_value(2, 6, 0.5)
>>> builder.add_next_value(3, 7, 0.5)
>>> builder.add_next_value(3, 1, 0.5)
>>> builder.add_next_value(4, 8, 0.5)
>>> builder.add_next_value(4, 9, 0.5)
>>> builder.add_next_value(5, 10, 0.5)
>>> builder.add_next_value(5, 11, 0.5)
>>> builder.add_next_value(6, 2, 0.5)
>>> builder.add_next_value(6, 12, 0.5)
Lastly, we add a self-loop with probability one to the final states::
>>> for s in range(7,13):
... builder.add_next_value(s, s, 1)
Finally, we can build the matrix::
>>> transition_matrix = builder.build()
It should be noted that entries can only be inserted in ascending order, i.e. row by row and column by column.
Stormpy provides the possibility to build a sparse matrix using the numpy library (https://numpy.org/ )
Instead of using the SparseMatrixBuilder, a sparse matrix can be build from a numpy array via the method `stormpy.build_sparse_matrix`.
An example is given in :ref:`building CTMCs <doc/models/building_ctmcs:Transition Matrix>`.
Labeling
====================
States can be labeled with sets of propositions, for example state 0 can be labeled with "init".
In order to specify the state labeling we create an empty labeling for the given number of states and add the labels to the labeling::
>>> state_labeling = stormpy.storage.StateLabeling(13)
>>> labels = {'init', 'one', 'two', 'three', 'four', 'five', 'six', 'done', 'deadlock'}
>>> for label in labels:
... state_labeling.add_label(label)
Labels can be asociated with states. As an example, we label the state 0 with "init"::
>>> state_labeling.add_label_to_state('init', 0)
>>> print(state_labeling.get_states('init'))
bit vector(1/13) [0 ]
Next, we set the associations between the remaining labels and states.::
>>> state_labeling.add_label_to_state('one', 7)
>>> state_labeling.add_label_to_state('two', 8)
>>> state_labeling.add_label_to_state('three', 9)
>>> state_labeling.add_label_to_state('four', 10)
>>> state_labeling.add_label_to_state('five', 11)
>>> state_labeling.add_label_to_state('six', 12)
To set the same label for multiple states, we can use a BitVector representation for the set of states::
>>> state_labeling.set_states('done', stormpy.BitVector(13, [7, 8, 9, 10, 11, 12]))
>>> print(state_labeling) # doctest: +SKIP
9 labels
* one -> 1 item(s)
* four -> 1 item(s)
* done -> 6 item(s)
* three -> 1 item(s)
* init -> 1 item(s)
* two -> 1 item(s)
* six -> 1 item(s)
* deadlock -> 0 item(s)
* five -> 1 item(s)
Defining a choice labeling is possible in a similar way.
Reward Models
====================
Stormpy supports multiple reward models such as state rewards, state-action rewards and as transition rewards.
In this example, the actions of states which satisfy `s<7` acquire a reward of 1.0.
The state-action rewards are represented by a vector, which is associated to a reward model named "coin_flips"::
>>> reward_models = {}
>>> action_reward = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
>>> reward_models['coin_flips'] = stormpy.SparseRewardModel(optional_state_action_reward_vector = action_reward)
Building the Model
====================
Next, we collect all components::
>>> components = stormpy.SparseModelComponents(transition_matrix=transition_matrix, state_labeling=state_labeling, reward_models=reward_models)
And finally, we can build the DTMC::
>>> dtmc = stormpy.storage.SparseDtmc(components)
>>> print(dtmc) # doctest: +SKIP
--------------------------------------------------------------
Model type: DTMC (sparse)
States: 13
Transitions: 20
Reward Models: coin_flips
State Labels: 9 labels
* three -> 1 item(s)
* six -> 1 item(s)
* done -> 6 item(s)
* four -> 1 item(s)
* five -> 1 item(s)
* deadlock -> 0 item(s)
* init -> 1 item(s)
* two -> 1 item(s)
* one -> 1 item(s)
Choice Labels: none
--------------------------------------------------------------

211
doc/source/doc/models/building_mas.ipynb

@ -0,0 +1,211 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Markov automata (MAs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Background\n",
"\n",
"We already saw the process of building [CTMCs](building_ctmcs.ipynb) and [MDPs](building_mdps.ipynb) via Stormpy.\n",
"\n",
"Markov automata use states that are probabilistic, i.e. like the states of an MDP, or Markovian, i.e. like the states of a CTMC.\n",
"\n",
"In this section, we build a small MA with five states from which the first four are Markovian.\n",
"Since we covered labeling and exit rates already in the previous examples we omit the description of these components.\n",
"The full example can be found here:\n",
"\n",
"[01-building-mas.py](https://github.com/moves-rwth/stormpy/blob/master/examples/building_mas/01-building-mas.py)\n",
"\n",
"First, we import Stormpy:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Transition Matrix\n",
"\n",
"For [building MDPS](building_mdps.ipynb#transition-matrix), we used the SparseMatrixBuilder to create a matrix with a custom row grouping.\n",
"In this example, we use the numpy library.\n",
"\n",
"In the beginning, we create a numpy array that will be used to build the transition matrix of our model.:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import numpy as np\n",
">>> transitions = np.array([\n",
"... [0, 1, 0, 0, 0],\n",
"... [0.8, 0, 0.2, 0, 0],\n",
"... [0.9, 0, 0, 0.1, 0],\n",
"... [0, 0, 0, 0, 1],\n",
"... [0, 0, 0, 1, 0],\n",
"... [0, 0, 0, 0, 1]], dtype='float64')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When building the matrix we define a custom row grouping by passing a list containing the starting row of each row group in ascending order:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> transition_matrix = stormpy.build_sparse_matrix(transitions, [0, 2, 3, 4, 5])\n",
">>> print(transition_matrix) "
]
},
{
"cell_type": "markdown",
"metadata": {
"nbsphinx": "hidden"
},
"source": [
"## Labeling and Exit Rates"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"nbsphinx": "hidden"
},
"outputs": [],
"source": [
"\n",
">>> state_labeling = stormpy.storage.StateLabeling(5)\n",
">>> state_labels = {'init', 'deadlock'}\n",
">>> for label in state_labels:\n",
"... state_labeling.add_label(label)\n",
">>> state_labeling.add_label_to_state('init', 0)\n",
"\n",
">>> choice_labeling = stormpy.storage.ChoiceLabeling(6)\n",
">>> choice_labels = {'alpha', 'beta'}\n",
">>> for label in choice_labels:\n",
"... choice_labeling.add_label(label)\n",
">>> choice_labeling.add_label_to_choice('alpha', 0)\n",
">>> choice_labeling.add_label_to_choice('beta', 1)\n",
"\n",
">>> exit_rates = [0.0, 10.0, 12.0, 1.0, 1.0]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Markovian States\n",
"\n",
"In order to define which states have only one probability distribution over the successor states,\n",
"we build a BitVector that contains the respective Markovian states:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> markovian_states = stormpy.BitVector(5, [1, 2, 3, 4])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Building the Model\n",
"\n",
"Now, we can collect all components:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> components = stormpy.SparseModelComponents(transition_matrix=transition_matrix, state_labeling=state_labeling, markovian_states=markovian_states)\n",
">>> components.choice_labeling = choice_labeling\n",
">>> components.exit_rates = exit_rates"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we can build the model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> ma = stormpy.storage.SparseMA(components)\n",
">>> print(ma) "
]
}
],
"metadata": {
"celltoolbar": "Edit Metadata",
"date": 1598178167.2185411,
"filename": "building_mas.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Markov automata (MAs)"
},
"nbformat": 4,
"nbformat_minor": 4
}

120
doc/source/doc/models/building_mas.rst

@ -1,120 +0,0 @@
**************************************
Markov automata (MAs)
**************************************
.. check if the following doctest should be run (and hide it in Sphinx)
>>> # Skip tests if numpy is not available
>>> import pytest
>>> try:
... import numpy as np
... except ModuleNotFoundError:
... np = None
>>> if np is None:
... pytest.skip("skipping the doctest below since it's not going to work.")
Background
=====================
We already saw the process of building :doc:`CTMCs <building_ctmcs>` and :doc:`MDPs <building_mdps>` via Stormpy.
Markov automata use states that are probabilistic, i.e. like the states of an MDP, or Markovian, i.e. like the states of a CTMC.
In this section, we build a small MA with five states from which the first four are Markovian.
Since we covered labeling and exit rates already in the previous examples we omit the description of these components.
The full example can be found here:
.. seealso:: `01-building-mas.py <https://github.com/moves-rwth/stormpy/blob/master/examples/building_mas/01-building-mas.py>`_
First, we import Stormpy::
>>> import stormpy
Transition Matrix
==================
For :ref:`building MDPS <doc/models/building_mdps:Transition Matrix>`, we used the `SparseMatrixBuilder` to create a matrix with a custom row grouping.
In this example, we use the numpy library.
In the beginning, we create a numpy array that will be used to build the transition matrix of our model.::
>>> import numpy as np
>>> transitions = np.array([
... [0, 1, 0, 0, 0],
... [0.8, 0, 0.2, 0, 0],
... [0.9, 0, 0, 0.1, 0],
... [0, 0, 0, 0, 1],
... [0, 0, 0, 1, 0],
... [0, 0, 0, 0, 1]], dtype='float64')
When building the matrix we define a custom row grouping by passing a list containing the starting row of each row group in ascending order::
>>> transition_matrix = stormpy.build_sparse_matrix(transitions, [0, 2, 3, 4, 5])
>>> print(transition_matrix) # doctest: +SKIP
0 1 2 3 4
---- group 0/4 ----
0 ( 0 1 0 0 0 ) 0
1 ( 0.8 0 0.2 0 0 ) 1
---- group 1/4 ----
2 ( 0.9 0 0 0.1 0 ) 2
---- group 2/4 ----
3 ( 0 0 0 0 1 ) 3
---- group 3/4 ----
4 ( 0 0 0 1 0 ) 4
---- group 4/4 ----
5 ( 0 0 0 0 1 ) 5
0 1 2 3 4
Markovian States
==================
In order to define which states have only one probability distribution over the successor states,
we build a BitVector that contains the respective Markovian states::
>>> markovian_states = stormpy.BitVector(5, [1, 2, 3, 4])
Building the Model
====================
.. testsetup::
# Not displayed in documentation but needed for doctests
>>> state_labeling = stormpy.storage.StateLabeling(5)
>>> state_labels = {'init', 'deadlock'}
>>> for label in state_labels:
... state_labeling.add_label(label)
>>> state_labeling.add_label_to_state('init', 0)
>>> choice_labeling = stormpy.storage.ChoiceLabeling(6)
>>> choice_labels = {'alpha', 'beta'}
>>> for label in choice_labels:
... choice_labeling.add_label(label)
>>> choice_labeling.add_label_to_choice('alpha', 0)
>>> choice_labeling.add_label_to_choice('beta', 1)
>>> exit_rates = [0.0, 10.0, 12.0, 1.0, 1.0]
Now, we can collect all components::
>>> components = stormpy.SparseModelComponents(transition_matrix=transition_matrix, state_labeling=state_labeling, markovian_states=markovian_states)
>>> components.choice_labeling = choice_labeling
>>> components.exit_rates = exit_rates
Finally, we can build the model::
>>> ma = stormpy.storage.SparseMA(components)
>>> print(ma) # doctest: +SKIP
--------------------------------------------------------------
Model type: Markov Automaton (sparse)
States: 5
Transitions: 8
Choices: 6
Markovian St.: 4
Max. Rate.: 12
Reward Models: none
State Labels: 2 labels
* deadlock -> 0 item(s)
* init -> 1 item(s)
Choice Labels: 2 labels
* alpha -> 1 item(s)
* beta -> 1 item(s)
--------------------------------------------------------------

309
doc/source/doc/models/building_mdps.ipynb

@ -0,0 +1,309 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Markov decision processes (MDPs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Background\n",
"\n",
"In [Discrete-time Markov chains (DTMCs)](building_dtmcs.ipynb) we modelled Knuth-Yao’s model of a fair die by the means of a DTMC.\n",
"In the following we extend this model with nondeterministic choice by building a Markov decision process.\n",
"\n",
"[01-building-mdps.py](https://github.com/moves-rwth/stormpy/blob/master/examples/building_mdps/01-building-mdps.py)\n",
"\n",
"First, we import Stormpy:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Transition Matrix\n",
"\n",
"Since we want to build a nondeterminstic model, we create a transition matrix with a custom row group for each state:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> builder = stormpy.SparseMatrixBuilder(rows=0, columns=0, entries=0, force_dimensions=False, has_custom_row_grouping=True, row_groups=0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We need more than one row for the transitions starting in state 0 because a nondeterministic choice over the actions is available.\n",
"Therefore, we start a new group that will contain the rows representing actions of state 0.\n",
"Note that the row group needs to be added before any entries are added to the group:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> builder.new_row_group(0)\n",
">>> builder.add_next_value(0, 1, 0.5)\n",
">>> builder.add_next_value(0, 2, 0.5)\n",
">>> builder.add_next_value(1, 1, 0.2)\n",
">>> builder.add_next_value(1, 2, 0.8)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example, we have two nondeterministic choices in state 0.\n",
"With choice 0 we have probability 0.5 to got to state 1 and probability 0.5 to got to state 2.\n",
"With choice 1 we got to state 1 with probability 0.2 and go to state 2 with probability 0.8.\n",
"\n",
"For the remaining states, we need to specify the starting rows of each row group:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> builder.new_row_group(2)\n",
">>> builder.add_next_value(2, 3, 0.5)\n",
">>> builder.add_next_value(2, 4, 0.5)\n",
">>> builder.new_row_group(3)\n",
">>> builder.add_next_value(3, 5, 0.5)\n",
">>> builder.add_next_value(3, 6, 0.5)\n",
">>> builder.new_row_group(4)\n",
">>> builder.add_next_value(4, 7, 0.5)\n",
">>> builder.add_next_value(4, 1, 0.5)\n",
">>> builder.new_row_group(5)\n",
">>> builder.add_next_value(5, 8, 0.5)\n",
">>> builder.add_next_value(5, 9, 0.5)\n",
">>> builder.new_row_group(6)\n",
">>> builder.add_next_value(6, 10, 0.5)\n",
">>> builder.add_next_value(6, 11, 0.5)\n",
">>> builder.new_row_group(7)\n",
">>> builder.add_next_value(7, 2, 0.5)\n",
">>> builder.add_next_value(7, 12, 0.5)\n",
"\n",
">>> for s in range(8, 14):\n",
"... builder.new_row_group(s)\n",
"... builder.add_next_value(s, s - 1, 1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we build the transition matrix:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> transition_matrix = builder.build()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Labeling\n",
"\n",
"We have seen the construction of a state labeling in previous examples. Therefore we omit the description here\n",
"Instead, we focus on the choices.\n",
"Since in state 0 a nondeterministic choice over two actions is available, the number of choices is 14.\n",
"To distinguish those we can define a choice labeling:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"nbsphinx": "hidden"
},
"outputs": [],
"source": [
">>> state_labeling = stormpy.storage.StateLabeling(13)\n",
">>> labels = {'init', 'one', 'two', 'three', 'four', 'five', 'six', 'done', 'deadlock'}\n",
">>> for label in labels:\n",
"... state_labeling.add_label(label)\n",
"\n",
">>> state_labeling.add_label_to_state('init', 0)\n",
">>> state_labeling.add_label_to_state('one', 7)\n",
">>> state_labeling.add_label_to_state('two', 8)\n",
">>> state_labeling.add_label_to_state('three', 9)\n",
">>> state_labeling.add_label_to_state('four', 10)\n",
">>> state_labeling.add_label_to_state('five', 11)\n",
">>> state_labeling.add_label_to_state('six', 12)\n",
">>> state_labeling.set_states('done', stormpy.BitVector(13, [7, 8, 9, 10, 11, 12]))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> choice_labeling = stormpy.storage.ChoiceLabeling(14)\n",
">>> choice_labels = {'a', 'b'}\n",
"\n",
">>> for label in choice_labels:\n",
"... choice_labeling.add_label(label)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We assign the label ‘a’ to the first action of state 0 and ‘b’ to the second.\n",
"Recall that those actions where defined in row one and two of the transition matrix respectively:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> choice_labeling.add_label_to_choice('a', 0)\n",
">>> choice_labeling.add_label_to_choice('b', 1)\n",
">>> print(choice_labeling) "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reward models\n",
"\n",
"In this reward model the length of the action rewards coincides with the number of choices:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> reward_models = {}\n",
">>> action_reward = [0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]\n",
">>> reward_models['coin_flips'] = stormpy.SparseRewardModel(optional_state_action_reward_vector=action_reward)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Building the Model\n",
"\n",
"We collect the components:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> components = stormpy.SparseModelComponents(transition_matrix=transition_matrix, state_labeling=state_labeling, reward_models=reward_models, rate_transitions=False)\n",
">>> components.choice_labeling = choice_labeling"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We build the model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> mdp = stormpy.storage.SparseMdp(components)\n",
">>> print(mdp) "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Partially observable Markov decision process (POMDPs)\n",
"\n",
"To build a partially observable Markov decision process (POMDP),\n",
"components.observations can be set to a list of numbers that defines the status of the observables in each state."
]
}
],
"metadata": {
"celltoolbar": "Edit Metadata",
"date": 1598178167.234528,
"filename": "building_mdps.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Markov decision processes (MDPs)"
},
"nbformat": 4,
"nbformat_minor": 4
}

151
doc/source/doc/models/building_mdps.rst

@ -1,151 +0,0 @@
***********************************************
Markov decision processes (MDPs)
***********************************************
Background
=====================
In :doc:`building_dtmcs` we modelled Knuth-Yao's model of a fair die by means of a DTMC.
In the following we extend this model with nondeterministic choice by building a Markov decision process.
.. seealso:: `01-building-mdps.py <https://github.com/moves-rwth/stormpy/blob/master/examples/building_mdps/01-building-mdps.py>`_
First, we import Stormpy::
>>> import stormpy
Transition Matrix
=====================
Since we want to build a nondeterministic model, we create a transition matrix with a custom row group for each state::
>>> builder = stormpy.SparseMatrixBuilder(rows=0, columns=0, entries=0, force_dimensions=False, has_custom_row_grouping=True, row_groups=0)
We need more than one row for the transitions starting in state 0 because a nondeterministic choice over the actions is available.
Therefore, we start a new group that will contain the rows representing actions of state 0.
Note that the row group needs to be added before any entries are added to the group::
>>> builder.new_row_group(0)
>>> builder.add_next_value(0, 1, 0.5)
>>> builder.add_next_value(0, 2, 0.5)
>>> builder.add_next_value(1, 1, 0.2)
>>> builder.add_next_value(1, 2, 0.8)
In this example, we have two nondeterministic choices in state 0.
With choice `0` we go to state 1 with probability 0.5 and to state 2 with probability 0.5.
With choice `1` we go to state 1 with probability 0.2 and to state 2 with probability 0.8.
For the remaining states, we need to specify the starting rows of each row group::
>>> builder.new_row_group(2)
>>> builder.add_next_value(2, 3, 0.5)
>>> builder.add_next_value(2, 4, 0.5)
>>> builder.new_row_group(3)
>>> builder.add_next_value(3, 5, 0.5)
>>> builder.add_next_value(3, 6, 0.5)
>>> builder.new_row_group(4)
>>> builder.add_next_value(4, 7, 0.5)
>>> builder.add_next_value(4, 1, 0.5)
>>> builder.new_row_group(5)
>>> builder.add_next_value(5, 8, 0.5)
>>> builder.add_next_value(5, 9, 0.5)
>>> builder.new_row_group(6)
>>> builder.add_next_value(6, 10, 0.5)
>>> builder.add_next_value(6, 11, 0.5)
>>> builder.new_row_group(7)
>>> builder.add_next_value(7, 2, 0.5)
>>> builder.add_next_value(7, 12, 0.5)
>>> for s in range(8, 14):
... builder.new_row_group(s)
... builder.add_next_value(s, s - 1, 1)
Finally, we build the transition matrix::
>>> transition_matrix = builder.build()
Labeling
================
We have seen the construction of a state labeling in previous examples. Therefore we omit the description here.
Instead, we focus on the choices.
Since in state 0 a nondeterministic choice over two actions is available, the number of choices is 14.
To distinguish those we can define a choice labeling::
>>> choice_labeling = stormpy.storage.ChoiceLabeling(14)
>>> choice_labels = {'a', 'b'}
>>> for label in choice_labels:
... choice_labeling.add_label(label)
We assign the label 'a' to the first action of state 0 and 'b' to the second.
Recall that those actions were defined in rows one and two of the transition matrix, respectively::
>>> choice_labeling.add_label_to_choice('a', 0)
>>> choice_labeling.add_label_to_choice('b', 1)
>>> print(choice_labeling) # doctest: +SKIP
Choice 2 labels
* a -> 1 item(s)
* b -> 1 item(s)
Reward models
==================
In this reward model, the length of the action reward vector coincides with the number of choices::
>>> reward_models = {}
>>> action_reward = [0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
>>> reward_models['coin_flips'] = stormpy.SparseRewardModel(optional_state_action_reward_vector=action_reward)
Building the Model
====================
.. testsetup::
# Not displayed in documentation but needed for doctests
>>> state_labeling = stormpy.storage.StateLabeling(13)
>>> labels = {'init', 'one', 'two', 'three', 'four', 'five', 'six', 'done', 'deadlock'}
>>> for label in labels:
... state_labeling.add_label(label)
>>> state_labeling.add_label_to_state('init', 0)
>>> state_labeling.add_label_to_state('one', 7)
>>> state_labeling.add_label_to_state('two', 8)
>>> state_labeling.add_label_to_state('three', 9)
>>> state_labeling.add_label_to_state('four', 10)
>>> state_labeling.add_label_to_state('five', 11)
>>> state_labeling.add_label_to_state('six', 12)
>>> state_labeling.set_states('done', stormpy.BitVector(13, [7, 8, 9, 10, 11, 12]))
We collect the components::
>>> components = stormpy.SparseModelComponents(transition_matrix=transition_matrix, state_labeling=state_labeling, reward_models=reward_models, rate_transitions=False)
>>> components.choice_labeling = choice_labeling
We build the model::
>>> mdp = stormpy.storage.SparseMdp(components)
>>> print(mdp) # doctest: +SKIP
Model type: MDP (sparse)
States: 13
Transitions: 22
Choices: 14
Reward Models: coin_flips
State Labels: 9 labels
* one -> 1 item(s)
* six -> 1 item(s)
* three -> 1 item(s)
* four -> 1 item(s)
* done -> 6 item(s)
* init -> 1 item(s)
* five -> 1 item(s)
* deadlock -> 0 item(s)
* two -> 1 item(s)
Choice Labels: 2 labels
* a -> 1 item(s)
* b -> 1 item(s)
Partially observable Markov decision process (POMDPs)
========================================================
To build a partially observable Markov decision process (POMDP),
`components.observations` can be set to a list of numbers that defines the observation of each state.
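As an illustration (a minimal sketch, not part of the original example), one can assign a hypothetical observation id to each of the 13 states and then build the model as a POMDP; the class name `stormpy.storage.SparsePomdp` is assumed here::
>>> components.observations = [0, 1, 2, 3, 4, 5, 6, 7, 7, 7, 7, 7, 7]  # hypothetical observation ids, one per state
>>> pomdp = stormpy.storage.SparsePomdp(components)  # doctest: +SKIP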

174
doc/source/doc/parametric_models.ipynb

@ -0,0 +1,174 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Parametric Models"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiating parametric models\n",
"\n",
"[01-parametric-models.py](https://github.com/moves-rwth/stormpy/blob/master/examples//parametric_models/01-parametric-models.py)\n",
"\n",
"Input formats such as prism allow to specify programs with open constants. We refer to these open constants as parameters.\n",
"If the constants only influence the probabilities or rates, but not the topology of the underlying model, we can build these models as parametric models:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy\n",
">>> import stormpy.examples\n",
">>> import stormpy.examples.files\n",
">>> path = stormpy.examples.files.prism_dtmc_die\n",
">>> prism_program = stormpy.parse_prism_program(path)\n",
">>> formula_str = \"P=? [F s=7 & d=2]\"\n",
">>> properties = stormpy.parse_properties(formula_str, prism_program)\n",
">>> model = stormpy.build_parametric_model(prism_program, properties)\n",
">>> parameters = model.collect_probability_parameters()\n",
">>> for x in parameters:\n",
"... print(x)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In order to obtain a standard DTMC, MDP or other Markov model, we need to instantiate these models by means of a model instantiator:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy.pars\n",
">>> instantiator = stormpy.pars.PDtmcInstantiator(model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before we obtain an instantiated model, we need to map parameters to values: We build such a dictionary as follows:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> point = dict()\n",
">>> for x in parameters:\n",
"... print(x.name)\n",
"... point[x] = 0.4\n",
">>> instantiated_model = instantiator.instantiate(point)\n",
">>> result = stormpy.model_checking(instantiated_model, properties[0])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Initial states and labels are set as for the parameter-free case."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Checking parametric models\n",
"\n",
"[02-parametric-models.py](https://github.com/moves-rwth/stormpy/blob/master/examples//parametric_models/02-parametric-models.py)\n",
"\n",
"It is also possible to check the parametric model directly, similar as before in [Checking properties](../getting_started.ipynb#getting-started-checking-properties):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> result = stormpy.model_checking(model, properties[0])\n",
">>> initial_state = model.initial_states[0]\n",
">>> func = result.at(initial_state)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We collect the constraints ensuring that underlying model is well-formed and the graph structure does not change:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> collector = stormpy.ConstraintCollector(model)\n",
">>> for formula in collector.wellformed_constraints:\n",
"... print(formula)\n",
">>> for formula in collector.graph_preserving_constraints:\n",
"... print(formula)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Collecting information about the parametric models\n",
"\n",
"[03-parametric-models.py](https://github.com/moves-rwth/stormpy/blob/master/examples//parametric_models/03-parametric-models.py)\n",
"\n",
"This example shows three implementations to obtain the number of transitions with probability one in a parametric model."
]
}
],
"metadata": {
"date": 1598178167.2485256,
"filename": "parametric_models.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Parametric Models"
},
"nbformat": 4,
"nbformat_minor": 4
}

67
doc/source/doc/parametric_models.rst

@ -1,67 +0,0 @@
*****************
Parametric Models
*****************
Instantiating parametric models
===============================
.. seealso:: `01-parametric-models.py <https://github.com/moves-rwth/stormpy/blob/master/examples//parametric_models/01-parametric-models.py>`_
Input formats such as prism allow the specification of programs with open constants. We refer to these open constants as parameters.
If the constants only influence the probabilities or rates, but not the topology of the underlying model, we can build these models as parametric models::
>>> import stormpy
>>> import stormpy.examples
>>> import stormpy.examples.files
>>> path = stormpy.examples.files.prism_dtmc_die
>>> prism_program = stormpy.parse_prism_program(path)
>>> formula_str = "P=? [F s=7 & d=2]"
>>> properties = stormpy.parse_properties(formula_str, prism_program)
>>> model = stormpy.build_parametric_model(prism_program, properties)
>>> parameters = model.collect_probability_parameters()
>>> for x in parameters:
... print(x)
In order to obtain a standard DTMC, MDP or other Markov model, we need to instantiate these models by means of a model instantiator::
>>> import stormpy.pars
>>> instantiator = stormpy.pars.PDtmcInstantiator(model)
Before we obtain an instantiated model, we need to map parameters to values. We build such a dictionary as follows::
>>> point = dict()
>>> for x in parameters:
... print(x.name)
... point[x] = 0.4
>>> instantiated_model = instantiator.instantiate(point)
>>> result = stormpy.model_checking(instantiated_model, properties[0])
Initial states and labels are set as for the parameter-free case.
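For example, the result at the initial state can be queried just as in the parameter-free case (a brief illustration, not part of the original example)::
>>> initial_state = instantiated_model.initial_states[0]
>>> print(result.at(initial_state))  # doctest: +SKIP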
Checking parametric models
==========================
.. seealso:: `02-parametric-models.py <https://github.com/moves-rwth/stormpy/blob/master/examples//parametric_models/02-parametric-models.py>`_
It is also possible to check the parametric model directly, similar to the procedure in :ref:`getting-started-checking-properties`::
>>> result = stormpy.model_checking(model, properties[0])
>>> initial_state = model.initial_states[0]
>>> func = result.at(initial_state)
We collect the constraints ensuring that the underlying model is well-formed and the graph structure does not change::
>>> collector = stormpy.ConstraintCollector(model)
>>> for formula in collector.wellformed_constraints:
... print(formula)
>>> for formula in collector.graph_preserving_constraints:
... print(formula)
Collecting information about the parametric models
==================================================
.. seealso:: `03-parametric-models.py <https://github.com/moves-rwth/stormpy/blob/master/examples//parametric_models/03-parametric-models.py>`_
This example shows three implementations to obtain the number of transitions with probability one in a parametric model.
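As a rough sketch of one possible implementation (an illustration only, not one of the three variants from the example script; it naively compares the string representation of each transition value with "1")::
>>> count = 0
>>> for state in model.states:
...     for action in state.actions:
...         for transition in action.transitions:
...             if str(transition.value()) == "1":  # naive check for the constant rational function 1
...                 count += 1
>>> print(count)  # doctest: +SKIP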

133
doc/source/doc/reward_models.ipynb

@ -0,0 +1,133 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Reward Models\n",
"\n",
"In [Getting Started](../getting_started.ipynb), we mainly looked at probabilities in the Markov models and properties that refer to these probabilities.\n",
"In this section, we discuss reward models."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exploring reward models\n",
"\n",
"[01-reward-models.py](https://github.com/moves-rwth/stormpy/blob/master/examples/reward_models/01-reward-models.py)\n",
"\n",
"We consider the die again, but with another property which talks about the expected reward:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy\n",
">>> import stormpy.examples\n",
">>> import stormpy.examples.files\n",
">>> program = stormpy.parse_prism_program(stormpy.examples.files.prism_dtmc_die)\n",
">>> prop = \"R=? [F \\\"done\\\"]\"\n",
"\n",
">>> properties = stormpy.parse_properties(prop, program, None)\n",
">>> model = stormpy.build_model(program, properties)\n",
">>> assert len(model.reward_models) == 1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The model now has a reward model, as the property talks about rewards.\n",
"When [Building Models](building_models.ipynb) from explicit sources, the reward model is always included if it is defined in the source.\n",
"We can do model checking analogous to probabilities:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> initial_state = model.initial_states[0]\n",
">>> result = stormpy.model_checking(model, properties[0])\n",
">>> print(\"Result: {}\".format(round(result.at(initial_state), 6)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The reward model has a name which we can obtain as follows:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> reward_model_name = list(model.reward_models.keys())[0]\n",
">>> print(reward_model_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We discuss later how to work with multiple reward models.\n",
"Rewards come in multiple fashions, as state rewards, state-action rewards and as transition rewards.\n",
"In this example, we only have state-action rewards. These rewards are a vector, over which we can trivially iterate:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> assert not model.reward_models[reward_model_name].has_state_rewards\n",
">>> assert model.reward_models[reward_model_name].has_state_action_rewards\n",
">>> assert not model.reward_models[reward_model_name].has_transition_rewards\n",
">>> for reward in model.reward_models[reward_model_name].state_action_rewards:\n",
"... print(reward)"
]
}
],
"metadata": {
"date": 1598188121.7157953,
"filename": "reward_models.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Reward Models"
},
"nbformat": 4,
"nbformat_minor": 4
}

64
doc/source/doc/reward_models.rst

@ -1,64 +0,0 @@
**************
Reward Models
**************
In :doc:`../getting_started`, we mainly looked at probabilities in the Markov models and properties that refer to these probabilities.
In this section, we discuss reward models.
Exploring reward models
------------------------
.. seealso:: `01-reward-models.py <https://github.com/moves-rwth/stormpy/blob/master/examples/reward_models/01-reward-models.py>`_
We consider the die again, but with another property which talks about the expected reward::
>>> import stormpy
>>> import stormpy.examples
>>> import stormpy.examples.files
>>> program = stormpy.parse_prism_program(stormpy.examples.files.prism_dtmc_die)
>>> prop = "R=? [F \"done\"]"
>>> properties = stormpy.parse_properties(prop, program, None)
>>> model = stormpy.build_model(program, properties)
>>> assert len(model.reward_models) == 1
The model now has a reward model, as the property talks about rewards.
When :doc:`building_models` from explicit sources, the reward model is always included if it is defined in the source.
We can do model checking analogous to probabilities::
>>> initial_state = model.initial_states[0]
>>> result = stormpy.model_checking(model, properties[0])
>>> print("Result: {}".format(round(result.at(initial_state), 6)))
Result: 3.666667
The reward model has a name which we can obtain as follows::
>>> reward_model_name = list(model.reward_models.keys())[0]
>>> print(reward_model_name)
coin_flips
We discuss later how to work with multiple reward models.
Rewards come in multiple fashions, as state rewards, state-action rewards and as transition rewards.
In this example, we only have state-action rewards. These rewards are a vector, over which we can trivially iterate::
>>> assert not model.reward_models[reward_model_name].has_state_rewards
>>> assert model.reward_models[reward_model_name].has_state_action_rewards
>>> assert not model.reward_models[reward_model_name].has_transition_rewards
>>> for reward in model.reward_models[reward_model_name].state_action_rewards:
... print(reward)
1.0
1.0
1.0
1.0
1.0
1.0
1.0
0.0
0.0
0.0
0.0
0.0
0.0

202
doc/source/doc/schedulers.ipynb

@ -0,0 +1,202 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Working with Schedulers\n",
"\n",
"In non-deterministic models the notion of a scheduler (or policy) is important.\n",
"The scheduler determines which action to take at each state.\n",
"\n",
"For a given reachability property, Storm can return the scheduler realizing the resulting probability."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Examining Schedulers for MDPs\n",
"\n",
"[01-schedulers.py](https://github.com/moves-rwth/stormpy/blob/master/examples/schedulers/01-schedulers.py)\n",
"\n",
"As in [Getting Started](../getting_started.ipynb), we import some required modules and build a model from the example files:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy\n",
">>> import stormpy.core\n",
">>> import stormpy.examples\n",
">>> import stormpy.examples.files\n",
"\n",
">>> path = stormpy.examples.files.prism_mdp_coin_2_2\n",
">>> formula_str = \"Pmin=? [F \\\"finished\\\" & \\\"all_coins_equal_1\\\"]\"\n",
">>> program = stormpy.parse_prism_program(path)\n",
">>> formulas = stormpy.parse_properties(formula_str, program)\n",
">>> model = stormpy.build_model(program, formulas)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next we check the model and make sure to extract the scheduler:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> result = stormpy.model_checking(model, formulas[0], extract_scheduler=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The result then contains the scheduler we want:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> assert result.has_scheduler\n",
">>> scheduler = result.scheduler\n",
">>> assert scheduler.memoryless\n",
">>> assert scheduler.deterministic\n",
">>> print(scheduler)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To get the information which action the scheduler chooses in which state, we can simply iterate over the states:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> for state in model.states:\n",
"... choice = scheduler.get_choice(state)\n",
"... action = choice.get_deterministic_choice()\n",
"... print(\"In state {} choose action {}\".format(state, action))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Examining Schedulers for Markov automata\n",
"\n",
"[02-schedulers.py](https://github.com/moves-rwth/stormpy/blob/master/examples/schedulers/02-schedulers.py)\n",
"\n",
"Currently there is no support yet for scheduler extraction on MAs.\n",
"However, if the timing information is not relevant for the property, we can circumvent this lack by first transforming the MA to an MDP.\n",
"\n",
"We build the model as before:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> path = stormpy.examples.files.prism_ma_simple\n",
">>> formula_str = \"Tmin=? [ F s=4 ]\"\n",
"\n",
">>> program = stormpy.parse_prism_program(path, False, True)\n",
">>> formulas = stormpy.parse_properties_for_prism_program(formula_str, program)\n",
">>> ma = stormpy.build_model(program, formulas)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next we transform the continuous-time model into a discrete-time model.\n",
"Note that all timing information is lost at this point:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> mdp, mdp_formulas = stormpy.transform_to_discrete_time_model(ma, formulas)\n",
">>> assert mdp.model_type == stormpy.ModelType.MDP"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After the transformation we have obtained an MDP where we can extract the scheduler as shown before:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> result = stormpy.model_checking(mdp, mdp_formulas[0], extract_scheduler=True)\n",
">>> scheduler = result.scheduler\n",
">>> print(scheduler)\n"
]
}
],
"metadata": {
"date": 1598178167.268541,
"filename": "schedulers.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Working with Schedulers"
},
"nbformat": 4,
"nbformat_minor": 4
}

99
doc/source/doc/schedulers.rst

@ -1,99 +0,0 @@
***********************
Working with Schedulers
***********************
In non-deterministic models the notion of a scheduler (or policy) is important.
The scheduler determines which action to take at each state.
For a given reachability property, Storm can return the scheduler realizing the resulting probability.
Examining Schedulers for MDPs
=============================
.. seealso:: `01-schedulers.py <https://github.com/moves-rwth/stormpy/blob/master/examples/schedulers/01-schedulers.py>`_
As in :doc:`../getting_started`, we import some required modules and build a model from the example files::
>>> import stormpy
>>> import stormpy.core
>>> import stormpy.examples
>>> import stormpy.examples.files
>>> path = stormpy.examples.files.prism_mdp_coin_2_2
>>> formula_str = "Pmin=? [F \"finished\" & \"all_coins_equal_1\"]"
>>> program = stormpy.parse_prism_program(path)
>>> formulas = stormpy.parse_properties(formula_str, program)
>>> model = stormpy.build_model(program, formulas)
Next we check the model and make sure to extract the scheduler::
>>> result = stormpy.model_checking(model, formulas[0], extract_scheduler=True)
The result then contains the scheduler we want::
>>> assert result.has_scheduler
>>> scheduler = result.scheduler
>>> assert scheduler.memoryless
>>> assert scheduler.deterministic
>>> print(scheduler)
___________________________________________________________________
Fully defined memoryless deterministic scheduler:
model state: choice(s)
0 0
1 0
2 1
3 0
-etc-
To see which action the scheduler chooses in each state, we can simply iterate over the states::
>>> for state in model.states:
... choice = scheduler.get_choice(state)
... action = choice.get_deterministic_choice()
... print("In state {} choose action {}".format(state, action))
In state 0 choose action 0
In state 1 choose action 0
In state 2 choose action 1
In state 3 choose action 0
In state 4 choose action 0
In state 5 choose action 0
-etc-
Examining Schedulers for Markov automata
========================================
.. seealso:: `02-schedulers.py <https://github.com/moves-rwth/stormpy/blob/master/examples/schedulers/02-schedulers.py>`_
There is currently no support for scheduler extraction on MAs.
However, if the timing information is not relevant for the property, we can circumvent this limitation by first transforming the MA into an MDP.
We build the model as before::
>>> path = stormpy.examples.files.prism_ma_simple
>>> formula_str = "Tmin=? [ F s=4 ]"
>>> program = stormpy.parse_prism_program(path, False, True)
>>> formulas = stormpy.parse_properties_for_prism_program(formula_str, program)
>>> ma = stormpy.build_model(program, formulas)
Next we transform the continuous-time model into a discrete-time model.
Note that all timing information is lost at this point::
>>> mdp, mdp_formulas = stormpy.transform_to_discrete_time_model(ma, formulas)
>>> assert mdp.model_type == stormpy.ModelType.MDP
After the transformation we have obtained an MDP where we can extract the scheduler as shown before::
>>> result = stormpy.model_checking(mdp, mdp_formulas[0], extract_scheduler=True)
>>> scheduler = result.scheduler
>>> print(scheduler)
___________________________________________________________________
Fully defined memoryless deterministic scheduler:
model state: choice(s)
0 1
1 0
2 0
3 0
4 0
-etc-

167
doc/source/doc/shortest_paths.ipynb

@ -0,0 +1,167 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Working with Shortest Paths\n",
"\n",
"Storm can enumerate the most probable paths of a model, leading from the initial state to a defined set of target states, which we refer to as shortest paths.\n",
"In particular, the model states visited along those paths are available as sets and can be accumulated, yielding a *sub-model*."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Background\n",
"\n",
"The underlying implementation uses the *recursive enumeration algorithm* [[JM1999]](#jm1999), substituting distance for probability – which is why we refer to the most probable paths as the *shortest* paths.\n",
"\n",
"This algorithm computes the shortest paths recursively and in order, i.e., to find the 7th shortest path, the 1st through 6th shortest paths are computed as precursors. The next (i.e., 8th shortest) path can then be computed efficiently.\n",
"\n",
"It is crucial to note that *any* path is eligible, including those that (repeatedly) traverse loops (i.e., *non-simple* paths). This is a common case in practice: Often a large number of similar paths that differ only in the order and number of loop traversals occur successively in the sequence of shortest paths. (For applications that are only interested in simple paths, this is rather unfortunate.)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Examining Shortest Paths\n",
"\n",
"[01-shortest-paths.py](https://github.com/moves-rwth/stormpy/blob/master/examples/shortest_paths/01-shortest-paths.py)\n",
"\n",
"As in [Getting Started](../getting_started.ipynb), we import some required modules and build a model from the example files:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy.examples\n",
">>> import stormpy.examples.files\n",
">>> path = stormpy.examples.files.prism_dtmc_die\n",
">>> prism_program = stormpy.parse_prism_program(path)\n",
">>> model = stormpy.build_model(prism_program)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We also import the `ShortestPathsGenerator`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> from stormpy.utility import ShortestPathsGenerator"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"and choose a target state (by its ID) to which we want to compute the shortest paths:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> state_id = 8"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It is also possible to specify a set of target states (as a list, e.g., `[8, 10, 11]`) or a label in the model if applicable (e.g., `\"observe0Greater1\"`).\n",
"For simplicity, we will stick to using a single state for now.\n",
"\n",
"We initialize a `ShortestPathsGenerator` instance:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> spg = ShortestPathsGenerator(model, state_id)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can query the k-shortest path by index. Note that 1-based indices are used, so that the 3rd shortest path indeed corresponds to index `k=3`.\n",
"Let us inspect the first three shortest paths:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> for k in range(1, 4):\n",
"... path = spg.get_path_as_list(k)\n",
"... distance = spg.get_distance(k)\n",
"... print(\"{}-shortest path to state #{}: {}, with distance {}\".format(k, state_id, path, distance))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, the distance (i.e., probability of the path) is also available.\n",
"Note that the paths are displayed as a backward-traversal from the target to the initial state.\n",
"\n",
"<a id='jm1999'></a>\n",
"\\[JM1999\\] Víctor M. Jiménez, Andrés Marzal. [Computing the K Shortest Paths: A New Algorithm and an Experimental Comparison](https://scholar.google.com/scholar?q=Computing+the+k+shortest+paths%3A+A+new+algorithm+and+an+experimental+comparison). International Workshop on Algorithm Engineering, 1999"
]
}
],
"metadata": {
"date": 1598178167.2826114,
"filename": "shortest_paths.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Working with Shortest Paths"
},
"nbformat": 4,
"nbformat_minor": 4
}

63
doc/source/doc/shortest_paths.rst

@ -1,63 +0,0 @@
***************************
Working with Shortest Paths
***************************
Storm can enumerate the most probable paths of a model, leading from the initial state to a defined set of target states, which we refer to as shortest paths.
In particular, the model states visited along those paths are available as sets and can be accumulated, yielding a *sub-model*.
Background
==========
The underlying implementation uses the *recursive enumeration algorithm* [JM1999]_, substituting distance for probability – which is why we refer to the most probable paths as the *shortest* paths.
This algorithm computes the shortest paths recursively and in order, i.e., to find the 7th shortest path, the 1st through 6th shortest paths are computed as precursors. The next (i.e., 8th shortest) path can then be computed efficiently.
It is crucial to note that *any* path is eligible, including those that (repeatedly) traverse loops (i.e., *non-simple* paths). This is a common case in practice: Often a large number of similar paths that differ only in the order and number of loop traversals occur successively in the sequence of shortest paths. (For applications that are only interested in simple paths, this is rather unfortunate.)
Examining Shortest Paths
========================
.. seealso:: `01-shortest-paths.py <https://github.com/moves-rwth/stormpy/blob/master/examples/shortest_paths/01-shortest-paths.py>`_
As in :doc:`../getting_started`, we import some required modules and build a model from the example files::
>>> import stormpy.examples
>>> import stormpy.examples.files
>>> path = stormpy.examples.files.prism_dtmc_die
>>> prism_program = stormpy.parse_prism_program(path)
>>> model = stormpy.build_model(prism_program)
We also import the ``ShortestPathsGenerator``::
>>> from stormpy.utility import ShortestPathsGenerator
and choose a target state (by its ID) to which we want to compute the shortest paths::
>>> state_id = 8
It is also possible to specify a set of target states (as a list, e.g., ``[8, 10, 11]``) or a label in the model if applicable (e.g., ``"observe0Greater1"``).
For simplicity, we will stick to using a single state for now.
We initialize a ``ShortestPathsGenerator`` instance::
>>> spg = ShortestPathsGenerator(model, state_id)
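A set of target states can be passed in the same way (a brief illustration, not part of the original example; the same queries shown below work on this generator as well)::
>>> spg_states = ShortestPathsGenerator(model, [8, 10, 11])  # shortest paths to any of these states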
Now we can query the `k`-shortest path by index. Note that 1-based indices are used, so that the 3rd shortest path indeed corresponds to index ``k=3``.
Let us inspect the first three shortest paths::
>>> for k in range(1, 4):
... path = spg.get_path_as_list(k)
... distance = spg.get_distance(k)
... print("{}-shortest path to state #{}: {}, with distance {}".format(k, state_id, path, distance))
1-shortest path to state #8: [8, 4, 1, 0], with distance 0.125
2-shortest path to state #8: [8, 4, 1, 3, 1, 0], with distance 0.03125
3-shortest path to state #8: [8, 4, 1, 3, 1, 3, 1, 0], with distance 0.0078125
As you can see, the distance (i.e., probability of the path) is also available.
Note that the paths are displayed as a backward-traversal from the target to the initial state.
.. Yeah, sorry about that. Would be more user-friendly to (un-)reverse it
.. [JM1999] Víctor M. Jiménez, Andrés Marzal. `Computing the K Shortest Paths: A New Algorithm and an Experimental Comparison <https://scholar.google.com/scholar?q=Computing+the+k+shortest+paths%3A+A+new+algorithm+and+an+experimental+comparison>`_. International Workshop on Algorithm Engineering, 1999

481
doc/source/getting_started.ipynb

@ -0,0 +1,481 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Getting Started\n",
"\n",
"Before starting with this guide, one should follow the instructions for [Installation](installation.ipynb)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## A Quick Tour through Stormpy\n",
"\n",
"This guide is intended for people which have a basic understanding of probabilistic models and their verification. More details and further pointers to literature can be found on the\n",
"[Storm website](http://www.stormchecker.org/).\n",
"While we assume some very basic programming concepts, we refrain from using more advanced concepts of python throughout the guide.\n",
"\n",
"We start with a selection of high-level constructs in stormpy, and go into more details afterwards. More in-depth examples can be found in the [Advanced Examples](advanced_topics.ipynb).\n",
"\n",
"The code examples are also given in the [examples/](https://github.com/moves-rwth/stormpy/blob/master/examples/) folder. These boxes throughout the text will tell you which example contains the code discussed.\n",
"\n",
"We start by launching the python 3 interpreter:"
]
},
{
"cell_type": "markdown",
"metadata": {
"hide-output": false
},
"source": [
"```\n",
"$ python3\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First we import stormpy:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Building models\n",
"\n",
"[01-getting-started.py](https://github.com/moves-rwth/stormpy/blob/master/examples/01-getting-started.py)\n",
"\n",
"There are several ways to create a Markov chain.\n",
"One of the easiest is to parse a description of such a Markov chain and to let Storm build the chain.\n",
"\n",
"Here, we build a Markov chain from a prism program.\n",
"Stormpy comes with a small set of examples, which we use here:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> import stormpy.examples\n",
">>> import stormpy.examples.files"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With this, we can now import the path of our prism file:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> path = stormpy.examples.files.prism_dtmc_die\n",
">>> prism_program = stormpy.parse_prism_program(path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `prism_program` can be translated into a Markov chain:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> model = stormpy.build_model(prism_program)\n",
">>> print(\"Number of states: {}\".format(model.nr_states))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
">>> print(\"Number of transitions: {}\".format(model.nr_transitions))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This tells us that the model has 13 states and 20 transitions.\n",
"\n",
"Moreover, initial states and deadlocks are indicated with a labelling function. We can see the labels present in the model by:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> print(\"Labels: {}\".format(model.labeling.get_labels()))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will investigate ways to examine the model in more detail later in [Investigating the model](#getting-started-investigating-the-model).\n",
"\n",
"\n",
"<a id='getting-started-building-properties'></a>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Building properties\n",
"\n",
"[02-getting-started.py](https://github.com/moves-rwth/stormpy/blob/master/examples/02-getting-started.py)\n",
"\n",
"Storm takes properties in the prism-property format.\n",
"To express that one is interested in the reachability of any state where the prism program variable `s` is 2, one would formulate:"
]
},
{
"cell_type": "markdown",
"metadata": {
"hide-output": false
},
"source": [
"```\n",
"P=? [F s=2]\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Stormpy can be used to parse this. As the variables in the property refer to a program, the program has to be passed as an additional parameter:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> formula_str = \"P=? [F s=2]\"\n",
">>> properties = stormpy.parse_properties(formula_str, prism_program)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice that properties is now a list of properties containing a single element.\n",
"\n",
"However, if we build the model as before, then the appropriate information that the variable `s=2` in some states is not present.\n",
"In order to label the states accordingly, we should notify Storm upon building the model that we would like to preserve given properties.\n",
"Storm will then add the labels accordingly:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> model = stormpy.build_model(prism_program, properties)\n",
">>> print(\"Labels in the model: {}\".format(sorted(model.labeling.get_labels())))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Model building however now behaves slightly different: Only the properties passed are preserved, which means that model building might skip parts of the model.\n",
"In particular, to check the probability of eventually reaching a state `x` where `s=2`, successor states of `x` are not relevant:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> print(\"Number of states: {}\".format(model.nr_states))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we consider another property, however, such as:"
]
},
{
"cell_type": "markdown",
"metadata": {
"hide-output": false
},
"source": [
"```\n",
"P=? [F s=7 & d=2]\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"then Storm is only skipping exploration of successors of the particular state `y` where `s=7` and `d=2`. In this model, state `y` has a self-loop, so effectively, the whole model is explored.\n",
"\n",
"\n",
"<a id='getting-started-checking-properties'></a>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Checking properties\n",
"\n",
"[03-getting-started.py](https://github.com/moves-rwth/stormpy/blob/master/examples/03-getting-started.py)\n",
"\n",
"The last lesson taught us to construct properties and models with matching state labels.\n",
"Now default checking routines are just a simple command away:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> properties = stormpy.parse_properties(formula_str, prism_program)\n",
">>> model = stormpy.build_model(prism_program, properties)\n",
">>> result = stormpy.model_checking(model, properties[0])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The result may contain information about all states.\n",
"Instead, we can iterate over the results:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> assert result.result_for_all_states\n",
">>> for x in result.get_values():\n",
"... pass # do something with x"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Results for all states\n",
"\n",
"Some model checking algorithms do not provide results for all states. In those cases, the result is not valid for all states, and to iterate over them, a different method is required. We will explain this later.\n",
"\n",
"A good way to get the result for the initial states is as follows:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> initial_state = model.initial_states[0]\n",
">>> print(result.at(initial_state))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"<a id='getting-started-investigating-the-model'></a>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Investigating the model\n",
"\n",
"[04-getting-started.py](https://github.com/moves-rwth/stormpy/blob/master/examples/04-getting-started.py)\n",
"\n",
"One powerful part of the Storm model checker is to quickly create the Markov chain from higher-order descriptions, as seen above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> path = stormpy.examples.files.prism_dtmc_die\n",
">>> prism_program = stormpy.parse_prism_program(path)\n",
">>> model = stormpy.build_model(prism_program)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example, we will exploit this, and explore the underlying Markov chain of the model.\n",
"The most basic question might be what the type of the constructed model is:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> print(model.model_type)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also directly explore the underlying state space/matrix.\n",
"Notice that this code can be applied to both deterministic and non-deterministic models:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> for state in model.states:\n",
"... for action in state.actions:\n",
"... for transition in action.transitions:\n",
"... print(\"From state {}, with probability {}, go to state {}\".format(state, transition.value(), transition.column))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let us go into some more details. For DTMCs, each state has (at most) one outgoing probability distribution.\n",
"Thus:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> for state in model.states:\n",
"... assert len(state.actions) <= 1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also check if a state is indeed an initial state. Notice that `model.initial_states` contains state ids, not states.:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hide-output": false
},
"outputs": [],
"source": [
">>> for state in model.states:\n",
"... if state.id in model.initial_states:\n",
"... pass"
]
}
],
"metadata": {
"date": 1598188121.7690735,
"filename": "getting_started.rst",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
},
"title": "Getting Started"
},
"nbformat": 4,
"nbformat_minor": 4
}

188
doc/source/getting_started.rst

@ -1,188 +0,0 @@
****************************
Getting Started
****************************
Before starting with this guide, one should follow the instructions for :doc:`installation`.
A Quick Tour through Stormpy
================================
This guide is intended for people who have a basic understanding of probabilistic models and their verification. More details and further pointers to literature can be found on the
`Storm website <http://www.stormchecker.org/>`_.
While we assume some very basic programming concepts, we refrain from using more advanced concepts of python throughout the guide.
We start with a selection of high-level constructs in stormpy, and go into more details afterwards. More in-depth examples can be found in the :doc:`advanced_topics`.
.. seealso:: The code examples are also given in the `examples/ <https://github.com/moves-rwth/stormpy/blob/master/examples/>`_ folder. These boxes throughout the text will tell you which example contains the code discussed.
We start by launching the python 3 interpreter::
$ python3
First we import stormpy::
>>> import stormpy
Building models
------------------------------------------------
.. seealso:: `01-getting-started.py <https://github.com/moves-rwth/stormpy/blob/master/examples/01-getting-started.py>`_
There are several ways to create a Markov chain.
One of the easiest is to parse a description of such a Markov chain and to let Storm build the chain.
Here, we build a Markov chain from a prism program.
Stormpy comes with a small set of examples, which we use here::
>>> import stormpy.examples
>>> import stormpy.examples.files
With this, we can now import the path of our prism file::
>>> path = stormpy.examples.files.prism_dtmc_die
>>> prism_program = stormpy.parse_prism_program(path)
The ``prism_program`` can be translated into a Markov chain::
>>> model = stormpy.build_model(prism_program)
>>> print("Number of states: {}".format(model.nr_states))
Number of states: 13
>>> print("Number of transitions: {}".format(model.nr_transitions))
Number of transitions: 20
This tells us that the model has 13 states and 20 transitions.
Moreover, initial states and deadlocks are indicated with a labelling function. We can see the labels present in the model by::
>>> print("Labels: {}".format(model.labeling.get_labels()))
Labels: ...
We will investigate ways to examine the model in more detail later in :ref:`getting-started-investigating-the-model`.
.. _getting-started-building-properties:
Building properties
--------------------------
.. seealso:: `02-getting-started.py <https://github.com/moves-rwth/stormpy/blob/master/examples/02-getting-started.py>`_
Storm takes properties in the prism-property format.
To express that one is interested in the reachability of any state where the prism program variable ``s`` is 2, one would formulate::
P=? [F s=2]
Stormpy can be used to parse this. As the variables in the property refer to a program, the program has to be passed as an additional parameter::
>>> formula_str = "P=? [F s=2]"
>>> properties = stormpy.parse_properties(formula_str, prism_program)
Notice that properties is now a list of properties containing a single element.
However, if we build the model as before, then the information that ``s=2`` holds in some states is not present.
In order to label the states accordingly, we should notify Storm upon building the model that we would like to preserve given properties.
Storm will then add the labels accordingly::
>>> model = stormpy.build_model(prism_program, properties)
>>> print("Labels in the model: {}".format(sorted(model.labeling.get_labels())))
Labels in the model: ['(s = 2)', 'deadlock', 'init']
Model building, however, now behaves slightly differently: only the properties passed are preserved, which means that model building might skip parts of the model.
In particular, to check the probability of eventually reaching a state ``x`` where ``s=2``, successor states of ``x`` are not relevant::
>>> print("Number of states: {}".format(model.nr_states))
Number of states: 8
If we consider another property, however, such as::
P=? [F s=7 & d=2]
then Storm is only skipping exploration of successors of the particular state ``y`` where ``s=7`` and ``d=2``. In this model, state ``y`` has a self-loop, so effectively, the whole model is explored.
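We can verify this directly; the reported number of states should again be the full 13 (a short illustration, not part of the original example script; the names ``properties_alt`` and ``model_alt`` are introduced only here)::
>>> properties_alt = stormpy.parse_properties("P=? [F s=7 & d=2]", prism_program)
>>> model_alt = stormpy.build_model(prism_program, properties_alt)
>>> print("Number of states: {}".format(model_alt.nr_states))  # doctest: +SKIP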
.. _getting-started-checking-properties:
Checking properties
------------------------------------
.. seealso:: `03-getting-started.py <https://github.com/moves-rwth/stormpy/blob/master/examples/03-getting-started.py>`_
The last lesson taught us to construct properties and models with matching state labels.
Now default checking routines are just a simple command away::
>>> properties = stormpy.parse_properties(formula_str, prism_program)
>>> model = stormpy.build_model(prism_program, properties)
>>> result = stormpy.model_checking(model, properties[0])
The result may contain information about all states.
Instead, we can iterate over the results::
>>> assert result.result_for_all_states
>>> for x in result.get_values():
... pass # do something with x
.. topic:: Results for all states
Some model checking algorithms do not provide results for all states. In those cases, the result is not valid for all states, and to iterate over them, a different method is required. We will explain this later.
A good way to get the result for the initial states is as follows::
>>> initial_state = model.initial_states[0]
>>> print(result.at(initial_state))
0.5
.. _getting-started-investigating-the-model:
Investigating the model
-------------------------------------
.. seealso:: `04-getting-started.py <https://github.com/moves-rwth/stormpy/blob/master/examples/04-getting-started.py>`_
One powerful feature of the Storm model checker is its ability to quickly create the Markov chain from high-level descriptions, as seen above::
>>> path = stormpy.examples.files.prism_dtmc_die
>>> prism_program = stormpy.parse_prism_program(path)
>>> model = stormpy.build_model(prism_program)
In this example, we will exploit this, and explore the underlying Markov chain of the model.
The most basic question might be what the type of the constructed model is::
>>> print(model.model_type)
ModelType.DTMC
We can also directly explore the underlying state space/matrix.
Notice that this code can be applied to both deterministic and non-deterministic models::
>>> for state in model.states:
... for action in state.actions:
... for transition in action.transitions:
... print("From state {}, with probability {}, go to state {}".format(state, transition.value(), transition.column))
From state 0, with probability 0.5, go to state 1
From state 0, with probability 0.5, go to state 2
From state 1, with probability 0.5, go to state 3
From state 1, with probability 0.5, go to state 4
From state 2, with probability 0.5, go to state 5
From state 2, with probability 0.5, go to state 6
From state 3, with probability 0.5, go to state 1
From state 3, with probability 0.5, go to state 7
From state 4, with probability 0.5, go to state 8
From state 4, with probability 0.5, go to state 9
From state 5, with probability 0.5, go to state 10
From state 5, with probability 0.5, go to state 11
From state 6, with probability 0.5, go to state 2
From state 6, with probability 0.5, go to state 12
From state 7, with probability 1.0, go to state 7
From state 8, with probability 1.0, go to state 8
From state 9, with probability 1.0, go to state 9
From state 10, with probability 1.0, go to state 10
From state 11, with probability 1.0, go to state 11
From state 12, with probability 1.0, go to state 12
Let us go into some more details. For DTMCs, each state has (at most) one outgoing probability distribution.
Thus::
>>> for state in model.states:
... assert len(state.actions) <= 1
We can also check if a state is indeed an initial state. Notice that ``model.initial_states`` contains state ids, not states::
>>> for state in model.states:
... if state.id in model.initial_states:
... pass