
reward models from drn files

refactoring
Sebastian Junges, 7 years ago
Parent commit: 326c64a953
3 changed files:
  1. doc/source/reward_models.rst (2 changes)
  2. examples/reward_models/01-reward-models.py (4 changes)
  3. lib/stormpy/examples/files/pdtmc/die.drn (16 changes)

doc/source/reward_models.rst

@@ -22,6 +22,7 @@ We consider the die again, but with another property which talks about the expec
     >>> assert len(model.reward_models) == 1
 The model now has a reward model, as the property talks about rewards.
+When :doc:`building_models` from explicit sources, the reward model is always included if it is defined in the source.
 We can do model checking analogous to probabilities::
@@ -61,4 +62,3 @@ In this example, we only have state-action rewards. These rewards are a vector,
     0.0
-

examples/reward_models/01-reward-models.py

@@ -24,7 +24,9 @@ def example_reward_models_01():
         print(reward)
     assert not model.reward_models[reward_model_name].has_transition_rewards
+    model = stormpy.build_parametric_model_from_drn(stormpy.examples.files.drn_pdtmc_die)
+    assert len(model.reward_models) == 1
+    assert reward_model_name == "coin_flips"

 if __name__ == '__main__':
     example_reward_models_01()

lib/stormpy/examples/files/pdtmc/die.drn

@@ -4,35 +4,35 @@
 @parameters
 p q
 @reward_models
+coin_flips
 @nr_states
 13
 @model
-state 0 init
+state 0 init [1]
 action 0
   1 : p
   2 : (-1)*p+1
-state 1
+state 1 [1]
 action 0
   3 : q
   4 : (-1)*q+1
-state 2
+state 2 [1]
 action 0
   5 : q
   6 : (-1)*q+1
-state 3
+state 3 [1]
 action 0
   1 : p
   7 : (-1)*p+1
-state 4
+state 4 [1]
 action 0
   8 : p
   9 : (-1)*p+1
-state 5
+state 5 [1]
 action 0
   2 : p
   10 : (-1)*p+1
-state 6
+state 6 [1]
 action 0
   11 : p
   12 : (-1)*p+1
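The die.drn change above declares a reward model named coin_flips under the @reward_models header and attaches a state reward of [1] to each non-terminal state. As an illustrative sketch only (this is not Storm's DRN parser, and `parse_drn_rewards` is a hypothetical helper), the two annotations can be read back out of a DRN fragment with a few lines of stdlib Python:

```python
# Minimal sketch of reading DRN reward annotations, matching the
# format of the die.drn fragment in the diff above. Not Storm's parser.

DRN_SNIPPET = """\
@parameters
p q
@reward_models
coin_flips
@nr_states
13
@model
state 0 init [1]
action 0
  1 : p
  2 : (-1)*p+1
state 1 [1]
action 0
  3 : q
  4 : (-1)*q+1
"""


def parse_drn_rewards(text):
    """Return (reward model names, {state id: state reward values})."""
    names = []
    state_rewards = {}
    section = None
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("@"):
            # Section headers such as @reward_models and @model.
            section = line
            continue
        if section == "@reward_models" and line:
            # One or more reward model names on this line.
            names.extend(line.split())
        elif section == "@model" and line.startswith("state"):
            state_id = int(line.split()[1])
            if "[" in line:
                # State rewards are given in square brackets, e.g. [1].
                inside = line[line.index("[") + 1 : line.index("]")]
                state_rewards[state_id] = [float(v) for v in inside.split(",")]
    return names, state_rewards


names, rewards = parse_drn_rewards(DRN_SNIPPET)
print(names)       # ['coin_flips']
print(rewards[0])  # [1.0]
```

This mirrors why the new assertions in 01-reward-models.py can expect exactly one reward model called "coin_flips" after building the parametric model from this file.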
