**************
Reward Models
**************

In :doc:`getting_started`, we mainly looked at probabilities in the Markov models and properties that refer to these probabilities.
In this section, we discuss reward models.

Exploring reward models
------------------------

.. seealso:: `01-reward-models.py <https://github.com/moves-rwth/stormpy/blob/master/examples/reward_models/01-reward-models.py>`_

We consider the die again, but with another property which talks about the expected reward::

    >>> import stormpy
    >>> import stormpy.examples
    >>> import stormpy.examples.files
    >>> program = stormpy.parse_prism_program(stormpy.examples.files.prism_dtmc_die)
    >>> prop = "R=? [F \"done\"]"
    >>> properties = stormpy.parse_properties_for_prism_program(prop, program, None)
    >>> model = stormpy.build_model(program, properties)
    >>> assert len(model.reward_models) == 1

The model now has a reward model, as the property talks about rewards.
We can do model checking analogous to probabilities::

    >>> initial_state = model.initial_states[0]
    >>> result = stormpy.model_checking(model, properties[0])
    >>> print("Result: {}".format(result.at(initial_state)))
    Result: 3.6666666666666665

The result, 11/3, is the expected number of coin flips until the model reaches a state labeled ``done``.

The reward model has a name which we can obtain as follows::

    >>> reward_model_name = list(model.reward_models.keys())[0]
    >>> print(reward_model_name)
    coin_flips

We discuss later how to work with multiple reward models.
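
Since ``model.reward_models`` behaves like a Python dictionary (we already used its ``keys()`` above), all reward models can be enumerated by name. A minimal sketch, assuming the mapping also supports the usual ``items()`` iteration::

    # Assumption: reward_models offers dict-style items(); only keys() is
    # demonstrated above. For this model it yields a single entry, coin_flips.
    for name, reward_model in model.reward_models.items():
        print(name, reward_model.has_state_action_rewards)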

Rewards come in different forms: state rewards, state-action rewards, and transition rewards.
In this example, we only have state-action rewards. These rewards form a vector, over which we can trivially iterate::

    >>> assert not model.reward_models[reward_model_name].has_state_rewards
    >>> assert model.reward_models[reward_model_name].has_state_action_rewards
    >>> assert not model.reward_models[reward_model_name].has_transition_rewards
    >>> for reward in model.reward_models[reward_model_name].state_action_rewards:
    ...     print(reward)
    1.0
    1.0
    1.0
    1.0
    1.0
    1.0
    1.0
    0.0
    0.0
    0.0
    0.0
    0.0
    0.0
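
Because the die model is a DTMC, each state has exactly one action, so the state-action reward vector is indexed by state. A minimal sketch that pairs each state with its reward, assuming ``model.states`` yields the states in index order and that each state exposes an ``id``, as in the getting-started examples::

    # Assumption: model.states iterates states in index order and each state
    # has an id attribute. The seven inner states of the die protocol carry
    # reward 1.0 (one coin flip each); the six final states carry reward 0.0.
    rewards = model.reward_models[reward_model_name].state_action_rewards
    for state, reward in zip(model.states, rewards):
        print("State {}: reward {}".format(state.id, reward))

Note that the expected reward computed above, 11/3, is not simply the sum of this vector: model checking weights each state's reward by the expected number of visits to that state.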