25 changed files with 2201 additions and 1 deletion

+2   examples/multi-objective/mdp/benchmarks_numerical.sh
+93  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/1_consensus2_3_2.storm.output
+93  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/1_consensus2_4_2.storm.output
+94  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/1_consensus2_5_2.storm.output
+93  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/1_consensus3_3_2.storm.output
+93  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/1_consensus3_4_2.storm.output
+93  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/1_consensus3_5_2.storm.output
+93  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/2_zeroconf4.storm.output
+93  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/2_zeroconf6.storm.output
+93  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/2_zeroconf8.storm.output
+93  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/3_zeroconf-tb2_14.storm.output
+93  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/3_zeroconf-tb4_10.storm.output
+93  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/3_zeroconf-tb4_14.storm.output
+90  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/4_team2obj_3.storm.output
+90  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/4_team2obj_4.storm.output
+90  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/4_team2obj_5.storm.output
+94  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/5_team3obj_3.storm.output
+94  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/5_team3obj_4.storm.output
+94  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/5_team3obj_5.storm.output
+90  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/6_scheduler05.storm.output
+90  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/6_scheduler25.storm.output
+90  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/6_scheduler50.storm.output
+87  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/7_dpm100.storm.output
+87  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/7_dpm200.storm.output
+87  examples/multi-objective/mdp/benchmarks_numerical/2016-06-20/storm/7_dpm300.storm.output
@@ -0,0 +1,93 @@
SToRM
---------------

Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY).

Linked with GNU Linear Programming Kit v4.57.
Linked with SMT-RAT 2.1.0.
Linked with CARL.
Command line arguments: -s consensus/consensus2_3_2.nm -prop consensus/consensus2_3_2_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001
Current working directory: /Users/tim/storm/examples/multi-objective/mdp

--------------------------------------------------------------
Model type: MDP (sparse)
States: 691
Transitions: 1190
Choices: 1190
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * one_coin_ok -> 363 state(s)
   * one_proc_err -> 76 state(s)
choice labels: no
Size in memory: 48 kbytes
--------------------------------------------------------------

Model checking property: multi(Pmax=? [F "one_proc_err"], P>=0.8916673903 [G "one_coin_ok"]) ...
---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Preprocessor Data
---------------------------------------------------------------------------------------------------------------------------------------

Original Formula:
--------------------------------------------------------------
multi(Pmax=? [F "one_proc_err"], P>=0.8916673903 [G "one_coin_ok"])

Objectives:
--------------------------------------------------------------
Pmax=? [F "one_proc_err"] (toOrigVal: 1*x + 0, intern threshold: none, intern reward model: objective1 (positive), step bound: none)
P>=0.891667 [G "one_coin_ok"] (toOrigVal: 1*x + 1, intern threshold:>=-0.1083326097, intern reward model: objective2 (negative), step bound: none)
--------------------------------------------------------------

Original Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 691
Transitions: 1190
Choices: 1190
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * one_coin_ok -> 363 state(s)
   * one_proc_err -> 76 state(s)
choice labels: no
Size in memory: 48 kbytes
--------------------------------------------------------------

Preprocessed Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 691
Transitions: 1190
Choices: 1190
Reward Models: objective2, objective1
Labels: 4
   * one_proc_err -> 76 state(s)
   * one_coin_ok -> 363 state(s)
   * prob1 -> 0 state(s)
   * init -> 1 state(s)
choice labels: no
Size in memory: 47 kbytes
--------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Model Checking Statistics:
---------------------------------------------------------------------------------------------------------------------------------------

Runtimes (in seconds):
Preprocessing: 0.001
Value Iterations: 0.008
Postprocessing: 0
Combined: 0.01

Performed Refinement Steps: 3

---------------------------------------------------------------------------------------------------------------------------------------
done.
Result (initial states): [0.1083326097]
===== Statistics ==============================
peak memory usage: 190MB
CPU time: 0.068 seconds
===============================================
OVERALL_TIME; 0.145
@@ -0,0 +1,93 @@
SToRM
---------------

Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY).

Linked with GNU Linear Programming Kit v4.57.
Linked with SMT-RAT 2.1.0.
Linked with CARL.
Command line arguments: -s consensus/consensus2_4_2.nm -prop consensus/consensus2_4_2_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001
Current working directory: /Users/tim/storm/examples/multi-objective/mdp

--------------------------------------------------------------
Model type: MDP (sparse)
States: 1517
Transitions: 2616
Choices: 2616
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * one_coin_ok -> 861 state(s)
   * one_proc_err -> 152 state(s)
choice labels: no
Size in memory: 96 kbytes
--------------------------------------------------------------

Model checking property: multi(Pmax=? [F "one_proc_err"], P>=0.9882640457 [G "one_coin_ok"]) ...
---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Preprocessor Data
---------------------------------------------------------------------------------------------------------------------------------------

Original Formula:
--------------------------------------------------------------
multi(Pmax=? [F "one_proc_err"], P>=0.9882640457 [G "one_coin_ok"])

Objectives:
--------------------------------------------------------------
Pmax=? [F "one_proc_err"] (toOrigVal: 1*x + 0, intern threshold: none, intern reward model: objective1 (positive), step bound: none)
P>=0.988264 [G "one_coin_ok"] (toOrigVal: 1*x + 1, intern threshold:>=-0.0117359543, intern reward model: objective2 (negative), step bound: none)
--------------------------------------------------------------

Original Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 1517
Transitions: 2616
Choices: 2616
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * one_coin_ok -> 861 state(s)
   * one_proc_err -> 152 state(s)
choice labels: no
Size in memory: 96 kbytes
--------------------------------------------------------------

Preprocessed Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 1517
Transitions: 2616
Choices: 2616
Reward Models: objective2, objective1
Labels: 4
   * one_proc_err -> 152 state(s)
   * one_coin_ok -> 861 state(s)
   * prob1 -> 0 state(s)
   * init -> 1 state(s)
choice labels: no
Size in memory: 103 kbytes
--------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Model Checking Statistics:
---------------------------------------------------------------------------------------------------------------------------------------

Runtimes (in seconds):
Preprocessing: 0.001
Value Iterations: 0.01
Postprocessing: 0
Combined: 0.012

Performed Refinement Steps: 3

---------------------------------------------------------------------------------------------------------------------------------------
done.
Result (initial states): [0.0117359543]
===== Statistics ==============================
peak memory usage: 191MB
CPU time: 0.089 seconds
===============================================
OVERALL_TIME; 0.159
@@ -0,0 +1,94 @@
SToRM
---------------

Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY).

Linked with GNU Linear Programming Kit v4.57.
Linked with SMT-RAT 2.1.0.
Linked with CARL.
Command line arguments: -s consensus/consensus2_5_2.nm -prop consensus/consensus2_5_2_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001
Current working directory: /Users/tim/storm/examples/multi-objective/mdp

--------------------------------------------------------------
Model type: MDP (sparse)
States: 3169
Transitions: 5468
Choices: 5468
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * one_coin_ok -> 1857 state(s)
   * one_proc_err -> 304 state(s)
choice labels: no
Size in memory: 193 kbytes
--------------------------------------------------------------

Model checking property: multi(Pmax=? [F "one_proc_err"], P>=0.9987286134 [G "one_coin_ok"]) ...
---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Preprocessor Data
---------------------------------------------------------------------------------------------------------------------------------------

Original Formula:
--------------------------------------------------------------
multi(Pmax=? [F "one_proc_err"], P>=0.9987286134 [G "one_coin_ok"])

Objectives:
--------------------------------------------------------------
Pmax=? [F "one_proc_err"] (toOrigVal: 1*x + 0, intern threshold: none, intern reward model: objective1 (positive), step bound: none)
P>=0.998729 [G "one_coin_ok"] (toOrigVal: 1*x + 1, intern threshold:>=-0.0012713866, intern reward model: objective2 (negative), step bound: none)
--------------------------------------------------------------

Original Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 3169
Transitions: 5468
Choices: 5468
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * one_coin_ok -> 1857 state(s)
   * one_proc_err -> 304 state(s)
choice labels: no
Size in memory: 193 kbytes
--------------------------------------------------------------

Preprocessed Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 3169
Transitions: 5468
Choices: 5468
Reward Models: objective2, objective1
Labels: 4
   * one_proc_err -> 304 state(s)
   * one_coin_ok -> 1857 state(s)
   * prob1 -> 0 state(s)
   * init -> 1 state(s)
choice labels: no
Size in memory: 215 kbytes
--------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------
WARN (SparseMultiObjectiveHelper.cpp:238): Numerical issues: The overapproximation would not contain the underapproximation. Hence, a halfspace is shifted by 2.51215e-15.

---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Model Checking Statistics:
---------------------------------------------------------------------------------------------------------------------------------------

Runtimes (in seconds):
Preprocessing: 0.003
Value Iterations: 0.018
Postprocessing: 0
Combined: 0.022

Performed Refinement Steps: 3

---------------------------------------------------------------------------------------------------------------------------------------
done.
Result (initial states): [0.0012713866]
===== Statistics ==============================
peak memory usage: 192MB
CPU time: 0.128 seconds
===============================================
OVERALL_TIME; 0.2
@@ -0,0 +1,93 @@
SToRM
---------------

Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY).

Linked with GNU Linear Programming Kit v4.57.
Linked with SMT-RAT 2.1.0.
Linked with CARL.
Command line arguments: -s consensus/consensus3_3_2.nm -prop consensus/consensus3_3_2_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001
Current working directory: /Users/tim/storm/examples/multi-objective/mdp

--------------------------------------------------------------
Model type: MDP (sparse)
States: 17455
Transitions: 46702
Choices: 46702
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * one_coin_ok -> 5523 state(s)
   * one_proc_err -> 3360 state(s)
choice labels: no
Size in memory: 1542 kbytes
--------------------------------------------------------------

Model checking property: multi(Pmax=? [F "one_proc_err"], P>=0.7709112445 [G "one_coin_ok"]) ...
---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Preprocessor Data
---------------------------------------------------------------------------------------------------------------------------------------

Original Formula:
--------------------------------------------------------------
multi(Pmax=? [F "one_proc_err"], P>=0.7709112445 [G "one_coin_ok"])

Objectives:
--------------------------------------------------------------
Pmax=? [F "one_proc_err"] (toOrigVal: 1*x + 0, intern threshold: none, intern reward model: objective1 (positive), step bound: none)
P>=0.770911 [G "one_coin_ok"] (toOrigVal: 1*x + 1, intern threshold:>=-0.2290887555, intern reward model: objective2 (negative), step bound: none)
--------------------------------------------------------------

Original Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 17455
Transitions: 46702
Choices: 46702
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * one_coin_ok -> 5523 state(s)
   * one_proc_err -> 3360 state(s)
choice labels: no
Size in memory: 1542 kbytes
--------------------------------------------------------------

Preprocessed Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 17455
Transitions: 46702
Choices: 46702
Reward Models: objective2, objective1
Labels: 4
   * one_proc_err -> 3360 state(s)
   * one_coin_ok -> 5523 state(s)
   * prob1 -> 0 state(s)
   * init -> 1 state(s)
choice labels: no
Size in memory: 1833 kbytes
--------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Model Checking Statistics:
---------------------------------------------------------------------------------------------------------------------------------------

Runtimes (in seconds):
Preprocessing: 0.018
Value Iterations: 0.093
Postprocessing: 0
Combined: 0.112

Performed Refinement Steps: 3

---------------------------------------------------------------------------------------------------------------------------------------
done.
Result (initial states): [0.2290887555]
===== Statistics ==============================
peak memory usage: 200MB
CPU time: 0.508 seconds
===============================================
OVERALL_TIME; 0.604
@@ -0,0 +1,93 @@
SToRM
---------------

Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY).

Linked with GNU Linear Programming Kit v4.57.
Linked with SMT-RAT 2.1.0.
Linked with CARL.
Command line arguments: -s consensus/consensus3_4_2.nm -prop consensus/consensus3_4_2_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001
Current working directory: /Users/tim/storm/examples/multi-objective/mdp

--------------------------------------------------------------
Model type: MDP (sparse)
States: 61017
Transitions: 164012
Choices: 164012
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * one_coin_ok -> 24361 state(s)
   * one_proc_err -> 9648 state(s)
choice labels: no
Size in memory: 6166 kbytes
--------------------------------------------------------------

Model checking property: multi(Pmax=? [F "one_proc_err"], P>=0.9475183421 [G "one_coin_ok"]) ...
---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Preprocessor Data
---------------------------------------------------------------------------------------------------------------------------------------

Original Formula:
--------------------------------------------------------------
multi(Pmax=? [F "one_proc_err"], P>=0.9475183421 [G "one_coin_ok"])

Objectives:
--------------------------------------------------------------
Pmax=? [F "one_proc_err"] (toOrigVal: 1*x + 0, intern threshold: none, intern reward model: objective1 (positive), step bound: none)
P>=0.947518 [G "one_coin_ok"] (toOrigVal: 1*x + 1, intern threshold:>=-0.0524816579, intern reward model: objective2 (negative), step bound: none)
--------------------------------------------------------------

Original Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 61017
Transitions: 164012
Choices: 164012
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * one_coin_ok -> 24361 state(s)
   * one_proc_err -> 9648 state(s)
choice labels: no
Size in memory: 6166 kbytes
--------------------------------------------------------------

Preprocessed Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 61017
Transitions: 164012
Choices: 164012
Reward Models: objective2, objective1
Labels: 4
   * one_proc_err -> 9648 state(s)
   * one_coin_ok -> 24361 state(s)
   * prob1 -> 0 state(s)
   * init -> 1 state(s)
choice labels: no
Size in memory: 6436 kbytes
--------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Model Checking Statistics:
---------------------------------------------------------------------------------------------------------------------------------------

Runtimes (in seconds):
Preprocessing: 0.074
Value Iterations: 0.301
Postprocessing: 0
Combined: 0.376

Performed Refinement Steps: 3

---------------------------------------------------------------------------------------------------------------------------------------
done.
Result (initial states): [0.0524816579]
===== Statistics ==============================
peak memory usage: 226MB
CPU time: 1.795 seconds
===============================================
OVERALL_TIME; 1.93
@@ -0,0 +1,93 @@
SToRM
---------------

Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY).

Linked with GNU Linear Programming Kit v4.57.
Linked with SMT-RAT 2.1.0.
Linked with CARL.
Command line arguments: -s consensus/consensus3_5_2.nm -prop consensus/consensus3_5_2_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001
Current working directory: /Users/tim/storm/examples/multi-objective/mdp

--------------------------------------------------------------
Model type: MDP (sparse)
States: 181129
Transitions: 487456
Choices: 487456
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * one_coin_ok -> 82233 state(s)
   * one_proc_err -> 25152 state(s)
choice labels: no
Size in memory: 12354 kbytes
--------------------------------------------------------------

Model checking property: multi(Pmax=? [F "one_proc_err"], P>=0.9879770423 [G "one_coin_ok"]) ...
---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Preprocessor Data
---------------------------------------------------------------------------------------------------------------------------------------

Original Formula:
--------------------------------------------------------------
multi(Pmax=? [F "one_proc_err"], P>=0.9879770423 [G "one_coin_ok"])

Objectives:
--------------------------------------------------------------
Pmax=? [F "one_proc_err"] (toOrigVal: 1*x + 0, intern threshold: none, intern reward model: objective1 (positive), step bound: none)
P>=0.987977 [G "one_coin_ok"] (toOrigVal: 1*x + 1, intern threshold:>=-0.0120229577, intern reward model: objective2 (negative), step bound: none)
--------------------------------------------------------------

Original Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 181129
Transitions: 487456
Choices: 487456
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * one_coin_ok -> 82233 state(s)
   * one_proc_err -> 25152 state(s)
choice labels: no
Size in memory: 12354 kbytes
--------------------------------------------------------------

Preprocessed Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 181129
Transitions: 487456
Choices: 487456
Reward Models: objective2, objective1
Labels: 4
   * one_proc_err -> 25152 state(s)
   * one_coin_ok -> 82233 state(s)
   * prob1 -> 0 state(s)
   * init -> 1 state(s)
choice labels: no
Size in memory: 19129 kbytes
--------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Model Checking Statistics:
---------------------------------------------------------------------------------------------------------------------------------------

Runtimes (in seconds):
Preprocessing: 0.228
Value Iterations: 0.92
Postprocessing: 0
Combined: 1.151

Performed Refinement Steps: 3

---------------------------------------------------------------------------------------------------------------------------------------
done.
Result (initial states): [0.0120229577]
===== Statistics ==============================
peak memory usage: 269MB
CPU time: 5.659 seconds
===============================================
OVERALL_TIME; 5.914
@@ -0,0 +1,93 @@
SToRM
---------------

Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY).

Linked with GNU Linear Programming Kit v4.57.
Linked with SMT-RAT 2.1.0.
Linked with CARL.
Command line arguments: -s zeroconf/zeroconf4.nm -prop zeroconf/zeroconf4_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001
Current working directory: /Users/tim/storm/examples/multi-objective/mdp

--------------------------------------------------------------
Model type: MDP (sparse)
States: 5449
Transitions: 17152
Choices: 16487
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * (error = 0) -> 4998 state(s)
   * ((l = 4) & (ip = 1)) -> 24 state(s)
choice labels: no
Size in memory: 770 kbytes
--------------------------------------------------------------

Model checking property: multi(Pmax=? [F ((l = 4) & (ip = 1))], P>=0.993141 [G (error = 0)]) ...
---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Preprocessor Data
---------------------------------------------------------------------------------------------------------------------------------------

Original Formula:
--------------------------------------------------------------
multi(Pmax=? [F ((l = 4) & (ip = 1))], P>=0.993141 [G (error = 0)])

Objectives:
--------------------------------------------------------------
Pmax=? [F ((l = 4) & (ip = 1))] (toOrigVal: 1*x + 0, intern threshold: none, intern reward model: objective1 (positive), step bound: none)
P>=0.993141 [G (error = 0)] (toOrigVal: 1*x + 1, intern threshold:>=-0.006859, intern reward model: objective2 (negative), step bound: none)
--------------------------------------------------------------

Original Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 5449
Transitions: 17152
Choices: 16487
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * (error = 0) -> 4998 state(s)
   * ((l = 4) & (ip = 1)) -> 24 state(s)
choice labels: no
Size in memory: 770 kbytes
--------------------------------------------------------------

Preprocessed Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 5449
Transitions: 17152
Choices: 16487
Reward Models: objective2, objective1
Labels: 4
   * prob1 -> 0 state(s)
   * init -> 1 state(s)
   * (error = 0) -> 4998 state(s)
   * ((l = 4) & (ip = 1)) -> 24 state(s)
choice labels: no
Size in memory: 657 kbytes
--------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Model Checking Statistics:
---------------------------------------------------------------------------------------------------------------------------------------

Runtimes (in seconds):
Preprocessing: 0.006
Value Iterations: 0.051
Postprocessing: 0
Combined: 0.058

Performed Refinement Steps: 2

---------------------------------------------------------------------------------------------------------------------------------------
done.
Result (initial states): [0.0003075787402]
===== Statistics ==============================
peak memory usage: 194MB
CPU time: 0.210 seconds
===============================================
OVERALL_TIME; 0.28
@@ -0,0 +1,93 @@
SToRM
---------------

Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY).

Linked with GNU Linear Programming Kit v4.57.
Linked with SMT-RAT 2.1.0.
Linked with CARL.
Command line arguments: -s zeroconf/zeroconf6.nm -prop zeroconf/zeroconf6_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001
Current working directory: /Users/tim/storm/examples/multi-objective/mdp

--------------------------------------------------------------
Model type: MDP (sparse)
States: 10543
Transitions: 32003
Choices: 31008
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * (error = 0) -> 9960 state(s)
   * ((l = 4) & (ip = 1)) -> 36 state(s)
choice labels: no
Size in memory: 772 kbytes
--------------------------------------------------------------

Model checking property: multi(Pmax=? [F ((l = 4) & (ip = 1))], P>=0.9997523901 [G (error = 0)]) ...
---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Preprocessor Data
---------------------------------------------------------------------------------------------------------------------------------------

Original Formula:
--------------------------------------------------------------
multi(Pmax=? [F ((l = 4) & (ip = 1))], P>=0.9997523901 [G (error = 0)])

Objectives:
--------------------------------------------------------------
Pmax=? [F ((l = 4) & (ip = 1))] (toOrigVal: 1*x + 0, intern threshold: none, intern reward model: objective1 (positive), step bound: none)
P>=0.999752 [G (error = 0)] (toOrigVal: 1*x + 1, intern threshold:>=-0.0002476099, intern reward model: objective2 (negative), step bound: none)
--------------------------------------------------------------

Original Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 10543
Transitions: 32003
Choices: 31008
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * (error = 0) -> 9960 state(s)
   * ((l = 4) & (ip = 1)) -> 36 state(s)
choice labels: no
Size in memory: 772 kbytes
--------------------------------------------------------------

Preprocessed Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 10543
Transitions: 32003
Choices: 31008
Reward Models: objective2, objective1
Labels: 4
   * prob1 -> 0 state(s)
   * init -> 1 state(s)
   * (error = 0) -> 9960 state(s)
   * ((l = 4) & (ip = 1)) -> 36 state(s)
choice labels: no
Size in memory: 1232 kbytes
--------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Model Checking Statistics:
---------------------------------------------------------------------------------------------------------------------------------------

Runtimes (in seconds):
Preprocessing: 0.011
Value Iterations: 0.121
Postprocessing: 0
Combined: 0.132

Performed Refinement Steps: 2

---------------------------------------------------------------------------------------------------------------------------------------
done.
Result (initial states): [0.0002476099]
===== Statistics ==============================
peak memory usage: 201MB
CPU time: 0.373 seconds
===============================================
OVERALL_TIME; 0.491
@@ -0,0 +1,93 @@
SToRM
---------------

Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY).

Linked with GNU Linear Programming Kit v4.57.
Linked with SMT-RAT 2.1.0.
Linked with CARL.
Command line arguments: -s zeroconf/zeroconf8.nm -prop zeroconf/zeroconf8_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001
Current working directory: /Users/tim/storm/examples/multi-objective/mdp

--------------------------------------------------------------
Model type: MDP (sparse)
States: 17221
Transitions: 52163
Choices: 50838
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * (error = 0) -> 16506 state(s)
   * ((l = 4) & (ip = 1)) -> 48 state(s)
choice labels: no
Size in memory: 1542 kbytes
--------------------------------------------------------------

Model checking property: multi(Pmax=? [F ((l = 4) & (ip = 1))], P>=0.9999910613 [G (error = 0)]) ...
---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Preprocessor Data
---------------------------------------------------------------------------------------------------------------------------------------

Original Formula:
--------------------------------------------------------------
multi(Pmax=? [F ((l = 4) & (ip = 1))], P>=0.9999910613 [G (error = 0)])

Objectives:
--------------------------------------------------------------
Pmax=? [F ((l = 4) & (ip = 1))] (toOrigVal: 1*x + 0, intern threshold: none, intern reward model: objective1 (positive), step bound: none)
P>=0.999991 [G (error = 0)] (toOrigVal: 1*x + 1, intern threshold:>=-8.9387e-06, intern reward model: objective2 (negative), step bound: none)
--------------------------------------------------------------

Original Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 17221
Transitions: 52163
Choices: 50838
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * (error = 0) -> 16506 state(s)
   * ((l = 4) & (ip = 1)) -> 48 state(s)
choice labels: no
Size in memory: 1542 kbytes
--------------------------------------------------------------

Preprocessed Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 17221
Transitions: 52163
Choices: 50838
Reward Models: objective2, objective1
Labels: 4
   * prob1 -> 0 state(s)
   * init -> 1 state(s)
   * (error = 0) -> 16506 state(s)
   * ((l = 4) & (ip = 1)) -> 48 state(s)
choice labels: no
Size in memory: 2015 kbytes
--------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Model Checking Statistics:
---------------------------------------------------------------------------------------------------------------------------------------

Runtimes (in seconds):
Preprocessing: 0.018
Value Iterations: 0.538
Postprocessing: 0
Combined: 0.556

Performed Refinement Steps: 3

---------------------------------------------------------------------------------------------------------------------------------------
done.
Result (initial states): [9.030555281e-06]
===== Statistics ==============================
peak memory usage: 201MB
CPU time: 0.906 seconds
===============================================
OVERALL_TIME; 1.017
@@ -0,0 +1,93 @@
SToRM
---------------

Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY).

Linked with GNU Linear Programming Kit v4.57.
Linked with SMT-RAT 2.1.0.
Linked with CARL.
Command line arguments: -s zeroconf-tb/zeroconf-tb2_14.nm -prop zeroconf-tb/zeroconf-tb2_14_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001
Current working directory: /Users/tim/storm/examples/multi-objective/mdp

--------------------------------------------------------------
Model type: MDP (sparse)
States: 29572
Transitions: 92605
Choices: 86585
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * (error = 0) -> 24820 state(s)
   * (time_error = 1) -> 1914 state(s)
choice labels: no
Size in memory: 3083 kbytes
--------------------------------------------------------------

Model checking property: multi(Pmax=? [F (time_error = 1)], P>=0.81 [G (error = 0)]) ...
---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Preprocessor Data
---------------------------------------------------------------------------------------------------------------------------------------

Original Formula:
--------------------------------------------------------------
multi(Pmax=? [F (time_error = 1)], P>=0.81 [G (error = 0)])

Objectives:
--------------------------------------------------------------
Pmax=? [F (time_error = 1)] (toOrigVal: 1*x + 0, intern threshold: none, intern reward model: objective1 (positive), step bound: none)
P>=0.81 [G (error = 0)] (toOrigVal: 1*x + 1, intern threshold:>=-0.19, intern reward model: objective2 (negative), step bound: none)
--------------------------------------------------------------

Original Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 29572
Transitions: 92605
Choices: 86585
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * (error = 0) -> 24820 state(s)
   * (time_error = 1) -> 1914 state(s)
choice labels: no
Size in memory: 3083 kbytes
--------------------------------------------------------------

Preprocessed Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 29572
Transitions: 92605
Choices: 86585
Reward Models: objective2, objective1
Labels: 4
   * prob1 -> 0 state(s)
   * init -> 1 state(s)
   * (time_error = 1) -> 1914 state(s)
   * (error = 0) -> 24820 state(s)
choice labels: no
Size in memory: 3491 kbytes
--------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Model Checking Statistics:
---------------------------------------------------------------------------------------------------------------------------------------

Runtimes (in seconds):
Preprocessing: 0.031
Value Iterations: 0.393
Postprocessing: 0
Combined: 0.425

Performed Refinement Steps: 2

---------------------------------------------------------------------------------------------------------------------------------------
done.
Result (initial states): [8.059350372e-08]
===== Statistics ==============================
peak memory usage: 208MB
CPU time: 1.043 seconds
===============================================
OVERALL_TIME; 1.145
@@ -0,0 +1,93 @@
SToRM
---------------

Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY).

Linked with GNU Linear Programming Kit v4.57.
Linked with SMT-RAT 2.1.0.
Linked with CARL.
Command line arguments: -s zeroconf-tb/zeroconf-tb4_10.nm -prop zeroconf-tb/zeroconf-tb4_10_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001
Current working directory: /Users/tim/storm/examples/multi-objective/mdp

--------------------------------------------------------------
Model type: MDP (sparse)
States: 19670
Transitions: 63353
Choices: 59358
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * (error = 0) -> 18112 state(s)
   * (time_error = 1) -> 5420 state(s)
choice labels: no
Size in memory: 1543 kbytes
--------------------------------------------------------------

Model checking property: multi(Pmax=? [F (time_error = 1)], P>=0.993141 [G (error = 0)]) ...
---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Preprocessor Data
---------------------------------------------------------------------------------------------------------------------------------------

Original Formula:
--------------------------------------------------------------
multi(Pmax=? [F (time_error = 1)], P>=0.993141 [G (error = 0)])

Objectives:
--------------------------------------------------------------
Pmax=? [F (time_error = 1)] (toOrigVal: 1*x + 0, intern threshold: none, intern reward model: objective1 (positive), step bound: none)
P>=0.993141 [G (error = 0)] (toOrigVal: 1*x + 1, intern threshold:>=-0.006859, intern reward model: objective2 (negative), step bound: none)
--------------------------------------------------------------

Original Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 19670
Transitions: 63353
Choices: 59358
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * (error = 0) -> 18112 state(s)
   * (time_error = 1) -> 5420 state(s)
choice labels: no
Size in memory: 1543 kbytes
--------------------------------------------------------------

Preprocessed Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 19670
Transitions: 63353
Choices: 59358
Reward Models: objective2, objective1
Labels: 4
   * prob1 -> 0 state(s)
   * init -> 1 state(s)
   * (time_error = 1) -> 5420 state(s)
   * (error = 0) -> 18112 state(s)
choice labels: no
Size in memory: 2390 kbytes
--------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Model Checking Statistics:
---------------------------------------------------------------------------------------------------------------------------------------

Runtimes (in seconds):
Preprocessing: 0.022
Value Iterations: 0.344
Postprocessing: 0
Combined: 0.367

Performed Refinement Steps: 2

---------------------------------------------------------------------------------------------------------------------------------------
done.
Result (initial states): [0.3335383858]
===== Statistics ==============================
peak memory usage: 203MB
CPU time: 0.776 seconds
===============================================
OVERALL_TIME; 0.881
@@ -0,0 +1,93 @@
SToRM
---------------

Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY).

Linked with GNU Linear Programming Kit v4.57.
Linked with SMT-RAT 2.1.0.
Linked with CARL.
Command line arguments: -s zeroconf-tb/zeroconf-tb4_14.nm -prop zeroconf-tb/zeroconf-tb4_14_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001
Current working directory: /Users/tim/storm/examples/multi-objective/mdp

--------------------------------------------------------------
Model type: MDP (sparse)
States: 42968
Transitions: 138025
Choices: 130735
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * (error = 0) -> 40236 state(s)
   * (time_error = 1) -> 5412 state(s)
choice labels: no
Size in memory: 5135 kbytes
--------------------------------------------------------------

Model checking property: multi(Pmax=? [F (time_error = 1)], P>=0.993141 [G (error = 0)]) ...
---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Preprocessor Data
---------------------------------------------------------------------------------------------------------------------------------------

Original Formula:
--------------------------------------------------------------
multi(Pmax=? [F (time_error = 1)], P>=0.993141 [G (error = 0)])

Objectives:
--------------------------------------------------------------
Pmax=? [F (time_error = 1)] (toOrigVal: 1*x + 0, intern threshold: none, intern reward model: objective1 (positive), step bound: none)
P>=0.993141 [G (error = 0)] (toOrigVal: 1*x + 1, intern threshold:>=-0.006859, intern reward model: objective2 (negative), step bound: none)
--------------------------------------------------------------

Original Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 42968
Transitions: 138025
Choices: 130735
Reward Models: none
Labels: 3
   * init -> 1 state(s)
   * (error = 0) -> 40236 state(s)
   * (time_error = 1) -> 5412 state(s)
choice labels: no
Size in memory: 5135 kbytes
--------------------------------------------------------------

Preprocessed Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 42968
Transitions: 138025
Choices: 130735
Reward Models: objective2, objective1
Labels: 4
   * prob1 -> 0 state(s)
   * init -> 1 state(s)
   * (time_error = 1) -> 5412 state(s)
   * (error = 0) -> 40236 state(s)
choice labels: no
Size in memory: 5242 kbytes
--------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Model Checking Statistics:
---------------------------------------------------------------------------------------------------------------------------------------

Runtimes (in seconds):
Preprocessing: 0.055
Value Iterations: 1.341
Postprocessing: 0
Combined: 1.397

Performed Refinement Steps: 2

---------------------------------------------------------------------------------------------------------------------------------------
done.
Result (initial states): [0.0003075787402]
===== Statistics ==============================
peak memory usage: 219MB
CPU time: 2.239 seconds
===============================================
OVERALL_TIME; 2.378
@@ -0,0 +1,90 @@
SToRM
---------------

Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY).

Linked with GNU Linear Programming Kit v4.57.
Linked with SMT-RAT 2.1.0.
Linked with CARL.
Command line arguments: -s team/team2obj_3.nm -prop team/team2obj_3_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001
Current working directory: /Users/tim/storm/examples/multi-objective/mdp

--------------------------------------------------------------
Model type: MDP (sparse)
States: 12475
Transitions: 15228
Choices: 14935
Reward Models: w_1_total
Labels: 2
   * init -> 1 state(s)
   * ((((status = 5) & ((t1_r1 = 1) => ((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))))) & ((t1_r2 = 1) => ((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))))) & ((t1_r3 = 1) => ((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))))) -> 546 state(s)
choice labels: no
Size in memory: 503 kbytes
--------------------------------------------------------------

Model checking property: multi(Pmax=? [F ((((status = 5) & ((t1_r1 = 1) => ((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))))) & ((t1_r2 = 1) => ((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))))) & ((t1_r3 = 1) => ((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3)))))], R[exp]{"w_1_total"}>=2.210204082 [C]) ...
---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Preprocessor Data
---------------------------------------------------------------------------------------------------------------------------------------

Original Formula:
--------------------------------------------------------------
multi(Pmax=? [F ((((status = 5) & ((t1_r1 = 1) => ((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))))) & ((t1_r2 = 1) => ((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))))) & ((t1_r3 = 1) => ((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3)))))], R[exp]{"w_1_total"}>=2.210204082 [C])

Objectives:
--------------------------------------------------------------
Pmax=? [F ((((status = 5) & ((t1_r1 = 1) => ((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))))) & ((t1_r2 = 1) => ((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))))) & ((t1_r3 = 1) => ((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3)))))] (toOrigVal: 1*x + 0, intern threshold: none, intern reward model: objective1 (positive), step bound: none)
R[exp]{"w_1_total"}>=2.2102 [C] (toOrigVal: 1*x + 0, intern threshold:>=2.210204082, intern reward model: objective2 (positive), step bound: none)
--------------------------------------------------------------

Original Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 12475
Transitions: 15228
Choices: 14935
Reward Models: w_1_total
Labels: 2
   * init -> 1 state(s)
   * ((((status = 5) & ((t1_r1 = 1) => ((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))))) & ((t1_r2 = 1) => ((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))))) & ((t1_r3 = 1) => ((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))))) -> 546 state(s)
choice labels: no
Size in memory: 503 kbytes
--------------------------------------------------------------

Preprocessed Model Information:
--------------------------------------------------------------
Model type: MDP (sparse)
States: 12475
Transitions: 15228
Choices: 14935
Reward Models: objective1, objective2
Labels: 3
   * prob1 -> 0 state(s)
   * init -> 1 state(s)
   * ((((status = 5) & ((t1_r1 = 1) => ((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))))) & ((t1_r2 = 1) => ((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))))) & ((t1_r3 = 1) => ((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))))) -> 546 state(s)
choice labels: no
Size in memory: 592 kbytes
--------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Model Checking Statistics:
---------------------------------------------------------------------------------------------------------------------------------------

Runtimes (in seconds):
Preprocessing: 0.057
Value Iterations: 0.089
Postprocessing: 0
Combined: 0.147

Performed Refinement Steps: 5

---------------------------------------------------------------------------------------------------------------------------------------
done.
Result (initial states): [0.8714285711]
===== Statistics ==============================
peak memory usage: 201MB
CPU time: 0.355 seconds
===============================================
OVERALL_TIME; 0.435
@ -0,0 +1,90 @@
SToRM
---------------

Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY).

Linked with GNU Linear Programming Kit v4.57.
Linked with SMT-RAT 2.1.0.
Linked with CARL.
Command line arguments: -s team/team2obj_4.nm -prop team/team2obj_4_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001
Current working directory: /Users/tim/storm/examples/multi-objective/mdp
|
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 96665 |
|||
Transitions: 116464 |
|||
Choices: 115289 |
|||
Reward Models: w_1_total |
|||
Labels: 2 |
|||
* init -> 1 state(s) |
|||
* ((((status = 6) & ((t1_r1 = 1) => (((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))))) & ((t1_r2 = 1) => (((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))))) & ((t1_r3 = 1) => (((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3))))) -> 4872 state(s) |
|||
choice labels: no |
|||
Size in memory: 3996 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
Model checking property: multi(Pmax=? [F ((((status = 6) & ((t1_r1 = 1) => (((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))))) & ((t1_r2 = 1) => (((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))))) & ((t1_r3 = 1) => (((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3)))))], R[exp]{"w_1_total"}>=2.423469388 [C]) ... |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
Multi-objective Preprocessor Data |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
|
|||
Original Formula: |
|||
-------------------------------------------------------------- |
|||
multi(Pmax=? [F ((((status = 6) & ((t1_r1 = 1) => (((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))))) & ((t1_r2 = 1) => (((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))))) & ((t1_r3 = 1) => (((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3)))))], R[exp]{"w_1_total"}>=2.423469388 [C]) |
|||
|
|||
Objectives: |
|||
-------------------------------------------------------------- |
|||
Pmax=? [F ((((status = 6) & ((t1_r1 = 1) => (((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))))) & ((t1_r2 = 1) => (((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))))) & ((t1_r3 = 1) => (((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3)))))] (toOrigVal: 1*x + 0, intern threshold: none, intern reward model: objective1 (positive), step bound: none) |
|||
R[exp]{"w_1_total"}>=2.42347 [C] (toOrigVal: 1*x + 0, intern threshold:>=2.423469388, intern reward model: objective2 (positive), step bound: none) |
|||
-------------------------------------------------------------- |
|||
|
|||
Original Model Information: |
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 96665 |
|||
Transitions: 116464 |
|||
Choices: 115289 |
|||
Reward Models: w_1_total |
|||
Labels: 2 |
|||
* init -> 1 state(s) |
|||
* ((((status = 6) & ((t1_r1 = 1) => (((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))))) & ((t1_r2 = 1) => (((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))))) & ((t1_r3 = 1) => (((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3))))) -> 4872 state(s) |
|||
choice labels: no |
|||
Size in memory: 3996 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
Preprocessed Model Information: |
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 96665 |
|||
Transitions: 116464 |
|||
Choices: 115289 |
|||
Reward Models: objective1, objective2 |
|||
Labels: 3 |
|||
* prob1 -> 0 state(s) |
|||
* init -> 1 state(s) |
|||
* ((((status = 6) & ((t1_r1 = 1) => (((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))))) & ((t1_r2 = 1) => (((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))))) & ((t1_r3 = 1) => (((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3))))) -> 4872 state(s) |
|||
choice labels: no |
|||
Size in memory: 4557 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
---------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Model Checking Statistics:
---------------------------------------------------------------------------------------------------------------------------------------

Runtimes (in seconds):
Preprocessing: 9.398
Value Iterations: 0.094
Postprocessing: 0
Combined: 9.493

Performed Refinement Steps: 1

---------------------------------------------------------------------------------------------------------------------------------------
done.
Result (initial states): [false]
===== Statistics ==============================
peak memory usage: 228MB
CPU time: 5.227 seconds
===============================================
OVERALL_TIME; 10.955
@ -0,0 +1,90 @@
SToRM
---------------

Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY).

Linked with GNU Linear Programming Kit v4.57.
Linked with SMT-RAT 2.1.0.
Linked with CARL.
Command line arguments: -s team/team2obj_5.nm -prop team/team2obj_5_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001
Current working directory: /Users/tim/storm/examples/multi-objective/mdp
|
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 907993 |
|||
Transitions: 1084752 |
|||
Choices: 1078873 |
|||
Reward Models: w_1_total |
|||
Labels: 2 |
|||
* init -> 1 state(s) |
|||
* ((((status = 7) & ((t1_r1 = 1) => ((((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))) | ((m5_t1 = 1) & (2 = 1))))) & ((t1_r2 = 1) => ((((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))) | ((m5_t1 = 1) & (2 = 2))))) & ((t1_r3 = 1) => ((((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3))) | ((m5_t1 = 1) & (2 = 3))))) -> 52200 state(s) |
|||
choice labels: no |
|||
Size in memory: 57802 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
Model checking property: multi(Pmax=? [F ((((status = 7) & ((t1_r1 = 1) => ((((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))) | ((m5_t1 = 1) & (2 = 1))))) & ((t1_r2 = 1) => ((((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))) | ((m5_t1 = 1) & (2 = 2))))) & ((t1_r3 = 1) => ((((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3))) | ((m5_t1 = 1) & (2 = 3)))))], R[exp]{"w_1_total"}>=2.753061224 [C]) ... |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
Multi-objective Preprocessor Data |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
|
|||
Original Formula: |
|||
-------------------------------------------------------------- |
|||
multi(Pmax=? [F ((((status = 7) & ((t1_r1 = 1) => ((((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))) | ((m5_t1 = 1) & (2 = 1))))) & ((t1_r2 = 1) => ((((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))) | ((m5_t1 = 1) & (2 = 2))))) & ((t1_r3 = 1) => ((((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3))) | ((m5_t1 = 1) & (2 = 3)))))], R[exp]{"w_1_total"}>=2.753061224 [C]) |
|||
|
|||
Objectives: |
|||
-------------------------------------------------------------- |
|||
Pmax=? [F ((((status = 7) & ((t1_r1 = 1) => ((((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))) | ((m5_t1 = 1) & (2 = 1))))) & ((t1_r2 = 1) => ((((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))) | ((m5_t1 = 1) & (2 = 2))))) & ((t1_r3 = 1) => ((((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3))) | ((m5_t1 = 1) & (2 = 3)))))] (toOrigVal: 1*x + 0, intern threshold: none, intern reward model: objective1 (positive), step bound: none) |
|||
R[exp]{"w_1_total"}>=2.75306 [C] (toOrigVal: 1*x + 0, intern threshold:>=2.753061224, intern reward model: objective2 (positive), step bound: none) |
|||
-------------------------------------------------------------- |
|||
|
|||
Original Model Information: |
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 907993 |
|||
Transitions: 1084752 |
|||
Choices: 1078873 |
|||
Reward Models: w_1_total |
|||
Labels: 2 |
|||
* init -> 1 state(s) |
|||
* ((((status = 7) & ((t1_r1 = 1) => ((((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))) | ((m5_t1 = 1) & (2 = 1))))) & ((t1_r2 = 1) => ((((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))) | ((m5_t1 = 1) & (2 = 2))))) & ((t1_r3 = 1) => ((((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3))) | ((m5_t1 = 1) & (2 = 3))))) -> 52200 state(s) |
|||
choice labels: no |
|||
Size in memory: 57802 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
Preprocessed Model Information: |
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 907993 |
|||
Transitions: 1084752 |
|||
Choices: 1078873 |
|||
Reward Models: objective1, objective2 |
|||
Labels: 3 |
|||
* prob1 -> 0 state(s) |
|||
* init -> 1 state(s) |
|||
* ((((status = 7) & ((t1_r1 = 1) => ((((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))) | ((m5_t1 = 1) & (2 = 1))))) & ((t1_r2 = 1) => ((((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))) | ((m5_t1 = 1) & (2 = 2))))) & ((t1_r3 = 1) => ((((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3))) | ((m5_t1 = 1) & (2 = 3))))) -> 52200 state(s) |
|||
choice labels: no |
|||
Size in memory: 42568 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
---------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Model Checking Statistics:
---------------------------------------------------------------------------------------------------------------------------------------

Runtimes (in seconds):
Preprocessing: 1249.116
Value Iterations: 0.788
Postprocessing: 0
Combined: 1249.918

Performed Refinement Steps: 1

---------------------------------------------------------------------------------------------------------------------------------------
done.
Result (initial states): [false]
===== Statistics ==============================
peak memory usage: 451MB
CPU time: 516.276 seconds
===============================================
OVERALL_TIME; 1265.216
@ -0,0 +1,94 @@
SToRM
---------------

Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY).

Linked with GNU Linear Programming Kit v4.57.
Linked with SMT-RAT 2.1.0.
Linked with CARL.
Command line arguments: -s team/team3obj_3.nm -prop team/team3obj_3_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001
Current working directory: /Users/tim/storm/examples/multi-objective/mdp
|
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 12475 |
|||
Transitions: 15228 |
|||
Choices: 14935 |
|||
Reward Models: w_1_total |
|||
Labels: 3 |
|||
* ((((status = 5) & ((t2_r1 = 1) => ((((m1_t2 = 1) & (1 = 1)) | ((m2_t2 = 1) & (2 = 1))) | ((m3_t2 = 1) & (3 = 1))))) & ((t2_r2 = 1) => ((((m1_t2 = 1) & (1 = 2)) | ((m2_t2 = 1) & (2 = 2))) | ((m3_t2 = 1) & (3 = 2))))) & ((t2_r3 = 1) => ((((m1_t2 = 1) & (1 = 3)) | ((m2_t2 = 1) & (2 = 3))) | ((m3_t2 = 1) & (3 = 3))))) -> 546 state(s) |
|||
* init -> 1 state(s) |
|||
* ((((status = 5) & ((t1_r1 = 1) => ((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))))) & ((t1_r2 = 1) => ((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))))) & ((t1_r3 = 1) => ((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))))) -> 546 state(s) |
|||
choice labels: no |
|||
Size in memory: 505 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
Model checking property: multi(Pmax=? [F ((((status = 5) & ((t1_r1 = 1) => ((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))))) & ((t1_r2 = 1) => ((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))))) & ((t1_r3 = 1) => ((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3)))))], R[exp]{"w_1_total"}>=2.210204082 [C], P>=0.5 [F ((((status = 5) & ((t2_r1 = 1) => ((((m1_t2 = 1) & (1 = 1)) | ((m2_t2 = 1) & (2 = 1))) | ((m3_t2 = 1) & (3 = 1))))) & ((t2_r2 = 1) => ((((m1_t2 = 1) & (1 = 2)) | ((m2_t2 = 1) & (2 = 2))) | ((m3_t2 = 1) & (3 = 2))))) & ((t2_r3 = 1) => ((((m1_t2 = 1) & (1 = 3)) | ((m2_t2 = 1) & (2 = 3))) | ((m3_t2 = 1) & (3 = 3)))))]) ... |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
Multi-objective Preprocessor Data |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
|
|||
Original Formula: |
|||
-------------------------------------------------------------- |
|||
multi(Pmax=? [F ((((status = 5) & ((t1_r1 = 1) => ((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))))) & ((t1_r2 = 1) => ((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))))) & ((t1_r3 = 1) => ((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3)))))], R[exp]{"w_1_total"}>=2.210204082 [C], P>=0.5 [F ((((status = 5) & ((t2_r1 = 1) => ((((m1_t2 = 1) & (1 = 1)) | ((m2_t2 = 1) & (2 = 1))) | ((m3_t2 = 1) & (3 = 1))))) & ((t2_r2 = 1) => ((((m1_t2 = 1) & (1 = 2)) | ((m2_t2 = 1) & (2 = 2))) | ((m3_t2 = 1) & (3 = 2))))) & ((t2_r3 = 1) => ((((m1_t2 = 1) & (1 = 3)) | ((m2_t2 = 1) & (2 = 3))) | ((m3_t2 = 1) & (3 = 3)))))]) |
|||
|
|||
Objectives: |
|||
-------------------------------------------------------------- |
|||
Pmax=? [F ((((status = 5) & ((t1_r1 = 1) => ((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))))) & ((t1_r2 = 1) => ((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))))) & ((t1_r3 = 1) => ((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3)))))] (toOrigVal: 1*x + 0, intern threshold: none, intern reward model: objective1 (positive), step bound: none) |
|||
R[exp]{"w_1_total"}>=2.2102 [C] (toOrigVal: 1*x + 0, intern threshold:>=2.210204082, intern reward model: objective2 (positive), step bound: none) |
|||
P>=0.5 [F ((((status = 5) & ((t2_r1 = 1) => ((((m1_t2 = 1) & (1 = 1)) | ((m2_t2 = 1) & (2 = 1))) | ((m3_t2 = 1) & (3 = 1))))) & ((t2_r2 = 1) => ((((m1_t2 = 1) & (1 = 2)) | ((m2_t2 = 1) & (2 = 2))) | ((m3_t2 = 1) & (3 = 2))))) & ((t2_r3 = 1) => ((((m1_t2 = 1) & (1 = 3)) | ((m2_t2 = 1) & (2 = 3))) | ((m3_t2 = 1) & (3 = 3)))))] (toOrigVal: 1*x + 0, intern threshold:>= 0.5, intern reward model: objective3 (positive), step bound: none) |
|||
-------------------------------------------------------------- |
|||
|
|||
Original Model Information: |
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 12475 |
|||
Transitions: 15228 |
|||
Choices: 14935 |
|||
Reward Models: w_1_total |
|||
Labels: 3 |
|||
* ((((status = 5) & ((t2_r1 = 1) => ((((m1_t2 = 1) & (1 = 1)) | ((m2_t2 = 1) & (2 = 1))) | ((m3_t2 = 1) & (3 = 1))))) & ((t2_r2 = 1) => ((((m1_t2 = 1) & (1 = 2)) | ((m2_t2 = 1) & (2 = 2))) | ((m3_t2 = 1) & (3 = 2))))) & ((t2_r3 = 1) => ((((m1_t2 = 1) & (1 = 3)) | ((m2_t2 = 1) & (2 = 3))) | ((m3_t2 = 1) & (3 = 3))))) -> 546 state(s) |
|||
* init -> 1 state(s) |
|||
* ((((status = 5) & ((t1_r1 = 1) => ((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))))) & ((t1_r2 = 1) => ((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))))) & ((t1_r3 = 1) => ((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))))) -> 546 state(s) |
|||
choice labels: no |
|||
Size in memory: 505 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
Preprocessed Model Information: |
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 12475 |
|||
Transitions: 15228 |
|||
Choices: 14935 |
|||
Reward Models: objective3, objective2, objective1 |
|||
Labels: 4 |
|||
* ((((status = 5) & ((t2_r1 = 1) => ((((m1_t2 = 1) & (1 = 1)) | ((m2_t2 = 1) & (2 = 1))) | ((m3_t2 = 1) & (3 = 1))))) & ((t2_r2 = 1) => ((((m1_t2 = 1) & (1 = 2)) | ((m2_t2 = 1) & (2 = 2))) | ((m3_t2 = 1) & (3 = 2))))) & ((t2_r3 = 1) => ((((m1_t2 = 1) & (1 = 3)) | ((m2_t2 = 1) & (2 = 3))) | ((m3_t2 = 1) & (3 = 3))))) -> 546 state(s) |
|||
* prob1 -> 0 state(s) |
|||
* init -> 1 state(s) |
|||
* ((((status = 5) & ((t1_r1 = 1) => ((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))))) & ((t1_r2 = 1) => ((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))))) & ((t1_r3 = 1) => ((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))))) -> 546 state(s) |
|||
choice labels: no |
|||
Size in memory: 711 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
---------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Model Checking Statistics:
---------------------------------------------------------------------------------------------------------------------------------------

Runtimes (in seconds):
Preprocessing: 0.065
Value Iterations: 0.062
Postprocessing: 0
Combined: 0.128

Performed Refinement Steps: 3

---------------------------------------------------------------------------------------------------------------------------------------
done.
Result (initial states): [0.7448979592]
===== Statistics ==============================
peak memory usage: 201MB
CPU time: 0.337 seconds
===============================================
OVERALL_TIME; 0.415
@ -0,0 +1,94 @@
SToRM
---------------

Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY).

Linked with GNU Linear Programming Kit v4.57.
Linked with SMT-RAT 2.1.0.
Linked with CARL.
Command line arguments: -s team/team3obj_4.nm -prop team/team3obj_4_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001
Current working directory: /Users/tim/storm/examples/multi-objective/mdp
|
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 96665 |
|||
Transitions: 116464 |
|||
Choices: 115289 |
|||
Reward Models: w_1_total |
|||
Labels: 3 |
|||
* init -> 1 state(s) |
|||
* ((((status = 6) & ((t2_r1 = 1) => (((((m1_t2 = 1) & (1 = 1)) | ((m2_t2 = 1) & (2 = 1))) | ((m3_t2 = 1) & (3 = 1))) | ((m4_t2 = 1) & (1 = 1))))) & ((t2_r2 = 1) => (((((m1_t2 = 1) & (1 = 2)) | ((m2_t2 = 1) & (2 = 2))) | ((m3_t2 = 1) & (3 = 2))) | ((m4_t2 = 1) & (1 = 2))))) & ((t2_r3 = 1) => (((((m1_t2 = 1) & (1 = 3)) | ((m2_t2 = 1) & (2 = 3))) | ((m3_t2 = 1) & (3 = 3))) | ((m4_t2 = 1) & (1 = 3))))) -> 4872 state(s) |
|||
* ((((status = 6) & ((t1_r1 = 1) => (((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))))) & ((t1_r2 = 1) => (((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))))) & ((t1_r3 = 1) => (((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3))))) -> 4872 state(s) |
|||
choice labels: no |
|||
Size in memory: 4008 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
Model checking property: multi(Pmax=? [F ((((status = 6) & ((t1_r1 = 1) => (((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))))) & ((t1_r2 = 1) => (((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))))) & ((t1_r3 = 1) => (((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3)))))], R[exp]{"w_1_total"}>=2.423469388 [C], P>=0.5 [F ((((status = 6) & ((t2_r1 = 1) => (((((m1_t2 = 1) & (1 = 1)) | ((m2_t2 = 1) & (2 = 1))) | ((m3_t2 = 1) & (3 = 1))) | ((m4_t2 = 1) & (1 = 1))))) & ((t2_r2 = 1) => (((((m1_t2 = 1) & (1 = 2)) | ((m2_t2 = 1) & (2 = 2))) | ((m3_t2 = 1) & (3 = 2))) | ((m4_t2 = 1) & (1 = 2))))) & ((t2_r3 = 1) => (((((m1_t2 = 1) & (1 = 3)) | ((m2_t2 = 1) & (2 = 3))) | ((m3_t2 = 1) & (3 = 3))) | ((m4_t2 = 1) & (1 = 3)))))]) ... |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
Multi-objective Preprocessor Data |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
|
|||
Original Formula: |
|||
-------------------------------------------------------------- |
|||
multi(Pmax=? [F ((((status = 6) & ((t1_r1 = 1) => (((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))))) & ((t1_r2 = 1) => (((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))))) & ((t1_r3 = 1) => (((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3)))))], R[exp]{"w_1_total"}>=2.423469388 [C], P>=0.5 [F ((((status = 6) & ((t2_r1 = 1) => (((((m1_t2 = 1) & (1 = 1)) | ((m2_t2 = 1) & (2 = 1))) | ((m3_t2 = 1) & (3 = 1))) | ((m4_t2 = 1) & (1 = 1))))) & ((t2_r2 = 1) => (((((m1_t2 = 1) & (1 = 2)) | ((m2_t2 = 1) & (2 = 2))) | ((m3_t2 = 1) & (3 = 2))) | ((m4_t2 = 1) & (1 = 2))))) & ((t2_r3 = 1) => (((((m1_t2 = 1) & (1 = 3)) | ((m2_t2 = 1) & (2 = 3))) | ((m3_t2 = 1) & (3 = 3))) | ((m4_t2 = 1) & (1 = 3)))))]) |
|||
|
|||
Objectives: |
|||
-------------------------------------------------------------- |
|||
Pmax=? [F ((((status = 6) & ((t1_r1 = 1) => (((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))))) & ((t1_r2 = 1) => (((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))))) & ((t1_r3 = 1) => (((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3)))))] (toOrigVal: 1*x + 0, intern threshold: none, intern reward model: objective1 (positive), step bound: none) |
|||
R[exp]{"w_1_total"}>=2.42347 [C] (toOrigVal: 1*x + 0, intern threshold:>=2.423469388, intern reward model: objective2 (positive), step bound: none) |
|||
P>=0.5 [F ((((status = 6) & ((t2_r1 = 1) => (((((m1_t2 = 1) & (1 = 1)) | ((m2_t2 = 1) & (2 = 1))) | ((m3_t2 = 1) & (3 = 1))) | ((m4_t2 = 1) & (1 = 1))))) & ((t2_r2 = 1) => (((((m1_t2 = 1) & (1 = 2)) | ((m2_t2 = 1) & (2 = 2))) | ((m3_t2 = 1) & (3 = 2))) | ((m4_t2 = 1) & (1 = 2))))) & ((t2_r3 = 1) => (((((m1_t2 = 1) & (1 = 3)) | ((m2_t2 = 1) & (2 = 3))) | ((m3_t2 = 1) & (3 = 3))) | ((m4_t2 = 1) & (1 = 3)))))] (toOrigVal: 1*x + 0, intern threshold:>= 0.5, intern reward model: objective3 (positive), step bound: none) |
|||
-------------------------------------------------------------- |
|||
|
|||
Original Model Information: |
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 96665 |
|||
Transitions: 116464 |
|||
Choices: 115289 |
|||
Reward Models: w_1_total |
|||
Labels: 3 |
|||
* init -> 1 state(s) |
|||
* ((((status = 6) & ((t2_r1 = 1) => (((((m1_t2 = 1) & (1 = 1)) | ((m2_t2 = 1) & (2 = 1))) | ((m3_t2 = 1) & (3 = 1))) | ((m4_t2 = 1) & (1 = 1))))) & ((t2_r2 = 1) => (((((m1_t2 = 1) & (1 = 2)) | ((m2_t2 = 1) & (2 = 2))) | ((m3_t2 = 1) & (3 = 2))) | ((m4_t2 = 1) & (1 = 2))))) & ((t2_r3 = 1) => (((((m1_t2 = 1) & (1 = 3)) | ((m2_t2 = 1) & (2 = 3))) | ((m3_t2 = 1) & (3 = 3))) | ((m4_t2 = 1) & (1 = 3))))) -> 4872 state(s) |
|||
* ((((status = 6) & ((t1_r1 = 1) => (((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))))) & ((t1_r2 = 1) => (((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))))) & ((t1_r3 = 1) => (((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3))))) -> 4872 state(s) |
|||
choice labels: no |
|||
Size in memory: 4008 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
Preprocessed Model Information: |
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 96665 |
|||
Transitions: 116464 |
|||
Choices: 115289 |
|||
Reward Models: objective3, objective2, objective1 |
|||
Labels: 4 |
|||
* prob1 -> 0 state(s) |
|||
* init -> 1 state(s) |
|||
* ((((status = 6) & ((t2_r1 = 1) => (((((m1_t2 = 1) & (1 = 1)) | ((m2_t2 = 1) & (2 = 1))) | ((m3_t2 = 1) & (3 = 1))) | ((m4_t2 = 1) & (1 = 1))))) & ((t2_r2 = 1) => (((((m1_t2 = 1) & (1 = 2)) | ((m2_t2 = 1) & (2 = 2))) | ((m3_t2 = 1) & (3 = 2))) | ((m4_t2 = 1) & (1 = 2))))) & ((t2_r3 = 1) => (((((m1_t2 = 1) & (1 = 3)) | ((m2_t2 = 1) & (2 = 3))) | ((m3_t2 = 1) & (3 = 3))) | ((m4_t2 = 1) & (1 = 3))))) -> 4872 state(s) |
|||
* ((((status = 6) & ((t1_r1 = 1) => (((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))))) & ((t1_r2 = 1) => (((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))))) & ((t1_r3 = 1) => (((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3))))) -> 4872 state(s) |
|||
choice labels: no |
|||
Size in memory: 5470 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
---------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Model Checking Statistics:
---------------------------------------------------------------------------------------------------------------------------------------

Runtimes (in seconds):
Preprocessing: 9.569
Value Iterations: 0.101
Postprocessing: 0
Combined: 9.672

Performed Refinement Steps: 1

---------------------------------------------------------------------------------------------------------------------------------------
done.
Result (initial states): [false]
===== Statistics ==============================
peak memory usage: 219MB
CPU time: 5.300 seconds
===============================================
OVERALL_TIME; 11.147
@ -0,0 +1,94 @@
SToRM
---------------

Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY).

Linked with GNU Linear Programming Kit v4.57.
Linked with SMT-RAT 2.1.0.
Linked with CARL.
Command line arguments: -s team/team3obj_5.nm -prop team/team3obj_5_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001
Current working directory: /Users/tim/storm/examples/multi-objective/mdp
|
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 907993 |
|||
Transitions: 1084752 |
|||
Choices: 1078873 |
|||
Reward Models: w_1_total |
|||
Labels: 3 |
|||
* init -> 1 state(s) |
|||
* ((((status = 7) & ((t2_r1 = 1) => ((((((m1_t2 = 1) & (1 = 1)) | ((m2_t2 = 1) & (2 = 1))) | ((m3_t2 = 1) & (3 = 1))) | ((m4_t2 = 1) & (1 = 1))) | ((m5_t2 = 1) & (2 = 1))))) & ((t2_r2 = 1) => ((((((m1_t2 = 1) & (1 = 2)) | ((m2_t2 = 1) & (2 = 2))) | ((m3_t2 = 1) & (3 = 2))) | ((m4_t2 = 1) & (1 = 2))) | ((m5_t2 = 1) & (2 = 2))))) & ((t2_r3 = 1) => ((((((m1_t2 = 1) & (1 = 3)) | ((m2_t2 = 1) & (2 = 3))) | ((m3_t2 = 1) & (3 = 3))) | ((m4_t2 = 1) & (1 = 3))) | ((m5_t2 = 1) & (2 = 3))))) -> 52200 state(s) |
|||
* ((((status = 7) & ((t1_r1 = 1) => ((((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))) | ((m5_t1 = 1) & (2 = 1))))) & ((t1_r2 = 1) => ((((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))) | ((m5_t1 = 1) & (2 = 2))))) & ((t1_r3 = 1) => ((((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3))) | ((m5_t1 = 1) & (2 = 3))))) -> 52200 state(s) |
|||
choice labels: no |
|||
Size in memory: 57913 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
Model checking property: multi(Pmax=? [F ((((status = 7) & ((t1_r1 = 1) => ((((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))) | ((m5_t1 = 1) & (2 = 1))))) & ((t1_r2 = 1) => ((((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))) | ((m5_t1 = 1) & (2 = 2))))) & ((t1_r3 = 1) => ((((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3))) | ((m5_t1 = 1) & (2 = 3)))))], R[exp]{"w_1_total"}>=2.753061224 [C], P>=0.5 [F ((((status = 7) & ((t2_r1 = 1) => ((((((m1_t2 = 1) & (1 = 1)) | ((m2_t2 = 1) & (2 = 1))) | ((m3_t2 = 1) & (3 = 1))) | ((m4_t2 = 1) & (1 = 1))) | ((m5_t2 = 1) & (2 = 1))))) & ((t2_r2 = 1) => ((((((m1_t2 = 1) & (1 = 2)) | ((m2_t2 = 1) & (2 = 2))) | ((m3_t2 = 1) & (3 = 2))) | ((m4_t2 = 1) & (1 = 2))) | ((m5_t2 = 1) & (2 = 2))))) & ((t2_r3 = 1) => ((((((m1_t2 = 1) & (1 = 3)) | ((m2_t2 = 1) & (2 = 3))) | ((m3_t2 = 1) & (3 = 3))) | ((m4_t2 = 1) & (1 = 3))) | ((m5_t2 = 1) & (2 = 3)))))]) ... |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
Multi-objective Preprocessor Data |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
|
|||
Original Formula: |
|||
-------------------------------------------------------------- |
|||
multi(Pmax=? [F ((((status = 7) & ((t1_r1 = 1) => ((((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))) | ((m5_t1 = 1) & (2 = 1))))) & ((t1_r2 = 1) => ((((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))) | ((m5_t1 = 1) & (2 = 2))))) & ((t1_r3 = 1) => ((((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3))) | ((m5_t1 = 1) & (2 = 3)))))], R[exp]{"w_1_total"}>=2.753061224 [C], P>=0.5 [F ((((status = 7) & ((t2_r1 = 1) => ((((((m1_t2 = 1) & (1 = 1)) | ((m2_t2 = 1) & (2 = 1))) | ((m3_t2 = 1) & (3 = 1))) | ((m4_t2 = 1) & (1 = 1))) | ((m5_t2 = 1) & (2 = 1))))) & ((t2_r2 = 1) => ((((((m1_t2 = 1) & (1 = 2)) | ((m2_t2 = 1) & (2 = 2))) | ((m3_t2 = 1) & (3 = 2))) | ((m4_t2 = 1) & (1 = 2))) | ((m5_t2 = 1) & (2 = 2))))) & ((t2_r3 = 1) => ((((((m1_t2 = 1) & (1 = 3)) | ((m2_t2 = 1) & (2 = 3))) | ((m3_t2 = 1) & (3 = 3))) | ((m4_t2 = 1) & (1 = 3))) | ((m5_t2 = 1) & (2 = 3)))))]) |
|||
|
|||
Objectives: |
|||
-------------------------------------------------------------- |
|||
Pmax=? [F ((((status = 7) & ((t1_r1 = 1) => ((((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))) | ((m5_t1 = 1) & (2 = 1))))) & ((t1_r2 = 1) => ((((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))) | ((m5_t1 = 1) & (2 = 2))))) & ((t1_r3 = 1) => ((((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3))) | ((m5_t1 = 1) & (2 = 3)))))] (toOrigVal: 1*x + 0, intern threshold: none, intern reward model: objective1 (positive), step bound: none) |
|||
R[exp]{"w_1_total"}>=2.75306 [C] (toOrigVal: 1*x + 0, intern threshold:>=2.753061224, intern reward model: objective2 (positive), step bound: none) |
|||
P>=0.5 [F ((((status = 7) & ((t2_r1 = 1) => ((((((m1_t2 = 1) & (1 = 1)) | ((m2_t2 = 1) & (2 = 1))) | ((m3_t2 = 1) & (3 = 1))) | ((m4_t2 = 1) & (1 = 1))) | ((m5_t2 = 1) & (2 = 1))))) & ((t2_r2 = 1) => ((((((m1_t2 = 1) & (1 = 2)) | ((m2_t2 = 1) & (2 = 2))) | ((m3_t2 = 1) & (3 = 2))) | ((m4_t2 = 1) & (1 = 2))) | ((m5_t2 = 1) & (2 = 2))))) & ((t2_r3 = 1) => ((((((m1_t2 = 1) & (1 = 3)) | ((m2_t2 = 1) & (2 = 3))) | ((m3_t2 = 1) & (3 = 3))) | ((m4_t2 = 1) & (1 = 3))) | ((m5_t2 = 1) & (2 = 3)))))] (toOrigVal: 1*x + 0, intern threshold:>= 0.5, intern reward model: objective3 (positive), step bound: none) |
|||
-------------------------------------------------------------- |
|||
|
|||
Original Model Information: |
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 907993 |
|||
Transitions: 1084752 |
|||
Choices: 1078873 |
|||
Reward Models: w_1_total |
|||
Labels: 3 |
|||
* init -> 1 state(s) |
|||
* ((((status = 7) & ((t2_r1 = 1) => ((((((m1_t2 = 1) & (1 = 1)) | ((m2_t2 = 1) & (2 = 1))) | ((m3_t2 = 1) & (3 = 1))) | ((m4_t2 = 1) & (1 = 1))) | ((m5_t2 = 1) & (2 = 1))))) & ((t2_r2 = 1) => ((((((m1_t2 = 1) & (1 = 2)) | ((m2_t2 = 1) & (2 = 2))) | ((m3_t2 = 1) & (3 = 2))) | ((m4_t2 = 1) & (1 = 2))) | ((m5_t2 = 1) & (2 = 2))))) & ((t2_r3 = 1) => ((((((m1_t2 = 1) & (1 = 3)) | ((m2_t2 = 1) & (2 = 3))) | ((m3_t2 = 1) & (3 = 3))) | ((m4_t2 = 1) & (1 = 3))) | ((m5_t2 = 1) & (2 = 3))))) -> 52200 state(s) |
|||
* ((((status = 7) & ((t1_r1 = 1) => ((((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))) | ((m5_t1 = 1) & (2 = 1))))) & ((t1_r2 = 1) => ((((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))) | ((m5_t1 = 1) & (2 = 2))))) & ((t1_r3 = 1) => ((((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3))) | ((m5_t1 = 1) & (2 = 3))))) -> 52200 state(s) |
|||
choice labels: no |
|||
Size in memory: 57913 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
Preprocessed Model Information: |
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 907993 |
|||
Transitions: 1084752 |
|||
Choices: 1078873 |
|||
Reward Models: objective3, objective2, objective1 |
|||
Labels: 4 |
|||
* prob1 -> 0 state(s) |
|||
* init -> 1 state(s) |
|||
* ((((status = 7) & ((t2_r1 = 1) => ((((((m1_t2 = 1) & (1 = 1)) | ((m2_t2 = 1) & (2 = 1))) | ((m3_t2 = 1) & (3 = 1))) | ((m4_t2 = 1) & (1 = 1))) | ((m5_t2 = 1) & (2 = 1))))) & ((t2_r2 = 1) => ((((((m1_t2 = 1) & (1 = 2)) | ((m2_t2 = 1) & (2 = 2))) | ((m3_t2 = 1) & (3 = 2))) | ((m4_t2 = 1) & (1 = 2))) | ((m5_t2 = 1) & (2 = 2))))) & ((t2_r3 = 1) => ((((((m1_t2 = 1) & (1 = 3)) | ((m2_t2 = 1) & (2 = 3))) | ((m3_t2 = 1) & (3 = 3))) | ((m4_t2 = 1) & (1 = 3))) | ((m5_t2 = 1) & (2 = 3))))) -> 52200 state(s) |
|||
* ((((status = 7) & ((t1_r1 = 1) => ((((((m1_t1 = 1) & (1 = 1)) | ((m2_t1 = 1) & (2 = 1))) | ((m3_t1 = 1) & (3 = 1))) | ((m4_t1 = 1) & (1 = 1))) | ((m5_t1 = 1) & (2 = 1))))) & ((t1_r2 = 1) => ((((((m1_t1 = 1) & (1 = 2)) | ((m2_t1 = 1) & (2 = 2))) | ((m3_t1 = 1) & (3 = 2))) | ((m4_t1 = 1) & (1 = 2))) | ((m5_t1 = 1) & (2 = 2))))) & ((t1_r3 = 1) => ((((((m1_t1 = 1) & (1 = 3)) | ((m2_t1 = 1) & (2 = 3))) | ((m3_t1 = 1) & (3 = 3))) | ((m4_t1 = 1) & (1 = 3))) | ((m5_t1 = 1) & (2 = 3))))) -> 52200 state(s) |
|||
choice labels: no |
|||
Size in memory: 51107 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
---------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Model Checking Statistics:
---------------------------------------------------------------------------------------------------------------------------------------

Runtimes (in seconds):
Preprocessing: 1330.372
Value Iterations: 1.032
Postprocessing: 0
Combined: 1331.422

Performed Refinement Steps: 1

---------------------------------------------------------------------------------------------------------------------------------------
done.
Result (initial states): [false]
===== Statistics ==============================
peak memory usage: 420MB
CPU time: 521.419 seconds
===============================================
OVERALL_TIME; 1347.627
@ -0,0 +1,90 @@
SToRM
---------------

Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY).

Linked with GNU Linear Programming Kit v4.57.
Linked with SMT-RAT 2.1.0.
Linked with CARL.
Command line arguments: -s scheduler/scheduler05.nm -prop scheduler/scheduler05_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001
Current working directory: /Users/tim/storm/examples/multi-objective/mdp
|
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 31965 |
|||
Transitions: 60434 |
|||
Choices: 57965 |
|||
Reward Models: energy, time |
|||
Labels: 2 |
|||
* init -> 1 state(s) |
|||
* tasks_complete -> 390 state(s) |
|||
choice labels: no |
|||
Size in memory: 2449 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
Model checking property: multi(R[exp]{"time"}min=? [F "tasks_complete"], R[exp]{"energy"}<=1.45 [F "tasks_complete"]) ... |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
Multi-objective Preprocessor Data |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
|
|||
Original Formula: |
|||
-------------------------------------------------------------- |
|||
multi(R[exp]{"time"}min=? [F "tasks_complete"], R[exp]{"energy"}<=1.45 [F "tasks_complete"]) |
|||
|
|||
Objectives: |
|||
-------------------------------------------------------------- |
|||
R[exp]{"time"}min=? [F "tasks_complete"] (toOrigVal: -1*x + 0, intern threshold: none, intern reward model: objective1 (negative), step bound: none) |
|||
R[exp]{"energy"}<=1.45 [F "tasks_complete"] (toOrigVal: -1*x + 0, intern threshold:>=-1.45, intern reward model: objective2 (negative), step bound: none) |
|||
-------------------------------------------------------------- |
|||
|
|||
Original Model Information: |
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 31965 |
|||
Transitions: 60434 |
|||
Choices: 57965 |
|||
Reward Models: energy, time |
|||
Labels: 2 |
|||
* init -> 1 state(s) |
|||
* tasks_complete -> 390 state(s) |
|||
choice labels: no |
|||
Size in memory: 2449 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
Preprocessed Model Information: |
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 31965 |
|||
Transitions: 60434 |
|||
Choices: 57965 |
|||
Reward Models: objective2, objective1 |
|||
Labels: 3 |
|||
* tasks_complete -> 390 state(s) |
|||
* prob1 -> 390 state(s) |
|||
* init -> 1 state(s) |
|||
choice labels: no |
|||
Size in memory: 2314 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
---------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Model Checking Statistics:
---------------------------------------------------------------------------------------------------------------------------------------

Runtimes (in seconds):
Preprocessing: 0.038
Value Iterations: 0.443
Postprocessing: 0
Combined: 0.481

Performed Refinement Steps: 2

---------------------------------------------------------------------------------------------------------------------------------------
done.
Result (initial states): [11.77777778]
===== Statistics ==============================
peak memory usage: 217MB
CPU time: 1.082 seconds
===============================================
OVERALL_TIME; 1.194
@ -0,0 +1,90 @@
SToRM
---------------

Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY).

Linked with GNU Linear Programming Kit v4.57.
Linked with SMT-RAT 2.1.0.
Linked with CARL.
Command line arguments: -s scheduler/scheduler25.nm -prop scheduler/scheduler25_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001
Current working directory: /Users/tim/storm/examples/multi-objective/mdp
|
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 633735 |
|||
Transitions: 1179444 |
|||
Choices: 1168375 |
|||
Reward Models: energy, time |
|||
Labels: 2 |
|||
* init -> 1 state(s) |
|||
* tasks_complete -> 7930 state(s) |
|||
choice labels: no |
|||
Size in memory: 67562 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
Model checking property: multi(R[exp]{"time"}min=? [F "tasks_complete"], R[exp]{"energy"}<=1.45 [F "tasks_complete"]) ... |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
Multi-objective Preprocessor Data |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
|
|||
Original Formula: |
|||
-------------------------------------------------------------- |
|||
multi(R[exp]{"time"}min=? [F "tasks_complete"], R[exp]{"energy"}<=1.45 [F "tasks_complete"]) |
|||
|
|||
Objectives: |
|||
-------------------------------------------------------------- |
|||
R[exp]{"time"}min=? [F "tasks_complete"] (toOrigVal: -1*x + 0, intern threshold: none, intern reward model: objective1 (negative), step bound: none) |
|||
R[exp]{"energy"}<=1.45 [F "tasks_complete"] (toOrigVal: -1*x + 0, intern threshold:>=-1.45, intern reward model: objective2 (negative), step bound: none) |
|||
-------------------------------------------------------------- |
|||
|
|||
Original Model Information: |
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 633735 |
|||
Transitions: 1179444 |
|||
Choices: 1168375 |
|||
Reward Models: energy, time |
|||
Labels: 2 |
|||
* init -> 1 state(s) |
|||
* tasks_complete -> 7930 state(s) |
|||
choice labels: no |
|||
Size in memory: 67562 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
Preprocessed Model Information: |
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 633735 |
|||
Transitions: 1179444 |
|||
Choices: 1168375 |
|||
Reward Models: objective2, objective1 |
|||
Labels: 3 |
|||
* tasks_complete -> 7930 state(s) |
|||
* prob1 -> 7930 state(s) |
|||
* init -> 1 state(s) |
|||
choice labels: no |
|||
Size in memory: 46044 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
---------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Model Checking Statistics:
---------------------------------------------------------------------------------------------------------------------------------------

Runtimes (in seconds):
Preprocessing: 1.159
Value Iterations: 26.03
Postprocessing: 0
Combined: 27.204

Performed Refinement Steps: 2

---------------------------------------------------------------------------------------------------------------------------------------
done.
Result (initial states): [11.65925926]
===== Statistics ==============================
peak memory usage: 614MB
CPU time: 38.596 seconds
===============================================
OVERALL_TIME; 39.67
@ -0,0 +1,90 @@
SToRM
---------------

Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY).

Linked with GNU Linear Programming Kit v4.57.
Linked with SMT-RAT 2.1.0.
Linked with CARL.
Command line arguments: -s scheduler/scheduler50.nm -prop scheduler/scheduler50_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001
Current working directory: /Users/tim/storm/examples/multi-objective/mdp
|
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 2457510 |
|||
Transitions: 4564394 |
|||
Choices: 4542575 |
|||
Reward Models: energy, time |
|||
Labels: 2 |
|||
* init -> 1 state(s) |
|||
* tasks_complete -> 30855 state(s) |
|||
choice labels: no |
|||
Size in memory: 268185 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
Model checking property: multi(R[exp]{"time"}min=? [F "tasks_complete"], R[exp]{"energy"}<=1.45 [F "tasks_complete"]) ... |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
Multi-objective Preprocessor Data |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
|
|||
Original Formula: |
|||
-------------------------------------------------------------- |
|||
multi(R[exp]{"time"}min=? [F "tasks_complete"], R[exp]{"energy"}<=1.45 [F "tasks_complete"]) |
|||
|
|||
Objectives: |
|||
-------------------------------------------------------------- |
|||
R[exp]{"time"}min=? [F "tasks_complete"] (toOrigVal: -1*x + 0, intern threshold: none, intern reward model: objective1 (negative), step bound: none) |
|||
R[exp]{"energy"}<=1.45 [F "tasks_complete"] (toOrigVal: -1*x + 0, intern threshold:>=-1.45, intern reward model: objective2 (negative), step bound: none) |
|||
-------------------------------------------------------------- |
|||
|
|||
Original Model Information: |
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 2457510 |
|||
Transitions: 4564394 |
|||
Choices: 4542575 |
|||
Reward Models: energy, time |
|||
Labels: 2 |
|||
* init -> 1 state(s) |
|||
* tasks_complete -> 30855 state(s) |
|||
choice labels: no |
|||
Size in memory: 268185 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
Preprocessed Model Information: |
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 2457510 |
|||
Transitions: 4564394 |
|||
Choices: 4542575 |
|||
Reward Models: objective2, objective1 |
|||
Labels: 3 |
|||
* tasks_complete -> 30855 state(s) |
|||
* prob1 -> 30855 state(s) |
|||
* init -> 1 state(s) |
|||
choice labels: no |
|||
Size in memory: 178685 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
---------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Model Checking Statistics:
---------------------------------------------------------------------------------------------------------------------------------------

Runtimes (in seconds):
Preprocessing: 6.056
Value Iterations: 196.347
Postprocessing: 0
Combined: 202.461

Performed Refinement Steps: 2

---------------------------------------------------------------------------------------------------------------------------------------
done.
Result (initial states): [11.64444444]
===== Statistics ==============================
peak memory usage: 1804MB
CPU time: 243.503 seconds
===============================================
OVERALL_TIME; 249.802
@ -0,0 +1,87 @@
SToRM
---------------

Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY).

Linked with GNU Linear Programming Kit v4.57.
Linked with SMT-RAT 2.1.0.
Linked with CARL.
Command line arguments: -s dpm/dpm100.nm -prop dpm/dpm100_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001
Current working directory: /Users/tim/storm/examples/multi-objective/mdp
|
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 636 |
|||
Transitions: 2550 |
|||
Choices: 1860 |
|||
Reward Models: queue, power |
|||
Labels: 1 |
|||
* init -> 1 state(s) |
|||
choice labels: no |
|||
Size in memory: 109 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
Model checking property: multi(R[exp]{"power"}min=? [C<=100], R[exp]{"queue"}<=70 [C<=100]) ... |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
Multi-objective Preprocessor Data |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
|
|||
Original Formula: |
|||
-------------------------------------------------------------- |
|||
multi(R[exp]{"power"}min=? [C<=100], R[exp]{"queue"}<=70 [C<=100]) |
|||
|
|||
Objectives: |
|||
-------------------------------------------------------------- |
|||
R[exp]{"power"}min=? [C<=100] (toOrigVal: -1*x + 0, intern threshold: none, intern reward model: objective1 (negative), step bound: 100) |
|||
R[exp]{"queue"}<=70 [C<=100] (toOrigVal: -1*x + 0, intern threshold:>= -70, intern reward model: objective2 (negative), step bound: 100) |
|||
-------------------------------------------------------------- |
|||
|
|||
Original Model Information: |
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 636 |
|||
Transitions: 2550 |
|||
Choices: 1860 |
|||
Reward Models: queue, power |
|||
Labels: 1 |
|||
* init -> 1 state(s) |
|||
choice labels: no |
|||
Size in memory: 109 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
Preprocessed Model Information: |
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 636 |
|||
Transitions: 2550 |
|||
Choices: 1860 |
|||
Reward Models: objective2, objective1 |
|||
Labels: 2 |
|||
* prob1 -> 0 state(s) |
|||
* init -> 1 state(s) |
|||
choice labels: no |
|||
Size in memory: 83 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
---------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------
Multi-objective Model Checking Statistics:
---------------------------------------------------------------------------------------------------------------------------------------

Runtimes (in seconds):
Preprocessing: 0
Value Iterations: 0.059
Postprocessing: 0
Combined: 0.06

Performed Refinement Steps: 6

---------------------------------------------------------------------------------------------------------------------------------------
done.
Result (initial states): [121.6128842]
===== Statistics ==============================
peak memory usage: 190MB
CPU time: 0.124 seconds
===============================================
OVERALL_TIME; 0.19
@ -0,0 +1,87 @@
SToRM
---------------

Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY).

Linked with GNU Linear Programming Kit v4.57.
Linked with SMT-RAT 2.1.0.
Linked with CARL.
Command line arguments: -s dpm/dpm200.nm -prop dpm/dpm200_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001
Current working directory: /Users/tim/storm/examples/multi-objective/mdp
|
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 636 |
|||
Transitions: 2550 |
|||
Choices: 1860 |
|||
Reward Models: queue, power |
|||
Labels: 1 |
|||
* init -> 1 state(s) |
|||
choice labels: no |
|||
Size in memory: 109 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
Model checking property: multi(R[exp]{"power"}min=? [C<=200], R[exp]{"queue"}<=170 [C<=200]) ... |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
Multi-objective Preprocessor Data |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
|
|||
Original Formula: |
|||
-------------------------------------------------------------- |
|||
multi(R[exp]{"power"}min=? [C<=200], R[exp]{"queue"}<=170 [C<=200]) |
|||
|
|||
Objectives: |
|||
-------------------------------------------------------------- |
|||
R[exp]{"power"}min=? [C<=200] (toOrigVal: -1*x + 0, intern threshold: none, intern reward model: objective1 (negative), step bound: 200) |
|||
R[exp]{"queue"}<=170 [C<=200] (toOrigVal: -1*x + 0, intern threshold:>= -170, intern reward model: objective2 (negative), step bound: 200) |
|||
-------------------------------------------------------------- |
|||
|
|||
Original Model Information: |
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 636 |
|||
Transitions: 2550 |
|||
Choices: 1860 |
|||
Reward Models: queue, power |
|||
Labels: 1 |
|||
* init -> 1 state(s) |
|||
choice labels: no |
|||
Size in memory: 109 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
Preprocessed Model Information: |
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 636 |
|||
Transitions: 2550 |
|||
Choices: 1860 |
|||
Reward Models: objective2, objective1 |
|||
Labels: 2 |
|||
* prob1 -> 0 state(s) |
|||
* init -> 1 state(s) |
|||
choice labels: no |
|||
Size in memory: 83 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
|
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
Multi-objective Model Checking Statistics: |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
|
|||
Runtimes (in seconds): |
|||
Preprocessing: 0 |
|||
Value Iterations: 0.081 |
|||
Postprocessing: 0 |
|||
Combined: 0.081 |
|||
|
|||
Performed Refinement Steps: 5 |
|||
|
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
done. |
|||
Result (initial states): [57.46361957] |
|||
===== Statistics ============================== |
|||
peak memory usage: 190MB |
|||
CPU time: 0.151 seconds |
|||
=============================================== |
|||
OVERALL_TIME; 0.214 |
@ -0,0 +1,87 @@ |
|||
SToRM |
|||
--------------- |
|||
|
|||
Version: 0.9.1 (+1781 commits) build from revision g5578a73 (DIRTY). |
|||
|
|||
Linked with GNU Linear Programming Kit v4.57. |
|||
Linked with SMT-RAT 2.1.0. |
|||
Linked with CARL. |
|||
Command line arguments: -s dpm/dpm300.nm -prop dpm/dpm300_numerical.pctl --precision 0.000001 --multiobjective:precision 0.0001 |
|||
Current working directory: /Users/tim/storm/examples/multi-objective/mdp |
|||
|
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 636 |
|||
Transitions: 2550 |
|||
Choices: 1860 |
|||
Reward Models: queue, power |
|||
Labels: 1 |
|||
* init -> 1 state(s) |
|||
choice labels: no |
|||
Size in memory: 109 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
Model checking property: multi(R[exp]{"power"}min=? [C<=300], R[exp]{"queue"}<=270 [C<=300]) ... |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
Multi-objective Preprocessor Data |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
|
|||
Original Formula: |
|||
-------------------------------------------------------------- |
|||
multi(R[exp]{"power"}min=? [C<=300], R[exp]{"queue"}<=270 [C<=300]) |
|||
|
|||
Objectives: |
|||
-------------------------------------------------------------- |
|||
R[exp]{"power"}min=? [C<=300] (toOrigVal: -1*x + 0, intern threshold: none, intern reward model: objective1 (negative), step bound: 300) |
|||
R[exp]{"queue"}<=270 [C<=300] (toOrigVal: -1*x + 0, intern threshold:>= -270, intern reward model: objective2 (negative), step bound: 300) |
|||
-------------------------------------------------------------- |
|||
|
|||
Original Model Information: |
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 636 |
|||
Transitions: 2550 |
|||
Choices: 1860 |
|||
Reward Models: queue, power |
|||
Labels: 1 |
|||
* init -> 1 state(s) |
|||
choice labels: no |
|||
Size in memory: 109 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
Preprocessed Model Information: |
|||
-------------------------------------------------------------- |
|||
Model type: MDP (sparse) |
|||
States: 636 |
|||
Transitions: 2550 |
|||
Choices: 1860 |
|||
Reward Models: objective2, objective1 |
|||
Labels: 2 |
|||
* prob1 -> 0 state(s) |
|||
* init -> 1 state(s) |
|||
choice labels: no |
|||
Size in memory: 83 kbytes |
|||
-------------------------------------------------------------- |
|||
|
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
|
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
Multi-objective Model Checking Statistics: |
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
|
|||
Runtimes (in seconds): |
|||
Preprocessing: 0 |
|||
Value Iterations: 0.11 |
|||
Postprocessing: 0 |
|||
Combined: 0.11 |
|||
|
|||
Performed Refinement Steps: 5 |
|||
|
|||
--------------------------------------------------------------------------------------------------------------------------------------- |
|||
done. |
|||
Result (initial states): [46.48477804] |
|||
===== Statistics ============================== |
|||
peak memory usage: 190MB |
|||
CPU time: 0.178 seconds |
|||
=============================================== |
|||
OVERALL_TIME; 0.252 |
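
Across the three dpm instances above (step bounds 100, 200 and 300), the value-iteration time grows roughly in proportion to refinement steps times step bound, which is what one would expect if each refinement step performs one bounded value-iteration pass over the 636-state model. A quick back-of-envelope check using only the numbers reported above:

    # Editorial sanity check on the reported statistics (no Storm code involved).
    for name, vi_time, refinement_steps, step_bound in [
            ("C<=100 instance", 0.059, 6, 100),
            ("dpm200",          0.081, 5, 200),
            ("dpm300",          0.110, 5, 300)]:
        print(name, vi_time / (refinement_steps * step_bound))
    # -> roughly 9.8e-05, 8.1e-05, 7.3e-05 seconds per value-iteration sweep
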