237 Commits (d2e36e9db35e853453b0fcc6f425075f3236562f)

Author SHA1 Message Date
Tim Quatmann 5cd4281133 Further output improvements. 5 years ago
Tim Quatmann 34d6ac9fe1 Fixed computing a state limit for the under-approximation. 5 years ago
Tim Quatmann 961baa4386 BeliefMdpExplorer: Various bugfixes for exploration restarts. Unexplored (= unreachable) states are now dropped before building the MDP since we do not get a valid MDP otherwise. 5 years ago
Tim Quatmann c2837bb749 ApproximatePOMDPModelchecker: Improved output a bit. 5 years ago
Tim Quatmann c3847d05af Scaling the rating of an observation with the current resolution. 5 years ago
Tim Quatmann c2ddea1480 First (re-) implementation of refinement. (probably needs some testing/debugging) 5 years ago
Alexander Bork 62c905fc58 Added basis for rewards in dropUnreachableStates() 5 years ago
Alexander Bork 3041b881d4 Beginning of dropUnreachableStates() 5 years ago
Tim Quatmann 79641ef131 Started to make the BeliefMdpExplorer more flexible, allowing the exploration to be restarted 5 years ago
Tim Quatmann 5388ed98e3 BeliefMdpExplorer: Added a few asserts so that methods can only be called in the corresponding exploration phase 5 years ago
Tim Quatmann 71e0654498 Changed method signatures to new data structures. 5 years ago
Tim Quatmann 8b0e582ef4 Use the new BeliefMdpExplorer also for the underapproximation. 5 years ago
Tim Quatmann ab26b69435 Added BeliefMdpExplorer which does most of the work when exploring (triangulated variants of) the BeliefMdp. 5 years ago
Tim Quatmann 37da2b4e1f Added a new model checker that allows computing trivial (but sound) bounds on the value of POMDP states 5 years ago
Tim Quatmann 0b552e6813 Renamed BeliefGrid to BeliefManager 5 years ago
Tim Quatmann 87c8555312 Using the new reward functionalities of BeliefGrid. This also fixes setting rewards incorrectly (previously, the same reward was assigned to all states with the same observation). 5 years ago
Tim Quatmann a3e92d2f72 Using the new reward functionalities of BeliefGrid. This also fixes setting rewards incorrectly (previously, the same reward was assigned to all states with the same observation). 5 years ago
Tim Quatmann 98bb48d3c5 BeliefGrid: Adding support for rewards. 5 years ago
Tim Quatmann 110453146d Various fixes for under/over approximation with rewards. 5 years ago
Tim Quatmann 3887e8a979 Fix for belief triangulation. More descriptive output for belief triangulation asserts. 5 years ago
Tim Quatmann b3115e9395 Code polishing and re-enabled the under-approximation. Refinement should still not be possible right now. 5 years ago
Tim Quatmann d184d67b53 Refactored under-approximation code a bit. 5 years ago
Tim Quatmann 97842f356d Fixed beliefgrid exploration. 5 years ago
Tim Quatmann b3796d740f Fixed lower and upper result bounds being swapped for minimizing properties. 5 years ago
Tim Quatmann 6fee61feb1 POMDP: Started to split belief logic from exploration logic. 5 years ago
Tim Quatmann b53b6ab275 Added missing line breaks 5 years ago
Tim Quatmann 7f102c915b Improved some output 5 years ago
Tim Quatmann 558078b6e9 MakePOMDPCanonic: Improved output of error message 5 years ago
Tim Quatmann e76efd14d5 POMDP: Filling the statistics struct with information. Also incorporated aborting (CTRL+C, i.e. SIGINT) 5 years ago
Tim Quatmann 6f3fab8e80 Added a statistics struct to the approximatePOMDP model checker 5 years ago
Tim Quatmann 0b3945ca12 Pomdp/FormulaInformation: Added template instantiations which apparently are needed with LTO 5 years ago
Alexander Bork 311362d995 Removal of some more obsolete code 5 years ago
Alexander Bork 0507da4ffa Adjusted Refinement Procedure for rewards 5 years ago
Alexander Bork 62e3a62686 Fix for belief reward computation 5 years ago
Alexander Bork 44fd26bd13 Implementation of exploration stopping in refinement procedure for newly added states 5 years ago
Alexander Bork 77b1de510f Renaming of naive underapproximation value map 5 years ago
Alexander Bork 02a325ba75 Fixed refinement not stopping when the initial computation already yields equal values for the over- and under-approximation 5 years ago
Alexander Bork 054c2a906e Fixed a spurious error when over- and under-approximation values are equal 5 years ago
Alexander Bork d28c982fbd Fix for missing initial belief ID in return struct 5 years ago
Alexander Bork 8e30e27eb9 Removal of obsolete code 5 years ago
Tim Quatmann 581e165fb9 Actually use the refinement precision... 5 years ago
Tim Quatmann b3493b5888 Grid: Added cli setting to cache subsimplices. 5 years ago
Tim Quatmann a11ec691a9 Introduced options in the ApproximatePOMDPModelChecker. 5 years ago
Tim Quatmann 5bdcb66fcb Fixes for reward formulas 5 years ago
Tim Quatmann 24faf636d7 Removed unused variables. 5 years ago
Alexander Bork c2582058c9 Added first version of refinement with reuse of previous results 5 years ago
Tim Quatmann 635fbc658a storm-pomdp: towards a more mature CLI 5 years ago
Tim Quatmann 0d58ea5291 Adding missing template instantiation. 5 years ago
Tim Quatmann 5933467670 Silenced warnings regarding member initialization in unexpected order. 5 years ago
Sebastian Junges 63e0d772a4 Do not use the 'goal' label for internal purposes, but rather __goal__. TODO: Consider whether we can do without a fresh label 5 years ago