254 Commits (e1b00dae7aaa18756055f35a1b20438491d9861f)

Author SHA1 Message Date
Tim Quatmann 75d792e987 Implemented refinement heuristic. 6 years ago
Tim Quatmann 45832d3de3 BeliefMdpExplorer: Implemented extraction of optimal scheduler choices and reachable states under these choices 6 years ago
Tim Quatmann 61215e4b24 Over-Approximation: Taking current values as new lower/upper bounds for next refinement step. 6 years ago
Tim Quatmann 4ea452854f Fixes for scoring observations 6 years ago
Sebastian Junges 5f2a598f48 remove unsound 1-state computation 6 years ago
Sebastian Junges 6608f9f00d Fixed implementation from CCD16 6 years ago
Sebastian Junges 7ba3b6b8d6 Canonic POMDP in -> Canonic POMDP out 6 years ago
Sebastian Junges e22cbdb91b support for computing the winning region, also from the initial state; some documentation 6 years ago
Tim Quatmann 26764137f5 Fix for --unfold-belief-mdp setting 6 years ago
Tim Quatmann 3c5df045c1 Added a few assertions 6 years ago
Tim Quatmann 03889958da Added a switch to control the size of the under-approximation via command line. 6 years ago
Tim Quatmann 26a0544e4b BeliefManager: Use flat_maps for beliefs and hash_maps for belief storage. 6 years ago
Tim Quatmann fcee1d05fa Fixed an issue with dropping unexplored states. 6 years ago
Tim Quatmann 2f020ce686 BeliefManager: Making Freudenthal happy (and fast) 6 years ago
Tim Quatmann 937659f356 First improvement step for Freudenthal triangulation 6 years ago
Tim Quatmann eca4dab6c0 BeliefManager: expanding a belief now returns a vector instead of a map 6 years ago
Tim Quatmann 26864067cf BeliefManager: Made several methods private to hide the actual BeliefType. 6 years ago
Tim Quatmann 5cd4281133 Further output improvements. 6 years ago
Tim Quatmann 34d6ac9fe1 Fixed computing a state limit for the under-approximation. 6 years ago
Tim Quatmann 961baa4386 BeliefMdpExplorer: Various bugfixes for exploration restarts. Unexplored (= unreachable) states are now dropped before building the MDP since we do not get a valid MDP otherwise. 6 years ago
Tim Quatmann c2837bb749 ApproximatePOMDPModelchecker: Improved output a bit. 6 years ago
Tim Quatmann c3847d05af Scaling the rating of an observation with the current resolution. 6 years ago
Tim Quatmann c2ddea1480 First (re-) implementation of refinement. (probably needs some testing/debugging) 6 years ago
Alexander Bork 62c905fc58 Added basis for rewards in dropUnreachableStates() 6 years ago
Alexander Bork 3041b881d4 Beginning of dropUnreachableStates() 6 years ago
Tim Quatmann 79641ef131 Started to make the BeliefMdpExplorer more flexible, allowing the exploration to be restarted 6 years ago
Tim Quatmann 5388ed98e3 BeliefMdpExplorer: Added a few asserts so that methods can only be called in the corresponding exploration phase 6 years ago
Tim Quatmann 71e0654498 Changed method signatures to new data structures. 6 years ago
Tim Quatmann 8b0e582ef4 Use the new BeliefMdpExplorer also for the underapproximation. 6 years ago
Tim Quatmann ab26b69435 Added BeliefMdpExplorer which does most of the work when exploring (triangulated variants of) the BeliefMdp. 6 years ago
Tim Quatmann 37da2b4e1f Added a new model checker that allows computing trivial (but sound) bounds on the value of POMDP states 6 years ago
Tim Quatmann 0b552e6813 Renamed BeliefGrid to BeliefManager 6 years ago
Tim Quatmann 87c8555312 Using the new reward functionalities of BeliefGrid. This also fixes setting rewards in a wrong way (previously, the same reward was assigned to states with the same observation). 6 years ago
Tim Quatmann a3e92d2f72 Using the new reward functionalities of BeliefGrid. This also fixes setting rewards in a wrong way (previously, the same reward was assigned to states with the same observation). 6 years ago
Tim Quatmann 98bb48d3c5 BeliefGrid: Adding support for rewards. 6 years ago
Tim Quatmann 110453146d Various fixes for under/over approximation with rewards. 6 years ago
Tim Quatmann 3887e8a979 Fix for belief triangulation. More descriptive output for belief triangulation asserts. 6 years ago
Tim Quatmann b3115e9395 Code polishing and re-enabled the under-approximation. Refinement should still not be possible right now. 6 years ago
Tim Quatmann d184d67b53 Refactored under-approximation code a bit. 6 years ago
Tim Quatmann 97842f356d Fixed BeliefGrid exploration. 6 years ago
Tim Quatmann b3796d740f Fixed confusion of lower and upper result bounds for minimizing properties. 6 years ago
Tim Quatmann 6fee61feb1 POMDP: Started to split belief logic from exploration logic. 6 years ago
Tim Quatmann b53b6ab275 Added missing line breaks 6 years ago
Tim Quatmann 7f102c915b Improved some output 6 years ago
Tim Quatmann 558078b6e9 MakePOMDPCanonic: Improved output of error message 6 years ago
Tim Quatmann e76efd14d5 POMDP: Filling the statistics struct with information. Also incorporated aborting (SIGINT, i.e. CTRL+C) 6 years ago
Tim Quatmann 6f3fab8e80 Added a statistics struct to the approximate POMDP model checker 6 years ago
Tim Quatmann 0b3945ca12 Pomdp/FormulaInformation: Added template instantiations which apparently are needed with LTO 6 years ago
Alexander Bork 311362d995 Removal of some more obsolete code 6 years ago
Alexander Bork 0507da4ffa Adjusted Refinement Procedure for rewards 6 years ago