Implement forgetting/projection #7
This would be useful. But is that an accurate description? Forgetting can be done that efficiently for decomposable sentences, but I'm not sure how to do it for others, even inefficiently. You might have to rebuild the sentence from scratch. A first implementation that only works on decomposable sentences would be useful enough, though not for the test that prompted this. |
Aye, good point. It's workable on DNNF only; if it's not in DNNF, then we can lean on the Shannon expansion. The theoretical results are tied to maintaining formula size and properties. I need to think a bit deeper on the baseline correctness of the variable re-writing. Your counterexample brings up two further things I was interested in adding to the simplification: #8 |
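For concreteness, a minimal sketch of the Shannon-expansion route to forgetting a single variable, written against python-nnf's `Or` constructor and `condition` method (the helper name is made up for illustration):

```python
from nnf import NNF, Or

def forget_via_shannon(sentence: NNF, name) -> NNF:
    """Forget `name` by Shannon expansion:
    exists x. F  ==  F[x=True] | F[x=False].

    Correct for arbitrary NNF sentences, but it duplicates the sentence,
    so repeatedly forgetting variables can blow up the size; the cheap
    literal-erasure trick is only safe on decomposable (DNNF) sentences.
    """
    return Or({
        sentence.condition({name: True}),
        sentence.condition({name: False}),
    })
```

Forgetting a set of variables would just apply this repeatedly, which is where the size concerns above come from.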
This conversation was closed off as part of the Tseitin changes, but it should be addressed as part of this issue (i.e., the test case can be improved with the new functionality). |
Is forgetting the auxiliary variables introduced by the Tseitin transformation desirable? |
Absolutely desirable, yep. It doesn't affect model counts (at least not the bi-directional tseitin), but the models you'd explore would be cluttered, as would the graphical representation, etc. General process would be to model in arbitrary NF (putting it to NNF is easy enough on the fly), convert to CNF with aux all over the place, compile or solve, forget the aux, then analyze what's left. There will be other encoding strategies that introduce aux variables, and the ability to forget them will be quite commonly used. |
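A rough sketch of that pipeline, assuming python-nnf's `nnf.tseitin.to_CNF` and the `forget`/`equivalent` methods discussed in this issue (illustrative, not the exact API):

```python
from nnf import Var, tseitin

a, b, c = Var('a'), Var('b'), Var('c')

# Model the problem in an arbitrary normal form (already NNF here).
theory = (a & b) | (~a & c)

# Tseitin-convert to CNF; this sprinkles auxiliary variables in.
cnf = tseitin.to_CNF(theory)
aux = cnf.vars() - theory.vars()

# Compile or solve on the CNF, then forget the aux variables
# before enumerating models or drawing the graph.
cleaned = cnf.forget(aux)

# With the bi-directional encoding this should recover the theory.
assert cleaned.equivalent(theory)
```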
Pulling out the relevant bit of conversation from the tseitin PR:
We now have an implementation of forgetting/projecting, do you think it's worth having:

```python
def projected_equivalence(T1, T2):
    vars = T1.vars() & T2.vars()
    return T1.project(vars).equivalent(T2.project(vars))
```

...and technically you could get away with just this:

```python
def projected_equivalence(T1, T2):
    return T1.project(T2.vars()).equivalent(T2.project(T1.vars()))
```
|
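A quick usage sketch of the first variant, with made-up variables, just to pin down what it checks:

```python
from nnf import Var

a, b, extra = Var('a'), Var('b'), Var('extra')

T1 = (a | b) & (extra | ~extra)  # same content as T2 plus a spurious variable
T2 = a | b

# Both sentences agree once restricted to their shared variables {a, b}.
assert projected_equivalence(T1, T2)
```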
Closed with #16 |
Internal code shouldn't use …
That method looks a little tricky because of #11. I think it's better to go for something asymmetric, at least for that particular application. Semantic emulation could be a candidate. According to definition 3.2 here:
So maybe:

```python
def emulates(T1, T2):
    return T1.project(T2.vars()).equivalent(T2)
```

That pretty much matches the test, so I think it's on the right track. |
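To make the asymmetry concrete, a hypothetical check against the Tseitin transform (again assuming `nnf.tseitin.to_CNF` plus the `project`/`equivalent` methods; the expected outputs assume the bi-directional encoding mentioned above):

```python
from nnf import Var, tseitin

x, y = Var('x'), Var('y')
theory = x | (y & ~x)

cnf = tseitin.to_CNF(theory)   # introduces aux variables

# The CNF emulates the theory: project away the aux vars and
# you get something equivalent to the theory...
print(emulates(cnf, theory))    # expected: True

# ...but not the other way around, because the theory says
# nothing about how the aux variables must behave.
print(emulates(theory, cnf))    # expected: False
```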
Aye, I think that's precisely the path (#18). You raise an interesting point about who gets to forget aux vars... this would have come up with #15, and it hasn't yet since the only real … Thoughts? |
As long as you still have the original sentence for reference, you know exactly which keys to delete (…). |
Was thinking about the possibility of a context manager to do just that, which could be useful beyond … |
Effectively sets both a variable and its negation to `true` simultaneously in the NNF, and then simplifies.
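For what it's worth, here is a literal reading of that description as a sketch over python-nnf-style nodes (`Var`, `And`, `Or`, the `true` constant, and `simplify()` are assumed from the library); as the first comment points out, this shortcut is only correct on decomposable (DNNF) sentences, whereas the Shannon expansion sketched earlier works in general:

```python
from nnf import NNF, Var, And, Or, true

def forget_by_erasure(sentence: NNF, name) -> NNF:
    """Replace the variable and its negation with `true`, then simplify.

    Only sound on decomposable (DNNF) sentences; on arbitrary NNF it can
    change the meaning of the sentence.
    """
    def erase(node: NNF) -> NNF:
        if isinstance(node, Var):
            # Catches both polarities: the variable and its negation.
            return true if node.name == name else node
        if isinstance(node, And):
            return And(erase(child) for child in node.children)
        if isinstance(node, Or):
            return Or(erase(child) for child in node.children)
        return node
    return erase(sentence).simplify()
```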