Implement forgetting/projection #7

Closed
haz opened this issue Jul 24, 2020 · 11 comments · Fixed by #16

@haz
Collaborator

haz commented Jul 24, 2020

Effectively, it sets both a variable and its negation to true simultaneously in the NNF, and then simplifies.

@haz haz added the enhancement New feature or request label Jul 24, 2020
@haz haz self-assigned this Jul 24, 2020
@haz haz mentioned this issue Jul 24, 2020
@blyxxyz
Collaborator

blyxxyz commented Jul 24, 2020

This would be useful. But is that an accurate description? Forgetting a shouldn't make a & ~a satisfiable, should it?

It can be done that efficiently for decomposable sentences, but I'm not sure how to do it for others, even inefficiently. You might have to rebuild the sentence from scratch.

A first implementation that only works on decomposable sentences would be useful enough, though not for the test that prompted this.

@haz
Collaborator Author

haz commented Jul 24, 2020

Aye, good point. It's workable on DNNF only:

If it's not in DNNF, then we can lean on the Shannon Expansion: T|x \/ T|~x to forget x.

The theoretical results are tied to maintaining formula size and properties. I need to think a bit deeper on the baseline correctness of doing the variable re-writing.

Your counterexample brings up two further things I was interested in adding to the simplification: #8
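
A minimal sketch of the Shannon-expansion approach above, assuming the library's Var, Or, condition(), and satisfiable(); forget_shannon is a hypothetical helper, not the implementation that eventually landed in #16:

from nnf import Var, Or

def forget_shannon(T, name):
    # Forget the variable `name` via Shannon expansion: T|x \/ T|~x.
    return Or({T.condition({name: True}), T.condition({name: False})})

a = Var('a')
# Forgetting 'a' from a & ~a stays unsatisfiable, matching the counterexample above:
forget_shannon(a & ~a, 'a').satisfiable()  # False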

@haz
Collaborator Author

haz commented Jul 30, 2020

Now that Aux variables have a direct treatment, we should include a helper function that forgets all of the aux vars (e.g., T.forget_aux()).

This conversation was closed off as part of the Tseitin changes, but should be addressed as part of this issue (i.e., the test case can be improved with the new functionality).

@blyxxyz
Collaborator

blyxxyz commented Jul 30, 2020

Is forgetting Aux variables desirable outside tests? I expect you wouldn't add them in the first place whenever forgetting them is computationally feasible.

@haz
Collaborator Author

haz commented Jul 30, 2020

Absolutely desirable, yep. It doesn't affect model counts (at least not with the bi-directional Tseitin), but the models you'd explore would be cluttered, as would the graphical representation, etc. The general process would be to model in arbitrary NF (putting it into NNF is easy enough on the fly), convert to CNF with aux variables all over the place, compile or solve, forget the aux, then analyze what's left.

There will be other encoding strategies that introduce aux variables, and the ability to forget them will be quite commonly used.
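
A rough sketch of that workflow, assuming nnf.tseitin.to_CNF from the Tseitin PR; forget_aux() is the helper proposed above, not yet implemented:

from nnf import Var, tseitin

a, b, c = Var('a'), Var('b'), Var('c')
T = (a & b) | (b & c) | ~a        # model in arbitrary NNF
cnf = tseitin.to_CNF(T)           # CNF conversion introduces aux variables
# ... compile or solve cnf here ...
clean = cnf.forget_aux()          # proposed helper: forget the Tseitin aux vars
# clean's models now range only over a, b, c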

@haz
Collaborator Author

haz commented Aug 6, 2020

Pulling out the relevant bit of conversation from the tseitin PR:

I think it comes down to assumptions on equivalence -- my initial thought was that they need to have the same variables, as equivalence can be cast in terms of having the exact same models. Looking at the implementation, I can see that you've taken the measure of evaluating equivalence over the set of vars corresponding to the union of the two.

Maybe projected_equivalence as a new function that forgets all vars not in the intersection? Doesn't target the aux variables specifically, but could get the job done.

We now have an implementation of forgetting/projecting; do you think it's worth having projected_equivalence or something similar? It would just be a simple wrapper around projecting onto the intersection of vars:

def projected_equivalence(T1, T2):
    # Compare the two sentences over the variables they share.
    shared = T1.vars() & T2.vars()
    return T1.project(shared).equivalent(T2.project(shared))

...and technically you could get away with just this:

def projected_equivalence(T1, T2):
    # Each projection keeps only the variables the two sentences share.
    return T1.project(T2.vars()).equivalent(T2.project(T1.vars()))
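
For illustration, a hypothetical usage, assuming the vars()/project()/equivalent() API from #16; T1 carries an extra variable x that both versions project away:

from nnf import Var

a, b, x = Var('a'), Var('b'), Var('x')
T1 = (a & b & x) | (a & b & ~x)   # same models as a & b once x is forgotten
T2 = a & b
projected_equivalence(T1, T2)     # True under either definition above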

@haz haz added this to the 0.3.0 milestone Aug 7, 2020
@haz
Collaborator Author

haz commented Aug 7, 2020

Closed with #16

@haz haz closed this as completed Aug 7, 2020
@blyxxyz
Collaborator

blyxxyz commented Aug 7, 2020

There will be other encoding strategies that introduce aux variables, and the ability to forget them will be quite commonly used.

Internal code shouldn't use .forget_aux() for those, though, in case the original sentence already had aux variables. It should project onto the variables of the original sentence instead. So I expect it'll mainly be useful for user code that doesn't have to be as robust. It's worth having for that.

We now have an implementation of forgetting/projecting, do you think it's worth having projected_equivalence or something similar? It would just be a simple wrapper around projecting on the intersection of vars:

That method looks a little tricky because of #11. (a | ~a).projected_equivalence(a) would be false, while true.projected_equivalence(a) would be true.

I think it's better to go for something asymmetric, at least for that particular application. Semantic emulation could be a candidate. According to definition 3.2 here:

Consider theories T and T' with signatures S⊆S'. Then T' semantically emulates T if the following conditions hold:
(1) every model of T' becomes a model of T when restricted to the interpretations of symbols from S
(2) for every model M of T there is a model M' of T' that has the same domain as M, and that agrees with M on S.

So maybe:

def emulates(T1, T2):
    # T1 emulates T2 if T1, projected onto T2's variables, is equivalent to T2.
    return T1.project(T2.vars()).equivalent(T2)

That pretty much matches the test, so I think it's on the right track.
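
A hedged check of that definition against the Tseitin use case (assuming nnf.tseitin.to_CNF and the project()/equivalent() API):

from nnf import Var, tseitin

a, b = Var('a'), Var('b')
T = (a & b) | ~a
emulates(tseitin.to_CNF(T), T)    # expected True: the CNF emulates the original sentence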

@haz
Collaborator Author

haz commented Aug 10, 2020

Aye, I think that's precisely the path (#18). You raise an interesting point about who gets to forget aux vars... this would have come up with #15, and it hasn't yet, since the only real use of to_CNF is in checking satisfiability. If I ask for a model, I don't want arbitrary aux vars thrown in that only exist because of a to_CNF call, and I also don't want an overly ambitious sub-procedure to get rid of aux vars I've introduced.

Thoughts?

@blyxxyz
Collaborator

blyxxyz commented Aug 10, 2020

As long as you still have the original sentence for reference, you know exactly which keys to delete (model.keys() - sentence.vars()). And to_CNF() doesn't change the model count, so you don't get duplicate models that way.
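
For instance, a small hypothetical helper along those lines (model being a dict from variable names to booleans):

def restrict_model(model, sentence):
    # Drop the keys that aren't variables of the original sentence
    # (i.e. the aux vars introduced by to_CNF()).
    return {name: value for name, value in model.items()
            if name in sentence.vars()}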

@haz
Collaborator Author

haz commented Aug 11, 2020

I was thinking about the possibility of a context manager to do just that, which could be useful beyond to_CNF in other encoding strategies (adders, etc.), but I suppose it can be shelved until it's actually needed.
