Replace implementation of maximum_weighted_matching() #400
Conversation
Without this check, the test program declares all tests passed if it fails to open the input file.
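For illustration only, the kind of guard this commit describes might look like the sketch below; the helper name and error handling are hypothetical, not the PR's actual test code:

```cpp
#include <cstdlib>
#include <fstream>
#include <iostream>

// Hypothetical guard: abort the test run when the input file cannot be
// opened, instead of silently reporting that all tests passed.
std::ifstream open_input_or_die(const char* filename)
{
    std::ifstream input(filename);
    if (!input)
    {
        std::cerr << "error: cannot open " << filename << "\n";
        std::exit(EXIT_FAILURE);
    }
    return input;
}
```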
- Hand-picked graphs to explore basic functionality.
- Graphs that are known to trigger bugs in the old implementation of maximum_weighted_matching().
- Random small graphs.
The new code runs in O(V^3). It also solves a number of known issues.
(Force-pushed from dbe033a to 5fa8626.)
Are the heap data structures in Boost.Heap not sufficient? They are mergeable and mutable.
The mutable heap in Boost.Heap would work as the "plain" type of heap in the matching algorithm. But it looks like BGL currently does not use Boost.Heap, and I don't know how you feel about adding that dependency.

I also need a concatenable heap, which does not currently exist in Boost. The merge feature of Boost.Heap is not sufficient: I need to merge heaps in O(log(n)) time with the option to unmerge them later in O(log(n)). The typical way to implement this is with a custom balanced binary tree. It's not rocket science, but it adds another 800 lines or so.

LEMON and LEDA implement the O(V E log(V)) matching algorithm. It is much faster than O(V^3) on certain classes of sparse graphs, but the speedup on random sparse graphs is fairly modest in my experience, and it can be slower on dense graphs. The new code is already an order of magnitude faster than the previous version for graphs with V > 200. My feeling is that the faster algorithm adds a lot of code in exchange for little benefit.

But I'm up for the challenge. If you want the best matching algorithm in BGL, I will be happy to work on it.
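Purely for illustration, a hypothetical interface sketch of the concatenable heap described above (nothing like this exists in Boost or in this PR; all names are made up):

```cpp
#include <functional>

// Hypothetical interface only: a concatenable priority queue backed by a
// custom balanced binary tree, as described in the comment above. Every
// operation, including concat and its inverse, runs in O(log n).
template <typename Key, typename Compare = std::less<Key> >
class concatenable_heap
{
public:
    void insert(const Key& k);     // add an element
    const Key& find_min() const;   // smallest element under Compare
    void erase_min();              // remove the smallest element

    // Concatenate `other` onto this heap; `other` becomes empty.
    void concat(concatenable_heap& other);

    // Undo a previous concat, moving the detached part back into `other`.
    // This is the "unmerge" that Boost.Heap's merge() does not provide.
    void unconcat(concatenable_heap& other);
};
```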
Thanks for the explanation. There's no problem with adding Boost.Heap as a dependency, as Boost.Graph already depends on many other parts of Boost. It sounds like the efficient algorithm would require adding a new data structure to Boost.Heap to start with, which shouldn't be too difficult, although I'm aware that the maintainer is not all that active any more.

Given that the new implementation is much faster anyway, let's defer the efficient algorithm to later. Ultimately it would be nice to have a top-level algorithm that uses a heuristic to pick the fastest algorithm, while users are still free to call specific algorithms. (Best of both worlds.)
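For context, a minimal sketch of the mutable-heap facility in Boost.Heap discussed here; the values are arbitrary and this is not code from the PR:

```cpp
#include <boost/heap/fibonacci_heap.hpp>
#include <iostream>

int main()
{
    boost::heap::fibonacci_heap<int> heap;  // a max-heap by default

    // Mutable heaps: push() returns a handle through which a stored
    // value can later be changed in place, with the heap restoring its
    // invariant automatically (as dual-variable updates would need).
    boost::heap::fibonacci_heap<int>::handle_type h = heap.push(42);
    heap.push(7);

    heap.update(h, 3);  // mutate the element behind the handle

    std::cout << heap.top() << "\n";  // prints 7
    return 0;
}
```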
I think the concatenable queue may be so special-purpose that it could just stay in BGL, but generalizing it is definitely also a valid option.
Agreed. It occurs to me that the O(V E log(V)) algorithm also needs an […]
I understand. There is no hurry from my side. Thanks for supporting this effort.
What's the "nearest ancestor" problem referred to in the documentation for the fast Gabow algorithm? Is that LCA or something else?
To be honest, I don't know. I kept this comment from the documentation by Yi Ji as I saw no reason to remove it. I know about the existence of that fast algorithm. I tried to read the paper by Gabow, but I cannot make heads or tails of it. Mehlhorn and Schäfer made an offhand remark that this algorithm may be impractical (https://dl.acm.org/doi/10.1145/944618.944622, page 7), but that was a long time ago. I'm not aware of any publicly available implementation.
Doesn't surprise me too much. Bender et al. made a similar remark about the theoretically optimal algorithm for LCA: it just ain't worth it.
Citation for the LCA remark: https://www.sciencedirect.com/science/article/abs/pii/S0196677405000854
I haven't even got to the code proper yet, but here are a few requests and questions to start with.
Unlike <tt>max_cardinality_matching</tt>, the initial matching and augmenting path finder are not parameterized, because the algorithm maintains blossoms, dual variables and node labels across all augmentations.
<p>
The implementation deviates from the description in [<a href="bibliography.html#galil86">77</a>] on a few points:
I think you need to provide more explanation and evidence about why these deviations from an algorithm published in a refereed journal are good. Even better would be to get your deviations published in a journal. :)
I see your point, but I'd like to understand how you want me to solve it.
The text already describes the benefit of each deviation. If you just want me to elaborate, I can, for example, expand each bullet into a full paragraph and provide more details.
The reference paper focuses on time complexity, not practical efficiency. From a complexity point of view, these deviations are trivial tinkering, since the time complexity remains the same. If you want, I can provide specific arguments to point this out.
Reusing labels is not my original idea. It is widely used, for example by Mehlhorn and Schäfer [1], LEMON, and Kolmogorov [2]. But they all use different base algorithms, O(V E log(V)) and O(V^2 E) respectively, so the performance trade-off is different. I did some informal benchmarking on random graphs, but printing benchmarks in the documentation seems out of place to me.
[1] https://dl.acm.org/doi/abs/10.1145/944618.944622
[2] https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=930b9f9c3bc7acf3277dd2361076d40ff03774b2
I expanded the text a bit to argue explicitly that each change preserves the overall time complexity and correctness of the algorithm.
This is a re-implementation of maximum_weighted_matching, based on a paper by Zvi Galil.
The new code runs in time O(V^3).
A new set of test cases is also added.
Resolves #199 #223 #399
The code has been tested extensively on large random graphs, using LEMON as a reference.
Faster algorithms are known for this problem. I initially planned to implement the O(V E log(V)) algorithm by Galil, Micali and Gabow. However, it needs a mutable heap with features that are not readily available in the BGL, and it needs a special kind of mergeable priority queue. While possible, I feel that the amount of code would be disproportionate. So I decided to fall back to a simpler O(V^3) algorithm, essentially the same algorithm that inspired the previous implementation.
Feedback is very welcome. I will mention up front a few points that may draw criticism:

- `brute_force_maximum_weighted_matching()` is unchanged. This function is not very useful in my opinion, but it was part of the public API and there is no need to change it.
- `maximum_weighted_matching()` is backwards compatible with the previous code. But I removed the class `weighted_augmenting_path_finder`, which was essentially an internal detail although it lived in the global `boost` namespace.
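Since `maximum_weighted_matching()` is stated above to remain backwards compatible, a minimal usage sketch of that existing public interface may help; the graph and weights below are made up, and `matching_weight_sum()` is the helper documented alongside it in BGL:

```cpp
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/maximum_weighted_matching.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Undirected graph with weighted edges (weights chosen for illustration).
    typedef boost::property<boost::edge_weight_t, long> EdgeWeight;
    typedef boost::adjacency_list<boost::vecS, boost::vecS,
        boost::undirectedS, boost::no_property, EdgeWeight> Graph;

    Graph g(4);
    boost::add_edge(0, 1, 10, g);
    boost::add_edge(1, 2, 20, g);
    boost::add_edge(2, 3, 10, g);

    // mate[v] receives the vertex matched to v
    // (or graph_traits<Graph>::null_vertex() if v is unmatched).
    std::vector<boost::graph_traits<Graph>::vertex_descriptor> mate(
        boost::num_vertices(g));
    boost::maximum_weighted_matching(g, &mate[0]);

    std::cout << "total weight: "
              << boost::matching_weight_sum(g, &mate[0]) << "\n";  // 20
    return 0;
}
```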