Mesh Offsetting #192
You run the risk of self-intersection. Also, offsetting by the tangent of the vertex won't offset the adjacent face by a fixed amount, which may give uneven tolerance across the face.

So maybe we can compute the offset of each selected face and add them to the vertices? Not sure about self-intersection though.

Yeah, I was thinking about adding the vertex normal as a const input to the
To avoid self intersections, you need to guarantee that the input is convex. So what is needed is a convex decomposition operation (then offset each convex piece, then union to get the final result).
I don't think a vertex has any specific "tangent", maybe you mean the vertex normal?
The reason a Minkowski sum is the "right" way is that for a general vertex (not symmetric, more than three faces), when you offset the faces, they will no longer meet up at that same single vertex (in any position). The simplest alternative is just to offset the triangles exactly along their face normals, then stitch the edge and vertex gaps with new faces, but again it only works on convex polyhedra and it explodes the number of faces rapidly.

An alternative approach to adjusting tolerance is to design things parametrically in the first place, so they can simply be rapidly regenerated to adjusted dimensions. This is the approach I always took with OpenSCAD, and it'll work equally well with Manifold.

I tried for some time (years ago) to create a 3D straight skeleton algorithm, which would solve mesh offsetting, but it turned out to be more difficult than I imagined. I'm not tempted to spend much time on it anymore, but I agree it would be a nice algorithm to have.
The dumbest possible Minkowski algorithm I know (for a convex “tool” object) is to go through each face of one mesh, hull the other mesh to each of the vertices of each face, and then union them all together with the original mesh. The hulls are all parallelizable, there are a ton of duplicate vertices, and it’s one giant union… I wonder how fast it would be?

EDIT: Seems like there might be some simple exact convex decomposition algorithms out there too, though the hulling step here might be slow…
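For convex operands, the "hull to each vertex and union" idea above collapses to the classic rule A ⊕ B = hull{a + b : a ∈ A, b ∈ B}. A minimal 2D pure-Python sketch of that reduction (helper names are illustrative, not Manifold's API; for non-convex inputs this only applies per convex piece, which is why the decomposition step comes up):

```python
# 2D illustration of the brute-force Minkowski idea for convex operands:
# the sum is the convex hull of all pairwise vertex sums.

def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 means a CCW turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull(points):
    # Andrew's monotone chain convex hull: CCW, no repeated endpoint.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def minkowski_sum(a, b):
    # Convex-only shortcut: hull of all pairwise vertex sums.
    return hull([(p[0] + q[0], p[1] + q[1]) for p in a for q in b])

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
# Summing a unit square with itself gives a square of twice the size.
print(minkowski_sum(square, square))  # [(0, 0), (2, 0), (2, 2), (0, 2)]
```

The quadratic number of candidate points is what makes this scale poorly, matching the "won't scale past a thousand faces" concern below.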
I think it will not scale well for meshes with over a thousand faces...
We want to avoid exact convex decomposition not because the algorithm itself is hard, but because exact convex decomposition will create many parts for very simple geometries, e.g. cube - sphere. In that case it is not better than the "dumbest possible" solution you described. I actually implemented something that can deal with concave geometries, see #415 (comment). The reason I was holding it back is because

In short: I procrastinated about this. I prefer doing dumb optimizations that don't exercise my brain that much :P
What? cube and sphere are already convex. The convex decomposition of a convex geometry is just the one geometry. edit: ah, sorry, I misinterpreted the dash as a random separator, not a difference operation.
The algorithm I was implementing is from the papers "Offset Triangular Mesh Using the Multiple Normal Vectors of a Vertex" and "A 3D surface offset method for STL-format models". Instead of vertex normals, they use normals from incident faces/edges. Each convex edge contributes two normals, one from each incident face. For concave edges, the algorithm computes an "average" vector. Below is a diagram for the 2D case:

This is the case when there is only 1 concave edge. Note that the "normal" in this case is no longer a unit vector. When there are two consecutive concave edges (where consecutive means considering all the incident edges in cw/ccw order), there is a 3D case. However, when you get to three or more consecutive concave edges, this method doesn't really work; the paper just says something like compute the solution for every 3 consecutive normals and then average them...

After this we still have another problem: how should we compute the "blend surface" by subdivision? Again, for the convex case it is simple: just add the two vectors and normalize them. However, when we have concave edges, this does not work because the "average normal" is not a unit vector, and normalizing the sum will lose the length value. Taking the average of the lengths of the two vectors is not a solution either. I experimentally found out (read: guessed and tried to compute it) that if the "average normal" is the "average" of 1 or 2 normals, say

It would be great if someone can give me some help on the math things. I don't know much about the math here.
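One reading of the papers' per-vertex constraint, sketched in 2D (an assumption based on the description above, not the papers' actual code): choose the offset vector v so that v · nᵢ = d for each incident unit normal nᵢ, which moves every incident face outward by exactly d, and which also explains why the resulting "normal" is generally longer than d:

```python
import math

# Hedged 2D sketch: solve v.n1 = d and v.n2 = d for the per-vertex
# offset vector v, given the two incident unit edge normals.
# `offset_vector` is an illustrative name, not an API from the papers.
def offset_vector(n1, n2, d=1.0):
    det = n1[0] * n2[1] - n1[1] * n2[0]  # assumes n1, n2 are not parallel
    vx = (d * n2[1] - d * n1[1]) / det   # Cramer's rule, x component
    vy = (n1[0] * d - n2[0] * d) / det   # Cramer's rule, y component
    return (vx, vy)

# A 90-degree corner: v points along the diagonal with length sqrt(2)*d,
# longer than d, so normalizing an averaged vector would lose the length.
v = offset_vector((1.0, 0.0), (0.0, 1.0))
print(v, math.hypot(*v))  # (1.0, 1.0) 1.414...
```

Under this reading, the subdivision question becomes how to interpolate these non-unit vectors across a blend region, which is where the normalize-the-sum trick breaks down.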
I have to admit, that brute force approach to offsetting has some nice robustness properties. I'm surprised it's that fast - obviously not great at scale, but I think most people using an offset probably want to apply it to fairly simple objects anyway. I don't think we need to do a general Minkowski op - we got started here because we noticed that the only thing anyone ever used a Minkowski for was a sphere anyway, so better to stick with offsetting for simplicity. It also gives @pca006132's CSG tree optimizations a chance to shine, with all those repeated ops.
It might be nice to do Linear Sweeps too. I wrote a small sweeping implementation using CoACD to see how bad it would be:

```python
# Linear Sweep
import coacd
import manifold3d

a_mesh = a.to_mesh()
# CoACD returns a list of convex hulls as (vertices, faces) pairs.
parts = coacd.run_coacd(coacd.Mesh(a_mesh.vert_properties[:, :3], a_mesh.tri_verts))
print("COACD found", len(parts), "parts")
sweep = manifold3d.Manifold()
for i, part in enumerate(parts):
    manifold_part = manifold3d.Manifold.from_mesh(manifold3d.Mesh(part[0], part[1]))
    swept_convex_part = manifold3d.Manifold.batch_hull(
        [manifold_part,
         manifold_part.translate(0.4, 0.4, 0.4)])
    sweep += swept_convex_part
return sweep
```

Problem here is: CoACD takes 97.5% of the time 😅

Sweeping each triangle individually:

```python
a_mesh = a.to_mesh()
swept = manifold3d.Manifold()
for tri in a_mesh.tri_verts:
    hull = manifold3d.Manifold.hull_points(
        [a_mesh.vert_properties[tri[0], :3],
         a_mesh.vert_properties[tri[1], :3],
         a_mesh.vert_properties[tri[2], :3],
         a_mesh.vert_properties[tri[0], :3] + 0.4,
         a_mesh.vert_properties[tri[1], :3] + 0.4,
         a_mesh.vert_properties[tri[2], :3] + 0.4])
    swept += hull
return swept
```

Results in a better mesh, and completes in only

Manifold is too much faster than the supporting libraries 😂
CAD kernels (like OpenCASCADE) mesh all surfaces by subdivision of some 2D function parameterized by u,v. For a 3-way fillet blend, I think CAD kernels try to use circular/spherical blends when possible, but will also use a polynomial to construct an interpolating surface when circular blends are not possible… I’d have to check how OpenCascade does it…

My naive recommendation would be to use barycentric interpolation of the triangle positions/normals to perform the interpolation… but I’m on the go right now, so I can’t review my usual sources 😅

If you want to see how a CAD kernel handles it really quick, you can use my little CAD playground webapp to create a box, subtract a box from that, and select the inner three corners with the fillet tool…

Interesting idea. Out of curiosity, what makes you attracted to the full minkowski sum op? I was always surprised it was included in OpenSCAD at all, since in all my time using that program I could never find a useful mental model for that op, so I pretty much never used it.

I’ll admit, the only non-convex/non-convex sum I’ve ever wanted to do is a toroid sweep along a spline path (and there are far more efficient ways of doing that). My primary fascination with Minkowski sums has been in collision detection, where two arbitrary objects collide when the “Minkowski Difference” intersects the origin:

The bottleneck for making physics engines that don’t rely on convex primitives has always been the difficulty in computing the Minkowski sums in a reasonably fast manner (which, of course, we’ll never do exactly on meshes quickly). I find the visualization mostly educational 😄 Maybe it’s fine to have a brute force implementation for it, since few will ever use it outside of as a curiosity…

Either way, I think the convex/convex case and the non-convex/convex case have enough fun uses that optimized implementations would make a worthwhile addition to the toolkit. The unionRound implementation is a fun example of that: https://github.com/TLC123/OpenSCAD_stuff (This function in particular would benefit the most from @pca006132’s fast mesh offsetting method, since the slowest part is constructing a bunch of minkowski() sphere-dilated offset meshes 😄)
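To make the collision claim concrete, here is a toy 2D check (pure Python, convex point sets; function names are made up for illustration): two convex shapes overlap exactly when the origin lies inside the convex hull of their pairwise vertex differences.

```python
# Minkowski-difference collision sketch: convex sets A and B intersect
# iff the origin is inside {a - b : a in A, b in B}. Here we build the
# hull explicitly; GJK reaches the same answer without materializing it.

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull(points):
    # Andrew's monotone chain: CCW convex hull, no repeated endpoint.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def collide(a, b):
    diff = hull([(p[0] - q[0], p[1] - q[1]) for p in a for q in b])
    n = len(diff)
    # Origin is inside a CCW convex polygon iff it is left of every edge
    # (>= 0 also counts touching contact as a collision).
    return all(cross(diff[i], diff[(i + 1) % n], (0, 0)) >= 0 for i in range(n))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(collide(square, square))                           # True
print(collide(square, [(x + 3, y) for x, y in square]))  # False
```

The quadratic difference set is exactly the expensive part the comment alludes to; physics engines avoid building it by walking the difference implicitly.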
Interesting. I actually asked my friends doing research in robotics about usage of the minkowski sum; they said that it can be used in collision detection, but it is so slow that nobody really uses it.

Regarding CoACD: it does approximate convex decomposition. I think it is faster than exact convex decomposition and gives you far fewer parts. However, those approximate convex parts are not suitable for doing a minkowski sum; you will get self-intersections. And it is surprising to know that the decomposition step is slower than doing hundreds of unions. I guess our optimizations are worth it.

Btw, thanks for the pointer for computing blends, will look into it. I am lacking a lot of the basic knowledge about graphics and CAD.

I tested the per-triangle minkowski sum with OpenSCAD; it is much slower than convex decomposition when the operand is large, e.g. spheres with $fn=100, which a lot of users do use.

I think approximate methods are slower because they are iterative, and aiming for the smallest number of hulls possible. I’m hopeful there are fast and exact methods (like this guy's greedy one: https://www.reddit.com/r/GraphicsProgramming/s/0jGSbAPnxF ) that still produce vastly fewer convex parts than the number of triangles on the mesh 😄 I’d probably try to implement it by modifying the existing hull algorithm… Could you share the OpenSCAD code you tested and the relative timings?
I have a new dumb algorithm in mind for convex decomposition:
It could be interesting to select the plane’s splitting angle; there are a few options:
My cheater heuristic will probably be:
And, depending on how fast “split by plane” is, this could all run at usable speeds 😄

The CGAL pages describing their algorithms have been wonderful for clarifying my thinking:

Looking forward to trying this out once I’m back from holiday 😄
I feel like picking a plane snapped to the nearest cardinal direction is probably the fastest way of doing split by plane, considering how the collider works.

I was thinking about something similar for the algorithm I was working on, but instead of finding concave edges I try to find pairs of faces that the swept volume will intersect (which I call conflicting faces). With those pairs, just do the decomposition by the bisecting plane. For efficiency, try to randomize a bit and try several possible cuts for each level (a bit like Monte Carlo tree search but not as sophisticated), as one cut may eliminate multiple pairs of conflicting faces. It will also prioritize cuts that have a roughly equal number of pairs of conflicting faces on both parts, as we can do the decomposition of the two parts in parallel. This can work for the algorithm I was working on because it does not require the parts to be convex; this is just to avoid having conflicting faces.

And yeah, the CGAL algorithm description is amazing.
Interesting. These ideas remind me a bit of binary space partitioning trees (BSP), which are sometimes used for mesh booleans. Sounds like they might be better for this.
Bisecting plane sounds good, but how do you calculate it for the swept surface?
(Author of RealTimeCSG and Chisel, the best brush package for Unity.) I think the tricky part is that going from meshes to brushes needs a convex decomposition 😄

I’m tempted to add a “method” arg to the function for the different speed and exactness tradeoffs… it would be cool to be able to trade exactness for not having your computer crash trying to do the operation…

I think a good additional heuristic for splitting planes is trying to get the plane to contain multiple concave edges at once, which should vastly cut down on the number of convex parts for “blocky” objects (though it’ll do little for “organic” objects).
No, I only calculate it for two planes, because I only need to separate them so their swept volumes will not be intersecting. For the heuristic, I think apart from whatever geometry things you come up with, adding Monte Carlo tree search (e.g. try not only the edge with the highest weight, but several, and use the best one) will probably help a bit.
Hrmmm, testing out my Solid Exact Convex Decomposition, it seems like it takes about as long as CoACD and VHACD, but outputs 2-6x more convex parts (close to the order of as many concave edges as there are on the model)...

So far, not seeing the benefit except for very blocky models, which it decomposes great. Perhaps it's time to explore the surface decomposition methods... Or the CGAL method: Or perhaps another one...
The issue with the number of convex parts is the reason we think that mesh offsetting using minkowski sum and convex decomposition will not scale.

For the openscad test, I will post it later this week. I am currently in hospital and do not have access to my computer for several days. I was just modifying the minkowski sum code for the manifold backend in openscad, and running it against some benchmarks (in the manifold integration PR for openscad).
Btw, wondering if porting CGAL's convex decomposition function will make it significantly faster. For openscad, iirc the bottleneck is currently in the convex decomposition part. CGAL only accepts Nef polyhedra for convex decomposition, which use exact arithmetic, and operations on them are known to be pretty slow.
I don't know - it was just an idle thought. I agree though, Voronoi partitions are quite useful. In 2D, the dual of a voronoi partition is a Delaunay triangulation, right? I wonder what the 3D equivalent is?
The 3D equivalent is the Delaunay tetrahedralization, which is enormously useful for Finite Element Simulation. (See the WebGL FEM simulation I made a while ago: https://zalo.github.io/TetSim/ ; consider “Digital Molecular Matter”.)

I think the broken-out Geogram library is probably a better/cleaner bet for getting both 3D Voronoi and 3D Delaunay (though it might need the full Geogram for constraining the tetrahedralization to the bounds of the manifold), but there are also specialized libraries for this, like fTetWild:

Worthy of note is the fact that tetrahedra are also convex… but they’re far too numerous to do minkowski sums or rigidbody physics with; I’d go straight for FEM simulation if fTetWild was built into Manifold. The benefit of FEM is that you get deformations straightforwardly… so any woodworking or cooking simulation would get to benefit from deformations and soft body physics implicitly 😄
I think I've made a theoretical breakthrough... when the voronoi regions are set to the circumcircles of each triangle with a reflex edge, then the voronoi diagram perfectly cuts along each triangle edge in one pass (with a ~

There's a funny thing though... for obtuse triangles, the circumcenter is outside the triangle, which you might think causes issues... but Voro++ resolves it correctly by putting the voronoi region correctly outside the cell's coordinate.

The second thing I had to solve for is that there are a bunch of situations where the circumcircles of two triangles perfectly overlap... I think I have a solution where I joggle the vertices of just those triangles going in, and then unjoggle them coming out (which should restore the original shape, and is a built-in option in

I think I'm converging on an exact algorithm that has a constant hull output of 2*(reflex edges) (which is better than CGAL's worst case of (reflex edges)^2), which means it'll be ready for codification into C++ soon... where we'll be able to test how useful it is for the operations that need it 💪

Also, if I can use Voro++'s "Custom Walls" feature with manifolds for a restricted tessellation, then that should save on needing to intersect with the original part, which will probably also vastly improve performance... maybe 10x!
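A small pure-Python sketch of the circumcenter computation this formulation needs (a standard closed-form expression, not code from the branch under discussion); it also demonstrates the obtuse case mentioned above, where the circumcenter falls outside the triangle:

```python
# Circumcenter of a 3D triangle (a, b, c): with u = b - a, v = c - a,
# n = u x v, the center is a + ((|u|^2 v - |v|^2 u) x n) / (2 |n|^2).
def sub(p, q):
    return (p[0] - q[0], p[1] - q[1], p[2] - q[2])

def dot(p, q):
    return p[0] * q[0] + p[1] * q[1] + p[2] * q[2]

def crossp(p, q):
    return (p[1] * q[2] - p[2] * q[1],
            p[2] * q[0] - p[0] * q[2],
            p[0] * q[1] - p[1] * q[0])

def circumcenter(a, b, c):
    u, v = sub(b, a), sub(c, a)
    n = crossp(u, v)
    uu, vv = dot(u, u), dot(v, v)
    w = crossp((uu * v[0] - vv * u[0],
                uu * v[1] - vv * u[1],
                uu * v[2] - vv * u[2]), n)
    k = 2.0 * dot(n, n)  # zero iff the triangle is degenerate
    return (a[0] + w[0] / k, a[1] + w[1] / k, a[2] + w[2] / k)

# Right triangle: circumcenter is the midpoint of the hypotenuse.
print(circumcenter((0, 0, 0), (2, 0, 0), (0, 2, 0)))    # (1.0, 1.0, 0.0)
# Obtuse triangle: the circumcenter lies outside the triangle (y < 0).
print(circumcenter((0, 0, 0), (4, 0, 0), (2, 0.5, 0)))  # (2.0, -3.75, 0.0)
```

The obtuse case is also why long, skinny triangles produce oversized circumcircle seeds, as noted later in the thread.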
I'm beginning the C++ integration... so far, I have Voro++ and a Voronoi Fracture function (with Python binding) integrated:

The actual VoACD implementation is going to take a little more diagnostic troubleshooting to match the Python implementation... specifically, I think, the subroutine that finds all triangles with at least one reflex edge, and the circumcircle routine...

This is probably just my noobishness with halfedges showing, but could someone clarify how the HalfEdge indices, HalfEdge.face indices, and vertPos indices are related? Periodically, I see lines using a HalfEdge.face index right into the vertPos array... which implies that edges, vertices, and faces are all interchangeable somehow 😕?
Can you point out which line it is? I don't think it is correct. Each face

Yeah, the only funny thing is that

And this is probably obvious, but does

Yes. Alternatively you can also do
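A toy reconstruction of the indexing convention being asked about (an assumption based on "three halfedges per triangle", not necessarily Manifold's exact internal layout): halfedge i belongs to face i // 3, and its start vertex is corner i % 3 of that face, so halfedge, face, and vertex indices live in different spaces but convert with simple arithmetic.

```python
# Sketch of a 3-halfedges-per-triangle layout. Each halfedge stores
# (start_vert, end_vert, face); `build_halfedges` is an illustrative
# helper, not Manifold's API.
def build_halfedges(tri_verts):
    halfedges = []
    for face, tri in enumerate(tri_verts):
        for j in range(3):
            halfedges.append((tri[j], tri[(j + 1) % 3], face))
    return halfedges

tris = [(0, 1, 2), (0, 2, 3)]  # two triangles sharing edge 0-2
for i, (start, end, face) in enumerate(build_halfedges(tris)):
    assert face == i // 3              # halfedge index -> face index
    assert start == tris[face][i % 3]  # face + corner -> vertex index
```

Under this layout, indexing vertPos with a face index would only be correct by coincidence, which matches the confusion expressed above.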
@zalo thinking about it, I think the algorithm CGAL uses is worst-case optimal, as there is a construction that requires
Where does it say that? I can’t think of a construction that requires r^2; mine pretty much only does r*2, iirc… (also, fwiw, I’m keeping my implementation warm as the default branch here: https://github.com/zalo/manifold ; I tried to break it out from Manifold, but it relies too much on internal data structures to be separated cleanly.)

@zalo The lower bound is here: Did not read into their construction to see if it really holds though.

*see figure 4.
This is only r^2 if you only make planar slices from the vertical axis. You can get it down to 8 convex parts if you first slice it horizontally in between the top and bottom, and then proceed with their vertical-plane slice algorithm. My algorithm theoretically gets you 12 convex parts, since it creates voronoi chunks on each face adjacent to a reflex edge.

I'll admit though, some of the chunks coming out of my algorithm appear to be too large, causing them to overlap with other reflex edges (outputting not-exactly-convex parts), and the theoretical bound on convex parts is sort of worthless unless we can think of a straightforward algorithm to achieve it in the general case. 😅
@zalo I was trying to use your branch in openscad, but it says there are many degenerate triangles or degenerate voronoi cells; is this normal? e.g.

I admit though, the triangles are pretty bad... maybe we need to do remeshing or mesh refinement?
Yeah, that’s pretty normal (though it doesn’t always cause problems), but you probably do need to do some form of mesh refinement…

The issue with the circumcircle formulation is that long, skinny triangles tend to have massive circumcircles that are well outside of their bounds… I suspect my voronoi formulation is only “guaranteed”(?) to work if the surface triangulation is Delaunay… or if all the circumcenters lie within the borders of the triangles 🤔

Perhaps the formulation is still too immature to use without mesh refinement 🙃
Interesting, maybe we can combine it with mesh refinement later. I am just curious how this performs compared with the one in CGAL. (When I was waiting for a meeting, I picked up the old code and ran it again.)
Hi, what do you guys think about the approach in https://github.com/iGame-Lab/PFPOffset, is it worth consideration?

Looks interesting; it seems that it does not rely on sampling, and that it can scale to pretty large meshes. Probably need to play with their demo and see.
Certainly interesting - I'm curious what kinds of results it gets with a uniform offset - they seem to show a lot of examples of offsets that vary from one side of the object to the other, which makes it harder to evaluate the results. It also appears that repo has a dependency on CGAL, which we would not want to take. Can you look and see how feasible it would be to remove that dependency or replace it with functions we have in Manifold? (Manifold is in some sense a CGAL competitor.)
@zalo Hi, I tried Tess, but I did not find the interface to provide a wall. Do you know how to input a mesh as a wall?
@yxdragon Voro++ (and thus Tess) can’t use meshes as walls because the meshes might make the Voronoi cell non-convex. The code is in my old PR #663, but the secret is that I use manifold to convex hull the vertices of each Voronoi cell (which are guaranteed to be convex already) and manifold again to intersect (“^”) those cells with the mesh. This allows them to be properly non-convex, while taking advantage of all of manifold’s existing acceleration logic 😁

In this other demo ( https://zalo.github.io/StochasticLatticeGen/ ), I just triangulate each face of each Voronoi cell and intersect each triangle with the input mesh to get a soup of surface edges.
@zalo In voro++'s C++ demo, there is a container.add_wall function, but I could not find it in Tess; I can only input a cube bounding box to limit it. Do you know how to add a wall?
@yxdragon No; let's move this to the Tess thread: wackywendell/tess#3. I don’t know any Python library that exposes them, and they will not work for arbitrary meshes anyway! I recommend you use manifold to intersect the cells with your mesh!
@starseeker maybe you will be interested in this issue. I have some ideas, but don't really have time to work on it for now...
Mesh offsetting seems very useful for adding tolerance to parts, especially when combined with face selection. I wonder if I can just offset each vertex according to its tangent? Although we may need to recompute the tangent if we only selected a subset of the faces connecting to that vertex.

OK, this will not work.