feat(skeleton): skeleton line width control and conversion from nanometers #820
Conversation
Force-pushed from 966ad0e to bfce2a6.
Review comment on src/skeleton/frontend.ts (outdated):
```glsl
highp uint lineEndpointIndex = getLineEndpointIndex();
highp uint vertexIndex = aVertexIndex.x * lineEndpointIndex + aVertexIndex.y * (1u - lineEndpointIndex);
highp uint vertexIndex2 = aVertexIndex.y * lineEndpointIndex + aVertexIndex.x * (1u - lineEndpointIndex);
```
I am a bit confused why I needed to add this other `vertexIndex`. The original seems to access the data for the opposite node for each line segment.
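Expanding the selection for the two values of `lineEndpointIndex` (derived directly from the diff above):

```glsl
// lineEndpointIndex == 0u: vertexIndex = aVertexIndex.y, vertexIndex2 = aVertexIndex.x
// lineEndpointIndex == 1u: vertexIndex = aVertexIndex.x, vertexIndex2 = aVertexIndex.y
//
// So the original single-index expression yields the *opposite* endpoint's
// index at each end of the segment, matching the observation above.
```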
Force-pushed from d040bfa to 4d0f4d8.
added setLineWidth, setNodeDiameter, nanometersToPixels. Migrated skeleton shader editor to control vertex shader instead of fragment shader (match behavior of annotation layer).
Force-pushed from 4d0f4d8 to f596890.
I added support for non-float vertex attributes built on top of this branch, since I already made the step here of moving the editable shader from being a fragment shader to a vertex shader. This simplifies things because we don't have to worry about the `flat` qualifier and the provoking vertex, but that is an alternative. There are some backwards-compatibility issues here; maybe the non-float change should be merged with #819 to deal with backwards compatibility. I could keep the original name and `vCustom` as floats and …
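For context on the `flat` qualifier issue (standard GLSL ES 3.00 behavior, not code from this PR; `vCustomId` is a hypothetical name):

```glsl
// Non-float varyings cannot be interpolated across a primitive: they must
// be declared `flat`, and a flat varying takes its value from the provoking
// vertex only. Evaluating the user shader in the vertex stage sidesteps
// having to reason about which vertex that is.
flat out highp uint vCustomId;  // hypothetical integer vertex output
```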
Taking radius into account when rendering skeletons has been requested a lot and would certainly be a valuable feature. However, I think it may be better to implement it directly as a separate rendering mode for a few reasons:
The main downside, I suppose, would be less flexibility in deriving the radius from arbitrary properties in the shader. But is that really needed?
@jbms We were discussing using cylinders, and we would be happy with that approach, but one critical property of skeletons that we don't want to lose is the ability to set a minimum rendered width. This allows skeletons to be used to spot the neuron within the 3d view. In this PR, it is achieved by wrapping `nanometersToPixels` with a `max(1.0, x)`. The other part of it, setting a maximum size in nanometers before the pixel conversion, is not important. Sidenote: I should use the skeleton information for the segment list's right click to jump to a segment when the mesh data is not available. Would you suggest implementing the cylinders using a similar approach as the ellipse annotation? Should …
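A minimal sketch of the two clamps described above (only the names `nanometersToPixels` and `setLineWidth` come from this PR; `radius` as a per-vertex skeleton attribute and `maxWidthNanometers` are assumptions):

```glsl
// Optional clamp in nanometers before conversion, then the critical clamp
// to a one-pixel minimum after conversion, so thin or distant skeletons
// remain visible as at least a hairline in the 3d view.
float widthNm = min(2.0 * radius, maxWidthNanometers);
setLineWidth(max(1.0, nanometersToPixels(widthNm)));
```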
Wanting to ensure it is at least one pixel makes sense --- we could ensure that even if rendered as a cylinder.

Using the ellipse approach would be simplest in some ways, but would result in a large number of triangles. The ideal approach, I think, would be to render them as "cylinder imposters" --- https://stackoverflow.com/questions/9595300/cylinder-impostor-in-glsl You can alter the depth used for depth testing in the shader, so that it ultimately produces an identical result to real cylinder geometry. I think it would also be fine if it is not 100% perfect. I originally attempted to do the imposter approach for the ellipsoid annotations, but it was proving to be too complicated. If you prefer to go with the ellipsoid approach of using a real triangulated cylinder, that would be fine also.

As far as a nanometersToPixels function --- there are a couple issues there:
Certainly I can see that scaling annotations to physical units would be useful --- ideally there would be some way to do it while taking proper coordinate transforms into account, but not sure how that might be done exactly.
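A minimal sketch of the cylinder-imposter idea mentioned above (my own illustration of the linked Stack Overflow approach, not code from this PR or from neuroglancer; the `vEye*` varyings and `uRadius` are assumed to be supplied by a quad-expanding vertex shader):

```glsl
#version 300 es
precision highp float;
in vec3 vEyePos;      // eye-space position on the billboard quad
in vec3 vAxisStart;   // eye-space start of the cylinder axis
in vec3 vAxisDir;     // normalized eye-space axis direction
in float vAxisLen;    // segment length
uniform float uRadius;
uniform mat4 uProjection;
out vec4 outColor;

void main() {
  vec3 d = normalize(vEyePos);              // view ray (eye is the origin)
  vec3 w = -vAxisStart;                     // eye position relative to axis
  // Components perpendicular to the axis reduce this to a 2d circle test.
  vec3 dp = d - dot(d, vAxisDir) * vAxisDir;
  vec3 wp = w - dot(w, vAxisDir) * vAxisDir;
  float a = dot(dp, dp);
  float b = 2.0 * dot(dp, wp);
  float c = dot(wp, wp) - uRadius * uRadius;
  float disc = b * b - 4.0 * a * c;
  if (disc < 0.0) discard;                  // ray misses the cylinder
  float t = (-b - sqrt(disc)) / (2.0 * a);  // nearest intersection
  vec3 p = t * d;                           // eye-space hit point
  float along = dot(p - vAxisStart, vAxisDir);
  if (along < 0.0 || along > vAxisLen) discard;  // beyond the segment ends
  // Key trick: write the depth of the analytic hit point, not the quad,
  // so depth testing matches real cylinder geometry.
  vec4 clip = uProjection * vec4(p, 1.0);
  gl_FragDepth = 0.5 * clip.z / clip.w + 0.5;
  outColor = vec4(1.0);                     // shading omitted for brevity
}
```

End caps and degenerate (axis-parallel) rays are ignored here; a real implementation would handle both.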
I wonder if `physical_to_pixels` might be a better name. I'm similarly concerned that people will have vertex data that will not necessarily be aligned in units to the physical units they want to display, and giving them access to the shader lets them normalize any issues away. For example, `physical_to_pixels(radius*1000)` would be possible if `radius` were given in the wrong unit relative to the display.
"For 3-d perspective projection views, even if there is uniform scaling for non-point annotations there is more than one location involved meaning there isn't a single scale factor." I think perhaps you didn't understand that this is the point of this function, it will dynamically calculate the line width based on the camera location, so that when the line is closer it is scaled differently than when that same line is farther away. |
For a line annotation, the "nanometersToPixels" mapping may not be the same for the two points (if they aren't at the same depth relative to the perspective camera view). Currently, each annotation is rendered via a number of vertices --- 4 for each line, plus 4 more for each endpoint marker --- and the user-defined shader runs as part of the vertex shader for each vertex, and is expected to produce identical output for each vertex, since the user currently has no (documented) way to vary the behavior depending on which vertex is being processed. In order for …
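The depth dependence described here falls out of the perspective divide; roughly (standard projective geometry, not code from this PR; `uViewportHeightPixels` is a hypothetical uniform, while `uProjection` and `uModelView` appear later in the thread):

```glsl
// Screen pixels spanned by physical length `len` at model-space `vertex`.
// NDC size scales with 1/clip.w, so the two endpoints of a line at
// different depths get different conversion factors for the same length.
float physicalToPixels(float len, vec3 vertex) {
  vec4 clip = uProjection * uModelView * vec4(vertex, 1.0);
  return len * uProjection[1][1] * uViewportHeightPixels / (2.0 * clip.w);
}
```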
Just in case it is useful, here is a dynamic demo for skeletons. Do I understand correctly that your worry is about extending this metaphor to annotations (specifically ones with more than one vertex), since different scaling would be needed for pointA and pointB because they will in general be at different distances from the camera? Are you opposed to merging this in for just skeletons now and worrying later about how to implement it for annotations, or are you worried that introducing this to skeletons will create a disconnect between the annotation and skeleton shader APIs that we should avoid?
I suppose it is fine to add this just to skeletons for now if that is useful, provided that we can figure out a reasonable way to handle the coordinate transforms and units.
@jbms I added an implementation for the line annotation; it looks like this:

```glsl
float sourceDimToPixels(float value) {
  vec2 lineOffset = getLineOffset();
  highp vec3 vertexA = projectModelVectorToSubspace(getVertexPosition0());
  highp vec3 vertexB = projectModelVectorToSubspace(getVertexPosition1());
  highp vec3 vertex = mix(vertexA, vertexB, lineOffset.x);
  return nanometersToPixels(value, vertex, uProjection, uModelView, 1.0 / uLineParams.x);
}
```

For now I renamed it to sourceDimToPixels, as that seems to be more accurate: our skeleton uses a 1x1x1 source dimension, whereas local annotations use the voxel scale by default. This way, we shouldn't have to worry about the units, if I understand it correctly. Below is a screenshot showing a line annotation in blue that I have roughly positioned in line with a segment of the skeleton. It could be useful to expose the source dimensions in the shader.

[screenshot]

I see the issue with coordinate transforms, I'll look into that correction. (I'm not sure what is right here, but it doesn't look right.)
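A hypothetical call site for the function above in a line-annotation shader (`prop_radius()` assumes an annotation property named `radius`; this is an illustration, not code from the PR):

```glsl
void main() {
  // Diameter in source-dimension units converted to screen pixels, clamped
  // to at least one pixel as discussed earlier in the thread.
  setLineWidth(max(1.0, sourceDimToPixels(2.0 * prop_radius())));
}
```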
added setLineWidth, setNodeDiameter, nanometersToPixels. Migrated skeleton shader editor to control vertex shader instead of fragment shader (match behavior of annotation layer). There have been requests to add the `nanometersToPixels` function for annotations. Example shader:
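A plausible minimal sketch (not the PR's original example, which is not reproduced here) using the controls named above; the single-float form of `nanometersToPixels`, the `radius` vertex attribute, and the slider are assumptions:

```glsl
#uicontrol float nodeScale slider(min=0.5, max=4, default=1)
void main() {
  // Line width from the per-node radius, never thinner than one pixel.
  float widthPx = max(1.0, nanometersToPixels(2.0 * radius));
  setLineWidth(widthPx);
  setNodeDiameter(nodeScale * widthPx);
  emitDefault();
}
```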