feat(skeleton): skeleton line width control and conversion from nanometers #820

Open · wants to merge 2 commits into master
Conversation

@chrisj (Contributor) commented Jul 30, 2025

Adds setLineWidth, setNodeDiameter, and nanometersToPixels. Migrates the skeleton shader editor to control the vertex shader instead of the fragment shader (matching the behavior of the annotation layer). There have been requests to add the nanometersToPixels function for annotations as well.

Example shader:

void main() {
  setLineWidth(max(1.0, nanometersToPixels(min(1500.0, radius))));
  setNodeDiameter(nanometersToPixels(radius));
  
  if (compartment == 1.0) {
    emitRGB(vec3(1.0, 0.0, 0.0));
  }
  
  if (compartment == 2.0) {
    emitRGBA(vec4(1.0, 0.0, 1.0, 1.0));
  }
  
  if (compartment == 3.0) {
    emitRGB(vec3(0.0, 1.0, 1.0));
  }
}
[Screenshot, 2025-07-30 2:08 PM]

@chrisj force-pushed the cj-skeleton-shader-line-width-with-nanometer-conversion branch 3 times, most recently from 966ad0e to bfce2a6 on July 30, 2025 at 18:26
highp uint lineEndpointIndex = getLineEndpointIndex();
highp uint vertexIndex = aVertexIndex.x * lineEndpointIndex + aVertexIndex.y * (1u - lineEndpointIndex);
highp uint vertexIndex2 = aVertexIndex.y * lineEndpointIndex + aVertexIndex.x * (1u - lineEndpointIndex);
@chrisj (author) commented on this diff:
I am a bit confused about why I needed to add this other vertexIndex. The original seems to access the data for the opposite node of each line segment.
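For what it's worth, those two lines are a branchless two-way select: with lineEndpointIndex in {0, 1}, the first expression picks aVertexIndex.x for endpoint 1 and aVertexIndex.y for endpoint 0, and the second picks the opposite component. A small Python sketch of the same arithmetic (purely illustrative, not Neuroglancer code):

```python
def select_component(x, y, endpoint_index):
    """Branchless two-way select, mirroring the GLSL pattern:
    returns x when endpoint_index == 1 and y when endpoint_index == 0."""
    return x * endpoint_index + y * (1 - endpoint_index)

# For a vertex-index pair (a, b) processed at endpoint 1:
a, b = 5, 7
vertex_index = select_component(a, b, 1)   # this endpoint's index (a)
vertex_index2 = select_component(b, a, 1)  # the opposite node's index (b)
```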

@chrisj force-pushed the cj-skeleton-shader-line-width-with-nanometer-conversion branch 2 times, most recently from d040bfa to 4d0f4d8 on July 31, 2025 at 14:22
…ls. Migrated skeleton shader editor to control vertex shader instead of fragment shader (match behavior of annotation layer).
@chrisj force-pushed the cj-skeleton-shader-line-width-with-nanometer-conversion branch from 4d0f4d8 to f596890 on July 31, 2025 at 14:25
chrisj commented Aug 4, 2025

I added support for non-float vertex attributes on top of this branch, since I had already taken the step here of moving the editable shader from a fragment shader to a vertex shader. This simplifies things because we don't have to worry about the flat qualifier and the provoking vertex, but that is an alternative. There are some backwards-compatibility issues here; maybe the non-float change should be merged with #819 to deal with backwards compatibility.

I could keep the original name and vCustom as floats and prop_name() as the new type.

3a11257

@jbms (Collaborator) commented Aug 5, 2025

Taking radius into account when rendering skeletons has been requested a lot and would certainly be a valuable feature. However, I think it may be better to implement it directly as a separate rendering mode for a few reasons:

  1. We could then render as cylinders (with appropriate shading) rather than just wider lines.
  2. It is simpler to deal with the coordinate transforms and physical units outside of the user-defined shader.

The main downside, I suppose, would be less flexibility in deriving the radius from arbitrary properties in the shader. But is that really needed?

chrisj commented Aug 6, 2025

@jbms We were discussing using cylinders and would be happy with that approach, but one critical property of skeletons that we don't want to lose is the ability to set a minimum rendered width. This lets skeletons be used to spot the neuron within the 3D view. In this PR it is achieved by wrapping nanometersToPixels in max(1.0, x). The other part, clamping to a maximum nanometer value before the pixel conversion, is not important.

Sidenote: I should use the skeleton information for the segment list's right-click jump-to-segment when the mesh data is not available.

Would you suggest implementing the cylinders using a similar approach as the ellipse annotation?

Should nanometersToPixels still be added as a regular annotation feature? I got that request after implementing it for skeletons.

jbms commented Aug 6, 2025

Wanting to ensure it is at least one pixel makes sense --- we could ensure that even if rendered as a cylinder.

Using the ellipse approach would be simplest in some ways, but would result in a large number of triangles. The ideal approach, I think, would be to render them as "cylinder impostors" --- https://stackoverflow.com/questions/9595300/cylinder-impostor-in-glsl You can alter the depth used for depth testing in the shader, so that it ultimately produces a result identical to real cylinder geometry. I think it would also be fine if it is not 100% perfect.

I originally attempted to do the imposter approach for the ellipsoid annotations but it was proving to be too complicated.

If you prefer to go with the ellipsoid approach of using a real triangulated cylinder that would be fine also.
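To sketch the impostor math (not Neuroglancer code, just the standard ray/quadric derivation the linked answer builds on): in the fragment shader you intersect the view ray with an analytic cylinder, and the hit point gives both the shading normal and the value to write to gl_FragDepth. For an infinite cylinder along the z-axis the intersection reduces to a 2-D circle test:

```python
def ray_cylinder_t(origin, direction, radius):
    """Nearest intersection parameter t of the ray origin + t * direction
    with an infinite cylinder of the given radius along the z-axis,
    or None if the ray misses (or runs parallel to the axis)."""
    ox, oy, _ = origin
    dx, dy, _ = direction
    # Drop z: in the xy-plane the cylinder is a circle of the given radius.
    a = dx * dx + dy * dy
    b = 2.0 * (ox * dx + oy * dy)
    c = ox * ox + oy * oy - radius * radius
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return None
    return (-b - disc ** 0.5) / (2.0 * a)  # nearer of the two roots

# A ray aimed straight at a unit-radius cylinder hits the near wall at t = 4:
t = ray_cylinder_t((-5.0, 0.0, 0.0), (1.0, 0.0, 0.0), 1.0)
```

A real impostor shader would additionally cap the cylinder at the segment endpoints and convert the hit point's depth into window-space depth before writing it.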

As far as a nanometersToPixels function --- there are a couple of issues there:

  1. Physical units are not necessarily meters --- it could be a different unit.
  2. It would be better not to favor "nano" over another base scaling.
  3. There can sometimes be non-uniform scaling, though that is admittedly fairly rare.
  4. For 3-d perspective projection views, even with uniform scaling, a non-point annotation involves more than one location, meaning there isn't a single scale factor.

Certainly I can see that scaling annotations to physical units would be useful --- ideally there would be some way to do it while taking proper coordinate transforms into account, but not sure how that might be done exactly.
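To make point 4 concrete with a toy model (the function name and FOV parameterization are illustrative, not Neuroglancer's actual projection code): under a symmetric perspective projection, the pixel extent of a fixed physical size falls off linearly with view-space depth, so each location gets its own conversion factor:

```python
import math

def units_to_pixels(size, depth, fov_y_deg, viewport_height_px):
    """Approximate on-screen extent in pixels of physical extent `size`
    at view-space depth `depth`, for a symmetric perspective projection
    with vertical field of view `fov_y_deg`."""
    focal_px = viewport_height_px / (2.0 * math.tan(math.radians(fov_y_deg) / 2.0))
    return size * focal_px / depth

units_to_pixels(1.0, 10.0, 90.0, 1000)  # ~50 px
units_to_pixels(1.0, 20.0, 90.0, 1000)  # ~25 px: same size, twice the depth
```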

@fcollman (Contributor) commented Aug 7, 2025

"The main downside, I suppose, would be less flexibility in deriving the radius from arbitrary properties in the shader. But is that really needed?"

I think the short answer to this is yes.

Here is an example using Chris's branch:
[image]

  #uicontrol float myelin_thrsh slider(min=0, max=100, default=90)
  #uicontrol float myelin_thickness slider(min=0, max=50, default=10)

  void main() {
    if (myelin() > myelin_thrsh) {
      setLineWidth(myelin_thickness);
      setNodeDiameter(myelin_thickness);
    } else {
      setLineWidth(uLineWidth);
      setNodeDiameter(uNodeDiameter);
    }
  }

This was achievable with the shader above, where the user had defined a myelination score along the skeleton.

In general you could use this to transform skeleton vertex data into customized line widths to emphasize certain portions of a skeleton.

fcollman commented Aug 7, 2025

I wonder if "physical_to_pixels" might be a better name. I'm similarly concerned that people will have vertex data whose units are not necessarily aligned with the physical units they want to display, and giving them access in the shader lets them normalize any such issues away; for example, physical_to_pixels(radius * 1000.0) would be possible if radius were given in the wrong unit relative to the display.

fcollman commented Aug 7, 2025

"For 3-d perspective projection views, even if there is uniform scaling for non-point annotations there is more than one location involved meaning there isn't a single scale factor."

I think perhaps you didn't understand that this is the point of this function: it dynamically calculates the line width based on the camera location, so that when the line is closer it is scaled differently than when that same line is farther away.

jbms commented Aug 13, 2025

"For 3-d perspective projection views, even if there is uniform scaling for non-point annotations there is more than one location involved meaning there isn't a single scale factor."

"I think perhaps you didn't understand that this is the point of this function, it will dynamically calculate the line width based on the camera location, so that when the line is closer it is scaled differently than when that same line is farther away."

For a line annotation, the "nanometersToPixels" mapping may not be the same for the two points (if they aren't at the same depth relative to the perspective camera view). Currently, each annotation is rendered via a number of vertices --- 4 for each line, plus 4 more for each endpoint marker --- and the user-defined shader runs as part of the vertex shader for each vertex and is expected to produce identical output for each vertex, since the user currently has no (documented) way to vary the behavior depending on which vertex is being processed. In order for nanometersToPixels to work correctly, it would need to give different results depending on which vertex is being processed. This might work, though it would potentially be confusing to users given how annotation shaders currently work. With skeletons, in contrast, the shader explicitly operates on a single vertex, so nanometersToPixels is well defined.
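The per-vertex ambiguity can be made concrete with a pinhole-camera toy model (illustrative only): the conversion factor is focal_length / depth, so the two endpoints of one line annotation generally need different factors, and no single per-annotation value reproduces both:

```python
def px_per_unit(depth, focal_px=500.0):
    """Pixels covered by one physical unit at view-space depth `depth`,
    for an illustrative pinhole camera with focal length `focal_px` pixels."""
    return focal_px / depth

# Two endpoints of the same line annotation at different depths:
near_factor = px_per_unit(10.0)
far_factor = px_per_unit(20.0)
# near_factor is twice far_factor, so any single per-annotation scale
# factor misrenders at least one endpoint.
```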

@fcollman commented:
Just in case it is useful, here is a dynamic demo for skeletons:

example link

Do I understand correctly that your worry is about extending this metaphor to annotations (specifically ones with more than one vertex), since different scaling would be needed for pointA and pointB because they will in general be at different distances from the camera?

Are you opposed to merging this in for just skeletons now and worrying later about how to implement it for annotations, or are you worried that introducing this to skeletons will create a disconnect between the annotation and skeleton shader APIs that we should avoid?

jbms commented Aug 13, 2025

I suppose it is fine to add this just to skeletons for now if that is useful, provided that we can figure out a reasonable way to handle the coordinate transforms and units.

chrisj commented Aug 14, 2025

@jbms I added an implementation for the line annotation; it looks like this:

float sourceDimToPixels(float value) {
  vec2 lineOffset = getLineOffset();
  highp vec3 vertexA = projectModelVectorToSubspace(getVertexPosition0());
  highp vec3 vertexB = projectModelVectorToSubspace(getVertexPosition1());
  highp vec3 vertex = mix(vertexA, vertexB, lineOffset.x);
  return nanometersToPixels(value, vertex, uProjection, uModelView, 1.0 / uLineParams.x);
}

For now I renamed it to sourceDimToPixels, as that seems more accurate: our skeleton uses a 1x1x1 source dimension, whereas local annotations use the voxel scale by default. This way we shouldn't have to worry about the units, if I understand it correctly. Below is a screenshot showing a line annotation in blue that I have roughly positioned in line with a segment of the skeleton.

It could be useful to expose the source dimensions in the shader.

[Screenshot, 2025-08-14 9:38 AM]

I see the issue with coordinate transforms; I'll look into that correction. (I'm not sure what is right here, but it doesn't look right.)
[Screenshot, 2025-08-14 9:47 AM]
