Neuroglancer Precomputed Mesh #8236


Open · wants to merge 54 commits from neuroglancer-mesh into master

Conversation

@frcroth (Member) commented on Nov 27, 2024

URL of deployed dev instance (used for testing):

Steps to test:

  • Test previously created meshes. They should still work even though the protocol has changed (I only tested l4_sample meshfile_4_4_1)
  • Test Neuroglancer precomputed meshes; I tested:
    • gs://fafb-ffn1-20190805/segmentation/
    • gs://neuroglancer-janelia-flyem-hemibrain/v1.0/segmentation
  • Check that meshes can be loaded, downloaded (frontend, backend via route), and that they align with the data

TODOs:

  • Frontend
    • Adapt new protocol
  • Backend
    • Move all Neuroglancer precomputed mesh handling to a different service?
    • Add enum for mesh types
      • Replaced the magic values with constants; I think that's fine for now.
    • ??? Maybe change API to have a mesh object instead of different keys for name, path, type
    • Support private meshes (using credentials?)
      • not for now (since we do not know if these exist)
    • Remove last TODO in DSFullMeshService
      • mapping not supported
    • Can some sharding stuff be unified between array and mesh?
    • Remove "get" for shardingSpec lookup
    • Support non-sharded meshes?
      • Let's not do this for now, since I haven't encountered any non-sharded meshes; wait until someone actually requests it.
    • Tests?
    • Caching
    • Extract stuff for Law of Demeter violations
    • Allow meshes to be found somewhere other than the default "mesh" location
    • Do we need to send meshFilePath everywhere, or do we want to create that path in the datastore each time?
      • I'll keep it for now, since otherwise we would need to request the layer info JSON every time. With Virtual datasets #8384 we plan to also add references to other special files, so the meshfile could then also be referenced in the datasource properties (and later in the DB), and we would not need to send it around anymore.
    • STL download
    • Test regular meshes
    • Remove unused stuff from storage case classes (https://scm.slack.com/archives/C5AKLAV0B/p1744642460772369?thread_ts=1742830834.799849&cid=C5AKLAV0B)
    • Clean up code duplication in MeshFileService
    • Meshes are loaded, but very far away. Why?
    • Rename stuff in NeuroglancerPrecomputedMeshService (we do not need "Neuroglancer precomputed" in the name anymore)

Issues:


(Please delete unneeded items, merge only when none are left open)

@coderabbitai bot (Contributor) commented on Nov 27, 2024

📝 Walkthrough

This change introduces comprehensive support for loading and handling meshes from remote Neuroglancer precomputed datasets. It adds new backend services, data structures, and sharding utilities for parsing, caching, and reading Neuroglancer mesh metadata and chunk data. The frontend is updated to use richer mesh file information, replacing simple mesh file names with structured objects containing metadata. Numerous type and API changes propagate these new structures throughout the codebase. JNI and Scala native interfaces are updated to support vertex quantization decoding. Extensive tests and dependency injection updates are included to ensure correct integration and manifest parsing.
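
For orientation, the following is a minimal, self-contained Scala sketch of the chunk-to-shard addressing that the Neuroglancer "sharded" format prescribes (pre-shift, hash, minishard/shard bit masks, hex-padded shard file names). It only illustrates the scheme the new sharding utilities implement and is not the PR's actual code; the identity hash is shown for brevity, while the format also allows murmurhash3_x86_128.

object ShardAddressingSketch {
  final case class ShardingSpec(preshiftBits: Int, minishardBits: Int, shardBits: Int)

  // Map a chunk/segment id to its (shard, minishard) numbers, per the sharded-format spec.
  def shardAndMinishard(chunkId: Long, spec: ShardingSpec): (Long, Long) = {
    val hashed = chunkId >>> spec.preshiftBits // "identity" hash applied after the pre-shift
    val minishard = hashed & ((1L << spec.minishardBits) - 1)
    val shard = (hashed >>> spec.minishardBits) & ((1L << spec.shardBits) - 1)
    (shard, minishard)
  }

  // Shard file name: the shard number as zero-padded hex, one digit per four shard bits.
  def shardFileName(shardNumber: Long, spec: ShardingSpec): String = {
    val hexDigits = math.ceil(spec.shardBits / 4.0).toInt
    s"%0${hexDigits}x.shard".format(shardNumber)
  }

  def main(args: Array[String]): Unit = {
    val spec = ShardingSpec(preshiftBits = 0, minishardBits = 6, shardBits = 9)
    println(shardAndMinishard(123456789L, spec)) // which shard/minishard to open
    println(shardFileName(42L, spec))            // "02a.shard"
  }
}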

Changes

Files/Paths and change summary:

  • frontend/javascripts/types/api_flow_types.ts: Renamed and expanded APIMeshFile to APIMeshFileInfo with additional metadata fields.
  • frontend/javascripts/admin/admin_rest_api.ts, frontend/javascripts/admin/api/mesh.ts, frontend/javascripts/oxalis/model/actions/annotation_actions.ts, frontend/javascripts/oxalis/model/sagas/mesh_saga.ts, frontend/javascripts/oxalis/store.ts, frontend/javascripts/oxalis/view/context_menu.tsx, frontend/javascripts/oxalis/view/right-border-tabs/segments_tab/load_mesh_menu_item_label.tsx, frontend/javascripts/oxalis/view/right-border-tabs/segments_tab/segment_list_item.tsx, frontend/javascripts/oxalis/view/right-border-tabs/segments_tab/segments_view.tsx: Replaced usage of APIMeshFile with APIMeshFileInfo and updated property access from meshFileName to name throughout mesh-related actions, reducers, sagas, and UI components. Updated function signatures and prop types accordingly.
  • frontend/javascripts/oxalis/api/api_latest.ts, frontend/javascripts/oxalis/controller/url_manager.ts, frontend/javascripts/oxalis/model/reducers/annotation_reducer.ts, frontend/javascripts/oxalis/model/sagas/proofread_saga.ts: Updated internal property access for mesh file identifiers from meshFileName to name in API, URL manager, reducers, and sagas.
  • frontend/javascripts/admin/api/mesh.ts: Refined mesh-related API types, replacing string mesh file references with structured APIMeshFileInfo objects. Updated request and response types, added new request objects, and improved type safety for chunk and segment info.
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshFileService.scala: Added a new service to handle Neuroglancer precomputed mesh files: exploring, caching, parsing metadata, listing mesh chunks, reading chunk data, and handling quantization bits.
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerMeshHelper.scala: Introduced data structures and helper logic for parsing and enriching Neuroglancer mesh segment manifests, mesh chunk info, and LOD transforms.
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerMesh.scala: Added a wrapper class for Neuroglancer meshes, exposing sharding specification and transform matrix conversion utilities.
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/NeuroglancerPrecomputedShardingUtils.scala: Introduced a trait to handle Neuroglancer sharding: reading shard indices, minishard indices, parsing chunk info, and decompressing chunk data as per the Neuroglancer specification.
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/PrecomputedHeader.scala: Extended header and sharding specification with mesh path, hashing logic, sharding bitmasks, and helper methods for chunk-to-shard mapping and path generation.
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/PrecomputedArray.scala: Refactored to delegate sharding logic to the new sharding utils trait and sharding specification, removing internal sharding caches and parsing logic.
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala: Refactored mesh file info to include type and path, introduced mesh file type enum, delegated Neuroglancer logic to helper trait, and updated method signatures for chunk listing and reading.
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/DSFullMeshService.scala: Added support for loading full meshes from remote Neuroglancer mesh files, including handling of mesh file path/type, segment IDs, LOD transforms, and quantization bits. Updated STL conversion logic accordingly.
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DSMeshController.scala: Integrated Neuroglancer mesh service for mesh file exploration, chunk listing, and chunk reading, with conditional routing based on mesh file type. Updated controller methods and dependencies.
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/DataStoreModule.scala: Registered the new Neuroglancer mesh file service as an eager singleton for dependency injection.
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/draco/NativeDracoToStlConverter.scala, webknossos-jni/src/dracoToStlDecoder.cpp, webknossos-jni/src/include/com_scalableminds_webknossos_datastore_draco_NativeDracoToStlConverter.h: Updated native STL conversion to support vertex quantization bits, including parameter propagation and quantization decoding logic.
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/AdHocMeshService.scala, webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/AdHocMeshServiceHolder.scala, webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/FullMeshHelper.scala, webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshMappingHelper.scala: Changed package declarations and imports to reflect new mesh service organization.
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/BinaryDataController.scala, webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala, webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DataConverter.scala: Added or updated imports for new mesh services.
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteDatastoreClient.scala, webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TracingStoreModule.scala, webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/VolumeTracingController.scala, webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingService.scala, webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/volume/TSFullMeshService.scala, webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/volume/VolumeTracingService.scala: Updated import paths for mesh services and request types to new package locations.
  • test/backend/MurmurHashTestSuite.scala, test/backend/NeuroglancerMeshTestSuite.scala: Added tests for MurmurHash3 correctness and Neuroglancer segment manifest parsing.
  • CHANGELOG.unreleased.md: Added changelog entry for viewing meshes from Neuroglancer Precomputed Datasets.

Assessment against linked issues

Objective: Support loading meshes from remote neuroglancer datasets (#6826)

Poem

A mesh from afar, with sharding and hash,
Now hops to our code in a quantum leap dash!
With Neuroglancer’s secrets decoded anew,
Chunks and LODs, we fetch and we view.
From Scala to JS, our types are refined—
🐇 This rabbit is thrilled with the features combined!
Meshes remote, you’re now part of the crew!


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c76cf4a and a374e04.

📒 Files selected for processing (1)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshFileService.scala (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshFileService.scala
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: backend-tests
  • GitHub Check: build-smoketest-push


@frcroth force-pushed the neuroglancer-mesh branch from 2dc4df5 to 036169b on March 24, 2025, 14:25
@frcroth force-pushed the neuroglancer-mesh branch from 7d7ec5e to 1924d98 on March 31, 2025, 11:55
@frcroth marked this pull request as ready for review on April 23, 2025, 08:24
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 17

🔭 Outside diff range comments (4)
frontend/javascripts/oxalis/view/right-border-tabs/segments_tab/segment_list_item.tsx (1)

76-96: 🛠️ Refactor suggestion

Prevent “undefined” mesh names from slipping through to the action

loadPrecomputedMesh expects a non‑empty string, yet we pass currentMeshFile?.name.
APIMeshFileInfo.name is typed as string, but callers can still send an empty string or undefined (because the object comes from the backend and is marked optional).
Guard against that to avoid runtime errors and misleading menu‑state:

-    disabled: !currentMeshFile,
+    disabled: !currentMeshFile || !currentMeshFile.name,
...
-            currentMeshFile?.name,
+            // name is guaranteed by the disabled‑guard above
+            currentMeshFile!.name,
webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DSMeshController.scala (1)

66-71: ⚠️ Potential issue

mappingNameForMeshFile is executed for Neuroglancer files although unused

mappingNameForMeshFile is computed before you branch on fileType.
For Neuroglancer meshes meshFile.name may be meaningless, and the lookup could throw or perform unnecessary I/O.
Move the call into the legacy‑mesh branch to avoid wasted work and unexpected failures.

frontend/javascripts/admin/api/mesh.ts (1)

106-112: 🛠️ Refactor suggestion

Possible O(N²) copy when slicing ArrayBuffers

Inside the loop every slice allocates a new buffer. For large meshes this becomes expensive.

Use dataForChunks.map((_, i) => dracoDataChunks.subarray(...)) and pass Uint8Array views to Draco instead.
If the worker interface forces copies, leave a comment explaining the trade‑off.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/DSFullMeshService.scala (1)

63-69: ⚠️ Potential issue

Remote‑mesh dispatch misses the “path‑without‑name” use‑case

loadFor demands both meshFileName and meshFilePath to route to loadFullMeshFromRemoteNeuroglancerMeshFile.
However, the Neuroglancer “precomputed” format is referenced by a single URL / GCS path and does not necessarily have the concept of a mesh file name.
If meshFilePath is defined but meshFileName is None, the call silently falls back to ad‑hoc meshing, giving a confusing “mag.neededForAdHoc” error.

Consider relaxing the guard to something like:

-    fullMeshRequest.meshFileName match {
-      case Some(_) if fullMeshRequest.meshFilePath.isDefined =>
+    (fullMeshRequest.meshFileName, fullMeshRequest.meshFilePath) match {
+      case (_, Some(_)) =>   // path present ⇒ neuroglancer

or explicitly validating the required arguments and returning a dedicated error.

🧹 Nitpick comments (24)
frontend/javascripts/oxalis/controller/segment_mesh_controller.ts (1)

198-198: Remove debug console.log or gate it
The console.log("Mesh using scale and lod ", scale, lod); will flood the console on each mesh addition. Consider using a debug-level logger or wrapping it behind a verbosity flag to avoid noisy logs in production.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshMappingHelper.scala (1)

1-10: Architecture reorganization improves code organization.

Moving this helper to the mesh subpackage and updating the imports creates a clearer separation of concerns for mesh-related functionality. This reorganization supports the introduction of the Neuroglancer precomputed mesh handling by grouping related services together.

This architectural change follows good practices by:

  1. Creating a dedicated namespace for mesh-related services
  2. Properly importing dependencies from the parent package
  3. Maintaining the interface while improving code organization
frontend/javascripts/oxalis/view/right-border-tabs/segments_tab/segments_view.tsx (1)

1443-1449: Null‑safety guard before dereferencing currentMeshFile

Inside handleLoadMeshesFromFile you already early‑return when
currentMeshFile == null, but the TypeScript compiler cannot infer that
inside the nested callback.
Add a guard (or ?.) to silence potential undefined access and avoid
run‑time errors after refactors:

-      this.props.loadPrecomputedMesh(
+      const { currentMeshFile } = this.props;
+      if (!currentMeshFile) return;               // extra safety
+      this.props.loadPrecomputedMesh(
         segment.id,
         segment.somePosition,
         segment.someAdditionalCoordinates,
-        this.props.currentMeshFile.name,
+        currentMeshFile.name,
       );
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerMesh.scala (1)

22-33: meshInfoVersion is hard‑coded – expose it via config or companion of MeshInfo

The constant meshInfoVersion = 7 will drift once Neuroglancer updates
its spec. A mismatch causes silent parsing errors that are hard to
diagnose.
Expose the supported versions in a central place (e.g. companion object of
NeuroglancerPrecomputedMeshInfo) and reference it here to avoid
duplication.
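
A minimal Scala sketch of that suggestion, with illustrative names only (not the PR's actual code):

// Central place for the supported meshInfo version(s), to be referenced by
// NeuroglancerMesh and the parser alike instead of a duplicated literal.
object NeuroglancerMeshInfoVersions {
  val supported: Set[Int] = Set(7)
  def isSupported(version: Int): Boolean = supported.contains(version)
}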

frontend/javascripts/oxalis/model/actions/annotation_actions.ts (3)

243-257: Default callback type does not match _.noop signature

_.noop has type (...args: any[]) => undefined, therefore
(meshes: Array<APIMeshFileInfo>) => void is not satisfied and the
parameter is silently ignored by TypeScript.
Define an explicit no‑op that honours the expected parameter to retain type
safety and future maintainability:

-  callback: (meshes: Array<APIMeshFileInfo>) => void = _.noop,
+  callback: (meshes: Array<APIMeshFileInfo>) => void = () => {},

305-310: Consider normalising mesh‑file list inside the action creator

updateMeshFileListAction forwards raw backend data directly to the
store. Duplicates or unsorted entries can bleed into UI components.
Adding a small normalisation step (e.g. uniqBy(name), alphabetical sort)
here prevents scattered clean‑up code elsewhere.

Not a blocker, just a suggestion for future polish.


377-379: Deferred’s rejection channel is typed as unknown – widen to Error

const readyDeferred = new Deferred<APIMeshFileInfo[], unknown>();

unknown forces callers to down‑cast; use Error (or a union of the
specific error types) to preserve traceability.

-const readyDeferred = new Deferred<APIMeshFileInfo[], unknown>();
+const readyDeferred = new Deferred<APIMeshFileInfo[], Error>();
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/PrecomputedArray.scala (1)

79-90: Bypass empty‑spec look‑ups for non‑sharded datasets

When shardingSpecification is empty, both getMinishardInfo and
getPathForShard still allocate objects and perform hashing although the
result is deterministic. A micro‑optimisation:

if (shardingSpecification.isEmpty) {
  // fall back to unsharded path directly
} else {
  ...
}

Not critical but could save many heap allocations during volume loading.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/NeuroglancerPrecomputedShardingUtils.scala (1)

65-80: Avoid the extra byte‑copy when parsing minishard index

ByteBuffer.allocate(bytes.length) followed by put(bytes) duplicates the array.
Using ByteBuffer.wrap(bytes) eliminates the copy and reduces GC pressure:

- val buf = ByteBuffer.allocate(bytes.length)
- buf.put(bytes)
+ val buf = ByteBuffer.wrap(bytes)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/PrecomputedHeader.scala (2)

93-99: Expose hash as an ADT to fail fast on unsupported algorithms

Throwing an IllegalArgumentException at runtime delays error discovery.
Replacing the raw String with a sealed trait (or an Enumeration) ensures unsupported hashes are blocked at deserialisation/compile‑time and improves type‑safety.

sealed trait ShardingHash; object ShardingHash {
  case object Identity                 extends ShardingHash
  case object MurmurHash3_X86_128      extends ShardingHash
  //
}

128-135: Avoid floating‑point arithmetic in getPathForShard

Using Float for the digit width introduces rounding errors for large bit counts.

- String.format(s"%1$$${(shard_bits.toFloat / 4).ceil.toInt}s", shardNumber.toHexString)
+ val width = math.ceil(shard_bits / 4.0).toInt
+ String.format(s"%1$$${width}s", shardNumber.toHexString)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DSMeshController.scala (1)

37-47: Deduplicate mesh file list before returning to the client

meshFiles ++ neuroglancerMeshFiles may return duplicate entries when the same physical mesh is exposed via both protocols.
Consider de‑duplicating on path (or a composite key) to avoid confusing the UI:

val allMeshFiles = (meshFiles ++ neuroglancerMeshFiles).groupBy(_.path).values.map(_.head).toList
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshService.scala (1)

26-34: Prefer idiomatic camel‑case field names in Scala case classes

All fields are written in snake_case (lod_scale_multiplier, vertex_quantization_bits, …).
While this mirrors the JSON schema, it violates the project’s Scala style‑guide and leaks the JSON naming into the domain model.

Consider keeping the internal model idiomatic and add an explicit Reads/Writes that maps the JSON keys:

-case class NeuroglancerPrecomputedMeshInfo(
-    lod_scale_multiplier: Double,
-    transform: Array[Double],
-    sharding: Option[ShardingSpecification],
-    vertex_quantization_bits: Int,
-)
+case class NeuroglancerPrecomputedMeshInfo(
+    lodScaleMultiplier: Double,
+    transform: Array[Double],
+    sharding: Option[ShardingSpecification],
+    vertexQuantizationBits: Int,
+)
implicit val jsonFormat: OFormat[NeuroglancerPrecomputedMeshInfo] = (
  (__ \ "lod_scale_multiplier").format[Double] and
  (__ \ "transform").format[Array[Double]] and
  (__ \ "sharding").formatNullable[ShardingSpecification] and
  (__ \ "vertex_quantization_bits").format[Int]
)(NeuroglancerPrecomputedMeshInfo.apply, unlift(NeuroglancerPrecomputedMeshInfo.unapply))
frontend/javascripts/admin/api/mesh.ts (2)

13-16: Comment and type mismatch – this is a 3 × 4, not 4 × 3 matrix

transform has three Vector4 rows → 3 × 4 matrix, yet the comment says 4x3.

-  transform: [Vector4, Vector4, Vector4]; // 4x3 matrix
+  transform: [Vector4, Vector4, Vector4]; // 3x4 matrix (row‑major)

24-27: ListMeshChunksRequest can reuse existing types

Instead of a bespoke type locally, expose it from the shared API typings to avoid drift:

export interface ListMeshChunksRequest {
  meshFile: APIMeshFileInfo;
  segmentId: number;
}
frontend/javascripts/oxalis/model/sagas/mesh_saga.ts (3)

815-818: Linear search over mesh files is O(n); consider a Map keyed by name

availableMeshFiles.find(...) is executed for every mesh load.
Given typical projects have many mesh files (>100), cache them keyed by name to avoid repeated scans.


931-934: Dead code – meshLodInfo.transform; has no effect

The statement does nothing and should be removed.


1068-1072: Guard against empty input but still call heavy merge

mergeGeometries is called only when length > 0, yet even a single small geometry merge is expensive.
Shortcut when sortedBufferGeometries.length === 1 and reuse the original geometry.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/DSFullMeshService.scala (1)

232-238: Transform loses translation components

When constructing transform for Neuroglancer meshes only the diagonal (scale) terms are kept; translation as well as potential shearing is dropped:

Array(
  Array(lodTransform(0)(0), 0, 0),
  Array(0, lodTransform(1)(1), 0),
  Array(0, 0, lodTransform(2)(2))
)

If the original dataset is not centred at the origin the resulting STL will be mis‑aligned.
Either copy the full matrix or document why translation is guaranteed to be zero for this format.
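
As an illustration of the first option, a hedged Scala sketch that carries the whole row-major 3 × 4 LOD transform (including the translation column) instead of only the diagonal; downstream consumers would then need to apply the translation as well:

// Copy the full 3x4 transform rather than only the scale diagonal
// (lodTransform is assumed to be a row-major 3x4 matrix; sketch only).
def fullLodTransform(lodTransform: Array[Array[Double]]): Array[Array[Double]] =
  Array(
    Array(lodTransform(0)(0), lodTransform(0)(1), lodTransform(0)(2), lodTransform(0)(3)),
    Array(lodTransform(1)(0), lodTransform(1)(1), lodTransform(1)(2), lodTransform(1)(3)),
    Array(lodTransform(2)(0), lodTransform(2)(1), lodTransform(2)(2), lodTransform(2)(3))
  )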

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerMeshHelper.scala (3)

20-37: Missing resource closing & potential large‑allocation leak

LittleEndianDataInputStream backed by ByteArrayInputStream is never closed.
While GC will eventually collect it, explicitly closing in a try/finally (or using Using) is safer, especially when parsing many manifests in parallel.
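
A small sketch of the Using-based variant, assuming Guava's LittleEndianDataInputStream as referenced above (the field read is illustrative, not the actual manifest layout):

import java.io.ByteArrayInputStream
import scala.util.Using
import com.google.common.io.LittleEndianDataInputStream

// Close the little-endian stream deterministically, even if parsing throws.
def readFirstManifestInt(manifestBytes: Array[Byte]): Int =
  Using.resource(new LittleEndianDataInputStream(new ByteArrayInputStream(manifestBytes))) { dis =>
    dis.readInt()
  }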


50-58: Manifest parsing loop is hard to read and error‑prone

The double for‑comprehension with a mutable tuple makes the intent opaque; a reader needs to mentally unfold 3×numChunks iterations.

A clearer, allocation‑free alternative:

val x = Array.fill(numChunksPerLod(lod))(dis.readInt)
val y = Array.fill(numChunksPerLod(lod))(dis.readInt)
val z = Array.fill(numChunksPerLod(lod))(dis.readInt)
val currentChunkPositions = x.indices.map(i => Vec3Int(x(i), y(i), z(i))).toList

This improves readability and avoids tuple field access.


149-167: Inefficient per‑chunk pow + repeated multiplications

computeGlobalPosition is invoked for every chunk and performs Math.pow(2, lod) each time.
Pre‑compute lodFactor = (1 << lod) * segmentInfo.lodScales(lod) * lodScaleMultiplier once per LOD and reuse it.
This will noticeably speed up manifest expansion for large segments.
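
A hedged sketch of that hoisting; lodScales and lodScaleMultiplier are assumed to come from the parsed segment manifest and mesh info:

// Compute each LOD's factor once, outside the per-chunk loop.
def lodFactors(lodScales: Array[Float], lodScaleMultiplier: Double): Array[Double] =
  Array.tabulate(lodScales.length) { lod =>
    (1L << lod).toDouble * lodScales(lod) * lodScaleMultiplier
  }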

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala (2)

141-147: versionForMeshFile silently falls back to 0 on read error

Returning 0 when the attribute is missing masks corrupted mesh files and skips the compatibility guard set up elsewhere.
Consider propagating the error (or at least logging a warning) so that operators notice malformed files early.


270-281: Restoring request order allocates & copies unnecessarily

dataSorted.flatMap(d => d._1).toArray concatenates all chunks, triggering an extra traversal and allocation.

You can pre‑allocate the output array once:

val total = requestsReordered.map(_._1.byteSize).sum
val out   = new Array[Byte](total)
var pos   = 0
for ((bytes, _) <- dataSorted) {
  System.arraycopy(bytes, 0, out, pos, bytes.length)
  pos += bytes.length
}

Minor but measurable when downloading thousands of chunks.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c5847b7 and 5fb3d22.

📒 Files selected for processing (45)
  • CHANGELOG.unreleased.md (1 hunks)
  • frontend/javascripts/admin/admin_rest_api.ts (2 hunks)
  • frontend/javascripts/admin/api/mesh.ts (5 hunks)
  • frontend/javascripts/oxalis/api/api_latest.ts (4 hunks)
  • frontend/javascripts/oxalis/controller/segment_mesh_controller.ts (1 hunks)
  • frontend/javascripts/oxalis/controller/url_manager.ts (1 hunks)
  • frontend/javascripts/oxalis/model/actions/annotation_actions.ts (4 hunks)
  • frontend/javascripts/oxalis/model/reducers/annotation_reducer.ts (1 hunks)
  • frontend/javascripts/oxalis/model/sagas/mesh_saga.ts (21 hunks)
  • frontend/javascripts/oxalis/model/sagas/proofread_saga.ts (1 hunks)
  • frontend/javascripts/oxalis/store.ts (2 hunks)
  • frontend/javascripts/oxalis/view/context_menu.tsx (3 hunks)
  • frontend/javascripts/oxalis/view/right-border-tabs/segments_tab/load_mesh_menu_item_label.tsx (2 hunks)
  • frontend/javascripts/oxalis/view/right-border-tabs/segments_tab/segment_list_item.tsx (4 hunks)
  • frontend/javascripts/oxalis/view/right-border-tabs/segments_tab/segments_view.tsx (5 hunks)
  • frontend/javascripts/types/api_flow_types.ts (1 hunks)
  • test/backend/MurmurHashTestSuite.scala (1 hunks)
  • test/backend/NeuroglancerMeshTestSuite.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/DataStoreModule.scala (2 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/BinaryDataController.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DSMeshController.scala (6 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/MurmurHash3.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/NeuroglancerPrecomputedShardingUtils.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/PrecomputedArray.scala (2 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/PrecomputedHeader.scala (3 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/draco/NativeDracoToStlConverter.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DataConverter.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/AdHocMeshService.scala (2 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/AdHocMeshServiceHolder.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/DSFullMeshService.scala (7 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/FullMeshHelper.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala (10 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshMappingHelper.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerMesh.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerMeshHelper.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshService.scala (1 hunks)
  • webknossos-jni/src/dracoToStlDecoder.cpp (3 hunks)
  • webknossos-jni/src/include/com_scalableminds_webknossos_datastore_draco_NativeDracoToStlConverter.h (1 hunks)
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteDatastoreClient.scala (1 hunks)
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TracingStoreModule.scala (1 hunks)
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/VolumeTracingController.scala (1 hunks)
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingService.scala (1 hunks)
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/volume/TSFullMeshService.scala (1 hunks)
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/volume/VolumeTracingService.scala (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (11)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/VolumeTracingController.scala (2)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/AccessTokenService.scala (3)
  • webknossos (59-61)
  • UserAccessRequest (29-29)
  • UserAccessRequest (30-61)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/DSFullMeshService.scala (2)
  • FullMeshRequest (21-33)
  • FullMeshRequest (35-37)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala (1)
  • MeshFileService (60-289)
test/backend/MurmurHashTestSuite.scala (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/MurmurHash3.scala (2)
  • MurmurHash3 (11-126)
  • hash64 (121-125)
frontend/javascripts/oxalis/store.ts (1)
frontend/javascripts/types/api_flow_types.ts (1)
  • APIMeshFileInfo (939-948)
frontend/javascripts/oxalis/view/context_menu.tsx (1)
frontend/javascripts/types/api_flow_types.ts (1)
  • APIMeshFileInfo (939-948)
frontend/javascripts/oxalis/view/right-border-tabs/segments_tab/segment_list_item.tsx (1)
frontend/javascripts/types/api_flow_types.ts (1)
  • APIMeshFileInfo (939-948)
webknossos-jni/src/include/com_scalableminds_webknossos_datastore_draco_NativeDracoToStlConverter.h (1)
webknossos-jni/src/dracoToStlDecoder.cpp (1)
  • jbyteArray (21-105)
frontend/javascripts/oxalis/view/right-border-tabs/segments_tab/segments_view.tsx (1)
frontend/javascripts/types/api_flow_types.ts (1)
  • APIMeshFileInfo (939-948)
frontend/javascripts/oxalis/model/sagas/mesh_saga.ts (6)
frontend/javascripts/types/api_flow_types.ts (3)
  • APIMeshFileInfo (939-948)
  • APIDataset (240-243)
  • APISegmentationLayer (102-110)
frontend/javascripts/oxalis/model/sagas/effect-generators.ts (1)
  • select (12-14)
frontend/javascripts/oxalis/model/actions/annotation_actions.ts (1)
  • updateCurrentMeshFileAction (312-320)
frontend/javascripts/oxalis/constants.ts (2)
  • Vector3 (13-13)
  • Vector4 (14-14)
frontend/javascripts/admin/api/mesh.ts (1)
  • MeshLodInfo (13-16)
frontend/javascripts/libs/BufferGeometryUtils.ts (1)
  • mergeGeometries (1367-1367)
frontend/javascripts/oxalis/model/actions/annotation_actions.ts (1)
frontend/javascripts/types/api_flow_types.ts (1)
  • APIMeshFileInfo (939-948)
frontend/javascripts/admin/api/mesh.ts (3)
frontend/javascripts/oxalis/constants.ts (2)
  • Vector4 (14-14)
  • Vector3 (13-13)
frontend/javascripts/types/api_flow_types.ts (2)
  • APIMeshFileInfo (939-948)
  • APIDataSourceId (188-188)
app/models/dataset/Dataset.scala (1)
  • dataSourceId (89-89)
🔇 Additional comments (54)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/FullMeshHelper.scala (1)

1-1: Align package declaration with new mesh subpackage
The package has been updated to com.scalableminds.webknossos.datastore.services.mesh to match the refactoring of mesh-related classes into a dedicated subpackage. All related imports and references should now resolve correctly.

webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TracingStoreModule.scala (1)

5-5: Update import to new mesh package for AdHocMeshServiceHolder
The import path for AdHocMeshServiceHolder now reflects the mesh subpackage. This aligns with the rearrangement of mesh services; ensure the binding in configure() uses this class correctly.

CHANGELOG.unreleased.md (1)

21-21: Document Neuroglancer precomputed mesh support in changelog
The new feature entry under "Added" accurately reflects PR #8236. Verify that this entry aligns with formatting and that no additional migration or client updates are required.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/BinaryDataController.scala (1)

21-21: Adjust mesh service imports to new mesh subpackage
The import for AdHocMeshRequest, AdHocMeshService, and AdHocMeshServiceHolder has been updated to the services.mesh namespace. Confirm no residual imports remain pointing to the old package.

webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteDatastoreClient.scala (1)

22-22: Correct import path for FullMeshRequest
The FullMeshRequest import is updated to reference the services.mesh package. Ensure all usages of FullMeshRequest in this class and related modules have been updated accordingly.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DataConverter.scala (1)

5-5: Verify no dependency cycle introduced
The new import DataTypeFunctors from services.mesh is necessary for convertData, but please verify this does not create a cyclic dependency between core services and the mesh subpackage.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (1)

22-22: Approve injection of mesh utilities
The import and injection of MeshFileService and MeshMappingHelper align with the new mesh handling architecture. No immediate issues detected.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/AdHocMeshServiceHolder.scala (2)

1-1: Package refactor consistency check
The package declaration was updated to services.mesh. Ensure that the file path and build configuration reflect this move so that the class is resolved correctly.


3-3: Approve imports for mesh service holder
Adding BinaryDataService and MappingService imports is correct and necessary for configuring AdHocMeshServiceHolder.

webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/volume/VolumeTracingService.scala (1)

21-21: Approve updated mesh imports
The imports of AdHocMeshRequest, AdHocMeshService, and AdHocMeshServiceHolder from services.mesh correctly follow the package restructuring and align with their usage in creating ad-hoc meshes.

frontend/javascripts/oxalis/model/sagas/proofread_saga.ts (1)

186-186: Property name updated correctly to match new mesh file info structure.

The change from currentMeshFile.meshFileName to currentMeshFile.name aligns with the broader refactoring to use the APIMeshFileInfo type (which uses name as the identifier property) instead of the previous APIMeshFile type (which used meshFileName).

webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/VolumeTracingController.scala (1)

25-26: Import paths updated to match backend restructuring.

The import changes reflect the reorganization where FullMeshRequest has been moved to the mesh subpackage while UserAccessRequest remains in the original package. This separation aligns with the broader architecture changes to support Neuroglancer precomputed meshes.

frontend/javascripts/oxalis/controller/url_manager.ts (1)

303-303: ❓ Verification inconclusive

Property name updated correctly but type definitions may need review.

The change from currentMeshFile?.meshFileName to currentMeshFile?.name aligns with the broader refactoring effort, but the type definitions in the file still use meshFileName (see PrecomputedMeshUrlDescriptor on line 40 and UrlStateByLayer on line 47).

While functionally correct (the property access has been updated), you might want to verify whether the type definitions should also be updated for consistency, or if this inconsistency is intentional for URL backward compatibility.


🏁 Script executed:

#!/bin/bash
# Check for other occurrences of meshFileName in type definitions across the codebase
grep -r "meshFileName.*:" --include="*.ts" frontend/javascripts/

Length of output: 2009


Verify consistency of meshFileName type definitions

You’ve updated the property access in url_manager.ts from meshFileName to name, but the type definitions throughout the codebase still reference meshFileName. Please review whether these should be renamed to name or intentionally left for backward‐compatibility.

Impacted files include (non‑exhaustive list):

  • frontend/javascripts/oxalis/api/api_latest.ts
  • frontend/javascripts/oxalis/controller/url_manager.ts
  • frontend/javascripts/oxalis/model/actions/segmentation_actions.ts
  • frontend/javascripts/oxalis/model/actions/annotation_actions.ts
  • frontend/javascripts/oxalis/model/sagas/mesh_saga.ts
  • frontend/javascripts/oxalis/model_initialization.ts
  • frontend/javascripts/oxalis/store.ts
  • frontend/javascripts/types/schemas/url_state.schema.ts
  • frontend/javascripts/test/puppeteer/dataset_rendering.screenshot.ts

Ensure you update the corresponding interfaces (e.g., PrecomputedMeshUrlDescriptor, UrlStateByLayer) and schema definitions if the rename should be global.

webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/volume/TSFullMeshService.scala (1)

18-18: Import path updated to reflect package restructuring.

The import statement has been updated to reference mesh-related classes from the dedicated mesh subpackage instead of the root services package. This change aligns with the broader backend service restructuring effort to separate Neuroglancer mesh handling components.

webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingService.scala (1)

25-26: Import statements reorganized to match mesh services restructuring.

The imports have been updated to reflect the new package structure where mesh-related services (AdHocMeshRequest, AdHocMeshService, AdHocMeshServiceHolder) are now in the dedicated mesh subpackage, while BinaryDataService remains in the root services package. This is consistent with the backend restructuring for better separation of mesh handling components.

frontend/javascripts/oxalis/view/right-border-tabs/segments_tab/load_mesh_menu_item_label.tsx (3)

4-4: Updated import for the new mesh file info type.

The import has been updated to use the new APIMeshFileInfo type, which is part of the frontend protocol adaptation for supporting Neuroglancer precomputed meshes.


7-7: Updated prop type to use the enhanced mesh file info structure.

The component's prop type has been updated to use APIMeshFileInfo instead of the previous type, reflecting the richer mesh file metadata structure now being used throughout the application.


24-24: Updated property access to match the new type structure.

The tooltip text now references currentMeshFile.name instead of the previous property (likely meshFileName), consistent with the enhanced mesh file info structure. This ensures proper display of mesh names in the UI.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/AdHocMeshService.scala (4)

1-1: Package declaration updated as part of mesh services reorganization.

The service has been moved to the new dedicated mesh subpackage to better organize mesh-related functionality, supporting the implementation of Neuroglancer precomputed mesh handling.


15-15: Updated imports to reference services from the parent package.

Now that this service is in the mesh subpackage, it needs to explicitly import BinaryDataService and MappingService from the parent package. This maintains the correct dependencies while supporting the package reorganization.


18-21: Updated Akka imports to Apache Pekko.

The imports have been updated to use the Apache Pekko library (a fork of Akka) while maintaining the same functionality. This ensures compatibility with the actor system being used throughout the application.


23-23: Minor import adjustment for ByteOrder.

The import statement has been adjusted to include all classes from java.nio package, which is consistent with the package reorganization and keeps the existing functionality intact.

test/backend/MurmurHashTestSuite.scala (1)

1-19: Good test for the MurmurHash3 implementation.

The test verifies that the MurmurHash3.hash64 function returns the expected hash for "Hello World!" with seed 0. This provides confidence in the correctness of your hashing implementation, which is critical for the Neuroglancer precomputed mesh sharding logic.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/DataStoreModule.scala (2)

7-11: Appropriate restructuring of mesh-related imports.

The import statement has been reorganized to reflect the architectural change of moving mesh services into a dedicated package, improving code organization.


35-35: Correctly registered the new NeuroglancerPrecomputedMeshService as a singleton.

The NeuroglancerPrecomputedMeshService is properly registered in the Guice dependency injection container as an eager singleton, ensuring it's initialized at application startup along with other mesh-related services.

frontend/javascripts/oxalis/model/reducers/annotation_reducer.ts (1)

491-491: Updated property access to match the new APIMeshFileInfo structure.

The change from meshFileName to name correctly aligns with the updated APIMeshFileInfo type. This ensures consistency with the broader refactoring for supporting Neuroglancer precomputed meshes.

frontend/javascripts/oxalis/store.ts (2)

55-55: Updated import to use the new APIMeshFileInfo type.

Correctly imported the new APIMeshFileInfo type that includes additional fields required for Neuroglancer precomputed meshes.


627-628: Updated state types to use APIMeshFileInfo.

The state property types have been properly updated to use the new APIMeshFileInfo type which includes additional metadata like file path and type, necessary for handling Neuroglancer precomputed meshes.

frontend/javascripts/oxalis/view/context_menu.tsx (3)

139-139: Updated type import for mesh file handling.

The type import has been changed from APIMeshFile to APIMeshFileInfo to accommodate the enhanced mesh file structure with additional metadata fields that support Neuroglancer precomputed meshes.


166-166: Updated property type to use richer mesh file info structure.

The currentMeshFile property type has been updated to APIMeshFileInfo from the previous APIMeshFile to align with the broader refactoring of mesh file handling across the codebase.


966-966: Updated property access for consistency with new mesh file info structure.

The property access has been updated from currentMeshFile.meshFileName to currentMeshFile.name to align with the new APIMeshFileInfo type, which uses a more semantic naming convention for its properties.

frontend/javascripts/oxalis/api/api_latest.ts (4)

2299-2299: Updated property access in mesh files mapping.

Changed from accessing meshFile.meshFileName to meshFile.name to align with the revised APIMeshFileInfo type structure, maintaining consistent property access across the codebase.


2317-2317: Updated property access in getActiveMeshFile method.

Modified the property access from meshFileName to name to align with the updated APIMeshFileInfo type, ensuring consistent property naming across mesh-related functionality.


2350-2350: Updated property access in mesh file lookup.

Changed the property reference in the find predicate from el.meshFileName to el.name to maintain consistency with the updated mesh file structure.


2402-2402: Updated destructuring to align with new property naming.

The destructuring now extracts name from currentMeshFile and locally aliases it to meshFileName for backward compatibility within the function implementation. This approach maintains the internal function logic while adapting to the updated type structure.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/draco/NativeDracoToStlConverter.scala (1)

13-14: Added vertex quantization bits parameter to JNI method.

The native method dracoToStl has been enhanced to accept an additional vertexQuantizationBits parameter, which is crucial for correctly decoding Neuroglancer precomputed meshes. This parameter enables proper interpretation of quantized vertex data during the conversion process.

This change aligns with the broader effort to support Neuroglancer precomputed mesh formats, which use quantized vertices that need to be properly decoded during the mesh rendering pipeline.
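
To make the role of the new parameter concrete, here is a hedged Scala sketch of the dequantisation it enables: a quantised coordinate in [0, 2^bits - 1] is mapped back to the unit range before the chunk/LOD transform is applied. The zero special case is an assumption for unquantised meshes, not taken from the PR.

// Map a quantised vertex coordinate back to [0, 1] (sketch, not the actual JNI code).
def dequantize(quantized: Int, vertexQuantizationBits: Int): Double =
  if (vertexQuantizationBits <= 0) quantized.toDouble // assumed convention: 0 bits means "not quantised"
  else quantized.toDouble / ((1L << vertexQuantizationBits) - 1).toDouble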

frontend/javascripts/types/api_flow_types.ts (1)

939-943: Type enhancement for mesh files looks good.

The renaming from APIMeshFile to APIMeshFileInfo along with the addition of path and fileType fields provides the necessary metadata structure to support Neuroglancer precomputed meshes. This change aligns well with the PR objectives of adapting the frontend to handle the new protocol for mesh data.

webknossos-jni/src/include/com_scalableminds_webknossos_datastore_draco_NativeDracoToStlConverter.h (1)

13-16: JNI signature correctly extended for vertex quantization support.

The method signature has been updated to include an additional integer parameter (vertexQuantizationBits), which is necessary for properly decoding Neuroglancer mesh vertex data. This change works in conjunction with the implementation in dracoToStlDecoder.cpp where this parameter is used to apply the correct scaling factors during mesh conversion.

test/backend/NeuroglancerMeshTestSuite.scala (1)

1-29: Well-structured test for Neuroglancer segment manifest parsing.

This test provides good validation of the core functionality needed for Neuroglancer precomputed mesh support. It uses real-world data from Google Cloud Storage to verify that the manifest parser correctly extracts crucial properties like chunk shape, grid origin, levels of detail, and chunk positions. These assertions effectively test that the parser can handle the specific format used by Neuroglancer precomputed meshes.

frontend/javascripts/admin/admin_rest_api.ts (2)

51-51: Type import properly updated.

The import has been correctly updated to use the new APIMeshFileInfo type instead of APIMeshFile, aligning with the type definition changes.


2030-2032: Return type correctly updated for mesh file API.

The return type of getMeshfilesForDatasetLayer has been updated to reflect the enhanced mesh file info structure, ensuring that the admin REST API properly handles the richer mesh file metadata needed for Neuroglancer precomputed meshes.

frontend/javascripts/oxalis/view/right-border-tabs/segments_tab/segment_list_item.tsx (2)

52-54: Update import but also remove now‑unused type

The old APIMeshFile alias is no longer referenced anywhere in this file, so the updated import is correct.
Please double‑check and delete any stale re‑exports or helper types that still expose APIMeshFile from api_flow_types to avoid confusion in downstream files.


237-244: Prop typing looks good

currentMeshFile was updated to the new richer structure — this keeps the component contract in sync with the API change. No further action required.

webknossos-jni/src/dracoToStlDecoder.cpp (2)

9-15: extractUint16 endianness & aliasing concerns

Bit‑casting a float to uint32_t and masking the lower 16 bits assumes:

  1. IEEE‑754 layout.
  2. Host little‑endian order (so the quantised value is in the least‑significant bits).

If either assumption fails, decoding will silently mis‑place meshes.

Consider storing the quantised value in the upper 16 bits (as Neuroglancer often does) or at least document the expectation. A more future‑proof variant:

uint16_t extractUint16LE(float value) {
  static_assert(sizeof(float) == 4, "unexpected float size");
  uint32_t bits;
  std::memcpy(&bits, &value, sizeof(bits));
  return static_cast<uint16_t>(bits & 0xFFFFu);
}

On big‑endian targets you’d need bits >> 16.


54-66: 🛠️ Refactor suggestion

Avoid repeated pow and integer division pitfalls

1 / (pow(2, bits) - 1) performs an integer division because 1 is an int.
Use a double literal and compute the factor once:

-  auto vertexQuantizationFactor = 1 / (pow(2, double(vertexQuantizationBits)) - 1);
+  const double vertexQuantizationFactor =
+      1.0 / (std::pow(2.0, static_cast<double>(vertexQuantizationBits)) - 1.0);

This prevents accidental zero when vertexQuantizationBits > 0 and slightly improves performance by moving pow outside the vertex loop.

Likely an incorrect or invalid review comment.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/MurmurHash3.scala (1)

40-44: ⚠️ Potential issue

BigInt(...).toInt loses the high bit

Casting an unsigned 32‑bit little‑endian chunk through BigInt(...).toInt converts it to signed Int; values ≥ 0x8000_0000 wrap negative and later arithmetic relies on & 0xFFFFFFFF.
Although you mask later, the intermediate negative Int alters rotation counts in Integer.rotateLeft, changing the hash.

Decode as unsigned:

-      val k1 = BigInt(block.slice(0, 4).reverse).toInt
+      val k1 = java.lang.Integer.toUnsignedLong(
+                 java.nio.ByteBuffer.wrap(block.take(4).reverse).getInt
+               ).toInt

Do likewise for k2..k4.

Likely an incorrect or invalid review comment.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/NeuroglancerPrecomputedShardingUtils.scala (2)

31-34: Off‑by‑one in shardIndexRange may skip the final byte

Range.Long(0, end, 1) is exclusive on the upper bound, so the last byte (index end‑1) is included, but byte end is not.
Because the spec defines the shard index size as minishardCount * 16 bytes, the exclusive range is correct only if end is the first byte after the index.
Please confirm the backend expects an exclusive range. If the underlying readBytes treats ranges as inclusive, use Range.Long(0, end - 1, 1) instead.
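
For reference, a self-contained sketch of the byte range as the spec defines it (two little-endian uint64 values per minishard, so the index ends exactly at minishardCount * 16 bytes); the exclusive end is the first byte after the index:

// [start, exclusiveEnd) of the shard index within the shard file.
def shardIndexByteRange(minishardBits: Int): (Long, Long) = {
  val minishardCount = 1L << minishardBits
  (0L, minishardCount * 16L)
}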


136-145: Decompress unconditionally when data_encoding == "gzip"

You only gunzip if the first two bytes are 0x1f 0x8b.
For very small chunks the minishard may already have been decompressed upstream, but for correctness it is safer to:

  1. Trust the data_encoding flag instead of magic‑byte sniffing, or
  2. Attempt decompression and fall back on the raw bytes if ZipIO.gunzip detects “not gzipped”.
- case "gzip" if rawBytes(0) == 31 && rawBytes(1) == -117 => ZipIO.gunzip(rawBytes)
+ case "gzip"                                             => ZipIO.safeGunzipOrRaw(rawBytes)

(This assumes safeGunzipOrRaw or similar helper exists/gets added.)
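
A possible shape for such a helper, sketched here with plain java.util.zip (ZipIO in the codebase may already offer an equivalent):

import java.io.ByteArrayInputStream
import java.util.zip.GZIPInputStream
import scala.util.Try

// Try to gunzip; if the bytes are not actually gzipped, fall back to the raw input.
def safeGunzipOrRaw(bytes: Array[Byte]): Array[Byte] =
  Try {
    val in = new GZIPInputStream(new ByteArrayInputStream(bytes))
    try in.readAllBytes()
    finally in.close()
  }.getOrElse(bytes)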

webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/PrecomputedHeader.scala (1)

100-111: Simplify and guarantee correctness of minishardMask

The current loop can be reduced to a shift and subtract, which is clearer and avoids edge‑case off‑by‑ones:

- var minishardMask = 1L
- for (_ <- 0 until minishard_bits - 1) {
-   minishardMask <<= 1
-   minishardMask |= 1
- }
- minishardMask
+ (1L << minishard_bits) - 1


webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshService.scala (1)

43-51: AlfuCache value is never evicted

A fixed size of 100 mesh roots is chosen, but no TTL is used. For long‑running WK instances with many datasets this may silently push older meshes out of the cache while still in use and force expensive re‑reads.

Confirm that:

  1. 100 is sufficient for worst‑case deployments.
  2. LRU eviction is really what we want here (otherwise add TTL).

Otherwise consider AlfuCache(maxEntries, ttl = Some(...)).

frontend/javascripts/oxalis/model/sagas/mesh_saga.ts (3)

1017-1035: Geometry is scaled twice

bufferGeometry.scale(...chunkScale) followed by addMeshFromGeometry(…, chunkScale, …) results in double application if addMeshFromGeometry also scales.
Verify the implementation; otherwise omit one of them.


707-713:

❌ Incorrect review comment

Dereferencing .name leaks into state that still stores meshFileName

updateCurrentMeshFileAction still expects meshFileName, but now we pass availableMeshFiles[0].name.
Double‑check reducers/selectors – any place still assuming the old property will break.

Run the following search to verify:


🏁 Script executed:

#!/bin/bash
rg -n '"meshFileName"' | head

Length of output: 700


🏁 Script executed:

#!/bin/bash
# Locate the definition of updateCurrentMeshFileAction to verify its expected payload property
rg -n "updateCurrentMeshFileAction" -A 5 frontend/javascripts

Length of output: 6909


Ignore .name dereference concern
The updateCurrentMeshFileAction signature is defined as

updateCurrentMeshFileAction(layerName: string, meshFileName: string | null | undefined)

so passing availableMeshFiles[0].name (a string) correctly populates the meshFileName state. No places assume any other property, and the URL‐state schema’s required: ["meshFileName"] simply reflects that the action stores this string. You can safely disregard this review comment.

Likely an incorrect or invalid review comment.


951-953:

❓ Verification inconclusive

extractScaleFromMatrix ignores non‑uniform scaling / shear

Picking [0][0], [1][1], [2][2] only works for axis‑aligned uniform scales without rotation.
If the matrix includes rotation/shear those entries are not pure scales.

Either:

  1. clarify in docstring that the backend guarantees axis alignment, or
  2. compute scale as sqrt(sum(row_i^2)).
function extractScaleFromMatrix(m: [Vector4, Vector4, Vector4]): Vector3 {
  return [
    Math.hypot(m[0][0], m[0][1], m[0][2]),
    Math.hypot(m[1][0], m[1][1], m[1][2]),
    Math.hypot(m[2][0], m[2][1], m[2][2]),
  ];
}

🏁 Script executed:

#!/bin/bash
# Locate type definitions for Vector4 and Vector3
rg -n "type Vector4" .
rg -n "type Vector3" .

# Inspect the extractScaleFromMatrix implementation
rg -n "function extractScaleFromMatrix" .

Length of output: 5373


🏁 Script executed:

#!/bin/bash
# Display lines around extractScaleFromMatrix definition
sed -n '900,1020p' frontend/javascripts/oxalis/model/sagas/mesh_saga.ts

# Find usages of extractScaleFromMatrix
rg -n "extractScaleFromMatrix" -C3 frontend/javascripts/oxalis/model/sagas/mesh_saga.ts

Length of output: 4703



---

🏁 Script executed:

#!/bin/bash
# Find definition of getMeshfileChunksForSegment
rg -n "getMeshfileChunksForSegment" -n frontend/javascripts

Length of output: 244


Review extractScaleFromMatrix for non‑uniform scale and shear

The current implementation in
frontend/javascripts/oxalis/model/sagas/mesh_saga.ts (lines 951–953)
only returns the diagonal entries of the 3×4 transform matrix:

function extractScaleFromMatrix(transform: [Vector4, Vector4, Vector4]): Vector3 {
  return [transform[0][0], transform[1][1], transform[2][2]];
}

This is only valid when the matrix is axis‑aligned with no rotation or shear. If the backend ever returns a rotated or sheared matrix, those diagonals won’t reflect true scale.

Please either:

  • Add a docstring/comments stating that the backend guarantees axis‑aligned, uniform scale (no rotation/shear), or

  • Update the function to compute each scale component as the vector norm of its basis row, e.g.:

    function extractScaleFromMatrix(m: [Vector4, Vector4, Vector4]): Vector3 {
      return [
        Math.hypot(m[0][0], m[0][1], m[0][2]),
        Math.hypot(m[1][0], m[1][1], m[1][2]),
        Math.hypot(m[2][0], m[2][1], m[2][2]),
      ];
    }

— frontend/javascripts/oxalis/model/sagas/mesh_saga.ts:951–953

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/DSFullMeshService.scala (1)

199-203: Verify that 0 is a valid sentinel for vertexQuantizationBits

Local mesh files now call the decoder with vertexQuantizationBits = 0.
If the native implementation expects “not provided” to be signalled differently (e.g. -1) this will break existing meshes.

Please double‑check the C++ side or guard the value explicitly.
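One way to make the guard explicit, sketched with hypothetical names (the real request fields and the decoder's contract may differ); this assumes the native decoder treats 0 as "no quantization" and that remote Neuroglancer meshes always report 10 or 16 bits:

object QuantizationBitsGuard {
  // Hypothetical helper: decide which value to hand to the Draco decoder.
  // 0 acts as the "not quantized" sentinel for local mesh files.
  def effectiveQuantizationBits(meshFileType: Option[String], bitsFromMeshInfo: Option[Int]): Either[String, Int] =
    meshFileType match {
      case Some("neuroglancerPrecomputed") =>
        bitsFromMeshInfo match {
          case Some(bits) if bits == 10 || bits == 16 => Right(bits) // values allowed by the precomputed spec
          case other                                  => Left(s"Unexpected vertex_quantization_bits: $other")
        }
      case _ => Right(0) // local meshes: decoder skips the de-quantization branch
    }
}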

Comment on lines +975 to 984
value={this.props.currentMeshFile != null ? this.props.currentMeshFile.name : null}
onChange={this.handleMeshFileSelected}
size="small"
loading={this.props.availableMeshFiles == null}
popupMatchSelectWidth={false}
>
{this.props.availableMeshFiles ? (
this.props.availableMeshFiles.map((meshFile: APIMeshFile) => (
<Option key={meshFile.meshFileName} value={meshFile.meshFileName}>
this.props.availableMeshFiles.map((meshFile: APIMeshFileInfo) => (
<Option key={meshFile.name} value={meshFile.name}>
{formatMeshFile(meshFile)}
Contributor

🛠️ Refactor suggestion

Use a stable & unique key/value for Select <Option> elements

key/value are currently meshFile.name.
If two files share the same name (see comment above) Ant Design will emit
a “duplicate key” warning and the user can only select the first entry.

-<Option key={meshFile.name} value={meshFile.name}>
+<Option
+  key={`${meshFile.name}-${meshFile.fileType ?? "unknown"}`}
+  value={meshFile.name /* or `${name}|${fileType}` to keep uniqueness */}>

Follow‑up: adjust handleMeshFileSelected / reducer to accept the
composite identifier or switch to the new path field, which should be
unique by construction.


Suggested change
value={this.props.currentMeshFile != null ? this.props.currentMeshFile.name : null}
onChange={this.handleMeshFileSelected}
size="small"
loading={this.props.availableMeshFiles == null}
popupMatchSelectWidth={false}
>
{this.props.availableMeshFiles ? (
this.props.availableMeshFiles.map((meshFile: APIMeshFile) => (
<Option key={meshFile.meshFileName} value={meshFile.meshFileName}>
this.props.availableMeshFiles.map((meshFile: APIMeshFileInfo) => (
<Option key={meshFile.name} value={meshFile.name}>
{formatMeshFile(meshFile)}
value={this.props.currentMeshFile != null ? this.props.currentMeshFile.name : null}
onChange={this.handleMeshFileSelected}
size="small"
loading={this.props.availableMeshFiles == null}
popupMatchSelectWidth={false}
>
{this.props.availableMeshFiles ? (
this.props.availableMeshFiles.map((meshFile: APIMeshFileInfo) => (
<Option
key={`${meshFile.name}-${meshFile.fileType ?? "unknown"}`}
value={meshFile.name /* or `${meshFile.name}|${meshFile.fileType}` to keep uniqueness */}>
{formatMeshFile(meshFile)}

Comment on lines 145 to 155
minishardInfo = mesh.shardingSpecification.getMinishardInfo(segmentId)
shardUrl = mesh.shardingSpecification.getPathForShard(vaultPath, minishardInfo._1)
minishardIndex <- mesh.getMinishardIndex(shardUrl, minishardInfo._2.toInt)
chunkRange <- mesh.getChunkRange(segmentId, minishardIndex)
chunk <- mesh.getChunk(chunkRange, shardUrl)
segmentManifest = NeuroglancerSegmentManifest.fromBytes(chunk)
meshSegmentInfo = enrichSegmentInfo(segmentManifest,
mesh.meshInfo.lod_scale_multiplier,
mesh.transformAsMatrix,
chunkRange.start,
segmentId)
Contributor

⚠️ Potential issue

32‑bit cast may overflow for large minishards

minishardInfo._2.toInt truncates a potentially 64‑bit offset. The Neuroglancer format allows 2³²‑1 minishards.

Replace with Long throughout or validate range first.

-      minishardIndex <- mesh.getMinishardIndex(shardUrl, minishardInfo._2.toInt)
+      val minishardIdx = minishardInfo._2
+      _ <- Fox.assert(minishardIdx <= Int.MaxValue, "Minishard index > Int.MaxValue")
+      minishardIndex <- mesh.getMinishardIndex(shardUrl, minishardIdx.toInt)

Suggested change
minishardInfo = mesh.shardingSpecification.getMinishardInfo(segmentId)
shardUrl = mesh.shardingSpecification.getPathForShard(vaultPath, minishardInfo._1)
minishardIndex <- mesh.getMinishardIndex(shardUrl, minishardInfo._2.toInt)
chunkRange <- mesh.getChunkRange(segmentId, minishardIndex)
chunk <- mesh.getChunk(chunkRange, shardUrl)
segmentManifest = NeuroglancerSegmentManifest.fromBytes(chunk)
meshSegmentInfo = enrichSegmentInfo(segmentManifest,
mesh.meshInfo.lod_scale_multiplier,
mesh.transformAsMatrix,
chunkRange.start,
segmentId)
minishardInfo = mesh.shardingSpecification.getMinishardInfo(segmentId)
shardUrl = mesh.shardingSpecification.getPathForShard(vaultPath, minishardInfo._1)
val minishardIdx = minishardInfo._2
_ <- Fox.assert(minishardIdx <= Int.MaxValue, "Minishard index > Int.MaxValue")
minishardIndex <- mesh.getMinishardIndex(shardUrl, minishardIdx.toInt)
chunkRange <- mesh.getChunkRange(segmentId, minishardIndex)
chunk <- mesh.getChunk(chunkRange, shardUrl)
segmentManifest = NeuroglancerSegmentManifest.fromBytes(chunk)
meshSegmentInfo = enrichSegmentInfo(
segmentManifest,
mesh.meshInfo.lod_scale_multiplier,
mesh.transformAsMatrix,
chunkRange.start,
segmentId
)

Comment on lines +138 to +147
neuroglancerOffsetStart: Long,
segmentId: Long): List[MeshLodInfo] = {
val bytesPerLod = segmentInfo.chunkByteSizes.map(_.sum)
val totalMeshSize = bytesPerLod.sum
val meshByteStartOffset = neuroglancerOffsetStart - totalMeshSize
val chunkByteOffsetsInLod = segmentInfo.chunkByteSizes.map(_.scanLeft(0L)(_ + _)) // builds cumulative sum

def getChunkByteOffset(lod: Int, currentChunk: Int): Long =
// get past the finer lods first, then take offset in selected lod
bytesPerLod.take(lod).sum + chunkByteOffsetsInLod(lod)(currentChunk)
Contributor

⚠️ Potential issue

Chunk byte‑offset computation appears inverted

val meshByteStartOffset = neuroglancerOffsetStart - totalMeshSize
...
byteOffset = meshByteStartOffset + getChunkByteOffset(...)

neuroglancerOffsetStart is the beginning of the segment blob (manifest + chunks).
The chunk data actually starts after the manifest, i.e. at manifestEnd, not before manifestStart.
Current formula yields a negative displacement that points before the segment, so the service will read the wrong bytes or hit I/O errors.

Expected:

val meshByteStartOffset = neuroglancerOffsetStart + manifestSize

where manifestSize = segmentInfoSizeInBytes.
Please verify against the Neuroglancer spec.

Committable suggestion skipped: line range outside the PR's diff.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshService.scala (2)

159-172: 🛠️ Refactor suggestion

Improved validation for segment IDs in mesh chunk requests.

The current implementation only extracts the segment ID from the first request (line 165) but then performs a runtime check that all requests have the same segment ID (line 166).

Implement upfront validation as suggested in the previous review:

- segmentId <- Fox.option2Fox(meshChunkDataRequests.head.segmentId) ?~> "Segment id parameter is required"
- _ = Fox.bool2Fox(meshChunkDataRequests.flatMap(_.segmentId).distinct.length == 1) ?~> "All requests must have the same segment id"
+ val ids = meshChunkDataRequests.flatMap(_.segmentId).distinct
+ _ <- Fox.bool2Fox(ids.nonEmpty) ?~> "Segment id parameter is required"
+ _ <- Fox.bool2Fox(ids.size == 1) ?~> "All requests must target the same segment ID"
+ val segmentId = ids.head

This validates upfront that the requests have exactly one distinct segment ID and fails immediately if this condition is not met, which is clearer and safer.


146-149: ⚠️ Potential issue

Potential integer overflow in minishard index handling.

The code casts a potentially 64-bit offset to a 32-bit integer with minishardInfo._2.toInt, which could cause an overflow if the value exceeds Integer.MAX_VALUE.

Implement the safety check mentioned in the previous review:

- minishardIndex <- mesh.getMinishardIndex(shardUrl, minishardInfo._2.toInt)
+ val minishardIdx = minishardInfo._2
+ _ <- Fox.bool2Fox(minishardIdx <= Int.MaxValue) ?~> s"Minishard index ${minishardIdx} exceeds Int.MaxValue"
+ minishardIndex <- mesh.getMinishardIndex(shardUrl, minishardIdx.toInt)
🧹 Nitpick comments (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/DSFullMeshService.scala (1)

173-203: Complex conditional logic in readMeshChunkAsStl method.

The method now branches based on meshFileType to handle both standard and Neuroglancer precomputed mesh formats. However, there's significant code duplication in the two branches. Both paths construct a MeshChunkDataRequest and call a readMeshChunk method, differing only in the service used and some parameters.

Consider refactoring to reduce duplication:

-  (dracoMeshChunkBytes, encoding) <- meshFileType match {
-    case Some("neuroglancerPrecomputed") =>
-      for {
-        (dracoMeshChunkBytes, encoding) <- neuroglancerPrecomputedMeshService.readMeshChunk(
-          meshFilePath,
-          Seq(MeshChunkDataRequest(chunkInfo.byteOffset, chunkInfo.byteSize, segmentId))
-        ) ?~> "mesh.file.loadChunk.failed"
-      } yield (dracoMeshChunkBytes, encoding)
-    case _ =>
-      for {
-        (dracoMeshChunkBytes, encoding) <- meshFileService.readMeshChunk(
-          organizationId,
-          datasetDirectoryName,
-          layerName,
-          MeshChunkDataRequestList(MeshFileInfo(meshfileName, meshFileType, meshFilePath, None, 7),
-                                 List(MeshChunkDataRequest(chunkInfo.byteOffset, chunkInfo.byteSize, None)))
-        ) ?~> "mesh.file.loadChunk.failed"
-      } yield (dracoMeshChunkBytes, encoding)
-  }
+  val chunkRequest = MeshChunkDataRequest(chunkInfo.byteOffset, chunkInfo.byteSize, segmentId)
+  (dracoMeshChunkBytes, encoding) <- meshFileType match {
+    case Some("neuroglancerPrecomputed") =>
+      neuroglancerPrecomputedMeshService.readMeshChunk(meshFilePath, Seq(chunkRequest)) ?~> "mesh.file.loadChunk.failed"
+    case _ =>
+      val meshFileInfo = MeshFileInfo(meshfileName, meshFileType, meshFilePath, None, 7)
+      val requestList = MeshChunkDataRequestList(meshFileInfo, List(chunkRequest.copy(segmentId = None)))
+      meshFileService.readMeshChunk(organizationId, datasetDirectoryName, layerName, requestList) ?~> "mesh.file.loadChunk.failed"
+  }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5fb3d22 and 72ae035.

📒 Files selected for processing (5)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/MurmurHash3.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/PrecomputedArray.scala (2 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/DSFullMeshService.scala (7 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshService.scala (1 hunks)
  • webknossos-jni/src/dracoToStlDecoder.cpp (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • webknossos-jni/src/dracoToStlDecoder.cpp
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/PrecomputedArray.scala
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/MurmurHash3.scala
🧰 Additional context used
🧠 Learnings (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshService.scala (3)
Learnt from: frcroth
PR: scalableminds/webknossos#8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshService.scala:55-66
Timestamp: 2025-04-23T09:22:26.763Z
Learning: In the NeuroglancerMesh class, shardingSpecification is defined as a concrete ShardingSpecification value, not an Option. It uses meshInfo.sharding.getOrElse(ShardingSpecification.empty) to provide a default empty specification if none is present, ensuring that mesh.shardingSpecification is never null.
Learnt from: frcroth
PR: scalableminds/webknossos#8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshService.scala:55-66
Timestamp: 2025-04-23T09:22:26.763Z
Learning: In the NeuroglancerMesh class, shardingSpecification is defined as a concrete ShardingSpecification value, not an Option. It uses meshInfo.sharding.getOrElse(ShardingSpecification.empty) to provide a default empty specification if none is present, ensuring that mesh.shardingSpecification is never null.
Learnt from: frcroth
PR: scalableminds/webknossos#8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshService.scala:134-135
Timestamp: 2025-04-23T09:25:04.094Z
Learning: In Neuroglancer precomputed meshes, the `vertex_quantization_bits` value must be either 10 or 16, as specified in the format definition. It represents the number of bits needed for each vertex position coordinate within a mesh fragment.
🔇 Additional comments (13)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/DSFullMeshService.scala (6)

21-33: Extended FullMeshRequest to support Neuroglancer precomputed meshes.

The case class now includes two new optional fields: meshFilePath and meshFileType, which are required to identify and access remote Neuroglancer precomputed mesh files. This is a good design choice as it maintains backward compatibility while adding new functionality.


40-41: New dependency injection for Neuroglancer mesh support.

The service now injects the new NeuroglancerPrecomputedMeshService, following proper dependency injection patterns.


57-69: Updated routing logic to handle remote Neuroglancer mesh files.

The loadFor method now branches based on the presence of meshFilePath, allowing it to select the appropriate loading method for remote Neuroglancer precomputed mesh files while maintaining support for existing mesh formats.


213-214: Added support for vertex quantization in Draco-to-STL conversion.

The dracoToStl method now accepts a vertexQuantizationBits parameter to properly decode quantized vertex data in Neuroglancer meshes, ensuring correct mesh geometry representation.


217-255: New method to load Neuroglancer precomputed mesh files.

The loadFullMeshFromRemoteNeuroglancerMeshFile method provides dedicated handling for Neuroglancer precomputed meshes. I appreciate:

  • The clear constraint enforcement (line 227) that mapping is not supported
  • The proper LOD selection logic
  • The extraction of transform and vertex quantization bits before the chunk loop

This implementation avoids the performance issue mentioned in a previous review by fetching vertex quantization bits once instead of per chunk.


238-239: Efficient extraction of vertex quantization bits.

The implementation correctly retrieves the vertex quantization bits once before processing all chunks, addressing the performance concern raised in a previous review.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshService.scala (7)

26-34: Well-structured NeuroglancerPrecomputedMeshInfo case class.

The case class clearly defines the necessary metadata for Neuroglancer precomputed meshes, including transform, sharding specification, and vertex quantization bits. The JSON format implementation enables proper serialization and deserialization.


36-44: Efficient caching for mesh information.

The service properly implements caching using AlfuCache to avoid repeatedly loading mesh information, which is good for performance, especially when working with large mesh datasets.


45-51: Robust remote mesh info loading.

The loadRemoteMeshInfo method correctly validates the transform array length (expecting 12 elements) before creating a NeuroglancerMesh instance, ensuring data integrity.


53-93: Comprehensive mesh exploration implementation.

The exploreMeshes method successfully navigates through dataset structures to identify and expose Neuroglancer precomputed meshes. It appropriately:

  1. Locates data layers of the correct format
  2. Navigates to remote paths
  3. Reads necessary metadata
  4. Constructs standardized MeshFileInfo records

This approach integrates well with the existing mesh discovery mechanisms.


95-127: Correct implementation of global positioning and transform computation.

The computeGlobalPosition and getLodTransform methods properly handle the complex matrix calculations required for Neuroglancer's multi-scale, quantized coordinate system. The transform matrix construction properly incorporates grid origins, vertex offsets, and LOD scales.


129-140: Mesh chunk listing with appropriate scale handling.

The method correctly calculates the scale factor for vertex coordinates using the vertex quantization bits, which is essential for proper mesh rendering. This is in line with the Neuroglancer specification where vertex_quantization_bits must be either 10 or 16, as mentioned in the past review comments.


174-179: Clean implementation of vertex quantization bits retrieval.

The getVertexQuantizationBits method correctly retrieves the vertex_quantization_bits value from the mesh info. This is crucial for properly decoding the mesh data, as it determines the precision of vertex positions.

Co-authored-by: Philipp Otto <[email protected]>
@MichaelBuessemeyer MichaelBuessemeyer left a comment

Ok, made it through the code. Thanks for the effort, you two (and the debugging crew as well) 🎉.

Please find my comments / questions below. Please re-request review once the backend and/or frontend feedback is applied. Afaik there wasn't much frontend feedback.

Note: I did not test yet and would defer this until the first feedback is implemented to avoid too many testing iterations from my side.

@@ -54,13 +51,14 @@ export function getMeshfileChunksForSegment(
if (editableMappingTracingId != null) {
params.append("editableMappingTracingId", editableMappingTracingId);
}
const payload: ListMeshChunksRequest = {
meshFile,
Contributor

Just a thought: why is the mesh file not identified by an id instead of sending the whole object? Usually we have ids for such objects, don't we? 🤔

Contributor

Ah, we do not have an id for mesh files, ok 🤷. Looks like the file name is the id.

Comment on lines +1016 to +1018
if (chunkScale != null) {
bufferGeometry.scale(...chunkScale);
}
Contributor

The chunkScale is now applied to the bufferGeometry and the targetGroup in segmentMeshController.addMeshFromGeometry. Are you sure that's correct?

segmentationLayer: APISegmentationLayer,
id: number,
segmentId: number,
Contributor

Awesome, thanks for renaming the plain id to segmentId here. It is not really related, but could you please consider doing the same for loadPrecomputedMeshForSegmentId and _getChunkLoadingDescriptors 😇?

}

}
trait NeuroglancerMeshHelper {
Contributor

This is also used by MeshFileService, which is for local and not NG meshes. Therefore, please adjust the name of this trait, and please consider moving NeuroglancerSegmentManifest into its own file 🙏

Member

I think we use some of the neuroglancer formats in our own meshfile too, so the name may actually be correct. I didn't dive into this too much yet, though.

Member Author

I make the distinction between Neuroglancer and Neuroglancer Precomputed. The format for "Local" meshes is also based on the neuroglancer precomputed mesh format, which is why it also uses the NeuroglancerMeshHelper and NeuroglancerSegmentManifest. In naming, things that are specific to the real remote neuroglancer meshes are called "Neuroglancer Precomputed".

unmappedSegmentId = Some(segmentId)
)
}
override def computeGlobalPosition(segmentInfo: NeuroglancerSegmentManifest,
Contributor

Why does a MeshFileService use a NeuroglancerSegmentManifest which is for NG meshes and not wk local meshes?

IMO there is no clear domain distinction here. Could this maybe be improved?

Member Author

See above, wk local meshes are Neuroglancer meshes

Comment on lines 165 to 166
segmentId <- Fox.option2Fox(meshChunkDataRequests.head.segmentId) ?~> "Segment id parameter is required"
_ = Fox.bool2Fox(meshChunkDataRequests.flatMap(_.segmentId).distinct.length == 1) ?~> "All requests must have the same segment id"
Contributor

I would move these checks up after line 162 to return an error early, before the request in loadRemoteMeshInfo might be sent

Member Author

good idea

Comment on lines +55 to +66
if(vertexQuantizationBits > 0) {
auto vertexQuantizationFactor = 1 / (pow(2, double(vertexQuantizationBits)) - 1);
pos[0][0] = float(extractUint16(pos[0][0])) * vertexQuantizationFactor;
pos[0][1] = float(extractUint16(pos[0][1])) * vertexQuantizationFactor;
pos[0][2] = float(extractUint16(pos[0][2])) * vertexQuantizationFactor;
pos[1][0] = float(extractUint16(pos[1][0])) * vertexQuantizationFactor;
pos[1][1] = float(extractUint16(pos[1][1])) * vertexQuantizationFactor;
pos[1][2] = float(extractUint16(pos[1][2])) * vertexQuantizationFactor;
pos[2][0] = float(extractUint16(pos[2][0])) * vertexQuantizationFactor;
pos[2][1] = float(extractUint16(pos[2][1])) * vertexQuantizationFactor;
pos[2][2] = float(extractUint16(pos[2][2])) * vertexQuantizationFactor;
}
Contributor

An AI prompt just told me that draco might already be able to account for the vertexQuantizationBits if we use it correctly. So we would not need this here.

So maybe that's worth a look

Contributor

But not a must imo as this seems to work already. (I still haven't tested though)

Member Author

No, that particular vertex quantization is neuroglancer specific.

@MichaelBuessemeyer MichaelBuessemeyer self-requested a review April 28, 2025 07:11
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (2)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshService.scala (1)

145-147: ⚠️ Potential issue

Potential 32-bit truncation for large minishard indices.

Converting minishardInfo._2 from Long to Int without range checking could cause data loss. Since the Neuroglancer format allows up to 2³²-1 minishards, this truncation could be problematic for large values.

-      minishardIndex <- mesh.getMinishardIndex(shardUrl, minishardInfo._2.toInt)
+      minishardNumber = minishardInfo._2
+      _ <- Fox.fromBool(minishardNumber <= Int.MaxValue) ?~> s"Minishard number $minishardNumber exceeds Int.MaxValue, not supported"
+      minishardIndex <- mesh.getMinishardIndex(shardUrl, minishardNumber.toInt)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala (1)

48-51: Consider making fileType an enum.

As suggested in the PR comments, the fileType field could be an enum instead of a string to avoid typos and make handling different types more secure.

object MeshFileType {
  val Local = "local"
  val NeuroglancerPrecomputed = "neuroglancerPrecomputed"
}

Then use this enum when creating instances of MeshFileInfo.
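A minimal sketch of such an enum with JSON support, assuming plain scala.Enumeration and play-json (the codebase's own ExtendedEnumeration helper may already cover this more concisely):

import play.api.libs.json.{Format, Json}

object MeshFileType extends Enumeration {
  type MeshFileType = Value
  val local: Value = Value("local")
  val neuroglancerPrecomputed: Value = Value("neuroglancerPrecomputed")

  // Serializes values by name, e.g. "neuroglancerPrecomputed"
  implicit val jsonFormat: Format[MeshFileType] = Json.formatEnum(this)
}

Call sites could then match on MeshFileType.neuroglancerPrecomputed instead of comparing raw strings.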

🧹 Nitpick comments (10)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/NeuroglancerPrecomputedShardingUtils.scala (8)

18-18: Update documentation URL to use a permanent reference.

As pointed out by MichaelBuessemeyer, the URL is outdated. Consider using a GitHub permalink to ensure the reference remains stable over time.

- // Implemented according to https://github.com/google/neuroglancer/blob/master/src/neuroglancer/datasource/precomputed/sharded.md,
+ // Implemented according to https://github.com/google/neuroglancer/blob/master/src/datasource/precomputed/sharded.md,

42-43: Update documentation URL for shard index format.

Similar to the earlier suggestion, update this URL to use a permalink.

- // See https://github.com/google/neuroglancer/blob/master/src/neuroglancer/datasource/precomputed/sharded.md#shard-index-format
+ // See https://github.com/google/neuroglancer/blob/master/src/datasource/precomputed/sharded.md#shard-index-format

53-54: Remove unnecessary parentheses.

The parentheses around shardIndexRange.end are redundant and can be removed.

-    val miniShardIndexStart: Long = (shardIndexRange.end) + parsedShardIndex(minishardNumber)._1
-    val miniShardIndexEnd: Long = (shardIndexRange.end) + parsedShardIndex(minishardNumber)._2
+    val miniShardIndexStart: Long = shardIndexRange.end + parsedShardIndex(minishardNumber)._1
+    val miniShardIndexEnd: Long = shardIndexRange.end + parsedShardIndex(minishardNumber)._2

67-68: Update documentation URL to use a permalink.

The reference to the Neuroglancer documentation should use a permalink.

-     From: https://github.com/google/neuroglancer/blob/master/src/neuroglancer/datasource/precomputed/sharded.md#minishard-index-format
+     From: https://github.com/google/neuroglancer/blob/master/src/datasource/precomputed/sharded.md#minishard-index-format

72-72: Consider using a more descriptive variable name.

The variable n could be renamed to something more descriptive like chunkCount to improve code readability.

-    val n = bytes.length / 24
+    val chunkCount = bytes.length / 24

Then update all subsequent references to n in the method.


82-82: Update documentation URL to use a permalink.

Another documentation URL that should be updated to use a permalink.

-     From: https://github.com/google/neuroglancer/blob/master/src/neuroglancer/datasource/precomputed/sharded.md#minishard-index-format
+     From: https://github.com/google/neuroglancer/blob/master/src/datasource/precomputed/sharded.md#minishard-index-format

93-93: Update documentation URL to use a permalink.

Another documentation URL that should be updated to use a permalink.

-     From: https://github.com/google/neuroglancer/blob/master/src/neuroglancer/datasource/precomputed/sharded.md#minishard-index-format
+     From: https://github.com/google/neuroglancer/blob/master/src/datasource/precomputed/sharded.md#minishard-index-format

134-136: Remove unnecessary parentheses.

Similar to the earlier suggestion, the parentheses around shardIndexRange.end are redundant.

-      chunkStart = (shardIndexRange.end) + chunkSpecification._2
-      chunkEnd = (shardIndexRange.end) + chunkSpecification._2 + chunkSpecification._3
+      chunkStart = shardIndexRange.end + chunkSpecification._2
+      chunkEnd = shardIndexRange.end + chunkSpecification._2 + chunkSpecification._3
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/DSFullMeshService.scala (1)

185-201: Type-based branching for mesh chunk reading.

The code now branches based on mesh file type to use the appropriate service for reading mesh chunks. Consider using an enum for the file type to make handling different types more secure and avoid typos.

-        case Some("neuroglancerPrecomputed") =>
+        case Some(MeshFileType.NeuroglancerPrecomputed) =>

Where MeshFileType would be an enum defined as:

object MeshFileType {
  val Local = "local"
  val NeuroglancerPrecomputed = "neuroglancerPrecomputed"
}
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshService.scala (1)

164-165: Segment ID validation could be moved earlier.

The check for consistent segment IDs across requests could be moved earlier in the method to fail fast before loading mesh info.

      vaultPath <- dataVaultService.getVaultPath(RemoteSourceDescriptor(new URI(meshFilePath), None))
+      _ <- Fox.fromBool(meshChunkDataRequests.flatMap(_.segmentId).distinct.length <= 1) ?~> "All requests must have the same segment id"
      mesh <- neuroglancerPrecomputedMeshInfoCache.getOrLoad(vaultPath, loadRemoteMeshInfo)
      segmentId <- meshChunkDataRequests.head.segmentId.toFox ?~> "Segment id parameter is required"
-      _ = Fox.fromBool(meshChunkDataRequests.flatMap(_.segmentId).distinct.length == 1) ?~> "All requests must have the same segment id"

Also, note that using Fox.fromBool without integrating it into the for-comprehension (via <-) won't actually enforce the check. The result is calculated but ignored.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between eca5d9b and ed26de2.

📒 Files selected for processing (18)
  • CHANGELOG.unreleased.md (1 hunks)
  • frontend/javascripts/oxalis/controller/url_manager.ts (1 hunks)
  • frontend/javascripts/types/api_flow_types.ts (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/BinaryDataController.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DSMeshController.scala (6 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/NeuroglancerPrecomputedShardingUtils.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/PrecomputedArray.scala (2 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/AdHocMeshService.scala (2 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/DSFullMeshService.scala (6 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala (10 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshMappingHelper.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshService.scala (1 hunks)
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteDatastoreClient.scala (1 hunks)
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/VolumeTracingController.scala (1 hunks)
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingService.scala (1 hunks)
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/volume/TSFullMeshService.scala (1 hunks)
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/volume/VolumeTracingService.scala (1 hunks)
✅ Files skipped from review due to trivial changes (5)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/BinaryDataController.scala
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSRemoteDatastoreClient.scala
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/volume/VolumeTracingService.scala
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala
  • frontend/javascripts/types/api_flow_types.ts
🚧 Files skipped from review as they are similar to previous changes (9)
  • CHANGELOG.unreleased.md
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshMappingHelper.scala
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingService.scala
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/volume/TSFullMeshService.scala
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/VolumeTracingController.scala
  • frontend/javascripts/oxalis/controller/url_manager.ts
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/AdHocMeshService.scala
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/PrecomputedArray.scala
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DSMeshController.scala
🧰 Additional context used
🧠 Learnings (3)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshService.scala (3)
Learnt from: frcroth
PR: scalableminds/webknossos#8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshService.scala:55-66
Timestamp: 2025-04-23T09:22:26.829Z
Learning: In the NeuroglancerMesh class, shardingSpecification is defined as a concrete ShardingSpecification value, not an Option. It uses meshInfo.sharding.getOrElse(ShardingSpecification.empty) to provide a default empty specification if none is present, ensuring that mesh.shardingSpecification is never null.
Learnt from: frcroth
PR: scalableminds/webknossos#8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshService.scala:55-66
Timestamp: 2025-04-23T09:22:26.829Z
Learning: In the NeuroglancerMesh class, shardingSpecification is defined as a concrete ShardingSpecification value, not an Option. It uses meshInfo.sharding.getOrElse(ShardingSpecification.empty) to provide a default empty specification if none is present, ensuring that mesh.shardingSpecification is never null.
Learnt from: frcroth
PR: scalableminds/webknossos#8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshService.scala:134-135
Timestamp: 2025-04-23T09:25:04.102Z
Learning: In Neuroglancer precomputed meshes, the `vertex_quantization_bits` value must be either 10 or 16, as specified in the format definition. It represents the number of bits needed for each vertex position coordinate within a mesh fragment.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/NeuroglancerPrecomputedShardingUtils.scala (1)
Learnt from: MichaelBuessemeyer
PR: scalableminds/webknossos#8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/NeuroglancerPrecomputedShardingUtils.scala:23-28
Timestamp: 2025-04-25T11:06:13.275Z
Learning: AlfuCache in the WebKnossos codebase has default configured TTL (time-to-live) and size limits, providing automatic protection against unbounded memory growth.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala (1)
Learnt from: frcroth
PR: scalableminds/webknossos#8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala:170-173
Timestamp: 2025-04-23T08:51:57.756Z
Learning: In the webknossos codebase, classes extending `FoxImplicits` have access to an implicit conversion from `Option[A]` to `Fox[A]`, where `None` is converted to an empty Fox that fails gracefully in for-comprehensions.
🧬 Code Graph Analysis (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/NeuroglancerPrecomputedShardingUtils.scala (2)
util/src/main/scala/com/scalableminds/util/io/ZipIO.scala (2)
  • ZipIO (16-310)
  • gunzip (123-139)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datavault/VaultPath.scala (1)
  • VaultPath (18-101)
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: build-smoketest-push
  • GitHub Check: backend-tests
🔇 Additional comments (17)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/NeuroglancerPrecomputedShardingUtils.scala (1)

23-28: The cache configuration looks good.

The AlfuCache instances have default configured TTL and size limits as confirmed in the learnings, providing automatic protection against unbounded memory growth.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/DSFullMeshService.scala (8)

30-31: New fields for Neuroglancer precomputed mesh support.

The FullMeshRequest case class has been extended with mesh file path and type fields to support remote Neuroglancer meshes.


40-40: Added dependency injection for Neuroglancer precomputed mesh service.

The new Neuroglancer mesh service is properly injected into the DSFullMeshService.


63-65: Added branch for Neuroglancer precomputed mesh files.

The routing now properly handles the case where a mesh file path is provided, directing to the Neuroglancer mesh loading path.


180-183: Parameter changes to include transform information.

The method signature now includes transform parameters needed for proper positioning of meshes.


215-227: New method for loading Neuroglancer precomputed meshes.

This new method provides a comprehensive implementation for loading Neuroglancer precomputed meshes, with proper validation of unsupported features like mappings.


225-225: Mapping not supported error check.

Good error messaging clarifying that mappings aren't supported for Neuroglancer precomputed meshes.


227-227: Consider validating the selected LOD.

The LOD selection doesn't verify whether the requested LOD is valid. Consider adding a check to ensure the selected LOD exists.

-      selectedLod = fullMeshRequest.lod.getOrElse(0)
+      selectedLod = fullMeshRequest.lod.getOrElse(0)
+      _ <- Fox.fromBool(selectedLod < chunkInfos.lods.length) ?~> s"LOD $selectedLod not available. Available LODs: 0-${chunkInfos.lods.length - 1}"

236-238: Vertex quantization bits are fetched outside the chunk loop.

Good optimization to fetch the vertex quantization bits once before processing all chunks, rather than querying for each chunk.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshService.scala (3)

26-34: Well-structured mesh info data model.

The NeuroglancerPrecomputedMeshInfo case class provides a clean representation of Neuroglancer mesh metadata with appropriate JSON serialization.


43-43: Appropriate cache size limit.

The cache for mesh info is limited to 100 entries, which provides a good balance between memory usage and performance.


134-135: Vertex quantization scaling is correctly implemented.

As noted in the learnings, vertex quantization bits in Neuroglancer are limited to 10 or 16, making the current implementation with math.pow safe from precision loss.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala (5)

65-65: Now extends NeuroglancerMeshHelper.

The MeshFileService now mixes in NeuroglancerMeshHelper to leverage shared mesh handling functionality, which is a good design choice for code reuse.


99-103: Updated meshFileInfo construction.

The code now properly creates MeshFileInfo instances with the new fields, setting fileType to "local" and path to None.


173-173: Added explicit Fox conversion for proper error handling.

Adding .toFox to the optional result ensures proper error propagation in the for-comprehension.


212-223: Implementation of NeuroglancerMeshHelper interface.

The computeGlobalPosition and getLodTransform methods provide mesh-specific implementations of the helper interface. The computation of global position includes the necessary scaling and offset transformations.


281-281: Improved data handling with proper sorting.

The merged data bytes are now properly sorted by their original request index before being combined, ensuring correct ordering.

case "gzip" if rawBytes(0) == 31 && rawBytes(1) == -117 => ZipIO.gunzip(rawBytes)
case _ => rawBytes
}
} yield bytes
Contributor

⚠️ Potential issue

Potential 32-bit truncation for large minishard indices.

Converting minishardInfo._2 from Long to Int can lead to truncation if the value exceeds Int.MaxValue. As the Neuroglancer format allows up to 2³²-1 minishards, validate the range before conversion.

-      minishardIndex <- mesh.getMinishardIndex(shardUrl, minishardInfo._2.toInt)
+      minishardNumber = minishardInfo._2
+      _ <- Fox.fromBool(minishardNumber <= Int.MaxValue) ?~> s"Minishard number $minishardNumber exceeds Int.MaxValue, not supported"
+      minishardIndex <- mesh.getMinishardIndex(shardUrl, minishardNumber.toInt)

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (6)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/NeuroglancerPrecomputedShardingUtils.scala (2)

58-63: Defensive handling for unsupported minishard_index_encoding values

Right now any encoding other than "gzip" is treated as raw bytes.
If the spec is extended (e.g. "brotli" already exists for chunk data) the code will silently mis-interpret the index.

Consider failing fast for unknown values:

-  case "gzip" => ZipIO.gunzip(bytes)
-  case _      => bytes
+  case "gzip"   => ZipIO.gunzip(bytes)
+  case "raw"    => bytes
+  case unknown  => throw new IllegalArgumentException(
+      s"Unsupported minishard_index_encoding '$unknown'")

72-79: Avoid needless array copy when parsing the minishard index

ByteBuffer.allocate(bytes.length); buf.put(bytes) duplicates the entire byte array.
ByteBuffer.wrap(bytes) creates the view without copying and is GC-friendlier, especially for big indices.

-val buf = ByteBuffer.allocate(bytes.length)
-buf.put(bytes)
-val longArray = new Array[Long](n * 3)
-buf.position(0)
+val buf = ByteBuffer.wrap(bytes)           // zero-copy
+val longArray = new Array[Long](n * 3)
 buf.order(ByteOrder.LITTLE_ENDIAN)
 buf.asLongBuffer().get(longArray)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshFileService.scala (1)

30-32: Prefer an enum over a raw String for meshFileType

meshFileType: Option[String] loses type–safety and invites typos.
You already define MeshFileType elsewhere – re-use it here:

-    meshFileType: Option[String]
+    meshFileType: Option[MeshFileType.MeshFileType]

This will give compile-time guarantees and removes the need for ad-hoc string comparisons.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/DSFullMeshService.scala (1)

29-32: meshFileType should use the existing enum

Just like in the service class, meshFileType is currently a raw String.
Switching to MeshFileType.MeshFileType keeps request/response schemas aligned and avoids magic strings.

-    meshFileType: Option[String]
+    meshFileType: Option[MeshFileType.MeshFileType]
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala (2)

117-119: Add fileType validation for legacy mesh files.

Currently, all mesh files discovered through exploreMeshFiles are assigned MeshFileType.local. Consider adding validation to ensure this is actually the case, especially if there's a way to distinguish between local and remote files based on their characteristics.


297-298: Consider adding helper methods for better type switching.

Following @MichaelBuessemeyer's suggestion, consider implementing methods like isNeuroglancerMeshFile and isLocalMeshFile in the companion object to avoid enum comparisons in match statements throughout the codebase.

object MeshFileInfo {
  implicit val jsonFormat: OFormat[MeshFileInfo] = Json.format[MeshFileInfo]
+
+  def isNeuroglancerMeshFile(info: MeshFileInfo): Boolean =
+    info.fileType.contains(MeshFileType.neuroglancerPrecomputed)
+
+  def isLocalMeshFile(info: MeshFileInfo): Boolean =
+    info.fileType.contains(MeshFileType.local)
}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ed26de2 and c76cf4a.

📒 Files selected for processing (9)
  • CHANGELOG.unreleased.md (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/DataStoreModule.scala (2 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DSMeshController.scala (6 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/NeuroglancerPrecomputedShardingUtils.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/DSFullMeshService.scala (7 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala (9 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerMesh.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerMeshHelper.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshFileService.scala (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
  • CHANGELOG.unreleased.md
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/DataStoreModule.scala
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerMesh.scala
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DSMeshController.scala
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerMeshHelper.scala
🧰 Additional context used
🧠 Learnings (2)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/NeuroglancerPrecomputedShardingUtils.scala (1)
Learnt from: MichaelBuessemeyer
PR: scalableminds/webknossos#8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/NeuroglancerPrecomputedShardingUtils.scala:23-28
Timestamp: 2025-04-25T11:06:13.275Z
Learning: AlfuCache in the WebKnossos codebase has default configured TTL (time-to-live) and size limits, providing automatic protection against unbounded memory growth.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala (1)
Learnt from: frcroth
PR: scalableminds/webknossos#8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala:170-173
Timestamp: 2025-04-23T08:51:57.756Z
Learning: In the webknossos codebase, classes extending `FoxImplicits` have access to an implicit conversion from `Option[A]` to `Fox[A]`, where `None` is converted to an empty Fox that fails gracefully in for-comprehensions.
🧬 Code Graph Analysis (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshFileService.scala (9)
util/src/main/scala/com/scalableminds/util/tools/JsonHelper.scala (2)
  • JsonHelper (17-112)
  • parseFromFileAs (58-61)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datavault/VaultPath.scala (2)
  • VaultPath (18-101)
  • parseAsJson (96-101)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DataLayer.scala (2)
  • Category (33-43)
  • DataFormat (29-31)
util/src/main/scala/com/scalableminds/util/tools/Fox.scala (4)
  • successful (53-56)
  • fromBool (32-36)
  • toFox (14-14)
  • sequenceOfFulls (131-140)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerMesh.scala (3)
  • NeuroglancerMesh (8-20)
  • NeuroglancerMesh (22-31)
  • transformAsMatrix (12-19)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/storage/DataVaultService.scala (1)
  • getVaultPath (37-42)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerMeshHelper.scala (7)
  • WebknossosSegmentInfo (100-104)
  • WebknossosSegmentInfo (106-127)
  • fromMeshInfosAndMetadata (109-122)
  • NeuroglancerSegmentManifest (16-23)
  • NeuroglancerSegmentManifest (25-88)
  • fromBytes (26-87)
  • enrichSegmentInfo (140-182)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/NeuroglancerPrecomputedShardingUtils.scala (3)
  • getMinishardIndex (111-115)
  • getChunkRange (128-138)
  • getChunk (138-148)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DSMeshController.scala (1)
  • readMeshChunk (92-114)
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: backend-tests
  • GitHub Check: build-smoketest-push
🔇 Additional comments (9)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshFileService.scala (1)

134-136: Quantization scale looks off by one

1 / math.pow(2, bits) maps the range [0 … 2ᵇⁱᵗˢ − 1] to [0 … 1 − 2⁻ᵇⁱᵗˢ], losing one LSB of precision.
The Neuroglancer spec recommends dividing by 2^bits − 1.

-  val chunkScale = Array.fill(3)(1 / math.pow(2, mesh.meshInfo.vertex_quantization_bits))
+  val denom       = math.pow(2, mesh.meshInfo.vertex_quantization_bits) - 1
+  val chunkScale  = Array.fill(3)(1 / denom)
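To make the precision point concrete, a standalone illustration with bits = 10:

object QuantizationScaleDemo extends App {
  val bits = 10
  val maxQuantized = (1 << bits) - 1             // 1023, the largest encodable coordinate
  val factorPow    = 1 / math.pow(2, bits)       // 1 / 1024
  val factorPowM1  = 1 / (math.pow(2, bits) - 1) // 1 / 1023
  println(maxQuantized * factorPow)   // 0.9990234375, never reaches 1.0
  println(maxQuantized * factorPowM1) // 1.0, full range recovered
}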
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/DSFullMeshService.scala (1)

198-204: LOSING translation & rotation when deriving transform

For remote Neuroglancer meshes only the diagonal scale is forwarded to the Draco→STL converter:

val transform = Array(
  Array(lodTransform(0)(0), 0, 0),
  Array(0, lodTransform(1)(1), 0),
  Array(0, 0, lodTransform(2)(2))
)

Any translation or non-diagonal rotation in lodTransform is dropped, shifting/rotating the final STL in world space.

If the native converter cannot apply full 4×4 matrices, consider at least adding the translation vector so that the geometry is positioned correctly.

Would you like assistance in propagating the full 4×4 transform through dracoToStlConverter?

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala (7)

49-59: Good implementation of the MeshFileType enum.

This nicely addresses the comment by @MichaelBuessemeyer to use an enum for file type, which improves type safety and prevents mistyping issues. The implementation with ExtendedEnumeration and custom JSON format support is clean.
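
For readers unfamiliar with the pattern, here is a self-contained sketch using plain scala.Enumeration and Play JSON (the actual code uses the project's ExtendedEnumeration helper, and the value names below are guesses):

import play.api.libs.json._

object MeshFileType extends Enumeration {
  val local, neuroglancerPrecomputed = Value

  // Reject unknown strings with a JsError instead of throwing.
  implicit val jsonFormat: Format[MeshFileType.Value] = new Format[MeshFileType.Value] {
    def reads(json: JsValue): JsResult[MeshFileType.Value] =
      json.validate[String].flatMap { s =>
        values.find(_.toString == s) match {
          case Some(v) => JsSuccess(v)
          case None    => JsError(s"Unknown mesh file type: $s")
        }
      }
    def writes(value: MeshFileType.Value): JsValue = JsString(value.toString)
  }
}
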


61-70: LGTM - Clean update to MeshFileInfo with type safety.

The updates to MeshFileInfo make it more robust by adding path, fileType, and a convenient helper method for checking Neuroglancer precomputed meshes. This case class now properly supports both local and remote meshes with clean optional fields.
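
In other words, the shape is roughly the following (a sketch based on this comment only; the remaining fields of the real case class are omitted):

case class MeshFileInfo(
    name: String,
    path: Option[String],                 // set for remote (Neuroglancer) mesh files
    fileType: Option[MeshFileType.Value]  // None implies a legacy local mesh file
) {
  def isNeuroglancerPrecomputed: Boolean =
    fileType.contains(MeshFileType.neuroglancerPrecomputed)
}
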


80-81: Good architectural separation with NeuroglancerMeshHelper.

The service now mixes in NeuroglancerMeshHelper, which helps separate Neuroglancer-specific logic from the general mesh file service. This improves maintainability and keeps the codebase more modular.


181-185: Update looks good - maintains compatibility with changed parameters.

The signature update from passing only segmentIds to also including lodScaleMultiplier and transform allows for proper handling of both local and Neuroglancer mesh formats in a unified way.


189-190: Safe usage of Option to Fox conversion.

This line leverages the implicit conversion from Option[A] to Fox[A] provided by FoxImplicits, ensuring that when fromMeshInfosAndMetadata returns None, it's properly converted to an empty Fox that fails gracefully in the for-comprehension.
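
A minimal illustration of the pattern (the arguments and the readChunksFor step are made up for the example; only fromMeshInfosAndMetadata is real):

// Inside a class extending FoxImplicits, the Option result is lifted into the
// Fox for-comprehension; a None short-circuits as an empty (failed) Fox.
for {
  segmentInfo <- WebknossosSegmentInfo.fromMeshInfosAndMetadata(meshInfos, metadata) // Option[...] lifted to Fox[...]
  chunkData   <- readChunksFor(segmentInfo)                                          // hypothetical step that already returns a Fox
} yield chunkData
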


228-233: Good implementation of Neuroglancer position computation.

The position computation logic correctly applies transforms and scales based on LOD parameters. This ensures meshes are properly aligned with existing data, which was one of the key objectives of this PR.
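
Schematically, and with illustrative names (the exact formula is defined by the Neuroglancer multi-resolution mesh spec, so treat this only as a simplified reading with the translation assumed in column 3 of the transform):

// Rough idea: place a fragment's chunk in model space from its integer grid
// coordinate, the manifest's grid origin / chunk shape, and the LOD scale,
// then map it to world space with the mesh transform.
def chunkPosition(fragmentPos: Array[Int],
                  gridOrigin: Array[Float],
                  chunkShape: Array[Float],
                  lodScale: Double,
                  transform: Array[Array[Double]]): Array[Double] = {
  val model = Array.tabulate(3)(i => gridOrigin(i) + fragmentPos(i) * chunkShape(i) * lodScale)
  Array.tabulate(3)(i =>
    transform(i)(0) * model(0) + transform(i)(1) * model(1) + transform(i)(2) * model(2) + transform(i)(3))
}
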


284-297: Simplified mesh chunk reading logic.

The code now focuses on retrieving and combining the byte data without additional validation checks, which is cleaner. The tuple return with (data, meshFormat) provides all the necessary information efficiently.

Comment on lines +29 to +34
private lazy val minishardCount = 1 << shardingSpecification.minishard_bits

protected lazy val shardIndexRange: NumericRange.Exclusive[Long] = {
val end = minishardCount * 16
Range.Long(0, end, 1)
}

🛠️ Refactor suggestion

Use Long arithmetic to avoid silent overflow when minishard_bits ≥ 31

1 << shardingSpecification.minishard_bits is evaluated with Int operands.
If a dataset ever uses minishard_bits ≥ 31 (allowed by the NG spec), the shift silently overflows and minishardCount becomes negative. In fact, because minishardCount * 16 is also Int arithmetic before it is widened, shardIndexRange already overflows at minishard_bits ≥ 27 (2^27 · 16 = 2^31), leading to a bogus shardIndexRange and hard-to-debug IO errors.

-private lazy val minishardCount = 1 << shardingSpecification.minishard_bits
+private lazy val minishardCount: Long = 1L << shardingSpecification.minishard_bits

shardIndexRange already works with Long, so the change is local and safe.
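
For reference, the Int vs Long behaviour is easy to see in a REPL:

// Int shifts use only the low 5 bits of the shift amount and can flip the sign bit:
println(1 << 31)   // -2147483648 (sign bit set)
println(1 << 32)   // 1           (shift amount wraps to 0)
println(1L << 31)  // 2147483648
println(1L << 33)  // 8589934592
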


Successfully merging this pull request may close these issues.

Support loading meshes from remote neuroglancer datasets