Event system shouldn't assume a single, builtin Raycaster #2117

@AndrewPrifer

Description

Edit: for future discussion see #2133, where these ideas are being refined and implemented.

The title is a little vague, so let me explain.

Here's how the current event system works:

  • the renderer adds every object that has interaction handlers to store.interaction
  • on an event, the objects in store.interaction are raycast using the built-in Raycaster and the default camera
  • the system iterates over the hits according to the rules of event propagation and calls the event callbacks
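
As a simplified sketch, the flow looks roughly like this (names and structure are illustrative only, not r3f's actual internals):

```tsx
// Simplified sketch of the flow described above; not r3f's actual source.
import * as THREE from 'three'

const raycaster = new THREE.Raycaster() // the single built-in raycaster

function handlePointerEvent(
  pointerNdc: THREE.Vector2, // pointer position in normalized device coords
  camera: THREE.Camera, // the default camera
  interaction: THREE.Object3D[], // every object that registered handlers
) {
  // 1. Cast a ray from the default camera through the pointer.
  raycaster.setFromCamera(pointerNdc, camera)

  // 2. Intersect only the objects that have handlers attached.
  const hits = raycaster.intersectObjects(interaction, true)

  // 3. Walk the hits front to back, bubbling through ancestors and honoring
  //    stopPropagation(), invoking onPointerDown / onClick / etc. callbacks.
  for (const hit of hits) {
    // dispatch to hit.object and its ancestors...
  }
}
```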

This system works correctly for most use cases, but it breaks down when certain objects need to be raycast with a different Raycaster, for example when rendering an overlay scene for a gizmo. In this Drei example, if we added pointer event handlers to the main mesh that call e.stopPropagation(), the gizmo would stop being interactive whenever it overlaps the geometry: the event system assumes the gizmo is raycast with the same built-in Raycaster, and in that single hit list the gizmo sits behind the mesh. Conversely, even without calling e.stopPropagation() on the mesh, we'd have no way to stop clicks on the gizmo from also propagating to the main scene.
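
To make the conflict concrete, here is a hypothetical scene in the spirit of that example; GizmoOverlay is made up purely for illustration:

```tsx
// Hypothetical scene illustrating the conflict; GizmoOverlay is made up and
// stands in for a gizmo rendered in an overlay scene with its own camera.
import { Canvas } from '@react-three/fiber'
import { GizmoOverlay } from './GizmoOverlay' // hypothetical

export function App() {
  return (
    <Canvas>
      {/* Stopping propagation here is reasonable for the main scene on its own... */}
      <mesh onPointerDown={(e) => e.stopPropagation()}>
        <boxGeometry />
        <meshStandardMaterial />
      </mesh>

      {/* ...but the event system raycasts everything with the one built-in
          raycaster and default camera, so in that single hit list the gizmo
          sits "behind" the box and the box's stopPropagation() swallows the
          gizmo's events whenever they overlap on screen. */}
      <GizmoOverlay />
    </Canvas>
  )
}
```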

The above use case is common enough that Drei ships a useCamera helper for raycasting from the perspective of a camera other than the default one. However, useCamera is a massive hack: it overrides a mesh's raycast method with one that ignores the Raycaster passed in and instead uses a Raycaster the hook creates from the given camera. r3f's event system has no knowledge of this and still assumes these meshes are raycast with the internal Raycaster, which produces the incorrect behavior described above.
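
Roughly, the trick looks like the sketch below; this is a paraphrase of the idea, not Drei's actual source:

```tsx
// Rough sketch of the override trick described above, not Drei's exact code.
import * as THREE from 'three'
import { useFrame } from '@react-three/fiber'
import { useMemo } from 'react'

function useCameraSketch(virtualCamera: THREE.Camera) {
  // A private raycaster the event system knows nothing about.
  const raycaster = useMemo(() => new THREE.Raycaster(), [])

  useFrame((state) => {
    // Keep it aligned with the overlay camera and the current pointer.
    raycaster.setFromCamera(state.pointer, virtualCamera)
  })

  // Drop-in replacement for Mesh.raycast: the raycaster r3f passes in
  // (built from the default camera) is simply ignored.
  return useMemo(
    () =>
      function raycast(this: THREE.Mesh, _: THREE.Raycaster, intersects: THREE.Intersection[]) {
        THREE.Mesh.prototype.raycast.call(this, raycaster, intersects)
      },
    [raycaster],
  )
}

// Usage: <mesh raycast={useCameraSketch(gizmoCamera)} ... />
```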

Proposal

  • Extend the Instance declaration with an optional eventLayer property that takes a { raycaster: Raycaster, priority: number } object.
  • Group the objects added to store.interaction by raycaster.
  • When an event occurs, iterate over the raycaster groups in descending priority order, raycasting each group with its own raycaster and running the regular event handling for it. Calling e.stopPropagation() also stops the event from propagating to lower-priority raycaster groups.

Objects without an eventLayer prop would fall into a default group that uses the built-in Raycaster and the default camera.
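
A rough sketch of how dispatch could be grouped under this proposal (every name here, such as EventLayer, dispatchPointerEvent, and runEventHandlers, is hypothetical):

```ts
// Sketch of grouped dispatch under this proposal; all names are hypothetical.
import * as THREE from 'three'

interface EventLayer {
  raycaster: THREE.Raycaster
  priority: number
}

// Stand-in for the existing per-hit event handling; assume it returns true
// if some handler called e.stopPropagation().
declare function runEventHandlers(hits: THREE.Intersection[]): boolean

function dispatchPointerEvent(
  interaction: { object: THREE.Object3D; layer?: EventLayer }[],
  defaultLayer: EventLayer, // built-in raycaster + default camera
) {
  // 1. Group interactive objects by their (explicit or default) layer.
  const groups = new Map<EventLayer, THREE.Object3D[]>()
  for (const { object, layer = defaultLayer } of interaction) {
    const bucket = groups.get(layer) ?? []
    bucket.push(object)
    groups.set(layer, bucket)
  }

  // 2. Visit groups from highest to lowest priority, each with its own raycaster.
  const ordered = [...groups.entries()].sort((a, b) => b[0].priority - a[0].priority)
  for (const [layer, objects] of ordered) {
    const hits = layer.raycaster.intersectObjects(objects, true)
    // 3. Regular event handling within the group; stopPropagation() also
    //    prevents the event from reaching lower-priority groups.
    if (runEventHandlers(hits)) break
  }
}
```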

This solution also has the advantage that existing projects using useCamera could be adapted with minimal effort: useCamera would still create its own raycaster, but instead of returning a hacked raycast method, it would return an eventLayer object that can be passed to meshes.
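
For instance, the adapted hook could look roughly like this; the hook name, return shape, and the eventLayer prop are all hypothetical until the proposal is implemented:

```tsx
// Hypothetical adaptation of useCamera under this proposal.
import * as THREE from 'three'
import { useFrame } from '@react-three/fiber'
import { useMemo } from 'react'

function useCameraLayer(virtualCamera: THREE.Camera, priority = 1) {
  const raycaster = useMemo(() => new THREE.Raycaster(), [])

  useFrame((state) => {
    // Same private raycaster as before, still driven by the overlay camera...
    raycaster.setFromCamera(state.pointer, virtualCamera)
  })

  // ...but instead of a hacked raycast method, expose it to the event system
  // as an eventLayer the mesh can opt into.
  return useMemo(() => ({ raycaster, priority }), [raycaster, priority])
}

// Usage: <mesh eventLayer={useCameraLayer(gizmoCamera)} onPointerDown={...} />
```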


What do you think? I'm curious to hear any corrections, extensions or counter-proposals! I'm sure I left some things out and some parts might be unclear, so please ask any questions you might have.
