-
Subgraphs

This is incredibly minor, but I have a question about the code in this example: "A node can have many slots, and nodes can be connected with edges." Why is the function called …?

Errata?

In the "subgraphs don't run by default" section, is it a typo that only two slot inputs are supplied?

```rust
graph.run_sub_graph(
    "draw_3d_graph",
    vec![
        SlotValue::Entity(view_entity),
        SlotValue::TextureView(view_texture),
    ],
)?;
```

The earlier example appears to specify three.

Named Inputs

Does this mean we can expect to pair inputs, or slots in general, with a name (K, V)? Drawing from:

```rust
graph.run_sub_graph(
    "draw_3d_graph",
    [
        (VIEW_ENTITY, SlotValue::Entity(view_entity)),
        (RENDER_TARGET, SlotValue::TextureView(view_texture)),
    ],
)?;
```

Frustum Culling

Can I help out with this? We would need rudimentary bounding box generation, which is a big can of worms: we can create something quick and dirty, at the risk of people actually using it. I think it's a topic that might need an RFC of its own (following bevyengine/rfcs#12?), but I also understand the desire to get some quick render perf wins in place. I wrote a frustum culling plugin, which led to writing this bounding box plugin for the reason mentioned above. Let me know if you'd like my input on this; I learned a lot in the process of writing those prototypes.

Getting data back to the CPU

Outside of compute, do you plan to add the ability to copy a buffer back to the CPU? This would be nice for a color-picking shader (🙂), or other cases where it would be helpful to render to bytes we can analyze on the CPU. Maybe we could specify output slots in the render graph that would actually be populated by the GPU? (A rough sketch of what such a readback could look like follows at the end of this comment.)

Stuff I like

Some things that stood out to me in this post that I especially appreciate:
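Purely as an illustration of the readback idea above (not part of the proposal): with raw wgpu, getting bytes back to the CPU usually means copying into a CPU-mappable staging buffer and mapping it once the GPU is done. A minimal blocking sketch; the function and its parameters are placeholders, and API details shift between wgpu versions:

```rust
/// Copy `size` bytes of `src` into a CPU-mappable staging buffer,
/// wait for the GPU, and return the bytes (sketch only).
fn read_back(
    device: &wgpu::Device,
    queue: &wgpu::Queue,
    src: &wgpu::Buffer,
    size: u64,
) -> Vec<u8> {
    // Staging buffer the CPU is allowed to map for reading.
    let staging = device.create_buffer(&wgpu::BufferDescriptor {
        label: Some("readback staging buffer"),
        size,
        usage: wgpu::BufferUsages::MAP_READ | wgpu::BufferUsages::COPY_DST,
        mapped_at_creation: false,
    });

    // Record a GPU-to-GPU copy from the source buffer into the staging buffer.
    let mut encoder =
        device.create_command_encoder(&wgpu::CommandEncoderDescriptor::default());
    encoder.copy_buffer_to_buffer(src, 0, &staging, 0, size);
    queue.submit(std::iter::once(encoder.finish()));

    // Map asynchronously, then block until the copy and the map complete.
    let slice = staging.slice(..);
    slice.map_async(wgpu::MapMode::Read, |result| result.expect("buffer map failed"));
    device.poll(wgpu::Maintain::Wait);
    let bytes = slice.get_mapped_range().to_vec();
    staging.unmap();
    bytes
}
```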
-
I would still propose (as I did before) to make this its own crate. This 'core' graph has to, by its very definition, be quite opinionated, and seems likely to fragment into at least 2 or 3 different versions depending on system specs and API limitations. One of the problems I ran into with adding PostProcessing is that it becomes impossible to do ordering relative to passes added by different plugins without adding some kind of magic constants. So then either my plugin needs to depend on plugins that may or may not be there (such as UI) or, which I would prefer, the 'core' render graph depends on these plugins instead. And bevy_render itself definitely shouldn't depend on these optional plugins.
-
I really like this general model. What if we want multiple simultaneous views of the same entity, though? Imagine split-screen multiplayer. This is the same problem we ran into with frustum culling. Relations present an obvious solution: make it a relation tied to a particular camera (perhaps reducing projection data duplication too).
-
Strongly in favor of this change. I would also like a way to distinguish between world-space and screen-space coordinates in a unified way as part of this.
Would this involve a real layering solution in its place? Z coordinates in 2D are currently essential for controlling the order in which sprites are drawn.
-
Does this also mean that it will be possible to trigger renders manually? E.g. I only want to render my static scene when the camera changes...
-
After a fix from @cart regarding updating archetypes for function systems coerced to exclusive systems (or something like that), and a couple more small fixes, I got my project working on top of the pipelined-rendering branch. I'm now looking around in the code to understand key things that are missing. As the new …
The sprite functionality shows how textures can be handled and bound, as there are no textures in PBR yet. The explicitness of all of this is quite refreshing for reading and understanding what is happening, and I would expect it will make debugging easier too, perhaps after some more error handling/verification has been added. I wanted to disable the shadow functionality when trying to get my project working, as it looked like it was possibly adding to my confusion about what was initially wrong. I noted a couple of runtime errors already there that allowed me to quickly find what was wrong and fix it, so I think this is looking very positive. It is more boilerplate at the moment, but this is complicated stuff and getting the plumbing right matters. I think we've probably gone from "easy to do something wrong and hard to fix it" to "easy to do something wrong and easy to fix it". That's great progress. 😃
-
@cart after implementing my own shader in the new renderer, I couldn't be more happy with how well the stages work. I think we can do a better job of sharing bindings across shaders, though. For example, there isn't a great way for me to reuse the standard pbr material in my own shader. One way is to not use the pbr plugin, but if I still want lighting/shadows this is difficult. We could add a ZST to the pbr bundle called: …
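The marker's name was lost from the comment above; purely as an illustration of the idea (all names here are hypothetical), a zero-sized marker in a bundle might look like:

```rust
use bevy_ecs::prelude::*;

/// Hypothetical zero-sized marker: its presence on an entity could tell the
/// pbr plugin to expose its standard material bindings to custom shaders.
#[derive(Component, Default)]
pub struct SharePbrBindings;

/// Hypothetical bundle extending the usual pbr components with the marker.
#[derive(Bundle, Default)]
pub struct CustomShaderPbrBundle {
    pub share_pbr_bindings: SharePbrBindings,
    // ...the usual pbr components (mesh, material, transform) would go here...
}
```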
-
I'm trying to work out if we can have a generic `ViewPassNode`. First bit of work doesn't include a subgraph yet, but should (untested, but compiles) iterate over all the extracted views:

```rust
use crate::{
    camera::RenderTarget,
    color::Color,
    core_pipeline::ViewDepthTexture,
    render_graph::{Node, NodeRunError, RenderGraphContext},
    render_phase::{DrawFunctions, RenderPhase, TrackedRenderPass},
    renderer::RenderContext,
    view::{ExtractedView, ExtractedWindows},
};
use bevy_ecs::{
    prelude::{Entity, QueryState},
    world::World,
};
use wgpu::{
    LoadOp, Operations, RenderPassColorAttachment, RenderPassDepthStencilAttachment,
    RenderPassDescriptor,
};

/// A render graph node that draws a `RenderPhase<T>` for every extracted view.
pub struct ViewPassNode<T: 'static> {
    query: QueryState<(
        Entity,
        &'static ExtractedView,
        &'static RenderTarget,
        Option<&'static ViewDepthTexture>,
        &'static RenderPhase<T>,
    )>,
}

impl<T: 'static> ViewPassNode<T> {
    pub fn new(world: &mut World) -> Self {
        Self {
            query: QueryState::new(world),
        }
    }
}

impl<T: 'static> Node for ViewPassNode<T> {
    fn update(&mut self, world: &mut World) {
        // Keep the query's archetype access up to date before `run` borrows
        // the world immutably.
        self.query.update_archetypes(world);
    }

    fn run(
        &self,
        _graph: &mut RenderGraphContext,
        render_context: &mut RenderContext,
        world: &World,
    ) -> Result<(), NodeRunError> {
        let extracted_windows = world.get_resource::<ExtractedWindows>().unwrap();
        for (view_entity, view, render_target, view_depth_texture, render_phase) in
            self.query.iter_manual(world)
        {
            // Resolve the color attachment: either a window's swap chain
            // frame or an offscreen texture view.
            let color_attachment_texture = match render_target {
                RenderTarget::Window(window_id) => extracted_windows
                    .get(window_id)
                    .unwrap()
                    .swap_chain_frame
                    .as_ref()
                    .unwrap(),
                RenderTarget::Texture(texture_view) => texture_view,
            };
            let pass_descriptor = RenderPassDescriptor {
                label: view.name.as_deref(),
                color_attachments: &[RenderPassColorAttachment {
                    view: color_attachment_texture,
                    resolve_target: None,
                    ops: Operations {
                        load: LoadOp::Clear(Color::rgb(0.4, 0.4, 0.4).into()),
                        store: true,
                    },
                }],
                // Depth is optional, so the same node works for views that
                // have no depth texture.
                depth_stencil_attachment: view_depth_texture.map(|view_depth_texture| {
                    RenderPassDepthStencilAttachment {
                        view: &view_depth_texture.view,
                        depth_ops: Some(Operations {
                            load: LoadOp::Clear(1.0),
                            store: true,
                        }),
                        stencil_ops: None,
                    }
                }),
            };
            let draw_functions = world.get_resource::<DrawFunctions>().unwrap();
            let render_pass = render_context
                .command_encoder
                .begin_render_pass(&pass_descriptor);
            let mut draw_functions = draw_functions.write();
            let mut tracked_pass = TrackedRenderPass::new(render_pass);
            // Draw everything that was queued into this view's phase.
            for drawable in render_phase.drawn_things.iter() {
                let draw_function = draw_functions.get_mut(drawable.draw_function).unwrap();
                draw_function.draw(
                    world,
                    &mut tracked_pass,
                    view_entity,
                    drawable.draw_key,
                    drawable.sort_key,
                );
            }
        }
        Ok(())
    }
}
```
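For what it's worth, I'd expect registering it to look roughly like this (a sketch; `render_world`, `graph`, and the node name are placeholders, and `Transparent3dPhase` is the phase type from the post above):

```rust
// Hypothetical setup: instantiate the generic node for one phase type and
// add it to the render graph under a chosen name.
let node = ViewPassNode::<Transparent3dPhase>::new(&mut render_world);
graph.add_node("view_pass", node);
```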
-
I would like to implement calculation of vertex tangents when importing meshes that don't have them, to allow support for normal maps in one shader. One source suggested calculating them in shaders regardless, but someone else in #rendering disagreed, saying that approach has downsides and pre-calculating is preferred. I guess I'll go for pre-calculating.
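For reference, pre-calculation usually derives the tangent from a triangle's position and UV deltas; robust importers typically use mikktspace instead, so this is only a minimal sketch with illustrative names:

```rust
/// Compute an (unnormalized) tangent for one triangle from positions and UVs.
/// Standard derivation: solve e1 = du1*T + dv1*B and e2 = du2*T + dv2*B for T.
fn triangle_tangent(p: [[f32; 3]; 3], uv: [[f32; 2]; 3]) -> [f32; 3] {
    let e1 = [p[1][0] - p[0][0], p[1][1] - p[0][1], p[1][2] - p[0][2]];
    let e2 = [p[2][0] - p[0][0], p[2][1] - p[0][1], p[2][2] - p[0][2]];
    let (du1, dv1) = (uv[1][0] - uv[0][0], uv[1][1] - uv[0][1]);
    let (du2, dv2) = (uv[2][0] - uv[0][0], uv[2][1] - uv[0][1]);
    // Assumes non-degenerate UVs; real code must guard the zero determinant.
    let r = 1.0 / (du1 * dv2 - du2 * dv1);
    [
        r * (dv2 * e1[0] - dv1 * e2[0]),
        r * (dv2 * e1[1] - dv1 * e2[1]),
        r * (dv2 * e1[2] - dv1 * e2[2]),
    ]
}
```

Per-vertex tangents then come from accumulating these over each vertex's triangles, normalizing, and orthogonalizing against the normal.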
-
If we could get a working rewrite of our shaders in Rust, using Rust-GPU, would that be something we want to use? Or do we want to avoid it until it gets more stable/usable, and just use WGSL for now?
-
How usable is this right now for a simple 2D game, or should I wait for it to be further along?
-
This is the second iteration of my Bevy Renderer Rework experiment. For more context on what motivated this, check out the Round One Prototype Discussion.
My goals for this round were:
The code is available on my pipelined-rendering branch.
The Changes
- `RenderPhase` is now generic over a phase type, e.g. `RenderPhase<Transparent3dPhase>`. Each light view gets a `RenderPhase<ShadowPhase>`. Meshes relevant to that light view are then queued up into that phase (currently all meshes are relevant, but ultimately this should use visibility information).
- The `draw_3d_graph` sub graph has a `ShadowPassNode` that queries for "light views" relevant to the current view passed into `draw_3d_graph` and draws each light view's `RenderPhase<ShadowPhase>`. After these phases are finished, the `MainPass3dNode` executes and draws the current view's `RenderPhase<Transparent3dPhase>`.
- `commands.get_or_spawn(entity)` can be used to "spawn into" an app world entity id.
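A sketch of the `get_or_spawn` pattern (the `Thing`/`ExtractedThing` types here are hypothetical): during extraction, the render world mirrors an app-world entity under the same `Entity` id.

```rust
use bevy_ecs::prelude::*;

#[derive(Component)]
struct Thing;
#[derive(Component)]
struct ExtractedThing;

// Queries the app world; `commands` apply to the render world, so the same
// Entity id ends up referring to the mirrored entity in both worlds.
fn extract_things(mut commands: Commands, query: Query<Entity, With<Thing>>) {
    for entity in query.iter() {
        commands.get_or_spawn(entity).insert(ExtractedThing);
    }
}
```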
Sub Graph Api
All RenderGraphs can now have "Sub Graphs". These are "just" named RenderGraphs owned by the main graph. This creates a nested hierarchy of graphs.
Graphs can now require inputs, which can be added like this:
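The code sample that followed here did not survive formatting; based on the branch's render graph types (`RenderGraph::set_input`, `SlotInfo`, `SlotType`), declaring a sub graph's inputs looked roughly like this, with illustrative slot names:

```rust
// Sketch: give the sub graph named, typed input slots before registering it.
let mut draw_3d_graph = RenderGraph::default();
draw_3d_graph.set_input(vec![
    SlotInfo::new("view_entity", SlotType::Entity),
    SlotInfo::new("render_target", SlotType::TextureView),
]);
// `graph` is the parent graph that owns the sub graph.
graph.add_sub_graph("draw_3d_graph", draw_3d_graph);
```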
Sub Graphs are not run by default. They must be run by a graph Node when the graph is executed.
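The accompanying snippet, quoted by the first comment in this thread, runs the sub graph from inside a node:

```rust
graph.run_sub_graph(
    "draw_3d_graph",
    vec![
        SlotValue::Entity(view_entity),
        SlotValue::TextureView(view_texture),
    ],
)?;
```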
(note that currently inputs to sub-graph runs must be specified in the order they are defined, but ultimately they will be an unordered map to avoid incorrect-ordering errors)
3D Scene Example
Multiple lights work as expected! Try adding more!
Next Steps
- `run_sub_graph()` input api
- `ActiveCamera<3dCamera>(Option<Entity>)` instead of centralized hashmap