Replies: 3 comments
-
Bevy internally uses an extendable graph-structured rendering system, where input nodes pass data to output nodes.

1. Preview your render graph

Below is an example taken from the graph node visualizer (bevy_mod_debugdump). I'd recommend you try it out on your own project to understand it better. The boxes inside the nodes, for example in main_pass and ui_pass, are slots. The arrows represent edges between nodes: black arrows connect nodes, and blue arrows connect slots. In my opinion the blue arrows should actually point in the other direction, but I'll explain that later. The instructions in the repo don't give you the full answer on how to produce one, so here is how to generate this graph:
```rust
fn main() {
    App::build()
        // ...<snip>...
        .add_startup_system(print_render_graph.system())
        // ...<snip>...
        .run();
}

pub fn print_render_graph(render_graph: Res<RenderGraph>) {
    let dot = bevy_mod_debugdump::render_graph::render_graph_dot(&render_graph);
    std::fs::write("render-graph.dot", dot)
        .expect("Failed to write render-graph.dot");
    println!("Render graph written to render-graph.dot");
}
```
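You can then render the generated file with Graphviz, for example `dot -Tsvg render-graph.dot -o render-graph.svg` (assuming Graphviz is installed).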
2. Creating the base graph (see bevy_render/src/render_graph/base.rs:94)

To access or modify the internal render graph, you just use system parameters (how cool is that?):

```rust
fn system_that_uses_graph(config: &BaseRenderGraphConfig, world: &mut World) {
    let world = world.cell();
    // "graph" is our render graph
    let mut graph = world.get_resource_mut::<RenderGraph>().unwrap();
    // "msaa" provides helper methods for defining a graph with MSAA in mind
    let msaa = world.get_resource::<Msaa>().unwrap();
}
```
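To actually run such a function in your own project, a minimal sketch of mine (not from the original post; the function name and registration style are assumptions based on Bevy 0.5) is to use an exclusive startup system, which provides the same `&mut World` access:

```rust
use bevy::prelude::*;
use bevy::render::render_graph::RenderGraph;

// Hypothetical example: an exclusive startup system that grabs the render
// graph so nodes can be added or rewired before rendering starts.
fn modify_render_graph(world: &mut World) {
    let world = world.cell();
    let mut graph = world.get_resource_mut::<RenderGraph>().unwrap();
    // ...add nodes/edges here, as shown in the next section...
}

fn main() {
    App::build()
        .add_plugins(DefaultPlugins)
        .add_startup_system(modify_render_graph.exclusive_system())
        .run();
}
```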
There are different types of nodes:

```rust
// Creating a pass node
let mut main_pass_node = PassNode::<&MainPass>::new(PassDescriptor {
    // color_attachment_descriptor is a wrapper that builds the correct
    // RenderPassColorAttachmentDescriptor depending on whether we're using MSAA
    color_attachments: vec![msaa.color_attachment_descriptor(
        // Tell the GPU which texture to render the current frame's colors to.
        // Here we create a TextureAttachment slot with id "color_attachment",
        // which will be filled in later when we connect the nodes using graph.add_slot_edge()
        TextureAttachment::Input("color_attachment".to_string()),
        // Used by MSAA to resolve to its attachment
        TextureAttachment::Input("color_resolve_target".to_string()),
        Operations {
            // Tell the GPU what operations to perform BEFORE rendering the current frame
            load: LoadOp::Clear(Color::rgb(0.1, 0.1, 0.1)), // clear the screen
            store: true,
        },
    )],
    depth_stencil_attachment: Some(RenderPassDepthStencilAttachmentDescriptor {
        attachment: TextureAttachment::Input("depth".to_string()),
        depth_ops: Some(Operations {
            load: LoadOp::Clear(1.0),
            store: true,
        }),
        stencil_ops: None,
    }),
    // Number of samples per pass; greater than 1 when MSAA is enabled
    sample_count: msaa.samples,
});
// Use index 0 of our color_attachments collection for the default clear color
main_pass_node.use_default_clear_color(0);
// Creating our WindowSwapChain node
graph.add_node(
    base::node::PRIMARY_SWAP_CHAIN,
    WindowSwapChainNode::new(WindowId::primary()),
);
// Important: connect the main pass to the swap chain. With MSAA enabled the
// pass resolves into "color_resolve_target"; otherwise it renders directly
// into "color_attachment".
let input_slot = if msaa.samples > 1 {
    "color_resolve_target"
} else {
    "color_attachment"
};
graph.add_slot_edge(
    base::node::PRIMARY_SWAP_CHAIN,   // output node
    WindowSwapChainNode::OUT_TEXTURE, // output slot
    base::node::MAIN_PASS,            // input node
    input_slot,                       // input slot
)
.unwrap();
```

One important note about the example above is the direction of the rendering: we go from a pass input node to a swap-chain output node, yet the blue arrow in the graph points the other way. Don't get confused by this.
```rust
// Creating camera nodes
graph.add_system_node(base::node::CAMERA_3D, CameraNode::new(camera::CAMERA_3D));
graph.add_system_node(base::node::CAMERA_2D, CameraNode::new(camera::CAMERA_2D));
// Pointing our cameras to the pass node
main_pass_node.add_camera(camera::CAMERA_3D); // for 3D rendering
main_pass_node.add_camera(camera::CAMERA_2D); // for 2D rendering
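// Note (not shown in the original post, but present in base.rs): once its
// cameras are wired up, the pass node itself must be added to the graph;
// the slot edges that reference MAIN_PASS rely on this node existing.
graph.add_node(base::node::MAIN_PASS, main_pass_node);
// Also worth knowing (my illustration, not base.rs code): a CameraNode only
// feeds the pass if a camera entity with a matching name exists in the
// world, e.g. spawned like this:
//   commands.spawn_bundle(PerspectiveCameraBundle {
//       camera: Camera { name: Some(camera::CAMERA_3D.to_string()), ..Default::default() },
//       ..Default::default()
//   });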
// Creating a texture copy node
graph.add_node(base::node::TEXTURE_COPY, TextureCopyNode::default());
// Run the texture copy before the main pass.
// Here .add_node_edge(a, b) points a (input node) to b (output node)
graph.add_node_edge(base::node::TEXTURE_COPY, base::node::MAIN_PASS).unwrap();
// Creating a WindowTextureNode for the main depth texture
graph.add_node(
    base::node::MAIN_DEPTH_TEXTURE,
    WindowTextureNode::new(
        WindowId::primary(),
        TextureDescriptor {
            size: Extent3d {
                depth: 1,
                width: 1,
                height: 1,
            },
            mip_level_count: 1,
            sample_count: msaa.samples,
            dimension: TextureDimension::D2,
            format: TextureFormat::Depth32Float,
            // Specify how this texture will be used (see https://api.codestream.com/p/YQPsSekOSnXyJmB2/Dp9cmfnARReU01bPK1shFw)
            usage: TextureUsage::OUTPUT_ATTACHMENT,
        },
    ),
);
// Point our main pass's "depth" slot to its respective depth texture
graph.add_slot_edge(
    base::node::MAIN_DEPTH_TEXTURE,
    WindowTextureNode::OUT_TEXTURE,
    base::node::MAIN_PASS,
    "depth",
)
.unwrap();
// Creating a shared buffers node
graph.add_node(base::node::SHARED_BUFFERS, SharedBuffersNode::default());
// Pointing it to our main pass node
graph.add_node_edge(base::node::SHARED_BUFFERS, base::node::MAIN_PASS).unwrap();
```

3. Modifying the graph to render to a texture

Unfortunately I'm not in the best position to explain how the whole rendering pipeline works (I'm also learning, just like you), but you mentioned rendering to a texture. Have a look at this recently added (post-v0.5) example: https://github.com/bevyengine/bevy/blob/main/examples/3d/render_to_texture.rs

From reading that example, one can develop a reasonable understanding given the visual overview of the graph below. I hope this helps. It would be great if an actual maintainer could append to my answer here.
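To sketch the idea in terms of the calls already shown above (this is my own rough outline, not the example's actual code; the node names, the reuse of WindowTextureNode as the off-screen color target, and the pre-built first_pass_node are all assumptions):

```rust
// SKETCH ONLY: wiring a second pass that renders into an off-screen texture.
// "first_pass" and "first_pass_texture" are made-up node names; first_pass_node
// is assumed to be a PassNode built like main_pass_node above, but with its
// own camera and its own draw-component marker instead of MainPass.

// 1. A node that owns the texture the off-screen pass renders into.
//    WindowTextureNode (shown above for the depth texture) is reused here,
//    so the texture tracks the window size; the real example uses a
//    fixed-size texture node instead.
graph.add_node(
    "first_pass_texture",
    WindowTextureNode::new(
        WindowId::primary(),
        TextureDescriptor {
            size: Extent3d { depth: 1, width: 1, height: 1 },
            mip_level_count: 1,
            sample_count: 1,
            dimension: TextureDimension::D2,
            format: TextureFormat::Bgra8UnormSrgb,
            // Both a render target and sampleable by materials in the main pass
            usage: TextureUsage::OUTPUT_ATTACHMENT | TextureUsage::SAMPLED,
        },
    ),
);
// 2. The off-screen pass node itself.
graph.add_node("first_pass", first_pass_node);
// 3. Bind the texture to the pass's color attachment slot...
graph.add_slot_edge(
    "first_pass_texture",
    WindowTextureNode::OUT_TEXTURE,
    "first_pass",
    "color_attachment",
)
.unwrap();
// 4. ...and order the off-screen pass before the main pass, so its output
//    is ready by the time the main pass samples it.
graph.add_node_edge("first_pass", base::node::MAIN_PASS).unwrap();
```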
-
Great explanation! By the way, here is the default render graph for
-
@myisaak Thanks for the write-up, this is a great resource.
-
Hi! I would like to know if there are examples, documentation, or tutorials about how the Bevy render pipeline works. For instance, I would like to make an orthographic camera render to a texture, and I have no idea how to do that ;p. I tried to read the code a bit, but it seems cryptic to me.
Thanks!