At long last - a Gamma correct scene graph :-) #1379
-
The topic of gamma correction and sRGB has a long history in computer graphics. It began as a means of correcting for the non-linear way signals into a cathode ray tube display map to output brightness, so that the final displayed image would match the source data, i.e. picture or video. The pictures/videos would have a gamma correction applied to them prior to being sent to the display, and the format ended up being sRGB or another gamma correction standard. A happy coincidence is that the human eye is also non-linear, being more sensitive to intensity changes at lower light levels, and the sRGB gamma correction biases the precision to favor darker areas where our eyes are most sensitive, enabling us to use 8-bit color components with acceptable visual quality - which is where the sRGB packed RGB and RGBA formats come in.

Fast forward to today, when we have LCD displays that work quite differently, but sRGB is still the standard for source imagery, with LCDs providing their own gamma correction options that either match sRGB or are variations on it. The minimum presentation color space that Vulkan provides for swapchains is VK_COLOR_SPACE_SRGB_NONLINEAR_KHR, illustrating how fundamental it is to modern graphics.

Even before the changes just checked in, the VSG has been using this color space for presenting final images, with some kludges in the rest of the loading and rendering that fudged a somewhat OK visual. With the latest changes we've replaced the kludges with proper support, sensible defaults and user control where required. However, end users may see some visual changes that will require updates to your data and/or applications, as they could have been relying upon those kludges without realizing it. In this thread we'll endeavour to provide details on the changes to the VSG, and how you might tweak your applications to get them back looking better than ever. The changes break down into the following areas:
The way the VSG is now set up by default is to treat all scene graph vertex, material and light colors/intensities as linear; all the built-in PBR, Phong and Flat ShaderSets have vertex and fragment shaders that work in linear space. Color textures now need to explicitly have their VkFormats set to one of the sRGB formats if they contain sRGB colors, or one of the linear formats if not. The vsgXchange::stbi loader that reads png, jpg, gif etc. now sets the format to sRGB by default on the assumption they are color textures, so you'll not need to set it; however, if the image isn't a color texture, such as a normal or roughness map, then you may need to explicitly set the format to linear RGB. There is support for handling this, which I'll go into in other posts below.

To illustrate the effect of sRGB/linear, the new vsgcolorspace example has three rows of spheres with input intensities of 0.0, 0.25, 0.5, 0.75 and 1.0. The bottom row just uses these directly to set the vertex colors, the middle row treats them as being in sRGB color space and then does a sRGB_to_linear(), while the top row shows what happens when applying linear_to_sRGB() to data that is then treated as linear. It's worth noting that the middle row with sRGB_to_linear() provides the most natural progression to the human eye, while linear just looks too bright even though mathematically it's the correct brightness!
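For reference, the conversions the example exercises follow the standard sRGB transfer functions (IEC 61966-2-1). Here is a small standalone sketch of them - purely illustrative, the vsg::sRGB_to_linear()/linear_to_sRGB() convenience functions are what you'd use in practice:

#include <cmath>

// per-channel sRGB <-> linear conversions operating on values in the [0, 1] range
float sRGB_to_linear(float c)
{
    return (c <= 0.04045f) ? c / 12.92f : std::pow((c + 0.055f) / 1.055f, 2.4f);
}

float linear_to_sRGB(float c)
{
    return (c <= 0.0031308f) ? c * 12.92f : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}

// e.g. an input intensity of 0.5 interpreted as sRGB becomes ~0.214 in linear space,
// which is why the middle row of spheres reads as a natural mid-grey.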
-
The vsgcolorspace example illustrates various aspects of managing color space, from the API/code level of how to control settings through to what it actually looks like on screen. One of the features of the example is querying the available presentation color spaces during the window creation's setup of the swapchain, and then using these settings to create windows with different swapchain/color framebuffer storage format and color space options. The query looks like:

// https://registry.khronos.org/vulkan/specs/latest/man/html/VkFormat.html
// https://registry.khronos.org/vulkan/specs/latest/man/html/VkColorSpaceKHR.html
std::cout<<"SwapChain support details: "<<std::endl;
auto swapChainSupportDetails = vsg::querySwapChainSupport(physicalDevice->vk(), surface->vk());
for(auto& format : swapChainSupportDetails.formats)
{
    std::cout<<" VkSurfaceFormatKHR{ VkFormat format = "<<format.format<<", VkColorSpaceKHR colorSpace = "<<format.colorSpace<<"}"<<std::endl;
}

The window setup looks like:

for(auto& format : swapChainSupportDetails.formats)
{
    if (vsg::compare_memory(windowTraits->swapchainPreferences.surfaceFormat, format) != 0)
    {
        auto local_windowTraits = vsg::WindowTraits::create();
        local_windowTraits->windowTitle = vsg::make_string("Alternate swapchain VkSurfaceFormatKHR{ VkFormat format = ", format.format, ", VkColorSpaceKHR colorSpace = ", format.colorSpace, "}");
        local_windowTraits->x = dx * (i % columns);
        local_windowTraits->y = dy * (i / columns);
        local_windowTraits->width = windowTraits->width;
        local_windowTraits->height = windowTraits->height;
        local_windowTraits->fullscreen = windowTraits->fullscreen;
        local_windowTraits->swapchainPreferences.surfaceFormat = format;
        local_windowTraits->overrideRedirect = windowTraits->overrideRedirect;
        local_windowTraits->device = initial_window->getOrCreateDevice();

        ++i;

        auto local_window = vsg::Window::create(local_windowTraits);
        if (local_window) windows.push_back(local_window);
    }
}

On my Kubuntu 24.04 + Geforce 1650 system I see two windows created; the title of each window shows the format and color space selected - here sRGB and linear RGB storage formats, both presented with the same VK_COLOR_SPACE_SRGB_NONLINEAR_KHR color space. The sRGB storage (new default) does the conversion from linear fragment output color to sRGB before storing it, while the linear RGB (old default) just copies the fragment output color, assuming it's already sRGB. Note the left window has the correct colors/intensities while the right is too dark as it's missing a final linear_to_sRGB() conversion, though you'll notice a similarity between the middle sRGB_to_linear() spheres on the left and the linear spheres on the bottom row of the right window - this is due to the implicit conversion on presentation: essentially the source data is being treated as sRGB even though it's coded as linear at the scene graph level.

On my Windows 11 + Geforce 2060 system I get many more swapchain options, with extra storage options as well as extensions to VkColorSpaceKHR. Note how some look similar to the top left window - which is the default sRGB storage + sRGB presentation - while others vary a long way from this standard. For the extensions to VkColorSpaceKHR you won't necessarily get the conversion into that color space done automatically when writing to the color framebuffer, so you'll need to do the appropriate gamma correction on the fragment shader's color output. The VSG won't provide this, so if you need a specialist color space for your application + hardware then you'll need to look up the specs and gamma correction functions appropriate for the color space you want to use. For most users I expect the defaults that the VSG provides to work fine; if there does come a time in your career that you need to dip into non-standard presentation color spaces then you'll just need to learn what's necessary at that point - feel free to ping this forum if you are in this situation and need suggestions.
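If you want your application to explicitly request one of the supported storage formats/color spaces rather than relying on the defaults, a minimal sketch - reusing the swapChainSupportDetails and windowTraits variables from the code above, with the preferred format just an example - could look like:

// request a specific swapchain storage format/color space if the device/surface supports it,
// otherwise leave the WindowTraits with the VSG's default preference.
VkSurfaceFormatKHR preferred{VK_FORMAT_B8G8R8A8_SRGB, VK_COLOR_SPACE_SRGB_NONLINEAR_KHR};
for (auto& format : swapChainSupportDetails.formats)
{
    if (format.format == preferred.format && format.colorSpace == preferred.colorSpace)
    {
        windowTraits->swapchainPreferences.surfaceFormat = preferred;
        break;
    }
}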
-
The gist of what you need to know to get started with most non-sRGB presentation colour spaces is pretty simple, so I'll post it here so people have a starting point as I expect this thread's likely to show up on search engines if someone looks it up.
It can get more complicated than this - lots of people have systems supporting HDR10 and no other HDR formats, tonemapping can be simple or really complicated, and some applications will want much more control over their colour management, but this should be enough to hit the ground running.
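As a flavour of the kind of extra gamma correction a non-sRGB target can require, here is a small self-contained sketch of the SMPTE ST 2084 (PQ) encoding used for HDR10 output - purely illustrative; a real application also has to handle the BT.2020 primaries and whatever tonemapping it has chosen:

#include <cmath>

// encode a linear luminance value (in nits, 0..10000) with the PQ transfer function
// used by HDR10 (SMPTE ST 2084); the constants come straight from the standard.
float linear_to_PQ(float nits)
{
    const float m1 = 2610.0f / 16384.0f;
    const float m2 = 2523.0f / 4096.0f * 128.0f;
    const float c1 = 3424.0f / 4096.0f;
    const float c2 = 2413.0f / 4096.0f * 32.0f;
    const float c3 = 2392.0f / 4096.0f * 32.0f;

    float y = std::pow(nits / 10000.0f, m1);
    return std::pow((c1 + c2 * y) / (1.0f + c3 * y), m2);
}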
-
The vsgcolorspace example also illustrates loading and adapting input data used to set up the scene graph. As I've mentioned above, the VSG's built-in PBR, Phong and Flat ShaderSets all assume linear colours, but as the sRGB_to_linear() row of spheres illustrates, humans are a bit more fickle when it comes to judging how dark/light something is and will view input data in sRGB as more natural than linear. Code wise, this can be as simple as setting up a linear color using one of the convenience functions, i.e.

window->clearColor() = vsg::sRGB_to_linear(0.2f, 0.2f, 0.2f, 0.0f);

As well as directly converting between sRGB and linear colors using the vsg::sRGB_to_linear(..)/linear_to_sRGB(..) convenience functions, you also have control over what format Vulkan assumes for image data that it reads or writes. In the top row of vsgcolorspace you will see 3 textured quads with the Ed Levin flight park texture. The left image is as loaded - which now defaults to sRGB, and as the original data was in sRGB this is the appropriate color space. The middle image is after explicitly changing the format to linear RGB without changing any data, so the lighter color is down to the texture sampler reading the data without any sRGB_to_linear transformation, but as it's originally sRGB this is wrong and makes it too light. Finally, the right hand one is done with a call to convert any linear RGB format into the equivalent sRGB, but as it's already sRGB on loading it doesn't actually change anything. If this test was done with source linear RGB data it would change it :-)

The vsgcolorspace code that creates these three images loads the first, left hand image, then clones it twice and uses sRGB_to_uNorm(..) and uNorm_to_sRGB(..) to get the equivalent formats:

auto image = vsg::read_cast<vsg::Data>("textures/lz.vsgb", options);
if (image)
{
    auto image_uNorm = vsg::clone(image);
    image_uNorm->properties.format = vsg::sRGB_to_uNorm(image->properties.format);

    auto image_sRGB = vsg::clone(image);
    image_sRGB->properties.format = vsg::uNorm_to_sRGB(image->properties.format);

    ....
}
-
Above I've discussed how the scene graph's vertex and material colors are assumed to be linear RGB by the built-in ShaderSets, and how you can explicitly use sRGB_to_linear() to help convert color data into this form. I've also covered how the default format of image data loaded by vsgXchange::stbi is now sRGB, to be consistent with how color data is usually made/recorded, and how you can explicitly change the format to uNorm for cases where you've loaded data that says it's in sRGB format but in fact isn't a color at all, like a normal map, and really should be interpreted as linear RGB data. This is all fine for scene graphs that you are personally building, but for 3rd party data you have to rely upon loaders such as vsgXchange::assimp to handle all this for you. In this section I'll focus on how this is done and what controls you have available to guide the process.

At the most basic level you just load the model and render it, just as you did before; all the standard PBR ShaderSets etc. will automatically handle the shaders and what image formats to use for color maps, normal maps etc. It'll all just work out of the box as before:

auto model = vsg::read("glTF-Sample-Assets/Models/FlightHelmet/glTF/FlightHelmet.gltf", options);

All of this working correctly hides a bit of work under the hood that has to make sure the vertex and material colors are in the correct color space, and that color textures honor their format, be it sRGB or linear RGB, as well as handle cases where the associated image has been loaded as sRGB on the assumption it's color data, but is in fact a normal map or roughness map and needs to be treated as linear RGB. The assimp loader typically loads vertex and material colors in sRGB, so they need to be converted to linear RGB, as this is how most modelling applications work with colors, but glTF is a special case where all vertex and material colors are linear RGB so no conversion is needed. There are also special cases where some models have their vertex or material colors in linear RGB even though that format typically has them in sRGB, so... it can be a bit of a mess, but alas so it's always been with 3d model formats. To help cope with these oddities the vsgXchange::assimp loader has vsg::Options support for the following variables (I'll fully explain the CoordinateSpace type in a later post in this thread):

static constexpr const char* vertex_color_space = "vertex_color_space"; /// CoordinateSpace {sRGB or LINEAR} to assume when reading vertex colors
static constexpr const char* material_color_space = "material_color_space"; /// CoordinateSpace {sRGB or LINEAR} to assume when reading material colors

These two options provide guidance on the source color space to assume for vertex and material colors respectively, overriding the default behaviour of assuming sRGB for all formats except glTF, which is linear RGB. The options can be set programmatically:
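A hedged sketch of what that might look like (assuming the constants above are exposed on the vsgXchange::assimp ReaderWriter and that the values are vsg::CoordinateSpace enums - check the vsgcolorspace example for the exact usage):

auto options = vsg::Options::create();
options->add(vsgXchange::all::create());

// hint that this particular model authors its vertex and material colors in sRGB
options->setValue(vsgXchange::assimp::vertex_color_space, vsg::CoordinateSpace::sRGB);
options->setValue(vsgXchange::assimp::material_color_space, vsg::CoordinateSpace::sRGB);

auto model = vsg::read("glTF-Sample-Assets/Models/FlightHelmet/glTF/FlightHelmet.gltf", options);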
Or added to the command line, i.e.

vsgviewer glTF-Sample-Assets/Models/FlightHelmet/glTF/FlightHelmet.gltf --vertex_color_space sRGB
-
While the format assigned to image data loaded by the vsgXchange::stbi image loader now defaults to sRGB, on the assumption the data is color data originally made/recorded in sRGB, the loader now offers option support for overriding this default:

static constexpr const char* image_format = "image_format"; /// Override read image format (8bit RGB/RGBA default to sRGB) to be specified class of CoordinateSpace (sRGB or LINEAR).

Just like with the vertex and material color space controls in vsgXchange::assimp, you can set this programmatically:
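Again a hedged sketch (assuming the image_format constant is exposed on the vsgXchange::stbi ReaderWriter and takes a vsg::CoordinateSpace value):

auto options = vsg::Options::create();
options->add(vsgXchange::all::create());

// tag 8-bit images read by stbi as linear RGB rather than the default sRGB -
// appropriate for normal maps, roughness maps, elevation maps etc.
options->setValue(vsgXchange::stbi::image_format, vsg::CoordinateSpace::LINEAR);

auto normalMap = vsg::read_cast<vsg::Data>("glTF-Sample-Assets/Models/FlightHelmet/glTF/FlightHelmet_Materials_GlassPlasticMat_Normal.png", options);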
Or added to the command line - the first line just leaves the loaded png with the default sRGB, the second overrides it and sets it to LINEAR:

vsgviewer glTF-Sample-Assets/Models/FlightHelmet/glTF/FlightHelmet_Materials_GlassPlasticMat_Normal.png &
vsgviewer glTF-Sample-Assets/Models/FlightHelmet/glTF/FlightHelmet_Materials_GlassPlasticMat_Normal.png --image_format LINEAR

It's all the same data; just the fragment shader's texture samplers are using the VkFormat information to decide whether gamma correction needs to be applied - sRGB images are sampled with the sRGB_to_linear conversion applied, while linear RGB images are sampled without any conversion. It's also worth noting that when writing to an image with an sRGB VkFormat the data is assumed to be linear and is automatically converted to sRGB. So whether reading or writing data with sRGB format images, the data going in or out is all assumed to be linear, and conversions to the native format are made for you.
-
The next piece of the color space jigsaw is how the new vsg::ShaderSet CoordinateSpace hints help loaders know what coordinate/color spaces are compatible with the associated shaders, so that they can make the necessary changes. As mentioned in posts above, the standard VSG ShaderSets all work with linear vertex and material colors and no longer have any hardwired sRGB_to_linear() conversion when reading color textures, nor any linear_to_sRGB() when writing out the fragment color. This makes the shaders compatible with a wider range of texture formats and color framebuffer formats, but it also relies upon the correct formats being set up when reading from textures. The change to make sRGB the default in vsgXchange::stbi when reading png, jpeg, gif etc. means that color textures work fine out of the box and need no amendments to use with the updated shaders, but normal maps, roughness maps, elevation maps etc. read via the stbi loader will typically have the sRGB format applied when they really should be set up as linear RGB. As mentioned above, one can use the vsg::sRGB_to_uNorm(..) convenience function to help fix this, which is fine for manually created scene graphs where you load every texture one by one and know which format should be what, but a model loader like vsgXchange::assimp needs to figure out when texture formats can't be trusted and need to be reset. This is where the vsg::CoordinateSpace hints come in - providing a hint to the scene graph creation code about what type of data it can work with. The CoordinateSpace type is simply an enum mask:

enum class CoordinateSpace
{
    NO_PREFERENCE = 0,
    LINEAR = (1 << 0),
    sRGB = (1 << 1)
};

I chose the name CoordinateSpace as it is used for all data input types in ShaderSet, not just the subset associated with color space. It's a new addition and I may expand its role further; for now it's only used for hinting what type of data a shader expects as inputs, and it doesn't have any influence over the swapchain color space. The vsg::AttributeBinding and vsg::DescriptorBinding structs used by vsg::ShaderSet now have a coordinateSpace entry that defaults to NO_PREFERENCE.

struct VSG_DECLSPEC AttributeBinding
{
    std::string name;
    std::string define;
    uint32_t location = 0;
    VkFormat format = VK_FORMAT_UNDEFINED;
    CoordinateSpace coordinateSpace = CoordinateSpace::NO_PREFERENCE;
    ref_ptr<Data> data;

    int compare(const AttributeBinding& rhs) const;

    explicit operator bool() const noexcept { return !name.empty(); }
};
VSG_type_name(vsg::AttributeBinding);
struct VSG_DECLSPEC DescriptorBinding
{
    std::string name;
    std::string define;
    uint32_t set = 0;
    uint32_t binding = 0;
    VkDescriptorType descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
    uint32_t descriptorCount = 0;
    VkShaderStageFlags stageFlags = 0;
    CoordinateSpace coordinateSpace = CoordinateSpace::NO_PREFERENCE;
    ref_ptr<Data> data;

    int compare(const DescriptorBinding& rhs) const;

    explicit operator bool() const noexcept { return !name.empty(); }
};
VSG_type_name(vsg::DescriptorBinding);

The vsg::ShaderSet methods for assigning the Attribute and Descriptor bindings have an additional coordinateSpace parameter that defaults to NO_PREFERENCE:

/// add an attribute binding, Not thread safe, should only be called when initially setting up the ShaderSet
void addAttributeBinding(const std::string& name, const std::string& define, uint32_t location, VkFormat format, ref_ptr<Data> data, CoordinateSpace coordinateSpace = CoordinateSpace::NO_PREFERENCE);
/// add an uniform binding. Not thread safe, should only be called when initially setting up the ShaderSet
void addDescriptorBinding(const std::string& name, const std::string& define, uint32_t set, uint32_t binding, VkDescriptorType descriptorType, uint32_t descriptorCount, VkShaderStageFlags stageFlags, ref_ptr<Data> data, CoordinateSpace coordinateSpace = CoordinateSpace::NO_PREFERENCE);

When setting up the ShaderSet you can often leave the defaults, but for particular entries that care about the data coordinate space or image format you can explicitly specify what is suitable by setting the CoordinateSpace mask. The vsgshaderset example has PBR, Phong and Flat shader options; the relevant parts of the PBR setup are:

shaderSet->addDescriptorBinding("diffuseMap", "VSG_DIFFUSE_MAP", MATERIAL_DESCRIPTOR_SET, 0, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, 1, VK_SHADER_STAGE_FRAGMENT_BIT, vsg::ubvec4Array2D::create(1, 1, vsg::Data::Properties{VK_FORMAT_R8G8B8A8_UNORM}));
shaderSet->addDescriptorBinding("detailMap", "VSG_DETAIL_MAP", MATERIAL_DESCRIPTOR_SET, 1, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, 1, VK_SHADER_STAGE_FRAGMENT_BIT, vsg::ubvec4Array2D::create(1, 1, vsg::Data::Properties{VK_FORMAT_R8G8B8A8_UNORM}));
shaderSet->addDescriptorBinding("normalMap", "VSG_NORMAL_MAP", MATERIAL_DESCRIPTOR_SET, 2, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, 1, VK_SHADER_STAGE_FRAGMENT_BIT, vsg::vec3Array2D::create(1, 1, vsg::Data::Properties{VK_FORMAT_R32G32B32_SFLOAT}), vsg::CoordinateSpace::LINEAR);
shaderSet->addDescriptorBinding("aoMap", "VSG_LIGHTMAP_MAP", MATERIAL_DESCRIPTOR_SET, 3, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, 1, VK_SHADER_STAGE_FRAGMENT_BIT, vsg::floatArray2D::create(1, 1, vsg::Data::Properties{VK_FORMAT_R32_SFLOAT}));
shaderSet->addDescriptorBinding("emissiveMap", "VSG_EMISSIVE_MAP", MATERIAL_DESCRIPTOR_SET, 4, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, 1, VK_SHADER_STAGE_FRAGMENT_BIT, vsg::ubvec4Array2D::create(1, 1, vsg::Data::Properties{VK_FORMAT_R8G8B8A8_UNORM}));
shaderSet->addDescriptorBinding("specularMap", "VSG_SPECULAR_MAP", MATERIAL_DESCRIPTOR_SET, 5, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, 1, VK_SHADER_STAGE_FRAGMENT_BIT, vsg::ubvec4Array2D::create(1, 1, vsg::Data::Properties{VK_FORMAT_R8G8B8A8_UNORM}));
shaderSet->addDescriptorBinding("mrMap", "VSG_METALLROUGHNESS_MAP", MATERIAL_DESCRIPTOR_SET, 6, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, 1, VK_SHADER_STAGE_FRAGMENT_BIT, vsg::vec2Array2D::create(1, 1, vsg::Data::Properties{VK_FORMAT_R32G32_SFLOAT}), vsg::CoordinateSpace::LINEAR);
shaderSet->addDescriptorBinding("displacementMap", "VSG_DISPLACEMENT_MAP", MATERIAL_DESCRIPTOR_SET, 7, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, 1, VK_SHADER_STAGE_VERTEX_BIT, vsg::floatArray2D::create(1, 1, vsg::Data::Properties{VK_FORMAT_R32_SFLOAT}), vsg::CoordinateSpace::LINEAR);
shaderSet->addDescriptorBinding("displacementMapScale", "VSG_DISPLACEMENT_MAP", MATERIAL_DESCRIPTOR_SET, 8, VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, 1, VK_SHADER_STAGE_VERTEX_BIT, vsg::vec3Value::create(1.0f, 1.0f, 1.0f));
shaderSet->addDescriptorBinding("material", "", MATERIAL_DESCRIPTOR_SET, 10, VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, 1, VK_SHADER_STAGE_FRAGMENT_BIT, vsg::PbrMaterialValue::create(), vsg::CoordinateSpace::LINEAR);

Note how the normalMap, mrMap and displacementMap textures and the material uniform all explicitly select just LINEAR, telling the scene graph creation code that the image VkFormat should be uNorm rather than sRGB and that the material colors need to be LINEAR. These details are used by vsg::DescriptorConfigurator and the vsgXchange::assimp loader when deciding what data needs transforming/adapting. The new AttributeBinding::coordinateSpace and DescriptorBinding::coordinateSpace members are serialized and will be supported when reading/writing vsg::ShaderSet from VulkanSceneGraph-1.1.10 onwards.

The majority of users will just be loading data via vsgXchange and should be able to benefit from this built-in functionality doing the right thing out of the box, so you needn't worry about the lower level details of how it's done. For advanced users creating your own scene graphs/loaders you'll need to consider the setup of ShaderSet and use of GraphicsPipelineConfigurator etc.; here the vsgXchange::assimp loader's source code will be useful.
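To make the mechanism concrete, here is a hedged sketch of the kind of check a custom loader or scene graph builder might perform when honouring these hints - the function name is hypothetical, and the real logic lives in vsg::DescriptorConfigurator and the vsgXchange::assimp loader:

// given a DescriptorBinding from the ShaderSet and a freshly loaded image,
// adjust the image's VkFormat so it matches what the shaders expect.
void adaptImageToBinding(const vsg::DescriptorBinding& binding, vsg::ref_ptr<vsg::Data> image)
{
    if (!image) return;

    if (binding.coordinateSpace == vsg::CoordinateSpace::LINEAR)
    {
        // shader expects linear data (normal/roughness/displacement maps etc.),
        // so strip any sRGB tagging the loader applied by default.
        image->properties.format = vsg::sRGB_to_uNorm(image->properties.format);
    }
    else if (binding.coordinateSpace == vsg::CoordinateSpace::sRGB)
    {
        // shader expects sRGB tagged color data so the sampler applies sRGB_to_linear on read.
        image->properties.format = vsg::uNorm_to_sRGB(image->properties.format);
    }
    // NO_PREFERENCE (or a combined mask) - leave the loaded format as-is.
}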
-
A final part of the work was updating vsgImGui to provide an ImGuiStyle_sRGB_to_linear(..) convenience function that applies the sRGB_to_linear transformation to the ImGuiStyle colors. The vsgImGui::RenderImGui::_init(..) methods will invoke this for you when the associated render pass is working with an sRGB color attachment.
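If you manage your own ImGui style rather than relying on RenderImGui's defaults, you can apply the conversion yourself. A hedged sketch, assuming the function lives in the vsgImGui namespace and takes an ImGuiStyle reference (check the vsgImGui headers for the exact signature):

// set up the style first, then convert its colors from sRGB to linear
// so they render correctly into an sRGB color attachment.
ImGui::StyleColorsDark();
vsgImGui::ImGuiStyle_sRGB_to_linear(ImGui::GetStyle());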
-
While we've done what we can to make end user applications work seamlessly with the new changes, we can't change areas in your applications that made assumptions about the color space which no longer hold. For instance, if you've set up vertex or material colors, lighting or texture formats in your application that relied upon the implicit gamma correction on presentation without realizing it, you could find your data being presented too light or too dark. If you used an ambient light level of 0.1 before and this worked fine, you'll find that looks washed out now, with way too much ambient light, because implicitly the setup of the rendering as a whole treated this value as sRGB. You can just make this explicit by calling:

light->intensity = vsg::sRGB_to_linear(0.1);

Which is equivalent to just assigning a smaller value like:

light->intensity = 0.02;

If you have any questions, or have seen problems when updating your application to VSG master, let us know here and we can help provide answers and guidance on how to resolve them.
-
Hi All,
I have merged the work for sRGB support into VSG, vsgXchange, vsgImGui and vsgExamples master and bumped the versions to make it possible to check at compile time; once there has been community testing I'll tag the respective releases. The work was initiated by Chris Djali (@AnyOldName3), which I adapted and built upon. I merged the bulk of the changes last Friday and began writing up a post for the forum, but lost it due to a severe storm causing a power cut, which prevented me from getting online for the rest of the working day. Those changes are:
Since Friday I've been pondering how best to introduce the topic to the community, and in the end decided that extra controls and a dedicated example would be best; these further changes are now merged:
I'll break this thread into several posts to allow folks to reply to individual parts more easily, give me a chance to collect screenshots across Linux and Windows, and generally put down my thoughts in manageable chunks.
First up - a teaser: here's what the new vsgcolorspace example looks like under Linux. It creates a set of spheres with different greyscale colours and textures to illustrate gamma correction, with two windows using the different swapchain/colour framebuffer formats that are available on my Kubuntu 24.04 + Geforce 1650 system; the left window is the new sRGB default, and the right the RGB format that was the previous default: