I stumbled upon one obscurity with gl.BufferData and decided to post this here in case someone else (or future me) hits it.
Suppose we have a simple mesh to be rendered, such as a quad, whose vertex and index data are:
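A minimal sketch of such data, assuming the go-gl bindings; the vertex positions are only illustrative, and the indices deliberately use the problematic uint type:

verts := []float32{
	-0.5, -0.5, 0.0, // bottom left
	0.5, -0.5, 0.0, // bottom right
	-0.5, 0.5, 0.0, // top left
	0.5, 0.5, 0.0, // top right
}
inds := []uint{0, 1, 2, 1, 3, 2} // uint is platform-dependent: 8 bytes on 64-bit Linux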
Then one uploads this data to the GPU buffer:
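A sketch of the upload, assuming the go-gl bindings, where gl.BufferData takes the byte size as an int and the data as an unsafe.Pointer; the ebo handle is illustrative:

// with "reflect" and a go-gl package such as github.com/go-gl/gl/v3.3-core/gl imported
var ebo uint32
gl.GenBuffers(1, &ebo)
gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, ebo)
gl.BufferData(gl.ELEMENT_ARRAY_BUFFER,
	len(inds)*int(reflect.TypeOf(inds).Elem().Size()), // byte size of the index data
	gl.Ptr(inds), gl.STATIC_DRAW)
// later, in the draw loop, the indices are consumed as 4-byte values:
// gl.DrawElements(gl.TRIANGLES, int32(len(inds)), gl.UNSIGNED_INT, nil)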
Here len and Elem are used to get the size of inds in bytes automatically. They are needed because the direct reflect.TypeOf(inds).Size() does not work on a slice: it returns the size of the slice header, 24 bytes, rather than the 48 bytes of the actual index data (6 indices of 8 bytes each, uint being 8 bytes on my machine).
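For illustration, the numbers on a 64-bit machine:

// with "fmt" and "reflect" imported, and inds as defined above
fmt.Println(reflect.TypeOf(inds).Size())                         // 24: size of the slice header itself
fmt.Println(reflect.TypeOf(inds).Elem().Size())                  // 8: size of a single uint element
fmt.Println(len(inds) * int(reflect.TypeOf(inds).Elem().Size())) // 48: the byte count BufferData needs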
Renderdoc reveals that on the GPU the index buffer arrives as

{0 0 1 0 2 0}
instead of the intended {0 1 2 1 3 2}: each 8-byte uint index is read as two 4-byte values, its low half followed by a zero high half. This silently destroys the mesh and the subsequent rendering, without any compiler warning or OpenGL error.
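One way to see where {0 0 1 0 2 0} comes from is to reinterpret the same bytes the way the GPU does for gl.UNSIGNED_INT indices; this sketch needs Go 1.17+ for unsafe.Slice and assumes a little-endian machine:

// with "fmt" and "unsafe" imported, and the []uint indices from above
as32 := unsafe.Slice((*uint32)(unsafe.Pointer(&inds[0])), 2*len(inds))
fmt.Println(as32[:len(inds)]) // [0 0 1 0 2 0] -- each 8-byte index becomes two 4-byte ones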
The solution is to change the index type from uint, which on my 64-bit Ubuntu 20.04 is 8 bytes, to uint32, which is 4 bytes:

inds := []uint32{0, 1, 2, 1, 3, 2}
Everything works fine this way.
Interestingly, the analogous expressions in C++ did not produce any rendering problems on my machine, since unsigned int as compiled by gcc on Ubuntu 20.04 is 4 bytes. Consider this a warning not to use platform-dependent integer types such as int and uint with the OpenGL buffer functions that take gl.Ptr or unsafe.Pointer. It is one more way to get undefined or erroneous behavior and blank screens without any error message.
The same thing shows up on the C++ side: the analogous glBufferData call works when the indices are a 4-byte type (unsigned int or std::uint32_t) but fails when inds is of the type std::vector<std::uint64_t>. Again, as in both cases above (uint vs uint32 in Go), there are no complaints from the compiler or from the OpenGL error-checking system, yet the first case yields the correct result while the second leads to incorrect mesh loading and a black screen.

So ideally the API should make it known that only 32-bit integers can be specified as vertex indices here; otherwise, at least on my machine (Ubuntu 20.04 + GTX 760), I get undefined behavior, verified in both the C++ and Go code bases.

So the learnopengl.com examples are not safe in this sense either; it is likely not a good idea to use platform-dependent integer types with such low-level graphics APIs.
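A cheap guard one can add in one's own wrapper code is to check the element size before uploading; just a sketch, not part of any existing API:

// with "log" and "reflect" imported
if s := reflect.TypeOf(inds).Elem().Size(); s != 4 {
	log.Fatalf("index element size is %d bytes, but gl.DrawElements with gl.UNSIGNED_INT expects 4", s)
}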