
A dangerous function #128

Open · aabbtree77 opened this issue Dec 18, 2020 · 1 comment

aabbtree77 commented Dec 18, 2020

I stumbled upon a subtle pitfall with gl.BufferData and decided to post it here in case someone else (or future me) hits it.

Suppose we have a simple mesh to render, such as a quad whose triangle indices are

inds := []uint{0, 1, 2, 1, 3, 2}

Then one uploads this data to the GPU buffer:

var buffID uint32
gl.GenBuffers(1, &buffID)
gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, buffID)
// Size argument: number of elements times the element size in bytes.
gl.BufferData(gl.ELEMENT_ARRAY_BUFFER, len(inds)*int(reflect.TypeOf(inds).Elem().Size()), gl.Ptr(inds), gl.STATIC_DRAW)

Here len and Elem().Size() are used to compute the size of inds in bytes. The direct reflect.TypeOf(inds).Size() does not work here: it returns the size of the slice header (24 bytes), not of the backing data (48 bytes, since uint is 8 bytes on my machine).
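
For reference, a minimal standalone sketch of the two size computations (standard library only; the printed numbers are for 64-bit Linux):

package main

import (
	"fmt"
	"reflect"
	"unsafe"
)

func main() {
	inds := []uint{0, 1, 2, 1, 3, 2}

	// Size of the slice header, not of the data it points to.
	fmt.Println(reflect.TypeOf(inds).Size()) // 24

	// Number of elements times the element size: the actual payload.
	fmt.Println(len(inds) * int(reflect.TypeOf(inds).Elem().Size())) // 48

	// unsafe.Sizeof of one element gives the same element size without reflection.
	fmt.Println(len(inds) * int(unsafe.Sizeof(inds[0]))) // 48
}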

Renderdoc reveals that on the GPU the mesh indices come out as

{0 0 1 0 2 0}

because the draw call reads the buffer as 4-byte indices, so each 8-byte uint contributes its low and high 32-bit halves (on a little-endian machine). This destroys the mesh and the subsequent rendering silently, with no compiler warnings and no OpenGL errors.
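
The same view can be reproduced on the CPU side; a minimal sketch, assuming a little-endian 64-bit machine and Go 1.17+ for unsafe.Slice:

package main

import (
	"fmt"
	"unsafe"
)

func main() {
	inds := []uint{0, 1, 2, 1, 3, 2} // 8 bytes per element here

	// Reinterpret the same 48 bytes as 4-byte values, the way a draw call
	// using 32-bit indices reads them.
	as32 := unsafe.Slice((*uint32)(unsafe.Pointer(&inds[0])), len(inds)*2)
	fmt.Println(as32[:6]) // [0 0 1 0 2 0]
}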

The solution is to change the element type from uint, which is 8 bytes on my 64-bit Ubuntu 20.04, to uint32, which is 4 bytes:

inds := []uint32{0, 1, 2, 1, 3, 2}

Everything works fine this way.
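
For completeness, a minimal sketch of the fixed upload together with a matching draw call (the draw call is an assumption about how the mesh is rendered; the point is that gl.UNSIGNED_INT must agree with the 4-byte element type):

inds := []uint32{0, 1, 2, 1, 3, 2} // 4 bytes per index, matching gl.UNSIGNED_INT

var buffID uint32
gl.GenBuffers(1, &buffID)
gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, buffID)
gl.BufferData(gl.ELEMENT_ARRAY_BUFFER, len(inds)*4, gl.Ptr(inds), gl.STATIC_DRAW)

// Later, with the element array buffer still bound:
gl.DrawElements(gl.TRIANGLES, int32(len(inds)), gl.UNSIGNED_INT, gl.PtrOffset(0))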

Interestingly, the analogous code in C++ did not produce any rendering problems on my machine, because unsigned int under gcc on Ubuntu 20.04 is 4 bytes. Consider this a warning not to use platform-dependent integer sizes such as int and uint with the OpenGL buffer functions that take gl.Ptr or unsafe.Pointer. It is one more way to get erroneous behavior and blank screens without any error message.

aabbtree77 (Author) commented:

I have just checked that some C++ code (akin to learnOpenGL) also breaks with index sizes other than uint32_t. For instance, this works:

glBufferData(GL_ELEMENT_ARRAY_BUFFER, inds.size()*sizeof(std::uint32_t), &inds[0], GL_STATIC_DRAW);

whenever inds is of type std::vector<std::uint32_t>, but this does not:

glBufferData(GL_ELEMENT_ARRAY_BUFFER, inds.size()*sizeof(std::uint64_t), &inds[0], GL_STATIC_DRAW);

when inds is of type std::vector<std::uint64_t>. Again, as in both cases above (uint vs uint32 in Go), there are no complaints from the compiler or from the OpenGL error-checking machinery: the first case yields the correct result, while the second leads to incorrect mesh loading and a black screen.

So ideally the API would make this explicit: the element size of the index buffer has to match the index type passed to the draw call (GL_UNSIGNED_BYTE, GL_UNSIGNED_SHORT or GL_UNSIGNED_INT), and 64-bit indices are not among them. Otherwise, at least on my machine (Ubuntu 20.04 + GTX 760), I get undefined behavior, verified in both the C++ and Go code bases.

So the learnopengl.com examples are not safe in this sense either; it is likely not a good idea to use platform-dependent integer types with such low-level graphics APIs. A small guard, sketched below, would at least make the mistake fail loudly.
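
A minimal sketch of such a guard in Go (glIndexType is a hypothetical helper name; it assumes only the go-gl constants):

// glIndexType maps the byte size of an index element to the matching
// OpenGL index type and panics on unsupported sizes, so an 8-byte index
// type fails loudly instead of rendering garbage.
func glIndexType(elemSize uintptr) uint32 {
	switch elemSize {
	case 1:
		return gl.UNSIGNED_BYTE
	case 2:
		return gl.UNSIGNED_SHORT
	case 4:
		return gl.UNSIGNED_INT
	default:
		panic(fmt.Sprintf("unsupported index element size: %d bytes", elemSize))
	}
}

// Usage (with "fmt" and "unsafe" imported alongside the gl bindings):
//   inds := []uint32{0, 1, 2, 1, 3, 2}
//   indexType := glIndexType(unsafe.Sizeof(inds[0])) // gl.UNSIGNED_INT
//   gl.DrawElements(gl.TRIANGLES, int32(len(inds)), indexType, gl.PtrOffset(0))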
