
OpenGL glMemoryBarrier Usage

Started by
1 comment, last by AndreyVK_D3D 4 years, 10 months ago

I am confused by the OpenGL glMemoryBarrier API: I don't know when to use it, or which barrier bit to use. I list some cases here; please tell me whether they use glMemoryBarrier correctly. Thank you.

case 1: In a compute shader, use "imageStore" to write some content to "texture", then attach "texture" to an FBO and use glReadPixels to read its content. imageStore is an incoherent memory access, so glMemoryBarrier is needed. Because glReadPixels reads an FBO attachment, I use GL_FRAMEBUFFER_BARRIER_BIT to ensure the visibility of the imageStore writes. Is that right?
...
glBindImageTexture(0, texture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32UI);
glDispatchCompute(1, 1, 1);
glMemoryBarrier(GL_FRAMEBUFFER_BARRIER_BIT);
...
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
glReadPixels(0, 0, kWidth, kHeight, GL_RED_INTEGER, GL_UNSIGNED_INT, outputValues);

case 2: In a compute shader, read the content of "texture[0]" and write to "texture[1]" in the first glDispatchCompute, then read "texture[1]" and write to "texture[2]" in the second glDispatchCompute. Is it necessary to use glMemoryBarrier between the two glDispatchCompute calls, and is GL_SHADER_IMAGE_ACCESS_BARRIER_BIT the correct bit here?
...
glBindImageTexture(0, texture[0], 0, GL_FALSE, 0, GL_READ_ONLY, GL_R32UI);
glBindImageTexture(1, texture[1], 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32UI);
glDispatchCompute(1, 1, 1);
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
glBindImageTexture(0, texture[1], 0, GL_FALSE, 0, GL_READ_ONLY, GL_R32UI);
glBindImageTexture(1, texture[2], 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32UI);
glDispatchCompute(1, 1, 1);
...

case 3: Mixing the compute pipeline with the graphics pipeline. First write some content to "texture" in the compute pipeline, then sample "texture" in the graphics pipeline and draw to the framebuffer. Is it necessary to use glMemoryBarrier between glDispatchCompute and glDrawArrays, and is GL_TEXTURE_FETCH_BARRIER_BIT correct here?
...
glBindTexture(GL_TEXTURE_2D, texture);
...
glUseProgram(csProgram);
glBindImageTexture(0, texture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
glDispatchCompute(1, 1, 1);
glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);
...
glUseProgram(program);
...
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

case 4: In a compute shader, increment atomic counter variables, then use glMapBufferRange to map the buffer and read it back on the CPU. Is an atomic counter an incoherent memory access? Is it necessary to use glMemoryBarrier between glDispatchCompute and glMapBufferRange, and is GL_BUFFER_UPDATE_BARRIER_BIT correct here?
GLuint atomicCounterBuffer;
glGenBuffers(1, &atomicCounterBuffer);
glBindBuffer(GL_ATOMIC_COUNTER_BUFFER, atomicCounterBuffer);
...

glBindBufferBase(GL_ATOMIC_COUNTER_BUFFER, 0, atomicCounterBuffer);

glDispatchCompute(1, 1, 1);

glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);

glBindBuffer(GL_ATOMIC_COUNTER_BUFFER, atomicCounterBuffer);
GLuint bufferData[3];
void *mappedBuffer =
    glMapBufferRange(GL_ATOMIC_COUNTER_BUFFER, 0, sizeof(GLuint) * 3, GL_MAP_READ_BIT);
memcpy(bufferData, mappedBuffer, sizeof(bufferData));
glUnmapBuffer(GL_ATOMIC_COUNTER_BUFFER);


@xhcao, did you try reading the documentation on glMemoryBarrier?

