
"Pixelation" when temporally reprojecting previous frames


Hello,
I am a student building my own real-time pathtracer. It already includes distributed global illumination with fuzzy reflections, soft shadows, etc.

The problem with this application is that I only use a single sample per pixel for all effects, so the result is very noisy. I am using spatial denoising, but that alone is not enough to get rid of the aliasing, so I also have to use temporal denoising, reprojecting previous frames onto the current one.

When I move the camera, the previous frame does not align with the current one, so if I don't do anything I get ugly ghosting artifacts in motion. I worked around this by attenuating the previous-frame accumulation buffer, but then I get no temporal denoising in motion, and every time I stop the camera it has to build up the accumulation buffer from scratch.
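Roughly, the accumulation looks like this (a minimal sketch, not my exact code; g_currentSample, g_accumulation, and cb.historyWeight are placeholder names, with historyWeight pulled toward zero while the camera moves):

// Exponential accumulation: blend the new 1-spp sample into the history.
// Attenuating historyWeight in motion suppresses ghosting, but it also
// throws away the history, so accumulation restarts when the camera stops.
float4 current = g_currentSample[DTid.xy];
float4 history = g_accumulation[DTid.xy];
g_accumulation[DTid.xy] = lerp(current, history, cb.historyWeight);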

My solution to this issue was to create a velocity buffer: in the ray-gen shader I calculate where the current pixel's world-space position was in the previous frame by multiplying it by the previous frame's view-projection matrix, and subtract that from the current frame's screen-space position, like this:

// Project the hit position with the current view-projection matrix
// (inverse(projectionToWorld) is the world-to-projection transform) and
// with the previous frame's view-projection matrix.
float4 currentFramePosition = mul(float4(payload.prevHitPosition, 1.0f), inverse(g_sceneCB.projectionToWorld));
float4 previousFramePosition = mul(float4(payload.prevHitPosition, 1.0f), g_sceneCB.prevFrameViewProj);
// Perspective-divide both, take the NDC delta, flip Y, and bias it so that
// zero motion is stored as 0.5 in the (0, 1)-range motion texture.
g_rtTextureSpaceMotionVector[dtid] = (currentFramePosition.xy / currentFramePosition.w - previousFramePosition.xy / previousFramePosition.w) * float2(0.5f, -0.5f) + float2(0.5f, 0.5f);

I know calculating the current frame's screen-space position is redundant, since I can easily get it from DispatchRaysIndex(), but I did it this way for consistency; optimization will come later. The scale and bias map the value into (0, 1) space, as I had some issues with negative values in these buffers.
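The cheap version would look something like this minimal sketch, using only the DXR launch intrinsics:

// Texture-space position of the current pixel, derived directly from the
// dispatch instead of re-projecting the hit position.
uint2 launchIndex = DispatchRaysIndex().xy;
float2 launchDim = (float2)DispatchRaysDimensions().xy;
float2 currentUV = (launchIndex + 0.5f) / launchDim; // (0, 1) texture space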

Then, in the composition pass in a compute shader, I use this value to offset the sampling position in the texture, like this:

// Unpack the motion vector from (0, 1) storage back to a signed NDC delta
// scaled by the texture dimensions.
float2 m = (motionBuffer[DTid.xy] - 0.5f) * 2.0f * cb.textureDim;
// Dead-zone small offsets to hide precision jitter while the camera is still.
if (m.x < 8 && m.x > -8) m.x = 0.0f;
// Blend 1/8 of the current frame with 7/8 of the reprojected previous frame
// (m.x / 2 converts the two-unit NDC range into a pixel offset).
g_renderTarget[DTid.xy] = (g_renderTarget[DTid.xy] + 7 * g_prevFrame[DTid.xy - float2(m.x / 2, 0)]) / 8;

The if statement is there to suppress floating-point precision issues that were causing the image to drift while the camera was stationary; the y-axis is disabled temporarily.

The problem is that I get blocky artifacts in motion:

Are they caused by the floating-point precision issues, or by something else? How do I get rid of them?

Full code: Microsista/Pathtracer at TemporalReprojection (github.com)

EDIT: I've switched to double precision and decided to pass the offsets as ints to the composition shader:

double4 currentFramePosition = mul(double4(payload.prevHitPosition, 1.0f), inverse(g_sceneCB.projectionToWorld));
double4 previousFramePosition = mul(double4(payload.prevHitPosition, 1.0f), g_sceneCB.prevFrameViewProj);
// Map the NDC delta straight to a signed pixel offset; (960, -540) converts
// the two-unit NDC range to pixels (assuming a 1920x1080 target) and flips Y.
g_rtTextureSpaceMotionVector[dtid] = (currentFramePosition.xy / currentFramePosition.w - previousFramePosition.xy / previousFramePosition.w) * int2(960, -540);

and in the composition pass it's just:

// Offset the previous-frame fetch by the integer motion vector.
g_renderTarget[DTid.xy] = (g_renderTarget[DTid.xy] + 7 * g_prevFrame[DTid.xy - motionBuffer[DTid.xy]]) / 8;

This is how the velocity buffer looks in PIX in motion:

Is this correct? The image still has blocky artifacts in motion:

EDIT2: When moving the camera top to bottom it actually works fine: no artifacts, no ghosting, perfect. But when moving the camera bottom to top there is ghosting, and when moving the camera horizontally there are blocky artifacts. Weird…


You shouldn't need doubles for this; fp32 should have plenty of precision for this kind of calculation. I think your world-space position calculation may be wrong. If I'm following your code correctly, it looks like you calculate the payload depth by computing the distance from camera → hit point and then dividing by 200. There's not even a need to calculate that distance: you can just use RayTCurrent().
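As a minimal sketch of what that could look like in a closest-hit shader (the payload fields here are placeholders, not necessarily your actual struct):

[shader("closesthit")]
void ClosestHit(inout RayPayload payload, in BuiltInTriangleIntersectionAttributes attribs)
{
    // RayTCurrent() already holds the parametric distance to the accepted
    // hit, so the camera-to-hit distance never needs to be recomputed.
    payload.hitT = RayTCurrent();
    // The world-space hit position also falls out of the ray directly:
    payload.prevHitPosition = WorldRayOrigin() + RayTCurrent() * WorldRayDirection();
}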

On a side note, you probably don't want to point-sample the “previous frame” texture during reprojection. After applying your velocity to the current screen-space position you're not going to end up exactly at a pixel center, and so you will end up with “wobbly” results where you snap from one neighboring texel to the other. Instead you want to apply some kind of good reconstruction filter, ideally one that maintains some sharpness. I implemented a bunch of filters in this sample if you want to try them out and see the difference visually, but I would recommend this one.
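One common choice is a 9-tap Catmull-Rom filter built from bilinear fetches. Here's a minimal sketch (not necessarily the exact filter from the sample above; it assumes the history is bound as an SRV with a linear clamp sampler rather than as a UAV):

// 9-tap bicubic Catmull-Rom sample of the history texture, folding the
// inner taps into bilinear fetches so only 9 samples are needed.
float4 SampleHistoryCatmullRom(Texture2D<float4> tex, SamplerState linearClamp,
                               float2 uv, float2 texDim)
{
    float2 samplePos = uv * texDim;
    float2 texPos1 = floor(samplePos - 0.5f) + 0.5f;
    float2 f = samplePos - texPos1;

    // Catmull-Rom weights for the fractional offset f.
    float2 w0 = f * (-0.5f + f * (1.0f - 0.5f * f));
    float2 w1 = 1.0f + f * f * (-2.5f + 1.5f * f);
    float2 w2 = f * (0.5f + f * (2.0f - 1.5f * f));
    float2 w3 = f * f * (-0.5f + 0.5f * f);

    // Combine the two middle taps into a single bilinear fetch.
    float2 w12 = w1 + w2;
    float2 offset12 = w2 / w12;

    float2 texPos0 = (texPos1 - 1.0f) / texDim;
    float2 texPos3 = (texPos1 + 2.0f) / texDim;
    float2 texPos12 = (texPos1 + offset12) / texDim;

    float4 result = 0.0f;
    result += tex.SampleLevel(linearClamp, float2(texPos0.x, texPos0.y), 0) * w0.x * w0.y;
    result += tex.SampleLevel(linearClamp, float2(texPos12.x, texPos0.y), 0) * w12.x * w0.y;
    result += tex.SampleLevel(linearClamp, float2(texPos3.x, texPos0.y), 0) * w3.x * w0.y;
    result += tex.SampleLevel(linearClamp, float2(texPos0.x, texPos12.y), 0) * w0.x * w12.y;
    result += tex.SampleLevel(linearClamp, float2(texPos12.x, texPos12.y), 0) * w12.x * w12.y;
    result += tex.SampleLevel(linearClamp, float2(texPos3.x, texPos12.y), 0) * w3.x * w12.y;
    result += tex.SampleLevel(linearClamp, float2(texPos0.x, texPos3.y), 0) * w0.x * w3.y;
    result += tex.SampleLevel(linearClamp, float2(texPos12.x, texPos3.y), 0) * w12.x * w3.y;
    result += tex.SampleLevel(linearClamp, float2(texPos3.x, texPos3.y), 0) * w3.x * w3.y;
    return result;
}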

Thank you,
the issue was that I didn't put a UAV barrier between reading from and writing to the previous-frame buffer…

I separated writing to the previous-frame buffer into a different pass, put a UAV barrier between the dispatches, and it works like a charm.

What led me to this was that moving the camera down resulted in reads from texels that had already been written, so the result was correct; moving the camera up resulted in reads that were always stale, so there was ghosting; and moving the camera sideways produced blocky artifacts, since some thread groups had already finished while others had not.

The UAV barrier ensures all of the writing dispatches are completed before I read from the resource. That's DX12 for you, I guess.
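For anyone hitting the same thing, the fix on the host side looks roughly like this (a minimal C++/D3D12 sketch rather than the exact code; m_commandList and m_prevFrame are placeholder names):

// Pass 1: the dispatch that writes the previous-frame buffer through its UAV.
m_commandList->Dispatch(groupsX, groupsY, 1);

// UAV barrier: all UAV writes from the dispatch above must complete before
// any following dispatch reads the same resource.
CD3DX12_RESOURCE_BARRIER uavBarrier = CD3DX12_RESOURCE_BARRIER::UAV(m_prevFrame.Get());
m_commandList->ResourceBarrier(1, &uavBarrier);

// Pass 2: the composition dispatch, which reads the previous-frame buffer.
m_commandList->Dispatch(groupsX, groupsY, 1);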
