
Bilateral Downsampler

Started by October 14, 2019 05:36 AM
3 comments, last by turanszkij 4 years, 11 months ago

I'm watching Tiago's talk from SIGGRAPH 2013 about the DoF implementation in CryEngine 3 (very similar to the DoF implementation used later in idTech 6, as far as I know).

Around 2:17:33, Tiago says that they were using a bilateral filter for downsampling to avoid bilinear filtering errors. He was basically saying "rejecting taps based on similarity".

I was wondering if someone knows how to implement this downsampler.

Let's say we're downsampling to quarter resolution. I can sample 4 taps and average only those that are "similar" (their intensity difference doesn't cross a threshold); there's a rough sketch of what I mean after the list below. This raises 2 questions though:
1. Which tap should be the reference tap for rejection? Picking any single one seems arbitrary, and picking the average might give an empty result when the taps are completely different (the unfortunate case of a sharp aliased edge).
2. Sampling 4 taps and doing all these operations just to downscale seems like a lot of instructions that will kill perf. This makes me believe that I’m doing it wrong.
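
Something like this is what I have in mind, as a rough HLSL sketch (texture_color, sampler_point_clamp, similarity_threshold and the Lum helper are just placeholder names I made up):

Texture2D<float4> texture_color;                // full-res source
SamplerState sampler_point_clamp;
static const float similarity_threshold = 0.1;  // tweakable

// Rec. 709 luma, used as the "intensity" for the similarity test
float Lum(float3 c) { return dot(c, float3(0.2126, 0.7152, 0.0722)); }

// uv points at the top-left texel of the 2x2 full-res quad that one quarter-res pixel covers
float4 BilateralDownsample(float2 uv)
{
  float4 taps[4];
  taps[0] = texture_color.SampleLevel(sampler_point_clamp, uv, 0, int2(0, 0));
  taps[1] = texture_color.SampleLevel(sampler_point_clamp, uv, 0, int2(1, 0));
  taps[2] = texture_color.SampleLevel(sampler_point_clamp, uv, 0, int2(0, 1));
  taps[3] = texture_color.SampleLevel(sampler_point_clamp, uv, 0, int2(1, 1));

  // arbitrarily treat taps[0] as the reference -- this is exactly question 1
  float4 sum = taps[0];
  float count = 1;
  for (uint i = 1; i < 4; ++i)
  {
    if (abs(Lum(taps[i].rgb) - Lum(taps[0].rgb)) < similarity_threshold)
    {
      sum += taps[i];
      count += 1;
    }
  }
  return sum / count; // average of the "similar" taps only
}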

It would be great to hear how to do this properly, and also a short explanation of why bilinear is bad in this case.

Thanks!


The reason you want bilateral instead of bilinear is that bilinear doesn't account for depth discontinuities; it just blurs everything, so you will get halos around a sharp in-focus character face against an out-of-focus background.

Bilateral samples depth and color for every tap, and if a depth discontinuity is detected (the difference between the center depth and the sample depth is over a threshold), it falls back to the center color sample. It doesn't simply snap back either, but lerps toward the center color by the difference weight. For example, a single-direction Gaussian bilateral blur:


// gaussianOffsets / gaussianWeightsNormalized hold the kernel taps,
// direction is float2(1, 0) or float2(0, 1), resolution_rcp = 1 / resolution
const float center_depth = texture_lineardepth.SampleLevel(sampler_point_clamp, uv, 0);
const float4 center_color = texture_color.SampleLevel(sampler_linear_clamp, uv, 0);

float4 color = 0;
for (uint i = 0; i < 9; ++i)
{
  const float2 uv2 = uv + direction * gaussianOffsets[i] * resolution_rcp;
  const float depth = texture_lineardepth.SampleLevel(sampler_point_clamp, uv2, 0);

  // 0 = same depth as center, 1 = discontinuity; camera_farplane scales the
  // [0, 1] linear depth to world units, depth_threshold is the sensitivity
  const float weight = saturate(abs(depth - center_depth) * camera_farplane * depth_threshold);

  // at a discontinuity the tap is pulled back toward the center color
  color += lerp(texture_color.SampleLevel(sampler_linear_clamp, uv2, 0), center_color, weight) * gaussianWeightsNormalized[i];
}

Keep in mind that it is, strictly speaking, incorrect to separate a bilateral blur into horizontal and vertical passes, but in practice it can look acceptable. For example, I use it for SSAO and it doesn't make a visual difference whether you separate it or not, while the separated version is faster.
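
For reference, "separated" here just means the loop above runs twice with a different direction constant per pass; a sketch of what the shader would consume (the cbuffer layout is made up):

cbuffer BlurConstants
{
  float2 direction;      // float2(1, 0) for the horizontal pass,
                         // float2(0, 1) for the vertical pass
  float2 resolution_rcp; // 1 / render target resolution
};

// The vertical pass reads the horizontal pass's output, so its taps already
// contain colors blended across depth boundaries -- that is where the
// mathematical incorrectness comes from.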


That makes sense to reduce halos. Thank you for the explanation.

I see that you're using the linear depth distance as the weight. I wonder, have you considered other weighting functions? I'm experimenting right now with using the previously calculated CoC difference as the weight; results seem okay so far, and I can save some instructions by not having to linearize the depth.
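
Roughly like this, swapping the depth terms in your loop for CoC (texture_coc and coc_threshold are my names; everything else matches your snippet):

const float center_coc = texture_coc.SampleLevel(sampler_point_clamp, uv, 0);
const float4 center_color = texture_color.SampleLevel(sampler_linear_clamp, uv, 0);

float4 color = 0;
for (uint i = 0; i < 9; ++i)
{
  const float2 uv2 = uv + direction * gaussianOffsets[i] * resolution_rcp;
  const float coc = texture_coc.SampleLevel(sampler_point_clamp, uv2, 0);

  // the CoC map is already in screen-space blur units, so no linearization;
  // coc_threshold is a tweakable, like depth_threshold was
  const float weight = saturate(abs(coc - center_coc) * coc_threshold);

  color += lerp(texture_color.SampleLevel(sampler_linear_clamp, uv2, 0), center_color, weight) * gaussianWeightsNormalized[i];
}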

Thank you, János. I really like your posts about the work you're doing on Wicked Engine.

I haven't tried that, but if it works for you, that's great, and thanks! :)

This topic is closed to new replies.
