
Filtering and aliasing in a scene

Started by
10 comments, last by AhmedSaleh 2 years, 7 months ago

I have 4 fisheye cameras and I project them, after lens correction, onto a 3D bowl mesh. I'm getting flickering, as if ants are moving, in some areas of the scene (top view) from the cameras.


I've tried generating mipmaps, which solved the flickering, and the quality is much better. The problem is that I'm running this on an embedded platform, and generating mipmaps every frame is not feasible; it's a CPU-intensive operation. A technique like render-to-texture with SSAA is also a problem.

I have also tried creating a 2D mask and doing bicubic interpolation/filtering on the areas where the flicker appears, but it didn't solve the issue.

I'm looking for methods to solve this problem.



Game Programming is the process of converting dead pictures to live ones .

I guess you can solve it easily by taking multiple samples. E.g. divide your destination pixel into a 4x4 grid, then do the lens correction and one texture fetch for each sub-pixel sample individually, and take the average of all samples.
You could also use N random sub-pixel positions, where N is larger in areas of more distortion, as an optimization.
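To make the idea concrete, here is a minimal CPU-side sketch in C++. `undistort` and `fetchSource` are hypothetical stand-ins for the lens-correction mapping and the texture fetch, not part of any real API:

```cpp
// Hypothetical stand-in for the lens-correction mapping: takes a
// destination UV and returns the corresponding source-image UV.
static void undistort(float u, float v, float& su, float& sv)
{
    su = u; sv = v; // identity here; the real mapping is camera-specific
}

// Hypothetical stand-in for a (bilinear) source texture fetch.
static float fetchSource(float u, float v)
{
    return u + v; // a real shader would sample the camera image here
}

// Average an N x N grid of sub-pixel samples for destination pixel (px, py).
// invW/invH are 1/imageWidth and 1/imageHeight.
float supersamplePixel(int px, int py, float invW, float invH, int N)
{
    float sum = 0.0f;
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
        {
            // sub-sample position, centered on the grid cell midpoints
            float u = (px + (i + 0.5f) / N) * invW;
            float v = (py + (j + 0.5f) / N) * invH;
            float su, sv;
            undistort(u, v, su, sv);    // map each sub-sample individually
            sum += fetchSource(su, sv); // one fetch per sub-sample
        }
    return sum / float(N * N); // average of all samples
}
```

In a fragment shader the same loop would run per pixel, with the stand-ins replaced by the actual fisheye projection math and a bilinear texture() lookup.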

@JoeJ

Thanks so much Joe.

Is it possible to write a pseudocode fragment shader to share the idea?

Because I have tried to do bicubic interpolation with a mask, but that didn't work…

Many Thanks


@joej

Here is my trial:

#version 330 core
out vec4 FragColor;

in vec3 ourColor;
in vec2 TexCoord;

uniform sampler2D texture1;
uniform sampler2D texture2;

vec4 cubic(float v){
    vec4 n = vec4(1.0, 2.0, 3.0, 4.0) - v;
    vec4 s = n * n * n;
    float x = s.x;
    float y = s.y - 4.0 * s.x;
    float z = s.z - 4.0 * s.y + 6.0 * s.x;
    float w = 6.0 - x - y - z;
    return vec4(x, y, z, w) * (1.0/6.0);
}

vec4 textureBicubic(sampler2D sampler, vec2 texCoords){

   vec2 texSize = vec2(textureSize(sampler, 0)); // textureSize returns ivec2, so cast to vec2
   vec2 invTexSize = 1.0 / texSize;

   texCoords = texCoords * texSize - 0.5;


    vec2 fxy = fract(texCoords);
    texCoords -= fxy;

    vec4 xcubic = cubic(fxy.x);
    vec4 ycubic = cubic(fxy.y);

    vec4 c = texCoords.xxyy + vec2 (-0.5, +1.5).xyxy;

    vec4 s = vec4(xcubic.xz + xcubic.yw, ycubic.xz + ycubic.yw);
    vec4 offset = c + vec4 (xcubic.yw, ycubic.yw) / s;

    offset *= invTexSize.xxyy;

    vec4 sample0 = texture(sampler, offset.xz);
    vec4 sample1 = texture(sampler, offset.yz);
    vec4 sample2 = texture(sampler, offset.xw);
    vec4 sample3 = texture(sampler, offset.yw);

    float sx = s.x / (s.x + s.y);
    float sy = s.z / (s.z + s.w);

    return mix(
       mix(sample3, sample2, sx), mix(sample1, sample0, sx)
    , sy);
}

void main()
{
	
	float mask_green  = texture(texture2, TexCoord).g;
	
	vec4 out_color;
	if ( mask_green > 0.1)
	{
		out_color = textureBicubic(texture1, TexCoord);
	
	}
	else
	{
		out_color = texture(texture1, TexCoord); // plain bilinear fetch outside the mask, so out_color is always written
	}
	FragColor = out_color;
}

How much a bicubic filter can help depends on the lens-correction distortion. If one pixel maps to an area of about 5x5 pixels in the photo, a cubic filter covering 3x3 pixels will still undersample. And you want the average of an area, not a better point sample, which is why the better filter might not be good enough. To improve this, mipmaps would help of course, as they average over an area.

I'll draw a picture to illustrate how multiple samples should fix it:

On the left is a destination pixel with 6 random samples. You map each of them with your distortion-correction math to the source image, where they may cover a larger area, shown on the right.
A bilinear lookup should suffice for each sample, and the average of all samples gives a good estimate if the sample count is high enough. Simple Monte Carlo integration.

So all you need is a way to generate random sub-pixel positions, usually done using hash functions. Pseudo code would be like this:

const int sampleCount = 8;
const int dimensions = 2;
int seed = (int(curPixel.x) * screenWidth + int(curPixel.y)) * sampleCount * dimensions; // ensure each sub-sample gets its own unique seed
vec3 sum = vec3(0.0);
for (int i = 0; i < sampleCount; i++)
{
	float subOffsetU = randF(seed + i * dimensions + 0) - 0.5; // assuming the hash returns values between 0 and 1
	float subOffsetV = randF(seed + i * dimensions + 1) - 0.5;
	vec2 subSampleCoords = pixelUV + vec2(subOffsetU, subOffsetV);
	sum += TextureFetch(ProjectionUnDistort(subSampleCoords));
}
vec3 averagedResult = sum / float(sampleCount);

An example C++ hash function I'm using would be this:

	inline uint32_t randI6 (uint32_t v) // pcg
	{
		uint32_t state = v * 747796405u + 2891336453u;
		uint32_t word = ((state >> ((state >> 28u) + 4u)) ^ state) * 277803737u;
		return (word >> 22u) ^ word;
	}
	
	inline float randF (uint32_t time)
	{
		uint32_t s = randI6(time);
		return float(s) / float(0x100000000LL);
	}
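Putting the hash and the jittered offsets together, a quick CPU-side sanity check might look like this (a sketch; `meanOffsetU` is just a hypothetical test driver for verifying the offsets stay in range, not part of the shader):

```cpp
#include <cstdint>

// PCG-style integer hash, as in the post above.
inline uint32_t randI6(uint32_t v)
{
    uint32_t state = v * 747796405u + 2891336453u;
    uint32_t word = ((state >> ((state >> 28u) + 4u)) ^ state) * 277803737u;
    return (word >> 22u) ^ word;
}

// Map the hashed integer to a float in [0, 1).
inline float randF(uint32_t time)
{
    return float(randI6(time)) / float(0x100000000LL);
}

// Average of sampleCount jittered U offsets for one pixel; offsets are
// drawn from [-0.5, 0.5), so the mean must stay inside that range.
float meanOffsetU(int px, int py, int screenWidth, int sampleCount)
{
    const int dimensions = 2;
    int seed = (px * screenWidth + py) * sampleCount * dimensions;
    float sum = 0.0f;
    for (int i = 0; i < sampleCount; ++i)
        sum += randF(uint32_t(seed + i * dimensions)) - 0.5f;
    return sum / float(sampleCount);
}
```

Checking the offsets like this on the CPU makes the pattern problems mentioned below much easier to catch than debugging inside a shader.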

@JoeJ Thanks so much Joe

  1. How would I use the above approach without random samples?
  2. How would I use the C++ hash functions inside the shader?
  3. Is there a need for the mask that I created?
  4. There is no need to do the filtering over the whole image; that's why I created the mask.

I forgot to mention that such hash functions are not perfect random number generators. It can happen, e.g. if the image width has a certain value, that the generated random samples show patterns, and then the method no longer works properly. Without visualizing sub-sample positions that's hard to detect, which is quite an annoying problem. That's why you can find hundreds of different hash functions on Shadertoy, but none is perfect. To work around it, we can try adding some arbitrary constants like so:

int seed = (int(curPixel.x) * (screenWidth + 111) + int(curPixel.y)) * sampleCount * (dimensions + 1);

AhmedSaleh said:
How would I use the above approach without random samples ?

A regular grid, like subdividing the square into an NxN grid for N^2 samples. That often works better anyway.

AhmedSaleh said:
How would I use the c++ hash functions inside the shader ?

They should just work after adapting the syntax. Otherwise Shadertoy has many examples.

AhmedSaleh said:
Is there a need for the mask that I created?

No.

AhmedSaleh said:
There is no need to do the filtering over the whole image, that's why I created the mask..

I would make it work first, and after quality is fine you could use the mask for an adaptive sample count for better performance.

@JoeJ

How about this solution that I did ?

Would you review it please ?


vec4 average_filter(in sampler2D t, in vec2 uv, in vec2 textureSize)
{
   vec4 total = vec4(0.0);

   const int down = 16; // 16x16 sub-samples per pixel

   float x_subpix_inc = 1.0 / (textureSize.x * float(down));
   float y_subpix_inc = 1.0 / (textureSize.y * float(down));

   for (int i = 0; i < down; i++) {
       for (int j = 0; j < down; j++) {

           vec2 sample_loc = vec2(uv.x + float(i) * x_subpix_inc, uv.y + float(j) * y_subpix_inc);

           total += texture(t, sample_loc); // texture2D is deprecated in #version 330 core

       }
   }

   return total / float(down * down); // floor() takes a float; use an integer product instead

}


void main()
{
 
float mask_green  = texture(texture2, TexCoord).g;
vec2 texture_size = vec2(textureSize(texture2, 0)); // textureSize returns ivec2, so cast to vec2
vec2 texel_size  = 1.0 / texture_size;
 
vec4 out_color;
if ( mask_green > 0.1)
{
   
   
 out_color = average_filter(texture1, TexCoord, texture_size);

 
}
else
{
 out_color = texture(texture1, TexCoord);

}
FragColor = out_color;
}

AhmedSaleh said:
How about this solution that I did ?

I see it's sampling a regular grid per texel, so yes, this should give you high quality multi-sampling.

But I'm missing the projection mapping in your inner loop?
I assumed you have some fancy analytical mapping from planar to fisheye projection, and you would need to apply it to each sample before the texture lookup.
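To illustrate where that mapping belongs, here is a CPU-side sketch of the corrected loop. `projectFisheye` and `sampleTexture` are hypothetical stand-ins for the poster's planar-to-fisheye mapping and the bilinear camera-image lookup:

```cpp
struct Vec2 { float x, y; };

// Hypothetical placeholder: the real planar-to-fisheye lens mapping
// would go here; identity is used only so the sketch is self-contained.
static Vec2 projectFisheye(Vec2 uv)
{
    return uv;
}

// Hypothetical placeholder for the bilinear camera-image fetch.
static float sampleTexture(Vec2 uv)
{
    return 0.5f * uv.x + 0.5f * uv.y;
}

// The corrected inner loop: the projection is applied to every
// sub-sample before the texture lookup, not once per pixel.
float averageWithProjection(Vec2 uv, float xInc, float yInc, int down)
{
    float total = 0.0f;
    for (int i = 0; i < down; ++i)
        for (int j = 0; j < down; ++j)
        {
            Vec2 sub { uv.x + i * xInc, uv.y + j * yInc };
            total += sampleTexture(projectFisheye(sub)); // per-sample mapping
        }
    return total / float(down * down);
}
```

In the posted average_filter this would mean wrapping pure_sample_loc in the fisheye projection before the texture() call.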

This topic is closed to new replies.
