
Cube shadow map sampling filter

Started by
4 comments, last by Josh Klint 1 year, 11 months ago

I successfully implemented a correct cube shadow map sampling filter that takes n×n samples of a cube map, with the exact correct vector offsets and no seams.

Unfortunately, the code is prohibitively slow when run in an n×n loop; the per-sample coordinate math costs far more than the texture lookups themselves.

Before I scrap cube shadow maps and switch over to six 2D images, can anyone point me to a fast working alternative, or offer another suggestion?
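To illustrate the kind of per-sample work a filter like this involves, here is a minimal C++ sketch (not the original shader code; `Vec3`, `cubePcfDirections`, and the tangent-frame construction are my own illustration) that generates the n×n offset directions around a sample vector. The two multiply-adds plus a normalize per sample are the sort of overhead that dominates the loop:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Build an n*n grid of sample directions around 'dir' by offsetting it
// in a tangent frame; each resulting direction would feed one cube-map
// lookup. 'texelSize' is roughly 2/faceResolution.
void cubePcfDirections(Vec3 dir, int n, float texelSize, Vec3* out) {
    dir = normalize(dir);
    // Pick an up vector that is not parallel to dir.
    Vec3 up = std::fabs(dir.y) < 0.99f ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
    Vec3 t = normalize(cross(up, dir)); // tangent
    Vec3 b = cross(dir, t);             // bitangent
    int k = 0;
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            float u = (i - (n - 1) * 0.5f) * texelSize;
            float v = (j - (n - 1) * 0.5f) * texelSize;
            out[k++] = normalize(Vec3{ dir.x + u * t.x + v * b.x,
                                       dir.y + u * t.y + v * b.y,
                                       dir.z + u * t.z + v * b.z });
        }
    }
}
```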

10x Faster Performance for VR: www.ultraengine.com


It looks like this has been discussed in detail here:
https://www.gamedev.net/forums/topic/657968-filtering-cubemaps/


You can switch over to six 2D images (or a texture atlas, or even a virtualized shadow map). You will have to implement a proper coordinate function to handle the seams.
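As a sketch of what such a coordinate function might look like (my own illustration, assuming the common OpenGL-style +X, -X, +Y, -Y, +Z, -Z face ordering; `dirToFaceUV` is a hypothetical name), here is the standard major-axis selection that maps a direction to one of six 2D images:

```cpp
#include <cmath>

// Map a cube-map direction to (face, u, v) with u,v in [0,1]^2 - the
// kind of coordinate function needed when the six faces are stored as
// separate 2D images. Face order: +X, -X, +Y, -Y, +Z, -Z.
void dirToFaceUV(float x, float y, float z, int* face, float* u, float* v) {
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    float ma, uc, vc; // major-axis magnitude and raw face coordinates
    if (ax >= ay && ax >= az) {
        ma = ax; *face = x > 0 ? 0 : 1;
        uc = x > 0 ? -z : z; vc = -y;
    } else if (ay >= az) {
        ma = ay; *face = y > 0 ? 2 : 3;
        uc = x; vc = y > 0 ? z : -z;
    } else {
        ma = az; *face = z > 0 ? 4 : 5;
        uc = z > 0 ? x : -x; vc = -y;
    }
    *u = 0.5f * (uc / ma + 1.0f);
    *v = 0.5f * (vc / ma + 1.0f);
}
```

Seam handling then amounts to detecting when u or v falls outside [0, 1] after an offset and remapping the coordinate onto the adjacent face.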

That being said, I never noticed any extreme slowdown between a cube texture lookup and a 2D texture lookup, but maybe I never used it intensively (it might also be related to hardware differences, especially on older hardware). If I'm not mistaken, modern hardware treats a cube map as a standard 2D texture array with NumLayers = 6, so there shouldn't be any major slowdown.

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

Reading the link above, @l. spiro mentioned that once you set up your texel adjacency across borders, filtering remains the same as with flat textures. And I remember her having expertise on this subject.

But it's not really correct to leave it at this. The problem is at the corner regions, where only three texels meet instead of four. If you use the usual bilinear interpolation, what's the proposed solution for the missing fourth texel?
The correct solution would be expensive. It requires dealing with three texels whose edges are 120 degrees apart from each other. It's expensive because of the divergent code branch, which only a few threads will take. My first idea for calculating weights would be to use mean value weights, which differ wildly from simple bilinear weights.

So I always wondered how this is handled in practice. I assume some compromise, e.g. generating a fourth texel from the average of its two neighbors.
This would cause a small discontinuity at the eight corners, but maybe that's acceptable?

If anyone knows, I've been curious about this detail for many years… : )

Vilem Otte said:
This being said, I never noticed any extreme slowdown between cube texture lookup and 2D texture lookup

The cost is in the calculations it takes to get the exact correct cubemap coordinates when you are taking multiple samples.

I ended up sampling the cubemap through a 2D array sampler with six layers. I didn't have to change much code, and the array texture's clamping works better than a single 6×1 image.


