Virtual Texturing, I don't get it. How do I use the feedback buffer to find which texture to load?

Started by iGrfx
5 comments, last by JoeJ 2 years, 9 months ago

Recently I began to learn virtual texturing (VT) based on http://silverspaceship.com/src/svt/.

I have searched many papers and slides on this topic, but all of them focus on mips and other details. One thing I still don't get is how you determine which texture, and which part of it, to load when you just output a UV and a mip. The gap here is: in a frame, how do I know which pixel belongs to which texture?


iGrfx said:
when you just output a UV and a mip. The gap here is: in a frame, how do I know which pixel belongs to which texture?

Um, if you output UVs, those UVs tell you the position in texture space, so calculating the tile holding the texels for that position is trivial (see the sketch below).
Likely I got your question wrong; feel free to rephrase it.
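For what it's worth, a minimal C++ sketch of that calculation, assuming a single square virtual texture; VT_SIZE, TILE_SIZE and the function name are made up for illustration, not from the SVT demo:

#include <algorithm>
#include <cstdint>

// Made-up example sizes (assumptions).
constexpr uint32_t VT_SIZE   = 65536; // virtual texture side length in texels
constexpr uint32_t TILE_SIZE = 128;   // tile side length in texels

struct TileId { uint32_t x, y, mip; };

// Map one feedback sample (uv in [0,1], mip level) to the tile that holds it.
TileId TileFromFeedback(float u, float v, uint32_t mip)
{
    uint32_t mipSize      = VT_SIZE >> mip;                    // mip size in texels
    uint32_t tilesPerSide = std::max(mipSize / TILE_SIZE, 1u); // mip size in tiles

    TileId t;
    t.x   = std::min(uint32_t(u * float(tilesPerSide)), tilesPerSide - 1);
    t.y   = std::min(uint32_t(v * float(tilesPerSide)), tilesPerSide - 1);
    t.mip = mip;
    return t;
}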

@JoeJ Thanks for the reply!

My question is:

Say I draw a duck, and some pixel currently being drawn uses texcoord [0.3, 0.5]. Then I draw a cat with texcoord [0.6, 0.6]. Each mesh is drawn in its own UV space. How can I know which texture I need to upload to the physical GPU texture? Every texture in VT has its own UV space in the range [0,1]. From the feedback I only know each geometry's UV; I can't figure out which texture it belongs to, because every texture contains, say, the coordinate [0.1, 0.1]. If I have ten textures, which one's [0.1, 0.1] should I sample?

Ha, ok. AFAICT, the standard way of virtual texturing is to have just one single but huge texture. So you would scale down the duck and cat UVs and translate them so they don't overlap but sit beside each other in one common texture space. We also put both texture images into one big image so they match the new UVs.
At runtime, only the tiles containing duck and cat texture would be loaded if we render just that.
You also want a preprocessing tool to generate the common UV atlas and image tiles automatically, so you don't have to scale UVs manually for all your models (a sketch of that remapping follows below).
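A minimal sketch of that offline remapping step, assuming each source texture is assigned a rectangle in atlas space by some packer; AtlasRect and RemapUVs are hypothetical names:

#include <cstddef>

// One rectangle inside the common [0,1] atlas space, produced by a packer (assumed).
struct AtlasRect { float offsetU, offsetV, scaleU, scaleV; };

// Rewrite a model's UVs so they address its own rectangle inside the big atlas.
void RemapUVs(float* uvs, size_t count, const AtlasRect& r)
{
    for (size_t i = 0; i < count; ++i)
    {
        uvs[2 * i + 0] = r.offsetU + uvs[2 * i + 0] * r.scaleU; // e.g. duck at offset (0.0, 0.0)
        uvs[2 * i + 1] = r.offsetV + uvs[2 * i + 1] * r.scaleV; // e.g. cat  at offset (0.5, 0.0)
    }
}

After this, a feedback UV of, say, [0.1, 0.1] is unambiguous: it can only fall inside one model's rectangle of the shared atlas.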

The only limitation we get from this is that all our textures must share the same number of channels. If we want an alpha channel for some foliage model, all other models get an alpha channel too, although they don't need it. That's maybe the point where we want to use two textures, one RGB and one RGBA, and then the problem you mention is unavoidable.
You could then store a material ID in your GBuffer, and the material knows whether it needs RGB or RGBA and picks the corresponding texture (sketched below).
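A minimal sketch of that routing, assuming exactly two virtual textures and a material table indexed by the ID read back from the GBuffer; all names here are assumptions:

#include <cstdint>

// Which of the two virtual textures a material samples from (assumption: exactly two).
enum class VtLayout : uint8_t { RGB, RGBA };

struct Material
{
    VtLayout layout; // decided at authoring time
};

// During feedback analysis, route each request to the matching page table / tile cache.
VtLayout SelectVirtualTexture(const Material* materials, uint32_t materialId)
{
    return materials[materialId].layout; // RGB requests feed VT 0, RGBA requests feed VT 1
}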

The topic is related to ‘global parametrization’, which might make it easier to think about. Say we make a game like Rage, where each surface in the game has its own unique texture. Our scene has 10 ducks, but each duck can have different colors. To achieve this, we would generate unique UVs for each instance of the duck. So even though the geometry is the same for all of them, each has its own texture. Now we could even bake static lighting into our texture, and each duck gets its own correct and unique lighting. We can also just paint our runtime decals into this texture. And besides the material ID, we also need an instance ID in the GBuffer (see the feedback-packing sketch below).
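For example, a minimal sketch of packing one feedback entry that carries both IDs, so the CPU can sort and deduplicate tile requests; the 64-bit layout and field widths are arbitrary assumptions:

#include <cstdint>

// Pack (instanceId, materialId, tileX, tileY, mip) into one 64-bit feedback entry.
// Field widths are arbitrary choices: 20 + 8 + 14 + 14 + 8 = 64 bits.
uint64_t PackFeedback(uint32_t instanceId, uint32_t materialId,
                      uint32_t tileX, uint32_t tileY, uint32_t mip)
{
    return (uint64_t(instanceId & 0xFFFFF) << 44) |  // which duck instance
           (uint64_t(materialId & 0xFF)    << 36) |  // which material / virtual texture
           (uint64_t(tileX      & 0x3FFF)  << 22) |  // tile x within the virtual texture
           (uint64_t(tileY      & 0x3FFF)  << 8)  |  // tile y
            uint64_t(mip        & 0xFF);             // requested mip level
}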

That's very different from the traditional way of reusing the same texture multiple times, but it's the same idea we already know from using static lightmaps.

Thank you for your detailed answer. There is one more thing I don't get: different models can use the same texture, or the same model can use different textures. How is this information related to the baked giant virtual texture? Does the material of different objects need any modification?

iGrfx said:
or the same model can use different textures

This case would no longer exist. If we have a character that initially had different face and body textures, your preprocessing tool would pack all its UVs and image data into the same virtual texture.
But following the example above, we would use two virtual textures if it has hair strands with alpha. The final materials and the rendered result would look the same.

Though, it's up to you whether you allow (or need) some changes for optimization. If the scene has many different and exotic materials with custom texture data for special cases, you might decide to simplify such materials so they fit into the standard.
The other option is to use virtual texturing only for some things, e.g. terrain and architecture, all opaque and using a standard PBR material. Other things, like characters, dynamic objects and foliage, can use traditional texturing. That's surely more flexible, but it also keeps the traditional downside of overhead from using many textures and shaders.

This topic is closed to new replies.
