Texture caching for Texture2DArray

Started by Cromon, July 21, 2019 10:56 AM
3 comments, last by vinterberg 5 years, 1 month ago

Hello all

In my application the terrain consists of larger tiles which are divided into 16x16 chunks. Each tile defines the full list of textures that can be used by its chunks, and each chunk selects at most 4 textures from that list. A tile is limited to a total of 64 unique textures. Until now I have been binding the 4 textures for each chunk every frame using `PSSetShaderResources`. While profiling I noticed that a lot of performance is actually wasted swapping these textures around. Since 64 textures is much less than the 128 slots from `D3D11_COMMONSHADER_INPUT_RESOURCE_SLOT_COUNT`, I had the idea to bind all of them at the beginning of rendering a tile and then just pass a uint4 with the actual indices when rendering a chunk and index into the 64 textures. But dynamically indexing an array of separate textures like that doesn't work in D3D11; I think it would in D3D12.
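Roughly, my current per-chunk path looks like this (a simplified sketch, all names are illustrative):

```cpp
// Simplified version of the current per-chunk path: four SRVs are
// rebound for every chunk, every frame (all names are illustrative).
void DrawChunk(ID3D11DeviceContext* ctx, const Chunk& chunk)
{
    ID3D11ShaderResourceView* srvs[4] = {
        chunk.srv[0], chunk.srv[1], chunk.srv[2], chunk.srv[3]
    };
    ctx->PSSetShaderResources(0, 4, srvs);  // this churn shows up in profiling
    ctx->DrawIndexed(chunk.indexCount, chunk.startIndex, 0);
}
```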

So the other obvious option would be to use a Texture2DArray. It looks ideal, but when looking through examples I can see one major issue for me. Realistically tiles have fewer than 10 textures, and there is very little variation between the texture sets of different tiles. So it would be very inefficient to reload the same textures over and over again just because the texture lists of two tiles differ in a single texture (right now they are all cached and only loaded once).

So I am wondering, is there an option to have some kind of caching between texture arrays? Can I reuse the same texture in multiple arrays? Are there other options for my use case?

Thank you very much in advance and best regards
Cromon

There's no automatic caching or de-duplication that you can take advantage of. Each texture array is going to fully consume memory that's roughly equal to Width * Height * ArraySize * FormatSize.
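If you do build per-tile arrays, every shared texture has to be copied into every array that uses it, e.g. with `CopySubresourceRegion`. A sketch of that, assuming the cached texture and the array share size, format and mip count (names are illustrative):

```cpp
// Sketch: populate one slice of a per-tile Texture2DArray from a
// standalone cached texture. Each copy duplicates the pixel data in
// GPU memory; the array does not reference the source texture.
void CopyIntoSlice(ID3D11DeviceContext* ctx,
                   ID3D11Texture2D* arrayTex, UINT slice, UINT mipLevels,
                   ID3D11Texture2D* cachedTex)
{
    for (UINT mip = 0; mip < mipLevels; ++mip)
    {
        ctx->CopySubresourceRegion(
            arrayTex, D3D11CalcSubresource(mip, slice, mipLevels),
            0, 0, 0,                         // dest x, y, z
            cachedTex, D3D11CalcSubresource(mip, 0, mipLevels),
            nullptr);                        // copy the whole mip
    }
}
```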

If you can globally fit the textures for all tiles into a single large texture array (2048 is the max), then for each tile you could perhaps store an index into that array in a constant buffer or structured buffer. 
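Roughly, such a global array could be created like this (width, height, format and count are placeholder assumptions, not from the thread):

```cpp
// Sketch: one global Texture2DArray holding every terrain texture.
// Width/height/format/count are placeholder assumptions.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = 256;
desc.Height           = 256;
desc.MipLevels        = 1;                   // or 0 for a full mip chain
desc.ArraySize        = globalTextureCount;  // max 2048 slices in D3D11
desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* globalArray = nullptr;
HRESULT hr = device->CreateTexture2D(&desc, nullptr, &globalArray);
// Each tile/chunk then only stores slice indices in a constant or
// structured buffer instead of binding different SRVs.
```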

Partially Resident Textures / Tiled Resources would be one way to achieve the memory saving you're looking for: you could reuse the same memory for a slice by having it mapped into multiple arrays. There are some restrictions around array textures with mips, though, that could scupper that plan, as that part of the spec wasn't resolved until D3D12, I think.
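If you go down that road, it's worth checking the feature tier first. A quick sketch:

```cpp
// Sketch: check whether the device supports Tiled Resources (D3D11.2)
// before committing to that path.
D3D11_FEATURE_DATA_D3D11_OPTIONS1 opts1 = {};
if (SUCCEEDED(device->CheckFeatureSupport(
        D3D11_FEATURE_D3D11_OPTIONS1, &opts1, sizeof(opts1))) &&
    opts1.TiledResourcesTier != D3D11_TILED_RESOURCES_NOT_SUPPORTED)
{
    // Tiled path is available: create the array with
    // D3D11_RESOURCE_MISC_TILED and map shared tiles into several
    // slices via ID3D11DeviceContext2::UpdateTileMappings.
}
```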

But it sounds like your main concern is the rebinding of textures. Do you have so many unique textures that you can't just store them all in one big Texture2DArray that is shared by all chunks in all tiles?

It sounds like you contemplated that approach but discounted it because it wouldn't work on D3D11. Dynamically indexing into slices of a Texture2DArray is fine on D3D11 - that's different from the dynamic resource indexing introduced in D3D12, which lets you sample from completely unrelated textures.

If all your textures are the same resolution and format they could all go into one big Texture2DArray. It might mean you want one Texture2DArray for Albedo, one for Normals, one for Specular etc., but so long as all formats and sizes within each "family" are the same then that would work in D3D11.
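Putting that together, the binding cost drops to a one-off setup plus a tiny per-chunk constant buffer update. Roughly (a sketch, every name here is illustrative):

```cpp
// Sketch: one array per texture "family", all bound once at startup
// instead of per chunk (names and indices are illustrative).
ID3D11ShaderResourceView* families[3] = {
    albedoArraySRV, normalArraySRV, specularArraySRV
};
ctx->PSSetShaderResources(0, 3, families);   // t0..t2, set once

// Per chunk, only a tiny constant buffer changes:
//   HLSL: cbuffer ChunkData { uint4 sliceIndices; };
//         float4 a = gAlbedo.Sample(s, float3(uv, sliceIndices.x));
struct ChunkData { uint32_t sliceIndices[4]; };  // mirrors the HLSL cbuffer
ChunkData data = { { 3, 17, 17, 42 } };          // slices for this chunk
ctx->UpdateSubresource(chunkCB, 0, nullptr, &data, 0, 0);
```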

Adam Miles - Principal Software Development Engineer - Microsoft Xbox Advanced Technology Group

Use texture atlasing - that way you only need to bind a few "megatextures" that contain all your textures (which can vary in size), and you can have some kind of constant buffer or structured buffer holding UV ranges for each texture :)

These atlases only have to be bound once in the lifetime of the application, not every frame - saving you a lot of per-frame binding overhead :)

Look up "texture atlas" to find some guides, you'll have to do some border tricks because of filtering.. Or search this forum ❤️

.:vinterberg:.

This topic is closed to new replies.
