
How to get vertex data other than positions

Started by Shigeo.K
6 comments, last by Shigeo.K 3 years, 2 months ago

Hi,
I've finished my personal research into DirectX 12.
I think I've understood its theory, and I can now write DX12 programs covering everything from triangles to shadows, deferred rendering, SSS, AO, etc.
But when it comes to DXR, I stumbled immediately.

VERTEX NORMALS.

After googling for 5 days, I believe we can't get vertex normals in the DXR pipeline as easily as in the raster pipeline.
Is using a G-buffer that contains normal data the only way to get vertex normals?
Or should I calculate normals myself from the 3 positions in the shader?


You're free to pull in whatever data you want in your ray tracing shaders. When your ray intersects a triangle, you can get the geometry/mesh index and the index of the triangle, as well as the barycentric coordinates of the hit location. This is enough to load the triangle's vertex attributes and interpolate them to compute the vertex normal, or whatever attribute you're interested in (texture coordinates, tangent, etc.). You can see how I did this in my DXR path tracer if you look at the code here: https://github.com/TheRealMJP/DXRPathTracer/blob/master/DXRPathTracer/RayTrace.hlsl#L447

In that example I'm using bindless techniques to read from the index buffer and vertex buffer, but you can also bind them through your local root signature if you're using the full DispatchRays pipeline for DXR. If you're using inline tracing then you don't have a local RS to work with, in which case bindless techniques make a lot of sense.
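To make that concrete, here's a minimal closest-hit sketch of the idea (the vertex layout, buffer names, and register assignments here are just for illustration, not the actual code from my repo):

struct Vertex
{
    float3 Position;
    float3 Normal;
};

struct Payload
{
    float3 Normal;
};

StructuredBuffer<Vertex> VtxBuffer : register(t0, space200);
Buffer<uint> IdxBuffer : register(t1, space200);

[shader("closesthit")]
void ClosestHit(inout Payload payload, in BuiltInTriangleIntersectionAttributes attr)
{
    // PrimitiveIndex() identifies the triangle that the ray hit
    uint baseIdx = PrimitiveIndex() * 3;
    uint i0 = IdxBuffer[baseIdx + 0];
    uint i1 = IdxBuffer[baseIdx + 1];
    uint i2 = IdxBuffer[baseIdx + 2];

    // The system reports two barycentric coordinates; derive the third weight
    float3 bary = float3(1.0f - attr.barycentrics.x - attr.barycentrics.y,
                         attr.barycentrics.x, attr.barycentrics.y);

    // Interpolate the per-vertex normals at the hit point
    payload.Normal = normalize(VtxBuffer[i0].Normal * bary.x +
                               VtxBuffer[i1].Normal * bary.y +
                               VtxBuffer[i2].Normal * bary.z);
}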

MJP said:
I'm using bindless techniques

Thanks MJP, you are very helpful as always.
I'm glad to hear that I can get vertex attributes in the raytracing pipeline.
But one thing I don't understand is the “bindless technique”. What is this?

Shigeo.K said:
“bindless technique”

My personal belief is that it's about feeding the vertex data to the shader itself from a declared stream, rather than programmatically, as opposed to just declaring attributes in the shader the old-school way.

@Shigeo.K

You're welcome! “bindless” refers to a family of techniques that let you bypass the traditional way of binding textures and buffers to your shaders. In a traditional setup, your shader will expect a handful of textures and buffers, and your CPU code will provide matching shader resource views for each of these so that the shader can read from them. With bindless, you instead set things up so that you bind your entire descriptor heap to your root signature, and then use special shader syntax to access the textures and buffers within that heap. The end result is that your shader can freely access any resource, as long as the shader knows the index of that resource's descriptor within the heap. If you look at that code I posted, you can see me doing that with code like this:

Buffer<uint> idxBuffer = BufferUintTable[RayTraceCB.IdxBufferIdx];

RayTraceCB.IdxBufferIdx is a 32-bit integer containing the index of the index buffer's SRV descriptor, and the shader uses that index to access the descriptor and then read from the buffer.
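On the shader side, the unbounded table that makes this work is declared something like this (the register/space assignments here are illustrative, not the actual ones from the repo):

// An unbounded array of uint buffer SRVs, mapped to a range of the descriptor heap
Buffer<uint> BufferUintTable[] : register(t0, space100);

struct RayTraceConstants
{
    uint IdxBufferIdx;  // descriptor index of the index buffer within the heap
};

ConstantBuffer<RayTraceConstants> RayTraceCB : register(b0);

The CPU side then only has to write the descriptor into the heap once and pass its index around as a plain integer.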

MJP said:
If you look at that code I posted, you can see me doing that with code like this: Buffer<uint> idxBuffer = BufferUintTable[RayTraceCB.IdxBufferIdx];

I understand now what you are saying.
The CPU side just passes the start of the group of resources to the pipeline, and then shaders can access each resource by its INDEX.
So this is “bindless”. OK.
I managed to pass arbitrary vertex attributes to shaders by using buffers (StructuredBuffer or ByteAddressBuffer).
It's great, but there is one thing that's still not clear to me.
Can we fetch the vertex attributes (normal, uv, tangent, etc.) from the ACCELERATION STRUCTURE BUFFER itself (not a StructuredBuffer, ByteAddressBuffer, or TextureXXX)?
If we could do that, there would be no need to prepare any extra buffers besides the acceleration structure buffers.

MJP said:
“bindless” refers to a family of techniques that let you bypass the traditional way of binding textures and buffers to your shaders.

Is there a performance cost to using bindless over the traditional approach? It sounds like there is at least a need to resolve indirections. (I'm using Vulkan, which I've heard has some advantages here, but I haven't looked into it yet.)

@Shigeo.K No, the acceleration structure only contains the minimal amount of data required to compute ray/triangle intersections. Any additional attributes need to be fetched manually.

@joej It depends on the hardware and what you're doing with bindless, but in my experience there is no appreciable overhead as long as the descriptor index ends up being uniform. For a non-uniform descriptor access you can expect up to Nx overhead on the actual texture/buffer fetch, where N is the number of different descriptors that end up getting accessed in a wave. For uniform access where the index comes from a constant buffer, I think you're looking at a very small amount of overhead vs. the index being “hard-coded” into the compiled shader, and it's potentially no slower than what the driver does for fixed offsets into a small descriptor table. On AMD you're basically looking at an extra 32-bit scalar load at most. In theory there may be some extra cache misses due to descriptors being sparse instead of located contiguously in a table, but in practice I think descriptors end up being so big relative to the cache line size that it doesn't really matter (I think Nvidia actually has a special cache for descriptors where each descriptor ends up being a single cache line).
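That uniform vs. non-uniform distinction is visible right in the HLSL: if the descriptor index can diverge within a wave, you have to say so explicitly, which is where the Nx replay cost comes from. A quick illustrative sketch (the names here are made up):

Texture2D MaterialTextures[] : register(t0, space1);
SamplerState LinearSampler : register(s0);

float4 SampleMaterialAlbedo(uint texIdx, float2 uv)
{
    // If texIdx comes from a constant buffer, it's uniform across the wave
    // and the descriptor load is a cheap scalar operation. If it varies
    // per-pixel or per-ray, it must be wrapped in NonUniformResourceIndex,
    // and the hardware may replay the fetch once per unique descriptor.
    Texture2D tex = MaterialTextures[NonUniformResourceIndex(texIdx)];
    return tex.SampleLevel(LinearSampler, uv, 0.0f);
}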

@MJP

I understand now.
Everything is crystal clear to me.
Thanks again, MJP.

This topic is closed to new replies.
