
Depth buffer issues in DirectX 11

Started by Gnollrunner
5 comments, last by Gnollrunner 3 years, 2 months ago

I'm having some issues with the depth buffer and long distances in DirectX 11. What I gather is that Z values must be scaled between 0 and 1. However, I'm using a floating point depth buffer, and what I'm wondering is if there is a way to just disable all that and use the raw 32-bit floating point values directly when making the depth buffer comparisons. I understand that everything is divided by the W value; however, I figure I could just set that to 1. What I need is some way to disable the clamping. I've looked all over the place and I haven't found any good solution. I'd even consider going to DX12 or Vulkan if it would solve this problem.
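(For reference, a typical 32-bit float depth target in D3D11 is created something like the sketch below; the device pointer and dimensions are assumed. The [0, 1] range comes from the pipeline's NDC and viewport depth conventions, not from the buffer format itself.)

#include <d3d11.h>

// Sketch: typical DXGI_FORMAT_D32_FLOAT depth buffer creation (assumed names).
ID3D11DepthStencilView* CreateFloatDepth( ID3D11Device* device, UINT width, UINT height )
{
    D3D11_TEXTURE2D_DESC td = {};
    td.Width            = width;
    td.Height           = height;
    td.MipLevels        = 1;
    td.ArraySize        = 1;
    td.Format           = DXGI_FORMAT_D32_FLOAT;   // full 32-bit float depth
    td.SampleDesc.Count = 1;
    td.Usage            = D3D11_USAGE_DEFAULT;
    td.BindFlags        = D3D11_BIND_DEPTH_STENCIL;

    ID3D11Texture2D* tex = nullptr;
    if( FAILED( device->CreateTexture2D( &td, nullptr, &tex ) ) )
        return nullptr;

    ID3D11DepthStencilView* dsv = nullptr;
    device->CreateDepthStencilView( tex, nullptr, &dsv );
    tex->Release();
    return dsv;
}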

Edit: Actually, I may have a solution. I think I'll just scrap the projection matrix altogether and do each coordinate by hand. For Z, hopefully I can just subtract some large number from the exponent so everything is below 1.
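(A quick sketch of why the exponent trick should be lossless, assuming the scale factor is an exact power of two:)

#include <cassert>
#include <cmath>

// Sketch: dividing a float by a power of two changes only its exponent
// field, so no mantissa bits are lost and the operation is exact
// (barring underflow/overflow).
int main()
{
    float zView   = 123456.789f;
    float zScaled = std::ldexp( zView, -96 );      // exact: zView / 2^96
    assert( std::ldexp( zScaled, 96 ) == zView );  // round-trips exactly
    return 0;
}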


I'm not super familiar with DX11, but what kind of issues?

Gnollrunner said:
What I gather is the Z values must be scaled between 0 and 1.
If you need the distance in your shader, can you multiply the Z value on the range [0, 1] by your far value (or something) to get the depth? Since multiplying against [0, 1] is like taking a percentage of whatever you multiply by. I feel like there are probably other ways to get depth too (e.g. create a vector3 from the world camera position to the world position of the current fragment/pixel, and scalar-project that vector onto the normalized camera forward vector, giving you the orthographic distance in front of your camera).
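Something along these lines (a sketch; the function and parameter names are illustrative, not from any engine):

#include <DirectXMath.h>
using namespace DirectX;

// Sketch: linear view depth of a world-space point, via scalar projection
// of the camera-to-point vector onto the normalized camera forward axis.
float LinearDepth( FXMVECTOR worldPos, FXMVECTOR camPos, FXMVECTOR camForward )
{
    XMVECTOR toPoint = XMVectorSubtract( worldPos, camPos );
    return XMVectorGetX( XMVector3Dot( toPoint, XMVector3Normalize( camForward ) ) );
}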

Or were you using some sort of fixed function depth test and that was having issues?

My tutorials on youtube: https://www.youtube.com/channel/UC9CQOdT1A9JlAks0-PF5vvw
Latest Tutorials:
A simple and intuitive ray triangle intersection algorithm https://youtu.be/XgUhgSlQvic

Möller Trumbore ray triangle intersection explained visually https://youtu.be/fK1RPmF_zjQ

Setting up OpenAL c++ visual studio https://youtu.be/WvND0djMcfE

Gnollrunner said:
Actually I may have a solution. I think I'll just scrap the projection matrix altogether and just do each coordinate by hand. For Z hopefully I can just subtract some large number from the exponent so everything is below 1.

Sounds like you plan to reinvent the projection matrix, just to end up with the same thing in the end.

Yeah, what is the problem? Z-fighting? Have you tried inverted Z, which seems good enough for most? (Personally I have not tried it yet, so I can't help there.)
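The usual D3D11 recipe, as I understand it, is roughly this (a sketch; the variable values are assumed):

#include <d3d11.h>
#include <DirectXMath.h>
using namespace DirectX;

// Sketch of a reversed-Z setup: swap near/far in the projection so the near
// plane maps to depth 1.0 and the far plane to 0.0, which distributes float
// precision far more evenly over the range.
float fovY  = XM_PIDIV4, aspect = 16.0f / 9.0f;
float nearZ = 0.5f, farZ = 1.0e7f;
XMMATRIX proj = XMMatrixPerspectiveFovLH( fovY, aspect, farZ, nearZ ); // far/near swapped

// Then flip the depth test and clear depth to 0.0f instead of 1.0f:
//   dsDesc.DepthFunc = D3D11_COMPARISON_GREATER;
//   context->ClearDepthStencilView( dsv, D3D11_CLEAR_DEPTH, 0.0f, 0 );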

AFAIK it's also quite common to render distant stuff (sky, universe) to a texture using its own projection and composite it with the close-up foreground.

Edit: BTW, IIRC Unigine has double support; remembering you said you would consider using an engine if there were one with that.

Basically the problem is I need to render stuff at long range. It's not even Z-fighting, since the transformation makes parts of convex objects, like my sun, vanish at distance. Simply splitting off the projection matrix and doing things with two vector/matrix multiplications instead of one improves things quite a bit, but doesn't solve the problem completely. I think because of the range there is some numerical instability involved.

I have yet to try my non-matrix idea. Sure, with perfect math it shouldn't make a difference how it's done; however, computers have limited precision. Changing only the exponent of Z leaves the rest of the precision bits untouched, so I figure that might be a way to avoid some instability. Also, I found that in OpenGL there is an option to use the actual Z coordinates unchanged, but it's an NVidia-only option.

I had read about reversing the near and far planes, but looking at the number distribution, my guess is it would kill the closer-in stuff, given that I'm trying to render a planet with its distant sun. I don't really need a lot of distribution at range. Beyond the sun, moon, and possibly large asteroids, everything else, such as stars, will be a dot on the screen, and being off by even a million kilometers isn't going to mean much for a distant star. I want the stars to actually be separate objects in their correct positions, not just a fake star field, so you can select them with a heads-up display.

If all else fails, I guess I'll have to use stenciling and do multi-pass rendering with different near and far clipping planes. Fortunately, with celestial objects it's easy enough to sort them.
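Roughly what I have in mind (a sketch with placeholder names; it uses a depth clear between passes rather than stenciling):

#include <d3d11.h>

// Sketch of depth-partitioned multi-pass rendering: draw the far range
// first, clear depth, then draw the near range on top. Objects must be
// pre-sorted into the two ranges.
void RenderPartitioned( ID3D11DeviceContext* ctx, ID3D11DepthStencilView* dsv )
{
    // Pass 1: distant objects (sun, stars), e.g. near = 1e6, far = 1e13.
    ctx->ClearDepthStencilView( dsv, D3D11_CLEAR_DEPTH, 1.0f, 0 );
    // SetProjection( 1.0e6f, 1.0e13f );  DrawCelestialObjects();

    // Pass 2: close objects (planet surface), e.g. near = 0.5, far = 1e6.
    ctx->ClearDepthStencilView( dsv, D3D11_CLEAR_DEPTH, 1.0f, 0 );
    // SetProjection( 0.5f, 1.0e6f );  DrawNearScene();
}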

Edit:

Well, it works, at least for my small red-dwarf-sized sun out to 10 million kilometers with 1.0 unit == 1.0 meter. Theoretically it's doing something similar to the projection matrix, but the numerical stability seems to be a lot better for some reason. With the projection matrix, things were breaking up at about 16,000 km. I have yet to see how well it works standing on planets with the shading on (although it works OK with wireframe), but I guess I'll find out. Here's the HLSL vertex code if anyone is interested:

// f4P = [tan(HeightViewAng/2) * AspectRatio, tan(HeightViewAng/2), PowerOfTwoZDiv, NearPlane]
// AspectRatio = Viewing aspect ratio, width over height
// PowerOfTwoZDiv = Tested with ldexp(1,96), which should be OK up to a galaxy
// NearPlane = Near clipping plane
VSSunOutput VSMain( VSSunInput IN )
{
    VSSunOutput OUT;

    OUT.position = mul( mGV, float4( IN.position, 1 ) ); // view-space position (no projection matrix)
    OUT.position.w = OUT.position.z;                     // stash view-space Z for the hardware divide
    OUT.position.z -= f4P.w;                             // shift so Z is 0 at the near plane
    OUT.position.xyz = OUT.position.xyz / f4P.xyz;       // perspective-scale X/Y; divide Z by 2^96 (exponent-only, exact)
    OUT.position.z *= OUT.position.w;                    // pre-multiply by W so the divide-by-W cancels: final depth = (Z - near) / 2^96

    // For procedural shading
    OUT.meshID = IN.meshID;
    OUT.world = float4(IN.world, 1.0f);

    return OUT;
}
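The matching CPU-side constant gets filled roughly like this (a sketch; the field-of-view and near-plane values here are placeholders):

#include <cmath>
#include <DirectXMath.h>
using namespace DirectX;

// Sketch: CPU-side fill for the f4P constant described in the shader comment.
float fovY   = 1.0f;           // vertical field of view in radians (assumed value)
float aspect = 16.0f / 9.0f;   // width over height
XMFLOAT4 f4P(
    tanf( 0.5f * fovY ) * aspect,  // x: tan(HeightViewAng/2) * AspectRatio
    tanf( 0.5f * fovY ),           // y: tan(HeightViewAng/2)
    ldexpf( 1.0f, 96 ),            // z: PowerOfTwoZDiv = 2^96
    0.5f );                        // w: NearPlane (assumed value, in meters)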

Gnollrunner said:
Basically the problem is I need to render stuff at long range.

Sounds like you want an infinite far Z distance. The precision/aliasing of the Z buffer is mainly determined by how close to the camera your near clip plane is; it's totally possible to set up the projection matrix to project “infinity” into the 1.0 value, and some chosen “near clip plane” to the 0.0 value.

See for example: https://developer.nvidia.com/content/depth-precision-visualized

Note that you'll probably have to construct the projection matrix yourself, rather than use a pre-existing library function, to set this up, but that's not particularly hard – the library just does some math, and you can do math in your code, just as well!
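For example, taking the limit of the standard left-handed, row-major perspective matrix as far goes to infinity comes out to roughly this (a sketch; names are illustrative):

#include <cmath>
#include <DirectXMath.h>
using namespace DirectX;

// Sketch: left-handed, row-major perspective with the far plane at infinity.
// Final depth is (z - nearZ) / z = 1 - nearZ / z, i.e. 0.0 at the near plane
// and approaching 1.0 as z goes to infinity.
XMMATRIX InfiniteFarProjection( float fovY, float aspect, float nearZ )
{
    float ys = 1.0f / tanf( 0.5f * fovY );  // y scale
    float xs = ys / aspect;                 // x scale
    return XMMATRIX(
        xs,   0.0f, 0.0f,   0.0f,
        0.0f, ys,   0.0f,   0.0f,
        0.0f, 0.0f, 1.0f,   1.0f,    // z' = z - nearZ, w' = z
        0.0f, 0.0f, -nearZ, 0.0f );
}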

enum Bool { True, False, FileNotFound };

@hplus0603 I actually was constructing it myself, and I had it set to infinity. I even did the multiplies in double and then converted to float before sending the GVP matrix down to the GPU. It didn't make any difference; when I got past a certain point it just trashed my coordinates. The fact that separating the two matrices and doing the vector multiply in two operations made a large improvement made me think it was the source of a lot of instability. When you multiply matrices there are a lot of operations, and you really don't control them. Coding them separately means I can scale Z without changing any of the precision bits, at least in the divide, since I can divide by a power of 2.

In any case, it seems to be working now, so I guess I'll stick with what I have. The code is so small compared to a pixel shader that I'm not going to worry about the small drop in performance I get from taking the projection matrix out.

Also, as far as I can tell, there isn't really any need for the required Z scaling from 0 to 1. It seems like an artifact of integer depth buffers, but I found no way around it.

This topic is closed to new replies.
