
Screen Space Reflections in HLSL - proper camera ray casting

Started by November 23, 2019 11:00 AM
1 comment, last by darknovismc 4 years, 9 months ago

Hi,
I'm trying to implement SSR based on the blog post Screen Space Glossy Reflections, which is an implementation of Morgan McGuire and Michael Mara's screen-space ray-tracing algorithm, written in GLSL.

Unfortunately, I'm getting wrong results in the screen-space ray-tracing step.

Could anybody explain how to generate the camera screen rays (float3 viewRay) properly?

My vertex shader looks like this:


struct VertexOut
{
    float4 posH : SV_POSITION0;
    float3 viewRay : POSITION;
};
VertexOut VS_SSR( uint id : SV_VertexID )
{
    VertexOut output;
    float2 tex = float2( (id << 1) & 2, id & 2 );
    output.posH = float4(tex * float2( 2.0f, -2.0f ) + float2( -1.0f, 1.0f) , 0.0f, 1.0f );    
    output.viewRay = float3(tex.x,tex.y,1);
    
    return output;
}
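For comparison, one common way to build viewRay (a CPU-side sketch of the math, not taken from the blog post) is to scale the NDC corner position by the reciprocal of the projection's x/y scales, so the ray ends up with z == 1; the pixel shader can then reconstruct the view-space position as viewRay * linearDepth. ViewRayFromNDC, tanHalfFovY, and aspect below are my own illustrative names:

```cpp
#include <cassert>
#include <cmath>

struct Float3 { float x, y, z; };

// Hypothetical helper: build a view-space ray for a full-screen vertex.
// ndcX/ndcY are the vertex position in normalized device coordinates
// ([-1,1]); tanHalfFovY and aspect must match the perspective projection
// that filled the depth buffer (left-handed, +z forward assumed).
Float3 ViewRayFromNDC(float ndcX, float ndcY, float tanHalfFovY, float aspect)
{
    return { ndcX * tanHalfFovY * aspect, // undo the 1/(aspect*tan) x scale
             ndcY * tanHalfFovY,          // undo the 1/tan y scale
             1.0f };                      // unit distance along view +z
}
```

Because the ray is normalized to z == 1, the pixel-shader equivalent is viewPos = viewRay * LinearizeDepth(depth). Note that in the shader above, tex is in [0,2], not [-1,1] or [0,1], so passing it straight into viewRay gives a differently scaled ray.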

The draw call in C++ is just a full-screen quad:


setViewPort(renderTargetSize.x,renderTargetSize.y);
deviceContext->OMSetRenderTargets(1,&ssrRTV,nullptr);
depthResource->SetResource(depthResourceView);
normalResource->SetResource(normalResourceView);
	
effect->GetTechniqueByName("RayTrace")->GetPassByIndex(0)->Apply(0,deviceContext);
deviceContext->Draw( 4, 0 );
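As an aside, the (id << 1) & 2 / id & 2 pattern is the usual full-screen-triangle trick, normally paired with Draw(3). Evaluating the vertex-shader formula on the CPU for the four vertices that Draw(4, 0) emits shows what the strip actually covers (a sketch to check the geometry only; PosFromVertexID is my own name):

```cpp
#include <cassert>

struct Float2 { float x, y; };

// CPU-side evaluation of the vertex-shader position formula above.
Float2 PosFromVertexID(unsigned id)
{
    float tx = float((id << 1) & 2); // 0 or 2
    float ty = float(id & 2);        // 0 or 2
    return { tx * 2.0f - 1.0f,       // tex * float2(2,-2) + float2(-1,1)
             ty * -2.0f + 1.0f };
}
```

The four positions come out as (-1,1), (3,1), (-1,-3), (3,-3): a quad four times the screen size, so a triangle strip still covers the screen. The more common pairing, though, is Draw(3) with a single oversized triangle made from the first three vertices.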

Initialization of the effect variables after compiling the shader:


compileShader(L"ssr.fx",macros,shaderFlags);
depthResource = effect->GetVariableByName("depthBuffer")->AsShaderResource();
normalResource = effect->GetVariableByName("normalBuffer")->AsShaderResource();
projection = effect->GetVariableByName("projection")->AsMatrix();
depthBufferSize = effect->GetVariableByName("cb_depthBufferSize")->AsVector();
zThickness = effect->GetVariableByName("cb_zThickness")->AsScalar();
zThickness->SetFloat(1.0f);
ZNear = effect->GetVariableByName("cb_ZNear")->AsScalar();
ZNear->SetFloat(0.5f);
ZFar = effect->GetVariableByName("cb_ZFar")->AsScalar();
ZFar->SetFloat(32760.0f);
stride = effect->GetVariableByName("cb_stride")->AsScalar();
stride->SetFloat(1.0f);
maxSteps = effect->GetVariableByName("cb_maxSteps")->AsScalar();
maxSteps->SetFloat(25.0f);
maxDistance = effect->GetVariableByName("cb_maxDistance")->AsScalar();
maxDistance->SetFloat(1000.0f);
strideZCutoff = effect->GetVariableByName("cb_strideZCutoff")->AsScalar();
strideZCutoff->SetFloat(1.0f);

I'm getting camera-view-dependent ray-hit results: when I move the camera around, the hitPixel intersections with the depth buffer change.

I also assume that viewToTextureSpaceMatrix is my projection matrix calculated by the XMMatrixPerspectiveOffCenterLH(..) call. Is that right?

Hi, I have the following vertex shader:

struct VERTEX_OUT
{
    float4 pos: SV_POSITION;
    float3 viewRay :POSITION;
};

VERTEX_OUT VS_SSR( VS_INPUT_SIMPLE input )
{
    VERTEX_OUT output;

    float4 position = float4(input.pos, 1.0f);
    output.pos = position;
    output.viewRay = mul(position,invProjection).xyz;
    return output;
}

where invProjection is calculated like this,

/*
    Projection matrix:
    a 0 0 0
    0 b 0 0
    0 0 c d
    0 0 e 0
    Inverse projection:
    1/a, 0,   0,    0,
    0,   1/b, 0,    0,
    0,   0,   0,    1/e,
    0,   0,   1/d, -c/(d*e)
*/
XMMATRIX invproj;
for (int j = 0; j < 4; j++)
    for (int i = 0; i < 4; i++)
        invproj.m[i][j] = 0;
invproj.m[0][0] = 1.0f / proj.m[0][0];
invproj.m[1][1] = 1.0f / proj.m[1][1];
invproj.m[3][2] = 1.0f / proj.m[2][3];
invproj.m[2][3] = 1.0f / proj.m[3][2];
invproj.m[3][3] = -proj.m[2][2] / (proj.m[3][2] * proj.m[2][3]);
invProjection->SetMatrix(&invproj.m[0][0]);

from the projection matrix produced by XMMatrixPerspectiveOffCenterLH(..).

I had to resort to these hand-written calculations because XMMatrixInverse(..) gives me an exception due to det == 0.
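The hand-written inverse can be sanity-checked on the CPU: multiplying the a/b/c/d/e layout from the comment by the claimed inverse should give the identity. A minimal check, assuming the row-vector layout above (Mul4 and MaxIdentityError are my own names):

```cpp
#include <cassert>
#include <cmath>

// 4x4 row-major multiply: r = a * b.
static void Mul4(const float a[4][4], const float b[4][4], float r[4][4])
{
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
        {
            r[i][j] = 0.0f;
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
        }
}

// Builds the a/b/c/d/e projection from the comment plus the claimed
// analytic inverse, multiplies them, and returns the largest deviation
// from the identity matrix.
static float MaxIdentityError(float zn, float zf, float a, float b)
{
    const float c = zf / (zf - zn);       // m[2][2]
    const float d = 1.0f;                 // m[2][3]
    const float e = -zn * zf / (zf - zn); // m[3][2]
    const float proj[4][4] = { { a, 0, 0, 0 },
                               { 0, b, 0, 0 },
                               { 0, 0, c, d },
                               { 0, 0, e, 0 } };
    const float inv[4][4]  = { { 1/a, 0,   0,    0       },
                               { 0,   1/b, 0,    0       },
                               { 0,   0,   0,    1/e     },
                               { 0,   0,   1/d, -c/(d*e) } };
    float m[4][4];
    Mul4(proj, inv, m);
    float err = 0.0f;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            err = std::fmax(err, std::fabs(m[i][j] - (i == j ? 1.0f : 0.0f)));
    return err;
}
```

One observation from the same algebra: the determinant of that layout is -a·b·d·e, which is nonzero for any valid perspective matrix, so XMMatrixInverse reporting det == 0 may mean the matrix handed to it was transposed, uninitialized, or otherwise not what you expect.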

McGuire's algorithm also requires a pixel-projection matrix, which I calculate like this:

XMMATRIX tex
(
    0.5f,  0.0f, 0.0f, 0.0f,
    0.0f, -0.5f, 0.0f, 0.0f,
    0.0f,  0.0f, 0.5f, 0.0f,
    0.5f,  0.5f, 0.5f, 1.0f
);
XMMATRIX pixProj = proj * tex;
projection->SetMatrix(&pixProj.m[0][0]);
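A quick CPU-side check of what proj * tex does under the row-vector convention: a view-space point straight ahead of the camera should land at texture coordinates (0.5, 0.5) after the divide by w. This sketch hard-codes sample projection values (RowMul and ToTextureSpace are my own names); note that D3D clip-space z is already in [0,1] after the divide, so whether the 0.5*z + 0.5 row (a GL-style remap) is wanted depends on the convention the ray-march code expects. Also, McGuire's original traces in actual pixel coordinates, so a [0,1] result typically still needs a multiply by the buffer size somewhere.

```cpp
#include <cassert>
#include <cmath>

struct Vec4f { float x, y, z, w; };

// Row-vector transform v * m, matching the proj * tex multiplication order.
static Vec4f RowMul(const Vec4f& v, const float m[4][4])
{
    return { v.x*m[0][0] + v.y*m[1][0] + v.z*m[2][0] + v.w*m[3][0],
             v.x*m[0][1] + v.y*m[1][1] + v.z*m[2][1] + v.w*m[3][1],
             v.x*m[0][2] + v.y*m[1][2] + v.z*m[2][2] + v.w*m[3][2],
             v.x*m[0][3] + v.y*m[1][3] + v.z*m[2][3] + v.w*m[3][3] };
}

// Texture-space position of a view-space point under proj * tex.
// Assumes a sample LH perspective projection (zn = 0.5, zf = 32760).
static Vec4f ToTextureSpace(float vx, float vy, float vz)
{
    const float zn = 0.5f, zf = 32760.0f;
    const float proj[4][4] = { { 1.2f, 0,    0,                   0 },
                               { 0,    1.6f, 0,                   0 },
                               { 0,    0,    zf / (zf - zn),      1 },
                               { 0,    0,   -zn * zf / (zf - zn), 0 } };
    const float tex[4][4]  = { { 0.5f,  0,     0,    0 },
                               { 0,    -0.5f,  0,    0 },
                               { 0,     0,     0.5f, 0 },
                               { 0.5f,  0.5f,  0.5f, 1 } };
    Vec4f clip = RowMul({ vx, vy, vz, 1.0f }, proj);
    Vec4f t = RowMul(clip, tex);
    return { t.x / t.w, t.y / t.w, t.z / t.w, t.w }; // perspective divide
}
```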

My depth linearization function looks like this:

float LinearizeDepth(float input)
{
    float z = 1-input;
    float ProjectionA = g_ZFar / (g_ZFar - g_ZNear);
    float ProjectionB = (-g_ZFar * g_ZNear) / (g_ZFar - g_ZNear);
    return ProjectionB / (z - ProjectionA);
}
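The 1 - input suggests the depth buffer stores reversed depth. A round-trip check on the CPU (a sketch assuming a standard LH projection with the same ProjectionA/ProjectionB terms as the shader; StoredDepthFromViewZ is my own helper name) confirms the function recovers view-space z:

```cpp
#include <cassert>
#include <cmath>

const float g_ZNear = 0.5f, g_ZFar = 32760.0f; // values set from C++ above

// CPU copy of the shader function, for checking round trips.
float LinearizeDepth(float input)
{
    float z = 1.0f - input; // assumes the buffer stores reversed depth
    float ProjectionA = g_ZFar / (g_ZFar - g_ZNear);
    float ProjectionB = (-g_ZFar * g_ZNear) / (g_ZFar - g_ZNear);
    return ProjectionB / (z - ProjectionA);
}

// Forward mapping of a standard LH projection: z_ndc = A + B / viewZ,
// stored in the buffer as 1 - z_ndc (the reversed convention implied above).
float StoredDepthFromViewZ(float viewZ)
{
    float A = g_ZFar / (g_ZFar - g_ZNear);
    float B = (-g_ZFar * g_ZNear) / (g_ZFar - g_ZNear);
    return 1.0f - (A + B / viewZ);
}
```

If the buffer actually stores plain z_ndc rather than 1 - z_ndc, the 1 - input must go; mixing the two conventions makes every linearized depth wrong.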

When I debug the whole thing, I see that point P0 has good pixel coordinates (measured from the top-left screen corner) after being transformed by the pixelProjection matrix. P1 should also be OK. Then I start the loop:

for(;
    ((PQk.x * stepDir) <= end) && (stepCount < cb_maxSteps) &&
    !intersectsDepthBuffer(sceneZMax, rayZMin, rayZMax) &&
    (sceneZMax != 0.0f);
    ++stepCount)
{
    rayZMin = prevZMaxEstimate;
    rayZMax = (dPQk.z * 0.5f + PQk.z) / (dPQk.w * 0.5f + PQk.w);
    prevZMaxEstimate = rayZMax;
    if(rayZMin > rayZMax)
    {
        swap(rayZMin, rayZMax);
    }
    hitPixel = permute ? PQk.yx : PQk.xy;
    // You may need hitPixel.y = geometryBufferSize.y - hitPixel.y; here if your
    // vertical axis is different than ours in screen space
    //hitPixel.y = geometryBufferSize.y - hitPixel.y;
    sceneZMax = linearDepthTexelFetch(int2(hitPixel));
    PQk += dPQk;
}
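For reference, the (dPQk.z * 0.5f + PQk.z) / (dPQk.w * 0.5f + PQk.w) expression is perspective-correct interpolation: PQk.z carries z/w and PQk.w carries 1/w, both of which interpolate linearly in screen space, so their ratio half a step ahead gives the true view-space z there. A CPU sketch with made-up endpoints (PerspectiveZ is my own name; under a perspective projection w is the view-space z, up to the projection's d term):

```cpp
#include <cassert>
#include <cmath>

// Perspective-correct z at parameter t along a segment whose endpoints
// have homogeneous coordinates (z0, w0) and (z1, w1). q = z/w and
// k = 1/w interpolate linearly in screen space, like PQk.z and PQk.w.
float PerspectiveZ(float z0, float w0, float z1, float w1, float t)
{
    float q0 = z0 / w0, k0 = 1.0f / w0; // PQk.z, PQk.w at the start
    float q1 = z1 / w1, k1 = 1.0f / w1; // same at the end
    return (q0 + (q1 - q0) * t) / (k0 + (k1 - k0) * t);
}
```

The midpoint of a segment from depth 2 to depth 10 comes out as the harmonic mean 10/3, not the arithmetic mean 6. Separately, if sceneZMax always comes back equal to csOrig.z, the usual suspect is that the march starts on the very pixel being shaded, so the first fetch hits the surface the ray originates from; offsetting the start by one step (as McGuire's code does with its initial PQk += dPQk-style jitter) or rejecting hits within a small depth epsilon of the origin is a common workaround.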

After the first iteration I get rayZMin == sceneZMax == csOrig.z, which immediately breaks the loop via intersectsDepthBuffer().

Ray tracing stops in the same manner for every screen pixel.

Did anyone have this issue when porting Morgan McGuire and Michael Mara's algorithm to their project?

What could be causing csOrig.z to be the same as linearDepthTexelFetch(int2(hitPixel))?

This topic is closed to new replies.
