
Signed Distance Fields Dynamic Diffuse Global Illumination


I am writing a rendering engine as part of my undergraduate thesis and I came across this paper titled “Signed Distance Fields Dynamic Diffuse Global Illumination” (https://arxiv.org/pdf/2007.14394.pdf). The paper presents some amazing results, and it seems like this could be the best solution out there for diffuse global illumination in real time. It even claims to produce better and more performant results than ray tracing.

I'm wondering if anyone here has also read this paper, attempted to implement it, and is willing to share their experience. Personally, I am attempting a Vulkan implementation, but if anyone has tried this in OpenGL or DirectX I would love to hear your thoughts too.


That's a very interesting paper. I'm a little lost as to how they decompose such a complex polygonal scene into SDF primitives on the fly - maybe that's actually happening in an offline pre-processing step?

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Seems similar to Godot:

Don't know if this is already available with the engine.

@swiftcoder Yeah, it seems like some pre-processing is involved. Under the “Conclusions” heading, the authors specify that the decomposition of the scene into SDF primitives is done by supplying a separate mesh representation built from basic primitives, kind of similar to how you create a collision mesh.
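For anyone new to the idea, that proxy representation boils down to a distance function per primitive plus a min over all of them. Here is a minimal C++ sketch using the well-known analytic sphere/box SDF formulas; the struct and function names (Vec3, sceneSDF, etc.) are just made up for illustration and are not from the paper:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }
static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Signed distance to a sphere: negative inside, positive outside.
float sdSphere(Vec3 p, Vec3 center, float radius)
{
    return length(sub(p, center)) - radius;
}

// Signed distance to an axis-aligned box with half-extents b, centered at the origin.
float sdBox(Vec3 p, Vec3 b)
{
    Vec3 q    = { std::abs(p.x) - b.x, std::abs(p.y) - b.y, std::abs(p.z) - b.z };
    Vec3 qPos = { std::max(q.x, 0.0f), std::max(q.y, 0.0f), std::max(q.z, 0.0f) };
    return length(qPos) + std::min(std::max(q.x, std::max(q.y, q.z)), 0.0f);
}

// The hand-authored "proxy scene", analogous to a collision mesh.
float sceneSDF(Vec3 p)
{
    float d = sdSphere(p, {0.0f, 1.0f, 0.0f}, 1.0f);       // some object
    d = std::min(d, sdBox(sub(p, {0.0f, -0.5f, 0.0f}),     // the floor slab
                          {10.0f, 0.5f, 10.0f}));
    return d; // union = min over all primitives
}
```

In a real engine you'd loop over a GPU-side list of primitives instead of hard-coding them; that primitive list is apparently what the paper expects you to author alongside the render mesh.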

@JoeJ Definitely some theoretical overlap between these techniques, although Godot's method is not really able to handle a fully dynamic scene (most methods can't, to be fair). I don't think Juan from Godot has released a technical breakdown of his technique yet :(.

Hmmm… why does every paper claim to have solved leaking, only to be topped by the next paper claiming to have solved it as well, but for real this time? : )

My thoughts after reading:

They rebuild the scene manually using analytical primitives (box, sphere, capsule…), and they also have LODs for this scene representation. This has two problems: it's hard to automate*, and with increasing scene size and complexity their ‘clustering’ becomes classical raytracing, requiring a proper acceleration structure and increasing traversal costs.
I think it's more attractive to use SDF volume bricks than analytic primitives. Automation is much easier (but still a big effort), and there is no GPU divergence from the need to support different kinds of primitives; either way the traversal itself is the same sphere-tracing march (see the sketch below). Accuracy and support across art styles and detail levels should be better too, though memory requirements are much higher and the Sponza curtains would leak again.
*) I remember some papers about mesh segmentation which did this locally, clustering meshes into flat / cylindrical / spherical regions.
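For reference, that traversal is plain sphere tracing against whatever sceneSDF() ends up being, whether it's an analytic union of primitives or a trilinearly sampled brick volume. A rough C++ sketch of the loop, reusing the Vec3/sceneSDF helpers from the earlier sketch (epsilon, step count and max distance are arbitrary illustration values, not from the paper):

```cpp
// Sphere tracing: step along the ray by the distance to the nearest surface.
struct Hit { bool hit; float t; };

Hit sphereTrace(Vec3 origin, Vec3 dir, float maxDist)
{
    const float eps      = 1e-3f;
    const int   maxSteps = 128;

    float t = 0.0f;
    for (int i = 0; i < maxSteps && t < maxDist; ++i)
    {
        Vec3 p = { origin.x + dir.x * t,
                   origin.y + dir.y * t,
                   origin.z + dir.z * t };
        float d = sceneSDF(p);   // nearest-surface distance at p
        if (d < eps)
            return { true, t };  // close enough to a surface: report a hit
        t += d;                  // safe to advance by d without skipping geometry
    }
    return { false, t };
}
```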

Unfortunately I do not understand their method of probe placement. It sounds like they place probes on a static regular grid, and if they collide with the SDF scene, the probe moves out of solid space by following the distance gradient. But it seems they do this at runtime, maybe to react to dynamic objects. Imagine a moving wall: probes would then be pushed along with the wall until they snap to the back side behind it. They detect this by checking whether the distance traveled is too large, and then enforce a full update of that probe, rejecting its history (and thus also its integrated multiple bounces?). Interesting; I wonder which artifacts this causes, but with some TAA I guess it rarely shows up.
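If I read that right, the relocation step is roughly: sample the SDF at the probe's grid position, push the probe along the SDF gradient until it has enough clearance, and reset its history if it had to travel too far. A hedged sketch of that idea, again reusing the helpers from the earlier sketches; the central-difference gradient, iteration count and thresholds are my assumptions, not the paper's numbers:

```cpp
// Estimate the SDF gradient (direction away from the nearest surface) with central differences.
Vec3 sdfGradient(Vec3 p)
{
    const float h = 1e-2f;
    Vec3 g = {
        sceneSDF({p.x + h, p.y, p.z}) - sceneSDF({p.x - h, p.y, p.z}),
        sceneSDF({p.x, p.y + h, p.z}) - sceneSDF({p.x, p.y - h, p.z}),
        sceneSDF({p.x, p.y, p.z + h}) - sceneSDF({p.x, p.y, p.z - h})
    };
    float len = std::max(length(g), 1e-6f);
    return { g.x / len, g.y / len, g.z / len };
}

struct Probe
{
    Vec3 gridPos;      // fixed position on the regular grid
    Vec3 pos;          // possibly relocated position used for shading
    bool resetHistory; // true if temporal history must be discarded
};

// Push a probe out of solid space along the distance gradient each frame.
void relocateProbe(Probe& probe, float minClearance, float maxTravel)
{
    Vec3  p     = probe.gridPos;
    float moved = 0.0f;
    for (int i = 0; i < 8; ++i)                  // a few fixed iterations
    {
        float d = sceneSDF(p);
        if (d >= minClearance) break;            // already in free space
        Vec3  n    = sdfGradient(p);
        float step = minClearance - d;           // move just far enough out
        p = { p.x + n.x * step, p.y + n.y * step, p.z + n.z * step };
        moved += step;
    }
    // If the probe traveled too far (e.g. it snapped behind a moving wall),
    // reject its temporal history and force a full re-update.
    probe.resetHistory = (moved > maxTravel) ||
                         (length(sub(p, probe.pos)) > maxTravel);
    probe.pos = p;
}
```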

I also fail to get their multi-bounce approach. I do not understand why they need to spend extra tracing work on this, and why they have a parameter for its contribution. It doesn't seem to be the typical radiance caching method, where multiple bounces come for free and are correct with no extra effort.

I also couldn't follow their probe visibility test. Do they trace a ray in screen space from the pixel to the probe? Or do they use a per-probe depth buffer like RTXGI does? I guess they use a spatial trick like tracing only towards one of the 8 affecting probes, dithering the selected probe in screen space, and then getting visibility to the 7 non-traced probes from neighboring pixels. Something like that.
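For comparison, the brute-force version of that visibility test (with none of the dithering/screen-space tricks) would just be a sphere trace from the shading point towards the probe; whether the paper actually does anything like this per pixel is exactly what's unclear. A sketch of the naive check, reusing sphereTrace from above, with the normal bias being an arbitrary assumption:

```cpp
// Naive per-pixel probe visibility: march from the shaded point towards the probe
// and see whether anything is hit before reaching it. The offset along the surface
// normal avoids immediate self-intersection at the start of the ray.
bool probeVisible(Vec3 surfacePos, Vec3 surfaceNormal, Vec3 probePos)
{
    const float bias = 1e-2f;
    Vec3 origin = { surfacePos.x + surfaceNormal.x * bias,
                    surfacePos.y + surfaceNormal.y * bias,
                    surfacePos.z + surfaceNormal.z * bias };

    Vec3  toProbe = sub(probePos, origin);
    float dist    = length(toProbe);
    Vec3  dir     = { toProbe.x / dist, toProbe.y / dist, toProbe.z / dist };

    Hit h = sphereTrace(origin, dir, dist);
    return !h.hit;   // visible if nothing was hit before the probe
}
```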

Finally, they mention large scene support but don't show examples. The paper feels promising but lacks some pages of better explanation and illustrations. Also, it is just another implementation of the idea of updating sparse probes for realtime GI. As with DDGI (now RTXGI), the only news here is about certain implementation details, or which hack works best for whom. I wonder which SDF primitive they would use if the curtains followed a curve.

It's a decent paper, but as JoeJ points out in far more detail, unwieldy. Not that Signed Distance Fields are a bad idea; they're far faster than hardware raytracing and have shipped in realtime in multiple titles on the PS4/XBO without any special hardware required.

For a much better take on the same idea, I'd actually look at the following two papers. The first is a prototype of what is running in Unreal Engine 5. While it's only for semi-static scenes (you can update objects slowly), it solves a lot of the problems this paper has, allowing for arbitrary art and very fast tracing: https://advances.realtimerendering.com/s2015/DynamicOcclusionWithSignedDistanceFields.ppt

Edit: I found it! OK, here's the improved probe paper. It's simple, fast, supports multi-bounce, and should be exactly what you're looking for.

Anyway, the second paper complicates things by needing a prepass and a two-level acceleration structure, but art support is arbitrary. The way I see it, SDFs really are the future. People used to think they were limited, but more recent tracing work means the only thing still missing is robust skinned animation. Shadows, mirror reflections, etc. are all doable and fast. But for your purposes I'm guessing diffuse GI with probes is enough.

SDFs are a tradeoff like anything else; there's not necessarily a “best” choice here. SDFs in their usual form represent certain shapes very well, while other shapes give them more trouble and may require more memory and/or extensions. This particular paper takes things a full step further by fitting geometry to clusters of SDF primitives, which introduces even more uncertainty. I only skimmed it (I'll need to come back later), but I didn't see any analysis of what sorts of geometry and scenes their fitting/clustering process has trouble with, so that's an open question.

On the other hand, tracing against triangles can be very expensive but can also exactly match the original triangle representation of your scenes (assuming you're using a traditional triangle-based pipeline like 99.99% of games). In some cases having some discrepancy between your triangle and SDF representations might be totally OK: Fortnite's usage of SDF tracing for distant sun shadows comes to mind, and diffuse GI may also be a good fit since it's inherently a low-pass filter. But for specular reflections or soft contact shadows maybe the SDF representation isn't good enough. Or maybe you have an art style and geometry representation (like the blobby shapes in Claybook) where SDF totally works, and that's great. Maybe you do both depending on the situation!

Anyway, the approach outlined in that paper seems clever and totally worth the effort of investigating; I just wanted to point out that these things often aren't one-size-fits-all.

I am currently researching this paper. What I can say is that without supplemental code or a more in-depth explanation it's (at least for me) impossible to code the actual implementation. So it's another paper the authors made for themselves :D and most likely its full implementation will never see the light of day, which is a pity.

