
Compute Shader Ray Tracing

5 comments, last by Turbo14 4 years, 3 months ago

Has anyone made a compute shader ray tracer that can render a moderately complex scene in real time? I've googled a bit and all I've found are ones with a few spheres and maybe a skybox and a ground plane. I'm talking about DX11-compatible compute shaders, not DXR or RTX stuff. Although, I think if they keep making dedicated ray tracing hardware, any compute shader ray tracers will be obsolete. Still fun to mess with, though.

The only somewhat complex thing I've found is Crytek's demo.
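For reference, the "few spheres and a ground plane" demos mentioned above all boil down to roughly the same per-pixel loop. Here is a minimal, self-contained C++ sketch of that baseline (the scene, camera and ASCII output are made up for illustration); the same math maps almost one-to-one onto a DX11 compute shader with one thread per pixel.

```cpp
// Minimal "few spheres" ray tracer core. Everything here (scene, camera,
// ASCII output) is illustrative, not taken from any particular project.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  normalize(Vec3 v) { return v * (1.0f / std::sqrt(dot(v, v))); }

struct Sphere { Vec3 center; float radius; };

// Returns the hit distance along the ray, or -1 if the ray misses the sphere.
static float intersectSphere(Vec3 ro, Vec3 rd, const Sphere& s) {
    Vec3  oc   = ro - s.center;
    float b    = dot(oc, rd);
    float c    = dot(oc, oc) - s.radius * s.radius;
    float disc = b * b - c;
    if (disc < 0.0f) return -1.0f;
    float t = -b - std::sqrt(disc);
    return (t > 0.0f) ? t : -1.0f;
}

int main() {
    std::vector<Sphere> scene = { {{0, 0, 5}, 1.0f}, {{2, 0, 6}, 1.0f} };
    Vec3 lightDir = normalize({1, 1, -1});
    const int W = 64, H = 32;                         // tiny resolution for the demo
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            Vec3 ro = {0, 0, 0};                      // camera at the origin
            Vec3 rd = normalize({(x - W * 0.5f) / H, -(y - H * 0.5f) / H, 1.0f});
            float best = 1e30f; int hit = -1;
            for (size_t i = 0; i < scene.size(); ++i) {
                float t = intersectSphere(ro, rd, scene[i]);
                if (t > 0.0f && t < best) { best = t; hit = (int)i; }
            }
            float shade = 0.0f;
            if (hit >= 0) {
                Vec3 p = ro + rd * best;
                Vec3 n = normalize(p - scene[(size_t)hit].center);
                shade = std::fmax(0.0f, dot(n, lightDir));    // simple Lambert term
            }
            std::putchar(" .:-=+*#%@"[(int)(shade * 9.0f)]);  // crude ASCII shading
        }
        std::putchar('\n');
    }
    return 0;
}
```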


Yeah, this would be a fun and cool thing to see. I'm thinking a level of complexity like the original Quake - not Quake 2, which is what the RTX version is based on - so just point and spot lights, no bounces, no sky light, no translucency, no reflections, low poly counts, but still just about enough to be a cool platform for experimenting with.
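To make that feature set concrete: direct lighting at this level is basically one shadow ray per light per pixel. A rough, self-contained C++ sketch under those assumptions (the Light fields and the traceAnyHit occlusion query are placeholders I made up):

```cpp
// Direct lighting only, matching the feature set above: point/spot lights,
// one shadow ray per light, no bounces, no reflections.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Light {
    Vec3  pos;
    float intensity;
    Vec3  spotDir;    // ignored for point lights
    float cosCutoff;  // -1 for point lights, cos(cone half-angle) for spot lights
};

// Placeholder: returns true if any geometry blocks the segment from..to.
static bool traceAnyHit(Vec3 from, Vec3 to) { (void)from; (void)to; return false; }

static Vec3 shadeDirect(Vec3 p, Vec3 n, Vec3 albedo, const std::vector<Light>& lights) {
    Vec3 result = {0, 0, 0};
    for (const Light& l : lights) {
        Vec3  toLight = l.pos - p;
        float dist2   = dot(toLight, toLight);
        Vec3  L       = toLight * (1.0f / std::sqrt(dist2));
        float ndotl   = dot(n, L);
        if (ndotl <= 0.0f) continue;                            // surface faces away
        if (dot(L * -1.0f, l.spotDir) < l.cosCutoff) continue;  // outside the spot cone
        if (traceAnyHit(p, l.pos)) continue;                    // one shadow ray per light
        result = result + albedo * (ndotl * l.intensity / dist2); // inverse-square falloff
    }
    return result;
}
```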


https://github.com/GPUOpen-LibrariesAndSDKs/RadeonRays_SDK

https://wotencore.net/en-eu/

https://amietia.com/q2pt.html

https://www.gamedev.net/projects/380-real-time-hybrid-rasterization-raytracing-engine/

There were more interesting projects shown here on the site, but they're hard to find. I remember @vilem otte's path-traced game in the Doom challenge.

This brings up the next question… just how detailed can a scene be and still render at 60+ fps with a compute shader ray tracer on, say, a GTX 1060-class card, at 540p or higher?
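It helps to put rough numbers on that before asking about scene detail. The budget arithmetic is trivial; the rays-per-pixel count below is just an assumption:

```cpp
// Back-of-the-envelope ray budget at 540p / 60 fps. Pure arithmetic; the
// rays-per-pixel count is an assumption, and actual throughput depends
// entirely on the scene, the shader, and the acceleration structure.
#include <cstdio>

int main() {
    const long long width = 960, height = 540;
    const long long fps = 60;
    const long long raysPerPixel = 2;   // e.g. 1 primary + 1 shadow ray (assumed)
    long long raysPerSecond = width * height * fps * raysPerPixel;
    std::printf("%lld rays/s needed\n", raysPerSecond);   // ~62 million rays/s
    return 0;
}
```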

I'm wondering if I should even bother working on one or just wait for hardware accelerated RT to blow everything out of the water.

I'm wondering if I should even bother working on one or just wait for hardware accelerated RT to blow everything out of the water.

Personally, I decided on the latter option.

I already use compute RT for the real-time GI project (multi-bounce diffuse and glossy reflections) I'm working on, but this is a special-case implementation - it uses a hierarchy of surfels instead of triangles and aggressive LOD, which also allowed elegant solutions to the data divergence problem of classic RT.

But I planned to implement classic RT as well for sharper reflections. I can still get some benefit from my surfel data structure, which is BVH, geometry and LOD in one thing. So that would be interesting and maybe promising to work on.
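I can only guess at the actual layout, but the idea of a structure that is "BVH, geometry and LOD in one thing" could look roughly like the sketch below; every name and field here is my own assumption for illustration, not the poster's real data structure.

```cpp
// Rough guess at a node in such a combined structure: each node doubles as a
// BVH node (bounds + children) and as a coarse surfel (geometry + cached
// lighting), so walking down the tree is also walking down the LOD chain.
#include <cstdint>

struct Vec3 { float x, y, z; };

struct SurfelNode {
    // BVH part: bounds of everything in this subtree.
    Vec3 boundsMin;
    Vec3 boundsMax;
    // Geometry / LOD part: the node itself is a disc-shaped surface proxy,
    // usable directly when the subtree is small enough on screen.
    Vec3  position;      // surfel center
    Vec3  normal;        // averaged surface normal
    float radius;        // surfel radius; grows toward the root = coarser LOD
    Vec3  radiance;      // cached lighting for GI / glossy lookups
    // Tree links.
    uint32_t firstChild; // index into the node array; 0xFFFFFFFF marks a leaf
    uint32_t childCount;
};
// Traversal idea: descend until a node's projected size drops below a threshold,
// then shade with that node's surfel instead of going all the way to the leaves.
```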

Then, when RTX was announced, I decided to drop those plans. RTX is not perfect, but it can give fully accurate results for sharp reflections and shadows. I could not deliver this accuracy with my own surfel stuff. And I would not want to work on full-scene triangle ray tracing in compute, because in a few years everyone will have HW RT on their GPU.

My personal assumption: if Crytek had known about the upcoming RTX, they would not have worked on their very advanced RT solution either.

This leaves only two reasons to work on compute RT: For learning, and for fun.

Is it worth writing your own compute ray tracer for learning purposes? (Same question: is it worth learning how software rasterization works?)

IMO the answer is more no than yes. I mean, it is worth spending one or two weeks on it, but it is not worth optimizing it until it runs at practical performance. That takes too much time for too little practical benefit.

I would have answered differently one or two decades ago, but nowadays game technology has become so complex that you cannot know everything in detail. You need to specialize in something, whether motivated by actual needs or just interest. And that also means you can only learn a small piece of the whole thing, which implies the time spent on learning is limited.

So, if you are interested in RT, there are many things worth your time: Importance sampling, light sampling, denoising, etc. It's a whole lot of stuff.
(I've seen your posts here - the results seem incorrect. You might want to focus on a simple, correct path tracer at first and ignore performance. That's worth learning no matter how hardware evolves.)
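As a concrete example of the importance sampling mentioned above: the first thing most simple-but-correct diffuse path tracers implement is cosine-weighted hemisphere sampling, because the PDF then cancels the N·L term in the estimator. A small self-contained C++ sketch:

```cpp
// Cosine-weighted hemisphere sampling: the usual first piece of importance
// sampling in a diffuse path tracer. With pdf = cos(theta)/pi, the N.L term
// cancels against the pdf, which keeps the estimator simple.
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

// Sample a direction around normal n with pdf = cos(theta) / pi.
static Vec3 sampleCosineHemisphere(const Vec3& n, std::mt19937& rng) {
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    float u1 = uni(rng), u2 = uni(rng);
    float r   = std::sqrt(u1);
    float phi = 6.2831853f * u2;
    float lx = r * std::cos(phi), ly = r * std::sin(phi);
    float lz = std::sqrt(1.0f - u1);                 // local z = cos(theta)
    // Build an orthonormal basis (t, b, n) around the normal.
    Vec3 helper = (std::fabs(n.z) < 0.999f) ? Vec3{0.0f, 0.0f, 1.0f}
                                            : Vec3{1.0f, 0.0f, 0.0f};
    Vec3 t = { helper.y * n.z - helper.z * n.y,      // t = cross(helper, n)
               helper.z * n.x - helper.x * n.z,
               helper.x * n.y - helper.y * n.x };
    float tl = std::sqrt(t.x * t.x + t.y * t.y + t.z * t.z);
    t = { t.x / tl, t.y / tl, t.z / tl };
    Vec3 b = { n.y * t.z - n.z * t.y,                // b = cross(n, t)
               n.z * t.x - n.x * t.z,
               n.x * t.y - n.y * t.x };
    // Transform the local-space sample into world space.
    return { lx * t.x + ly * b.x + lz * n.x,
             lx * t.y + ly * b.y + lz * n.y,
             lx * t.z + ly * b.z + lz * n.z };
}
```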

But the technical implementation of RT itself (forms of acceleration structures, traversal and intersection) seems no longer in the hands of software developers. It's a hardware thing now.
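For anyone curious what that now-fixed-function part looks like, below is a skeleton of a classic stack-based BVH traversal with a slab AABB test; the node layout and ray representation are made-up placeholders, shown only to illustrate the kind of code that has moved into hardware.

```cpp
// Skeleton of what the fixed-function RT units now hide: a stack-based BVH
// traversal with a slab test per node.
#include <cstdint>
#include <utility>
#include <vector>

struct Ray {
    float o[3];     // origin
    float inv[3];   // 1 / direction, precomputed to avoid divides in the slab test
    float tMax;
};

struct BvhNode {
    float    bmin[3], bmax[3];  // axis-aligned bounds of the subtree
    uint32_t firstChildOrPrim;  // child index (internal) or primitive index (leaf)
    uint16_t primCount;         // 0 = internal node, otherwise leaf
};

// Classic slab test: does the ray segment [0, tMax] overlap the box?
static bool hitAabb(const Ray& r, const BvhNode& n) {
    float t0 = 0.0f, t1 = r.tMax;
    for (int a = 0; a < 3; ++a) {
        float tNear = (n.bmin[a] - r.o[a]) * r.inv[a];
        float tFar  = (n.bmax[a] - r.o[a]) * r.inv[a];
        if (tNear > tFar) std::swap(tNear, tFar);
        if (tNear > t0) t0 = tNear;
        if (tFar  < t1) t1 = tFar;
        if (t0 > t1) return false;
    }
    return true;
}

static void traverse(const std::vector<BvhNode>& nodes, const Ray& ray) {
    uint32_t stack[64];
    int sp = 0;
    stack[sp++] = 0;                               // start at the root node
    while (sp > 0) {
        const BvhNode& n = nodes[stack[--sp]];
        if (!hitAabb(ray, n)) continue;
        if (n.primCount > 0) {
            // Leaf: intersect primCount primitives starting at firstChildOrPrim.
        } else {
            stack[sp++] = n.firstChildOrPrim;      // push both children (binary BVH)
            stack[sp++] = n.firstChildOrPrim + 1;
        }
    }
}
```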

Yeah, my stuff wasn't really accurate because I wasn't basing it on anything and was coming up with my own solutions instead of looking up how to do it right. I worked on it a little more and it looks a bit more accurate now.

Anyways, I'm pretty much abandoning the CPU stuff… you need an insane number of CPU cores to render anything at a reasonable resolution, and hardly anyone has more than 4-6 cores. My latest version uses memory-mapped files to raycast on separate processes and send the results to the main program, which puts the image together; I use an equal number of threads to process the raycasted pixels. Even with 6 cores / 12 threads, it's still too slow at a quarter of 1080p resolution.
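For comparison, the usual in-process version of that work split just divides scanlines across threads writing into one shared framebuffer, instead of separate processes talking through memory-mapped files. A sketch (tracePixel is a stand-in for the actual per-pixel ray caster):

```cpp
// Scanlines divided across worker threads, all writing into one framebuffer.
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

static uint32_t tracePixel(int x, int y) {
    // Placeholder pattern so the sketch compiles; the real tracer goes here.
    return 0xFF000000u | (uint32_t)((x ^ y) & 0xFF) * 0x010101u;
}

static void renderFrame(std::vector<uint32_t>& framebuffer, int width, int height) {
    unsigned workerCount = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (unsigned w = 0; w < workerCount; ++w) {
        workers.emplace_back([&, w]() {
            // Interleaved rows: thread w handles rows w, w + N, w + 2N, ...
            for (int y = (int)w; y < height; y += (int)workerCount)
                for (int x = 0; x < width; ++x)
                    framebuffer[(size_t)y * width + x] = tracePixel(x, y);
        });
    }
    for (auto& t : workers) t.join();
}
```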

Anyways, software ray tracing, CPU and GPU, will pretty much be dead once RTX and whatever AMD comes out with become more developed.

This topic is closed to new replies.
