
Double precision on GPU, still too slow or now the way to go?


JoeJ said:

Even if the game itself uses double precision (Star Citizen is the only example I know)

You mean Star Citizen actually renders with double precision on the GPU? Or the engine works with double precision internally? Because I think lots of games do that, no? I actually even think working with doubles is faster on the CPU.


@scippie

scippie said:

You mean Star Citizen actually renders with double precision on the GPU? Or the engine works with double precision internally? Because I think lots of games do that, no? I actually even think working with doubles is faster on the CPU.

On modern CPUs double is quite fast. If you need the large coordinate space, it's likely a lot faster than most of the tricks you would use to avoid it, and it's definitely a lot more convenient. I use double for most things CPU-side, as it avoids a lot of problems with large-scale terrains.
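For illustration, here's a minimal sketch of that approach: keep world positions in double on the CPU and hand the GPU only small, camera-relative floats. All type and function names here are made up for the example.

```cpp
// Sketch: world positions in double on the CPU; the GPU only ever
// sees camera-relative single-precision positions, rebuilt per frame.
#include <vector>

struct Vec3d { double x, y, z; };
struct Vec3f { float x, y, z; };

// Subtracting in double first keeps the result small, so the cast
// to float loses almost no precision near the camera.
Vec3f toCameraRelative(const Vec3d& world, const Vec3d& camera)
{
    return Vec3f{
        static_cast<float>(world.x - camera.x),
        static_cast<float>(world.y - camera.y),
        static_cast<float>(world.z - camera.z)
    };
}

void buildGpuPositions(const std::vector<Vec3d>& world,
                       const Vec3d& camera,
                       std::vector<Vec3f>& out)
{
    out.clear();
    out.reserve(world.size());
    for (const Vec3d& p : world)
        out.push_back(toCameraRelative(p, camera));
}
```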

Now that I think about it: I've read about creating a GPU-based physics engine. Unless the scene fits into single-precision floating-point values, that does seem like a bad idea then!

I do use the GPU for my particle engine, but my particles are always relative to the camera of course, so single precision is more than enough there.
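A minimal sketch of that kind of scheme: store particles in float relative to a local origin that follows the camera, and rebase everything when the camera drifts too far. All names are illustrative, not taken from any actual engine.

```cpp
// Sketch: particle positions in float, relative to a double-precision
// world-space origin. Rebasing keeps float coordinates small.
#include <vector>

struct Vec3f { float x, y, z; };
struct Vec3d { double x, y, z; };

struct ParticleSystem {
    Vec3d origin{};                 // world-space origin, in double
    std::vector<Vec3f> positions;   // particles relative to origin

    void rebaseIfNeeded(const Vec3d& cameraWorld, double threshold = 1000.0)
    {
        double dx = cameraWorld.x - origin.x;
        double dy = cameraWorld.y - origin.y;
        double dz = cameraWorld.z - origin.z;
        if (dx * dx + dy * dy + dz * dz < threshold * threshold)
            return;
        // Shift every particle by the origin delta (small enough for
        // float at this point) and move the origin onto the camera.
        for (Vec3f& p : positions) {
            p.x -= static_cast<float>(dx);
            p.y -= static_cast<float>(dy);
            p.z -= static_cast<float>(dz);
        }
        origin = cameraWorld;
    }
};
```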

scippie said:
You mean Star Citizen actually renders with double precision on the GPU? Or the engine works with double precision internally? Because I think lots of games do that, no? I actually even think working with doubles is faster on the CPU.

I don't know any details. My guess is rendering is single precision (they have very detailed models), but physics on the CPU uses doubles, as do the network and game code.

On the CPU the cost is usually about 2x single-precision performance (a SIMD register holds half as many doubles as floats), which is about the best ratio you can expect.
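A rough way to probe that ratio on a given machine (treat it as a sanity check, not a benchmark):

```cpp
// Rough probe of float-vs-double throughput. A 256-bit SIMD register
// holds 8 floats but only 4 doubles, and double arrays move twice the
// bytes, so vectorized throughput-bound loops often land near 2x.
// Note: without -ffast-math the sum stays scalar (FP addition is not
// associative), which can narrow the gap. Compile with -O2.
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

template <typename T>
double secondsToSum(std::size_t n)
{
    std::vector<T> data(n, T(1.000001));
    T acc = T(0);
    auto t0 = std::chrono::steady_clock::now();
    for (T v : data)
        acc += v;
    auto t1 = std::chrono::steady_clock::now();
    // Use the result so the compiler cannot drop the loop.
    std::printf("  (sum = %f)\n", static_cast<double>(acc));
    return std::chrono::duration<double>(t1 - t0).count();
}

int main()
{
    const std::size_t n = 10'000'000;
    double tf = secondsToSum<float>(n);
    double td = secondsToSum<double>(n);
    std::printf("float: %.4fs  double: %.4fs  double/float: %.2f\n",
                tf, td, td / tf);
}
```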

Star Citizen doesn't use native doubles. They use the CPU to virtualize a sparse 64-bit depth buffer into a normal single-precision one on the GPU.
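For the curious: one common way to squeeze extra precision out of single-precision hardware is "double-float" arithmetic, where a value is stored as an unevaluated sum of two floats. A minimal sketch of the core building block (an illustration of the general idea, not Star Citizen's actual technique):

```cpp
// "Double-float" sketch: value = hi + lo, with lo capturing the
// rounding error that a plain float addition would throw away.
// Requires strict IEEE semantics (do not compile with -ffast-math).
#include <cstdio>

struct FloatFloat { float hi, lo; };  // value = hi + lo

// Knuth's TwoSum: recovers the rounding error of a + b exactly.
FloatFloat twoSum(float a, float b)
{
    float s  = a + b;
    float bv = s - a;                         // part of b stored in s
    float err = (a - (s - bv)) + (b - bv);    // exact rounding error
    return FloatFloat{s, err};
}

// Add a plain float to a double-float value.
FloatFloat addFF(FloatFloat a, float b)
{
    FloatFloat s = twoSum(a.hi, b);
    s.lo += a.lo;                 // fold in the existing low part
    return twoSum(s.hi, s.lo);    // renormalize
}

int main()
{
    // Accumulate an increment that plain float at this magnitude
    // would lose entirely (the ulp of 1e8f is 8).
    FloatFloat acc{1.0e8f, 0.0f};
    for (int i = 0; i < 1000; ++i)
        acc = addFF(acc, 0.001f);
    std::printf("hi = %.1f  lo = %g  (plain float would still be 1e8)\n",
                acc.hi, acc.lo);
}
```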

Frantic PonE said:

Star Citizen doesn't use native doubles. They use the CPU to virtualize a sparse 64-bit depth buffer into a normal single-precision one on the GPU.

That sounds nice. Until now I've used logarithmic scaling in the depth buffer for my space game. It has given good results, but that may not stay true as graphics fidelity progresses, of course.
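For reference, a sketch of the widely used logarithmic depth formulation in plain C++ (in a real renderer this runs in the vertex or fragment shader; the constant C trades precision near the camera against precision in the distance):

```cpp
// Logarithmic depth remapping: z = (2*log(C*w + 1)/log(C*far + 1) - 1) * w,
// where w is the clip-space w coordinate (view-space depth).
#include <cmath>
#include <cstdio>
#include <initializer_list>

float logarithmicDepth(float w, float farPlane, float C = 1.0f)
{
    return (2.0f * std::log(C * w + 1.0f)
                 / std::log(C * farPlane + 1.0f) - 1.0f) * w;
}

int main()
{
    const float farPlane = 1.0e7f;  // space-game scale far plane
    for (float w : {1.0f, 100.0f, 1.0e4f, 1.0e6f})
        std::printf("w = %8g  ->  z/w = %+.4f\n",
                    w, logarithmicDepth(w, farPlane) / w);
}
```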

