Benefits of multithreaded renderer


So I keep reading about multithreaded renderers, but as a beginner it's not clear to me what exactly the benefit is. Ultimately, don't the calls to the GPU have to be made in a specific linear order?


For example, if you have a game with 8000 zombies, each of eight threads could animate and cull 1000 of them and then build a GPU command buffer. The main render thread can then submit those buffers without being blocked for long.
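
A minimal sketch of that split, assuming hypothetical Zombie and CommandBuffer types and stubbed-out helpers (none of this is a real engine API):

```cpp
#include <cstddef>
#include <thread>
#include <vector>

struct Zombie        { /* position, animation state, ... (assumed) */ };
struct CommandBuffer { /* recorded draw commands (assumed) */ };

// Hypothetical per-worker job: animate and cull a range of zombies and
// record their draw calls into this worker's own command buffer.
void animateAndRecord(Zombie* zombies, std::size_t count, CommandBuffer& cmds)
{
    /* ... per-zombie work goes here ... */
}

// Hypothetical single-threaded submission of all recorded buffers, in order.
void submit(const std::vector<CommandBuffer>& buffers) { /* ... */ }

void renderZombies(std::vector<Zombie>& zombies, unsigned workerCount)
{
    std::vector<CommandBuffer> buffers(workerCount);
    std::vector<std::thread>   workers;
    const std::size_t chunk = zombies.size() / workerCount;

    for (unsigned i = 0; i < workerCount; ++i)
    {
        std::size_t begin = i * chunk;
        std::size_t end   = (i + 1 == workerCount) ? zombies.size() : begin + chunk;
        workers.emplace_back(animateAndRecord, zombies.data() + begin,
                             end - begin, std::ref(buffers[i]));
    }
    for (auto& w : workers) w.join(); // wait until every buffer is recorded

    submit(buffers); // only the main thread talks to the GPU queue
}
```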

A single-threaded renderer would have to do all of that work on one thread and become bottlenecked when submitting many draw calls, while the other threads get bored and start defragmenting the trashcan.

ccherng said:
it's not clear to me what exactly the benefit is

If you mean what the benefit of investing your time in it is, I would say: not much, at least for a beginner. I always wanted to use async or other extras of the GPU, but I was always able to fake it with standard commands. I go for performance, but still, I haven't seen a reason for multithreading. Maybe I will see one further on, but so far I have not. Maybe if you create a game like Horizon Zero Dawn you will need it. But most of the time it will already be fast with one thread.

ccherng said:
Ultimately, don't the calls to the GPU have to be made in a specific linear order?

Only the command lists are generated in parallel; the GPU still receives the commands in a single line. So we are talking about multithreading inside the CPU while it executes the code of the DX12 API (@joel, correct me if I am wrong here).

The GPU will do everything it can to parallelize the serial workload it receives. It can completely reorder your workload. Take nothing for granted and use barriers. Only barriers can make sure the workload is executed in the way you intended; otherwise, no "specific linear order" can be assumed.
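
To make that concrete: in D3D12 a transition barrier is what tells the GPU that one pass must finish before another may read its output. A minimal fragment, assuming commandList is an ID3D12GraphicsCommandList* and renderTarget is an ID3D12Resource* the previous pass rendered into:

```cpp
// Transition the texture from render-target to shader-resource state so a
// later draw can read the results of the earlier pass.
D3D12_RESOURCE_BARRIER barrier = {};
barrier.Type                   = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
barrier.Transition.pResource   = renderTarget;
barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;

// Until this executes, the GPU is free to overlap the two passes; the
// barrier is what enforces the ordering you intended.
commandList->ResourceBarrier(1, &barrier);
```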

@NikiTo @joel So am I right in understanding that when we speak about a multithreaded renderer, we count things like animation and matrix transformations as part of the "renderer", and we don't treat the renderer and the animation system as separate systems? Is it standard terminology to speak of animation as part of the renderer?

ccherng said:

@NikiTo @joel So am I right in understanding that when we speak about a multithreaded renderer, we count things like animation and matrix transformations as part of the "renderer", and we don't treat the renderer and the animation system as separate systems? Is it standard terminology to speak of animation as part of the renderer?

Mostly, the things you would do on the CPU are worth multithreading.
Matrices are computed on the CPU. If you have a lot of matrices to compute, multithreading helps.

If you are going to animate on the GPU, it makes less sense to multithread on the CPU.
If you are going to animate on the CPU, you benefit a lot from multithreading.
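
As a sketch of the CPU case, C++17's parallel algorithms can spread independent matrix multiplies across all cores. The Matrix4 type, its operator*, and the Bone layout here are placeholders, not from any particular library:

```cpp
#include <algorithm>
#include <execution>
#include <vector>

// Matrix4 and its operator* are assumed; any math library's 4x4 type works.
struct Matrix4 { float m[16]; };
Matrix4 operator*(const Matrix4& a, const Matrix4& b); // assumed

struct Bone { Matrix4 objectSpace; Matrix4 world; };

// Each bone's result depends only on its own input, so the multiplies can
// run on all cores with no locking.
void updateBones(std::vector<Bone>& bones, const Matrix4& modelToWorld)
{
    std::for_each(std::execution::par, bones.begin(), bones.end(),
                  [&](Bone& b) { b.world = modelToWorld * b.objectSpace; });
}
```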

But modern CPUs are pretty fast anyway. I suggest you code for a single thread first, and make sure it works correctly. If you think it is slow, then try to add multithreading. Of course the program will run faster with multithreading, but is it worth the effort?… Code it for a single thread first.

Hell is taking a single-threaded program and making it multithreaded.

We're getting more processors for the same money, but they're not getting any faster. Clocks have been at 3-4 GHz for a decade now. Unity and UE4 went through huge efforts to multithread their renderers, and it paid off.

The new game consoles, the PS5 and the Xbox Series X, are both 8-core machines. Game developers need to be thinking multithreaded, unless they're making a retro sidescroller or something.

I meant "indirect" where I wrote "async".

Nagle said:

Unity and UE4 went through huge efforts to multithread their renderers, and it paid off.

"Huge effort" - that's the point.

"Retro sidescroller" - way, way too exaggerated.

ccherng said:

@NikiTo @joel So am I right in understanding that when we speak about a multithreaded renderer, we count things like animation and matrix transformations as part of the "renderer", and we don't treat the renderer and the animation system as separate systems? Is it standard terminology to speak of animation as part of the renderer?

Not sure if animation belongs to the 'renderer' or the 'game engine'. Maybe the former, since one could do it on the GPU (e.g. waving foliage).

But it is also about the driver work when you issue API commands, not just the animation example, which has obvious costs. The API costs seem hidden because the driver often buffers commands and processes them internally in the background, so API calls return quickly; but it still takes time to prepare work for the GPU, and multithreading makes sense there. It is also about latencies: a thread can issue uploads to the GPU as early as possible, so the data is ready when the GPU needs it.

Multithreading should be the default way of doing everything nowadays. Many algorithms cannot profit from parallelization directly, but in those cases the question is simply: what other work can I do while some thread runs that algorithm?
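
A small sketch of that pattern using std::async; the Path type and the helper functions are made-up stand-ins:

```cpp
#include <future>

struct Path { /* waypoints, ... (assumed) */ };
Path computePath(int start, int goal) { return {}; } // serial algorithm, assumed
void updateParticles(float dt) {}                    // unrelated work, assumed
void updateAudio(float dt) {}

void gameTick(float dt)
{
    // Run the serial algorithm on a worker thread...
    std::future<Path> path = std::async(std::launch::async, computePath, 0, 42);

    // ...and fill the wait with work that does not depend on it.
    updateParticles(dt);
    updateAudio(dt);

    Path result = path.get(); // blocks only if the algorithm is still running
    (void)result;
}
```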

It's some extra work, makes debugging harder, and increases code complexity, but it's the only way to utilize the whole system's power when necessary.
Better to start using it sooner rather than later, because it's a matter of experience, which takes time to build up.
But this does not mean multithreaded rendering is a must-have - that totally depends.

I guess there are two levels of multithreading to consider as well.

You could put all your rendering/animation/etc. logic on one single thread, separate from the other game work (and maybe split some of that other work into further threads as well), rather than having nearly the entire game run on a single thread.

And you can go beyond that and have the rendering itself use multiple threads - but one thread can already do a lot of rendering if that is all it is doing.
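
A minimal sketch of the first level - a dedicated render thread that drains a queue of frames produced by the game thread. FrameData and renderFrame() are placeholders:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

struct FrameData { /* culled objects, matrices, ... (assumed) */ };
void renderFrame(const FrameData& frame) { /* all GPU submission, assumed */ }

std::queue<FrameData>   frames;
std::mutex              framesMutex;
std::condition_variable framesReady;
bool                    quitting = false;

// The render thread drains the queue; the game thread only produces into it.
void renderThreadMain()
{
    for (;;)
    {
        std::unique_lock<std::mutex> lock(framesMutex);
        framesReady.wait(lock, [] { return !frames.empty() || quitting; });
        if (quitting && frames.empty()) return;
        FrameData frame = std::move(frames.front());
        frames.pop();
        lock.unlock();
        renderFrame(frame); // GPU work happens off the game thread
    }
}

// Called once per tick from the game thread.
void pushFrame(FrameData frame)
{
    {
        std::lock_guard<std::mutex> lock(framesMutex);
        frames.push(std::move(frame));
    }
    framesReady.notify_one();
}
```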

The thing is, we always need a solid reason to do anything.

Using multithreading for the sole sake of using it is bad. Multithreading "because modern CPUs have many cores" or "because it is 2020" is wrong.

Multithreading needs semaphores and a lot of extra code just to work, so having two cores doesn't mean the speed of execution will double.

The driver already multithreads in the background where possible. The programmers of the API made it as fast as possible.

Multithreading requires a lot of work and investigation, and the investigation takes most of the time. The dev needs to program it one way, measure it, then program it another way, measure it, and so on. And then he needs to debug all of that, a lot. Just adding multithreading is not going to work miracles automatically. If somebody multithreads in the wrong way, it will not multiply the speed.

For example, packing the output of a busy shader can speed up the program much more than months lost on CPU multithreading.

The thing is, we need a solid reason to do anything. And I haven't heard of a game that has 8000 zombies. Is there a game with even 800 zombies? Unless one makes an Accurate Battle Simulator, there is not that much to animate. Not to mention that 8K zombies are more a candidate for compute shaders than for the CPU.

Multithreading should always make the program faster, but it is rarely a must-have, and rarely worth it. Either way, I put it in last place on my list of priorities.

If somebody develops a game engine that others will run later, it makes sense to put a big effort into the engine. Programming must be used to boost productivity. If somebody is making a cutscene with 8 million zombies, maybe it is worth tricking the cutscene into showing fewer zombies rather than spending six months to a year coding it (or just showing a video instead). When somebody is developing an engine, that engine will be used frequently, and the year spent on multithreading pays off, because the users of the engine constantly benefit from the time and effort invested. For example, Intel does not offer some easy-to-add instructions because they are not popular enough. The kernel of the OS is extremely optimized, and people will later program extremely unoptimized web applications on top of it.

How much time is put into something, and how much benefit will it offer later?

(Last time I started to program a rendering engine, I started with basic geometry, then added textures, various types of shadows, various types of tessellation, various types of animated textures, and so on, and this killed my development process. I had not a single piece of code finished - only a lot of shaders planned to do things I would very rarely use. The project suffocated me, and I ran out of stamina before compiling a single thing, so I abandoned it. The project died because of badly organized priorities.)

(The modern video API of Windows will FORCE you to multithread. The people at Microsoft decided multithreading is a must-have for a video API, and they not only offer you the possibility to multithread, they force you to multithread.)

(Block ciphers are perfect for multithreading. The execution time of each thread must be the same, so the work fits perfectly onto many cores. The division of work is 50/50 - just perfect! If the best division of work you can get is 10% for one core and 90% for the other, it is not worth multithreading.)
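
A sketch of that even split, assuming a cipher mode whose blocks are independent (CTR or ECB; CBC encryption could not be divided like this) and a hypothetical encryptBlocks() helper:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

// Hypothetical: encrypts `count` independent 16-byte blocks in place.
// Valid only for a mode where blocks don't depend on each other.
void encryptBlocks(std::uint8_t* data, std::size_t count) { /* ... */ }

void encryptParallel(std::uint8_t* data, std::size_t blockCount)
{
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    const std::size_t chunk = blockCount / n; // every core gets equal work

    for (unsigned i = 0; i < n; ++i)
    {
        std::size_t begin = i * chunk;
        std::size_t count = (i + 1 == n) ? blockCount - begin : chunk;
        workers.emplace_back(encryptBlocks, data + begin * 16, count);
    }
    for (auto& w : workers) w.join();
}
```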

