
Is this type of RTS possible? (polygon count)

Started by
47 comments, last by Newgamemodder 3 years, 2 months ago

Newgamemodder said:

Confused over the 4 MB/s part - do you mean the bus size would have to be that? A modern GPU can run at 936 GB/s. Or is that 4 MB/s an online-specific thing?

It's for network transfers between client and server: a lot for mediocre network connection, and a serious challenge for reliable low-latency transmission.

Omae Wa Mou Shindeiru


I assume that it doesn't apply to me since I won't be using online multiplayer? (only vs. the computer).

What I don't understand is that the PS5 is rumored to be weaker than an RTX 2060 or 2070. If it's weaker and can (theoretically) push 8 billion polygons per second, shouldn't the 3090 or 4000 series be able to handle that polygon count or more?

A GPU can throw an effectively infinite amount of geometry and fragments at a frame, but to finish rendering that frame in an appropriate fraction of a second, before your game skips frames or slows down, you need to respect a strict performance budget. Non-realtime applications, like CAD, can invest seconds, minutes, or even hours and days to render one frame, but a game must sacrifice detail and powerful, general, elegant algorithms (like raytracing a complex scene) to ensure performance.
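As a back-of-envelope sketch of that budget (all numbers here are illustrative, not measured): at 60 FPS a frame has about 16.7 ms, and rendering only gets a slice of it.

```python
def triangle_budget(fps, render_share, triangles_per_second):
    """Triangles affordable per frame, given a target frame rate, the
    fraction of the frame reserved for rasterization, and a sustained
    (not peak-marketing) triangle throughput."""
    frame_time = 1.0 / fps                    # seconds per frame
    render_time = frame_time * render_share   # slice left for geometry
    return int(render_time * triangles_per_second)

# A GPU marketed at "8 billion triangles/second", with half of a 60 FPS
# frame reserved for rendering, affords roughly 66 million triangles:
budget = triangle_budget(60, 0.5, 8_000_000_000)
```

And that is before the rest of the frame's work (game logic, shading, post-processing) claims its share.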

So rendering billions of polygons per second is only a benchmarking stunt: on the one hand a practical game has a lot of other work to perform, and on the other hand you must utilize much less than 100% of powerful hardware to ensure your game doesn't stutter on any reasonable machine.

Your design should start from units, environments, and the expensive computations (like AI planning and pathfinding) that involve them; then, after you have a good grasp of what's in your game, determine how much rendering detail you can afford and what LOD management and general effort-saving techniques are suitable. For example, distant units (below a certain number of screen pixels) could be simplified to transparent sprites without shadows (they are going to look like indistinct moving particles in any case), and if the camera fills a large portion of the screen with a few close units, it could be worthwhile to invest in a depth buffer and front-to-back sorting, in order to cull the geometry behind large, detailed foreground objects as efficiently as possible.
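To make the "distant units become sprites" idea concrete, here is a hypothetical LOD picker driven by projected screen size; the function name, tiers, and pixel thresholds are all made up for illustration.

```python
import math

def choose_lod(distance, object_radius, screen_height_px, vertical_fov_deg):
    """Pick a LOD tier from the object's approximate on-screen height."""
    fov = math.radians(vertical_fov_deg)
    # Projected height in pixels of a sphere of the given radius.
    pixels = (2 * object_radius / (2 * distance * math.tan(fov / 2))) * screen_height_px
    if pixels < 8:
        return "sprite"   # indistinct speck: flat impostor, no shadow
    elif pixels < 64:
        return "low"
    elif pixels < 256:
        return "medium"
    return "high"
```

In a real engine the thresholds would be tuned per asset, and hysteresis added so units don't flicker between tiers at the boundary.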

Omae Wa Mou Shindeiru

Newgamemodder said:
What i don't understand is the ps5 is rumored to be weaker than a RTX 2060 or 2070.

Such claims are never useful, because you cannot compare ‘averaged’ performance across different architectures. It depends more on the benchmarks you choose than on anything else.

I would guess the PS5 has better compute and raster performance than an RTX 2060/2070, but worse raytracing, due to AMD not having fixed-function traversal hardware (which on the other hand opens up flexibility options that may eventually compensate, so we have to wait until devs get used to it and max it out).

shouldn't the 3090 or 4000 series be able to run the polygon count or more?

The future is hard to predict, especially now that rasterization hardware shows weakness with very high detail. That's why Epic uses software rasterization in GPU compute for small triangles in UE5.

However, the trend seems to be toward subpixel detail, and you do not have to worry about how exactly this will be done (hardware or software).
And we'll get there with current hardware already, according to Epic.
But we will never have multiple triangles per pixel - that's just wasted performance and causes aliasing for no benefit. A LOD system must scale geometry to pixel resolution or higher, but never lower.
This means your huge polygon numbers from the opening post are possible, but the engine will scale them down to what can be displayed and processed.
So you can have all that detail and data (assuming heavy instancing), and you can use those numbers to calculate things like ‘having X units gives me Y triangles’. They just won't all show on screen at any one time, which is not your problem.
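A tiny sketch of that ‘X units gives me Y triangles’ bookkeeping, with hypothetical per-LOD triangle counts: the source assets may total hundreds of millions of triangles, but what is actually submitted per frame after LOD selection is far smaller.

```python
def submitted_triangles(units_by_lod, tris_per_lod):
    """Triangles actually sent to the GPU this frame, given how many
    unit instances fall into each LOD bucket."""
    return sum(count * tris_per_lod[lod] for lod, count in units_by_lod.items())

# Hypothetical per-tier triangle counts for one character mesh:
tris_per_lod = {"high": 100_000, "medium": 10_000, "low": 1_000, "sprite": 2}

# 3000 units on the map, but only a handful are close to the camera:
frame = {"high": 20, "medium": 200, "low": 800, "sprite": 1980}
drawn = submitted_triangles(frame, tris_per_lod)
# Source-asset total would be 3000 * 100_000 = 300 million triangles;
# the per-frame submission above is under 5 million.
```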

What I'm trying to say is: if the highest LOD of a character model has 100k triangles, this does not mean all (or any) instances of that mesh render all of them. With that in mind, your question about maximum polygon numbers becomes somewhat irrelevant in practice. But of course we'll always have limits, and they depend more on the engine and game than on the hardware.

Newgamemodder said:
EDIT: Confused over the 4 MB/s part - do you mean the bus size would have to be that? A modern GPU can run at 936 GB/s. Or is that 4 MB/s an online-specific thing? What I don't understand is: if the rumors of the PS5 being able to run (theoretically) 8 billion polygons per second are true, can't a PC run more/better?

My commentary was related to my own experience working with RTS game prototypes.

The 4 MB/s is related to network transmission in the multiplayer case (in a client-server approach); it would pretty much be either “heavily optimize the networking” or literally a “no-go”. In single-player-only scenarios, this will not concern you at all.
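For context, a naive estimate of where a figure in that ballpark can come from (the unit count, state size, and tick rate below are assumptions for illustration, not measurements):

```python
def bandwidth_bytes_per_second(units, bytes_per_unit_update, ticks_per_second):
    """Per-client bandwidth if the server streams every unit's full state
    every tick - no delta compression, no interest management."""
    return units * bytes_per_unit_update * ticks_per_second

# 3000 units x ~40 bytes of state (position, orientation, health, action)
# at 30 ticks per second is already ~3.6 MB/s per client:
naive = bandwidth_bytes_per_second(3000, 40, 30)
```

Delta encoding, quantization, and sending each client only what it can see are the standard ways to pull that number down.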

The main thought behind my post is that you're trying to find a problem where it doesn't exist. This is not a question of 8 billion or 40 billion polygons a second. You will never hit that limit. Why?

You will hit many more problems on the game logic side - AI, pathfinding, physics, etc. - much earlier than a performance problem in rendering any number of triangles. You will hit any other rendering problem (ranging from insufficient memory to shading being too heavy) much, much earlier than a performance problem due to the number of rendered triangles. This applies unless you do anything stupid, of course (that includes rendering a 1-million-triangle character to a single pixel - you don't want to do that, for obvious reasons - which is why LOD was invented).

Let me describe an example performance problem which you will hit much earlier than rendering the characters:

Each one of the 3000 characters throws a single grenade at another in one tick. A second later all grenades explode at once. Now you need to calculate how much damage each grenade explosion did to each character. Keep in mind that you have just 16 ms to solve this whole problem - i.e. a single game tick.
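A sketch of why this blows up, and the usual fix: the brute-force version below does 3000 × 3000 = 9 million distance checks in a single tick, while a uniform spatial grid only tests the characters near each explosion. Both functions, the damage value, and all numbers are illustrative, not from any engine.

```python
from collections import defaultdict

def damage_brute_force(grenades, characters, radius):
    """O(G*C) pairwise check: every grenade against every character."""
    r2 = radius * radius
    damage = [0] * len(characters)
    for gx, gy in grenades:
        for i, (cx, cy) in enumerate(characters):
            if (gx - cx) ** 2 + (gy - cy) ** 2 <= r2:
                damage[i] += 10
    return damage

def damage_grid(grenades, characters, radius):
    """Same result via a uniform grid: each grenade tests only characters
    in its own and neighboring cells, which makes the 16 ms budget reachable."""
    cell = radius
    grid = defaultdict(list)
    for i, (cx, cy) in enumerate(characters):
        grid[(int(cx // cell), int(cy // cell))].append(i)
    r2 = radius * radius
    damage = [0] * len(characters)
    for gx, gy in grenades:
        gcx, gcy = int(gx // cell), int(gy // cell)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for i in grid[(gcx + dx, gcy + dy)]:
                    cx, cy = characters[i]
                    if (gx - cx) ** 2 + (gy - cy) ** 2 <= r2:
                        damage[i] += 10
    return damage
```

The same broad-phase idea (grids, quadtrees, sweep-and-prune) recurs in collision detection, targeting, and area-of-effect queries throughout RTS logic.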

Make sure you can handle the game logic for a large number of characters first (replace them with boxes or capsules). If it runs smoothly without any problem - then congratulations, you can start worrying about making it pretty. But if that won't run smoothly - no amount of optimization on the graphics side (like lowering the polygon count) will save it.

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

Well, I guess it's up to the game devs then to find out if it's possible? Since there is nothing I can do…

Besides, some of my calculations are probably off, because I'd rather count too much than too little. My only worry is the effect of dead bodies on the game engine.

So you mean that if the game gets released, can handle everything with its current polygon count, and I just replace the models, then billions should work? Or can polygon count still be a problem then? And what parts of the game does the CPU control?

Thanks for all the help!

Villem Otte's suggestion to prototype the game with boxes and capsules is more important than you seem to realize: your game is about units, not about polygons.

Rendering graphics is cheap unless you do something grossly naive, and simplifying assets or adopting more efficient but more complex techniques has a predictable and small impact. You should therefore focus on the complex and important aspects of your RTS, those that determine whether the game is fun (e.g. rules that imply satisfying strategy and tactics, manageable user interface, AI that is stupid enough to require skillful player direction but clever enough to avoid boring micromanagement) and are technically challenging and performance critical (e.g. pathfinding).

Omae Wa Mou Shindeiru

I know it's not only about polygons; I'm just trying to find out if there is a consensus on how to estimate VRAM and other things. Also, what does the CPU handle in strategy games? I thought AI pathing and unit numbers were handled by the CPU, and textures/shaders/polygons and physics were handled by the GPU…

Does anybody know if there is less performance cost if the characters are organized into “squads” of 9 characters each?
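Grouping tends to help on the CPU/logic side rather than in rendering (the GPU still draws the same instances either way): a squad can share one pathfinding query, with members following formation offsets. A toy count of searches per move order (the squad size and unit count are just examples):

```python
def pathfinding_calls(units, squad_size=None):
    """Pathfinding requests per move order: one per unit, or one per squad
    when members share a path and steer by formation offsets."""
    if squad_size is None:
        return units
    return -(-units // squad_size)   # ceiling division

per_unit = pathfinding_calls(3000)       # 3000 searches per order
per_squad = pathfinding_calls(3000, 9)   # 334 searches per order
```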

Usually physics (like the rest of the game logic) is also done on the CPU; otherwise you're right.

Animation of character skeletons is also done on the CPU. That's the first thing I'd eventually move to the GPU if working on a ‘massive character count’ game.

Physics on the GPU is not so attractive, because downloading the simulation results back to the CPU each frame is very slow, but physics affects gameplay, so we need those results.

There are also forms of decorative physics, e.g. particle systems that exist purely for visual impression and don't affect anything. Such things are a good fit for the GPU.

Those billions of triangles per second count only the bare triangles themselves. A triangle is just three vertices: { { 0.0, 0.0 }, { 0.0, 1.0 }, { 1.0, 1.0 } }

It doesn't count coloring, texturing, filtering, shadowing, animating, shading, post-processing, etc. The number goes down the more visual processing you add to your scene.
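A toy model of that drop-off (the cost multiplier is purely illustrative; real numbers depend entirely on the scene and hardware):

```python
def effective_triangles_per_second(peak_tps, work_per_triangle_multiplier):
    """Marketing throughput assumes bare, unshaded triangles; each extra
    pass (texturing, shadows, lighting, post) multiplies per-triangle work."""
    return peak_tps // work_per_triangle_multiplier

# 8 billion bare triangles/s, with ~20x the work per fully shaded triangle:
realistic = effective_triangles_per_second(8_000_000_000, 20)  # 400 million/s
```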

As for how many RTS units a game can handle, that depends solely on the level of detail and the optimization of the game. It can vary from dozens of units to billions (if your units are unshaded triangles :D).

Newgamemodder said:
Also what does the CPU handle in strategy games?

It depends on the game. As a rule of thumb, yes, the CPU handles AI pathing and unit logic. Not always the case, though.

