
Is it possible to rasterize this many polygons on a GPU?

Started by Newgamemodder. 46 comments, last by JoeJ 2 years, 11 months ago.

Hi again Gamedev!

I've done some more research since my last thread. I've been asking around on many different websites and forums.

I also read this post on Nvidias website:

https://developer.nvidia.com/blog/realistic-lighting-in-justice-with-mesh-shading/

Is it possible to render 1.3-1.4 billion triangles in your field of view in a game? And is the blog post above accurate in that it's rendering 1.8 billion triangles at once?

Is there any other way to make it possible to render this many triangles? Are mesh shaders enough?

Thanks!


Nanite is about reducing triangle counts: even if the model has millions of triangles, it renders only about one triangle per pixel at most.

So in practice the answer to your question is yes, but only if the game engine supports a dynamic LOD system.
UE5 is currently the only one which has this, but I assume many will adopt similar methods in the near future.
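
To put a number on that "one triangle per pixel" bound: the rendered count is capped by screen resolution, not by how dense the source meshes are. A back-of-the-envelope sketch (the only inputs are common screen resolutions; nothing here is from Nanite's actual internals):

```python
# Back-of-the-envelope: if a LOD system keeps roughly one triangle
# per pixel, screen resolution caps the rendered triangle count,
# no matter how many triangles the source meshes contain.

def visible_triangle_budget(width: int, height: int,
                            tris_per_pixel: float = 1.0) -> int:
    return int(width * height * tris_per_pixel)

print(visible_triangle_budget(1920, 1080))  # 2073600, about 2M at 1080p
print(visible_triangle_budget(3840, 2160))  # 8294400, about 8M at 4K
```

So a scene with billions of source triangles still only rasterizes a few million, if the LOD system does its job.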

… how times are changing… suddenly your requests no longer sound impractical :D

Hi JoeJ, long time no see! Thanks for the response!

Aha, so let's say I have about 192 ships at 6 million polygons each. How would I lose detail? And if I were not to use Unreal, how would it look then?

What is the difference between a “normal” LOD system and a “dynamic” one? :D

Just curious, since according to the video I watched on Justice, it takes a 2 billion triangle mesh and renders it as a 1.8 billion one :D

Yeah, feels like we are on the verge of greatness when it comes to graphics now :D

Newgamemodder said:
What is the difference between a “normal” LOD system and a “dynamic” one? :D

A traditional system uses discrete LODs: a number of models, each with half the detail of the previous one. These are often modeled manually, with tools projecting the original detail onto normal map textures.
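
A minimal sketch of such a discrete system, selecting one of several pre-made models by camera distance (mesh names and switch distances are invented for illustration):

```python
# Toy discrete LOD selection: a handful of pre-made models,
# switched by camera distance. All names and thresholds are made up.

LODS = [
    ("ship_lod0", 0.0),    # full detail, used up close
    ("ship_lod1", 50.0),   # roughly half the triangles
    ("ship_lod2", 150.0),  # roughly a quarter
    ("ship_lod3", 400.0),  # distant low-poly stand-in
]

def select_lod(distance: float) -> str:
    chosen = LODS[0][0]
    for mesh, switch_distance in LODS:
        if distance >= switch_distance:
            chosen = mesh  # keep the coarsest LOD we have passed
    return chosen
```

The downside is visible here: the whole model switches at once at each threshold, which can pop.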

A dynamic system does the reduction at runtime, so it does not require multiple models. So far it has mostly been used for terrain heightmaps.

Nanite now extends this to any geometry. It preprocesses very high poly models into a hierarchical data structure, and at runtime it only streams in the data which fits the current requirements of the projected screen size. This way memory requirements and rendering work are minimized.
Technically it solves two problems:
1. GPUs can't draw tiny triangles efficiently, so Nanite rasterizes them with compute shaders instead of the usual hardware ROPs.
2. LOD is adjusted in a fine-grained manner: it does not switch over the whole model, but locally over the surface. To achieve this they traverse the hierarchical data structure and draw variable detail levels which fit the screen nicely.
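
That fine-grained selection can be sketched like this, assuming a hypothetical cluster hierarchy where each node stores the error its simplification introduced (the projection constant and pixel threshold are invented; this is the general idea, not Nanite's actual code):

```python
# Sketch of fine-grained LOD over a cluster hierarchy: a node is
# drawn once its simplification error projects below ~1 pixel,
# otherwise we descend to its finer children. Different branches can
# stop at different depths, giving local LOD over the surface.

from dataclasses import dataclass, field

@dataclass
class Cluster:
    error: float     # world-space simplification error of this node
    distance: float  # distance to camera (precomputed for the sketch)
    children: list = field(default_factory=list)

def projected_error_px(c: Cluster, px_per_unit_at_1m: float = 1000.0) -> float:
    # crude perspective scaling of world-space error to screen pixels
    return c.error * px_per_unit_at_1m / max(c.distance, 1e-6)

def select_clusters(c: Cluster, threshold_px: float = 1.0) -> list:
    if projected_error_px(c) <= threshold_px or not c.children:
        return [c]  # error invisible at this screen size: draw as-is
    selected = []
    for child in c.children:
        selected += select_clusters(child, threshold_px)
    return selected
```

Streaming falls out of the same traversal: only the clusters the selection reaches need to be resident in memory.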

This can do exactly what you want. Just currently they do not yet support skinned characters, but they plan to add this.

Newgamemodder said:
Aha, so let's say I have about 192 ships at 6 million polygons each. How would I lose detail? And if I were not to use Unreal, how would it look then?

With UE it would just work: import the 6M poly ship and display it 192 times.
With any other engine, you need to model multiple reduced LODs of the ship, as we discussed last time.
Lacking Nanite's functionality, other engines also could not display the same ‘insane’ detail, and there might be visible popping when the LOD changes over the whole ship.

Aha, so in short it's like this?:

  • The Chinese game uses its own in-studio version of “Nanite”, without Unreal Engine 5, and does the exact same thing.

  • By “skinned” do you mean animated models with animations, or just a rigged model?

  • It's possible to have all 1.3 billion triangles in the scene ONLY with dynamic LODs + compute shaders? (Would I lose detail on the highest LODs if it uses a type of Nanite?)

I want to see a demo of Nanite without so much instancing. Sure, they have a zillion triangles, but most of them are the same object repeated. All those statues in the UE5 demo are the same, and many of the rocks are instances of standard rocks. That's the best case for the Nanite approach.

Also, simple LOD algorithms work really well on rocks. Or things which have lots of surface detail you can fade out. Things with lots of hard outside edges, like buildings, not so much.

I'd like to see a UE5 demo of a few city blocks where every building has a detailed interior and they're all different. Nanite is going to be great for Red Dead Redemption 3; GTA 6, maybe not so much.

Newgamemodder said:
By “skinned” do you mean animated models with animations, or just a rigged model?

Skinning means deformation where the surface is affected by multiple bones.
If it's just one bone per limb (robots, as seen in their latest demo), each limb is just a unique rigid object, so that works already.
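
A minimal sketch of what that multi-bone blending (linear blend skinning) looks like, with bones as plain 3x4 affine matrices; all data here is made up:

```python
# Minimal linear blend skinning sketch: each vertex is moved by a
# weighted mix of per-bone affine transforms (3x4 row-major nested
# lists). With a single bone at weight 1.0, the rigid "robot limb"
# case, it degenerates to one plain transform.

def skin_vertex(v, bones, weights):
    x, y, z = v
    out = [0.0, 0.0, 0.0]
    for w, m in zip(weights, bones):
        for i in range(3):
            out[i] += w * (m[i][0] * x + m[i][1] * y + m[i][2] * z + m[i][3])
    return out

identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
shifted  = [[1, 0, 0, 2], [0, 1, 0, 0], [0, 0, 1, 0]]  # translate +2 on x

# a vertex weighted half/half between the two bones lands halfway:
print(skin_vertex((0.0, 0.0, 0.0), [identity, shifted], [0.5, 0.5]))  # [1.0, 0.0, 0.0]
```

The hard part for a Nanite-style renderer is not this formula; it is that deformation invalidates the precomputed cluster errors the LOD hierarchy relies on.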

Newgamemodder said:
It's possible to have all 1.3 billion triangles in the scene ONLY with dynamic LODs + compute shaders? (Would I lose detail on the highest LODs if it uses a type of Nanite?)

If it were possible with traditional methods, we would have seen the detail Epic shows long before.
It's about efficiency: GPU rasterizers become underutilized with tiny triangles, so we avoid doing this to get high fps, many NPCs, etc., because we scale everything to the numbers where efficiency is good.

Newgamemodder said:
The Chinese game uses their own in-studio version of “nanite” but without Unreal engine 5 and does the exact same thing.

Personally I assume this will happen. Nanite is a simple solution solving a lot of problems. Usually if one comes up with such a thing (e.g. SSAO in Crysis), all the others adopt it quickly.
But that's just my guess. They have ongoing work on other priorities, and Nanite is still much more complex than an SSAO shader. So by ‘soon’ I mean something like within the next 5 years.

Nagle said:
I want to see a demo of Nanite without so much instancing.

If insane detail is the primary objective, that's impossible for a larger game.
The only way I see to do this is data streaming. But even then, generating all the unique content requires breakthroughs in procedural content generation, photogrammetry methods, or collaborative asset sharing across the whole industry.
…maybe it was Nanite which inspired Sweeney to those ‘Metaverse’ ideas. : )

Do you know if creating your own in-studio Nanite is hard to implement?

Also, do you have to create the LODs manually, or does the engine do it when you have dynamic LODs?

On a side note, have you seen these things? :O

https://www.youtube.com/watch?v=AVLuGOTjqnY

https://www.youtube.com/watch?v=gyGzeZaVXcw

So to summarize: should I do it the way the Chinese company did, or an in-studio Nanite?

The “should I” is up to you and where you want to spend your resources.

LOD systems have been around for years; see for example Hugues Hoppe's publications from 20 years ago. View-dependent LOD that reduces the model to roughly one vertex per pixel is mostly a data processing task these days.

While it can make sense for some fields, like physical scanning of objects with laser range finders or terrains covering thousands of miles, it makes less sense for an artist making models. After the models are made, there is a preprocessing step that creates hierarchical data for the object. Then, at display time, the system must sort through that data to dynamically compute where in the hierarchy to view, generate the model, and keep the ever-changing data updated on the video card.
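
That preprocessing step is typically built from iterative edge collapses, in the spirit of Hoppe's progressive meshes. A toy sketch that collapses the shortest edge each step (real tools rank collapses by a quadric error metric, not edge length):

```python
import math

# Toy mesh simplification via edge collapse: repeatedly merge the
# endpoints of the shortest edge at their midpoint until a target
# vertex count remains. Progressive meshes record each collapse so
# it can be undone at runtime for view-dependent refinement.

def simplify(vertices, edges, target_count):
    # vertices: {id: (x, y, z)}, edges: iterable of vertex-id pairs
    vertices = dict(vertices)
    edges = {frozenset(e) for e in edges}
    while len(vertices) > target_count and edges:
        a, b = min(edges, key=lambda e: math.dist(*[vertices[v] for v in e]))
        mid = tuple((p + q) / 2 for p, q in zip(vertices[a], vertices[b]))
        vertices[a] = mid  # endpoint a absorbs b at the midpoint
        del vertices[b]
        # rewire edges that referenced b; drop collapsed (degenerate) ones
        edges = {frozenset(a if v == b else v for v in e) for e in edges}
        edges = {e for e in edges if len(e) == 2}
    return vertices, edges
```

Running this on a square outline down to 2 vertices, for instance, merges it into a single remaining edge.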

The CPU load of doing it makes it a poor fit for games in the general case, but scientific data visualization has used it for years.

Newgamemodder said:
Do you know if creating your own in-studio Nanite is hard to implement?

It's not hard, but the game industry has become huge and bloated. Making big changes is never easy because of dependencies on established workflows and toolchains. Luckily a thing like Nanite does not really need big changes, but it's still a change affecting many details. The first question is: do we even need it? Do we want to increase costs by making content creation more expensive through increased detail everywhere? Does it pay off, or is there a better investment for us? People will make differing choices and priorities.

Newgamemodder said:
So to summarize: should I do it the way the Chinese company did, or an in-studio Nanite?

I don't know the context about a ‘Chinese company’ - you need to provide that and refine the question.

From last time I remember you want to mod existing RTS games, and you want to use detailed models.
If that's the case, you can only wait for such games to be made, and then choose one of them to mod?
This will take some time. It's a bit unlikely that games already in production would change engine just to get insane detail. Especially for a top-down RTS, where the detail level can be kept pretty constant, so dynamic LOD is not necessary or that attractive.

Newgamemodder said:
Also, do you have to create the LODs manually, or does the engine do it when you have dynamic LODs?

For discrete LODs, both are possible and common. AFAIK, GTA5 for example did all (or most) LODs manually. Other games often use automated processes, e.g. the Simplygon middleware.
However, automated processes are usually implemented on the tools side. Runtime engines do not import one model and generate n LODs automatically, so modders depend on having those tools, or need to reverse engineer them to provide proper assets the game can load. DCC tools also have reduction / remeshing options to help with generating discrete LODs.

For ‘dynamic LOD’ as Nanite does it, this is also an offline preprocess done in the editor. The game runtime then uses these preprocessed results, but a UE5 game could not import a 5M poly model and generate the necessary data structures itself. So it might work for you to use the standard UE5 editor to convert your models for use within a future UE5 game.

