
Why the sudden boom in marching cubes?

Started by
20 comments, last by Turbo14 6 years, 2 months ago
19 minutes ago, Scouting Ninja said:

It's the same problem as Euclideon

I thought the same. Another problem with Unlimited Detail was that they claimed things that were never true.

Recently I read their patent and, surprise: their algorithm is a regular octree front-to-back traversal - the same thing I've been using for occlusion culling for more than a decade, and I never assumed it to be a new invention. In fact, the only thing that's 'new' is their idea to replace the perspective divide with approximations - this made sense in the '90s, when divisions were expensive.
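For the curious, that front-to-back ordering really does boil down to a few bit tricks. A minimal sketch (names are mine, not from the patent; assumes the usual convention that child-index bit 0/1/2 means the positive half along x/y/z):

```python
def front_to_back_children(cam, center):
    """Return the 8 octree child indices ordered front to back
    as seen from camera position `cam`, for a node centered at `center`."""
    # Index of the child octant the camera lies in (the nearest child).
    near = ((cam[0] >= center[0]) * 1
          | (cam[1] >= center[1]) * 2
          | (cam[2] >= center[2]) * 4)
    # Visiting children by increasing number of axes flipped away from the
    # nearest octant yields a valid front-to-back order.
    by_popcount = [0, 1, 2, 4, 3, 5, 6, 7]
    return [near ^ flip for flip in by_popcount]
```

Recursing over children in this order is what makes front-to-back occlusion culling cheap - no sorting, just XOR.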

So: no revolutionary new algorithm, of course no unlimited detail, and no replacement for game engines.

 

Now, looking at automontage.com, I see similar claims:

'Meshes only model surfaces - a hollow and thus very incomplete approximation of reality'

What? Why process volume when all we can see is the surface?

'Mesh content creation is complex and technically demanding; costly with high barriers of entry'

Ah... so poking holes with a spherical brush is better than shaping by dragging vertices?

'Many layers of “hacks” (e.g. UVs) make editing and distributing mesh assets cumbersome'

Yep - decoupling material from surface is surely a very bad thing. It allows data to be shared and saves work, but it is complex, so it must be bad.

All their arguments are wrong and the exact opposite is true.

Personally I think polygons are a very efficient way to approximate a surface. Voxels can never get there. We can improve the efficiency of polys too, with better topology and by adding displacement mapping to get the same detail with less memory. We can even make polygons volumetric using polycubes, hexahedral remeshing, etc. - this stuff is hard and hasn't made it into games yet, but it will, and it will be more flexible and efficient than voxels in regular grids.

But that's just my personal opinion. What makes me sad is how they degrade their own good work with such ridiculous claims to attract foolish investors.

 

Back on topic: the problem with marching cubes/tetras and dual contouring is the bad topology they produce - too many vertices for too poor a shape. Hardware is powerful enough to deal with this, but we could improve here, and that's what I'm currently working on (though for a completely different purpose and use case).

So we could take the 'bad' output of those algorithms and remesh it into something good, e.g. using something like this, which is quite fast: https://github.com/wjakob/instant-meshes

Personally I have harder requirements: I need a pure-quads, low-poly approximation with as few irregular vertices as possible (the quadrangulation problem). I didn't think this could be realtime, but after implementing something close to this paper: https://www.graphics.rwth-aachen.de/media/papers/campen_sa2015_qgp_medium.pdf, I see it would probably be fast enough for user-generated ingame content. Further, this allows seamless texturing, so proper displacement mapping as well, plus smooth LOD transitions as seen in voxel engines. I see a big future for this stuff in games, even beyond the current applications where we consider marching cubes.

1 hour ago, swiftcoder said:

In practice, going from edge intersection -> marching squares -> marching cubes is dead simple, and things only get complicated when you go to make a whole world out of voxels.

I took a shortcut with the whole intersection thing. Pre-made polygons can be used, and then it's just building blocks. The nice thing is that once smoothing is done, the vertices can just be scaled to make it smoother or less so.
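The edge-intersection step swiftcoder mentions is genuinely simple - a 2D (marching squares) sketch of the two core pieces, with names of my own choosing:

```python
def edge_intersect(p0, p1, v0, v1, iso=0.0):
    """Linearly interpolate where the iso-surface crosses a cell edge.
    p0/p1: 2D endpoints; v0/v1: sampled field values, assumed to
    straddle `iso`."""
    t = (iso - v0) / (v1 - v0)
    return (p0[0] + t * (p1[0] - p0[0]),
            p0[1] + t * (p1[1] - p0[1]))

def cell_case(corners, iso=0.0):
    """Bitmask of which of the 4 cell corners are inside the surface.
    The mask (0..15) indexes the marching-squares edge table."""
    return sum(1 << i for i, v in enumerate(corners) if v < iso)
```

The 3D version is the same idea with a 256-entry case table - the complexity swiftcoder refers to comes from chunking, LOD and seams, not from this core.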

1 hour ago, swiftcoder said:

and it comes around whenever people want to make "minecraft, but better", or "infinite universe via procedural generation".

Ha, so true. I used to be like that, and in some ways still am. The saddest part about voxels is that they are as optimized as it gets - no discoveries left to be made, as a voxel is literally the smallest data point there is.

1 hour ago, JoeJ said:

Recently I read their patent and, surprise: their algorithm is a regular octree front-to-back traversal - the same thing I've been using for occlusion culling

Yeah, if these people were willing to research modern rendering, they would have advanced way beyond what they are doing right now. Their claim that they can't use graphics cards is just ridiculous.

But yes, it's just culling: using bit patterns to represent atoms as a one-dimensional array, then octree culling. The only impressive thing is that they managed to get past the 1-bit color problem. Even so, it doesn't look like they got much further with the colors, which is mostly why it still looks so bad after all these years.
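That "atoms as a one-dimensional array" idea is usually done with a Morton (Z-order) curve, which linearizes the 3D grid so that octree nodes map to contiguous ranges of bits. A sketch under that assumption (function names are mine):

```python
def morton3(x, y, z):
    """Interleave the bits of x, y, z into one Morton index, so a 3D
    occupancy grid becomes a flat bit array in octree traversal order."""
    code = 0
    for bit in range(10):  # supports coordinates up to 1023
        code |= ((x >> bit) & 1) << (3 * bit)
        code |= ((y >> bit) & 1) << (3 * bit + 1)
        code |= ((z >> bit) & 1) << (3 * bit + 2)
    return code

def set_voxel(bits, x, y, z):
    """Mark one 'atom' occupied: one bit per voxel in a bytearray."""
    i = morton3(x, y, z)
    bits[i >> 3] |= 1 << (i & 7)
```

Note how this packs only occupancy - one bit per voxel - which is exactly why attaching decent color data is the hard part.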

1 hour ago, JoeJ said:

Back on topic: the problem with marching cubes/tetras and dual contouring is the bad topology they produce.

Like I mentioned before, I cheat this stuff. Using a 3D tool to make this is no different from drawing a curve on paper and calculating all the points. Exporting and importing the mesh is no different from typing in a formula.

2 hours ago, JoeJ said:

Back on topic: the problem with marching cubes/tetras and dual contouring is the bad topology they produce - too many vertices for too poor a shape.

They don't have to produce bad topologies. That's mostly a side-effect of people running these algorithms on a regular grid. If you run on an unconstrained octree, collapse as many nodes as possible, and then dual-optimise the node placement on the output side, you can get very minimal topologies.
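The "dual-optimise the node placement" step can be sketched very compactly. Proper dual contouring minimizes a quadric error function built from the crossing normals (which is what recovers sharp features); the common cheap fallback shown here is the "mass point" - just the mean of the cell's sign-change edge intersections:

```python
def dual_vertex(edge_crossings):
    """Place a cell's dual vertex at the mean of its sign-change edge
    intersections. This is the mass-point fallback; real dual contouring
    solves a QEF over the crossing normals instead."""
    n = len(edge_crossings)
    return tuple(sum(p[i] for p in edge_crossings) / n for i in range(3))
```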

Or you can move to an extraction algorithm designed to produce clean topology, like Adaptive Skeleton Climbing.

Folks interested in this space should take a look at some of the eye candy Lin has been producing lately :)

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

I'd say the boom in Marching Cubes was somewhere around the time Minecraft was released. :)

PS. I just read the initial post fully, and you actually mention this, haha. Sorry - I just looked at the topic title and thought I'd drop a clever remark.

I think it never really ceased; it just comes in waves, whenever people create more "infinite, procedural & voxel terrains". Surface extraction from such volume data was always an interesting topic, given how it innovated working with terrain: finally having a real 3D surface instead of faked heightmaps with artificial holes and added geometry for caves/overhangs. I'm surprised this is not a hot topic for next-gen and everyone still sticks to heightmaps. I think CryEngine had a voxel editor for terrain, but I'm not sure which games used it. Such terrain provides much more interesting features, but it has a lot of problem areas - generating LODs, texturing - so I can see why it may not be considered real competition for heightmaps, which are much, much easier to deal with.

I personally stick to Surface Nets, which work more like wrapping a cloth around a rougher surface, but they generate good enough terrain features and work on a binary data volume instead of a density field.
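That "wrapping a cloth" behaviour comes from a relaxation pass: each surface vertex starts at its cell center and is repeatedly pulled toward the average of its neighbours. A sketch of just that pass (names are mine; a real implementation would also clamp each vertex to its own cell, omitted here for brevity):

```python
def surface_nets_relax(verts, neighbours, iterations=10, alpha=0.5):
    """Smooth surface-net vertices toward their neighbours' average.
    verts: list of (x, y, z) tuples; neighbours: list of index lists."""
    for _ in range(iterations):
        new = []
        for i, v in enumerate(verts):
            ns = neighbours[i]
            if not ns:
                new.append(v)  # isolated vertex: leave in place
                continue
            avg = tuple(sum(verts[j][k] for j in ns) / len(ns)
                        for k in range(3))
            # Move a fraction `alpha` toward the neighbour average.
            new.append(tuple(v[k] + alpha * (avg[k] - v[k])
                             for k in range(3)))
        verts = new
    return verts
```

Because only connectivity and binary occupancy are needed, this works fine without a density field - which is exactly its appeal.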


Where are we and when are we and who are we?
How many people in how many places at how many times?
2 hours ago, swiftcoder said:

They don't have to produce bad topologies. That's mostly a side-effect of people running these algorithms on a regular grid. If you run on an unconstrained octree, collapse as many nodes as possible, and then dual-optimise the node placement on the output side, you can get very minimal topologies.

Yeah, but unfortunately for my needs there still remain too many singularities. I'm interested, more or less, in object space lighting. This project may be the first that really shows the benefits of the idea:

See how this guy (probably) stores ray traced results in textures, and instead of denoising them he just blurs with neighbouring texels to turn a sharp reflection into a glossy one, or a hard shadow into a soft one. (If I understand him correctly.)

Having a mesh that primarily consists of regular quads helps here, as we can build a seamless UV map to keep neighbour sampling efficient across UV seams. We can build this UV map on the original mesh as well, so it's not necessary to modify detailed geometry like characters or guns. Here the quadrangulation is just an intermediate step to get seamless UVs.
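The blur-instead-of-denoise idea can be sketched as a roughness-driven box filter over a lighting texture. This toy version works on a plain row-major array and ignores the seam handling that a seamless UV map would provide (all names and the radius scale are my own invention):

```python
def roughness_blur(tex, w, h, roughness):
    """Blur a row-major lighting texture; larger roughness = wider kernel,
    turning a sharp reflection into a glossy one."""
    r = max(0, int(roughness * 3))  # kernel radius from roughness (arbitrary scale)
    out = [0.0] * (w * h)
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < w and 0 <= ny < h:
                        acc += tex[ny * w + nx]
                        n += 1
            out[y * w + x] = acc / n
    return out
```

The seam problem is visible right in the bounds check: at a UV chart border the kernel simply clips, which is why a seamless parameterization (or the quadrangulation above) matters.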

But for background geometry, using the quadrangulation directly enables really easy LODing: the low-poly quadrangulation is the base level, and we subdivide to get back the detail from the original mesh, possibly in combination with geometry images, displacement, even screen space displacement, volumetric voxels on a surface shell, whatever...

We would end up with a new, more efficient form of geometry with many applications. (I think the real reason displacement mapping never really took off is that seams become unacceptable, so you can use it only on things like height maps, or stitch the holes with inefficient hacks.)

That's quite off topic, but may be worth mentioning :)

 

Edit: I got him wrong - the mentioned project does not work in texture space, so there's no need for global parameterization there. (But the argument holds for upcoming techniques that do, like mine.)

17 hours ago, swiftcoder said:

They don't have to produce bad topologies. That's mostly a side-effect of people running these algorithms on a regular grid.

Fantastic links. Moving into octrees is the logical next step. However, most games for now seem happy with just a grid, and the grid is a better way for new developers to learn.

Has anyone used a system like this in a game yet?

17 hours ago, noizex said:

I'm surprised this is not a hot topic for next-gen and everyone sticks to heightmaps still.

This is actually common. Replacing a commonly used element in game design requires the new thing to be at least 50% better, because it requires engines to be rebuilt and teams retrained. Right now there are lots of improvements that developers know exist but don't implement, because they're not that much better.

17 hours ago, noizex said:

I personally stick to Surface Nets, which work more like wrapping a cloth around a rougher surface

This is actually what Unreal's heightmap also does, although only top-down. It's also why it keeps so much quality, where Unity's terrain feels like it collapses as you zoom out, making it difficult to work on the terrain from a distance and to keep a sense of scale.

16 hours ago, JoeJ said:

I'm interested more or less in object space lighting.

The holy grail of voxels. The way voxels can pass data around and mix it precisely will probably be the secret to unlocking real lighting and reflections. Not to mention providing new concepts for developers to play with.

The link you provided shows some amazing results. 

13 minutes ago, Scouting Ninja said:

Has anyone used a system like this in a game yet?

It's hard to tell. A lot of commercial games are very cagey about the extraction algorithms.

17 hours ago, noizex said:

I'm surprised this is not a hot topic for next-gen and everyone still sticks to heightmaps. I think CryEngine had a voxel editor for terrain, but I'm not sure which games used it.

C4 engine also had a voxel editor. Baking voxel editors out to meshes is a reasonably common workflow for complex terrains.


1 hour ago, Scouting Ninja said:
18 hours ago, JoeJ said:

I'm interested more or less in object space lighting.

The holy grail of voxels. The way voxels can pass data around and mix it precisely will probably be the secret to unlocking real lighting and reflections. Not to mention providing new concepts for developers to play with.

But this has already happened: voxel cone tracing :) ...and it turned out not to be the real solution for realtime GI. The problem is that for realtime GI you currently need aggressive LOD, and world-grid-aligned voxels quickly become too poor an approximation. The way voxels let you pass data between neighbours is great, but the same applies to quadrangulations with textures, while being much more memory and runtime efficient. So the only generic use case for realtime voxels I would agree on is very diffuse geometry, whatever that could be. A thin shell of voxels over polygons is something I've been thinking about forever... maybe I'll get a chance to try it in the far future... :)

 

18 hours ago, JoeJ said:

Yeah, but unfortunately for my needs there still remain too many singularities. I'm interested, more or less, in object space lighting. This project may be the first that really shows the benefits of the idea:

See how this guy (probably) stores ray traced results in textures, and instead of denoising them he just blurs with neighbouring texels to turn a sharp reflection into a glossy one, or a hard shadow into a soft one. (If I understand him correctly.)


I'm basically doing the same thing as the engine you mentioned, except my ray trace is on the CPU and has fewer features atm. I didn't know about this engine until today.

My trace image gets blurred out at more than a dozen meters since it's low resolution, though... If I move my ray trace to the GPU, I think it would solve this problem, since the traced image would be higher resolution.

3 minutes ago, Turbo14 said:

I'm basically doing the same thing as the engine you mentioned, except my ray trace is on the CPU and has fewer features atm. I didn't know about this engine until today.

My trace image gets blurred out at more than a dozen meters since it's low resolution, though... If I move my ray trace to the GPU, I think it would solve this problem, since the traced image would be higher resolution.

I saw it in the other thread and was wondering how it works - now I get the idea :)

In your final video there's a noticeable detachment of shadows on camera rotation. You could fight this with temporal reprojection, but it's one more problem that would not arise if you worked in texture space instead of screen space. Another advantage is the possibility of updating only a fraction of texels per frame - all those things are much harder in screen space. However, moving from screen space to texture space is a really big challenge, no matter whether we talk about rasterization or ray tracing. I probably won't dare to do it myself for anything other than GI samples, where I already have it...
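The "update only a fraction of texels per frame" trick is worth spelling out, since it's what makes texture-space shading affordable. A minimal round-robin scheduler (names and the 10% default are my own; real schedulers typically prioritize recently visible texels instead):

```python
def texels_for_frame(frame, total_texels, fraction=0.1):
    """Round-robin schedule: return the texel indices to re-trace this
    frame so that every texel is refreshed once per 1/fraction frames."""
    n = max(1, int(total_texels * fraction))
    start = (frame * n) % total_texels
    return [(start + i) % total_texels for i in range(n)]
```

Because shading lives in the texture rather than the framebuffer, stale texels stay plausible under camera motion - which is exactly why this amortization is so much harder to pull off in screen space.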
