
Depth impostors - does anybody use them?

Started by
10 comments, last by Frantic PonE 3 years, 7 months ago

There are older papers available that mention depth impostors. But I've never seen a video of one. Does anyone use them? How do they look?

A depth impostor is a billboard impostor that has depth info for each pixel. Each pixel thus has a position in 3D space. So when it's composited into the output image, that position is transformed into screen depth space and used for Z-buffering. This makes impostors depth sort properly, and the GPU does most of the work.
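
The per-pixel compositing described above can be sketched on the CPU like this (in a real renderer this would be a fragment shader writing to the depth attachment; all names here are hypothetical). Each texel carries a signed offset along the billboard normal, which reconstructs a view-space depth that goes through the ordinary Z-test:

```rust
// Minimal sketch of compositing one depth-impostor texel into a Z-buffer.
// The impostor stores, per texel, a color and an offset along the billboard
// normal; the offset turns the flat texel into a 3D point, which is then
// depth-tested like ordinary geometry.

#[derive(Clone, Copy)]
struct Texel {
    color: [u8; 3],
    depth_offset: f32, // signed offset along the billboard normal, in meters
}

struct DepthBuffer {
    z: Vec<f32>, // view-space depth per pixel; f32::INFINITY = empty
    color: Vec<[u8; 3]>,
}

/// Composite one impostor texel. `billboard_z` is the view-space depth of the
/// billboard plane at this pixel; the stored offset shifts it per texel.
fn composite_texel(buf: &mut DepthBuffer, pixel: usize, billboard_z: f32, t: Texel) {
    let z = billboard_z + t.depth_offset; // reconstructed view-space depth
    if z < buf.z[pixel] {
        // standard less-than depth test: impostors sort against everything
        buf.z[pixel] = z;
        buf.color[pixel] = t.color;
    }
}

fn main() {
    let mut buf = DepthBuffer { z: vec![f32::INFINITY; 4], color: vec![[0; 3]; 4] };
    // A texel 2 m behind a plane at 100 m lands at z = 102.
    composite_texel(&mut buf, 0, 100.0, Texel { color: [255, 0, 0], depth_offset: 2.0 });
    // A nearer texel from another impostor wins the depth test.
    composite_texel(&mut buf, 0, 90.0, Texel { color: [0, 255, 0], depth_offset: 0.0 });
    assert_eq!(buf.z[0], 90.0);
    assert_eq!(buf.color[0], [0, 255, 0]);
    println!("ok");
}
```

Because the depth test is the GPU's normal one, impostors interleave correctly with live geometry and with each other, which is the whole point.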

It's not a perfect illusion. Viewed head-on, the billboard is exact, but as you move off-axis, before switching to the next billboard, there will be some parallax error.
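
As a back-of-envelope check of how bad that off-axis error gets (the numbers are my own, not from the post): a surface point sitting `depth` meters off the billboard plane should slide by `depth * tan(theta)` along the plane as the view goes `theta` off the normal, but the flat image doesn't slide, so that whole amount becomes error.

```rust
// Rough parallax-error estimate for a depth impostor viewed off-axis.
// A point `depth_off_plane` meters off the billboard plane would need to
// shift by depth * tan(theta) on the plane; a static image can't, so this
// is the world-space error.

fn parallax_error_m(depth_off_plane: f32, view_angle_deg: f32) -> f32 {
    depth_off_plane * view_angle_deg.to_radians().tan()
}

fn main() {
    // A point 4 m behind the plane, viewed 10 degrees off-axis:
    let err = parallax_error_m(4.0, 10.0);
    // ~0.7 m of world-space error; at 500 m that subtends a tiny angle,
    // which is why this works for distant scenery but not close up.
    assert!((err - 0.705).abs() < 0.01);
    println!("error ≈ {:.2} m", err);
}
```

This is also why the switch to the next billboard has to happen before the view angle gets large.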

This seems like a good idea that isn't used. I'm thinking of this as a technique for distant areas of big worlds. A city where each distant block is represented by one impostor, for example.

Anyone been down this road?

Ref: https://www.researchgate.net/profile/Michael_Wimmer4/publication/220853057_Point-Based_Impostors_for_Real-Time_Visualization/links/57565b8c08ae155a87b9d296/Point-Based-Impostors-for-Real-Time-Visualization.pdf


FWIW, the old game Startopia used this. It re-calculated at most X pixels worth of impostors per frame to limit the amount of work done. As long as you didn't move too fast, the effect was pretty good, and performance was smooth.
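
That budgeting scheme can be sketched roughly like this (a hedged guess at the mechanism; the names, cost model, and budget value are illustrative, not from the game). Each frame, stale impostors are refreshed oldest-first until the pixel budget runs out:

```rust
// Sketch of a Startopia-style budgeted impostor refresh: each frame,
// re-render stale impostors oldest-first until a pixel budget is spent.

struct Impostor {
    id: u32,
    pixels: u32,      // cost to re-render this impostor
    last_update: u64, // frame number of last refresh
}

/// Returns the ids refreshed this frame, staying within `budget` pixels.
fn refresh_under_budget(impostors: &mut [Impostor], frame: u64, budget: u32) -> Vec<u32> {
    // Stalest first, so nothing starves indefinitely.
    impostors.sort_by_key(|i| i.last_update);
    let mut spent = 0;
    let mut refreshed = Vec::new();
    for imp in impostors.iter_mut() {
        if spent + imp.pixels > budget {
            break;
        }
        spent += imp.pixels;
        imp.last_update = frame; // this is where the re-render would happen
        refreshed.push(imp.id);
    }
    refreshed
}

fn main() {
    let mut imps = vec![
        Impostor { id: 1, pixels: 60_000, last_update: 10 },
        Impostor { id: 2, pixels: 50_000, last_update: 3 },
        Impostor { id: 3, pixels: 50_000, last_update: 7 },
    ];
    // With a 100k-pixel budget, the two stalest impostors (2, then 3) fit; 1 waits.
    let done = refresh_under_budget(&mut imps, 11, 100_000);
    assert_eq!(done, vec![2, 3]);
    println!("refreshed: {:?}", done);
}
```

The "don't move too fast" caveat falls out of this directly: fast camera motion invalidates impostors faster than the budget can refresh them.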

With modern engines, using all kinds of re-projected geometry for shadows, reflections, etc., as well as about 16 render targets in the G-buffer, this technique becomes less useful. You can't "rotate" an impostor to cast correct shadows, so you'd need one impostor per shadow-casting light source, and pretty soon it's just faster to render the geometry normally.

enum Bool { True, False, FileNotFound };

Thanks. I watched a video of the old Startopia, from 2001. They were using this way too close, in indoor situations as close as 10m away. That's pushing this approach too hard.

I want to use this for outdoor settings 50m to 1km away, for distant stuff, as in rendering a city.

Metro Exodus uses cached billboards for distant objects - I assume they also use depth.
It should be easy to find if you Google for a frame breakdown / analysis…

hplus0603 said:
this technique becomes less useful. You can't "rotate" an impostor to cast correct shadows

Yeah, but if we move to ray-traced shadows, this argument no longer holds, and the idea remains interesting.

I've had some crazy thoughts about an impostor renderer: divide the surface into smaller patches, with an impostor for each.
Then we could get all the advantages of object-space shading but with much less complexity. We could cache the lighting AND the geometry rendering over several frames in one go.
Maybe it's worth trying out… I could generate the data easily, but I'm not sure about fixing cracks or aliasing in the final image.

Divide surface into smaller patches, impostor for each.

That's what I'm thinking. Divide the world into tiles, maybe 32x32 meters. Generate a depth billboard impostor for each. Near tiles get a full 3D render, but far tiles use the billboard. Shadows, lighting, and parallax may be a little off in the distance, but you probably won't notice.
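
The tile scheme above reduces to a simple per-tile LOD decision. A minimal sketch, assuming 32 m square tiles on the ground plane and a made-up cutoff distance (both would be tuning parameters):

```rust
// Distance-based render-path selection for 32 m world tiles: near tiles
// get full geometry, far tiles get the depth billboard.

const TILE_SIZE: f32 = 32.0;
const IMPOSTOR_CUTOFF: f32 = 150.0; // beyond this, draw the depth billboard

#[derive(Debug, PartialEq)]
enum TileLod {
    FullGeometry,
    DepthImpostor,
}

/// Pick a render path for the tile at integer grid coords, given the camera
/// position in world space (tiles lie on the XZ plane).
fn tile_lod(camera: [f32; 3], tile_x: i32, tile_z: i32) -> TileLod {
    // Distance from the camera to the tile center.
    let cx = (tile_x as f32 + 0.5) * TILE_SIZE;
    let cz = (tile_z as f32 + 0.5) * TILE_SIZE;
    let (dx, dz) = (cx - camera[0], cz - camera[2]);
    if (dx * dx + dz * dz).sqrt() <= IMPOSTOR_CUTOFF {
        TileLod::FullGeometry
    } else {
        TileLod::DepthImpostor
    }
}

fn main() {
    // Camera at the origin: the local tile renders fully, a far tile does not.
    assert_eq!(tile_lod([0.0; 3], 0, 0), TileLod::FullGeometry);
    assert_eq!(tile_lod([0.0; 3], 20, 0), TileLod::DepthImpostor);
    println!("ok");
}
```

In practice you'd want hysteresis around the cutoff so tiles don't flicker between paths as the camera hovers near the boundary.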

Metro Exodus

The 2019 version, or something from the past?

That's what I'm thinking. Divide the world into tiles, maybe 32x32 meters. Generate a depth billboard impostor for each. Near tiles get a full 3D render, but far tiles use the billboard. Shadows, lighting, and parallax may be a little off in the distance, but you probably won't notice.

My plan is smaller tiles and hierarchies for LOD, but the problem remains the same. If we imagine making impostor patches for a sphere model, then rotating it slowly, we get cracks, and impostor reprojection trickery might not be good enough to merge the surface properly.

To limit such issues, you surely want to divide per object, so there is never a seam across a connected surface. And I guess for Second Life geometry that's quite practical, because it's all individual models. But I have no idea if this works for architecture, and it certainly wouldn't for a big, close model like terrain.

IIRC, UE has an advanced impostor system. It works not only for distant facades but also for walkable geometry. I remember a video with a landscape, rocks on a hill, and a house on top of the rocks. The rocks and house were impostor(s?), and the transition when walking up in first person was not visible. So maybe you can run a test in UE / ask on their forums for actual experience.

I mean Metro from 2019, yes.

It works, but I'd only go that route if you have an actual bottleneck or if you're trying to learn.

I've used similar on projects, but also from a decade ago and not on anything current. In various projects we used proxies of clusters of complex objects as billboards, and also used depth images coupled with 2D images. (The latter technique allows seamlessly mixing 2D paintings into 3D worlds with full depth information.) What you describe is effectively the age-old practice of swapping out geometry with high/low/billboard quality. The big difference is that instead of a pre-built asset you're making a snapshot for your billboard.

But is this really an issue worth spending your time on for this game? Most games design around it: level design ensures you can never see those things in the background, or players are put in scenarios where the background doesn't need to be the full world. Alternatively, worlds are filled sufficiently that it's fine. You can get several kilometers' worth of instanced content on modern cards with well-designed, visually appealing worlds.

@frob OK. A bit more background on what I'm doing.

I'm writing a new viewer prototype for Second Life, in Rust. The existing viewers, which are open source, are mostly built on a 20-year-old C++ code base, and the rendering is 20-year-old technology. It's OpenGL, with too many draw calls, and is single-thread compute-bound. It's common to shorten the viewing distance to small values, like 50 meters, to keep the frame rate above single digits. This is why Second Life has a reputation for being very sluggish, and is a total turn-off for people used to AAA-title performance.

So I'm starting over in Rust, using Vulkan, WGPU and rend3 for output. As a proof of concept, I'm making a basic visual output side, concentrating on rendering large numbers of static objects. Improved graphics technology and multi-threading will help, but it's not enough. There are whole high-detail cities, and they need to be dealt with somehow.

Second Life content is generated by users and is often highly detailed. The lower levels of detail tend to be of poor quality; they often have holes in them because they are generated by a terrible mesh decimator. Unlike games, there's no phase where content gets optimized down to low-poly form. So I want to work mostly from the higher-level-of-detail info, but impostor it.

I'm stuck with that content. I don't get to change that. So I have to deal with a huge content load. Hence the need for impostors for the more distant objects.

The general idea is to sort the stationary objects from the moving ones, and impostor all the stationary objects with big, building-sized impostors. Draw moving objects normally. The viewer has no idea what can potentially move; that's up to the users. Regenerate the impostors if something stationary starts moving, and demote moving objects to stationary if they stay put for 15 seconds or so.
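
That promote/demote bookkeeping is a small per-object state machine. A sketch under the assumptions stated above (the ~15 s timer from the post; all type and function names are hypothetical):

```rust
// Stationary/moving classification for impostoring: any object that moves
// is promoted to live geometry (invalidating its impostor); an object that
// holds still for ~15 s is demoted back into the impostor bake.

const DEMOTE_AFTER_SECS: f64 = 15.0;

#[derive(Debug, PartialEq)]
enum RenderClass {
    Impostored, // baked into a building-sized impostor
    Dynamic,    // drawn as live geometry every frame
}

struct TrackedObject {
    class: RenderClass,
    last_moved_at: f64, // seconds, from some monotonic clock
    position: [f32; 3],
}

impl TrackedObject {
    /// Call once per frame. Returns true if the impostor covering this
    /// object must be regenerated (object left or re-entered the bake).
    fn update(&mut self, now: f64, new_position: [f32; 3]) -> bool {
        if new_position != self.position {
            self.position = new_position;
            self.last_moved_at = now;
            if self.class == RenderClass::Impostored {
                self.class = RenderClass::Dynamic; // promote; rebake without it
                return true;
            }
        } else if self.class == RenderClass::Dynamic
            && now - self.last_moved_at >= DEMOTE_AFTER_SECS
        {
            self.class = RenderClass::Impostored; // demote; bake it back in
            return true;
        }
        false
    }
}

fn main() {
    let mut obj = TrackedObject {
        class: RenderClass::Impostored,
        last_moved_at: 0.0,
        position: [0.0; 3],
    };
    assert!(obj.update(1.0, [1.0, 0.0, 0.0])); // moved: rebake needed
    assert_eq!(obj.class, RenderClass::Dynamic);
    assert!(!obj.update(10.0, [1.0, 0.0, 0.0])); // still waiting out the timer
    assert!(obj.update(16.5, [1.0, 0.0, 0.0])); // 15.5 s idle: demote
    assert_eq!(obj.class, RenderClass::Impostored);
    println!("ok");
}
```

Batching the resulting rebakes per impostor tile (rather than rebaking once per object event) would keep the cost bounded when many objects settle at once.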

Every building and ship has interior detail. FPS is 5-10. Half that with shadows on. Great content, slow rendering system.

Nagle said:
But I've never seen a video of one.

You will see them from 1:48.

Enjoy!

