
Question about lightmapping UVs

Started by
8 comments, last by MJP 2 years, 9 months ago

Hey!

I am about to implement support for lightmapping in my custom engine, and I have some questions about the UV coordinates used by each lightmapped object. For reference, I looked at how Unity handles this. It seems like Unity prefers to use the second UV map for lightmapping, if there is one. That's fine, since I can support arbitrary vertex formats in my engine. However, it is unclear how these UVs are shared (or not) between instances of an object. For example, with Unity (and many other engines) you can apparently lay out the lightmapping UVs yourself in a DCC, such as Blender. Then, when you import the model into Unity you select the option which keeps your UVs rather than generating new ones. My question is, how does this work with more than one instance of that object?

Let's say you have a rock which is used in two different places in the level. Obviously they will need to occupy different places in the light map(s). But what if they are instances of the same mesh which is imported using the “keep my precreated UVs” option? Do they simply get their own lightmap texture or are the lightmap UVs scaled down into "slots" in a lightmap texture or what?

Obviously Unity is closed source, and I don't expect you to know explicitly how it does things, but is there a “de facto standard” for this?

Thanks!


GuyWithBeard said:
Obviously Unity is closed source, and I don't expect you to know explicitly how it does things, but is there a “de facto standard” for this?

IDK, but the best compromise seems to be scaling and offsetting instance UVs, so each instance gets its own texture space. That's just a lookup by instance ID into a scale-and-offset table to resolve the indirection.
Having one texture per instance sounds impractical, as instance counts tend to be high for small objects.
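A minimal sketch of that indirection (all names here are hypothetical, just to illustrate the idea): the mesh's base lightmap UVs are shared by every instance, and each instance remaps them into its own region of the shared atlas via a per-instance scale and offset looked up by instance ID.

```python
# Hypothetical sketch: per-instance lightmap UV indirection.
# Each instance stores a (scale, offset) pair into a shared atlas;
# the mesh's base lightmap UVs are remapped at sample time.

def lightmap_uv(base_uv, instance_id, transforms):
    """Remap a mesh's base lightmap UV into the atlas region assigned
    to this instance. transforms[instance_id] is
    ((scale_u, scale_v), (offset_u, offset_v))."""
    (su, sv), (ou, ov) = transforms[instance_id]
    u, v = base_uv
    return (u * su + ou, v * sv + ov)

# Two instances of the same rock share base UVs but land in
# different regions of the atlas:
transforms = {
    0: ((0.25, 0.25), (0.0, 0.0)),   # instance 0: region at the origin
    1: ((0.25, 0.25), (0.5, 0.5)),   # instance 1: region starting at (0.5, 0.5)
}
print(lightmap_uv((0.5, 0.5), 0, transforms))  # (0.125, 0.125)
print(lightmap_uv((0.5, 0.5), 1, transforms))  # (0.625, 0.625)
```

In a real engine this multiply-add would live in the vertex or pixel shader, with the per-instance transform fetched from an instance buffer.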

JoeJ said:

GuyWithBeard said:
Obviously Unity is closed source, and I don't expect you to know explicitly how it does things, but is there a “de facto standard” for this?

IDK, but the best compromise seems to scale and offset instance UVs, so each instance gets its own texture space.

Yeah, that is what I thought as well, but then why would the lightmapper not just use the normal UVs of the object? Why do they need separate UV coordinates for the lightmap at all?

Is it just to have an unwrapping that is optimal for lightmaps, as opposed to normal texturing?

GuyWithBeard said:
Why do they need separate UV coordinates for the lightmap at all?

Because the lighting differs for each instance. One column is in shadow, the other is in sunlight.
Thus each copy of the same object has unique data which breaks our strict instancing.
We're left with two options: Either have a unique texture per instance but have common UVs, or make unique UVs but use one large texture for all.

Texturing of an object will often map the same texels to more than one triangle, and sometimes tile the same texture more than once in UV space. That doesn't work at all for light maps; they need a unique parameterization.

Also, light maps generally need gutters between charts to avoid “bleeding” when viewing objects from further away (higher-numbered/lower-resolution MIP levels). It's not uncommon to define an 8x8 texel grid for light maps and put vertices in the center of those grid cells, so every texture coordinate is (8*N+4)/R for resolution R and some integer N. Meanwhile, for regular objects, you typically want UV charts to line up perfectly to create seamless wrapping/tiling.
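The (8*N+4)/R rule above can be sketched as a snapping function (this is just an illustration of the formula, not any particular engine's implementation):

```python
# Sketch of the grid-snapping rule described above: snap one UV
# component to the center of the nearest 8x8-texel cell, i.e. to
# (8*N + 4) / R for an R-texel light map.

def snap_to_grid_center(uv_coord, resolution, cell=8):
    """Snap one UV component to the nearest cell center."""
    texel = uv_coord * resolution               # position in texels
    n = round((texel - cell / 2) / cell)        # nearest cell index N
    return (cell * n + cell / 2) / resolution   # (8*N + 4) / R

# For a 256-texel light map, 0.3 snaps to the cell center at texel 76:
print(snap_to_grid_center(0.3, 256))   # 0.296875  (= 76/256)
```

Snapped coordinates are fixed points of the function, so re-snapping a vertex is harmless.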

enum Bool { True, False, FileNotFound };

JoeJ said:

GuyWithBeard said:
Why do they need separate UV coordinates for the lightmap at all?

Because the lighting differs for each instance. One column is in shadow, the other is in sunlight.
Thus each copy of the same object has unique data which breaks our strict instancing.
We're left with two options: Either have a unique texture per instance but have common UVs, or make unique UVs but use one large texture for all.

Right, I might have been a bit unclear in my question. By “separate UV coordinates” I meant separate coordinates in the actual vertex data, but I now realize that I don't need that. I can just apply an instance offset in the shader before sampling the lightmap.

hplus0603 said:

Texturing of an object will often map the same texels to more than one triangle, and sometimes tile the same texture more than once in UV space. That doesn't work at all for light maps; they need a unique parameterization.

That makes sense. I don't really have overlapping UV islands or use tiling (except for the terrain) all that much, so I hadn't thought of that.

Thanks for your input!

Going even further, UV coordinates are the basics, really from the rendering models of the 1970s. Better systems today have UV and ST coordinates, plus often spherical harmonics data and SH coordinates for more advanced lighting. Then you can get into physically based rendering (which the major engines all support) for even more complex but visually compelling scenes.

Mostly the art tools handle it, and the data is fed through shaders to do the heavy lifting. Lighting cannot be instanced, so each object gets its own data processed by the shader.

frob said:

Going even further, UV coordinates are the basics, really from the rendering models of the 1970s. Better systems today have UV and ST coordinates, plus often spherical harmonics data and SH coordinates for more advanced lighting. Then you can get into physically based rendering (which the major engines all support) for even more complex but visually compelling scenes.

Not sure how this relates to lightmapping. Aren't ST coordinates the same as UVs with one axis flipped? As for SH, I actually use a form of simple GI for dynamic objects already, and their data is encoded as spherical harmonics. But since the vast majority of my scene will be static I want to look into baking lightmaps for those objects.

If I were in your shoes I would probably share the “base” lightmap UV among all instances of a particular mesh, and then have a per-instance 2D transform that you apply to get the “final” packed location of an instance's lightmap data. It is simpler and requires significantly less per-instance data, so you will save yourself some headaches. If you were to go the other way and have fully unique lightmap UVs per instance, you could then pack the individual sub-charts of the mesh independently for each instance. This can give you a bit better packing, but at the cost of more data. Another option would be to have some kind of per-vertex lookup to figure out the chart that a vertex belongs to, and then use that to get a per-instance-per-chart transform, but I think that's probably overkill.
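One simple way to produce those per-instance transforms (a hypothetical sketch, not how any particular baker packs): give every instance an equal square slot on a grid over the atlas, so the 2D transform is just that slot's scale and offset.

```python
# Hypothetical sketch: assign each instance an equal square slot in
# the unit lightmap atlas. Real bakers pack variable-sized charts for
# better utilization; equal slots trade wasted space for simplicity.
import math

def assign_atlas_slots(num_instances):
    """Return a per-instance (scale, (offset_u, offset_v)) list that
    tiles the unit atlas with equal square slots on a grid."""
    grid = math.ceil(math.sqrt(num_instances))  # slots per row/column
    scale = 1.0 / grid
    slots = []
    for i in range(num_instances):
        col, row = i % grid, i // grid
        slots.append((scale, (col * scale, row * scale)))
    return slots

slots = assign_atlas_slots(3)   # 2x2 grid, one slot left unused
print(slots[2])                 # (0.5, (0.0, 0.5))
```

The unused slot illustrates the tradeoff mentioned above: shared base UVs with one transform per instance are simple but pack less tightly than per-instance, per-chart packing.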

Regarding SH, I believe what frob was saying was that these days it's common to encode the lightmap data itself using spherical harmonics, or some other basis that provides directionality. The main benefit is that this lets your light maps combine nicely with normal maps.
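To illustrate that point (a generic linear-SH sketch, not Unity's or any engine's actual encoding): if each lightmap texel stores band-0 and band-1 SH coefficients, shading can evaluate them in the direction of the per-pixel normal, so a normal map changes the result even within one texel.

```python
# Sketch: evaluating a linear (L0 + L1) SH lightmap texel in the
# direction of the shading normal. Coefficient layout and constants
# follow the standard real SH basis for bands 0 and 1.

def eval_sh_l1(sh, normal):
    """Evaluate L0+L1 SH in direction `normal`.
    sh = (l00, l1m1, l10, l11); normal = (x, y, z), unit length."""
    x, y, z = normal
    l00, l1m1, l10, l11 = sh
    # 0.282095 = sqrt(1/(4*pi)), 0.488603 = sqrt(3/(4*pi))
    return (0.282095 * l00
            + 0.488603 * (l1m1 * y + l10 * z + l11 * x))

# A texel lit mostly from +Z responds to the normal direction:
sh = (1.0, 0.0, 0.6, 0.0)
up = eval_sh_l1(sh, (0.0, 0.0, 1.0))
side = eval_sh_l1(sh, (1.0, 0.0, 0.0))
print(up > side)   # True: normals facing the light receive more energy
```

A flat (non-directional) lightmap would return the same value for both normals, which is why it flattens out normal-mapped detail.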

