
Tessendorf Ocean Normals

1 comment, last by J. Rakocevic 3 years, 11 months ago

I've been working on an implementation of the Tessendorf paper on ocean surfaces. I'm pretty well along, but I'm realizing that with per-vertex normals, the specular reflections I'm getting will always be pretty blurry. Like what Keith Lantz has going on in his pictures vs. Eric Bruneton's glittering BRDF. A big part of that is that Bruneton is generating a screen-space grid with per-pixel normals, while Keith and I are generating 3D grids with per-vertex normals.

Anyone have any experience with ocean rendering? What's my next move here? Should I just scroll a normal map over top to simulate a rippling surface? Is there a microfacet BRDF I can use with my coarse normals? Something else I'm missing, maybe?


I haven't done ocean rendering, but I have done a few water shaders with ripples, Fresnel, etc.

One important thing to note is that your geometry has a limited complexity set by you (vertex grid density), and therefore your knowledge of "true" normals only goes as far as that. Extra normal detail on any finer level can't really be accurate, since normals in the pixel/fragment shader are simply interpolated between vertices. But they can be faked with noise. So if the algorithm uses a coarser vertex grid for processing purposes, you can still sprinkle some good ole Perlin variation with careful parameter selection to add finer changes (basically procedural normal mapping, and it can be surprisingly cheap). If you want a lot of control over it, the parameterization can get involved (e.g. if your waves change size or speed, how do you keep the noise looking natural and fitting?).
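If it helps, here is roughly what I mean, sketched on the CPU in C++ for readability (in practice you'd do this in the pixel shader). The noise2D function below is just a hypothetical smooth-noise placeholder, and the frequency/amplitude/wind parameters are invented for illustration, not anything from Tessendorf's paper:

```cpp
// Minimal sketch: tilt a coarse, interpolated water normal by the gradient of
// a scrolled noise field, then renormalize. noise2D is a placeholder -- swap
// in a real Perlin/simplex implementation.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Placeholder smooth 2D noise in roughly [-1, 1]; stands in for Perlin noise.
static float noise2D(float x, float y) {
    return 0.5f * std::sin(12.9f * x + 7.3f * y) + 0.5f * std::sin(3.7f * x - 5.1f * y);
}

// frequency sets the ripple size, amplitude sets how much the fake detail
// overrides the geometry normal, and time * wind scrolls the ripples.
Vec3 perturbNormal(Vec3 coarseNormal, float u, float v,
                   float frequency, float amplitude,
                   float time, float windU, float windV) {
    float su = (u + time * windU) * frequency;
    float sv = (v + time * windV) * frequency;

    // Approximate the noise gradient with central differences.
    const float eps = 0.01f;
    float dndu = (noise2D(su + eps, sv) - noise2D(su - eps, sv)) / (2.0f * eps);
    float dndv = (noise2D(su, sv + eps) - noise2D(su, sv - eps)) / (2.0f * eps);

    // Treat the noise as a small height field riding on the surface and bend
    // the normal against its slope (assumes a mostly y-up water plane).
    return normalize({ coarseNormal.x - amplitude * dndu,
                       coarseNormal.y,
                       coarseNormal.z - amplitude * dndv });
}
```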

Having said that, the paper I think you are talking about uses a deterministic, analytic formula to displace the surface based on some parameters like time t and a "wave vector". In volumetric rendering, when it's done by people smarter than me, there are examples where a function's derivative is found in order to determine the normal of the surface it describes. Whether this particular function can be differentiated analytically I don't know, sorry. An alternative is to sample the function at several really close points and use the differences to approximate the derivative.
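As a sketch of that last option (again C++ for readability, and the height function here is a made-up placeholder rather than the real ocean spectrum), central differences over a height field look something like this:

```cpp
// Build a normal from central differences of a height function y = h(x, z)
// instead of deriving it analytically. h() below is a hypothetical stand-in;
// replace it with the evaluated ocean surface.
#include <cmath>

struct Vec3 { float x, y, z; };

// Placeholder height field, not Tessendorf's spectrum sum.
static float h(float x, float z, float t) {
    return 0.5f * std::sin(0.8f * x + 1.3f * t) + 0.2f * std::sin(1.7f * z - 0.9f * t);
}

// The normal of y = h(x, z) is proportional to (-dh/dx, 1, -dh/dz). eps should
// be small relative to the finest wave detail, but not so small that it drowns
// in floating-point noise.
Vec3 heightFieldNormal(float x, float z, float t, float eps = 0.01f) {
    float dhdx = (h(x + eps, z, t) - h(x - eps, z, t)) / (2.0f * eps);
    float dhdz = (h(x, z + eps, t) - h(x, z - eps, t)) / (2.0f * eps);

    Vec3 n = { -dhdx, 1.0f, -dhdz };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}
```

The same differencing works per pixel if you can evaluate (or sample) the height at arbitrary points in the fragment shader, which is what gets you away from per-vertex normals entirely.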

What I'm getting at is that, if an analytic solution for the normal, or even just brute-force multisampling, works fast enough, I'm not sure you need the vertex grid at all. A volumetric post-process shader can do a lot (I made volumetric clouds like that, with no vertices except for the screen quad).

