Understanding Lag Compensation & Timestamps

2 hours ago, Mussi said:

Ping and measure the RTT before you start/join the game.

Note that lag compensation is a concept, not a specific implementation. What you've described is one form of compensation, but there are other ways to compensate for lag, e.g. simply increasing the radius of a fireball (there are pros and cons to every approach and it depends on the type of game).

Thank you! However, how does RTT translate to server ticks?

Say I have 100ms ping when I start. I connect to the server, and the server sends me a message saying "I am at tick 120". At what tick should the client start to guarantee it stays ahead?

 

And yes, sorry, I meant Valve's specific lag compensation in hindsight.


Typically, the client will put their clock in each packet that goes out, and the server will then echo back "the packet where you said clock was X, I received at time Y," which means that the client can compare X to Y to make appropriate adjustments. Typically, the client will aim to make X = Y+nTicks where nTicks is how many physics tick commands go into a network message.
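
In very rough C# terms, something like this (all type and member names here are made up for illustration, not from any particular engine):

// Client stamps each packet with its tick; the server echoes what it saw.
struct CommandPacket { public int ClientTick; /* plus the actual inputs */ }
struct TickEcho      { public int ClientTick; public int ServerTickAtArrival; }

class TickSync
{
    // How many ticks ahead of the server the client wants to land (nTicks).
    public int TargetTicksAhead = 2;

    // Offset added to the locally counted tick to get the tick stamped on packets.
    public int ClockOffsetTicks;

    // Server side: echo back "you said X, I received it at Y".
    public static TickEcho MakeEcho(CommandPacket pkt, int serverTickNow)
    {
        return new TickEcho { ClientTick = pkt.ClientTick, ServerTickAtArrival = serverTickNow };
    }

    // Client side: compare X to Y and nudge the offset so X ends up at Y + nTicks.
    public void OnEcho(TickEcho echo)
    {
        int ahead = echo.ClientTick - echo.ServerTickAtArrival; // how far ahead we actually were
        int error = ahead - TargetTicksAhead;
        ClockOffsetTicks -= error;   // crude full correction; damp this in practice
    }
}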

enum Bool { True, False, FileNotFound };
21 hours ago, hplus0603 said:

Typically, the client will put their clock in each packet that goes out, and the server will then echo back "the packet where you said clock was X, I received at time Y," which means that the client can compare X to Y to make appropriate adjustments. Typically, the client will aim to make X = Y+nTicks where nTicks is how many physics tick commands go into a network message.

Thank you yet again!

I actually went and got something working. I poked around the Unity Multiplayer FPS sample and saw what they were doing with this. I hadn't understood what it was doing before, but now it makes sense!

Now my client is running ahead of the server in its clock by at least 1 buffered "frame"/tick.

However, sometimes the client drifts. There's a formula the Unity FPS sample uses for this, which I'll describe in more detail in a later post. When the client drifts, it ends up 1 tick behind the server and then resets itself to be 1 tick ahead, which is good. However, the command the client sent for that tick is always lost.

The reason it is lost is that when the server receives a command from a tick in the past, I disregard it. Perhaps I shouldn't ignore inputs that are roughly -1 from the current server tick?

Example: I sent move for tick 2 and jump for tick 3. The server is already at tick 3. It looks and sees both inputs, but I was previously dropping the move input (2 < 3). So should I go ahead and perform both the move and jump on tick 3?

 

If the client drifts, then you will lose a command. That's just the way it is, unless you want to re-play the world for all players (note that all other players will also see your command late, and need to re-simulate!)

To avoid this, you can aim for the client to provide commands 2 or 3 ticks ahead, and when you see a command 0-1 tick ahead, tell the client to update its clock.

In general, though, clocks on computers don't drift much. If you see drift, could it be that you count "graphics frames displayed" instead of using a high accuracy clock like CLOCK_MONOTONIC_RAW or QueryPerformanceCounter() ?
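
A rough sketch of the server side of that, with names made up and the drop-or-accept policy left to your game:

using System.Collections.Generic;

// Commands should arrive a couple of ticks early; if one arrives only
// 0-1 ticks ahead (or late), ask the client to shift its clock.
class CommandIntake
{
    public int TargetTicksAhead = 3;
    readonly SortedDictionary<int, byte[]> buffered = new SortedDictionary<int, byte[]>();

    // Returns how many ticks the client should shift its clock by (0 = no change).
    public int Receive(int commandTick, byte[] command, int serverTick)
    {
        int ahead = commandTick - serverTick;
        if (ahead >= 0)
            buffered[commandTick] = command;   // late commands are simply dropped here

        return ahead <= 1 ? TargetTicksAhead - ahead : 0;
    }
}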

 

enum Bool { True, False, FileNotFound };
36 minutes ago, hplus0603 said:

If the client drifts, then you will lose a command. That's just the way it is, unless you want to re-play the world for all players (note that all other players will also see your command late, and need to re-simulate!)

To avoid this, you can aim for the client to provide commands 2 or 3 ticks ahead, and when you see a command 0-1 tick ahead, tell the client to update its clock.

In general, though, clocks on computers don't drift much. If you see drift, could it be that you count "graphics frames displayed" instead of using a high accuracy clock like CLOCK_MONOTONIC_RAW or QueryPerformanceCounter() ?

 

I'm counting the simulation ticks in a FixedUpdate loop (Unity). So I'm not using timestamps, but rather ticks that are supposed to occur every 16ms for example. 

The client waits until it receives the server's tick, then begins to count upward. It monitors the server's tick, so if the client slows down for some reason or is moving too fast, it will reset back to the server's tick (+ 1/2 RTT + 1-3 tick buffer). The problem I'm experiencing is that when this occurs and the clock resets, I often end up losing the one input that goes out just before the clock resets.

This is the implementation I got from the Unity FPS example and the Overwatch GDC architecture talk. I'm definitely open to using different clocks; I'm just afraid that if I rely on a particular clock across different devices/CPUs/architectures (since it's multiplatform), I may end up seeing issues.

Hello again,

So I'm just posting to run through my ideas out loud, and I'm curious if anyone has suggestions to improve the design or sees problems with it. I believe it's what the Unity FPS sample does.

The client waits for 2 world snapshots from the server. It uses delta encoding so it has to do this anyway. In each world snapshot, the server includes its current game-time tick.

The game client initializes its own game-time tick to:

The server's tick + ((timeSinceLastSnapshotInMS + rtt) / 1000) * 60 + 2

Where 60 is the tickrate of the server and 2 is the extra buffer of ticks.

Now the game client is constantly doing this over time. Each time the game client ticks, it runs this calculation, and if the current client tick number is too far behind or too far ahead of the result, it simply hard-sets its tick number to that result.
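
In code, my version looks roughly like this (simplified by hand, so it's not the sample's actual code):

using System;

class ClientClock
{
    public int TickRate = 60;       // server ticks per second
    public int BufferTicks = 2;     // extra buffer of ticks
    public int MaxDriftTicks = 4;   // how far off we tolerate before a hard reset

    public int ClientTick;          // advanced by 1 every FixedUpdate

    public void OnFixedUpdate(int lastSnapshotServerTick, float timeSinceLastSnapshotMs, float rttMs)
    {
        ClientTick++;

        // Estimate the tick the server should be at by the time our next command arrives.
        int predicted = lastSnapshotServerTick
                      + (int)(((timeSinceLastSnapshotMs + rttMs) / 1000f) * TickRate)
                      + BufferTicks;

        // Hard-set the tick if we've drifted too far in either direction.
        if (Math.Abs(ClientTick - predicted) > MaxDriftTicks)
            ClientTick = predicted;
    }
}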

This seems to work great for a while. But eventually the client drifts or hiccups and ends up too far ahead of or behind the server, and that results in a missed command. It happens maybe every 15-20 seconds. It's annoying, and I'm beginning to think I should ditch the design and go for something else.

Quote

eventually the client drifts

So, first, the client should not use this to set its "tick," but use it to set the "relation between local clock and server clock," where "clock" literally means the high-precision timer used to advance time in the OS. This is stored as an offset to QueryPerformanceCounter() or clock_gettime() in Linux. These clocks, in general, should proceed at very similar rates over time, so drift should be minimal in the normal case.

You don't need to only use the signal for "within tolerance" or "totally out of whack." You can let the server tell you how many ticks ahead/behind you are. If you aim for 2, and get a couple of messages saying you're 1, or 3, ticks ahead, you can adjust the offset by a small amount based on that information. Because the offset is in clock terms, not tick terms, you can adjust by fractional ticks. Just make sure you have some hysteresis/damping in that adjustment loop, or you're likely to get oscillations.
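
A minimal sketch of that kind of damped adjustment (the names and the gain value are just for illustration):

// Keep a fractional tick offset and nudge it gently toward the target each report.
class ClockOffset
{
    public double TargetTicksAhead = 2.0;
    public double OffsetTicks;     // added to the locally derived tick; can be fractional
    public double Gain = 0.1;      // small gain acts as damping and avoids oscillation

    // ticksAhead is what the server reported: how far ahead your commands arrived.
    public void OnServerReport(double ticksAhead)
    {
        double error = TargetTicksAhead - ticksAhead;
        OffsetTicks += error * Gain;   // correct only a fraction of the error per report
    }
}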

enum Bool { True, False, FileNotFound };
On 8/11/2019 at 1:28 PM, hplus0603 said:

So, first, the client should not use this to set its "tick," but use it to set the "relation between local clock and server clock," where "clock" literally means the high-precision timer used to advance time in the OS. This is stored as an offset to QueryPerformanceCounter() or clock_gettime() in Linux. These clocks, in general, should proceed at very similar rates over time, so drift should be minimal in the normal case.

You don't need to only use the signal for "within tolerance" or "totally out of whack." You can let the server tell you how many ticks ahead/behind you are. If you aim for 2, and get a couple of messages saying you're 1, or 3, ticks ahead, you can adjust the offset by a small amount based on that information. Because the offset is in clock terms, not tick terms, you can adjust by fractional ticks. Just make sure you have some hysteresis/damping in that adjustment loop, or you're likely to get oscillations.

So you think I should ditch the Unity FPS sample's direction and go for something where -

1) On login, the client sends a local timestamp (UTC ticks or something)

2) The server sees this, computes the delta with its own timestamp, and sends the client the result.

3) The client will now use its local timestamp + delta to send future commands to the server.
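
In other words, something roughly like this (pseudocode with my own naming):

using System;

class TimestampSync
{
    long deltaTicks;   // serverTime - clientTime, as computed by the server on login

    // 1) On login the client sends DateTime.UtcNow.Ticks.
    // 2) The server replies with (its own UtcNow.Ticks - clientTicks).
    public void OnLoginReply(long serverComputedDelta)
    {
        deltaTicks = serverComputedDelta;
    }

    // 3) Future commands are stamped with local time + delta, i.e. "server time".
    public long StampForCommand()
    {
        return DateTime.UtcNow.Ticks + deltaTicks;
    }
}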

 

Am I understanding you correctly?

I have no idea how the Unity FPS sample does it.

Perhaps you can adjust the FPS sample to make small, fractional adjustments if you receive a small delta, and only make the "big jump" adjustment if you really end up with a big offset.

One possible, somewhat simple implementation, is to make the server send back in each packet "I received commands for tick X when I was at tick Y" (which is really just a simple difference) and "your adjustment value was Z."

The client will send its adjustment value to the server, together with the target tick number.

The client can then easily adjust its adjustment value to target a specific number of ticks-ahead -- say, 2.0. Because the server tells you what the offset is as well as the delta, you can adjust the offset correctly without much risk of feedback / oscillation. If you don't want to include this value in each packet (it's 4 to 8 bytes, depending on representation), then you can instead apply dampening and hysteresis to the adjustment, where if you see an unacceptable value (less than -2.0 or greater than +20.0 in this case, for example) then you would adjust the entire difference in one big chunk, and forbid any more large adjustments for the next 2*RTT packets.
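
Very roughly, assuming made-up names and message formats:

// What the server echoes back with each packet.
struct ServerReport
{
    public int CommandTick;         // X: the tick the client stamped its commands with
    public int ServerTick;          // Y: the server's tick when those commands arrived
    public double EchoedAdjustment; // Z: the adjustment value the client had sent along
}

class AdjustmentTracker
{
    public double TargetTicksAhead = 2.0;
    public double Adjustment;       // added to the client's raw tick; sent with each packet

    public void OnReport(ServerReport r)
    {
        double ahead = r.CommandTick - r.ServerTick;   // how far ahead we actually landed
        double error = TargetTicksAhead - ahead;       // positive = need to run further ahead

        // The measurement was taken while the client was using adjustment Z, so the
        // adjustment that would have hit the target is simply Z + error. Because we
        // anchor on Z, we aren't correcting on top of corrections still in flight,
        // which is what avoids the feedback / oscillation problem.
        Adjustment = r.EchoedAdjustment + error;
    }
}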

enum Bool { True, False, FileNotFound };
On 8/12/2019 at 9:16 PM, hplus0603 said:

I have no idea how the Unity FPS sample does it.

Perhaps you can adjust the FPS sample to make small, fractional adjustments if you receive a small delta, and only make the "big jump" adjustment if you really end up with a big offset.

One possible, somewhat simple implementation, is to make the server send back in each packet "I received commands for tick X when I was at tick Y" (which is really just a simple difference) and "your adjustment value was Z."

The client will send its adjustment value to the server, together with the target tick number.

The client can then easily adjust its adjustment value to target a specific number of ticks-ahead -- say, 2.0. Because the server tells you what the offset is as well as the delta, you can adjust the offset correctly without much risk of feedback / oscillation. If you don't want to include this value in each packet (it's 4 to 8 bytes, depending on representation), then you can instead apply dampening and hysteresis to the adjustment, where if you see an unacceptable value (less than -2.0 or greater than +20.0 in this case, for example) then you would adjust the entire difference in one big chunk, and forbid any more large adjustments for the next 2*RTT packets.

 

At the moment, I have the server sending the client:

  • The server's current tick (e.g. tick 100)
  • The latest tick for commands it has received from that client (e.g. tick 105)

Using this information, the client assumes the server has 5 commands buffered for that client (one for each tick from 101 to 105). This is the delta.

I have a setting that says "if the delta is greater than X, tick slightly slower" and "if the delta is less than Y, tick slightly faster".
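
In code, the adjustment is basically this (simplified):

class TickRateAdjuster
{
    public float BaseTickSeconds = 1f / 60f;
    public int SlowDownAbove = 3;   // X: too many commands buffered = client too far ahead
    public int SpeedUpBelow = 2;    // Y: too few buffered = client falling behind

    // delta = latest command tick the server has from us minus the server's current tick.
    public float NextTickLength(int delta)
    {
        if (delta > SlowDownAbove) return BaseTickSeconds * 1.01f; // tick slightly slower
        if (delta < SpeedUpBelow)  return BaseTickSeconds * 0.99f; // tick slightly faster
        return BaseTickSeconds;
    }
}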

This appears to somewhat work, but it has a tendency to bounce around a lot when I set it to 2. I would expect the delta to stay close to 2 commands buffered, but it oscillates between 0 and upwards of 3-4 fairly rapidly. It never settles on 2 for long at any latency I throw at it, from 0 to 200 ms.

The Unity FPS sample does something pretty much identical, but it stays close to 2-3 when set to 2. I'm wondering if maybe my tick timestep is wrong somehow.

You suggested sending the adjustment value "Z" to the server; I presume you mean the number of ticks the client is guessing it should be ahead. Can you expand on how that would help? I don't mind the extra 4 bytes if it keeps my command buffer rock solid.

 

Thank you again by the way.
