Why sync client and server tick?

I have been working on my networking solution in Unity for a few months now. I have implemented a delta compression system where I compress commands, states, etc., and I have implemented client-side prediction, and I still don't understand why I would sync the client and server tick, other than to schedule future events to happen at the same time on both the client and the server (for example, to have a bomb explode at the same time regardless of latency). But most things I can think of are instantaneous; in action games almost everything happens without anticipation. So I feel like syncing the tick is not really worth it?
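
For that scheduled-event case, a shared tick does make the trigger trivial: the server announces a tick number and each side fires the event when its own simulation reaches it. A minimal sketch in C# (ScheduledEvent and TickSimulation are made-up names, not from Unity or any networking library):

    using System;
    using System.Collections.Generic;

    // Hypothetical names; nothing here comes from Unity or a library.
    class ScheduledEvent
    {
        public int ExecuteAtTick;   // tick agreed on by server and clients
        public Action Action;       // what to do when that tick arrives
    }

    class TickSimulation
    {
        readonly List<ScheduledEvent> pending = new List<ScheduledEvent>();
        public int CurrentTick { get; private set; }

        // The server picks a tick far enough in the future that every client
        // receives the message before its own simulation reaches that tick.
        public void Schedule(int tick, Action action) =>
            pending.Add(new ScheduledEvent { ExecuteAtTick = tick, Action = action });

        public void Step()
        {
            CurrentTick++;
            // Fire everything whose tick has arrived, oldest first.
            pending.Sort((a, b) => a.ExecuteAtTick.CompareTo(b.ExecuteAtTick));
            while (pending.Count > 0 && pending[0].ExecuteAtTick <= CurrentTick)
            {
                pending[0].Action();
                pending.RemoveAt(0);
            }
        }
    }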

What I do to keep the client and server in sync is to attach a command number to each command from the client, starting at 0, and use it to keep track of them, for acks, and so on. I have a command queue on the client and I send the entire thing, with each command delta compressed against the next. Usually I only send one or two commands; only when packet loss happens is it more than that.

On the server I have a buffer where I wait for two commands before starting to execute them, so I always have a command to execute each tick. When the server receives more than one new command after having none at all (which happens when packet loss occurs), it executes all of them in the same tick. That results in a jump in the character's movement for other clients: it was previously standing still (no commands) and now suddenly appears farther ahead (several commands executed together).

To make sure this doesn't ruin the experience for other players, who would otherwise have to deal with a stuttery player, I measure how often the server has to execute more than one command in the same tick for this client. When it happens a lot, the server increases the client's buffer (raising its command execution latency, which preserves the smooth experience for other players despite packet loss, at the expense of more deaths behind cover caused by the higher latency). With the bigger buffer the server will most likely not have to execute more than one command in the same tick, because the bursts of commands from the client fill the gaps before the server runs out of commands to execute. Once the server detects that the client's connection has improved, it shrinks the buffer back to two commands before execution.
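
For reference, here is a rough sketch of that adaptive buffer policy (all names and thresholds are illustrative guesses, not the actual implementation):

    using System;
    using System.Collections.Generic;

    struct PlayerCommand { public int Id; /* movement, buttons, ... */ }

    // Per-client buffer on the server: wait for a few commands before executing,
    // grow the target when the connection starves it, shrink it back when stable.
    class ClientCommandBuffer
    {
        readonly Queue<PlayerCommand> queue = new Queue<PlayerCommand>();
        int targetBuffered = 2;   // baseline: wait for two commands
        int starvedTicks;         // recent ticks where the buffer ran dry
        int healthyTicks;         // consecutive ticks with commands available

        public void Enqueue(PlayerCommand cmd) => queue.Enqueue(cmd);

        // Called once per server tick; returns the commands to execute this tick.
        public List<PlayerCommand> DequeueForTick()
        {
            var toRun = new List<PlayerCommand>();

            if (queue.Count == 0)
            {
                starvedTicks++;
                healthyTicks = 0;
                return toRun;     // nothing to run: the character stands still
            }

            // If commands piled up past the target (a burst after loss),
            // run several in one tick to catch up.
            int run = Math.Max(1, queue.Count - targetBuffered);
            for (int i = 0; i < run; i++)
                toRun.Add(queue.Dequeue());

            // Starving often: trade latency for smoothness by buffering more.
            if (starvedTicks > 3 && targetBuffered < 8)
            {
                targetBuffered++;
                starvedTicks = 0;
            }

            // Stable for a long stretch: drift back toward the two-command baseline.
            if (++healthyTicks > 300 && targetBuffered > 2)
            {
                targetBuffered--;
                healthyTicks = 0;
                starvedTicks = 0;
            }
            return toRun;
        }
    }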

This all works well at the moment. Is there something I'm doing wrong and I'm not realizing it? Why would I sync the client and server ticks when I already have a system that works without it?

If you don't depend on the specifics of the physical simulation, then just synchronizing the commands that update the game state is totally fine! This is what I think most RTS-es do with the deterministic simulation model. As long as the physics is just used to “present” the authoritative state that's determined by your command queue, that will work fine.

There's of course the problem of player A sending a command at the same time as player B, and the server has to tie break who “won” (if it's something like “pick up a health pack” or “strike the killing blow” or whatever.) This ends up being mostly put into latency hiding – player gives a command, gets command-acknowledgement, but doesn't see the actual outcome until the server responds with the outcome.

enum Bool { True, False, FileNotFound };

@hplus0603 Thanks for responding!

The framework I'm working on is geared towards FPS and action games (I want the netcode to support making something on the scale of Battlefield 4 multiplayer, where you have soldiers and vehicles and up to 64 players). In an older post (https://www.gamedev.net/forums/topic/697159-client-side-prediction-and-server-reconciliation/) I read a comment of yours where you said that syncing the clock is better when you have a high player count; can you clarify how? I am mainly interested in knowing what AAA multiplayer FPS games like Battlefield do. I still want to be able to compensate the client for packet loss and execute a bunch of commands in a single tick; BF4, for instance, definitely does this for movement commands.

If you can't afford to run the full simulation for every player, based on command order, and specifically, can't afford to hide the latency between “shoot gun” and “see result,” then using tick based simulation is better, because it allows you to talk about when something was initiated, vs when something was resolved, AND it lets you talk about the state of other entities on your screen.

So, for example, let's say it's tick 200. I line up the crosshairs on you, and pull the trigger. You are 10 ticks away from me (to server, and then to your machine) and thus the latest state I have from you is tick 190. So, I will send a packet to the server similar to:

  • At my-tick 200
  • I fired my gun at you, in the given orientation/position
  • who were extrapolated to tick 200 based on the state I had for tick 190

The server can then place your entity at the spot it was at tick 190, extrapolate it forward to its displayed position at tick 200, and then see whether my shot would hit or not.
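
A rough sketch of that rewind-and-test step on the server, assuming a stored position history per entity (the names, the constant-velocity extrapolation, and the sphere-vs-ray test are all illustrative simplifications):

    using System;
    using System.Collections.Generic;
    using System.Numerics;

    // Hypothetical sketch: the server keeps a short position history per entity
    // and rewinds the target to the tick the shooter actually saw.
    class LagCompensator
    {
        // entityId -> (tick -> position)
        readonly Dictionary<int, Dictionary<int, Vector3>> history =
            new Dictionary<int, Dictionary<int, Vector3>>();

        public void Record(int entityId, int tick, Vector3 position)
        {
            if (!history.TryGetValue(entityId, out var samples))
                history[entityId] = samples = new Dictionary<int, Vector3>();
            samples[tick] = position;
        }

        // The shooter reports: "I fired at shooterTick, and the target I aimed at
        // was extrapolated from the state I had for targetStateTick."
        public bool TestHit(int targetId, int targetStateTick, int shooterTick,
                            Vector3 rayOrigin, Vector3 rayDir, float hitRadius)
        {
            if (!history.TryGetValue(targetId, out var samples) ||
                !samples.TryGetValue(targetStateTick, out var basePos))
                return false;   // no history for that tick: reject the claim

            // Re-create the shooter's view: extrapolate the target forward with a
            // simple constant-velocity guess taken from the previous tick.
            Vector3 velocity = Vector3.Zero;
            if (samples.TryGetValue(targetStateTick - 1, out var prev))
                velocity = basePos - prev;
            Vector3 rewound = basePos + velocity * (shooterTick - targetStateTick);

            // Treat the target as a sphere and test the shot ray against it.
            Vector3 dir = Vector3.Normalize(rayDir);
            float along = MathF.Max(Vector3.Dot(rewound - rayOrigin, dir), 0f);
            Vector3 closest = rayOrigin + dir * along;
            return Vector3.Distance(closest, rewound) <= hitRadius;
        }
    }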

My gun would go “bang” and muzzleflash and cycle the action right away, so I get immediate feedback of my shot. Ideally, there's some smoke from the muzzle flash that obscures your avatar for a hundred milliseconds, too :-)

Then the server will send a message to everyone who can see it, that:

  • At server tick 200
  • I fired a gun in the given direction/position
  • and I hit you (or, I missed)
  • and if I hit you, your new hitpoints are X

When my client receives this packet it draws the blood spray out of your avatar that represents being hit, and deducts your hitpoints from the hitpoint bar.

In this model, my local client is running “me” at tick 200, the server would be at tick 195, and I would be seeing “you” as of tick 190, because of the transmission delay.

This model is largely the “source engine lag compensation” model, with extrapolation of entities.

The problem with command IDs is that you cannot enforce a global order on the IDs. The server needs to enforce the order. You can presumably send a snapshot as a “command ID” from the client every tick, and treat that as the input to the server, at which point you have just renamed “tick number” to “command ID” and it's the same thing :-)

enum Bool { True, False, FileNotFound };

The way I thought about doing lag compensation is to attach to each shoot command the state tick of the entity that was shot.

But now I can see how the tick approach is cleaner and better. I am just afraid that it won't work well under bad network conditions. If I understand it correctly, wouldn't the continuous adjustments (specifically, setting the client's tick back into the past) result in missed commands? How would this approach differ from the one I currently have when it comes to accepting new commands? Currently, I send each client packet with all un-acked commands, as I explained in the original post, and I simply accept a command if cmdId > LastId. This way the player experience is always smooth even when packet loss/jitter happens.

Will commands be missed if you drop packets? Yes, unless you do something special to work around that, such as including the commands from the last few packets in each packet you send upstream. If you use RLE encoding, this generally doesn't add a whole lot of extra packet space anyway. You will still have to deal with longer delay for certain entities – you might want to “delay” the clock of a poor connection to give it more latency to give it time to catch up.
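
A minimal sketch of that redundant-input idea, assuming a simple tick-stamped command struct (names are illustrative): the client resends every un-acked command in each packet, and the server ignores anything at or before the last tick it already executed.

    using System.Collections.Generic;
    using System.Linq;

    struct InputCommand { public int Tick; public float MoveX, MoveY; public bool Fire; }

    // Client side: every outgoing packet repeats all commands the server has not
    // acknowledged yet, so a single lost packet costs nothing.
    class InputSender
    {
        readonly List<InputCommand> unacked = new List<InputCommand>();

        public List<InputCommand> BuildPacket(InputCommand newest, int lastTickAckedByServer)
        {
            unacked.RemoveAll(c => c.Tick <= lastTickAckedByServer); // confirmed: drop
            unacked.Add(newest);
            return new List<InputCommand>(unacked); // send the whole backlog
        }
    }

    // Server side: ignore anything at or before the last tick already executed.
    class InputReceiver
    {
        int lastExecutedTick = -1;

        public IEnumerable<InputCommand> FilterNew(IEnumerable<InputCommand> packet) =>
            packet.Where(c => c.Tick > lastExecutedTick).OrderBy(c => c.Tick);

        public void MarkExecuted(int tick) => lastExecutedTick = tick;
    }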

That being said, poor network connections will always lead to poor experience. Generally, connections don't have “just a missed packet here and there” – instead, they will either bunch up and send a whole set of packets at once every second or so, or they will have periods where many packets get lost; generally because of interference and how wifi works. Once you hit a copper or optical wire, packet loss is very rare – unless the backbones are overloaded, in which case, again, you'll see significant queuing/batching, and a real-time experience will be hard to deliver.

enum Bool { True, False, FileNotFound };

Thanks for the insight on packet loss.

Can you clarify what you mean by delaying the clock of a poor connection?

Now I have changed my code to have a universal local clock on the client that is independent of the server clock. I already send each packet with all un-acked commands from previous ticks on the client, so anytime packet loss happens they would accumulate and be sent together. My question is how to handle (accept/ignore) these commands on the server. I have read on older threads where some people try to sync the client tick to the server tick so that when the client packet arrives at the server, the packet tick would match the server tick.


I don't understand why one would do that. (Because, as you say, all that matters is that things happen in the correct order.)


Because doing this would result in ticks set back into the past due to the adjustments, and that would cause commands to be missed, if I understand how this works correctly. Reading through other posts, it seems that the proper way to go is to have a remote clock for each client on the server (and one on the client for the server as well) that tries to stay synced to the independent client/peer clock. Using that, we could know how far back we can accept client commands (by comparing our estimated remote tick to the command tick) to make sure we don't compensate too much, and it could also be used to properly schedule events to be executed in the future (relative to the server clock).
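
A small sketch of that idea, assuming both sides tick at the same fixed rate (the smoothing factor and names are arbitrary): the server keeps a smoothed estimate of the client's current tick and only accepts commands within a bounded rewind window behind it.

    // Hypothetical sketch; the 0.1 smoothing factor is arbitrary.
    class RemoteClock
    {
        double estimatedClientTick;   // smoothed guess of the client's current tick
        bool initialized;

        // Call whenever a packet stamped with the client's own tick arrives.
        public void OnPacket(int clientTickInPacket)
        {
            if (!initialized)
            {
                estimatedClientTick = clientTickInPacket;
                initialized = true;
                return;
            }
            // Exponential smoothing absorbs jitter instead of snapping the estimate.
            estimatedClientTick += 0.1 * (clientTickInPacket - estimatedClientTick);
        }

        // Advance once per server tick (assuming both sides tick at the same rate).
        public void OnServerTick()
        {
            if (initialized) estimatedClientTick += 1.0;
        }

        // Accept a command only if it isn't too far in the past, so that lag
        // compensation (rewinding other players) stays bounded.
        public bool IsAcceptable(int commandTick, int maxRewindTicks) =>
            initialized && commandTick >= estimatedClientTick - maxRewindTicks;
    }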

Is my thinking correct? I am almost done with the basic features of my networking solution; I am just confused about this server-client tick syncing.

Can you clarify what you mean by delaying the clock of a poor connection?

Assuming you queue the client commands so that they arrive at the server at the “right” tick: if, every so often, your packets are delayed by an extra 200 milliseconds, then count that time into the client's delay, so that when things are “good” the packets arrive early, and when things are “bad” they still arrive in time. If things go “bad” often enough, this is generally better than having the lower latency during the “good” times and losing or delaying commands during the “bad” ones.
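
One possible way to implement that padding, sketched under the assumption that the server reports back how early or late each command arrived (names and the 95th-percentile choice are illustrative):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical sketch: pad the client clock by roughly the worst extra delay
    // seen recently, so the occasional 200 ms spike still lands in time.
    class SendAheadEstimator
    {
        readonly Queue<double> recentLateness = new Queue<double>();  // in ticks
        const int Window = 256;

        // latenessTicks: how late (positive) or early (negative) the server said
        // our last command arrived, relative to the tick it was needed for.
        public void Report(double latenessTicks)
        {
            recentLateness.Enqueue(latenessTicks);
            while (recentLateness.Count > Window) recentLateness.Dequeue();
        }

        // Extra ticks to run the local send clock ahead of the bare minimum.
        public int ExtraTicksAhead()
        {
            if (recentLateness.Count == 0) return 0;
            var sorted = recentLateness.OrderBy(x => x).ToList();
            double p95 = sorted[(int)(0.95 * (sorted.Count - 1))];  // ignore the worst outliers
            return Math.Max(0, (int)Math.Ceiling(p95)) + 1;         // plus one tick of slack
        }
    }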

I have read on older threads where some people try to sync the client tick to the server tick so that when the client packet arrives at the server, the packet tick would match the server tick.
I don't understand why one would do that. (Because, as you say, all that matters is that things happen in the correct order.)

If players never interact with each other, the only thing that matters is that commands are done in order for each individual player.

When players start interacting with each other – shooting at each other, or competing for being “first” to a health pack pickup, or some other such scarce interaction, then the “order” of events needs to be carefully coordinated such that all clients agree on what happened, and the time latency between “giving command” and actually “seeing resolution” needs to be solved with gameplay and simulation design. For example, if two players are both racing for a health pack pickup, then when it's picked up, immediately play the pickup sound on the client. However, only give the health points back once the server confirms it. Whoever “won” actually gets the points, whoever didn't “win” hears the health pack pickup sound … which presumably then came from the other player picking it up, right next to them!
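
For the health-pack case just described, a minimal client-side sketch (hypothetical names): the cosmetic feedback is immediate, but the state change waits for the server's verdict.

    // The point is only the split between cosmetic feedback (immediate)
    // and authoritative state (server-confirmed).
    class HealthPackClient
    {
        bool pickupPending;

        // The local player walks over the pack: instant feedback, no state change.
        public void OnLocalOverlap()
        {
            PlayPickupSound();      // purely cosmetic, plays right away
            pickupPending = true;   // remember that we're waiting on the server
        }

        // The authoritative result arrives later.
        public void OnServerResult(bool weWon, int newHealth)
        {
            if (!pickupPending) return;   // result for something we never tried to grab
            pickupPending = false;
            if (weWon)
                SetHealth(newHealth);     // only now does the health bar move
            // If we lost, the sound we already heard reads as the other player
            // picking the pack up right next to us.
        }

        void PlayPickupSound() { /* trigger audio */ }
        void SetHealth(int hp) { /* update UI / game state */ }
    }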

To make these situations “look right” requires careful consideration. You have three options:

  1. Extrapolate all remote client actions on the local client. This will “often” make the remote players look like they're in the “right” place, except that when they turn sharply, it will have displayed those players in a “wrong” place they were never in. This is very common in FPS-es. (A small extrapolation sketch follows this list.)
  2. Delay the resolution of local commands until all commands for a given time step can be received from the server. With long latency, this means you have to apply some kind of masking to hide the latency – else, you'd be pressing “forward, now!” and nothing would happen for 100 milliseconds before you started running forward. RTS games do this, by playing a “yes, sir!” animation before the units start moving on the local machine.
  3. Run the action immediately locally, display remote clients locally at their “late” (but correct) position, and design the game such that the precise location doesn't matter as much – no health packs, timed doors, moving platforms, and so on. If player A interacts with player B, allow interaction time stamps “in the past”. This is common for RPG-style games and some FPS-style games that don't have very interactive environments. Similarly, “base defense” and other more asynchronous multiplayer games can easily get away with this.
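
For option 1, a minimal dead-reckoning sketch (constant-velocity extrapolation with a cap; the names and cap value are illustrative, and real games clamp and blend corrections on top of this):

    using System;
    using System.Numerics;

    struct RemoteState { public int Tick; public Vector3 Position; public Vector3 Velocity; }

    static class Extrapolation
    {
        // How far past the last known state we're willing to guess.
        const int MaxExtrapolationTicks = 8;

        public static Vector3 Predict(RemoteState last, int currentTick, float tickSeconds)
        {
            int ahead = currentTick - last.Tick;
            if (ahead <= 0) return last.Position;              // state is already current
            ahead = Math.Min(ahead, MaxExtrapolationTicks);    // cap the guess
            return last.Position + last.Velocity * (ahead * tickSeconds);
        }
    }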

enum Bool { True, False, FileNotFound };

Couldn't the server simply not care and always favor the lower latency player? (And if multiple players have very close latencies, it would be based on the order of the server's client list, so the player at the beginning of the list always has his commands executed before the rest.) In my experience, I think most games do that. What do you think games like Fortnite/BF do?

Regarding extrapolation, I am not sure most multiplayer action games use it, especially FPS games. In a game like Battlefield, for instance, where the player-controlled character is fast and can move left and right easily, it would result in continuous visual corrections. So it's not something I would consider using for my game; I will stick to interpolating known states. Unless I'm missing something and it's not that bad?

And thanks for replying, I really appreciate it.

Couldn't the server simply not care and always favor the lower latency player?

What does the higher-latency player see, though? You still need to have the bit where one of the players needs to get “corrected” somehow, and the game design challenge of hiding those cases as much as possible.

enum Bool { True, False, FileNotFound };
