
Exact Syncing

Started by
28 comments, last by SaltySnacks 23 years, 7 months ago
You can't have perfect synchronisation because you would need to know the time for a one-way trip (i.e. server to client), which is impossible to tell unless the clocks on both computers are themselves perfectly synchronised.

And you can't synchronise the clocks over the internet because you don't know how long the data takes to get to the machine (see above).

The only way to do this is to use a private network where you can predict the trip times (because you know how much traffic there will be), or to physically carry an accurate clock around to each client to synchronise their clocks.
damn, so close... yet so faaarrr off


- The trade-off between price and quality does not exist in Japan. Rather, the idea that high quality brings on cost reduction is widely accepted.-- Tajima & Matsubara
It is a private network... all machines, including the server, will be in the same room over Ethernet. Which leads me to believe that the lag will be so small anyway that I don't really need to worry about syncing it. However, I could be wrong.
If you're on the same subnet on 10BT or 100BT, the lag will be a few microseconds, or the thread delay (which could be as high as 10 ms).

But the reality is, two people can click the button at the same time even if you were perfectly sync'ed.

- The trade-off between price and quality does not exist in Japan. Rather, the idea that high quality brings on cost reduction is widely accepted.-- Tajima & Matsubara
quote: But there's no way to reliably get synchronized with the server to begin with. Your margin of error is the round trip time.


I have to disagree with this strenuously... protocols like NTP do a very good job of synchronizing clocks over high-latency connections.

Here's the basic idea:

1) Send a ping packet with a millisecond timestamp (L1).
2) The remote host replies after inserting *his* timestamp (R1).
3) When you get the response, get another timestamp. (L2).
4) The equation dT = R1 - (L1+L2)/2 gives the measured difference between your clocks.
5) Use this offset to adjust timestamps from your local clock to that of the game clock: LOCALTIME + dT = GAMETIME.
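As a concrete sketch of steps 4 and 5 (plain Python; the function names are my own, not from any real networking API):

```python
def clock_offset(l1, r1, l2):
    """Estimate the remote-minus-local clock offset (dT) from one ping.

    l1 -- local millisecond timestamp when the ping was sent
    r1 -- timestamp the remote host inserted into its reply
    l2 -- local timestamp when the reply arrived

    Assumes the outbound and return trips take equal time, so the
    remote clock read r1 at local time (l1 + l2) / 2.
    """
    return r1 - (l1 + l2) / 2

def game_time(local_ms, dt):
    # Step 5: GAMETIME = LOCALTIME + dT.
    return local_ms + dt
```

For example, a ping sent at L1=1000 that returns at L2=1100 carrying R1=1550 yields dT = 1550 - 1050 = 500 ms.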

Obviously, higher RTTs and greater variance will reduce the accuracy, but that can be offset by repeating this operation over time and selecting those response packets with the lowest RTT (or by taking a weighted average). Basic statistics will even let you calculate the *real* margin of error for a given series of ping data.
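The lowest-RTT selection might look like this (a sketch, assuming each sample is an (offset, rtt) pair measured as above; the helper name is hypothetical):

```python
def best_offset(samples, keep=0.25):
    """Average the clock offsets of the fastest round trips.

    samples -- list of (offset_ms, rtt_ms) pairs from repeated pings.
    Sort by RTT and keep only the fastest fraction, on the theory that
    low-RTT packets suffered the least queuing and so carry the most
    accurate offset estimates.
    """
    if not samples:
        raise ValueError("need at least one sample")
    ranked = sorted(samples, key=lambda s: s[1])
    n = max(1, int(len(ranked) * keep))
    chosen = ranked[:n]
    return sum(off for off, _ in chosen) / n
```

A weighted average (weighting each offset by the inverse of its RTT) would be a natural variant of the same idea.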

Best of all, these calculations are fast enough that you can perform them on the fly. I do this in my games, along with other data measurement, so that I can indicate potential network problems -- even improve path prediction and collision detection in my simulation.

For more information, check out:
Matt Slot / Bitwise Operator / Ambrosia Software, Inc.
Check out what! 8^) Hehe, it seems your message got cut off. Well, fprefect is right on. It's unlikely that most players will have such large variability in pings (order-of-magnitude ping differences, as mentioned). NTP is the best solution in this case, and quite easy to implement. Check out this site for a good description:

www.codewhore.com

It would be well worth your time to implement; it will open a whole new arena of possibilities for synchronization between the client and server.

Good Luck

-ddn
Oops, I put brackets on the URL and it ate it as HTML.

Actually, I was referring to the CodeWhore site as well -- since it's mine. Comments and feedback are welcome!
Matt Slot / Bitwise Operator / Ambrosia Software, Inc.
We were trying to find a better way; that is the method I suggested...

The equation R1 - (L1+L2)/2 incorrectly assumes that packets traveling to a destination take the exact same amount of time to return, which is usually not the case.

This method syncs you to the difference between L1 & L2, which should be relatively small in most cases, <100 ms? The problem is there's no way to measure L1 separately from L2.

quote:
It's unlikely that most players will have such large variability in pings

But they do! Have you ever played EverQuest? It constantly varies and jumps around, a lot! (Some is due to lag, not latency, but even during 'quiet hours' it still bobbles.)


quote:
Obviously, higher RTTs and greater variance will reduce the accuracy, but that can be offset by repeating this operation over time and selecting those response packets with the lowest RTT (or by taking a weighted average). Basic statistics will even let you calculate the *real* margin of error for a given series of ping data.

But they can't, unfortunately. I wish you were right; I even thought you were for a few moments.
Then I thought: the best you can do is determine the modal trip time, and never the offset. But that won't do you any good; it doesn't matter what the average or Sx is, you need to know exactly what the offset is for this packet, not roughly what the offset is to x many decimals.

It's more important that your sync algorithm takes such disruptions into account and adjusts for them.
- The trade-off between price and quality does not exist in Japan. Rather, the idea that high quality brings on cost reduction is widely accepted.-- Tajima & Matsubara
I guess I wasn't clear, but I meant the L1+L2 pings. Their variability won't usually be that great, well under 100 ms. This adds an inherent error to the time sync; however, NTP doesn't claim to be perfect, it only synchronizes clocks to within a range ± error. For most multiplayer games the error is well within the acceptable range of unpredictability inherent in any multiplayer game. I guess the only way to convince yourself of its utility is to implement it and try it out for yourself. It works, and works well, is the only argument I can put up for it. It's explained well in the codewhore article; you can start there.

To fprefect:

I enjoyed your articles; they are very informative. However, in one article you mentioned the use of splines to smooth out movement data. Wouldn't such a system add additional latency, since you would buffer up current data points to interpolate a smooth path? In another article you mention latency as being the bane of network simulations. Is there a way of using spline interpolation without data buffering?

Also in one article you suggested packet reduction as a major way to improve network performance. What do you think about using an aggressive compression scheme? Perhaps you could do an article about it?

Good Luck

-ddn
quote:
The equation R1 - (L1+L2)/2 incorrectly assumes that packets traveling to a destination take the exact same amount of time to return, which is usually not the case.

This method syncs you to the difference between L1 & L2, which should be relatively small in most cases, <100 ms? The problem is there's no way to measure L1 separately from L2.


Absolutely correct, but (and it's a big but) there are very few real-world instances of asymmetric links. In 99% of cases, the difference between L1 and L2 is simply the non-deterministic but reasonable error of networking. That said, you are exaggerating the non-determinism of normal Internet speeds.

In the case of severely asymmetric links (fast download, slow return trip), there is no reliable way that I know of to calculate L2-L1, but it's my opinion that assuming L1=L2 should *not* seriously impact the prediction and collision detection in your game. Imagine this: the world you see is L1 in the past (compared to the authority/server), but your response times are delayed by L2, so the overall effect is the same.

We see players with very slow connections, who are at a disadvantage until they learn to cope with ghosting or aliasing of their targets. Similarly, asymmetric latency is simply another handicap -- as long as the latency is stable, it is possible to compensate for it. (If the latency is totally unstable, then all bets are off -- but so is everything else.)

quote:
But they do! Have you ever played EverQuest? It constantly varies and jumps around, a lot! (Some is due to lag, not latency, but even during 'quiet hours' it still bobbles.)


Well, latency is measurable. You can establish certain parameters within which the gameplay should be tolerable, and when the latency falls outside that range, you either show a blinking light or simply disconnect the user and blame the ISP. To combat latency, you need to be able to measure and then predict it. If you can't do that, then you shouldn't be designing network games.

(I was recently involved in a Usenet flamewar regarding poor performance on the backbone for my cable ISP. I don't want to bring that in here, but I think we as game designers can agree that there is a certain point where even the best of network designs will fail.)

Given sufficient pings (say 10 or 20), you will be able to approximate the lowest RTT and the general variance of network latency. Just think back to statistics and picture the bell curve with a fixed lower bound (distance over the speed of light): a large curve with a sharp dropoff. A wider curve means more variability, but you can now determine how long the fastest 25%, 50%, or 75% of round trips will take (the 25th, 50th, and 75th percentiles respectively) -- then plan your prediction accordingly.
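That percentile calculation can be sketched in plain Python (nearest-rank percentiles; the helper name is hypothetical):

```python
def rtt_percentiles(rtts, percentiles=(25, 50, 75)):
    """Nearest-rank percentiles of round-trip samples (milliseconds).

    Returns a dict mapping each requested percentile to the smallest
    RTT such that at least that fraction of samples are at or below it,
    which bounds how far ahead prediction needs to reach.
    """
    if not rtts:
        raise ValueError("need at least one RTT sample")
    ranked = sorted(rtts)
    out = {}
    for p in percentiles:
        # Nearest-rank method: rank = ceil(p * n / 100), 1-indexed.
        rank = max(1, -(-p * len(ranked) // 100))
        out[p] = ranked[rank - 1]
    return out
```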

quote:
But that won't do you any good; it doesn't matter what the average or Sx is, you need to know exactly what the offset is for this packet, not roughly what the offset is to x many decimals.

It's more important that your sync algorithm takes such disruptions into account and adjusts for them.


I maintain that you can do that statistically: discard any clock-synchronization packets whose RTT exceeds the standard deviation. For a fast connection, this window may be too small and require some fudging. For a slow connection, the tolerance will be wider but still aggressive enough to filter out noise. For unstable connections, the tolerance will be wider still -- a best effort in the face of increasing unreliability.
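A minimal Python sketch of that statistical filter (assuming (offset, rtt) sample pairs as before; uses the standard library's population statistics):

```python
import statistics

def filter_by_rtt(samples):
    """Discard sync samples whose RTT exceeds one standard deviation
    above the mean RTT; the survivors feed the offset estimate."""
    if len(samples) < 2:
        return list(samples)
    rtts = [rtt for _, rtt in samples]
    mean = statistics.fmean(rtts)
    dev = statistics.pstdev(rtts)  # population standard deviation
    cutoff = mean + dev
    return [(off, rtt) for off, rtt in samples if rtt <= cutoff]
```

On a fast, stable link the deviation (and so the window) shrinks toward zero, which is where the fudging mentioned above comes in.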

With even a 400 ms stable RTT, you can easily synchronize your clocks to within +/-10 ms of reference. If the network isn't stable, then you can still calculate the range of your error. As long as it's below 60 ms (2 frames), it's not going to be visible to the user.

I understand that you feel the Internet is simply too non-deterministic to maintain a synchronized network clock, but I have practical first-hand experience to refute this. My network engine has successfully run between a modem on the US east coast and a client in Australia -- with 600 ms latency and moderate variability, we saw less than 30 ms error.
Matt Slot / Bitwise Operator / Ambrosia Software, Inc.

