Exact Syncing

Started by
28 comments, last by SaltySnacks 23 years, 7 months ago
quote:
I enjoyed your articles, they are very informative. However, in one article
you mentioned the use of splines to smooth out movement data. Wouldn't such
a system add additional latency, as you would buffer up current data points
to interpolate a smooth path?


First, let me clarify the difference between "prediction" and "interpolation".

Prediction is what you do when you have the previous location and velocity
of an object, but haven't gotten an update in time to draw the next frame. The
simplest and best known algorithm is dead-reckoning, but there are other solutions
as well. (For automated objects, such as NPCs, just run the script locally for an
unofficial hint as to what it'll do.)
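For example, a bare-bones dead-reckoning sketch might look like this (C++, with
made-up field names; your own state structure will differ):

// Dead reckoning: extrapolate from the last known state, assuming the
// object keeps moving at its last reported velocity.
struct ObjectState {
    float x, y;       // last known position
    float vx, vy;     // last known velocity
    float timestamp;  // when that state was valid (seconds)
};

ObjectState Predict(const ObjectState& last, float now)
{
    float dt = now - last.timestamp;
    ObjectState guess = last;
    guess.x += last.vx * dt;
    guess.y += last.vy * dt;
    guess.timestamp = now;
    return guess;
}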

Interpolation is how you fill in the blanks between "known" positions and
also how you resynchronize with an update packet that doesn't match up with the
previous path of the object. Keep in mind that there may be 4-8 graphic frames
for every update packet, so when you get fresh data, you actually have a little
time to compensate for any differences before the next update arrives.

Now, on to your question: I don't have a graphic handy, but if you've used cubic
splines before, you know that they are defined by 4 points (2 endpoints and 2
control points). For the path correction, only 2 of those points need to come
from the update packet; the other 2 come from your local prediction.

Let's assume a constant rate of network packets per second, so the current packet
arrives at time T0 and the next at T1. Assuming you use plain dead reckoning for
all prediction, the endpoints (EP) and control points (CP) are:

EP1: The current location at T0 according to the local host.
CP1: The current prediction for T1 according to the local host.
CP2: The actual location at T0 according to the packet.
EP2: The updated prediction at T1 according to the packet.

The object will pass from EP1 to EP2 in (T1-T0) time, giving you several graphic
frames to interpolate the course correction before the next update arrives. Now
you can calculate the location and direction of the object for any time in between
using parametric equations. The control points are simply weights that make the
correction appear smooth onscreen.
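As a rough sketch of that evaluation (assuming a standard cubic Bezier form,
with EP1/CP1 taken from the local prediction and CP2/EP2 from the update packet):

struct Point { float x, y; };

// Evaluate the correction spline at parameter t in [0,1], where
// t = (now - T0) / (T1 - T0). The result runs from EP1 to EP2, bent
// toward the two control points so the correction looks smooth.
Point Interpolate(Point ep1, Point cp1, Point cp2, Point ep2, float t)
{
    float u = 1.0f - t;
    float w0 = u * u * u;          // weight for EP1
    float w1 = 3.0f * u * u * t;   // weight for CP1
    float w2 = 3.0f * u * t * t;   // weight for CP2
    float w3 = t * t * t;          // weight for EP2

    Point p;
    p.x = w0 * ep1.x + w1 * cp1.x + w2 * cp2.x + w3 * ep2.x;
    p.y = w0 * ep1.y + w1 * cp1.y + w2 * cp2.y + w3 * ep2.y;
    return p;
}

The derivative of the same polynomial gives you the direction (and apparent
speed) at any t, if you need it.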

Note that the apparent path will be longer than the actual one (especially for
major course corrections), so the object may appear to speed up. You can reduce
this effect (but never eliminate it) at the cost of some smoothing by shortening
the distance between end- and control points:

EP1: The current location at T0.0 according to the local host.
CP1: The current prediction for T0.5 according to the local host.
CP2: The updated prediction at T0.5 according to the packet.
EP2: The updated prediction at T1.0 according to the packet.

IMPORTANT: CP2 and EP2 are now *both* predicted from the contents of the update
packet.

This model isn't perfect; it's just a handy way to hide the effects of latency
so that objects don't appear to teleport. There are lots of ways to manipulate
cubic splines to generate useful information or fix other problems with this
model (such as collision detection or measuring velocity at a given point) --
but you'll have to find your own middle ground.


quote:
Also in one article you suggested packet reduction as a major way to improve
network performance. What do you think about using an aggressive compression
scheme? Perhaps you could do an article about it?


Compression is one way to reduce packet size, but I find that compression only
works well when you have enough data for it to find useful patterns. Obviously
your data packets may have certain characteristics that compress well with RLE
or arithmetic coding -- but anything that builds a dictionary will bloat
more than it compresses. It's really dependent on the data you are sending.

You can also reduce the packet size by hand: aligning fields to remove padding
in data structures and quantizing 32-bit values down to 16 bits, but the
best optimization is to avoid sending useless data (such as events on the other
side of the world). Just like optimizing code, using the right algorithm offers
far greater gains than writing the wrong one in assembly.
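As an illustration of the 32-to-16-bit idea, here is a hedged sketch (the world
bounds are invented for the example; use whatever range your game actually needs):

#include <cstdint>

// Quantize a world coordinate into 16 bits, assuming every position fits
// in [-WORLD_MAX, +WORLD_MAX]. The precision lost is about WORLD_MAX/32768.
const float WORLD_MAX = 4096.0f;   // hypothetical world bounds

uint16_t PackCoord(float x)
{
    float normalized = (x + WORLD_MAX) / (2.0f * WORLD_MAX);   // map to 0..1
    return (uint16_t)(normalized * 65535.0f + 0.5f);           // round to 16 bits
}

float UnpackCoord(uint16_t packed)
{
    float normalized = packed / 65535.0f;
    return normalized * (2.0f * WORLD_MAX) - WORLD_MAX;
}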

Also remember that reducing packet size is just one way to improve performance
-- reducing the number of packets is even more important. UDP packets have a
fixed 28-byte header, each one must compete with other data at each router, and
software on both ends has to buffer them (so that they can be placed in order
or resent if lost). Send fewer but larger packets in preference to more small
ones. (And for everyone's sake, don't send more than 10 packets per second per
client!)
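One simple way to do that is to queue small messages and flush them on a timer.
A sketch (the SendUDP hook is just a placeholder for your own socket code, not a
real API):

#include <cstddef>
#include <cstdint>
#include <vector>

// Placeholder: plug your actual socket send (e.g. sendto) in here.
void SendUDP(const uint8_t* data, size_t length)
{
    (void)data; (void)length;
}

// Coalesce small messages into one outgoing packet, flushed at a fixed
// rate (capped here at 10 packets per second, as suggested above).
class PacketBatcher {
public:
    void Queue(const uint8_t* data, size_t length)
    {
        buffer.insert(buffer.end(), data, data + length);
    }

    void Flush(float now)   // 'now' in seconds
    {
        if (buffer.empty() || now - lastSend < 0.1f)
            return;
        SendUDP(buffer.data(), buffer.size());
        buffer.clear();
        lastSend = now;
    }

private:
    std::vector<uint8_t> buffer;
    float lastSend = 0.0f;
};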

If you have more questions, feel free to start a new thread! I feel like I've
ranted all over this one.

Edited by - fprefect on November 29, 2000 2:32:19 AM
Matt Slot / Bitwise Operator / Ambrosia Software, Inc.
quote:
fprefect wrote:
UDP packets have a
fixed 28-byte header, each one must compete with other data at each router, and
software on both ends has to buffer them (so that they can be placed in order
or resent if lost).


UDP packets aren't guaranteed to arrive in the order they were sent or to arrive at all (in other words, they are not resent unless you explicitly resend them yourself).

Perhaps you meant to say "TCP packets".
What I believe he meant was that some software, whether it's the IP stack (TCP) or your application (UDP), has to buffer or discard packets to get them both sent and received in the correct order.
Also, fprefect...

NTP sounds pretty similar to what Magmai was describing. And that was a good explanation of cubic splines. I'm not a game programming guy as much as I am a network programming guy. Latency hasn't been an issue in my software to the degree it is in games, and prediction hasn't even come up.

I'll have to check out your website... sounds like it might be useful. Has a good name if nothing else. =)
I think there's actually a really easy way you could sync this.

Think of it this way: you are playing a trivia game. A question pops up on a user's screen. The first person to answer it gets all the points. Who was actually first? This is your problem, right?

Well, solving this without having to worry much at all about latency is fairly simple. Instead of sending out the information as soon as they click, why not do this instead:

When the question appears on the user's screen, a very precise timer is started. As soon as they click, this timer is stopped and its value recorded. Then the client program sends the server a packet saying that they got the right answer and how long it took to guess it.

This way, no matter how long it takes their computer to send, when the server receives the information, it can tell who clicked the button first. So even if not everyone gets the question at exactly the same time, all that matters is how long it takes them to see the question, comprehend it, and select an answer. Their own machine times them.
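Roughly, a sketch of that idea in C++ (the struct layout and handler names are
just made up for illustration):

#include <chrono>
#include <cstdint>

// Local millisecond timer (steady clock, so it can't jump backwards).
static uint32_t GetMillis()
{
    using namespace std::chrono;
    return (uint32_t)duration_cast<milliseconds>(
        steady_clock::now().time_since_epoch()).count();
}

// Hypothetical answer report: the client times itself, so network
// latency never decides who answered first.
struct AnswerReport {
    uint32_t questionId;
    uint32_t answerIndex;
    uint32_t elapsedMillis;   // time from question shown to click, measured locally
};

uint32_t questionShownAt = 0;

void OnQuestionShown()
{
    questionShownAt = GetMillis();   // start the local timer
}

AnswerReport OnAnswerClicked(uint32_t questionId, uint32_t answer)
{
    AnswerReport report;
    report.questionId = questionId;
    report.answerIndex = answer;
    report.elapsedMillis = GetMillis() - questionShownAt;
    return report;   // send this to the server; the smallest elapsedMillis wins
}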

Do you understand what I mean? A lot of trivia games online use this method of scoring; they just give a higher score based on how fast you select it. I'm sure you could change this to work on other types of games.

-Tim Elliot (Demitri)
quote:
UDP packets aren't guaranteed to arrive in the order they were sent or to arrive at all (in other words, they are not resent unless you explicitly resend them yourself).

Perhaps you meant to say "TCP packets".


No, I did mean UDP... assuming your application wants to send them reliably and in order, then you need to buffer them and that takes time+resources. TCP handles the work behind the scenes, but it also does this.
Matt Slot / Bitwise Operator / Ambrosia Software, Inc.
Thanks a lot, Demitri... that is the perfect solution to my problem... I can't believe I didn't think of it that way.

thanks,
mark
quote:
NTP sounds pretty similar to what Magmai was describing.


Yes, NTP is exactly what he described. However, I maintain that it's better for everyone to synchronize to the clock on one host (the server or authority) than to an arbitrary third party -- because you will reduce your error by 50%, at least relative to that host.
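For reference, a minimal sketch of that kind of offset estimate against the
server's clock (assuming a roughly symmetric round trip):

// Estimate the offset between our clock and the server's. clientSend and
// clientRecv are local timestamps around a ping; serverTime is the server's
// clock when it handled that ping.
float EstimateClockOffset(float clientSend, float serverTime, float clientRecv)
{
    float roundTrip = clientRecv - clientSend;
    // Assume the server stamped the reply roughly mid-flight.
    float serverAtRecv = serverTime + roundTrip * 0.5f;
    return serverAtRecv - clientRecv;   // add this to local time to get server time
}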
Matt Slot / Bitwise Operator / Ambrosia Software, Inc.
quote:
When the question appears on the user's screen, a very precise timer is started. As soon as they click, this timer is stopped and its value recorded. Then the client program sends the server a packet saying that they got the right answer and how long it took to guess it.


Ignoring the security aspects, that's an excellent example of eliminating the effects of latency.

Of course, this type of optimization is hardly useful for most games, because it relies on certain characteristics of the problem set that don't generalize well. That said, you've struck on another important concept: take ruthless advantage of any shortcuts that present themselves.

Let me give another example: a head-to-head Tetris game, with the playing field split into 2 (you and your opponent). While it's tempting to send the data reliably and in real time, you really don't need to. Think about it: a player only glances at his opponent's side during the occasional lull in action, and then only to see who is winning. There is no need for millisecond-accurate collision detection because there are no collisions. Each player is his own authority, and as long as the simulation on other hosts is moderately accurate, they will never notice small glitches or corrections. The only thing that needs to be reliable and timestamped is when someone actually wins or loses.

What I'm getting at is this: for the most part, writing network games involves lots of synchronization between every player so that object paths and collision detection are as honest as possible. If you find an instance where you *don't* need to negotiate something over the network, you can basically negate the effects of latency. That's the holy grail of network games.
Matt Slot / Bitwise Operator / Ambrosia Software, Inc.
