hplus0603 said:
A typical RTS game keeps running animations even while waiting for step resolutions, so the stutters are somewhat “hidden.”
True, so any “render” interpolation/extrapolation and GUI stuff is easy to keep going, even if the main logic/physics is lockstep.
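E.g. the logic could tick at a fixed rate while rendering blends the last two logic states. A rough sketch of that idea (`State` and `interpolate` are just illustrative names, not from my actual code):

```cpp
#include <cassert>

struct State { double x; };  // stand-in for whatever the lockstep logic simulates

// Blend the previous and current logic states by alpha in [0, 1], where
// alpha = time_since_last_logic_tick / tick_interval.
State interpolate(const State& prev, const State& curr, double alpha)
{
    return State{ prev.x + (curr.x - prev.x) * alpha };
}

// The render loop then calls render(interpolate(prev, curr, alpha)) every
// display frame, so the picture stays smooth even while the lockstep
// logic is stalled waiting on the network.
```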
hplus0603 said:
A typical game networking system that relies on smooth tick updates, will have a time buffer to allow re-sends of lost packets before they're noticed; e g this will be a buffer on the receiving side that queues commands for the future.
So you mean that if the game measures, say, 70 ms of latency, it will actually wait for, say, 100 ms to give itself some margin for delayed packets? But full resends seem like a long time to wait (at least 140 ms in this case?). Do games really delay each step by over double the latency to allow for lost-packet detection + resend?
I guess it would only need to slow down that much if it saw packets actually being lost recently, and lost packets seem fairly rare anyway?
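If I did that, I imagine the margin could adapt something like this (a hypothetical sketch; the tick counts and the shrink threshold are made-up numbers):

```cpp
#include <algorithm>
#include <cassert>

// Sketch of an adaptive playout margin: the client schedules received
// commands a few ticks into the future. The margin grows immediately when a
// packet arrives late, and shrinks only after a long run of on-time packets.
struct PlayoutDelay {
    int margin_ticks   = 2;   // extra ticks beyond measured latency
    int on_time_streak = 0;

    void on_packet(bool arrived_late)
    {
        if (arrived_late) {
            margin_ticks   = std::min(margin_ticks + 1, 8); // grow fast, capped
            on_time_streak = 0;
        } else if (++on_time_streak >= 100 && margin_ticks > 1) {
            --margin_ticks;       // shrink slowly after a clean run
            on_time_streak = 0;
        }
    }
};
```

That way the delay only stays high while the connection actually misbehaves.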
hplus0603 said:
The general game look doesn't look like you're describing, either. It looks more like:
Hmm, I don't think I follow here. That sounds like it describes a fast-paced game with some way to correct for prediction errors (presumably with fewer objects, and occasionally sending more complete state like position/death/etc.).
But in the context of a basic lockstep simulation, isn't it essential that the client has all the input data for a given “simulate_physics” before it runs it? If that `while` loop runs differently even once on some client, then it's going to desync?
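That's why I was thinking each client could hash its state every frame and send the hash to the server, so a desync gets caught on the frame it happens rather than when things visibly diverge. A rough sketch using FNV-1a (the serialization of the game state to bytes is assumed to exist and be deterministic):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical per-frame state checksum; each client would send this
// alongside its acknowledgement for the frame, and the server compares them.
uint64_t hash_state(const std::vector<uint8_t>& state_bytes)
{
    uint64_t h = 1469598103934665603ull;   // FNV-1a 64-bit offset basis
    for (uint8_t b : state_bytes) {
        h ^= b;
        h *= 1099511628211ull;             // FNV-1a 64-bit prime
    }
    return h;
}
```

The catch is that the serialization itself has to be deterministic across machines (no floating-point or iteration-order differences), which is the same requirement lockstep already imposes.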
So maybe I optimistically thought I could write something working from scratch in a few hours; hopefully I'll have something practical later in the week :(
But in a client-server lockstep model, I was thinking that the server/host player runs “a normal loop”, while all the client players depend on waiting for the flow of data from the server, since they simply can't run the simulation without that data, or they would desync immediately?
void run_client()
{
    while (_window.handle_events())
    {
        // Player actions are not immediately enacted and not tied to a specific
        // frame; the server synchronises and decides when they happen.
        _client->send_actions(_gui.take_actions());
        auto actions = _client->recv_frame_actions(); // blocking; if it can't get data, it's a disconnect
        frame_pacing_sleep(); // fixed-rate loop intentionally running a little behind recv_frame_actions to absorb latency variation
        update(actions);
        render(); // TODO: in the actual game, rendering can be decoupled
    }
}
// Or local play. Basically the same for a headless/dedicated server: the
// window/GUI can be the CLI or similar, and don't render().
void run_server()
{
    unsigned frame = 0;
    while (_window.handle_events())
    {
        auto actions = _gui.take_actions();
        if (_server)
        {
            _server->recv_player_actions(actions);      // collect everyone else's send_actions
            _server->broadcast_actions(frame, actions); // every client will enact these in recv_frame_actions for the same frame
        }
        update(actions);
        render(); // TODO: in the actual game, rendering can be decoupled
        wait_for_next_frame(); // fixed-rate loop
        frame += 1;
    }
}
For peer-to-peer, it seems a lot more complex. I haven't really thought about it, and given NAT, firewalls, etc., I'm not sure it has a great advantage over a “dedicated host player/server” model?