
Using a physics engine on the server

31 comments, last by Krzych 8 years, 7 months ago


At the end of update() on the server, I do
std::this_thread::sleep_for(16ms);
And then update the physics engine using the delta time from the previous frame.
Doesn't this guarantee a 60Hz simulation step rate?

This will only be a 60 Hz step rate if you subtract the time taken to run update() from the 16 ms. Also, if your update takes more than 16 ms, you can end up in a "spiral of death".

The links provided about how to fix your time step are the correct approach for these reasons and more :)
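
For illustration, here is a minimal sketch of that "subtract the update time" idea, using a hypothetical Game class standing in for the poster's server game object (not code from this thread):

#include <chrono>
#include <thread>

struct Game { bool isRunning(); void update(float dt); }; // assumed shape of the poster's class

void serverLoop(Game* game)
{
    using namespace std::chrono;
    const auto step = milliseconds(16);

    while (game->isRunning())
    {
        const auto start = steady_clock::now();
        game->update(0.016f);                            // one fixed simulation step
        const auto elapsed = steady_clock::now() - start;

        if (elapsed < step)
            std::this_thread::sleep_for(step - elapsed); // sleep only the remaining time
        // if update() took 16 ms or longer, don't sleep at all this frame
    }
}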




I didn't understand much from that article... But I found this one: http://gafferongames.com/game-physics/fix-your-timestep/

I'll work on it tomorrow.

At the end of update() on the server, I do
std::this_thread::sleep_for(16ms);
And then update the physics engine using the delta time from the previous frame.
Doesn't this guarantee a 60Hz simulation step rate?


Not really, for two reasons:

1) Simulation takes some time, so you really want to be using a monotonic clock to calculate a "sleep until" time, rather than assume 16 ms per step (see the sketch after this list).
2) This does not synchronize the clients with the server in any way. The clients need to run the simulation at the same rate (although graphics may be faster or slower).
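
A minimal sketch of point 1, using std::chrono::steady_clock (a monotonic clock) and sleep_until; the Game class is a stand-in for the poster's own, and a 16 ms step is assumed:

#include <chrono>
#include <thread>

struct Game { bool isRunning(); void update(float dt); }; // assumed shape of the poster's class

void serverLoop(Game* game)
{
    using namespace std::chrono;
    const auto step = milliseconds(16);
    auto nextTick = steady_clock::now() + step;   // absolute deadline for the next step

    while (game->isRunning())
    {
        game->update(0.016f);                     // one fixed simulation step

        std::this_thread::sleep_until(nextTick);  // the deadline does not depend on how
        nextTick += step;                         // long update() took, so there is no drift
    }
}

If a step runs long, sleep_until simply returns immediately and the loop catches up over the next iterations.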

Separately:

About the inputs, I don't think I can do this easily. This is because if packets are lost, the client resends them.


What does the server do, then? Wait for the input? That means any player can pause the server by simply delaying packets a bit.

In general, you don't want to stop, block, or delay anything in a smooth networked simulation. If you're worried about single packet losses, you can include the commands for the last N steps in each packet -- so, if you send packets at 30 Hz, and simulate at 60 Hz, you may include input for the last 8 steps in the packet. This will use some additional upstream bandwidth, but that's generally not noticeable, and it generally RLE compresses really well.
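
As an illustration of that redundancy scheme, a sketch with made-up names (StepInput and InputPacket are not from any particular library):

#include <cstdint>
#include <vector>

// One simulation step's worth of input from a client.
struct StepInput
{
    uint32_t stepNumber;   // which simulation step this input applies to
    uint8_t  buttons;      // bitmask of pressed buttons/keys
};

// A packet sent at 30 Hz while simulating at 60 Hz: it repeats the inputs for
// the last 8 steps, so losing any single packet loses no input at all.
struct InputPacket
{
    uint32_t               newestStep;  // step number of the most recent input included
    std::vector<StepInput> lastSteps;   // e.g. the 8 most recent steps, newest first
};

On the server, inputs for steps that have already been simulated are simply discarded; only steps the simulation has not reached yet are applied.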

Being able to use the same step numbers on client and server to know "what time" you're talking about is crucial. Until you get to the same logical step rate on client and server, you'll keep having problems with physics sync.

Gaffer's article is almost exactly like the canonical game loop article; using either is fine.
enum Bool { True, False, FileNotFound };


1) I understand. I can subtract the time of the update() function like braindigitalis suggested.

But let's say the whole update() function took more than 16 ms. The result would be negative, so the thread wouldn't sleep at all.

Is this ok? (I don't think this will happen, as now it takes ~0.4 ms. But things might change in the future, so...)

Can I assume that the server's update function won't take more than 16ms?

If it takes longer, is it okay that the thread won't sleep at all?

2) Yes, this does not synchronize the clients. I will have to implement this on the client side as well.

No, the server's game loop does not wait for inputs. A separate thread listens for inputs and passes them to the game loop as they arrive.

By the way, the clients gather inputs every frame and send them every 33ms.
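
A sketch of that cadence with hypothetical sampleInput()/sendPacket() helpers: inputs are sampled every 16 ms client tick and flushed every second tick, i.e. roughly every 33 ms:

#include <cstdint>
#include <vector>

struct StepInput { uint32_t stepNumber; uint8_t buttons; }; // per-step input, as sketched above

StepInput sampleInput(uint32_t step);                  // hypothetical: read the local controls
void      sendPacket(const std::vector<StepInput>&);   // hypothetical: serialize and transmit

std::vector<StepInput> pending;                        // inputs gathered since the last send

void onClientTick(uint32_t stepNumber)
{
    pending.push_back(sampleInput(stepNumber));

    if (stepNumber % 2 == 0)   // every second 16 ms tick, roughly every 33 ms
    {
        sendPacket(pending);   // the packet can also repeat older steps for redundancy
        pending.clear();
    }
}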

Can I assume that the server's update function won't take more than 16ms?

If it takes longer, is it okay that the thread won't sleep at all?

1: No.

2: Yes, but: really check the Gaffer article, especially the last part and the use of the accumulator - that will keep the server ticking nice and smooth even with a few slow frames.

If too many frames take longer, you will have to reduce the frequency - it would mean your computer just cannot run that many update()s that fast.

However, try to simulate some lag early on in your development - it will make obvious a lot of problems you might otherwise miss and save you a lot of rewriting later on.

Maybe I have missed something, but do you run physics/collision detection only on the server, and not on the clients as well?

Maybe I have missed something, but do you run physics/collision detection only on the server, and not on the clients as well?

You are correct. The physics / collision-detection is only on the server. Clients buffer 2 states from the server and interpolate between them.

But this is temporary; I will implement it on the client as well after I'm done with this.
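
A minimal sketch of that client-side interpolation, with a made-up Snapshot type (just a position plus the server step it belongs to):

#include <cstdint>

// Hypothetical server snapshot: one entity's position at one server step.
struct Snapshot
{
    float    x, y, z;
    uint32_t step;   // server simulation step this state belongs to
};

// Linear interpolation between the two buffered snapshots.
// alpha is 0 at the older snapshot and 1 at the newer one.
Snapshot interpolate(const Snapshot& older, const Snapshot& newer, float alpha)
{
    Snapshot out = newer;
    out.x = older.x + (newer.x - older.x) * alpha;
    out.y = older.y + (newer.y - older.y) * alpha;
    out.z = older.z + (newer.z - older.z) * alpha;
    return out;
}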

After reading (and hopefully understanding) the article, I've modified my code to look like this:

// === THIS CODE IS RUN ON THE SERVER ===

const float TIME_STEP = 0.016f;

auto t0 = std::chrono::steady_clock::now();
float accumulator = 0.0f;

while (game->isRunning())
{
    auto t1 = std::chrono::steady_clock::now();
    auto frameTime = static_cast<float>(std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count()) / 1'000'000.0f; // frame time in seconds
    t0 = t1;

    accumulator += frameTime;

    while (accumulator >= TIME_STEP)
    {
        game->update(TIME_STEP); // read client inputs and update physics accordingly
        accumulator -= TIME_STEP;
    }
}

This is Game::update(float):

1) read inputs for each client
2) if the player is moving, set his velocity towards his looking direction; otherwise set it to 0
3) update the (physics) world
4) sleep(TIME_STEP)

Is it correct this time?

Thanks to everyone for the help.

let's say the whole update() function took more than 16 ms.


Then your game is broken on that server.
An occasional timestep that takes longer might be OK, but if this happens with any frequency, then your hardware spec and your software's needs are mismatched.
You should at that point detect the problem, show a clear error message to the user, and end the game.
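
One way to detect that situation (a sketch, not code from this thread): cap the number of catch-up steps per frame, and treat still being behind after hitting the cap as the fatal condition:

#include <string>

struct Game { void update(float dt); };        // assumed shape of the poster's class

constexpr float TIME_STEP = 0.016f;            // the poster's fixed step
constexpr int   MAX_STEPS_PER_FRAME = 5;       // arbitrary cap chosen for this sketch

void reportFatalError(const std::string& msg); // hypothetical: log, notify the user, shut down

// Drains the accumulator as in the fixed-timestep loop, but refuses to spiral:
// returns false if the simulation cannot keep up with real time.
bool drainAccumulator(Game* game, float& accumulator)
{
    int steps = 0;
    while (accumulator >= TIME_STEP && steps < MAX_STEPS_PER_FRAME)
    {
        game->update(TIME_STEP);
        accumulator -= TIME_STEP;
        ++steps;
    }

    if (accumulator >= TIME_STEP)   // still behind after the cap: we are falling behind
    {
        reportFatalError("server cannot maintain the 60 Hz step rate");
        return false;
    }
    return true;
}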
enum Bool { True, False, FileNotFound };
The sleep should be more like
sleep(TIME_STEP - accumulator);
and it should go after the accumulator's while loop.
The rest looks good.
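
Put together, a sketch of the whole server loop with that change (the poster's TIME_STEP, accumulator, and game object, with the sleep moved after the accumulator drain):

#include <chrono>
#include <thread>

struct Game { bool isRunning(); void update(float dt); }; // assumed shape of the poster's class

void serverLoop(Game* game)
{
    using namespace std::chrono;
    const float TIME_STEP = 0.016f;

    auto  t0 = steady_clock::now();
    float accumulator = 0.0f;

    while (game->isRunning())
    {
        const auto t1 = steady_clock::now();
        accumulator += duration<float>(t1 - t0).count(); // seconds since the last frame
        t0 = t1;

        while (accumulator >= TIME_STEP)
        {
            game->update(TIME_STEP);
            accumulator -= TIME_STEP;
        }

        // Sleep for the time left until the next step is due, not a fixed 16 ms.
        std::this_thread::sleep_for(duration<float>(TIME_STEP - accumulator));
    }
}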


Alright. So, yes, I can assume that update() takes less than 16 ms. If not, I'll need new hardware :)

So I've added this after the accumulator loop:

std::this_thread::sleep_for(std::chrono::microseconds(static_cast<int>((TIME_STEP - accumulator) * 1'000'000.0f)));
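
For what it's worth, letting chrono do the unit conversion avoids the hand-written microsecond factor entirely; assuming the same TIME_STEP and accumulator (and the same <chrono>/<thread> headers), this is a drop-in for the line above:

using fsec = std::chrono::duration<float>;                   // seconds with a float representation
std::this_thread::sleep_for(fsec(TIME_STEP - accumulator));  // no manual *1'000'000 conversion needed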

If not, I'll need new hardware


Or profile/optimize the software! :-)
enum Bool { True, False, FileNotFound };

This topic is closed to new replies.
