Accurate tick rate for game servers without locking a thread

I have recently been researching authoritative servers and slowly piecing together the various components I will need to start implementation. One thing I seem to be hung up on at the moment is how one would go about implementing an accurate fixed tick rate without locking an entire thread in a while(true) loop. I've come across several examples where individuals sleep the thread at a fixed interval, which is how I had originally assumed this would work. However, after some tests I'm not so sure, and I was hoping someone with real-world experience could clarify the best way to go about this. For example:

Using sleep:

#include <chrono>
#include <iostream>
#include <thread>

using namespace std::chrono;

auto fixedTimeStep = 16ms; // 60 times per second(ish)

auto next = steady_clock::now();
auto prev = next - fixedTimeStep;
while (true)
{
	// do stuff
	auto now = steady_clock::now();
	std::cout << duration_cast<milliseconds>(now - prev).count() << '\n';
	prev = now;

	// delay until time to iterate again
	next += fixedTimeStep;
	std::this_thread::sleep_until(next);
}

Yields the following results:

16
15
15
15
15
14
15
15
15
15
31
0
29
15
15
15
15
15
15
16
15

Whereas if I do NOT sleep, like so:

#include <chrono>
#include <iostream>

using namespace std::chrono;

auto fixedTimeStep = 16ms; // 60 times per second(ish)
auto newTime = steady_clock::now();
auto prevTime = steady_clock::now();
auto tickTime = newTime - prevTime;

while (true)
{
	newTime = steady_clock::now();
	tickTime = newTime - prevTime;

	if (tickTime >= fixedTimeStep)
	{
		// do stuff
		std::cout << duration_cast<milliseconds>(newTime - prevTime).count() << '\n';

		prevTime = newTime;
	}
}

Results are always 16ms.

In my confusion, I started looking through the documentation/source for the networking library I plan to use, and in the usage example they're using the sleep method. I then recreated their loop using the functionality from the library as such:

#include <iostream>

#include "yojimbo.h" // for yojimbo_time() and yojimbo_sleep()

double prevTime = yojimbo_time();
double m_time = 0.0;
double fixedDt = 1.0 / 60.0;
while (true) {
	double currentTime = yojimbo_time();
	if (m_time <= currentTime) 
	{
		// do something
		std::cout << (currentTime - prevTime) << std::endl;
		prevTime = currentTime;

		m_time += fixedDt;
	}
	else 
	{
		yojimbo_sleep(m_time - currentTime);
	}
}

This yielded similar results to my first loop using std::chrono:

0.0149996
0.0160216
0.0160085
0.0160089
0.0147039
0.0303099
0.0150091
0.0155342
0.0158427
0.0155628
0.0155981
0.0151898
0.0158014
0.0154033
0.0159931
0.0159994
0.0160106
0.0159969
0.015992
0.0154033
0.0160725
0.0305259
0.0160052
0.0150037
0.0149955

Therefore I have to be confused about something, as the author of that library is highly regarded and I can't imagine they would be using sleep() if it were not a feasible solution… right? But then, if your authoritative server MUST update at a fixed tick rate to ensure a consistent, accurate simulation… what gives?

I feel like I'm one “aha” moment away from this making sense… hopefully someone on here can shed some light on where I'm confused.

Here's a classic article describing it.

The short form is that you accumulate how much time has passed and then run as many ticks as needed to catch up.
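
As a minimal sketch of that accumulator pattern (using std::chrono like your snippets above; tick() just stands in for one simulation step):

#include <chrono>

void tick() { /* advance the simulation by one fixed step */ }

int main()
{
	using namespace std::chrono;

	constexpr auto fixedTimeStep = 16ms;
	auto previous = steady_clock::now();
	auto accumulator = steady_clock::duration::zero();

	while (true)
	{
		// Add however much real time has passed since the last pass.
		auto now = steady_clock::now();
		accumulator += now - previous;
		previous = now;

		// Run as many fixed ticks as needed to catch up to real time.
		while (accumulator >= fixedTimeStep)
		{
			tick();
			accumulator -= fixedTimeStep;
		}

		// Render, poll the network, or sleep here as appropriate.
	}
}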

There are many talks and discussions on YouTube describing specific games. A somewhat recent example is the GDC 2022 talk “One Frame in Halo Infinite”, but the topic has been covered in some form most years over the past two decades. About 7 minutes in, the speaker explains the changes that specific game needed to make to account for it, which they called “multi-ticking” in the Halo 5 engine.

As for Sleep(), you're at the mercy of your operating system's task scheduler. You specify a sleep time as a parameter, but the OS is free to adjust it to whatever granularity it wants. On Windows you currently get a timeslice of about 15 milliseconds, which merely happens to be approximately the 60-frames-per-second interval game programmers used to favor. Since 60 Hz is largely historical (the refresh rate of pre-1998 televisions), games have moved on as well. Monitors of 72, 75, and 90 Hz are common, especially in headsets. Gaming displays of 120, 144, and 240 Hz are also increasingly common, meaning time slices as small as about 4ms.

The general recommendation for clients is to go “as fast as possible”, using a hardware signal or hardware blocking operation to alert you when to resume, rather than using the OS timeslice functionality. For server update frequency and physics simulations you'll want to use a fixed timestep that is suitable for your game. That's a number that is best determined experimentally. Popular games have ranged as low as 4 ticks per second and as high as 100+ ticks per second. It mostly depends on what you're doing in your simulation and the nature of your game. For example, Minecraft runs at 20 ticks per second, or 50 milliseconds per tick.

@frob I appreciate the response; however, I probably did a poor job explaining where my confusion lies. I understand the fixed timestep pattern with regard to a client simulation, where you run in a tight while loop and “accumulate” time in your render loop (as you would want to render as fast as possible), then perform one or more logical updates once you have accumulated enough time (interpolating between the previous and current state based on the accumulator).

What seems to be evading me is how you would achieve something similar in a headless environment (say, a dedicated server running in a data center somewhere) without running a tight while loop, as you wouldn't want to hog an entire thread for one server process. And with Sleep(), as you pointed out (and as I seem to have discovered during these tests), you are not guaranteed the thread will wake up in a timely manner… however… hmm…

So ok, as I'm composing this, maybe I just got what you are trying to say… to ensure a consistent 16ms server tick rate while using Sleep(), I suppose I could accumulate this time and do the same thing I'm currently doing within my client render loop, couldn't I? For example, if I need a consistent 16ms tick but Sleep() slept for 32ms, then on the next loop after the thread wakes back up I could do two logic ticks to make up for the delay. That would essentially make the server loop (the part that's listening for connections and sleeping) roughly equivalent to my client's render loop (variable rate), while the logic updates at the fixed rate… right?
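
In code I imagine something roughly like this (just a sketch, with tick() standing in for the fixed-rate server logic):

#include <chrono>
#include <thread>

void tick() { /* one fixed-rate logic update */ }

int main()
{
	using namespace std::chrono;

	constexpr auto fixedTimeStep = 16ms;
	auto nextTick = steady_clock::now();

	while (true)
	{
		// If the sleep overshot (e.g. slept 32ms instead of 16ms), this inner
		// loop runs more than once and we catch up with extra logic ticks.
		auto now = steady_clock::now();
		while (nextTick <= now)
		{
			tick();
			nextTick += fixedTimeStep;
		}

		// Sleep until the next scheduled tick; any oversleep is compensated
		// for on the next pass.
		std::this_thread::sleep_until(nextTick);
	}
}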

whitwhoa said:
with Sleep() as you pointed out (and I seem to have discovered during these tests) you are not guaranteed the thread will wake up in a timely manner

Yes, you will be at the mercy of your OS. You might not need Sleep() though; you can usually use a timeout on a call like select() to schedule the desired wake-up time, and also handle incoming I/O at the same time. Once you wake up, you look at wallclock time, and figure out when the previous tick “should” have ticked, tick the right number of times, and go back to the select(). If you store the wallclock timestamp of the last tick, you don't need to “accumulate” time, you just need to update the “last ticked wallclock time” counter with a fixed value, rather than setting it to “now.”
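
Roughly like this, as a POSIX-flavored sketch (assuming an already-created and bound UDP socket; tick() and handleIncoming() are placeholders, and error handling is omitted):

#include <chrono>
#include <sys/select.h>
#include <sys/socket.h>

void tick() { /* one fixed-rate logic update */ }

void handleIncoming(int sock)
{
	// Drain one pending datagram; real code would parse and queue the message.
	char buf[1500];
	recvfrom(sock, buf, sizeof(buf), 0, nullptr, nullptr);
}

void serverLoop(int sock) // sock: an already-bound UDP socket
{
	using namespace std::chrono;

	constexpr auto fixedTimeStep = 16ms;
	auto lastTick = steady_clock::now();

	while (true)
	{
		// Tick as many times as the wallclock says we should have by now,
		// advancing lastTick by a fixed amount each time (no accumulator needed).
		while (steady_clock::now() - lastTick >= fixedTimeStep)
		{
			tick();
			lastTick += fixedTimeStep;
		}

		// Wait in select() until the next tick is due, waking early if I/O arrives.
		auto untilNext = duration_cast<microseconds>(lastTick + fixedTimeStep - steady_clock::now());
		if (untilNext < microseconds::zero())
			untilNext = microseconds::zero();

		timeval tv;
		tv.tv_sec = static_cast<time_t>(untilNext.count() / 1000000);
		tv.tv_usec = static_cast<suseconds_t>(untilNext.count() % 1000000);

		fd_set readSet;
		FD_ZERO(&readSet);
		FD_SET(sock, &readSet);

		if (select(sock + 1, &readSet, nullptr, nullptr, &tv) > 0)
			handleIncoming(sock);
	}
}

The nice property is that network I/O and the fixed-rate simulation share one thread: the socket wakes you early when packets arrive, and the timeout wakes you when the next tick is due.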

A few other things:

Modern OSes on modern CPUs will be very good about wake-up, at least for moderately high-priority threads. It used to be that shared EC2 instances would have significant scheduling jitter, though, so you might want to look out for that. If you have a “full” or “bare metal” instance, that's not a problem. The only time a “full” server with many cores won't wake you up very close to the desired time is if you're using a legacy sleeping API, if you're overloading the server (more threads wanting to run than available cores), or if there's some really badly behaved device driver on the system.

Time, in a networked simulation, is more of a way to order things so they happen in a determined order than it is an absolute timestamp-locked event. As long as things happen in the right sequence, and the player doesn't notice significant jitter, that's the best you can do – especially once you start accumulating players from around the globe, and on very different network connections! It's usually worth it to schedule message sends/receives for more than a full tick in the future compared to what you “predict,” so that a bit of jitter won't make it arrive too late.
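
To make that last point concrete, a purely hypothetical sketch (none of these names come from any particular library):

#include <algorithm>
#include <cstdint>

// Hypothetical helper, for illustration only: pick the tick a just-received
// command should execute on. estimatedSenderTick is where we predict the
// sender's simulation to be; safetyTicks is the jitter margin (a bit more
// than one full tick, per the advice above).
std::uint32_t scheduleCommandTick(std::uint32_t currentTick,
                                  std::uint32_t estimatedSenderTick,
                                  std::uint32_t safetyTicks = 2)
{
	// Never schedule in the past, and pad by the safety margin so a little
	// network jitter doesn't make the command arrive too late to apply.
	return std::max<std::uint32_t>(currentTick + 1, estimatedSenderTick + safetyTicks);
}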

enum Bool { True, False, FileNotFound };

I went back and looked through the usage example provided in the yojimbo repo, and I believe I simply confused myself. My tests above were measuring the time between each while-loop iteration, and when I saw that these values were off, my brain said “stop, there's an issue here, this needs to be consistent.” Had I investigated further, I would have found that they appear to be doing something similar to what @hplus0603 described above: they store a timestamp and advance it by a fixed timestep only after at least that much time has gone by, performing multiple updates if need be.

hplus0603 said:

Time, in a networked simulation, is more of a way to order things so they happen in a determined order than it is an absolute timestamp-locked event. As long as things happen in the right sequence, and the player doesn't notice significant jitter, that's the best you can do – especially once you start accumulating players from around the globe, and on very different network connections! It's usually worth it to schedule message sends/receives for more than a full tick in the future compared to what you “predict,” so that a bit of jitter won't make it arrive too late.

^^ This is very useful information! Thank you for sharing.

[Disclaimer: Not an expert, but I usually have decent ideas.]

I believe StarCraft divides time into turns (I don't know how long a turn is), and there is a scheduled delay of a few turns between the moment when a command is issued and when it takes effect, giving you some leeway to get the event distributed to all the clients (the user can get immediate visual and/or audio feedback to make the game seem responsive). In such an architecture, the server doesn't even need a clock: once all the messages from all the clients for a particular turn have been received, they can be broadcast back to the clients. All the clients then run the same update code with the same inputs, and that keeps their states in sync. This seems like a very robust solution to me, but it probably won't work for more “real timey” genres, like an FPS.
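
As a minimal illustration of the turn-collection part on the server side (all types and names here are made up; the real StarCraft implementation is surely different):

#include <cstddef>
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Stand-in for a real game command.
struct Command { std::uint32_t actorId; std::uint32_t action; };

// The server just gathers every client's commands for a turn and relays the
// complete set once everyone has reported in; it never needs its own clock.
class LockstepTurnCollector
{
public:
	explicit LockstepTurnCollector(std::size_t clientCount) : clientCount_(clientCount) {}

	// Returns true once commands from every client for this turn have arrived.
	bool addCommands(std::uint32_t clientId, std::vector<Command> cmds)
	{
		perClient_[clientId] = std::move(cmds);
		return perClient_.size() == clientCount_;
	}

	// The combined input set to broadcast; every client applies it in the same
	// deterministic order, which keeps their simulations in sync.
	std::vector<Command> collect() const
	{
		std::vector<Command> all;
		for (const auto& entry : perClient_)  // std::map iterates in clientId order
			all.insert(all.end(), entry.second.begin(), entry.second.end());
		return all;
	}

private:
	std::size_t clientCount_;
	std::map<std::uint32_t, std::vector<Command>> perClient_;
};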
