
Trying to emulate "typical" internet conditions


Hi,

I am looking over the netcode in my game and trying to make it more internet-friendly. The game has mostly been tested in a LAN environment, where it works fine, but I would like to see how it performs when played over long distances, e.g. across the Atlantic.

I found the SoftPerfect Connection Emulator, which seems very nice for emulating the problems that can arise when sending data over the internet. Using it, it is very easy to completely break my simulation, so it seems I still have some work to do. Before I start, however, I would like to know what a reasonable target is: I would like to create a preset for the emulator that I can run every time I have made improvements to my netcode, to see how well the game performs.

For example, I have a preset that has the following parameters:

- Speed limit: 768 Kbps (DSL)

- Latency: 50-100ms

- Packet loss: 1%

- Duplication: 0.1%

- Reordering: 0.1%

Now, I know it is very difficult to say "how well the internet works", but would you consider that a fair target to aim for? I.e., if I got the game working with those parameters, would that be okay for general internet play, or should I aim for even worse conditions? My target audience (if there is such a thing) would be mainly European and American players, so I would have to prepare for transatlantic communication.
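For what it's worth, when a dedicated tool isn't handy, roughly the same impairments can be approximated with a small UDP proxy placed in front of the server. The sketch below is just an illustration under assumptions I've made up (addresses, ports and rates are placeholders, not from this thread); it delays, drops and duplicates datagrams in one direction only, and reordering falls out of the randomised per-packet delay.

```python
# Rough one-direction UDP impairment proxy (a sketch, not a replacement for
# a purpose-built emulator). Point the game client at LISTEN_ADDR; the proxy
# forwards to the real server. All numbers below are example values.
import random
import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 27000)        # hypothetical: where the game client sends
SERVER_ADDR = ("192.168.1.50", 27015)   # hypothetical: the real game server

LATENCY_MS = (50, 100)   # one-way delay range, as in the preset above
LOSS_PCT   = 1.0         # percent of packets dropped
DUP_PCT    = 0.1         # percent of packets sent twice
# Reordering emerges naturally: a later packet that draws a shorter delay
# can overtake an earlier one.

sock_in = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock_in.bind(LISTEN_ADDR)
sock_out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def forward(data):
    sock_out.sendto(data, SERVER_ADDR)

while True:
    data, _client = sock_in.recvfrom(2048)
    if random.uniform(0, 100) < LOSS_PCT:
        continue                                     # drop this packet
    copies = 2 if random.uniform(0, 100) < DUP_PCT else 1
    for _ in range(copies):                          # occasionally duplicate
        delay = random.uniform(*LATENCY_MS) / 1000.0
        threading.Timer(delay, forward, args=(data,)).start()
```

A full test setup would need the reverse direction (server to client) impaired as well, which a real emulator handles for you.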

Thank you!


Requiring 100ms latency or better would mean I can't play your game, depending on where your server is located.

When in rural Missouri, I get 3 Mbps down, but because I'm farther from most servers, I get about 250ms pings, depending on the game.

When in the city (Kansas City, Missouri), I get ~13 Mbps with better ping times, but still not fantastic ones (~100-150ms), because many gaming servers are located on the coasts.

Are you going to have dozens of servers located across the entire country so they are near to each major city? If not, then you'll want your one or two servers to be able to tolerate pings from greater distances. Farther servers = game must be more tolerant to latency.

Suppose you have one server located in San Francisco, and someone in New York wants to play. New York is 2800 miles away. Data travelling at the speed of light (186,000 miles per second) takes about 15 milliseconds there and 15 milliseconds back. That's already 30 milliseconds - roughly a third of your allotted latency.
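To make that arithmetic explicit, here is the same back-of-the-envelope calculation (straight-line distance, ideal speed of light, nothing else):

```python
# Ideal propagation delay for the San Francisco -> New York example above.
distance_miles = 2800
speed_of_light = 186_000                               # miles per second

one_way_ms = distance_miles / speed_of_light * 1000    # ~15 ms
round_trip_ms = 2 * one_way_ms                         # ~30 ms
print(f"one-way: {one_way_ms:.1f} ms, round trip: {round_trip_ms:.1f} ms")
```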

That's assuming the cables run in a roughly straight line between the two cities (they probably don't). That's assuming data travels at the speed of light (not quite). That's excluding the time each ISP node between the two destinations needs to redirect the packets. That's excluding the time it takes your client to figure out what to put into a packet, compress and encrypt the data, have the OS send it, have the customer's router deal with it (possibly over WiFi), and have the ISP receive it, plus the time the server takes to decompress and decrypt the data, process it, decide what to do with it, take action, form a response packet, and send it back.

I just now pinged www.google.com (over WiFi to my router). Ping is: Min: 125, Avg: 129, Max: 138 - 11 hops (11 major servers that have to redirect it, and who knows how many smaller ones). For some reason, the ping is taking me to Mountain View, California (Google has nearer servers; not sure why I got sent there) - but that's still only about two-thirds of the distance from California to New York.

And that's one-way.

I'd plan for 350ms latency if you can.

@Servant The term latency is a bit ambiguous; it can mean either one-way or round-trip delay time. It seems like you mean the latter, is that correct? I believe the latency in SoftPerfect Connection Emulator is one-way.

Yeah, I meant round-trip, including server and client processing: the amount of time it takes for me, as a user, to take an action, have it take effect on the server, and then get back to me and take effect on my machine.

I don't think that's aggressive enough. I'd suggest upping packet loss to 10%, and round-trip latency to 250 milliseconds or more.

Duplication and re-ordering are much less of a problem once a connection has been established, but because they do happen, it's good that you have a small percentage of packets in that class. Upping it to 1% each would probably help flush out some problems.
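(As an aside, a common way for a UDP protocol to cope with the duplicates and reordering that do slip through is to stamp every datagram with a sequence number and discard anything already seen or far behind the newest packet. A minimal sketch with a made-up window size, not anyone's actual netcode:)

```python
# Minimal sequence-number bookkeeping for an unreliable UDP protocol:
# drop duplicates and anything that arrives hopelessly late.
WINDOW = 64  # how far behind the newest packet we still accept (illustrative)

class SequenceFilter:
    def __init__(self):
        self.latest = -1          # highest sequence number seen so far
        self.seen = set()

    def accept(self, seq):
        if seq <= self.latest - WINDOW:
            return False          # too old: treat as lost, ignore
        if seq in self.seen:
            return False          # duplicate
        self.seen.add(seq)
        if seq > self.latest:
            self.latest = seq
            # forget entries that have fallen out of the window
            self.seen = {s for s in self.seen if s > self.latest - WINDOW}
        return True

f = SequenceFilter()
assert f.accept(1) and f.accept(3) and f.accept(2)  # reordered but accepted
assert not f.accept(3)                              # duplicate rejected
```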

And if you don't think packet loss is real: my Comcast cable connection, going from me to Google data centers a few miles away, has about 1% constant loss of UDP packets, day or night, and has done so for years.
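(If you want to measure this yourself, one crude approach is to fire numbered UDP packets at an echo endpoint you control and count the replies. The address below is a placeholder, and the result lumps very late packets in with lost ones.)

```python
# Rough UDP loss probe: send numbered packets to an echo server you run
# yourself and count what comes back. Address and count are placeholders.
import socket

ECHO_ADDR = ("echo.example.com", 7)   # hypothetical UDP echo endpoint
COUNT = 1000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(0.5)

received = 0
for i in range(COUNT):
    sock.sendto(i.to_bytes(4, "big"), ECHO_ADDR)
    try:
        sock.recvfrom(64)
        received += 1
    except socket.timeout:
        pass                          # count this one as lost (or very late)

print(f"loss: {100.0 * (COUNT - received) / COUNT:.1f}%")
```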
enum Bool { True, False, FileNotFound };

At minimum, a tool like that can be used to make sure your network programming properly handles even single incidents of all those flavors of traffic problems.

And since it can be adjusted, you can then beat up your network solution to see where (and if) it starts falling apart (like retry storms caused by lost packets).

Various scenarios can then be used as you add and test adaptability for your accepted degree of adverse network conditions (including things like adding app-level throttling and 'controlled' experience degradation).
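(The app-level throttling mentioned above often boils down to a simple send budget, e.g. a token bucket: when the outgoing budget is exhausted, non-essential updates are skipped or coalesced. A generic sketch with illustrative numbers, not a recommendation:)

```python
# Token-bucket send budget for application-level throttling.
import time

class SendBudget:
    """Refill at `rate` bytes/sec, never holding more than `burst` bytes."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def try_spend(self, nbytes):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False          # over budget: caller skips or coalesces the update

budget = SendBudget(rate=96_000, burst=8_000)   # ~768 Kbps, matching the OP's preset
# if budget.try_spend(len(payload)): sock.sendto(payload, addr)
```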

Ratings are Opinion, not Fact

Yeah, the latency in SCE is one-way.

I don't think that's aggressive enough. I'd suggest upping packet loss to 10%, and round-trip latency to 250 milliseconds or more.

Well damn, seems I have my work cut out for me then.

Duplication and re-ordering are much less of a problem once a connection has been established...

Out of interest, why is this? Since I am using UDP, there is no connection per se from the network's point of view, right?

Thank you everyone!

EDIT: fixed typos

Out of interest, why is this?


One common source of duplication or re-ordering is when routes change or new routes are established, which happens only rarely, and most often when a new flow is established.

WiFi connections may perhaps re-order, or even duplicate, packets at any time -- I haven't measured that. I know they add significant jitter to the latency.

Since I am using UDP there is no connection per sé from the network's point of view, right?


There are not "connections" at layer 4-and-up, but there are "flows" at the lower network layers.
Your NAT firewall will have to remember outgoing packets for some time, so return/responses can be properly re-written.
Routers will have to remember routing decisions for previous packets, because looking up brand new routes for each packet is terribly inefficient.
The network industry captures this concept into the name "flow," which is generally identified by a tuple that starts with source IP, destination IP, and protocol ID, and then also adds protocol-specific additional data (such as source port and destination port for TCP or UDP.)
Routers will then have something like a big hash table of "flow" to "routing decision," with a time-out, and each time a packet is routed, that time-out is reset.

When a router sees a new flow, it may need to hold the first packet while it's looking up the routing decision.
Meanwhile, a second packet may be sent, and may arrive after the routing decision has been made, but before the router has had time to forward the held packet.
The result is re-ordering of packets when a new flow is looked up. The most common source of new flows is simply new "sessions" (at the user level), although this can also happen if routes change for reasons such as maintenance, failure, cost changes, etc.
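(To make the bookkeeping concrete, here is a toy model of that flow cache: keyed by the tuple described above, with a timeout that is refreshed on every routed packet. Real router data structures are of course far more involved; names and values here are made up.)

```python
# Toy model of a "flow -> routing decision" cache with a timeout.
import time

FLOW_TIMEOUT_S = 30.0    # made-up value; real routers and NATs vary widely

class FlowTable:
    def __init__(self):
        self.table = {}  # flow tuple -> (routing decision, last-used timestamp)

    def route(self, flow, lookup_route):
        now = time.monotonic()
        entry = self.table.get(flow)
        if entry and now - entry[1] < FLOW_TIMEOUT_S:
            decision = entry[0]              # cached: the fast path
        else:
            decision = lookup_route(flow)    # slow path: the first packet may be
                                             # held here, which is where the
                                             # re-ordering described above creeps in
        self.table[flow] = (decision, now)   # reset the timeout
        return decision

table = FlowTable()
udp_flow = ("203.0.113.5", "198.51.100.9", "UDP", 50000, 27015)  # example tuple
print(table.route(udp_flow, lambda f: "next-hop-A"))
```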
enum Bool { True, False, FileNotFound };

Excellent explanation, thank you!

Perhaps this will be of some use:

https://github.com/tylertreat/Comcast // lovely name

https://jagt.github.io/clumsy/

