Challenges of 100 Players


What challenges do game developers face when trying to develop a 100-player battle royale?

What are some of the solutions to these challenges?

What will game streaming plus cloud support do that will allow for 1000 players? (This is a recent claim by Google about their Stadia platform.)


The main problem is rendering high-end graphics for 100 separate moving player characters. People have certain expectations for player characters, and those are hard to meet when there are many.

There are some simulation challenges, especially if you want to simulate the skeletal character movement rather than just sweeping balls through a triangle soup tree.

Finally, there's a little bit of a networking challenge, in that consumer internet doesn't always take well to getting 100 high-resolution entity streams, although that's quite manageable with interest management and viewable sets.
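To make "interest management and viewable sets" concrete, here is a minimal sketch of distance-based filtering. The names and the fixed radius are illustrative assumptions, not any particular engine's API:

// Minimal sketch of distance-based interest management. Each tick, the
// server sends a player updates only for entities in their viewable set.
#include <vector>

struct Entity {
    int   id;
    float x, y, z;
};

// Return the subset of entities this observer actually needs updates for.
std::vector<Entity> viewableSet(const Entity& observer,
                                const std::vector<Entity>& all,
                                float radius) {
    std::vector<Entity> visible;
    const float r2 = radius * radius;
    for (const Entity& e : all) {
        if (e.id == observer.id) continue;
        const float dx = e.x - observer.x;
        const float dy = e.y - observer.y;
        const float dz = e.z - observer.z;
        if (dx * dx + dy * dy + dz * dz <= r2)
            visible.push_back(e);  // close enough to matter this tick
    }
    return visible;
}

Real implementations usually back this with a spatial partition (grid or octree) so the per-observer scan is cheaper than walking every entity, and add hysteresis at the radius edge so entities don't flicker in and out of the set.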

What Google believes it can do is change the network characteristics to be "one screen's worth" rather than "N players' worth," so the traffic does not change when the number of players changes. That being said, the data consumption of one interactive video stream is higher than that of a data stream of 100 player entities.

Another thing Google believes it can do is make sure that the hardware platform is known; e.g., they can guarantee that nobody expects to see 1000 player characters rendered on Intel integrated graphics.

It turns out that Battle Royale is designed to manage these problems on its own. People naturally spread across the map, so the local client only needs to worry about a much smaller viewable set at any one time. By the time the ring has tightened enough that everybody is concentrated, far fewer players are left alive to simulate and render.

Personally, I don't think any cloud service will work well for FPS/shooter gaming. It's been tried several times before, and the fractions of milliseconds improvement Google can have with their datacenter locations don't matter compared to the inherent round-trip latency of the architecture.
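For a sense of scale on that round trip, here is a back-of-the-envelope click-to-photon budget for cloud-rendered play. Every number below is an illustrative assumption, not a measurement:

// Rough end-to-end latency budget for cloud rendering. All figures are
// assumed, plausible values; substitute your own measurements.
#include <cstdio>

int main() {
    const double input_upstream_ms = 15.0;  // player input to the server
    const double simulate_ms       = 16.7;  // one 60 Hz server frame
    const double encode_ms         = 5.0;   // hardware video encode
    const double downstream_ms     = 15.0;  // compressed frame to player
    const double decode_display_ms = 8.0;   // decode plus display scan-out

    const double total = input_upstream_ms + simulate_ms + encode_ms +
                         downstream_ms + decode_display_ms;
    std::printf("click-to-photon: ~%.0f ms\n", total);  // ~60 ms
    return 0;
}

A locally rendered game hides much of its network half behind client-side prediction; a thin streaming client has nothing to predict with, so the whole budget is felt directly.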

enum Bool { True, False, FileNotFound };

The biggest challenge is usually retrofitting existing architectures to achieve that. Companies don't write Battle Royale games from scratch; they take existing engines and make them run, then try to fix the most critical issues while expanding the game. The better your engineers, the better the workarounds, but under the hood it's still a mess.

Game streaming can solve the scaling problem: the data no longer grows with N² in the user count (N players need data about N-1 other players, so 100 players generate 100*100/(16*16) ≈ 39 times more data than a 16-player game), but stays linear (one video feed per player).
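A worked version of that arithmetic, under the stated assumption that state traffic grows roughly with N² while video traffic grows linearly:

// Compare how per-tick traffic grows from 16 to 100 players.
#include <cstdio>

int main() {
    const int small = 16, big = 100;

    // State replication: every player hears about every other player.
    const double stateGrowth = double(big) * big / (small * small);
    std::printf("state traffic grows ~%.0fx (100*100 / 16*16)\n",
                stateGrowth);                  // ~39x

    // Video streaming: one feed per player, so growth is linear.
    std::printf("video traffic grows %.2fx (100 / 16)\n",
                double(big) / small);          // 6.25x
    return 0;
}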

The latency will not be bigger (most likely it will be smaller), but there is no latency hiding on the client side anymore, so players will notice it more. It's hard to predict how that will turn out, but game video streaming is more of a business thing than a tech thing: suddenly you can sell your game(-service) to everyone who can watch YouTube. That's a far bigger market than these "few" current-gen consoles.

Thank you @hplus0603 and @ProfL for the in-depth feedback.

I have an additional question if you would not mind. If this architecture exists: Player -> ISP -> Game Client -> Game Server -> Game Data Center

What would be the bottlenecks for Game Client to Game Server?

What would be the bottlenecks for Game Server to Game Data Center?

Would it be valid to say that these bottlenecks would be a CPU and Read/Write speed challenge?

If you are talking about game streaming, then the architecture is a bit different.

player (lightweight input client) -> ISP -> game server -> render/compression slave -> ISP -> player (visualization client, which can be different from the input client)

1. You don't need to artificially run a game client; the server usually maintains per-client data already (which is usually mirrored to the client, but there is no point in that for video streaming, unless you are just trying to make a cheap/quick port).

2. Not sure about that; a data center is just the physical space where racks of server hardware are housed and where the game server software runs. You need locations around the world to have physical servers close to clients/players, to keep the latency low.

3. The biggest bottleneck is most likely the GPU, in terms of cost and performance. But that's something nobody has really tackled yet, so there is a lot of potential for improvement.
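To make the "lightweight input client" above concrete, here is a sketch of its sending half. The POSIX socket calls are real; the packet layout, port, and server address are made-up assumptions:

// Thin streaming client: forwards input, runs no game simulation.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>

struct InputPacket {
    uint32_t sequence;    // per-packet counter for ordering/loss detection
    uint16_t buttons;     // bitmask of pressed buttons
    int16_t  aimX, aimY;  // quantized aim deltas
};

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in server{};
    server.sin_family = AF_INET;
    server.sin_port   = htons(7777);                       // assumed port
    inet_pton(AF_INET, "203.0.113.10", &server.sin_addr);  // example address

    InputPacket pkt{};
    for (uint32_t tick = 0; tick < 600; ++tick) {  // stand-in input loop
        pkt.sequence = tick;
        pkt.buttons  = 0;                          // poll real devices here
        sendto(sock, &pkt, sizeof pkt, 0,
               (const sockaddr*)&server, sizeof server);
        usleep(16000);                             // ~60 Hz input rate
        // Video arrives on a separate stream and goes straight to a
        // hardware decoder; the game itself runs entirely server-side.
    }
    close(sock);
    return 0;
}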

This was an absolutely critical detail; make sure it is not glossed over:

On 4/26/2019 at 5:10 PM, hplus0603 said:

although that's quite manageable with interest management and viewable sets.

The N-squared problem remains absolutely real and requires some engineering effort to resolve. Each new node joining the communications mesh means that everybody must now communicate much more, and the growth is N-squared.

You cannot continuously update all 100 players about the goings-on of all the other players.

There are many techniques to reduce the data set and reduce the difficulty, but don't underestimate the effort required.

I don't disagree, which is why I mentioned interest management and viewable sets, and further up, mentioned how the gameplay itself cleverly reduces the scope you need to actually see!

That being said, these games have very little customization, so updating 100 players about 100 players is totally doable at a 30 Hz network tick rate. Keeping everybody updated about all possible changes to the environment, if you have lots of clutter, loose rocks, doors/windows whose state matters, etc., is harder, and is where the various methods come in; one common method is sketched below.
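For instance, dirty flags plus deltas: tag every environment object with the tick it last changed, and only serialize what changed since the client's last acknowledgment. The names here are hypothetical:

// Dirty-flag replication for environment state (doors, loose props, ...).
#include <cstdint>
#include <vector>

struct WorldObject {
    uint32_t id;
    uint8_t  state;        // e.g. door open/closed, window intact/broken
    uint32_t lastChanged;  // server tick of the most recent state change
};

// Collect only the objects that changed after the client's acked tick.
std::vector<WorldObject> deltaSince(const std::vector<WorldObject>& world,
                                    uint32_t ackedTick) {
    std::vector<WorldObject> delta;
    for (const WorldObject& o : world)
        if (o.lastChanged > ackedTick)
            delta.push_back(o);
    return delta;
}

Combined with interest management, most ticks then carry only a handful of environment changes instead of the whole map.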

Simple math: Player position/orientation/velocity/aim/health/ammo/display-bits update size might be about 8+3+6+4+(about 7) bytes, for 28 bytes per player per packet. This means each player needs to receive 28*100 bytes per packet, and 28*100*30 bytes per second. Nothing scary about that these days. Similarly, the server needs to send 28*100*100 bytes per network tick, which is a little more, but a simple gigabit connection is more than sufficient to funnel that out. (<9 MB per second including packet overhead, so you might even squeeze it into a 100 Mbit connection, but why would you?)
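Those numbers, checked in code:

// hplus0603's packet math: 28 bytes/player, 100 players, 30 Hz ticks.
#include <cstdio>

int main() {
    const int bytesPerPlayer = 8 + 3 + 6 + 4 + 7;  // = 28
    const int players = 100, tickHz = 30;

    const long perClientPacket = (long)bytesPerPlayer * players;
    const long perClientPerSec = perClientPacket * tickHz;
    const long serverPerTick   = perClientPacket * players;
    const double serverMBs     = serverPerTick * (double)tickHz / 1e6;

    std::printf("per client: %ld B/packet, %ld B/s (~%.2f Mbit/s)\n",
                perClientPacket, perClientPerSec,
                perClientPerSec * 8 / 1e6);          // ~0.67 Mbit/s
    std::printf("server out: %ld B/tick, ~%.1f MB/s before overhead\n",
                serverPerTick, serverMBs);           // ~8.4 MB/s
    return 0;
}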

If the players host the server, yeah, no, that won't work so well. If you try to do this using peer-to-peer instead of client/server, then N-squared at each player will be a problem, both in and out. And if you do the math, the same 72 Mbit/s that carries state updates for 100 players comes nowhere close to carrying 100 players' worth of 1080p video, so the bandwidth consumption (for the server, and for each individual player) would actually go up with the cloud-render-gaming approach.
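To see how much it goes up, compare the ~72 Mbit/s of state updates above with video streams for the same 100 players; the 8 Mbit/s per 1080p stream is an assumed typical bitrate, not a measured one:

// State replication vs. cloud-rendered video, server-side egress.
#include <cstdio>

int main() {
    const double stateMbit = 9.0 * 8;        // ~9 MB/s of state -> 72 Mbit/s
    const double perStream = 8.0;            // assumed 1080p bitrate, Mbit/s
    const double videoMbit = perStream * 100;

    std::printf("state updates: ~%.0f Mbit/s for 100 players\n", stateMbit);
    std::printf("video streams: ~%.0f Mbit/s for 100 players\n", videoMbit);
    return 0;
}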

Do you get even better results when applying more smarts? Absolutely!

enum Bool { True, False, FileNotFound };

On 5/4/2019 at 6:40 PM, hplus0603 said:

Simple math: Player position/orientation/velocity/aim/health/ammo/display-bits update size might be about 8+3+6+4+(about 7) bytes, for 28 bytes per player per packet. This means each player needs to receive 28*100 bytes per packet, and 28*100*30 bytes per second. Nothing scary about that these days. Similarly, the server needs to send 28*100*100 bytes per network tick, which is a little more, but a simple gigabit connection is more than sufficient to funnel that out. (<9 MB per second including packet overhead, so you might even squeeze it into a 100 Mbit connection, but why would you?)

High data transfer cost is another challenge. Say you are transferring more than 150 TB a month. AWS will charge you $0.08/GB of data transferred.

https://aws.amazon.com/blogs/aws/aws-data-transfer-prices-reduced/

9 MB a second is around 30 GB an hour. That's about 2.5 dollars an hour (30 * 0.08).
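The same arithmetic, spelled out:

// Egress cost of one 100-player server at AWS transfer pricing.
#include <cstdio>

int main() {
    const double mbPerSec  = 9.0;                    // server egress, MB/s
    const double gbPerHour = mbPerSec * 3600 / 1000; // ~32.4 GB/hour
    const double usdPerGB  = 0.08;                   // AWS rate cited above

    std::printf("~%.1f GB/hour -> ~$%.2f per hour\n",
                gbPerHour, gbPerHour * usdPerGB);    // ~$2.59/hour
    return 0;
}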


A modern data center is generally well connected. Cogent will charge you maybe $2k/month for 10 Gbps of unmetered backbone bandwidth. If you have 100 servers full of 100 players each, $2k is not going to be your biggest worry.

If you host in Amazon, yes, you will pay through the nose. That's how they make their money!

Action games, in general, don't do well on AWS, for two reasons. One is the bandwidth cost (streaming in general has this problem, no matter whether it's video or gameplay) and one is virtualization jitter. Depending on where your server instance is placed, you may get smooth-as-glass performance, or you may have noisy neighbors and suffer 100-millisecond interruptions with some frequency. Not being able to know whether to blame the host or something else is a terrible handicap when trying to debug gameplay lag reports.

enum Bool { True, False, FileNotFound };

