
(Research) Assuming I have 10000 concurrent players on a FPS Game, what would be some of the costs?


Hi Guys,

First post on here, got directed over here from gamedev.stackexchange as my question was a bit too far from what they deal with.

The following is just a copy-paste of my question. I've had a look at the sticky and will be looking through the FAQ for more depth; this is just a quick and dirty question to get an idea of the numbers before I start researching the why/how.

I'm a video games programming student (SAE-QANTM Melbourne) researching some ideas for games for an assignment, and I delved around the internet and realized I wasn't able to find any business websites selling higher-performance server hardware (for medium/large businesses) aimed at video games.

PRE-ASSUMPTIONS:

-I have 10000 players per hour. (Meaning at any time on average, there are 10000 players currently connected and playing).

-The games will run for about 20 minutes including a 5 minute break, so let's assume 3 games per player per hour, with 10 people per game (3000 games per hour).

-It is an FPS Game, so I definitely want a low latency.

https://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking#Basic_networking

https://www.reddit.com/r/GlobalOffensive/comments/2h3fsa/how_much_internet_does_go_use/

http://gaming.stackexchange.com/questions/210173/what-is-a-tick

-Having a look at these links, I would assume a low-latency FPS would be something close to Counter-Strike with a 64 tick rate; in the second link, user "rajeshjsl" says that, measured with DU Meter, he uses 87 MB per hour.

Crunching those numbers (87 MB per hour × 10,000 players) gives us roughly 870GB per hour of upload. (Not sure if that's correct; again, it's the kind of thing I'll be learning from the FAQ, or from everyone here pointing me in the right direction.)
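Here's a quick back-of-the-envelope sketch of that math in Python (the 87 MB/hour per-player figure is just what the Reddit post claims; everything else is plain arithmetic, so take the output as a rough estimate only):

```python
# Rough sanity check of the pre-assumptions above.
PLAYERS_CONCURRENT = 10_000            # average concurrent players
PLAYERS_PER_GAME = 10
GAMES_PER_PLAYER_PER_HOUR = 3
MB_PER_PLAYER_PER_HOUR = 87            # figure quoted from the Reddit thread

concurrent_games = PLAYERS_CONCURRENT / PLAYERS_PER_GAME          # 1,000 games at any time
games_per_hour = concurrent_games * GAMES_PER_PLAYER_PER_HOUR     # 3,000 games per hour

total_gb_per_hour = PLAYERS_CONCURRENT * MB_PER_PLAYER_PER_HOUR / 1_000   # ~870 GB/hour
avg_mbit_per_sec = total_gb_per_hour * 8_000 / 3_600                      # ~1,930 Mbit/s sustained

print(f"{concurrent_games:.0f} concurrent games, {games_per_hour:.0f} games/hour")
print(f"~{total_gb_per_hour:.0f} GB/hour upload, ~{avg_mbit_per_sec:.0f} Mbit/s sustained")
```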

QUESTIONS:

-Assuming 870GB per hour upload, where would be a good place to look at server hardware/architecture for purchase?

-What sort of servers would you recommend that would handle a game like this as best as possible?

-Assuming there is a link, how many top-end servers would be required to handle this number of players? If one of these servers could easily handle 870GB/10,000 players per hour, what would its limit be (while running smoothly)?

-If there are no such links, what would be the estimated cost to purchase a server that can handle this?

-What would be the running costs associated with maintaining/running the server? (I'm currently thinking of cooling, electricity, actual internet bill, storage, cleaning, anything else?)

Thanks in advance.

Aidan


-Assuming 870GB per hour upload, where would be a good place to look at server hardware/architecture for purchase?

870 gigabytes per hour works out to roughly 2 gigabits per second sustained, so you need to look at any serious datacentre with dedicated hardware and network resources.
In Melbourne, corporate fibre with that kind of capacity runs on the order of a thousand dollars per month and up (which is why it's rare to see dedicated servers hosted in Australia). Overseas, you could be looking at more like €100/mo... but if you're going for low latency, you need servers in every region of the globe where you sell your game, so the hosting prices will vary greatly per region!

I delved around the internet and realized I wasn't able to find any business websites selling higher-performance server hardware (for medium/large businesses) aimed at video games.

What kind of HW are you looking for? A plain old i7 quad core, with 32GB DDR4 is €40/mo to rent (along with 30TB of Internet at 1Gbit/s).
If you're looking to build your own hardware, there are plenty of local stores that will sell you parts... We've got some serious-business rack-mounted dual-Xeon hexacores (24 HW threads) with 64GB of RAM that we bought locally in Melbourne (along with all our other desktop gaming-level PCs).

how many top end servers would be required to handle this amount of players?

It's impossible to answer that in general. You need to implement it, and then ask the technical/engine staff how many milliseconds of CPU time is consumed per player, per frame (or just per-frame total with 10k players). If you want a tick rate of 64, that gives you a budget of 15.625ms per frame - or 1.5625µs per player per frame (that's assuming single-threaded though -- an 8 core server would lift your budget to 12.5µs per player per frame).
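A minimal sketch of that budget arithmetic in Python (assuming, optimistically, that per-player work parallelises cleanly across all cores):

```python
# Per-player CPU budget at a given tick rate -- illustrative numbers only.
TICK_RATE = 64          # simulation ticks per second
PLAYERS = 10_000
CORES = 8

frame_budget_ms = 1_000 / TICK_RATE                   # 15.625 ms per tick
per_player_us = frame_budget_ms * 1_000 / PLAYERS     # 1.5625 µs, single-threaded
per_player_us_8core = per_player_us * CORES           # 12.5 µs if work scales over 8 cores

print(f"{frame_budget_ms} ms/tick")
print(f"{per_player_us} µs/player (1 core), {per_player_us_8core} µs/player ({CORES} cores)")
```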

I'm sure it's possible to write a server that updates 10k players within that budget... but that entirely depends on the team that you have available to you, and the constraints/requirements of your specific game.

Also, a tick rate of 64 is pretty overkill. CS defaults to 20 IIRC.

What would be the running costs associated with maintaining/running the server? (I'm currently thinking of cooling, electricity, actual internet bill, storage, cleaning, anything else?)

If you're renting a dedicated server in a data-centre, then you'll pay one single monthly cost for everything -- maybe anywhere from €50 to €500 per server per month.

...plus a staff member on your team to manage and operate those servers... which is probably more like $5k to $10k a month.

If you're planning on building your own data-centre, then yes: loads of electricity for running the machines, lights, cooling, etc., the fibre/ISP connection, constant money on maintenance (e.g. replacing dead HDDs), staff to man your facility, and so on... You've got to reach a pretty decent scale before that's the economical choice.

So you are discounting development costs? Those are going to be huge up front, but it looks like you are only interested in the data center after development work is complete.

Server load is going to be highly dependent on the game. Some games rely on a large number of high-load machines doing a lot of work. Other games can handle a large number of concurrent players but only do a tiny amount of work for each. Even though you say it is an FPS, that does not mean all the effort is necessarily on the servers. In some games the servers simulate everything and do all the game's processing; in others the servers are little more than a matchmaking service, score reporting tool, and community lobby. You will have costs for those machines, but it could be a cost for a few shared virtual servers or a cost for thousands of dedicated machines.

Bandwidth is also a concern, but it varies by game. On one extreme you get games with lots of people running around in communications-heavy simulations interacting with each other constantly, such as when EVE Online goes into 'time dilation' mode to handle all the communications and processing. On the flip side, if all you are doing is matchmaking and score reporting, the servers will be mostly idle, with just a few events at the beginning and end of each play session. Network bandwidth could really be just about any scale you can come up with.

Redundancy will be a concern for both of those. If you need a large number of redundant machines, or multiple sites around the globe, you're looking at costs for each site. So if you have a single server in one spot it will be cheaper than if you have servers in Asia, US East, US West, UK, Italy, Germany, Australia, etc. More locations give you more cost but potentially better service.

Server support staff and monitoring is going to be a cost. Like above, the cost is going to vary by what you need. If you have a game with thousands of dedicated servers you will need people employed to watch the boxes. If you can instead use a few virtual machines they can co-exist in a place maintained by others. If your needs require a dedicated 24/7 staff it will be different than if you can get by with a small team on call.


If you want numbers for various data center configurations, go look up Amazon's AWS pricing. You can plug in different values for types and number of machines, storage volumes, bandwidth options, and more. Amazon is a great platform for servers like this because they offer automatic scaling up when you need it, and drop off servers when you don't. Large organizations can quickly spin up machines that cost tens of thousands of dollars every hour.

Depending on the details, your costs could range from under a hundred dollars a month on the cheap side to hundreds of thousands of dollars per month on the expensive side.

-Assuming 870GB per hour upload, where would be a good place to look at server hardware/architecture for purchase?


This is a pretty old-school way of thinking about it. A better approach would be to consider things like:

- What regions are you targeting?
- What are the peak concurrent players per region?
- What times reach peak concurrent players?
- What hardware is your game optimized for?

A game averaging 870GB per hour might need 1,400GB per hour during some hours and 300GB per hour the rest of the day in one region, 1,200GB per hour during some other hours in another region, etc. You have to target your server deployments based on your markets, their play habits, and so on.

Then there's the question of whether you're hoping to have at least 10k players or know for a fact that you'll have around 10k players; if you're just hoping, you might either be over-estimating or under-estimating the reality. If you're over-estimating, you could end up over-provisioning and wasting a ton of money. Far worse, you could be under-estimating and release a game that gets overloaded and be unplayable for days or weeks upon launch, which is a very very terrible time to make a bad impression that might kill the entire project.

The world in general - including games - is moving to scale-on-demand solutions like Amazon Cloud or Azure or one of the other hobajillion alternatives. Such solutions remove compute resources when the number of players is low, saving money, and add resources when player counts are high. The cost-per-cycle of such solutions is often higher than a dedicated solution in the peak hours, but peak hours tend to be a small portion of the total time in a day; the savings made by reducing compute demand in non-peak hours can save you quite a bit of money, even accounting for the increased costs during peak hours.
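As a toy illustration (all numbers below are invented): even if on-demand capacity costs more per server-hour than dedicated hardware, paying only for what each hour actually needs can still come out ahead:

```python
# Toy comparison of dedicated vs scale-on-demand hosting. All figures invented.
hourly_servers_needed = [20] * 8 + [50] * 10 + [100] * 6   # a fake 24-hour demand curve
DEDICATED_COST_PER_SERVER_HOUR = 0.50    # cheaper per hour, but provisioned for peak all day
ON_DEMAND_COST_PER_SERVER_HOUR = 0.75    # pricier per hour, but only paid when used

peak = max(hourly_servers_needed)
dedicated_daily = peak * 24 * DEDICATED_COST_PER_SERVER_HOUR
on_demand_daily = sum(hourly_servers_needed) * ON_DEMAND_COST_PER_SERVER_HOUR

print(f"dedicated: ${dedicated_daily:.0f}/day, on-demand: ${on_demand_daily:.0f}/day")
```

Whether the on-demand column actually wins depends entirely on how spiky your real demand curve is.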

That said, some regions may require you to host in traditional data centers. There simply aren't any cloud datacenters in some regions of the world. If those regions are critical to your business then you might have to suck it up and build/lease a traditional datacenter in those areas.

For a latency-sensitive game you also just really want to avoid a single central datacenter. Such a central location results in players near it having a distinct advantage over everyone else. Spreading your servers out around the globe by region allows players to find the region hub that gives them the best connection and latency and hence the best experience.

Going back to your question, "top-end" servers is pretty meaningless. Top-end in what sense? CPU power? Memory bandwidth? GPGPU support? Single-thread performance or large numbers of cores? Which items matter the most will depend entirely upon your game's implementation. Your client needs to run very well on years-old commodity hardware, and your server shouldn't be any different. Buying the best hardware doesn't magically make your game the most cost-effective; you want to minimize the amount of hardware required to serve your players, but you also want to get a good power/efficiency ratio, amongst a number of other variables. You might prefer optimizing to run very well on cheap many-core ARM servers rather than beefy Xeon CPUs, for example.

Sean Middleditch – Game Systems Engineer – Join my team!

I would assume a low-latency FPS would be something close to Counter-Strike

Aren't Counter-Strike servers hosted by one of the players?
I would consider that as an alternative to running all the servers yourself.
Data throughput just isn't a big cost these days.

You can get a gigabit unmetered connection from various low-cost providers at $1000-2000/month.
You can get ten gigabits unmetered for about twice that.
This assumes that you're putting your servers in a well-connected co-location facility.

Let's say that a modern FPS supports 100 players per CPU socket, and you have two CPU sockets per server.
That means 200 CCU (concurrent users) per server, so 50 servers to support the 10,000 CCU.
Leasing on those servers will be several hundred dollars each per month. (Let's say $250 each? Switches, routers, etc. need leasing payments too.)
Finally, space, electricity/cooling, and incidentals probably add another $50/month.

So, for your totally hypothetical game with totally hypothetical cost structure:
2000+50*(50+250) == $17,000 / month.

By the way: you're saying 10,000 CCU "average." Unfortunately, you have to provision for your "peak" load.
If we assume your peak load is 20,000 CCU then you're looking at $32k/mo, and you'd probably want to bump up above a 1 Gbps commit.
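Plugging that hypothetical cost structure into a few lines of Python (same invented numbers as above, so treat the output as illustrative only):

```python
def monthly_cost(ccu, players_per_server=200, server_lease=250,
                 per_server_overhead=50, bandwidth_commit=2_000):
    """Back-of-the-envelope monthly hosting cost using the hypothetical figures above."""
    servers = -(-ccu // players_per_server)   # ceiling division: servers needed for the CCU
    return bandwidth_commit + servers * (server_lease + per_server_overhead)

print(monthly_cost(10_000))   # 2,000 + 50 * 300  = 17,000
print(monthly_cost(20_000))   # 2,000 + 100 * 300 = 32,000 (plus a bigger bandwidth commit)
```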

Depending on where in the country/world you are, manpower is likely a much higher cost...

But this is entirely dependent on the specifics. If you let players host their own servers, you need far fewer servers.
If you can optimize your physics engine so you can run 300 users (30 games) instead of 200 users on a server, you can cut server costs by 30%.
And on, and on -- running a business is complex :-)

Typical low-end server hardware might look like:

- SuperMicro 1U servers: chassis/motherboard, some RAM (you might want twice this kit), and a couple of CPUs
- Juniper MX10 router
- Juniper QFX3600 switches

You could also consider hosting it all in a virtualized data center, like Amazon Elastic Compute Cloud. Typically, that will cost a lot more than running your own.
enum Bool { True, False, FileNotFound };

Data throughput just isn't a big cost these days.
...
You could also consider hosting it all in a virtualized data center, like Amazon Elastic Compute Cloud. Typically, that will cost a lot more than running your own.

Yeah, in my experience, the "hidden cost" on Amazon is the bandwidth costs, which are *way* higher than dedicated hosts :(

In my experience, the "hidden cost" on Amazon is the bandwidth costs, which are *way* higher than dedicated hosts


Agreed! Although they also charge a significant mark-up over leasing prices for the hardware, too.
9 cents per gigabyte? Yeah... I'm going to go with "no." :-)
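To put that per-gigabyte rate in context against the thread's 870GB/hour figure (a rough sketch; this assumes the quoted $0.09/GB applies flatly to all egress, whereas real cloud pricing is tiered):

```python
# Rough monthly egress bill at a flat per-gigabyte rate. Illustrative only.
GB_PER_HOUR = 870
PRICE_PER_GB = 0.09                               # the rate mentioned above

monthly_gb = GB_PER_HOUR * 24 * 30                # ~626,400 GB per month
monthly_egress_bill = monthly_gb * PRICE_PER_GB   # ~$56,000 per month, bandwidth alone

print(f"~{monthly_gb:,.0f} GB/month -> ~${monthly_egress_bill:,.0f}/month in egress fees")
```

Compare that against the $2,000-4,000/month unmetered figures earlier in the thread and you can see why that's a "no".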
enum Bool { True, False, FileNotFound };
You may also want to consider other costs involved in running such a huge server infrastructure.

If you're building and running a datacentre for such a game then you also need to staff it.

This means 24/7 support staff, systems administration, and security (computer and physical, e.g. someone standing at the door with a stern look, built like a brick toilet).

You will also need to consider redundancy and backup. If you need 300 servers, you would probably want enough spare hard disks, motherboards, and even entire PCs on hand to replace a certain percentage of the hardware at short notice should it fail.

You would also need to prepare a system which can rapidly image and set up such servers if you're not virtualised (virtualization would be a great idea for these sorts of things by the way).

This adds quite a bit to the basic fees you'd pay if you just wanted to "go live asap and to hell with the risk"...

If you're building and running a datacentre for such a game then you also need to staff it.

This means 24/7 support staff, systems administration, and security (computer and physical, e.g. someone standing at the door with a stern look, built like a brick toilet).


In a typical co-location space, the security, and some amount of hands-on, is included in the deal with the co-location facility.
Like, if you need to tell them "move the cable from port 33 on the top switch of rack 3-B to port 44" they can do that for you.
When you need to "rack and stack" the hardware, you need to be in there.
If you build a system with reliability and redundancy as a goal, you don't need "full shift" for 24/7; you can have monitors that send a page to someone who takes on-call for that day/week.

virtualization would be a great idea for these sorts of things by the way


Virtualization allows you to cram more "virtual servers" into physical hardware, which is especially helpful if your game server software is not made to play nice as a multi-tenant solution. But the drawback is that virtualization often adds scheduling jitter that can make low-latency "twitch" games play poorly.
If you build your server software to already work "multi tenant" style, then virtualization gives you almost nothing extra.
Orchestration is another matter, depending on how complex your deployment is -- virtualization of some sort may help there, like containers for example. But we're now pretty far from the back-of-the-envelope budgeting topic of the initial question ;-)
enum Bool { True, False, FileNotFound };

