Scaling Dedicated Game Servers with Kubernetes: Part 2 – Managing CPU and Memory

This is part two of a five-part series on scaling game servers with Kubernetes.

In part 1 of this series, we looked at how we could take our dedicated game server, containerise it with Docker, and then host and manage it on Kubernetes – which was an excellent start. However, since our Kubernetes clusters are generally of a fixed size, we could potentially run out of available capacity to run all the game server containers we need to match all the players who want to play our game – and that would be a very bad thing.

There are lots of options for scaling Kubernetes clusters up and down, and we will go deeper into a custom Kubernetes node scaler in future posts. First, though, we have to work out one very important thing: how much CPU and memory does my game server actually take up?

Without this knowledge, there is no way we can match up the CPU and/or memory utilisation of our game server with the available resources in our Kubernetes cluster, and therefore know how many game servers can be run within a cluster of a given size. We also won’t be able to calculate how many nodes to add when the number of players in our game increases (as it naturally will 😉) and we want to make sure the cluster is large enough to accommodate all of them. We will also want to scale down our cluster if traffic starts to drop to take advantage of cost savings by removing cluster nodes that aren’t being used.

Kubernetes Dashboard

The great thing is, visualisation of CPU and memory usage is another feature that Kubernetes provides out of the box. Kubernetes is bundled with a dashboard that gives us a nice visual representation of what is happening inside our cluster – such as listing Pods and Services, providing us with graphs of CPU, memory usage, and more.

(The Kubernetes dashboard!)

To securely connect to the Kubernetes dashboard, kubectl, Kubernetes’ command line tool, provides the command:

kubectl proxy

which, when run, will output:

Starting to serve on 127.0.0.1:8001

This creates a proxied connection to the Kubernetes master (assuming you have permission to access the cluster), which hosts the Kubernetes cluster’s API and the dashboard displayed above.

If we now point our browser to http://localhost:8001/ui, we will see the dashboard.

Determine CPU and Memory Usage

You may have noticed that the dashboard gives us aggregate CPU and memory statistics across the cluster, but it can also provide the same information at the Pod level! Therefore, to determine how much CPU and memory our game server is using, all we need to do is deploy a Pod containing our game server (which we set up in the previous post), run some load tests with multiple game sessions on it, and have a look at the provided graphs.
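If you prefer the command line, kubectl can also report current resource usage directly, assuming your cluster has metrics collection enabled (via Heapster or metrics-server, depending on your Kubernetes version):

```
# Current CPU (in millicores) and memory usage for every Pod
# in the default namespace
kubectl top pods
```

This gives a point-in-time snapshot rather than a graph, so the dashboard is still the easier way to spot peaks during a load test.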

I did this for the Paddle Soccer game, and you can see one of the results below:

In the test above, this simple dedicated game server peaks at around 0.08 CPU cores and just over 34 megabytes of memory. This was the maximum usage we saw while load testing the dedicated game server, so we’ll draw a line in the sand here, say that this is the upper limit of what our server utilises, add some buffer, and plan accordingly.

Restricting CPU and Memory Usage

One of the very useful capabilities of software containers such as Docker is the ability to put constraints on both the CPU and memory usage of the running container, and therefore on the processes within it. Kubernetes exposes this to us through its Pod configuration, which means we can explicitly ensure that CPU and memory usage won’t go over a certain threshold and adversely impact other game servers running on the same node. Given how sensitive game servers can be to CPU fluctuations, this is a very useful thing!

Therefore, if we want to limit the CPU usage of the Pod that contains our game server, we can do so by updating our previous Pod definition:
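A minimal sketch of that updated definition might look like the following – the container name and image are illustrative placeholders based on the Pod from part 1; the new addition is the resources section:

```yaml
apiVersion: v1
kind: Pod
metadata:
  generateName: "game-"
spec:
  hostNetwork: true
  restartPolicy: Never
  containers:
    - name: soccer-server          # illustrative container name
      image: gcr.io/soccer/soccer-server:0.1  # illustrative image
      resources:
        limits:
          cpu: "0.1"               # cap this container at 0.1 of a CPU core
```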

It’s worth noting that in my sample code, I use the Kubernetes API to provide a configuration identical to the one above, but the yaml version is easier to understand, and it is the format we’ve been using throughout this series.

In the above example, we have set a limit of “0.1”, which ensures that our game server only ever has access to one tenth of a CPU on the node it is running on. We did this by adding a resources section, with corresponding limits and cpu entries, to the game server container definition in our yaml.

I chose a maximum CPU usage of 0.1 to give some padding over the 0.08-core peak we measured above, while still letting me fit ten game servers per core on each Kubernetes cluster node, which should scale well for our needs.
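As a quick sanity check on that maths, here is the same calculation as a sketch – the node size is an assumed example, not something from this cluster:

```shell
cores_per_node=4            # assumed node size, e.g. a 4-core machine
millicores_per_server=100   # our 0.1 CPU limit, expressed in millicores

# Servers that fit per node: (4 * 1000) / 100 = 40,
# i.e. ten servers per core, as described above.
echo $(( cores_per_node * 1000 / millicores_per_server ))  # prints 40
```

The same ratio scales linearly with node size, which is what makes it a handy unit for the cluster-sizing calculations in later posts.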

We could set similar limits on memory usage, but for the sake of simplicity we’re going to limit only CPU, and ultimately use CPU alone as our scaling metric as well. If you want to learn more, the “Managing Compute Resources for Containers” documentation is an excellent guide to these Kubernetes features.

Next Steps

Now that we have a known CPU usage for, and limits on, our game servers, in our next posts we will combine this with the node capacity that the Kubernetes API reports, to enable scaling the cluster up and down in relation to the number of game servers requested on the cluster.

In the meantime, as with the previous post – I welcome questions and comments here, or reach out to me via Twitter. You can see my presentation at GDC this year on this topic as well as check out the code in GitHub, which is still being actively worked on!

All posts in this series:

  1. Containerising and Deploying
  2. Managing CPU and Memory
  3. Scaling Up Nodes (upcoming)
  4. Scaling Down Nodes (upcoming)
  5. Running Globally (upcoming)
