Scaling Dedicated Game Servers with Kubernetes: Part 1 – Containerising and Deploying

For the past year and a half, I’ve been researching and building a variety of multiplayer games, particularly those in the MMO or FPS genres, as they have some unique and very interesting requirements for communication protocols and scaling strategies. Due to my background with software containers and Kubernetes, I started to explore how they could be used to manage and run dedicated game servers at scale. This became an area of exploration for me, and honestly, I’ve been quite impressed with the results.

Why Are You Doing This Thing?

While containers and Kubernetes are cool technologies, why would we want to run game servers on this platform?

  • Game server scaling is hard, and often the work of proprietary software – software containers and Kubernetes should make it easier, with less coding.
  • Containers give us a single deployable artifact that can be used to run game servers. This removes the need to install dependencies or configure machines during deployment, as well as greatly increases confidence that software will run the same on development and testing as it will in production.
  • The combination of software containers and Kubernetes lets us build on top of a solid foundation for running essentially any type of software at scale – from deployment, health checking, log aggregation, scaling and more, with APIs to control these things at almost all levels.
  • At its core, Kubernetes is really just a cluster management solution that works for almost any type of software. Running dedicated games at scale requires us to manage game server processes across a cluster of machines – so we can take advantage of the work already done in this area, and just tailor it to fit our specific need.
  • Both of these projects are open source and actively developed, so we can also take advantage of any new features that are developed going forward.

If you want to learn more about the intricacies of designing and developing communication and backends for FPS/MMO multiplayer games from the ground up I suggest starting with the articles on Gaffer on Games or, if you prefer books, The Development and Deployment of Multiplayer Games.

Disclaimer

I will put one proviso on all of this. While I am aware of more than a few companies that are containerising their game servers and running them in production, I am not aware of any that are doing so on Kubernetes. I have heard of some that are experimenting with it, but don’t have any confirmed production uses at this stage. That being said, Kubernetes itself is being used by many large enterprises, and I feel this technology combination is solid, super interesting, and could potentially save game studios a lot of time. However, I am still working out where all the edges are. In the meantime, I will happily share the results of all my research – and would love to hear from others about their experiences.

Paddle Soccer

(Paddle Soccer in action. What a game!)

To test out my theories, I created a very simple Unity based game called Paddle Soccer, which is essentially exactly as described. It’s a two-player, online game in which each player is a paddle, and they play soccer, attempting to score goals against each other. It has a Unity client as well as a Unity dedicated server. It takes advantage of the Unity High Level Networking API to provide the game state synchronisation and UDP transport protocol between the servers and the client. If you are curious, all the code is available on GitHub for your perusal.

It’s worth noting that this is a session-based game; i.e. you play for a while, and then the game finishes and you go back to the lobby to play again. We will therefore be focusing on that kind of scaling, as well as using that design to our advantage when deciding when to add or remove server instances. That being said, in theory these techniques would work with an MMO-type game, with some adjustment.

Paddle Soccer Architecture

Paddle Soccer uses a traditional overall architecture for session-based multiplayer games:

Architecture diagram

  1. Players connect to a matchmaker service, which pairs them together, using Redis to help facilitate this.
  2. Once two players are joined for a game session, the matchmaker talks to the game server manager, to get it to provide a game server on our cluster of machines.
  3. The game server manager creates a new instance of a game server that runs on one of the machines in the cluster.
  4. The game server manager also grabs the IP address and the port that the game server is running on, and passes that back to the matchmaker service.
  5. The matchmaker service passes the IP and port back to the players’ clients.
  6. …and finally, the players connect directly to the game server and can now start playing the game against each other.

Since we don’t want to build this type of cluster management and game server orchestration ourselves, we can rely on the power and capabilities of containers and Kubernetes to handle as much of this work as possible.

Containerising the Game Server

The first step in this process is putting the game server into a software container, so that Kubernetes can deploy it. Putting the game server inside a Docker container is essentially the same as containerising any other piece of software.

If this is not something you have done before, you’ll want to follow Docker’s tutorial, or if you like books, have a read of The Docker Book: Containerization is the new virtualization.

Here is the Dockerfile that is used to put the Unity dedicated game server in a container:
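In outline, it looks something like this – the base image, tarball name, and Unity launch flags here are illustrative rather than exact:

    FROM ubuntu:16.04

    # Docker runs processes as root by default, so create an
    # unprivileged "unity" user to own and run the game server.
    RUN useradd -m unity
    WORKDIR /home/unity

    # The build process produces a tarball of the Linux dedicated
    # server build; ADD unpacks it into the unity user's home directory.
    ADD server.tar.gz /home/unity/
    RUN chown -R unity:unity /home/unity
    USER unity

    # Send Unity's logs to /dev/stdout, where Docker and Kubernetes
    # will collect them for aggregation.
    ENTRYPOINT ["./Server.x86_64", "-batchmode", "-nographics", "-logFile", "/dev/stdout"]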

Because Docker runs as root by default, I like to create a new user and run all my processes inside a container under that account. Therefore, I’ve created a “unity” user for the game server and copied the game server into its home directory. As part of my build process, I create a tarball of my dedicated game server, and it’s been built such that it will run on a Linux operating system.

The only other interesting thing I do is that, when I set the ENTRYPOINT (the process to run when the container starts), I tell Unity to output the logs to /dev/stdout (standard out, i.e. display in the foreground), as that is where Docker and Kubernetes collect logs from for aggregation.

From here I am able to build this image and push it to a Docker registry, so that I can share and deploy this image to my Kubernetes cluster. I use Google Cloud Platform’s private Container Registry for this, so that I have a private and secure repository of my Docker images.

Running the Game Server

For more traditional systems, Kubernetes provides several really useful constructs, including the ability to run multiple instances of an application across a cluster of machines and great tooling to load-balance between them.  However, for game servers, this is the direct opposite of what we want. Game servers usually maintain stateful data about players and the game in memory, and require very low latency connections to maintain the synchronicity of that state with game clients such that players do not notice a delay. Therefore, we need to have a direct connection to the game server without any intermediaries in the way adding latency, as every millisecond counts.

The first step is to run the game server. Each instance is not identical to the others, as they are stateful, so we can’t use a Deployment like we would for most stateless systems (such as web servers). Instead, we will lean on the most basic building block of deploying software on Kubernetes – the Pod.

A Pod is simply one or more containers that run together with some shared resources, such as an IP address and port space. In this particular instance, we will only have one container per Pod, so if it makes things easier to understand, just think of a Pod as synonymous with a software container for the duration of this article.

Connecting Directly to the Container

Normally, a container runs in its own network namespace and it isn’t directly connectable via the host without some work to forward the open ports inside the running container to the host. Running containers on Kubernetes is no different – usually you use a Kubernetes Service as a load balancer to expose one or more backing containers. However, for game servers, that just won’t work, due to the low latency requirement for network traffic.

If you want to learn about the basics of deploying to Kubernetes, try the interactive tutorials.

Fortunately, Kubernetes allows Pods to use the host networking namespace directly by setting hostNetwork to true when configuring the Pod. Since the container runs on the same kernel as the host, this gives us a direct network connection without additional latencies, and means we can connect directly to the IP of the machine the Pod is running on and connect directly to the running container.

While my example code makes a direct API call against Kubernetes to create the Pod, common practice is to keep your Pod definitions in YAML files that are sent to the Kubernetes cluster through the command line tool kubectl. Here’s an example of a YAML file that tells Kubernetes to create a Pod for the dedicated game server, so that we can discuss the finer details of what is going on:
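A sketch of that file – the container name and image path below are placeholders for wherever you push your own image:

    apiVersion: v1
    kind: Pod
    metadata:
      generateName: "game-"
    spec:
      hostNetwork: true
      restartPolicy: Never
      containers:
        - name: soccer-server
          image: gcr.io/your-project/soccer-server:0.1
          env:
            - name: SESSION_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name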

Let’s break this down:

  1. kind
    Tell Kubernetes that we want a Pod!
  2. metadata > generateName
    Tell Kubernetes to generate a unique name for this Pod within the cluster, with the prefix “game-”
  3. spec > hostNetwork
    Since this is set to true, the Pod will run in the same network namespace as the host.
  4. spec > restartPolicy
    By default, Kubernetes will restart a container if it falls over. In this instance, we don’t want that to happen, as we hold game state in memory, and if the server crashes, it’s very hard to restart the game where it left off.
  5. spec > containers > image
    Tells Kubernetes which container image to deploy to the Pod. Here we are using the container image we created earlier for the dedicated game server.
  6. spec > containers > env > SESSION_NAME
    We are going to pass into the container the cluster-unique name for the Pod as an environment variable SESSION_NAME, as we will use it later. This is powered by the Kubernetes Downward API.

If we deploy this YAML file to Kubernetes with the kubectl command line tool, and we know what port it is going to open, we can use the command line tools and/or the Kubernetes API to find the IP of the node in the Kubernetes cluster it is running on, and send that to the game client so it can connect directly!

Since we can also create a Pod via the Kubernetes API, Paddle Soccer has a game server management system called sessions, which has a /create handler to create new instances of the game server on Kubernetes. When called, it will create a game server as a Pod with the above details. This can then be invoked via a matchmaking service whenever it has a need for a new game server to be started to allow two players to play a game!

We can also use the built-in Kubernetes API to determine which node in the cluster the new Pod is on, by looking it up from its generated Pod name. In turn, we can then look up the external IP of the node, and now we know what IP address to send to game clients.
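For illustration, here is roughly how that lookup goes with the official Go client, client-go – a hedged sketch: the “default” namespace is an assumption, and the Get signatures are those of recent client-go releases:

    package sessions

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // nodeExternalIP finds which node a game server Pod landed on, then
    // returns that node's external IP – the address we hand to game clients.
    func nodeExternalIP(podName string) (string, error) {
        // In-cluster configuration, since the manager itself runs on Kubernetes.
        config, err := rest.InClusterConfig()
        if err != nil {
            return "", err
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            return "", err
        }
        ctx := context.Background()
        pod, err := clientset.CoreV1().Pods("default").Get(ctx, podName, metav1.GetOptions{})
        if err != nil {
            return "", err
        }
        node, err := clientset.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
        if err != nil {
            return "", err
        }
        for _, addr := range node.Status.Addresses {
            if addr.Type == corev1.NodeExternalIP {
                return addr.Address, nil
            }
        }
        return "", fmt.Errorf("no external IP found for node %s", pod.Spec.NodeName)
    }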

This solves some problems for us already:

  • We have a prebuilt solution for deploying a server to our cluster of machines through container images and Kubernetes.
  • Kubernetes manages scheduling the game servers across the cluster, without us having to write our own bin-packing algorithm to optimise our resource usage.
  • New versions of the game server can be deployed through standard Docker/Kubernetes mechanisms; we don’t need to write our own.
  • We get all sorts of goodies for free – from log aggregation to performance monitoring and more.
  • We don’t have to write much code (~500 LOC) to coordinate game servers across a cluster of machines.

Port Management

Since we will likely have multiple dedicated game servers running on each of the nodes within our Kubernetes cluster, they will each need their own port to run on. Unfortunately, this isn’t something that Kubernetes will help us with, but solving this problem isn’t particularly difficult.

The first step is to decide on a range of ports that you want traffic to go through. This makes things easier for your cluster’s network rules (if you don’t want to add/remove network rules on the fly), but also makes things easier for your players if they ever need to set up port forwarding or the like on their own networks.

To solve this problem, I tend to keep things as simple as possible: I pass the port range that can be used as two environment variables when creating my Pod, and have the Unity dedicated server randomly select a value within that range until it opens a socket successfully.

You can see the Paddle Soccer Unity game server doing exactly this: each call to SelectPort chooses a random port within the range, which is opened on StartServer invocation. StartServer will return false if it was unable to open a port and start the server.
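The actual implementation lives in the Unity (C#) server, but the pattern is simple enough to sketch in Go – here assuming the range arrives via MIN_PORT and MAX_PORT environment variables (the variable names are my assumption):

    package game

    import (
        "errors"
        "math/rand"
        "net"
        "os"
        "strconv"
    )

    // pickPort keeps choosing a random port within [min, max] until a UDP
    // socket opens successfully, mirroring the SelectPort/StartServer loop.
    func pickPort() (*net.UDPConn, int, error) {
        min, err := strconv.Atoi(os.Getenv("MIN_PORT"))
        if err != nil {
            return nil, 0, err
        }
        max, err := strconv.Atoi(os.Getenv("MAX_PORT"))
        if err != nil {
            return nil, 0, err
        }
        for i := 0; i < 1000; i++ {
            port := min + rand.Intn(max-min+1)
            conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: port})
            if err == nil {
                return conn, port, nil
            }
        }
        return nil, 0, errors.New("could not find an open port in the range")
    }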

You may also have noticed the call to instance.Register. This is because Kubernetes doesn’t give us any way to introspect what port this container started on, so we’ll need to write our own registration mechanism. To that end, the Paddle Soccer game server manager has a simple /register REST endpoint, backed by Redis for storage, that takes the Pod name that Kubernetes provides (which we pass through via an environment variable) and stores the port the server started on. It also provides a /get endpoint for looking up what port the game server started on. This is packaged along with the REST endpoints that create game servers, so we have a single service for managing game servers within Kubernetes.
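Here is a hedged sketch of what those two endpoints amount to – the JSON field names, the Redis key scheme, and the go-redis client are my assumptions, not the original code:

    package main

    import (
        "encoding/json"
        "log"
        "net/http"

        "github.com/go-redis/redis"
    )

    // registration is the JSON payload the game server sends on startup.
    type registration struct {
        ID   string `json:"id"`   // cluster-unique Pod name, from SESSION_NAME
        Port string `json:"port"` // the port the game server managed to open
    }

    var rdb = redis.NewClient(&redis.Options{Addr: "localhost:6379"})

    // registerHandler stores the reported port against the Pod name.
    func registerHandler(w http.ResponseWriter, r *http.Request) {
        var reg registration
        if err := json.NewDecoder(r.Body).Decode(&reg); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        if err := rdb.Set("session:"+reg.ID, reg.Port, 0).Err(); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
        }
    }

    // getHandler looks the port back up for the matchmaker.
    func getHandler(w http.ResponseWriter, r *http.Request) {
        id := r.URL.Query().Get("id")
        port, err := rdb.Get("session:" + id).Result()
        if err != nil {
            http.Error(w, err.Error(), http.StatusNotFound)
            return
        }
        json.NewEncoder(w).Encode(registration{ID: id, Port: port})
    }

    func main() {
        http.HandleFunc("/register", registerHandler)
        http.HandleFunc("/get", getHandler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }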

In the dedicated game server’s registration code, the server takes the environment variable SESSION_NAME containing the cluster-unique Pod name, and combines it with the port it opened. This combination is then sent as a JSON packet to the game server manager sessions’ /register handler.
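Again, the real code is C# inside Unity, but the equivalent HTTP call sketched in Go looks something like this (the field names and manager address are assumptions):

    package game

    import (
        "bytes"
        "encoding/json"
        "net/http"
        "os"
        "strconv"
    )

    // register reports this game server's cluster-unique Pod name and
    // selected port to the sessions service's /register handler.
    func register(sessionsURL string, port int) error {
        payload, err := json.Marshal(map[string]string{
            "id":   os.Getenv("SESSION_NAME"), // set via the Downward API
            "port": strconv.Itoa(port),
        })
        if err != nil {
            return err
        }
        resp, err := http.Post(sessionsURL+"/register", "application/json", bytes.NewReader(payload))
        if err != nil {
            return err
        }
        return resp.Body.Close()
    }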

Putting it All Together

If we combine this with the Paddle Soccer game clients, and a very simple matchmaker, we end up with the following:

Kubernetes API step through

  1. One player’s client connects to the matchmaker service, and it does nothing, since it needs two players to play
  2. A second player’s client connects to the matchmaker service, and the matchmaker determines it needs a game server to connect these two players to, so it sends a request to the game server manager
  3. The game server manager makes a call to the Kubernetes API to tell it to start a Pod in the cluster with the dedicated game server inside it
  4. The dedicated game server starts
  5. The dedicated game server registers itself with the game server manager, telling it what port it started on
  6. The game server manager grabs the aforementioned port information, and the IP information for the Pod from Kubernetes and passes it back to the Matchmaker
  7. The matchmaker passes the port and IP information back to the two player clients
  8. The clients now connect directly to the dedicated game server, and play the game

Et Voilà! We have a multiplayer dedicated game running in our cluster!

In this example, a relatively small amount of custom code (~500 LOC) was able to deploy, create, and manage game servers across a large cluster of machines by leveraging the power of software containers and Kubernetes. Honestly, it’s pretty awesome the power that containers and Kubernetes give you!

This is but part one in the series however! In the next episode, we will look at how we can use APIs to scale up our Kubernetes cluster as demand for our game increases.

In the meantime, I welcome questions and comments here, or reach out to me via Twitter. You can see my presentation at GDC this year on this topic, as well as check out the code on GitHub, which is still being actively worked on!

Cloud NEXT for Game Developers

If you are a game developer that works on any kind of mobile, multiplayer, or otherwise internet-connected game, you may want to attend Google Cloud’s premier conference in San Francisco, Cloud NEXT, especially since it is conveniently timed right after GDC. Even if you aren’t attending GDC, if you are building any backend infrastructure for online or multiplayer games, or doing analytics or machine learning over player data, there is plenty to see at Cloud NEXT.

That said, there are over 200 breakout sessions over the three days of the event, and it can be hard to work out which sessions might actually be useful to you, as a game developer, so this guide is here to help!

While there is a wide variety of sessions that would be applicable if you do any kind of backend development for online games, here are a few that are directly impactful for game developers, and are a definite must-see at Cloud NEXT.

Open, Persistent Multiplayer Worlds

The session I’ll be co-presenting with Rob Whitehead, CTO of Improbable, is Building Massive Online Worlds with SpatialOS and Google Cloud Platform.

Late last year we announced our partnership with Improbable, who run their product SpatialOS on Google Cloud Platform. This is particularly exciting because it puts building almost any kind of large, persistent open world, with multitudes of players as well as autonomous agents such as NPCs, in the hands of indie developers. It’s built as a backend-as-a-service, so SpatialOS will manage scaling to meet your needs without your team’s intervention, saving your team a lot of infrastructure management responsibilities and the associated costs.

Location Based Games

With the explosion that was Pokemon Go, location-based gaming has become a really hot topic. The session Location-based gaming: trends and outlook will look at emerging trends in this space and how you can use Google tools to get an accurate representation of the world around you and use it within your game. I particularly love the combination of this technology with augmented reality games, and I’m predicting we’ll see a lot of these types of games coming out in the future.

GPUs

While not specifically a session for game developers, if you need GPUs for your virtual machines to render models, or to run games in real time in the cloud, I would also put Supercharge performance using GPUs in the cloud down as a must-see session. I’m particularly excited about streaming alternate viewing cameras in online games, especially in e-sports, by running connected client instances in the cloud – and for this, you have to have GPUs.

Game Analytics

Finally, if you want to see Daniel Grachanin, the Product Manager for Games at Google Cloud Platform, co-present a talk, or are looking for a very practical session on how you can use Big Data to instrument and introspect your game, you should check out the session Game changer: Google Cloud’s powerful analytics tools to collect and visualize game telemetry and data. I always really enjoy talks like this, because so often players don’t end up doing what you would expect them to do as a game designer. Using tools like this gives you insight into how your players are actually engaging with your game, which is always fascinating.

On the surface, Cloud NEXT may not seem like an event aimed at game developers, but if you are looking to build any kind of backend infrastructure to support your game, there is a plethora of information available at this event – from the talks above, to many sessions and bootcamps on general application development, security, big data and analytics, machine learning and more.

Finally, if you come – make sure to wander up and say hello!

Recording: Creating interactive multiplayer experiences with Firebase

Earlier this year, I had the honour of presenting at Google I/O 2016, on how to build multiplayer games with Firebase.

If you aren’t familiar with Firebase, it’s a fantastic Backend-as-a-Service with several features, most notably the Realtime Database, that can cut down development time for certain types of games quite drastically.

In this presentation, I build a simple web-based game, using only client-side JavaScript (no frameworks!) and Firebase, that shows off real-time chat, some very simple matchmaking, and a warped version of Rock-Paper-Scissors that can be played over the internet: Happy-Angry-Surprised.

The premise of the game is this: a webcam takes a picture of your face, and a picture of the person you are playing against. Each player’s facial emotion is determined via the Cloud Vision API, and the winner is determined by:

  1. Happy beats Angry
  2. Surprised beats Happy
  3. Angry beats Surprised
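The game itself is client-side JavaScript, but the win logic is tiny enough to sketch in Go, as used elsewhere on this blog:

    // beats reports whether emotion a beats emotion b,
    // following the rules above.
    func beats(a, b string) bool {
        wins := map[string]string{
            "happy":     "angry",     // Happy beats Angry
            "surprised": "happy",     // Surprised beats Happy
            "angry":     "surprised", // Angry beats Surprised
        }
        return wins[a] == b
    }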

Game screenshot

That’s really about it!

This was an incredibly fun demo to build, and also presentation to give. I especially love that I am able to play the game with an audience member when I demo the final application – it’s a load of fun.

If you want to see the code, it’s available on GitHub for your download and perusal as well.

One Year as a Developer Advocate

A little while ago, I passed my first year mark of working for Google. This also marked the first anniversary of me moving from a (long?) career as a software engineer, into a full time position in Developer Relations, and more specifically Developer Advocacy.

I wanted to write a post about this because, when I talk to my fellow tech-community members and discuss the transition from software engineer to Developer Advocate, I normally get a response along the lines of:

“Oh, I don’t think I could stop being an engineer”

“How is it going not writing code anymore?”

And various things along those lines. Usually at that point, we go into a more detailed and nuanced conversation about what it means to be a Developer Advocate, at least from my somewhat limited perspective, and those sorts of comments tend to fade into the background.

So, in lieu of having this conversation with everyone over and over again, I thought I’d write down some of my thoughts and feelings about what it means to be a Developer Advocate, the transition from being a full-time software engineer, and some of what I’ve been up to in the past year since joining Google, and more specifically Google Cloud Platform – hopefully you will find it interesting.

Before I get started, I’ll throw out a big disclaimer – the whole developer relations space, and the various roles that are found in it, seems to vary quite greatly as you move from company to company, team to team, and even between individuals within the same team – please take this as my own personal opinion, and feel free to take it with a large handful of salt (I’ve only been professionally at this for a year!!!). What I call “Developer Advocacy”, other people may call other things, and it may blend into other jobs in other places.

So what does it mean to me to be a Developer Advocate? I feel that at its most core level, it means that my job is to advocate for external developers of a given product. Which is about as literal a meaning as you can get.

This means that I want to communicate with external developers and tell them about the product that I represent (in this case Google Cloud Platform), both to get them excited about our product, and also to give them enough information about it that they can know whether it fits their needs. To belabour this point a little deeper – I advocate for developers – so if there is a better product for your needs, I feel my job as a Developer Advocate is to point you in that direction. My job is not sales (which is a totally different, yet just as important, role), and I don’t have a quota. If you are happy using the tools and services you already use, then that is fantastic. If I can give you some more information that may be of service down the line, then great; if not, that’s totally fine as well.

I have people skills!

Advocating for developers also happens internally as well. This is the part that many people don’t see, and often the perception can be that, as a job, we are entirely outward facing. I want to be able to hand a developer the best version of a product that I can, because I am on your side, and that means providing feedback internally too – to Product Managers, documentation, engineering, and anywhere else I can see a place to make an improvement.

I’ve found this role has suited me quite well. Even when I was a full-time software engineer, I always had a strong interest in how to build community, how to effectively engage with people, and the whole world of social interaction. In many ways, the open source projects I started, and the subsequent communities I helped build up around those projects, were a natural extension of that interest. In that open source arena, I quickly learned that the greatest project in the world wouldn’t have any users if all you did was whisper its praise into a well. Getting the message out into the world in an interesting and engaging way was the only way to (a) get people interested in the product and (b) get people invested enough to contribute back to the project, via feedback, pull requests or otherwise, and thereby improve the project overall.

On top of that, I invested heavily in the communities I was involved with. I was an Adobe Community Professional for many years, I presented at a variety of conferences, including Adobe MAX, and I ran a conference in Melbourne for a number of years, as well as quite a few other endeavours in a similar vein. So when the opportunity raised its head to move into the developer relations space, it was really just a flip: from software engineering outweighing advocacy in my day-to-day life, to advocacy outweighing software engineering.

I will readily admit that it was an interesting mental shift making that flip. I know that I personally (and from anecdotal evidence from talking to peers, many feel the same) self-identified strongly as a “software engineer” first and foremost, and it was an initial struggle to think that I was going to be giving that up, before I truly understood this role and let go of some cognitive dissonance. Now that I’ve had some time in this position, I can say that in this job, I’ve never felt that I’ve stopped being a software engineer. In fact, being a software engineer is a critical part of being able to do my job effectively; I just use it in a slightly different way. Yes, I do sometimes miss solving the large architectural problems or consistently writing code all day long, but on the flip side, the code I write is generally only the code I want to write. If I want to spend some time working out how I can use Haskell on Google Cloud Platform, I can make the decision to do so (this is on my todo list, once I get a better understanding of Haskell). On top of that, no one is going to call me at 3am to let me know a server is on fire and I need to get up and fix it. So there are pros and cons both ways.

So, given the above – what does a typical “day at the office” look like for me? Honestly – and this is one of my favourite things about this job – there are really no “typical days”. Each day vastly differs from the next, but here is a most likely incomplete list of things I’ve been up to in the past year, just to give you an idea of what I’ve been involved with.

Doing a Lot of Research

This may not be immediately obvious from the outside, but I spend a lot of time reading articles, running through online training, and building proofs of concept, just to learn new things. I was thankful that I had prior experience with Google Cloud Platform before joining, but I quickly found (a) that I really only had a small understanding of the platform, which was naturally biased towards the types of applications I was previously building on it, and (b) that Google Cloud Platform is constantly releasing new things, so choosing which products are going to be relevant to the audience I identify with, and learning about them, is a constant, ongoing task.

This wasn’t just limited to Google Cloud either – I spent an inordinate amount of time playing with Docker containers, in some rather bizarre and interesting ways, to try and get a deep understanding of the platform (and also because it’s a ridiculous amount of fun).

On top of that, in the past five months or so I’ve become increasingly game development focused. Over the past few years, I’ve had a romantic notion of one day getting more involved with game development, and working on Google Cloud has allowed me to foster that. This means I’ve spent a lot of time reading and playing with various tools and libraries, but also talking to various people across Google and outside, from sales to marketing, product management and game developers, to truly understand what game developers need out of cloud infrastructure, and also how having an internet-connected game changes the types of games that can be built and how they are developed in the first place. I’ve still got a lot more to learn in this area, as it is massive and varied, but it’s been an absolute delight to get involved in something I’ve been getting increasingly passionate about as I get older.

Writing Software

It’s true! I do spend time writing software! Crazy!

More often than not, this consists of building fun and interesting demonstrations of Google Cloud Platform, which eventually end up being content for presentations, and are also a perfect avenue for truly understanding the product(s) that I’m talking about – so this goes hand in hand with the research section above.

Sometimes this is just open source for open source’s sake, because it’s an interest, or useful to what I do. The demos I build become open source projects as well, although that being said, I’m currently behind on releasing about 3 things, just because I’ve got to send them through code reviews and the like.

Presenting at Conferences and Events

Unsurprisingly, I do a fair amount of presenting at conferences and other events, such as meetups. I’ve had the pleasure of attending some fantastic events, and particularly enjoyed having the opportunity to present at Google I/O this year.

I find a distinct pleasure in getting up on stage and building a narrative that people can use to understand a topic, so they walk away with a deeper understanding than they had before. I work very hard to make sure people leave a presentation with enough of an understanding to know whether the topic at hand is something they are keen to explore further, or whether it is not relevant to them at this time – which, in my mind, is just as important information. Also, getting a laugh out of an audience is just a great feeling as well.

Creating Online Content

Online content has also been an interesting adventure. I’ve always been a relatively intermittent blogger, and while this job has me writing somewhat more than before, I’m nowhere near what I would call “prolific”. That being said – I did have the wonderful opportunity to start the weekly Google Cloud Platform Podcast with my teammate Francesc Campoy Flores.

Podcast setup

Credit: Robert Kubis @ GCP NEXT, San Francisco

I’ve been an avid podcast listener for a long time, and started the randomly released 2 Developer Down Under podcast with Kai Koenig several years ago, but the Google Cloud Platform Podcast has become a far more professional endeavour, with a much larger audience. It has given me the opportunity to interview a wide variety of people inside of Google, as well as a slew of customers and users of Google Cloud outside of it. We’ve had guests from San Francisco to South Africa, and feedback to the podcast has been consistently positive, which has been incredible.

Interacting with the Community

Interacting with the community is also an integral part of what I do as a Developer Advocate. This can be in person at meetups and conferences, via Hangouts for in-depth customer meetings, or online via Twitter and the Google Cloud Slack community that I founded with teammates Sandeep Dinesh and Ray Tsang.

If you are interested in Google Cloud Platform, the Slack community has grown by leaps and bounds, and is a great place for Google Cloud users to interact with each other and share experiences. There are also a large number of Googlers that usually hang out there for you to chat with as well.  I’m very happy to see how much the community has come together through a shared passion for Google Cloud Platform, especially considering how widely dispersed they all are.

Overall, community interaction is a hugely valuable activity for me, because this is where I really get to see what software people are building “out in the wild”, ideally help them find solutions that will aid them in their endeavours, and mentally match that back to what we are doing at Google so we can build a better service.

Product Feedback

Doing all of the above gives me the research to effectively provide feedback to the right people on Google Cloud Platform. Without my external-facing endeavours, it would simply be my own opinion (and feel free to give that the level of respect you feel is appropriate), and nowhere near as valuable as it is when combined with the voices of the people I meet at events, interact with online, and talk to on the podcast.

Feedback takes the form of reports from events that are shared far and wide, regular old bug filing, and reports of general pain points. It can be sit-down meetings with product managers to discuss existing features, or even proposing new features that I feel would be a great addition to a product.

I feel this is the part that most people don’t think of when they think of developer advocacy, which is totally understandable, as it’s not something that is immediately obvious to the outside world. But closing this loop, from external to internal advocacy, is an important part of what I do in my job, and of developer advocacy in general – because if I’m not making sure the product is getting better, I’m not making sure the people I advocate for – the developers – are getting a better product.

Overall, I have to admit, I’m adoring this new role. I feel like it’s been a very natural progression for me, and most importantly, it’s an incredible amount of fun building crazy-silly demos, running around and talking to people and generally just spending every day involved in some super, super cool technology at Google.

On top of that, this is honestly, the best team I’ve ever worked on, with the widest variety of incredibly intelligent and creative people I’ve had the pleasure of being in a team with.

Hopefully, that gives you a better understanding of what it means to be a Developer Advocate, or at least my current perspective on the subject. Please add comments, questions or suggestions below; this is a fascinating topic for me, so I’m always keen to engage in meaningful discourse around it. Alternatively, if you want to talk privately about developer relations, pretty much anything to do with the cloud, speaking at an event, or almost anything at all vaguely related to tech, don’t hesitate to drop me a line via the contact methods above. I love to talk to people, and am always looking to have a chat.

Brute 0.4.0 – From CLJX to Reader Conditionals

This release of Brute provides no new features; instead, it is a migration from using CLJX to support both Clojure and ClojureScript at once, to Clojure 1.7 and its (relatively?) new feature of Reader Conditionals.

In terms of changing the implementing code, this was a fairly straightforward task. It was essentially a case of changing file extensions from cljx to cljc, moving the code out of the cljx folders and back into the src directory, and then converting the markers that determine whether a clj or cljs implementation gets used, depending on which platform the code is running on.

For example, the function I had to generate UUIDs used to look like this:
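Roughly like this, anyway – this is a reconstruction, and the exact ClojureScript branch may differ:

    (defn uuid
      "Generate a new UUID"
      []
      #+clj (java.util.UUID/randomUUID)
      #+cljs (random-uuid))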

But now it looks like this:
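Again, roughly, with the same caveat:

    (defn uuid
      "Generate a new UUID"
      []
      #?(:clj  (java.util.UUID/randomUUID)
         :cljs (random-uuid)))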

As you can see, there are some minor syntactic changes, but essentially it’s the same structure and format. Not a hard thing to do with some time and some patience.

The more interesting part was finding a test framework that worked well with reader conditionals. Previously, I had all my tests implemented in Midje, but found that the autotest functionality doesn’t work with reader conditionals, which for me was its biggest feature. It also didn’t have core support for ClojureScript; that was instead implemented by a different developer as part of the purnam library. While the implementation for ClojureScript is quite good, it hadn’t been touched in ten months, which had me concerned.

After hunting around a bit, I ended up settling on using good ol’ clojure.test and cljs.test. It’s well supported across both platforms, and as I recently discovered has amazing integration with Cursive Clojure!  It took me a little while to get all the tests ported across, but otherwise the experience was also very smooth.  Now I have a testing platform I can continue to develop on, and I know will continue to support both Clojure and ClojureScript for the foreseeable future.

I have some minor enhancements for Brute that I will probably jump on at some point in the near future, but as always, feature requests and bug reports are always appreciated.

Recording: Containers in Production – Crazy? Awesome? or Crazy Awesome!

Late last year I was invited to participate in a panel discussion at New Relic’s FutureStack conference on using Software Containers (which generally means Docker) in production environments. We had an excellent discussion about what is good about the ecosystem, what needs to improve, and best practices and approaches for running Containers in production environments – at either large or small scale.

I was super happy that I managed to get one of my main talking points about Containers into the conversation: specifically, that Containers do not necessarily equal micro-services. I’ve seen people feel that these two are inextricably linked, and they definitely are not. You are perfectly able to leverage the benefits of Containers without taking on the extra complexity of micro-services (which have their own set of pros and cons) if you do not want to.

During the discussion I reference a talk by John Wilkes, discussing Borg at Google, and how it directly influenced the design decisions behind Kubernetes. It’s one of my favourite presentations, and well worth a watch as well.

FutureStack was a great event, and it was a pleasure to be able to attend.

Recording: Wrapping Clojure Tooling in Containers (Part 2)

A few weeks ago, I had the distinct opportunity to attend and present at clojure/conj in Philadelphia. This was the first time for me attending the event, but it had been on my list of conferences to be at for a very long time.  Now that I live in San Francisco, it’s great that I can take advantage of the situation and get to attend the events I had watched enviously from across the ocean. Especially since I’ve been playing with and (very occasionally) working with Clojure for a while now.

Wrapping Clojure Tooling in Containers (link to video)

The talk I gave at the conference was an update on a previous talk I had done at the local Clojure Meetup. The difference being that when I wrote the original talk, I was attempting to build Docker development environments that were one size fits all, by leveraging ZSH. Since then, I’ve switched to developing with per-project Docker development environments, powered by Makefiles that are shipped along with, and contain all the dependencies for, said project. This talk represented this change, as well as several other tips and tricks I’ve discovered along the way (cross platform GUIs running in containers anyone?)

Hopefully you can’t tell, but the first section of the presentation was given without the slides. We had some technical difficulties with my laptop and the projector, but the excellent people at the event managed to get it all working just in the nick of time for the demo, and while it did cut my presentation time a little short, I still had time to cover the points I wanted to cover.

If you are interested in the code that powers this talk, it is all up on GitHub, so you can pick it apart at your leisure.

 

Recording: Scaling Node.js with Docker and Kubernetes

Last month I had the pleasure of presenting at the Connect.js conference in Atlanta, Georgia, on scaling Node.js with Docker and Kubernetes.

I really enjoyed the conference, and giving this talk. I feel that Kubernetes really shows the power of what software containers can do to give generic solutions to general application development problems like scaling and deploying, regardless of language or application design.

Unfortunately the audio isn’t the best (and the slides are a little squished), but it’s definitely watchable. Huge thanks to Connect.js for taking the time to make the recordings and pushing them live.

You can also grab all the source code for review as well.

Recording: Wrapping Clojure Tooling in Containers

I recently had the pleasure of doing a short presentation to the Bay Area Clojure User Group on Wrapping Clojure Tooling in Containers.

We went from having no Clojure tooling, and Java 7 on my host machine, to quickly firing up a new terminal shell running in a Docker container with Leiningen pre-installed along with Java 8.

This lets us create a Docker container that we can then share with our team, or with a wider open source community, that we know isn’t going to change, except for the parts we want to change.

We also discussed file permission issues between hosts and containers, and showed off an interesting solution for sharing a JVM inside a container with the host.

Thanks to my co-worker Francesc for recording the talk and putting it up on YouTube!

Configuring Your GOPATH with Go and Google App Engine

When I started working with Google App Engine and Go, I wasn’t sure how to best configure my GOPATH when developing Google App Engine applications.  You can find documentation on this aspect on the Go and App Engine page, but being new to both Go and App Engine I was not aware of what options were available, and what their pros and cons would be.

If you are not sure what a GOPATH is, I recommend this video tutorial on setting up a Go workspace and running and testing code, as it helped explain the concept to me. It is also worth noting that some Go programmers have a single GOPATH they use for all their projects on a given computer; however, for the sake of this article, we will be considering the use of a separate GOPATH per project instead.

Up until a point, you can actually get away with not having a GOPATH at all when developing with App Engine. If you have a simple project with a single directory and no dependencies, you have no need to set a GOPATH and wouldn’t notice any difference if it was missing.  On top of this, if you do have dependencies, goapp will actually store anything you goapp get in the Google App Engine SDK’s gopath subdirectory.

However, I strongly advocate having a GOPATH set as a mainstay for developing with Go and App Engine.  Managing your code through an idiomatically Go way, i.e. with a GOPATH, will ensure that your code remains manageable as it gets more sophisticated and complex, and that all your dependencies for a specific project are retained within its specific GOPATH, not shared between any projects using the SDK.  Using GOPATH has an added benefit if you ever switch between regular Go development and App Engine development — in that case, there should be minimal context switching on your development approach and toolchains.

The aim of this post, and the attached sample code, is to show several options for GOPATH and dependency management when working with Go and App Engine, while exploring some of the pros and cons of each approach. Understanding these options will enable you to start with an initial code layout and GOPATH strategy that will work with your project at its start and well into its lifecycle.

 

Initial Configuration

To get started, let’s git clone this project, and have a look at its structure.

(Screenshot: the cloned project’s directory structure)

This looks very much like a regular Go project. We have a src folder that contains our Go code. Within that, we have a modules folder that contains three different App Engine Modules (basic, vendored, and gb), each implemented with a different GOPATH strategy. Within each module subfolder, there exists an app.yaml file that has the App Engine settings for that module, and a routes.go file that specifies the http endpoints for that module. We also have a lib folder, which contains code that is shared by all three of these modules, to show one possibility of how we can share code between App Engine Modules with all three GOPATH structures.
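In other words, the layout looks roughly like this (a sketch from the description above, not an exact listing):

    src
    ├── lib                # code shared by all three modules
    └── modules
        ├── basic
        │   ├── Makefile
        │   ├── app.yaml   # App Engine settings for the module
        │   └── routes.go  # http endpoints for the module
        ├── vendored
        │   └── ...
        └── gb
            └── ...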

For the sake of this article, I’m making the assumption that the Google Cloud SDK and the Go App Engine SDK are already installed on your system. That being said, it is worth noting at this point that this whole project is totally Make driven. The Makefiles specify what the GOPATH is set to, and perform all our operations on this code base. So, if you want to follow along at home, you don’t need to worry about corrupting an already-set GOPATH or other environment variables, as this example code will not alter them in any way.

I used a few general tools, such as golint and goimports, in developing this project, some of which we will look at while we go through this example, so you will need to install them if you decide to run through the code yourself:

(Screenshot: installing the tools into the project’s ./bin folder)

Now that the tools are in your ./bin folder, your Makefiles can reference them.

 

A Basic GOPATH

This is the simplest implementation. We have a GOPATH with a single entry (more on that later), and we are using the basic goapp tooling that is provided with the App Engine SDK (also more on that later).

Let’s take the opportunity to look at the code in ./src/modules/basic/routes.go
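The full file is in the GitHub repository; a condensed sketch of its shape looks like this (the display helper’s signature and the lib import path are my shorthand, not the verbatim source):

    package basic

    import (
        "net/http"

        "lib/reverse" // shared code from the top-level lib folder

        "github.com/nu7hatch/gouuid" // third-party dependency
    )

    func init() {
        // App Engine Go applications register their handlers in init().
        http.HandleFunc("/", handler)
    }

    func handler(w http.ResponseWriter, r *http.Request) {
        id, err := uuid.NewV4()
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        // display (in template.go) renders the module name and values as HTML.
        display(w, "basic", id.String(), GiveMeANumber(), reverse.Reverse(id.String()))
    }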

This is a simple HTTP handler, which uses the template display in template.go to return some HTML that shows us the module name, and uses our lib dependency reverse and a third-party dependency github.com/nu7hatch/gouuid to output several values on screen.  We also call a GiveMeANumber() function that is implemented in number.go in the same directory as the routes.go file.

First things first, let’s have a look at what GOPATH the Makefile has set. There is a handy Make target named debug-env that shows us all GO environment variables that are set.

(Screenshot: make debug-env output, with GOPATH set to the directory the repository was cloned to)

We can see here that the GOPATH is set to the directory we cloned this repository to, which keeps things very simple.

To install our third-party dependencies, we have a deps target in our Makefile that uses goapp get to download the third-party dependency github.com/nu7hatch/gouuid.
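The target boils down to something like this (a sketch, not the verbatim Makefile):

    deps:
    	goapp get github.com/nu7hatch/gouuid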

Let’s run this target, and see where our code ends up.

(Screenshot: after make deps, the gouuid package sits under src/github.com/nu7hatch/gouuid)

We can see here that the gouuid package is stored under /src/github.com/nu7hatch/gouuid, which anyone who is used to working with regular GOPATHs would have expected.

This single level GOPATH approach works well in that it is very clear and easy to understand. Every piece of Go code you use is placed in the same directory and you know exactly where it all sits.  The downside to this approach is that all your third-party dependencies can get mixed into your custom code base, which can feel kind of messy and could potentially be confusing.

Let’s run this, and see it in action. Our Makefile has a serve target that will spin up a local App Engine instance for development:

(Screenshot: make serve starting the local App Engine development server)

Browsing to http://localhost:8080 we can see the result we wanted: a UUID, an integer, and our UUID reversed:

(Screenshot: the browser showing a UUID, an integer, and the reversed UUID)

 

A Vendored GOPATH

This GOPATH implementation is slightly more complex, but nicely separates our third-party dependencies from our own custom code.  We are still using the standard goapp tool, but we implement a two-level GOPATH to allow our dependencies to be placed in a different location.

Let’s have a look at the code in ./src/modules/vendored/routes.go

We can see that the code is essentially the same as before. We still have the dependency on the third-party library github.com/nu7hatch/gouuid, we have a local function in this module named GiveMeACapitalLetter(), and we are outputting several values to a HTML page through our template display.

Again, let’s look at the GOPATH for this module using the debug-env Makefile target

(Screenshot: make debug-env output showing the two-level, colon-separated GOPATH)

We can see that the GOPATH that has been set here has a : in the middle of it. This makes Go look in both /home/mark/workspace/appengine-golang-gopath/vendor and also /home/mark/workspace/appengine-golang-gopath when looking for Go source code. It’s also worth noting that goapp get (and go get) will place any dependencies it retrieves in the first path it finds in that GOPATH list, which, as you’ll see shortly, is a very useful behaviour.
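Setting that up in the Makefile is a one-liner, something like this (a sketch, assuming the path is computed from the repository root):

    # The first entry is where goapp get places downloaded dependencies.
    GOPATH := $(CURDIR)/vendor:$(CURDIR)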

Let’s clean out our old dependencies, re-run make deps in this module, and see where our uuid third-party dependency ends up:

(Screenshot: make deps placing the gouuid package under the vendor directory)

This is getting interesting! Rather than our third-party dependency being stored in the same directory as our custom code, it gets placed in a vendor directory. This means that there is a very clear separation between our dependencies and what we are authoring, and there is very little chance for confusion between the two, at the expense of having a slightly more complex GOPATH configuration.

Let’s run make serve to see our code run.

(Screenshot: make serve output for the vendored module)

Browsing to http://localhost:8080 we again can see the result we wanted: a UUID, a letter, and our UUID reversed:

(Screenshot: the browser showing a UUID, a letter, and the reversed UUID)

 

GB

This approach is a bit more interesting, in that it uses no GOPATH at all. Instead it uses a tool called gb recently written by Dave Cheney. This tool is one of many Go dependency management tools in existence, but it has risen quickly in popularity, and has become one of my favourite tools when developing Go applications across the board. It rewrites the Go tool chain to make project-based development easier, and it has an ecosystem of plugins to help write Go and, in our case, Google App Engine applications.

Having a look at our routes source code in ./src/modules/gb/routes.go, we can see that the code is almost identical to our last two examples:

The only difference from our previous routes.go is that we have a different package local function GiveMeASymbol(), which returns a random ASCII symbol.

While the overall code structure looks the same, let’s have a look at our GOPATH:

(Screenshot: make debug-env showing no GOPATH set at all)

Wow, there is no GOPATH at all! gb instead goes looking for a directory that has a src subdirectory, which a GOPATH-oriented project usually has – so no changes are needed there either. This is one of the nice things about gb: you don’t have to worry about environment variables; all your code organisation happens through convention.

When we run our make deps target, you can see that the usual goapp get commands have been switched out for gb vendor fetch commands. This is powered by the optional gb-vendor plugin, which downloads third-party dependencies and works slightly differently from the standard goapp get.
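That is, the deps target runs something along these lines (sketched):

    deps:
    	gb vendor fetch github.com/nu7hatch/gouuid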

(Screenshot: make deps running gb vendor fetch)

The gb-vendor plugin downloads third-party dependencies into a vendor folder, almost exactly the same as we had before, but without having to directly specify it in the GOPATH. This approach gives you the same separation of third-party dependencies from your custom code, but without the extra work of managing your own GOPATH configuration.

Note that gb-vendor also creates a manifest file in the vendor directory:
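It’s a small JSON file along these lines (with the revision shown as a placeholder):

    {
        "version": 0,
        "dependencies": [
            {
                "importpath": "github.com/nu7hatch/gouuid",
                "repository": "https://github.com/nu7hatch/gouuid",
                "revision": "<commit-sha>",
                "branch": "master"
            }
        ]
    }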

This is a central record of exactly which version of each dependency you have downloaded. This is useful, as you can then share the manifest in your source control, and other team members and build systems can ensure that they all have exactly the same dependencies, without having to store your third-party code in your repository if you don’t want to. (Update: 16th July 2015 – the gb-vendor plugin is of the opinion that you should store your vendored dependencies in your source control. See this blog post for the reasons.)

gb allows for plugins, and there is a community-contributed gb-gae plugin that integrates gb with Google App Engine. In the gb module’s Makefile, we use this to start up the local App Engine development server when we run the make serve target:

(Screenshot: make serve starting the development server via the gb-gae plugin)

Browsing to http://localhost:8080 we can see the result we wanted: a UUID, a symbol, and our UUID reversed:

(Screenshot: the browser showing a UUID, a symbol, and the reversed UUID)

It is worth noting that the gb-gae plugin can also be used to build, test, and deploy App Engine applications as well, so it can be used for all your Go and App Engine needs.

 

Conclusion

To recap, we’ve gone through several GOPATH solutions here that can work for building Go applications on App Engine. The key things to remember are:

If you like simplicity above all else, the single layer, basic GOPATH may be the right option for you.

If you like clear separation between your third-party dependencies and your own code, the dual layer GOPATH that vendors your dependencies may be right for you.

If you like a tool that not only vendors your dependencies, but also manages which version is being used across teams and platforms and has an ecosystem of plugins as well, the gb approach may be right for you.

Hopefully that has given you some ideas on how you would like to structure the code in your next Go App Engine project. Good luck, and happy Go coding!

All code shown here is licensed under Apache 2. For more details, find the original source on GitHub.