Configuring Your GOPATH with Go and Google App Engine

When I started working with Google App Engine and Go, I wasn’t sure how best to configure my GOPATH. You can find documentation on this aspect on the Go and App Engine page, but being new to both Go and App Engine, I wasn’t aware of what options were available, or what their pros and cons would be.

If you are not sure what a GOPATH is, I recommend this video tutorial on setting up a Go workspace and running and testing code, as it helped explain the concept to me.  It is also worth noting that some Go programmers have a single GOPATH they use for all their projects on a given computer; however, for the sake of this article, we will be considering the use of a separate GOPATH per project instead.

Up to a point, you can actually get away with not having a GOPATH at all when developing with App Engine. If you have a simple project with a single directory and no dependencies, you have no need to set a GOPATH and wouldn’t notice any difference if it was missing.  On top of this, if you do have dependencies, goapp will actually store anything you goapp get in the Google App Engine SDK’s gopath subdirectory.

However, I strongly advocate having a GOPATH set as a mainstay for developing with Go and App Engine.  Managing your code in an idiomatic Go way, i.e. with a GOPATH, will ensure that your code remains manageable as it gets more sophisticated and complex, and that all the dependencies for a specific project are retained within its own GOPATH, not shared between projects through the SDK.  Using a GOPATH has an added benefit if you ever switch between regular Go development and App Engine development — in that case, there should be minimal context switching between your development approaches and toolchains.

The aim of this post, and the attached sample code, is to show several options for GOPATH and dependency management when working with Go and App Engine, while exploring some of the pros and cons of each approach. Understanding these options will enable you to start with an initial code layout and GOPATH strategy that will work with your project at its start and well into its lifecycle.


Initial Configuration

To get started, let’s git clone this project, and have a look at its structure.

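The layout looks roughly like this (a sketch reconstructed from the discussion below; the exact file lists may differ):

    appengine-golang-gopath/
    └── src/
        ├── lib/              shared code used by all three modules
        └── modules/
            ├── basic/        Makefile, app.yaml, routes.go, number.go, template.go
            ├── vendored/     Makefile, app.yaml, routes.go, template.go
            └── gb/           Makefile, app.yaml, routes.go, template.go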

This looks very much like a regular Go project. We have a src folder that contains our Go code. Within that, we have a modules folder that contains three different App Engine Modules (basic, vendored, and gb), each implemented with a different GOPATH strategy.  Within each module subfolder, there is an app.yaml file that has the App Engine settings for that module, and a routes.go file that specifies the HTTP endpoints for that module.  We also have a lib folder, which contains code that is shared by all three of these modules, to show one possibility of how we can share code between App Engine Modules under all three GOPATH structures.

For the sake of this article, I’m making the assumption that the Google Cloud SDK and the Go App Engine SDK are already installed on your system.  That being said, it is worth noting at this point that this whole project is totally Make driven.  The Makefiles specify what the GOPATH is set to, and perform all our operations on this code base. So, if you want to follow along at home, you don’t need to worry about corrupting an already-set GOPATH or other environment variables, as this example code will not alter them in any way.

I used a few general tools, such as golint and goimports, in developing this project, some of which we will look at while we go through this example, so you will need to install them if you decide to run through the code yourself:

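Something like the following will fetch them when run with the project’s GOPATH active (import paths as they were at the time of writing; they may have moved since):

    go get github.com/golang/lint/golint
    go get golang.org/x/tools/cmd/goimports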

Now that the tools are in your ./bin folder, your Makefiles can reference them.


A Basic GOPATH

This is the simplest implementation. We have a GOPATH with a single entry (more on that later), and we are using the basic goapp tooling that is provided with the App Engine SDK (also more on that later).

Let’s take the opportunity to look at the code in ./src/modules/basic/routes.go
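Here is a hedged sketch of its shape; the display helper lives in template.go and GiveMeANumber() in number.go, as described below, and the import path for our shared lib package is my assumption:

    package basic

    import (
        "net/http"

        "github.com/nu7hatch/gouuid"

        "lib/reverse"
    )

    func init() {
        http.HandleFunc("/", handle)
    }

    // handle generates a UUID, reverses it via our shared lib package,
    // grabs a number from number.go, and renders everything with the
    // display template helper in template.go.
    func handle(w http.ResponseWriter, r *http.Request) {
        u, err := uuid.NewV4()
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        display(w, "basic", u.String(), reverse.Reverse(u.String()), GiveMeANumber())
    }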

This is a simple HTTP handler, which uses the template display in template.go to return some HTML that shows us the module name, and uses our lib dependency reverse and a third-party dependency github.com/nu7hatch/gouuid to output several values on screen.  We also call a GiveMeANumber() function that is implemented in number.go in the same directory as the routes.go file.

First things first, let’s have a look at what GOPATH the Makefile has set. There is a handy Make target named debug-env that shows us all GO environment variables that are set.

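The output looks something like this (a sketch; the path assumes the repository was cloned to the location shown later in the post):

    $ make debug-env
    GOPATH=/home/mark/workspace/appengine-golang-gopath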

We can see here that the GOPATH is set to the directory we cloned this repository to, which keeps things very simple.

To install our third-party dependencies, we have a deps target in our Makefile that uses goapp get to download the third-party dependency github.com/nu7hatch/gouuid.
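The target is essentially this (a sketch, run from the module directory):

    deps:
        goapp get github.com/nu7hatch/gouuid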

Let’s run this target, and see where our code ends up.


We can see here that the gouuid package is stored under src/github.com/nu7hatch/gouuid within our project, which is exactly what anyone used to working with a regular GOPATH would expect.

This single-level GOPATH approach works well in that it is very clear and easy to understand. Every piece of Go code you use is placed in the same directory, and you know exactly where it all sits.  The downside to this approach is that all your third-party dependencies get mixed into your custom code base, which can feel kind of messy and could potentially be confusing.

Let’s run this, and see it in action. Our Makefile has a serve target that will spin up a local App Engine instance for development:

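The target boils down to something like this (a sketch; goapp serve takes the directory containing app.yaml, here the current one):

    serve:
        goapp serve .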

Browsing to http://localhost:8080 we can see the result we wanted: a UUID, an integer, and our UUID reversed:



A Vendored GOPATH

This GOPATH implementation is slightly more complex, but nicely separates our third-party dependencies from our own custom code.  We are still using the standard goapp tool, but we implement a two-level GOPATH to allow our dependencies to be placed in a different location.

Let’s have a look at the code in ./src/modules/vendored/routes.go

We can see that the code is essentially the same as before. We still have the dependency on the third-party library github.com/nu7hatch/gouuid, we have a local function in this module named GiveMeACapitalLetter(), and we are outputting several values to an HTML page through our template display.

Again, let’s look at the GOPATH for this module using the debug-env Makefile target:


We can see that the GOPATH that has been set here has a : separator in the middle of it. This makes Go look in both /home/mark/workspace/appengine-golang-gopath/vendor and /home/mark/workspace/appengine-golang-gopath when looking for Go source code. It’s also worth noting that goapp get (and go get) will place any dependencies it retrieves in the first path in that GOPATH list, which, as you’ll see shortly, is a very useful behaviour.
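In Makefile terms, the two-entry GOPATH is assembled something like this (a sketch; REPO_ROOT is a hypothetical variable pointing at the cloned repository, and the vendor directory comes first so goapp get installs there):

    GOPATH := $(REPO_ROOT)/vendor:$(REPO_ROOT)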

Let’s clean out our old dependencies, re-run make deps in this module, and see where our uuid third-party dependency ends up:


This is getting interesting! Rather than our third-party dependency being stored in the same directory as our custom code, it gets placed in a vendor directory. This means that there is a very clear separation between our dependencies and what we are authoring, and there is very little chance for confusion between the two, at the expense of having a slightly more complex GOPATH configuration.

Let’s run make serve to see our code run.


Browsing to http://localhost:8080 we again can see the result we wanted: a UUID, a letter, and our UUID reversed:



GB

This approach is a bit more interesting, in that it uses no GOPATH at all. Instead it uses a tool called gb, recently written by Dave Cheney. This tool is one of many Go dependency management tools in existence, but it has risen quickly in popularity, and has become one of my favourite tools when developing Go applications across the board. It rewrites the Go toolchain to make project-based development easier, and it has an ecosystem of plugins to help write Go and, in our case, Google App Engine applications.

Having a look at our routes source code in ./src/modules/gb/routes.go, we can see that the code is almost identical to our last two examples:

The only difference from our previous routes.go is that we have a different package-local function, GiveMeASymbol(), which returns a random ASCII symbol.

While the overall code structure looks the same, let’s have a look at our GOPATH:


Wow, there is no GOPATH at all! Instead, gb goes looking for a directory that has a src subdirectory, which a GOPATH-oriented project usually has — so no changes are needed there either.  This is one of the nice things about gb: you don’t have to worry about environment variables; all your code organisation happens through convention.

When we run our make deps target, you can see that the usual goapp get commands have been switched out for gb vendor fetch commands. This is powered by the optional gb-vendor plugin, which downloads third-party dependencies and works slightly differently from the standard goapp get.

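The deps target now looks something like this (a sketch):

    deps:
        gb vendor fetch github.com/nu7hatch/gouuid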

The gb-vendor plugin downloads third-party dependencies into a vendor folder, almost exactly the same as we had before, but without having to directly specify it in the GOPATH. This approach gives you the same separation of third-party dependencies from your custom code, but without the extra work of managing your own GOPATH configuration.

Note that gb-vendor also creates a manifest file in the vendor directory:
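It looks something like this (illustrative only; the revision hash here is a placeholder, not a real commit):

    {
        "version": 0,
        "dependencies": [
            {
                "importpath": "github.com/nu7hatch/gouuid",
                "repository": "https://github.com/nu7hatch/gouuid",
                "revision": "0123456789abcdef0123456789abcdef01234567",
                "branch": "master"
            }
        ]
    }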

This is a central record of exactly which version of each dependency you have downloaded. This is useful, as you can then share this manifest in your source control, and other team members and build systems can ensure that they all have exactly the same dependencies, without having to store the third-party code itself in your repository if you don’t want to. (Update: 16th July, 2015 – the gb-vendor plugin is of the opinion that you should store your vendored dependencies in your source control. See this blog post for reasons.)

gb allows for plugins, and there is a community contributed gb-gae plugin that integrates gb with Google App Engine. In the gb Makefile, we use this to start up the local App Engine development server when we run the make serve target:

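The serve target switches over to the plugin’s equivalent command, along these lines (a sketch; I am reconstructing the exact subcommand and arguments from memory, so check gb help gae):

    serve:
        gb gae serve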

Browsing to http://localhost:8080 we can see the result we wanted: a UUID, a symbol, and our UUID reversed:


It is worth noting that the gb-gae plugin can also be used to build, test, and deploy App Engine applications, so it can cover all your Go and App Engine needs.


Conclusion

To recap, we’ve gone through several GOPATH solutions here that can work for building Go applications on App Engine. The key things to remember are:

  • If you like simplicity above all else, the single-layer, basic GOPATH may be the right option for you.
  • If you like clear separation between your third-party dependencies and your own code, the dual-layer GOPATH that vendors your dependencies may be right for you.
  • If you like a tool that not only vendors your dependencies, but also manages which version is being used across teams and platforms, and has an ecosystem of plugins as well, the gb approach may be right for you.

Hopefully that has given you some ideas on how you would like to structure the code in your next Go App Engine project.  Good luck, and happy Go coding!

All code shown here is licensed under Apache 2. For more details, find the original source on GitHub.

Migrating My Blog to Google Cloud Platform

Since I am now working for Google, and specifically the Google Cloud Platform, I took the opportunity to test out our Cloud Launcher offerings to migrate this blog over to the Cloud Platform as quickly as possible.

This site runs on WordPress, mainly because I found it the easiest way to migrate all the content I have written from 2004 onwards, and since then, it’s been a stable and easy-to-use platform.

There are several options for running WordPress on Google Cloud Platform including, as I recently found out, running on App Engine. The Cloud Launchers, however, let you create an instance on Google Compute Engine, which is our Infrastructure as a Service offering.  I didn’t need to install any SDK tools to get WordPress installed and running, or to implement my specific customisations; I could do it all through the Developer Console in the browser.

Going to the Cloud Launcher page and typing in “WordPress” returns several results, including two separate providers for a single WordPress install.  I ended up choosing the Bitnami solution for the following reasons:

(Screenshot: WordPress Launcher)

It is worth noting that this install does have the following caveats:

The installation screen of the WordPress Launcher is fairly straightforward, and includes automatically opening network ports for HTTP and HTTPS traffic.

If you want to have a static IP (which I knew I did), make sure to open up the Management, disk, networking, access & security options, and select Networking. If you look at the drop-down for External IP, you are able to create a new static IP right then and there.

(Screenshot: Network configuration)

After clicking the Create VM button and waiting a few minutes for the virtual machine to be initialised, I had a brand new WordPress install with a temporary admin password and some sample WordPress plugins installed, ready to go.

My next task was to migrate across the custom theme that my blog uses, which meant SSH’ing into the server.  Personally, I hate having to worry about managing all the security keys I have for various servers.  The Developer Console makes this ridiculously simple: click the SSH button on the console, and it starts up a bash console in your VM.

(Screenshot: bash in the browser)

From here it was very easy to transfer my theme across to this new machine and install it in the appropriate WordPress directory.

I used the WordPress Import/Export Tool to port across all my content, which included comments and images, and it worked perfectly.  I did manually re-install my WordPress Plugins, such as Akismet, Crayon Syntax Highlighter and W3 Total Cache, but it only took me 10 minutes to copy and paste the configurations across from one browser window to another.

That is really it. Moving my blog to Google Cloud Platform was very simple, and I didn’t have to install a single SDK or download any SSH keys.

Some fun things to do once you have your WordPress install up and running:

It’s worth noting that if WordPress is not your thing, you can also check out our other Cloud Launcher options for blogs, of which we have a few, including Ghost and Publify.

If you are interested in trying this out, sign up for a free trial. You get up to 60 days to play around with Google Cloud Platform, and this is an easy way to test out the platform with no risk.

Hello USA, and Hello Google

I guess I must have decided that life was too simple and boring, and I needed to change pretty much every aspect of my life.

(Image: Change All The Things)

In just under a week, I’ll be moving my entire family up to the Bay Area in California from our home here in Melbourne, Australia, and shortly thereafter joining the Developer Advocate team for the Google Cloud Platform, working out of the San Francisco office.

This is going to be a big difference from the past few years of my life. Not only are we all (dog included) shifting over to a different country, but this role is also quite different from what I have been doing professionally up until this point.  That being said, I’m really excited to join the Developer Advocate team, as it gives me a chance to do all the things I used to do on the side for fun, but full time: presenting, talking to people, building community, and generally having smart conversations with super smart people to enable them to build bigger and better things.

The Google Cloud Platform is a really interesting piece of technology and it’s going to be incredibly enjoyable to dig deeper into the parts that I’ve already worked with, as well as have a good look at the parts I have yet to explore.

I’ll be going into an office again, which is going to be an adjustment after working from home for the past seven years. That being said, I think I will manage to cope with the difference, given the awesome offices that Google has on offer and the very intelligent people I will be working alongside. The fact that Google is a dog-friendly workspace also helps, although I’ve no idea if I will be able to convince Sukie to get onto the BART.

I’m also very much looking forward to working alongside the wonderful Terry Ryan. I’ve known Terry for many years through various Adobe circles and have always had a lot of respect for him, so being on the same team is going to be an absolute pleasure.

Last but not least, I have to give a huge amount of thanks to my wife Amy. Without her by my side this most definitely would not have been possible. The Google hiring process is nothing short of gruelling, and she was there with me every step of the way, supporting and encouraging me whenever I needed it. Not to mention the fact that she also agreed to leave all her family and friends here in Melbourne and travel with me halfway around the world, which is no small feat. She’s pretty ace.

Next stop, USA!

Mini – Game Dev Diary #1

I’ve been having lots of fun this break getting back into writing a top-down racing game that I had originally started (way) earlier in the year, so I thought I would start writing a little dev diary on it, to aid me in keeping up my momentum in developing it.  It’s still very much in the prototype phase, but I’m starting to see real things come out of it.

The basic gist of the game is:

  • Top down racing game, very much inspired by the early Micro Machines PC game of my childhood years (hence the code name Mini for the game).
  • I want the steering and handling to be “drifty” and very arcade like – basically not technical and lots of fun to play.
  • I have this feeling of wanting a lot of “bounce” between artefacts in the game. For example, if you hit a wall, you don’t stop, you ricochet off it. We’ll see how this pans out in actual game play though.

I have a few more ideas on top of this, but this gives you a feel for what I am going for.

I wanted to write more Clojure, so I ended up picking this as my language of choice, and then using libGDX as my game development framework, and Box2d as my physics engine. Clojure is awesome, and libGDX is a great framework, but in retrospect, I’ve been wondering if it would have been faster to write this in something like Unity instead.  That being said, I’m being productive, and I do enjoy writing Clojure, so I’ll continue the current course for now (When I started, Arcadia Unity didn’t exist either).

I also chose to use Brute, my entity component framework, as the other main library to write my game with. So far, I’ve been very happy with it, and I’ve been able to add any features I needed quite easily to the library.

The first thing I did (and what took the longest) was to write my own wrapper around libGDX to use with Clojure. It would have been far faster to use play-clj, which I have used in the past, but I had previously found it had issues with clojure.tools.namespace and having a user namespace you could reset your state with, as in the Clojure Reloaded Workflow.  I probably should have spent more time trying to get play-clj to work better with a reloaded workflow, because it took me at least three months of my spare time (and an entire CampJS weekend) to get my wrapper for libGDX to a place that I was genuinely happy with.

For the car and the steering I went with a super simple approach. There are a whole load of articles on how to simulate a top-down car in Box2D, but I didn’t want a simulation; I wanted something fun and arcadey, and also something I could implement easily. Therefore, my car is just a rectangle, which gets pushed from the back when accelerating, pushed from the front when braking, and pushed from the top left and right when turning.

This was quick and easy to do; however, it gives the vehicle a very “floaty” feel as you drive (or you could see it as my getting an extra helping of the drift I was looking for). If you have ever played the original Asteroids, you know exactly the movement I’m talking about, so I had a new problem to solve.  I quickly surmised that what I needed to do was fake the auto-correction you get in a real car: when you stop turning and let go of the steering wheel but keep accelerating, the car straightens itself out. I was quite unsure how to make this happen, though. After some fruitless Googling and several way too complicated solutions, I realised I could simply reduce the car’s angular velocity if the up arrow was depressed (car accelerating) but neither the left nor the right key was, and this seemed to work really, really well.

I dropped in some sample wall blocks to drive around, and tweaked the numbers until I was happy with how the steering felt.

You can’t really see the auto-correction on the steering working in the video, but here is the code that powers it:
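(A sketch of the idea only; the damping factor is a made-up tuning value, and the real function also applies the driving forces.)

    (ns mini.car
      (:import [com.badlogic.gdx.physics.box2d Body]))

    (defn accelerate-car
      [^Body body up? left? right?]
      ;; ... driving-force pushes elided ...
      ;; fake the steering wheel re-centring: when accelerating straight
      ;; ahead, bleed off some angular velocity each tick
      (when (and up? (not left?) (not right?))
        (.setAngularVelocity body (float (* 0.9 (.getAngularVelocity body))))))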

The input system calls accelerate-car directly with various inputs, depending on what arrow keys are pressed. Many of the magic numbers that determine how the Car operates are set on the Car component itself, so I can have different models of cars down the line that can have different handling and acceleration.

Finally, I needed the camera to always keep the car in the centre of the screen. This would mean I could have tracks that are bigger than the display, and it also goes back to the feel of that original Micro Machines game.  This was remarkably easier than I had anticipated.  I created a Cam component and attached it to my player car. From there it was just a matter of updating the current Camera with the centre position of the Sprite that has the Cam component, and everything worked perfectly.

The code is as follows:
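(A sketch; the real version looks the Sprite up via the Cam component, and the names here are my assumptions.)

    (ns mini.camera
      (:import [com.badlogic.gdx.graphics OrthographicCamera]
               [com.badlogic.gdx.graphics.g2d Sprite]))

    (defn follow
      "Centre the camera on the sprite carrying the Cam component."
      [^OrthographicCamera camera ^Sprite sprite]
      (.set (.position camera)
            (float (+ (.getX sprite) (/ (.getWidth sprite) 2)))
            (float (+ (.getY sprite) (/ (.getHeight sprite) 2)))
            (float 0))
      (.update camera))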

What I found quite surprising was that after changing the camera to follow the car, I was no longer happy with how the car’s handling felt. It felt kind of sluggish now, even though I hadn’t changed any of the values I had previously set.  I’ll leave it alone for the moment, and come back to it once I have some more elements of the game in place.

Coming up next, I want to lay out a simple track I can drive around, and then I can work out which features I want to prioritise from there.

Brute 0.3.0 – Now Supporting ClojureScript

Brute has a few new features with this new release. The most exciting is that thanks to the cljx project, and the hard work of Martin Janiczek, Brute now supports both Clojure and ClojureScript!

There are also a couple of new features, including an implementation of update-component that takes a function and arguments, to allow you to functionally change data within the system (thanks to Yair Iny).

For example:
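(A sketch, assuming a Position defrecord with an :x field.)

    (require '[brute.entity :as e])

    ;; applies (update-in component [:x] + 5) to the entity's Position,
    ;; returning a new system containing the changed component
    (def system' (e/update-component system entity Position update-in [:x] + 5))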

Also, if you have a function that you only want to happen every n milliseconds (a physics library, for instance), you can now throttle system functions.

For example:
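(A sketch; I am recalling the function name from the 0.3.0 API, so check the docs.)

    (require '[brute.system :as s])

    ;; run physics-system at most once every 20 milliseconds
    (def system' (s/add-throttled-system-fn system physics-system 20))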

Hope you enjoy these new features, and as always, feedback and pull requests are always welcome!

Testing Go Http Handlers in Google App Engine with Mux and Higher Order Functions

For those people who aren’t familiar with building applications with Go and Google App Engine, the core data structure used whenever you need to make a request to any of the provided Google App Engine services is a Context.

Usually, this is created by passing in the http.Request object that comes with serving an HTTP request. However, when you want to automate the testing of your HTTP request handlers, you usually do something like the following:
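(A sketch of the classic SDK pattern that blows up; myHandler is hypothetical.)

    r, _ := http.NewRequest("GET", "/test", nil)
    w := httptest.NewRecorder()
    c := appengine.NewContext(r) // panic: this request never went through the dev server
    myHandler(c, w, r)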

You create your own Request with the values you want to test, attempt to create a Context from it… and blammo! GAE panics, because the Request wasn’t sent through the actual GAE development server.

There are a few ways to solve this problem, but this is the way I found worked best for my situation. I’m using Mux for my routing (which is a great library), which provides me with an http.Handler to serve all my requests through. My first (naive) solution was to use another great library, Context, which enables you to attach data to the running Request.  That meant my Handler function ended up looking something like this:
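(A sketch, assuming gorilla/context imported as gorillacontext and a "context" key; the real code may have differed.)

    func handler(w http.ResponseWriter, r *http.Request) {
        var c appengine.Context
        if stored, ok := gorillacontext.GetOk(r, "context"); ok {
            c = stored.(appengine.Context) // a test attached it earlier
        } else {
            c = appengine.NewContext(r) // normal production path
        }
        // ... handle the request using c ...
    }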

In my tests I could create a Context with aetest, which creates a test Context for you, and attach it to the request for my Handler function to find along the way.

This didn’t feel like a good solution, and it would mean that each of my http handler functions would be peppered with this boilerplate check for whether there was a context or not, which I wasn’t happy with. As I looked at before, I could have wrapped Mux with a custom http.Handler, which would have worked, but given my recent proclivity for functional programming, I leaned more towards solving this problem by manipulating functions than by creating objects with state.

The first thing I did was define three types, to make writing out my higher-order functions easier:
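Reconstructed from the descriptions in the next paragraph:

    type HandleFunc func(http.ResponseWriter, *http.Request)

    type ContextHandlerFunc func(appengine.Context, http.ResponseWriter, *http.Request)

    type ContextHandlerToHandlerHOF func(f ContextHandlerFunc) HandleFunc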

The first type, HandleFunc, is simply a convenience type for our usual http handler function signature. The second type, ContextHandlerFunc, has the function signature I want my http handlers to have when I magically have an appengine.Context available. Finally, ContextHandlerToHandlerHOF gives me the function signature I will need to take in a ContextHandlerFunc and convert it into a HandleFunc, so that it can be used with regular http routing APIs.

Therefore, for my application code, I have the function below, which takes in a ContextHandlerFunc and returns a function that matches the HandleFunc signature, which, when invoked, will create a new appengine.Context and pass it through to the ContextHandlerFunc:
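(A sketch; the exact name is my guess.)

    func ContextHandlerToHttpHandler(ch ContextHandlerFunc) HandleFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            c := appengine.NewContext(r)
            ch(c, w, r)
        }
    }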

I then have a second function called CreateHandler. Its job is to create the mux.Router. As an argument it takes a ContextHandlerToHandlerHOF, whose job it is to make the conversion to the standard HandleFunc() format. This means we can change how the appengine.Context gets created by passing in different ContextHandlerToHandlerHOF implementations.  In this case, our init() function uses the one we want for our production code, which we defined above:
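(Sketched with a hypothetical route and handler.)

    func CreateHandler(f ContextHandlerToHandlerHOF) http.Handler {
        r := mux.NewRouter()
        r.HandleFunc("/", f(homeHandler)) // homeHandler is a ContextHandlerFunc
        return r
    }

    func init() {
        http.Handle("/", CreateHandler(ContextHandlerToHttpHandler))
    }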

This means that for my tests, I have to get a bit more creative, because I need access to my aetest.Context outside of the handler’s creation, mainly because in my tests it’s very important to Close() it when you are done.

So below you can see CreateContextHandlerToHttpHandler, which creates a ContextHandlerToHandlerHOF with the appengine.Context that is being provided. Rather than creating a new Context as in production, it simply uses the one provided.
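In code, something like this (a sketch):

    func CreateContextHandlerToHttpHandler(c aetest.Context) ContextHandlerToHandlerHOF {
        return func(ch ContextHandlerFunc) HandleFunc {
            return func(w http.ResponseWriter, r *http.Request) {
                ch(c, w, r) // close over the test Context instead of creating one
            }
        }
    }

A test can then manage the Context’s lifecycle itself:

    c, err := aetest.NewContext(nil)
    if err != nil {
        t.Fatal(err)
    }
    defer c.Close()
    handler := CreateHandler(CreateContextHandlerToHttpHandler(c))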

Now I don’t get a panic from my local Google App Engine Development server when I run my tests, as I can easily switch out how the appengine.Context is created, depending on what environment the code is running in.

I’ve also found I’ve been able to extend this approach to use functional composition for my middleware layer as well (another post for another time). All in all, I’m very happy that Go has first-class functions!

Brute Entity Component System Library 0.2.0 – The Sequel

This post could also be entitled How I Learned to Love Immutability, and You Won’t Believe What Happened Next!

A few weeks ago I released a library called Brute, which is an Entity Component System library for Clojure.  This was the first Clojure library I have released, and I wanted it to be as easy to use as possible.  Therefore, falling back on my imperative roots, I decided to maintain the internal state of the library inside itself, so that it was nicely hidden from the outside world.

That should have been my first red flag. But I missed it.

The whole time I was writing the library, I kept having thoughts of “what happens if two threads hit this API at the same time?” and worrying about concurrency and synchronisation.

That should have been the second red flag. But I missed it too.

So I released the library, and all was well and good with the world, until I got this fantastic piece of feedback shortly thereafter. To quote the salient parts:

In reading this library, one thing stuck out to me like a sore thumb: every single facet of your CES is stored inside a set of globally shared atoms.

After a bit of back and forth, there was a resounding noise as the flat of my palm came crashing into the front of my face.

Two items on the list of core foundations of Clojure are:

  1. Immutability
  2. Pure Functions

Rather than adhere to them as much as was pragmatically possible, I flew in completely the other direction. Brute’s functions had side effects, changing the internal state that was stored in its namespace, rather than keeping my functions pure and passing around simple, immutable data collections.  This made it icky, very constrained in its applications, and also far harder to test. All very bad things.

So I’ve rewritten Brute to be pure, not to maintain state internally, and simply pass around an immutable data structure, and this has made it far, far better than the original version.

Looking at the API, it’s not a huge departure from the original, but from a functional programming perspective, it’s like night and day. Suddenly all my concerns about concurrency and data synchronisation with each function call are gone – which is one of the whole points of using Clojure in the first place.

To start with Brute, you now need to create its basic data structure for storing Entities and Components.  Since Brute no longer stores data internally, it is up to the application to store the data structure state, and to choose when the appropriate time is to mutate that state. This makes things far simpler than the previous implementation.  It is expected that most of the time, the system data structure will be stored in a single atom and reset! on each game loop.

For example:
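(A sketch.)

    (require '[brute.entity :as e])

    ;; the application owns the state: one atom, reset! each game loop
    (def system (atom (e/create-system)))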

From here, (almost) every Brute function takes the system as its first argument, and returns a new copy of the immutable data structure with the changes requested. For example, here is a function that creates a Ball:
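(A sketch; the component constructors and their arguments are my assumptions.)

    (defn create-ball
      [system]
      (let [ball (e/create-entity)]
        (-> system
            (e/add-entity ball)
            (e/add-component ball (->Ball))
            (e/add-component ball (->Velocity 300)))))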

The differences here from before are:

  • create-entity now just returns a UUID. It doesn’t change any state like it did before.
  • You can see that system is threaded through each call to add-entity and add-component. These each return a new copy of the immutable data structure, rather than changing encapsulated state.

This means that state does not change under your feet as you are developing (which it would have in the previous implementation), and that makes your application a whole lot simpler to manage and develop.

Rewriting the library this way also brings some extra benefits:

  • How the entity data structure is persisted is up to you and the library you are using, which gives you complete control over when state mutation occurs – if it occurs at all. This makes concurrent processes much simpler to develop.
  • You get direct access to the ES data structure, in case you want to do something with it that isn’t exposed in the current API.
  • You can easily have multiple ES systems within a single game, e.g. for sub-games.
  • Saving a game becomes simple: Just serialise the ES data structure and store. Deserialise to load.
  • Basically all the good stuff having immutable data structures and pure functions should give you.

Hopefully this also helps shed some light on why immutability and purity of functions are deemed good things, as well as why Clojure is such a great language to develop with.

For more details on Brute, check out the full API documentation, as well as the Pong clone sample game that I wrote with the great play-clj framework (which sits on top of libGDX).

Brute – Entity Component System Library for Clojure

Warning: If you are new to Entity Component Systems, I would highly recommend Adam Martin’s blog series on them; he goes into great detail about what problems they solve, and what is required to implement them.  I’m not going to discuss what Entity Component Systems are in this blog post, so you may want to read his series first.

Spending some more fun time on game development, I wanted to use an Entity Component System library for my next project. Since I’m quite enamoured with Clojure at the moment, I went looking to see what libraries currently existed to facilitate this.

I found simplecs, which is quite nice, but I wanted something far more lightweight that used simple Clojure building blocks, such as defrecord and functions, to build your Entities, Components and Systems.  To that end, I wrote a library called brute, which (I think) does exactly that.

I wrote a Pong Clone example application to test out Brute, and I feel that it worked out quite well for my objectives.  Below are a few highlights from developing my example application with Brute that should hopefully give you a decent overview of the library.

As we said before, we can use defrecords to specify the Components of the system. For example, here are the components for the Ball:
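(Roughly like this; the field names are my assumptions, and their roles are described below.)

    (defrecord Rectangle [rect colour])
    (defrecord Ball [])
    (defrecord Velocity [vector])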

We have a:

  • Rectangle, which defines the position of the Ball, the dimensions of the rectangle to be rendered on screen, and its colour.
  • A Ball component as a marker to delineate that an Entity is a Ball.
  • Velocity to determine what direction and speed the Ball is currently travelling.

As you can see, there is nothing special here; we have just used regular old defrecord. Brute, by default, will use the Component instance’s class as the Component type, but this can be extended and/or modified (although we don’t do that here).

Therefore, to create the ball in the game, we have the following code:
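(A sketch using the mutable 0.1.0 API and plain libGDX classes; the positions and sizes are made up, and brute.entity is assumed to be required as e.)

    (defn create-ball!
      []
      (let [ball (e/create-entity!)]
        (e/add-component! ball (->Ball))
        (e/add-component! ball (->Rectangle
                                 (com.badlogic.gdx.math.Rectangle. 395 295 10 10)
                                 com.badlogic.gdx.graphics.Color/WHITE))
        ;; 300 units/sec, rotated to point in a random direction
        (e/add-component! ball (->Velocity
                                 (.rotate (com.badlogic.gdx.math.Vector2. 0 300)
                                          (float (rand 360)))))))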

This creates a Ball in the centre of the playing field, with a white rectangle ready for rendering, and a Velocity of 300 pointing in a random direction.

As you can see here, creating the entity (which is just a UUID) is a simple call to create-entity!. From there we can add components to the Entity, such as an instance of the Ball defrecord, by calling add-component! and passing in the entity and the relevant instance. Since we are using the defrecord classes as our Component types, we can use those classes to retrieve Entities from Brute.

For example, to retrieve all Entities that have Rectangle Components attached to them, it is simply a matter of using get-all-entities-with-component:
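(In the stateful 0.1.0 API, that is just:)

    ;; every entity that currently has a Rectangle component
    (e/get-all-entities-with-component Rectangle)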

From there, we can use get-component to return the actual Component instance, and any data it may hold, and can perform actions accordingly.
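For example (again a sketch of the 0.1.0 API):

    (e/get-component entity Rectangle) ; => the Rectangle instance for that entity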

Systems become far simpler in Brute than they would be when building an Entity System architecture on top of an Object-Oriented language.

Systems in Brute are simply functions that take a delta argument: the number of milliseconds that have passed since the last processing of a game tick. This leaves the onus on the game author to structure Systems how they like around this core concept, while still giving a simple and clean entry point into getting this done.

Brute maintains a sequence of System functions in a registry, which is very simple to add to through the appropriately named add-system-fn! function.

Here is my System function for keeping score:

Here we add it to the registry:
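(A sketch, assuming brute.system is required as s.)

    (s/add-system-fn! score-system)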

Finally, all registered System functions are fired using the function process-one-game-tick, which calls them in the order they were registered – and in theory, your game should run!
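Each pass of the game’s render loop then becomes (a sketch):

    (s/process-one-game-tick delta)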

For more details on Brute, check out the full API documentation, as well as the Pong clone sample game that I wrote with the great play-clj framework (which sits on top of libGDX).

As always, feedback is appreciated.

Looking for a Remote Job With Go?

Pretty exciting stuff – I’m co-founder of a new venture that is looking to do some interesting things in the e-commerce space.

We are building with Go and Google App Engine, for a variety of good reasons. Mostly because of how fast Go is (wow is it fast), and how many nice things GAE gives us out of the box that we can leverage.

No in-depth details about the venture yet, but we are looking for like-minded developers who love e-commerce, Go, and working from home. If this sounds like you, please have a look at our Careers Page, and send us through an email.

We will also be in Sydney the week of the 23rd of March (and will be attending the Go Sydney Meetup Event on the 26th), so if you are in the area and would like to talk face to face about the position, drop us a line via the email provided on our careers page, we would love to hear from you.

Writing AngularJS with ClojureScript and Purnam

I’ve got a project that I’ve been using to learn Clojure.  For the front end of this project, I wanted to use ClojureScript and AngularJS so I could share code between my server and client (and have another avenue for learning Clojure), and also because Angular is totally awesome. Hunting around for details on how to integrate AngularJS with ClojureScript, I did find a few articles, but eventually I came across the library Purnam, and I knew I had a winner. Purnam is a few things wrapped up into one:

  1. A very nice extra layer for JavaScript interoperability above and beyond what ClojureScript gives you out of the box
  2. Both a Jasmine and Midje style test framework, with integration into the great Karma test runner
  3. ClojureScript implementations of AngularJS directives, controllers, services, etcetera, which greatly reduce the amount of boilerplate code you would otherwise need
  4. ClojureScript implementations that make testing of AngularJS directives, controllers, services, etcetera a breeze to set up under the above test framework.
  5. Fantastic documentation.

I could go on in detail on each of these topics, but it’s covered very well in the documentation. Instead I’ll expand on how easy it was to get up and running with a simple AngularJS module and controller configuration, and also testing it with Jasmine and Karma to give you a taste of what Purnam can do for you.

To create an AngularJS controller, we can define it like so (I’ve already defined my module “chaperone.app” in chaperone.ng.core):
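(A sketch from memory; the macro namespaces moved between Purnam versions, so check the documentation, and the init body here is hypothetical.)

    (ns chaperone.web.admin-user
      (:require [chaperone.ng.core]) ; ensures the module is defined first
      (:use-macros [purnam.js :only [!]]
                   [purnam.angular :only [def.controller]]))

    (def.controller chaperone.app.AdminUserCtrl [$scope]
      (! $scope.init
         (fn []
           (! $scope.users []))))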

This shows off a few things:

  • def.controller: This is the Purnam macro for defining an AngularJS controller. Here I’ve created the controller AdminUserCtrl in the module chaperone.app.
  • You will notice the use of !. This is a Purnam construct that allows you to set a value on a JavaScript property, but lets you refer to that property with a dot syntax, as opposed to having to use something like (set! (.-init $scope) (fn [] ... )) or (aset $scope "init" (fn [] ...)), which you may not prefer.  Purnam has quite a few constructs that allow you to use dot notation for JavaScript interoperability, above and beyond this one.  I personally prefer the Purnam syntax, but you can always choose not to use it.

One thing I discovered very quickly: I needed to make sure I required chaperone.ng.core, which contains my Angular module definition, in the above controller’s namespace, even though it is not actually used in the code.  This was so that the Angular module definition would show up before the controller definition in the final JavaScript output. Otherwise, Angular would throw an error because it could not find the module, as it had yet to be defined.

Purnam also makes it easy to run AngularJS unit tests. Here is a simple test I wrote to check a $scope value that should have been set after running the init function on my AdminUserCtrl controller:

As you can see, Purnam takes away a lot of the usual Jasmine + AngularJS boilerplate code, and you end up with a nice, clean way to write AngularJS tests. Purnam also implicitly injects into your tests its ability to interpret dot notation on JavaScript objects, which is handy if you want to use it.

Purnam also has capabilities for testing Services, Directives, and other aspects of the AngularJS ecosystem, through its describe.ng macro, which gives you full control over which Angular elements are created through Angular’s dependency injection capabilities.

Finally, Purnam integrates with Karma, which lets you run your tests in almost any JavaScript environment, be it NodeJS or inside a Chrome web browser.

Configuring Karma is as simple as running the standard karma init command, which asks you a series of questions about what test framework you want (Jasmine) and what platform you want to test on (Chrome for me), and results in a karma.conf.js file in the root of your directory.

One thing I found quickly when setting up my karma.conf.js file was that it was very necessary to specify which files you want included, and in what order. For example:
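Mine ended up along these lines (the file names here are illustrative, not my actual paths):

    files: [
        'test/vendor/angular.min.js',
        'test/vendor/angular-mocks.js',   // must load after angular.min.js
        'target/test/unit-tests.js'
    ],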

In my initial setup I had simply used a glob of test/*.js, which caused me all sorts of grief, as angular-mocks.js needed to be loaded after angular.min.js (for example), but the resultant file list didn’t come out that way, and I got all sorts of very strange errors. Specifying exactly which files I needed, in the correct order, fixed all those issues for me.

I’m really enjoying working with the combination of ClojureScript and AngularJS, and Purnam gives me the glue to hook it all up together without getting in my way at all. Hopefully that gives you enough of a taste of Purnam to get your interest piqued, and you’ll take it for a spin as well!