Brute 0.4.0 – From CLJX to Reader Conditionals

This release of Brute provides no new features; instead, it migrates from using CLJX to support both Clojure and ClojureScript at once, to Clojure 1.7 and its (relatively) new feature of Reader Conditionals.

In terms of changing the implementing code, this was a fairly straightforward task. It was essentially a case of changing file extensions from cljx to cljc, moving the code out of the cljx folders and back into the src directory, and then converting the markers that determine where the clj implementation is switched out for the cljs implementation, depending on which platform the code is running on.

For example, the function I have for generating UUIDs used to look like this:
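(A reconstruction from memory rather than the exact Brute source; cljx marked platform-specific forms with #+clj and #+cljs, and a preprocessor stripped out the inapplicable ones per platform at build time:)

```clojure
(defn create-uuid
  "Create a new UUID, appropriate to the platform."
  []
  #+clj (java.util.UUID/randomUUID)
  #+cljs (random-uuid)) ;; assuming a random-uuid fn on the cljs side
```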

But now it looks like this:
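(Again a sketch. The #?( ... ) reader conditional is resolved at read time, so each platform only ever sees its own branch:)

```clojure
(defn create-uuid
  "Create a new UUID, appropriate to the platform."
  []
  #?(:clj  (java.util.UUID/randomUUID)
     :cljs (random-uuid))) ;; again assuming random-uuid on the cljs side
```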

As you can see, there are some minor syntactic changes, but essentially it’s the same structure and format. Not a hard thing to do with some time and some patience.

The more interesting part was finding a test framework that worked well with reader conditionals. Previously, I had all my tests implemented in Midje, but found that its autotest functionality doesn’t work with reader conditionals, which for me was its biggest feature. Midje also doesn’t have core support for ClojureScript; that was instead implemented by a different developer as part of the purnam library, and while the ClojureScript implementation is quite good, it hadn’t been touched in ten months, which had me concerned.

After hunting around a bit, I ended up settling on good ol’ clojure.test and cljs.test. The combination is well supported across both platforms and, as I recently discovered, has amazing integration with Cursive Clojure! It took me a little while to get all the tests ported across, but otherwise the experience was very smooth. Now I have a testing platform I can continue to develop on, and one I know will continue to support both Clojure and ClojureScript for the foreseeable future.
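As a small illustration of why this combination works so nicely (a sketch, not Brute’s actual test code), a single .cljc test namespace can pull in the right framework per platform with one reader conditional:

```clojure
(ns brute.entity-test
  (:require #?(:clj  [clojure.test :refer [deftest is]]
               :cljs [cljs.test :refer-macros [deftest is]])
            [brute.entity :as e]))

(deftest entities-are-unique
  ;; create-entity returns a fresh UUID on both platforms
  (is (not= (e/create-entity) (e/create-entity))))
```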

I have some minor enhancements for Brute that I will probably jump on at some point in the near future, but as always, feature requests and bug reports are always appreciated.

Recording: Containers in Production – Crazy? Awesome? or Crazy Awesome!

Late last year I was invited to participate in a panel discussion at New Relic’s FutureStack conference on using Software Containers (which generally means Docker) in production environments.  We had an excellent discussion about what is good about the ecosystem, what needs to improve, and best practices and approaches for running Containers in production environments – at either the large scale or the small.

I was super happy that I managed to get one of my main talking points about Containers into the conversation, specifically that Containers do not necessarily equal micro-services. I’ve seen people feel that these two are inextricably linked, and they definitely are not. You are perfectly able to leverage the benefits of Containers without taking on the extra complexity of micro-services (which have their own set of pros and cons) if you do not want to.

During the discussion I referenced a talk by John Wilkes, discussing Borg at Google and how it directly influenced the design decisions behind Kubernetes. It’s one of my favourite presentations, and well worth a watch as well.

FutureStack was a great event, and it was a pleasure to be able to attend.

Recording: Wrapping Clojure Tooling in Containers (Part 2)

A few weeks ago, I had the distinct opportunity to attend and present at clojure/conj in Philadelphia. This was the first time for me attending the event, but it had been on my list of conferences to be at for a very long time.  Now that I live in San Francisco, it’s great that I can take advantage of the situation and get to attend the events I had watched enviously from across the ocean. Especially since I’ve been playing with and (very occasionally) working with Clojure for a while now.

Wrapping Clojure Tooling in Containers (link to video)

The talk I gave at the conference was an update of a previous talk I had done at the local Clojure Meetup. The difference being that when I wrote the original talk, I was attempting to build Docker development environments that were one size fits all, by leveraging ZSH. Since then, I’ve switched to developing with per-project Docker development environments, powered by Makefiles that are shipped along with, and contain all the dependencies for, said project. This talk covers that change, as well as several other tips and tricks I’ve discovered along the way (cross-platform GUIs running in containers, anyone?).
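To give a rough flavour of the pattern (an illustrative Makefile; the image name and paths are not from the talk’s repository):

```makefile
# Hypothetical per-project dev environment: build an image containing
# all the project's tooling, then drop into a shell inside it.
IMAGE := my-project/dev

build:
	docker build -t $(IMAGE) .

# Mount the project into the container and start a shell there.
shell: build
	docker run --rm -it -v $(CURDIR):/project -w /project $(IMAGE) /bin/bash
```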

Hopefully you can’t tell, but the first section of the presentation was given without the slides. We had some technical difficulties with my laptop and the projector, but the excellent people at the event managed to get it all working just in the nick of time for the demo, and while it did cut my presentation time a little short, I still had time to cover the points I wanted to cover.

If you are interested in the code that powers this talk, it is all up on GitHub, so you can pick it apart at your leisure.


Recording: Scaling Node.js with Docker and Kubernetes

Last month I had the pleasure of presenting at the Connect.js conference in Atlanta, Georgia, on scaling Node.js with Docker and Kubernetes.

I really enjoyed the conference, and giving this talk. I feel that Kubernetes really shows the power of what software containers can do to give generic solutions to general application development problems like scaling and deploying, regardless of language or application design.

Unfortunately the audio isn’t the best (and the slides are a little squished), but it’s definitely watchable. Huge thanks to Connect.js for taking the time to make the recordings and pushing them live.

You can also grab all the source code for review as well.

Recording: Wrapping Clojure Tooling in Containers

I recently had the pleasure of doing a short presentation to the Bay Area Clojure User Group on Wrapping Clojure Tooling in Containers.

We went from having no Clojure tooling, and Java 7 on my host machine, to quickly firing up a new terminal shell running in a Docker container with Leiningen pre-installed along with Java 8.
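A minimal sketch of the kind of image involved (the java:8 base image and the standard Leiningen install script are my assumptions; the demo’s actual Dockerfile may differ):

```dockerfile
FROM java:8

# Leiningen is a shell script that bootstraps itself on first run;
# LEIN_ROOT stops it complaining about running as root.
ENV LEIN_ROOT=1
RUN curl -fsSL -o /usr/local/bin/lein \
        https://raw.githubusercontent.com/technomancy/leiningen/stable/bin/lein \
    && chmod +x /usr/local/bin/lein \
    && lein version

WORKDIR /project
CMD ["lein", "repl"]
```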

This lets us create a Docker container that we can then share with our team, or with a wider open source community, that we know isn’t going to change, except for the parts we want to change.

We also discussed file permission issues between hosts and containers, and showed off an interesting solution for sharing a JVM inside a container with the host.

Thanks to my co-worker Francesc for recording the talk, and putting it up on YouTube!

Configuring Your GOPATH with Go and Google App Engine

When I started working with Google App Engine and Go, I wasn’t sure how to best configure my GOPATH when developing Google App Engine applications.  You can find documentation on this aspect on the Go and App Engine page, but being new to both Go and App Engine I was not aware of what options were available, and what their pros and cons would be.

If you are not sure what a GOPATH is, I recommend this video tutorial on setting up a Go workspace and running and testing code, as it helped explain the concept to me.  It is also worth noting that some Go programmers have a single GOPATH they use for all their projects on a given computer; however, for the sake of this article, we will be considering the use of a separate GOPATH per project instead.

Up to a point, you can actually get away with not having a GOPATH at all when developing with App Engine. If you have a simple project with a single directory and no dependencies, you have no need to set a GOPATH, and wouldn’t notice any difference if it was missing.  On top of this, if you do have dependencies, goapp will actually store anything you goapp get in the Google App Engine SDK’s gopath subdirectory.

However, I strongly advocate having a GOPATH set as a mainstay of developing with Go and App Engine.  Managing your code the idiomatic Go way, i.e. with a GOPATH, will ensure that your code remains manageable as it gets more sophisticated and complex, and that all the dependencies for a specific project are retained within its specific GOPATH, not shared between any projects using the SDK.  Using a GOPATH has an added benefit if you ever switch between regular Go development and App Engine development: there should be minimal context switching in your development approach and toolchains.

The aim of this post, and the attached sample code, is to show several options for GOPATH and dependency management when working with Go and App Engine, while exploring some of the pros and cons of each approach. Understanding these options will enable you to start with an initial code layout and GOPATH strategy that will work with your project at its start and well into its lifecycle.


Initial Configuration

To get started, let’s git clone this project, and have a look at its structure.

[screenshot: the project’s directory structure]

This looks very much like a regular Go project. We have a src folder that contains our Go code. Within that, we have a modules folder that contains three different App Engine Modules (basic, vendored, and gb), each implemented with a different GOPATH strategy.  Within each module subfolder, there is an app.yaml file that holds the App Engine settings for that module, and a routes.go file that specifies the HTTP endpoints for that module.  We also have a lib folder, which contains code shared by all three modules, to show one way we can share code between App Engine Modules under all three GOPATH structures.

For the sake of this article, I’m assuming that the Google Cloud SDK and the Go App Engine SDK are already installed on your system.  That being said, it is worth noting at this point that this whole project is totally Make driven.  The Makefiles specify what the GOPATH is set to, and perform all our operations on this code base. So, if you want to follow along at home, you don’t need to worry about corrupting an already-set GOPATH or other environment variables, as this example code will not alter them in any way.

I used a few general tools, such as golint and goimports, in developing this project, some of which we will look at while we go through this example, so you will need to install them if you decide to run through the code yourself:

[screenshot: installing golint and goimports into ./bin]
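The commands are along these lines (a sketch; the import paths are from memory and may differ from the original):

```makefile
# Install the helper tools into the project's ./bin folder.
tools:
	GOBIN=$(CURDIR)/bin go get -u github.com/golang/lint/golint
	GOBIN=$(CURDIR)/bin go get -u golang.org/x/tools/cmd/goimports
```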

Now that the tools are in your ./bin folder, your Makefiles can reference them.


A Basic GOPATH

This is the simplest implementation. We have a GOPATH with a single entry (more on that later), and we are using the basic goapp tooling that is provided with the App Engine SDK (also more on that later).

Let’s take the opportunity to look at the code in ./src/modules/basic/routes.go
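Here is a sketch reconstructed from the description that follows (the lib import path and the display helper’s signature are my assumptions, not the exact source):

```go
package basic

import (
	"net/http"

	"lib/reverse"

	"github.com/nu7hatch/gouuid"
)

func init() {
	http.HandleFunc("/", handle)
}

func handle(w http.ResponseWriter, r *http.Request) {
	id, err := uuid.NewV4()
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	// display lives in template.go, and GiveMeANumber in number.go,
	// both in this same package.
	display(w, "basic", id.String(), GiveMeANumber(), reverse.Reverse(id.String()))
}
```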

This is a simple HTTP handler, which uses the display template helper in template.go to return some HTML that shows us the module name, and uses our lib dependency reverse and the third-party dependency github.com/nu7hatch/gouuid to output several values on screen.  We also call a GiveMeANumber() function that is implemented in number.go, in the same directory as the routes.go file.

First things first, let’s have a look at the GOPATH the Makefile has set. There is a handy Make target named debug-env that shows us all the Go environment variables that are set.
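The relevant Makefile pieces look roughly like this (a sketch):

```makefile
# A single-entry GOPATH, rooted at the repository.
export GOPATH := $(CURDIR)

debug-env:
	env | grep GO
```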

[screenshot: output of make debug-env, showing the single-entry GOPATH]

We can see here that the GOPATH is set to the directory we cloned this repository to, which keeps things very simple.

To install our third-party dependencies, we have a deps target in our Makefile that uses goapp get to download the third-party dependency github.com/nu7hatch/gouuid.
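The target itself is tiny (a sketch):

```makefile
deps:
	goapp get github.com/nu7hatch/gouuid
```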

Let’s run this target, and see where our code ends up.

[screenshot: output of make deps, showing gouuid under ./src/github.com/nu7hatch/gouuid]

We can see here that the gouuid package is stored under ./src/github.com/nu7hatch/gouuid, which anyone who is used to working with regular GOPATHs would expect.

This single level GOPATH approach works well in that it is very clear and easy to understand. Every piece of Go code you use is placed in the same directory and you know exactly where it all sits.  The downside to this approach is that all your third-party dependencies can get mixed into your custom code base, which can feel kind of messy and could potentially be confusing.

Let’s run this, and see it in action. Our Makefile has a serve target that will spin up a local App Engine instance for development:
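(A sketch of the target; goapp serve takes the directory containing the module’s app.yaml:)

```makefile
serve:
	goapp serve src/modules/basic
```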

[screenshot: make serve starting the local App Engine development server]

Browsing to http://localhost:8080 we can see the result we wanted: a UUID, an integer, and our UUID reversed:

[screenshot: the basic module’s output in the browser]


A Vendored GOPATH

This GOPATH implementation is slightly more complex, but nicely separates our third-party dependencies from our own custom code.  We are still using the standard goapp tool, but we implement a two-level GOPATH to allow our dependencies to be placed in a different location.

Let’s have a look at the code in ./src/modules/vendored/routes.go

We can see that the code is essentially the same as before. We still have the dependency on the third-party library github.com/nu7hatch/gouuid, we have a local function in this module named GiveMeACapitalLetter(), and we are outputting several values to an HTML page through our display template.

Again, let’s look at the GOPATH for this module, using the debug-env Makefile target:

[screenshot: output of make debug-env, showing the two-entry GOPATH]

We can see that the GOPATH set here has a : in the middle of it. This makes Go look in both /home/mark/workspace/appengine-golang-gopath/vendor and /home/mark/workspace/appengine-golang-gopath when looking for Go source code. It’s also worth noting that goapp get (and go get) will place any dependencies it retrieves in the first path in the GOPATH list, which, as you’ll see shortly, is very useful behaviour.
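In Makefile terms, the setup is roughly this (paths illustrative):

```makefile
# Vendor directory first, project root second, so `goapp get` drops
# third-party code under ./vendor.
ROOT := $(CURDIR)
export GOPATH := $(ROOT)/vendor:$(ROOT)
```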

Let’s clean out our old dependencies, re-run make deps in this module, and see where our uuid third-party dependency ends up:

[screenshot: output of make deps, showing gouuid under ./vendor]

This is getting interesting! Rather than our third-party dependency being stored in the same directory as our custom code, it gets placed in a vendor directory. This means that there is a very clear separation between our dependencies and what we are authoring, and there is very little chance for confusion between the two, at the expense of having a slightly more complex GOPATH configuration.

Let’s run make serve to see our code run.

[screenshot: make serve starting the local App Engine development server]

Browsing to http://localhost:8080 we again can see the result we wanted: a UUID, a letter, and our UUID reversed:

[screenshot: the vendored module’s output in the browser]


GB

This approach is a bit more interesting, in that it uses no GOPATH at all. Instead it uses a tool called gb, recently written by Dave Cheney. This tool is one of many Go dependency management tools in existence, but it has risen quickly in popularity, and has become one of my favourite tools when developing Go applications across the board. It rewrites the Go tool chain to make project-based development easier, and it has an ecosystem of plugins to help write Go and, in our case, Google App Engine applications.

Having a look at our routes source code in ./src/modules/gb/routes.go, we can see that the code is almost identical to our last two examples:

The only difference from our previous routes.go is that we have a different package-local function, GiveMeASymbol(), which returns a random ASCII symbol.

While the overall code structure looks the same, let’s have a look at our GOPATH:

[screenshot: output of make debug-env, showing no GOPATH set]

Wow, there is no GOPATH at all! gb instead goes looking for a directory that has a src subdirectory, which a GOPATH-oriented project usually has, so no changes are needed there either.  This is one of the nice things about gb: you don’t have to worry about environment variables; all your code organisation happens through convention.

When we run our make deps target, you can see that the usual goapp get commands have been switched out for gb vendor fetch commands. This is powered by the optional gb-vendor plugin, which downloads third-party dependencies and works slightly differently from the standard goapp get.
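The deps target becomes something like this (a sketch):

```makefile
deps:
	gb vendor fetch github.com/nu7hatch/gouuid
```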

[screenshot: make deps running gb vendor fetch]

The gb-vendor plugin downloads third-party dependencies into a vendor folder, almost exactly the same as we had before, but without having to directly specify it in the GOPATH. This approach gives you the same separation of third-party dependencies from your custom code, but without the extra work of managing your own GOPATH configuration.

Note that gb-vendor also creates a manifest file in the vendor directory:
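It looks roughly like this (a sketch; the field values, particularly the revision, are placeholders):

```json
{
    "version": 0,
    "dependencies": [
        {
            "importpath": "github.com/nu7hatch/gouuid",
            "repository": "https://github.com/nu7hatch/gouuid",
            "revision": "<git sha of the fetched commit>",
            "branch": "master"
        }
    ]
}
```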

This is a central record of exactly which version of each dependency you have downloaded. This is useful, as you can then share this manifest in your source control, and other team members and build systems can ensure that they all have exactly the same dependencies, without having to store the third-party code itself in your repository if you don’t want to. (Update: 16th July, 2015 – the gb-vendor plugin is of the opinion that you should store your vendored dependencies in your source control. See this blog post for the reasons.)

gb allows for plugins, and there is a community-contributed gb-gae plugin that integrates gb with Google App Engine. In the gb module’s Makefile, we use this to start up the local App Engine development server when we run the make serve target:
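(A sketch, assuming the plugin exposes a serve subcommand:)

```makefile
serve:
	gb gae serve
```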

[screenshot: make serve starting the development server via gb gae]

Browsing to http://localhost:8080 we can see the result we wanted: a UUID, a symbol, and our UUID reversed:

[screenshot: the gb module’s output in the browser]

It is worth noting that the gb-gae plugin can also be used to build, test, and deploy App Engine applications, so it can cover all your Go and App Engine needs.


Conclusion

To recap, we’ve gone through several GOPATH solutions here that can work for building Go applications on App Engine. The key things to remember are:

If you like simplicity above all else, the basic, single-layer GOPATH may be the right option for you.

If you like clear separation between your third-party dependencies and your own code, the dual-layer GOPATH that vendors your dependencies may be right for you.

If you like a tool that not only vendors your dependencies, but also manages which version is being used across teams and platforms and has an ecosystem of plugins as well, the gb approach may be right for you.

Hopefully that has given you some ideas on how you would like to structure the code in your next Go App Engine project.  Good luck, and happy Go coding!

All code shown here is licensed under Apache 2. For more details, find the original source on GitHub.

Migrating My Blog to Google Cloud Platform

Since I am now working for Google, and specifically the Google Cloud Platform, I took the opportunity to test out our Cloud Launcher offerings to migrate this blog over to the Cloud Platform as quickly as possible.

This site runs on WordPress, mainly because I found it the easiest way to migrate all the content I have written from 2004 onwards; since then, it’s been a stable and easy-to-use platform.

There are several options for running WordPress on Google Cloud Platform, including, as I recently found out, running it on App Engine, but the Cloud Launchers let you create an instance on Google Compute Engine, which is our Infrastructure as a Service offering.  I didn’t need to install any SDK tools to get WordPress installed and running, or to implement my specific customisations; I could do it all through the Developer Console in the browser.

Going to the Cloud Launcher page and typing in “WordPress” gives you several results, including two separate providers for a single WordPress install.  I ended up choosing the Bitnami solution for the following reasons:

WordPress Launcher

It is worth noting that this install does have the following caveats:

The installation screen of the WordPress Launcher is fairly straightforward, including automatically opening network ports for HTTP and HTTPS traffic.

If you want to have a static IP (which I knew I did), make sure to open up the Management, disk, networking, access & security options, and select Networking. If you look at the dropdown for External IP, you are able to create a new static IP right then and there.

Network Configuration

After clicking the Create VM button and waiting a few minutes for the virtual machine to be initialised, I had a brand new WordPress install with a temporary admin password and some sample WordPress plugins installed, ready to go.

My next task was to migrate across the custom theme that my blog uses, which meant SSH’ing into the server.  Personally, I hate having to worry about managing all the security keys I have for various servers.  The Developer Console makes this ridiculously simple: click the SSH button on the console, and it starts up a bash console in your VM.

Bash in the browser

From here it was very easy to transfer my theme across to this new machine and install it in the appropriate WordPress directory.

I used the WordPress Import/Export Tool to port across all my content, which included comments and images, and it worked perfectly.  I did manually re-install my WordPress plugins, such as Akismet, Crayon Syntax Highlighter and W3 Total Cache, but it only took me 10 minutes to copy and paste the configurations across from one browser window to another.

That is really it. Moving my blog to Google Cloud Platform was very simple, and I didn’t have to install a single SDK or download any SSH keys.

Some fun things to do once you have your WordPress install up and running:

It’s worth noting that if WordPress is not your thing, you can also check out our other Cloud Launcher options for blogs, of which we have a few, including Ghost and Publify.

If you are interested in trying this out, sign up for a free trial. You get up to 60 days to play around with Google Cloud Platform, and this is an easy way to test out the platform with no risk.

Hello USA, and Hello Google

I guess I must have decided that life was too simple and boring, and I needed to change pretty much every aspect of my life.

Change All The Things

In just under a week, I’ll be moving my entire family up to the Bay Area in California from our home here in Melbourne, Australia, and shortly thereafter joining the Developer Advocate team for the Google Cloud Platform, working out of the San Francisco office.

This is going to be a big difference from the past few years of my life. Not only are we all (dog included) shifting over to a different country, but this role is quite different from what I have been doing professionally up until this point.  That being said, I’m really excited to join the Developer Advocate team, as it gives me a chance to do all the things I used to do on the side for fun, but full time: presenting, talking to people, building community and generally having smart conversations with super smart people to enable them to build bigger and better things.

The Google Cloud Platform is a really interesting piece of technology and it’s going to be incredibly enjoyable to dig deeper into the parts that I’ve already worked with, as well as have a good look at the parts I have yet to explore.

I’ll be going into an office again, which is going to be an adjustment after working from home for the past seven years. That being said, I think I will manage to cope with the difference, given the awesome offices that Google has on offer and the very intelligent people I will be working alongside. The fact that Google is a dog-friendly workspace also helps, although I’ve no idea if I will be able to convince Sukie to get onto the BART.

I’m also very much looking forward to working alongside the wonderful Terry Ryan. I’ve known Terry for many years through various Adobe circles and have always had a lot of respect for him, so being on the same team is going to be an absolute pleasure.

Last but not least, I have to give a huge amount of thanks to my wife Amy. Without her by my side, this most definitely would not have been possible. The Google hiring process is nothing short of gruelling, and she was there with me every step of the way, supporting and encouraging me whenever I needed it. Not to mention the fact that she also agreed to leave all her family and friends here in Melbourne and travel with me halfway around the world, which is no small feat. She’s pretty ace.

Next stop, USA!

Mini – Game Dev Diary #1

I’ve been having lots of fun this break getting back into writing a top-down racing game that I had originally started (way) earlier in the year, so I thought I would start writing a little dev diary on it, to help me keep up my momentum in developing it.  It’s still very much in the prototype phase, but I’m starting to see real things come out of it.

The basic gist of the game is:

  • Top down racing game, very much inspired by the early Micro Machines PC game of my childhood years (hence the code name Mini for the game).
  • I want the steering and handling to be “drifty” and very arcade-like – basically not technical, and lots of fun to play.
  • I have this feeling of wanting a lot of “bounce” between artefacts in the game. For example, if you hit a wall, you don’t stop, you ricochet off it. We’ll see how this pans out in actual gameplay though.

I have a few more ideas on top of this, but this gives you a feel for what I am going for.

I wanted to write more Clojure, so I ended up picking it as my language of choice, using libGDX as my game development framework and Box2D as my physics engine. Clojure is awesome, and libGDX is a great framework, but in retrospect, I’ve been wondering if it would have been faster to write this in something like Unity instead.  That being said, I’m being productive, and I do enjoy writing Clojure, so I’ll continue on the current course for now (when I started, Arcadia Unity didn’t exist either).

I also chose to use Brute, my entity component framework, as the other main library to write my game with. So far, I’ve been very happy with it, and I’ve been able to add any features I needed quite easily to the library.

The first thing I did (and what took the longest) was to write my own wrapper around libGDX to use with Clojure. It would have been far faster to use play-clj, which I have used in the past, but I had previously found it had issues with clojure.tools.namespace and having a user namespace you could reset your state with, as in the Clojure Reloaded Workflow.  I probably should have spent more time trying to get play-clj to work better with a reloaded workflow, because it took me at least three months of my spare time (and an entire CampJS weekend) to get my wrapper for libGDX to a place I was genuinely happy with.

For the car and the steering, I went with a super simple approach. There are a whole load of articles on how to simulate a top-down car in Box2D, but I didn’t want a simulation; I wanted something fun and arcadey, and also something I could implement easily. Therefore, my car is just a rectangle, which gets pushed from the back when accelerating, pushed from the front when braking, and pushed from the top left and right when turning.

This was quick and easy to do; however, it gives the vehicle a very “floaty” feel as you drive (or you could see it as me getting an extra helping of the drift I was looking for). If you have ever played the original Asteroids, you know exactly the movement I’m talking about, so I had a new problem to solve.  I quickly surmised that what I needed to do was fake the auto-correction you get when driving a car and you stop turning and let go of the steering wheel, but keep accelerating; I was just quite unsure how to make this happen. After some fruitless Googling and several way-too-complicated solutions, I realised I could simply reduce the car’s angular velocity if the up arrow was pressed (car accelerating) but neither the left nor the right key was, and this seems to work really, really well.

I dropped in some sample wall blocks to drive around, and tweaked the numbers until I was happy with how the steering felt.

You can’t really see the auto-correction on the steering working in the video, but here is the code that powers it:
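(A sketch of the idea rather than the exact game code; get-body is a hypothetical helper, brute.entity is aliased as e, and the 0.9 damping factor is illustrative:)

```clojure
(defn accelerate-car
  [system delta up? left? right?]
  (doseq [entity (e/get-all-entities-with-component system Car)]
    (let [^com.badlogic.gdx.physics.box2d.Body body (get-body system entity)]
      ;; ... apply driving/braking/turning forces here ...
      (when (and up? (not left?) (not right?))
        ;; fake steering auto-correction: bleed off angular velocity
        ;; while accelerating straight, so the car rights itself
        (.setAngularVelocity body (* 0.9 (.getAngularVelocity body))))))
  system)
```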

The input system calls accelerate-car directly with various inputs, depending on which arrow keys are pressed. Many of the magic numbers that determine how the car handles are set on the Car component itself, so down the line I can have different models of cars with different handling and acceleration.

Finally, I needed the camera to always keep the car in the centre of the screen. This would mean I could have tracks that are bigger than the display, and it also goes back to the feel of that original Micro Machines game.  This was far easier than I had anticipated.  I created a Cam component and attached it to my player car. From there it was just a matter of updating the current Camera with the centre position of the Sprite that has the Cam component, and everything worked perfectly.

The code is as follows:
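(Again a sketch; get-sprite and get-camera are hypothetical helpers:)

```clojure
(defn process-camera
  [system delta]
  (doseq [entity (e/get-all-entities-with-component system Cam)]
    (let [^com.badlogic.gdx.graphics.g2d.Sprite sprite (get-sprite system entity)
          ^com.badlogic.gdx.graphics.OrthographicCamera camera (get-camera system)]
      ;; centre the camera on the sprite, then push the change through
      (.set (.position camera)
            (float (+ (.getX sprite) (/ (.getWidth sprite) 2)))
            (float (+ (.getY sprite) (/ (.getHeight sprite) 2)))
            (float 0))
      (.update camera)))
  system)
```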

What I found quite surprising was that by changing the camera to follow the car, I was no longer happy with how the car’s handling felt. It felt kind of sluggish now, even though I hadn’t changed any of the values I had previously set.  I’ll leave it alone for the moment, and come back to it once I have some more elements of the game in place.

Coming up next, I want to lay out a simple track I can drive around, and then I can work out which features I want to prioritise from there.

Brute 0.3.0 – Now Supporting ClojureScript

Brute has a few new features in this new release. The most exciting is that, thanks to the cljx project and the hard work of Martin Janiczek, Brute now supports both Clojure and ClojureScript!

There are also a couple of other new additions, including an implementation of update-component that takes a function and arguments, allowing you to functionally change data within the system (thanks to Yair Iny).

For example:
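(A sketch; the Position component, player entity, and system are assumed to already exist:)

```clojure
(require '[brute.entity :as e])

;; Apply a function (plus any extra args) to the Position component
;; attached to player, returning the updated system.
(def moved-system
  (e/update-component system player Position
                      (fn [pos] (update-in pos [:x] + 5))))
```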

Also, if you have a system function that you only want to run every n milliseconds (a physics library, for instance), you can now throttle system functions.

For example:
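(A sketch; I’m assuming the add-throttled-system-fn name from the Brute API of the time:)

```clojure
(require '[brute.system :as s])

(defn physics-system [system delta]
  ;; step the physics simulation here
  system)

;; Run the physics system at most once every ~16ms (60fps), no matter
;; how often the game loop ticks.
(def throttled-system
  (s/add-throttled-system-fn system physics-system (/ 1000.0 60)))
```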

Hope you enjoy these new features, and as always, feedback and pull requests are always welcome!