Hello USA, and Hello Google

I guess I must have decided that life was too simple and boring, and I needed to change pretty much every aspect of my life.

Change All The Things

In just under a week, I’ll be moving my entire family up to the Bay Area in California from our home here in Melbourne Australia, and shortly thereafter joining the Developer Advocate team for the Google Cloud Platform, working out of the San Francisco office.

This is going to be a big difference from the past few years of my life. Not only are we all (dog included) shifting over to a different country, this role is quite different from what I have been doing professionally up until this point. That being said, I’m really excited to join the Developer Advocate team, as it gives me a chance to do all the things I used to do on the side for fun, but full time: presenting, talking to people, building community and generally having smart conversations with super smart people to enable them to build bigger and better things.

The Google Cloud Platform is a really interesting piece of technology and it’s going to be incredibly enjoyable to dig deeper into the parts that I’ve already worked with, as well as have a good look at the parts I have yet to explore.

I’ll be going into an office again, which is going to be an adjustment after working from home for the past seven years. That being said, I think I will manage to cope with the difference given the awesome offices that Google has on offer, and the very intelligent people I will be working alongside. The fact that Google is a dog-friendly workspace also helps, although I’ve no idea if I will be able to convince Sukie to get onto the BART.

I’m also very much looking forward to working alongside the wonderful Terry Ryan. I’ve known Terry for many years through various Adobe circles and have always had a lot of respect for him, so being on the same team is going to be an absolute pleasure.

Last but not least, I have to give a huge amount of thanks to my wife Amy. Without her by my side this most definitely would not have been possible. The Google hiring process is nothing short of gruelling, and she was there with me every step of the way, supporting and encouraging me whenever I needed it. Not to mention the fact she also agreed to leave all her family and friends here in Melbourne and travel with me halfway around the world, which is no small feat. She’s pretty ace.

Next stop, USA!

Mini – Game Dev Diary #1

I’ve been having lots of fun this break getting back into writing a top down racing game that I had originally started (way) earlier in the year, so I thought I would start writing a little dev diary on it, to aid me in keeping up my momentum in developing it.  It’s still very much in the prototype phase, but I’m starting to see real things come out of it.

The basic gist of the game is:

  • Top down racing game, very much inspired by the early Micro Machines PC game of my childhood years (hence the code name Mini for the game).
  • I want the steering and handling to be “drifty” and very arcade like – basically not technical and lots of fun to play.
  • I have this feeling of wanting a lot of “bounce” between artefacts in the game. For example, if you hit a wall, you don’t stop, you ricochet off it. We’ll see how this pans out in actual game play though.

I have a few more ideas on top of this, but this gives you a feel for what I am going for.

I wanted to write more Clojure, so I ended up picking this as my language of choice, and then using libGDX as my game development framework, and Box2d as my physics engine. Clojure is awesome, and libGDX is a great framework, but in retrospect, I’ve been wondering if it would have been faster to write this in something like Unity instead.  That being said, I’m being productive, and I do enjoy writing Clojure, so I’ll continue the current course for now (When I started, Arcadia Unity didn’t exist either).

I also chose to use Brute, my entity component framework, as the other main library to write my game with. So far, I’ve been very happy with it, and I’ve been able to add any features I needed quite easily to the library.

The first thing I did (and what took the longest) was to write my own wrapper around libGDX to use with Clojure. It would have been far faster to use play-clj, which I have used in the past, but I had previously found it had issues with clojure.tools.namespace and having a user namespace you could reset your state with, as in the Clojure Reloaded Workflow. I probably should have spent more time trying to get play-clj to work better with a reloaded workflow, because it took me at least three months of my spare time (and an entire CampJS weekend) to get my wrapper for libGDX to a place that I was genuinely happy with.

For the car and the steering I went with a super simple approach. There are a whole load of articles on how to simulate a top down car in Box2d, but I didn’t want a simulation; I wanted something fun and arcadey, and also something I could implement easily. Therefore, my car is just a rectangle, which gets pushed from the back when accelerating, pushed from the front when braking and pushed from the top left and right when turning.

This was quick and easy to do; however, it gives a very “floaty” feel to your vehicle as you drive (or you could see it as me getting an extra helping of the drift I was looking for). If you have ever played the original Asteroids, you know exactly the movement I’m talking about, so I had a new problem to solve. I quickly surmised that what I needed to do was fake the auto correction you get when driving a car when you stop turning and let go of the steering wheel but keep accelerating, but I was quite unsure how to make this happen. After some fruitless Googling and several way too complicated solutions, I realised I could simply reduce the Car’s angular velocity if the up arrow was depressed (car accelerating) but neither the left nor right key was, and this seems to work really, really well.

I dropped in some sample wall blocks to drive around, and tweaked the numbers until I was happy with how the steering felt.

You can’t really see the auto correction on the steering working in the video, but here is the code that powers it:
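In sketch form, it looks something like this. This is a reconstruction, not the exact source: the argument list, the :body and :angular-damping fields, and the forward-force helper are all assumptions. The key part is the last `when`.

```clojure
;; Sketch: accelerating with no steering input bleeds off angular
;; velocity, faking the auto correction of a real steering wheel.
(defn accelerate-car
  [system delta car up? left? right?]
  (let [body (:body car)]
    (when up?
      (.applyForceToCenter body (forward-force car) true))
    (when (and up? (not left?) (not right?))
      (.setAngularVelocity body
                           (* (:angular-damping car)
                              (.getAngularVelocity body))))
    system))
```

An :angular-damping value just under 1 gives a gentle straightening out; closer to 0 snaps the car straight almost immediately.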

The input system calls accelerate-car directly with various inputs, depending on what arrow keys are pressed. Many of the magic numbers that determine how the Car operates are set on the Car component itself, so I can have different models of cars down the line that can have different handling and acceleration.

Finally, I needed the camera to always have the car in the centre of the screen. This would mean I could have tracks that are bigger than the display, and it also goes back to the feel of that original Micro Machines game. This was far easier than I had anticipated. I created a Cam component and attached it to my player car. From there it was just a matter of updating the current Camera with the centre position of the Sprite that has the Cam component, and everything worked perfectly.

The code is as follows:
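A hedged reconstruction of that camera pass (the component lookups and where the camera lives in the system map are assumptions; the camera itself is a libGDX OrthographicCamera):

```clojure
;; For every entity with a Cam component, centre the camera on the
;; middle of that entity's sprite, then push the update to libGDX.
(defn camera-follow-system
  [system delta]
  (doseq [entity (get-all-entities-with-component system Cam)]
    (let [sprite (:sprite (get-component system entity Sprite))
          camera (:camera system)]
      (.set (.position camera)
            (+ (.getX sprite) (/ (.getWidth sprite) 2))
            (+ (.getY sprite) (/ (.getHeight sprite) 2))
            0)
      (.update camera)))
  system)
```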

What I found quite surprising was that by changing the camera to follow the car, I was no longer happy with how the Car’s handling felt. It felt kind of sluggish now, even though I hadn’t changed any of the values I had previously set. I’ll leave it alone for the moment, and come back to it once I have some more elements of the game in place.

Coming up next, I want to lay out a simple track I can drive around, and then I can work out which features I want to prioritise from there.

Brute 0.3.0 – Now Supporting ClojureScript

Brute has a few new features with this new release. The most exciting is that thanks to the cljx project, and the hard work of Martin Janiczek, Brute now supports both Clojure and ClojureScript!

There are also a couple of smaller new features, including an implementation of update-component that takes a function and arguments, allowing you to functionally change data within the system (thanks to Yair Iny).

For example:
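Something like the following, where the Score component is hypothetical; update-component applies the supplied function (and any extra arguments) to the existing component and returns a system containing the result:

```clojure
(defrecord Score [total])

;; Add 10 to the Score component's :total, returning the new system.
(def system
  (update-component system entity Score update :total + 10))
```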

Also, if you have a system function that you only want to run every n milliseconds (a physics library, for instance), you can now throttle system functions.

For example:
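From memory, the relevant function is add-throttled-system-fn; treat the exact name and signature here as an approximation:

```clojure
;; Register physics-system, but only let it fire at most once every
;; 16 milliseconds (~60fps), however fast the game loop itself ticks.
(def system
  (add-throttled-system-fn system physics-system 16))
```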

Hope you enjoy these new features, and as always, feedback and pull requests are always welcome!

Testing Go Http Handlers in Google App Engine with Mux and Higher Order Functions

For those people that aren’t familiar with building applications with Go and Google App Engine, the core data structure used whenever you need to make a request to any of the provided Google App Engine Services is a Context.

Usually, this is created by passing in the http.Request object that is created when serving an HTTP request. However, when you want to automate the testing of your HTTP request handlers, you usually do something like the following:
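A hypothetical sketch of that naive test (the handler and route names are invented; this needs the App Engine SDK to compile):

```go
// Build the request by hand and fire it at the handler.
func TestListEntities(t *testing.T) {
	req, err := http.NewRequest("GET", "/entities", nil)
	if err != nil {
		t.Fatal(err)
	}
	w := httptest.NewRecorder()
	listEntities(w, req) // calls appengine.NewContext(req) internally…
	// …which panics, because the request never went through the
	// GAE development server.
}
```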

You create your own Request with the values you want to test, attempt to create a Context from it… and blammo, GAE panics, because the Request wasn’t sent through the actual GAE development server.

There are a few ways to solve this problem, but this is the way I found worked best for the situation I had. I’m using Mux for doing my routing (which is a great library), which provides me with a http.Handler to serve all my requests through. My first (naive) solution was to use the (another great) library Context, which enables you to attach data to the running Request. This meant my Handler function ended up looking something like this:
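Roughly like this; the gorilla/context key and the handler body are assumptions:

```go
// Boilerplate version: use a test Context if one has been attached to
// the request via gorilla/context, otherwise create a real one.
func entityHandler(w http.ResponseWriter, r *http.Request) {
	var c appengine.Context
	if tc, ok := context.GetOk(r, "context"); ok {
		c = tc.(appengine.Context)
	} else {
		c = appengine.NewContext(r)
	}
	// ... use c to talk to the GAE services ...
}
```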

In my tests I could create a Context with aetest, which creates a test Context for you, and attach it to the request for my Handler function to find along the way.

This didn’t feel like a good solution, and it would mean that each of my http handler functions would be peppered with this boilerplate check to see whether there was a context or not, which I wasn’t happy with. As mentioned above, I could have wrapped Mux with a custom http.Handler, which would have worked, but given my recent proclivity for functional programming, I leaned more towards solving this problem by manipulating functions than creating objects with state.

The first thing I did was define three types, to make writing out my higher order functions easier:
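Reconstructed from the descriptions that follow, the three types look essentially like this:

```go
// A convenience type for the standard http handler signature.
type HandleFunc func(http.ResponseWriter, *http.Request)

// The signature I want my handlers to have: the same, but with an
// appengine.Context magically available.
type ContextHandlerFunc func(appengine.Context, http.ResponseWriter, *http.Request)

// A higher order function that turns a ContextHandlerFunc into a plain
// HandleFunc, deciding along the way where the Context comes from.
type ContextHandlerToHandlerHOF func(f ContextHandlerFunc) HandleFunc
```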

The first type, HandleFunc, is simply a convenience type for our usual http Handler function signature. The second type, ContextHandlerFunc, is a type that has the function signature of what I want my http Handlers to look like when I magically have an appengine.Context available. Finally, I have a ContextHandlerToHandlerHOF, which gives me the function signature that I will need to take in a ContextHandlerFunc and convert it into a HandleFunc, so that it can be used with regular http routing APIs.

Therefore, for my application code, I have this function below, which takes in the ContextHandlerFunc, and returns a function that matches the HandleFunc signature, which, when invoked, will create a new appengine.Context and pass it through to my ContextHandlerFunc.
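Something like this; the function name is my guess, but the shape follows the description:

```go
// Production version: mint a fresh appengine.Context per request and
// hand it to the wrapped handler.
func ContextHandlerToHttpHandler(chf ContextHandlerFunc) HandleFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		c := appengine.NewContext(r)
		chf(c, w, r)
	}
}
```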

I then have a second function called CreateHandler. Its job is to create the mux.Router. As an argument it takes in a ContextHandlerToHandlerHOF, whose job it is to make the conversion to a standard HandleFunc() format. This means we can change how the appengine.Context gets created by passing in different ContextHandlerToHandlerHOF implementations. In this case, our init() function uses the one we want to use for our production code, which we defined above.
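A sketch of the pair; the route, the listEntities handler, and the name of the production converter are placeholders:

```go
// CreateHandler builds the router; how a Context gets created is
// injected via f, so tests and production can differ.
func CreateHandler(f ContextHandlerToHandlerHOF) http.Handler {
	r := mux.NewRouter()
	r.HandleFunc("/entities", f(listEntities)).Methods("GET")
	return r
}

func init() {
	// Production wiring: use the converter that creates a real Context.
	http.Handle("/", CreateHandler(ContextHandlerToHttpHandler))
}
```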

This means that for my tests, I have to get a bit more creative, because I need access to my aetest.Context outside of when I create my handler, mainly because in my tests it’s very important to Close() it after you are done.

So below, you can see CreateContextHandlerToHttpHandler, which creates a ContextHandlerToHandlerHOF closed over the appengine.Context that is passed in. Rather than creating a new Context as in production, it simply uses the one provided.
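In sketch form:

```go
// Test version: close over an aetest.Context the test already owns,
// so the test stays in control of when it gets Close()d.
func CreateContextHandlerToHttpHandler(c appengine.Context) ContextHandlerToHandlerHOF {
	return func(chf ContextHandlerFunc) HandleFunc {
		return func(w http.ResponseWriter, r *http.Request) {
			chf(c, w, r)
		}
	}
}
```

A test can then do, roughly: create the Context with aetest.NewContext(nil), defer its Close(), and build the handler with CreateHandler(CreateContextHandlerToHttpHandler(c)).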

Now I don’t get a panic from my local Google App Engine Development server when I run my tests, as I can easily switch out how the appengine.Context is created, depending on what environment the code is running in.

I’ve also found I’ve been able to extend this approach to use functional composition for my middleware layer as well (another post for another time). All in all, I’m very happy that Go has first class functions!

Brute Entity Component System Library 0.2.0 – The Sequel

This post could also be entitled How I Learned to Love Immutability, and You Won’t Believe What Happened Next!

A few weeks ago I released a library called Brute, which is an Entity Component System Library for Clojure.  This was the first Clojure library I have released, and I wanted it to be as easy to use as possible.  Therefore, falling back on my imperative roots, I decided to maintain the internal state of the library inside itself, so that it was nicely hidden from the outside world.

That should have been my first red flag. But I missed it.

The whole time I was writing the library, I kept thinking “what happens if two threads hit this API at the same time?”, and worrying about concurrency and synchronisation.

That should have been the second red flag. But I missed it too.

So I released the library, and all was well and good with the world, until I got this fantastic piece of feedback shortly thereafter. To quote the salient parts:

In reading this library, one thing stuck out to me like a sore thumb: every single facet of your CES is stored inside a set of globally shared atoms.

After a bit of back and forth, there was a resounding noise as the flat of my palm came crashing into the front of my face.

Two items on the list of core foundations of Clojure are:

  1. Immutability
  2. Pure Functions

Rather than adhere to them as much as was pragmatically possible, I flew in completely the other direction. Brute’s functions had side effects, changing the internal state that was stored in its namespace, rather than just keeping my functions pure and passing around simple, immutable data collections. This made it icky, very constrained in its applications, and also far harder to test. All very bad things.

So I’ve rewritten Brute to be pure, not to maintain state internally, and simply pass around an immutable data structure, and this has made it far, far better than the original version.

Looking at the API, it’s not a huge departure from the original, but from a functional programming perspective, it’s like night and day. Suddenly all my concerns about concurrency and data synchronisation with each function call are gone – which is one of the whole points of using Clojure in the first place.

To start with Brute, you now need to create its basic data structure for storing Entities and Components. Since Brute no longer stores data internally, it is up to the application to store the data structure state, and also to choose when the appropriate time is to mutate that state. This makes things far simpler than the previous implementation. It is expected that most of the time, the system data structure will be stored in a single atom and reset! on each game loop.

For example:
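Roughly like so (the atom name and the shape of the game loop are illustrative):

```clojure
;; Hold the ES data structure in an atom; each pass of the game loop
;; builds a new system value and reset!s the atom with it.
(def system (atom (create-system)))

(defn game-loop [delta]
  (reset! system (process-one-game-tick @system delta)))
```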

From here, (almost) every Brute function takes the system as its first argument, and returns a new copy of the immutable data structure with the changes requested. For example, here is a function that creates a Ball:
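Reconstructed from my Pong clone; the component fields, starting values and the random-direction helper are assumptions:

```clojure
;; Thread the system through add-entity and add-component; each call
;; returns a new immutable system containing the change.
(defn create-ball
  [system]
  (let [ball (create-entity)]
    (-> system
        (add-entity ball)
        (add-component ball (->Ball))
        (add-component ball (->Rectangle 400 300 10 10 Color/WHITE))
        (add-component ball (->Velocity (random-direction 300))))))
```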

The differences here from before are:

  • create-entity now just returns a UUID. It doesn’t change any state like it did before.
  • You can see that system is threaded through each call to add-entity and add-component. These each return a new copy of the immutable data structure, rather than changing encapsulated state.

This means that state does not change under your feet as you are developing (which it would have in the previous implementation), which makes your application a whole lot simpler to manage and develop.

There are also some extra benefits to rewriting this library:

  • How the entity data structure is persisted is up to you and the library you are using, which gives you complete control over when state mutation occurs – if it occurs at all. This makes concurrent processes much simpler to develop.
  • You get direct access to the ES data structure, in case you want to do something with it that isn’t exposed in the current API.
  • You can easily have multiple ES systems within a single game, e.g. for sub-games.
  • Saving a game becomes simple: Just serialise the ES data structure and store. Deserialise to load.
  • Basically all the good stuff having immutable data structures and pure functions should give you.

Hopefully this also helps shed some light on why immutability and purity of functions are deemed good things, as well as why Clojure is also such a great language to develop with.

For more details on Brute, check out the full API documentation, as well as the Pong clone sample game that I wrote with the great play-clj framework (which sits on top of libGDX).

Brute – Entity Component System Library for Clojure

Warning: If you are new to Entity Component Systems, I would highly recommend Adam Martin’s blog series on them; he goes into great detail about what problem they solve, and what is required to implement them. I’m not going to discuss what Entity Component Systems are in this blog post, so you may want to read his series first.

Having some more fun time with game development, I wanted to use an Entity Component System Library for my next project. Since I’m quite enamoured with Clojure at the moment, I went looking to see what libraries currently existed to facilitate this.

I found simplecs, which is quite nice, but I wanted something that was far more lightweight, and used simple Clojure building blocks, such as defrecord and functions to build your Entities, Components and Systems.  To that end, I wrote a library called brute, which (I think) does exactly that.

I wrote a Pong Clone example application to test out Brute, and I feel that it worked out quite well for my objectives.  Below are a few highlights from developing my example application with Brute that should hopefully give you a decent overview of the library.

As we said before, we can use defrecords to specify the Components of the system. For example, the components for the Ball:
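Something like the following; the exact field names are assumptions based on the descriptions below:

```clojure
(defrecord Rectangle [x y width height colour])
(defrecord Ball [])                 ;; pure marker component
(defrecord Velocity [vector])
```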

We have a:

  • Rectangle, which defines the position of the Ball, the dimensions of the rectangle to be rendered on screen, and its colour.
  • A Ball component as a marker to delineate that an Entity is a Ball.
  • Velocity to determine what direction and speed the Ball is currently travelling.

As you can see, there is nothing special; we have just used regular old defrecord. Brute, by default, will use the Component instance’s class as the Component type, but this can be extended and/or modified (although we don’t do that here).

Therefore, to create the ball in the game, we have the following code:
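A sketch of that code; in this 0.1 release the API mutated internal state, hence the bang functions. The field values and the random-direction helper are assumptions:

```clojure
(defn create-ball!
  []
  (let [ball (create-entity!)]
    (add-component! ball (->Ball))
    ;; centre of the playing field, white, 10x10 pixels
    (add-component! ball (->Rectangle (/ field-width 2) (/ field-height 2)
                                      10 10 Color/WHITE))
    ;; speed of 300 in a random direction
    (add-component! ball (->Velocity (random-direction 300)))))
```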

This creates a Ball in the centre of the playing field, with a white rectangle ready for rendering, and a Velocity of 300 pointing in a random direction.

As you can see here, creating the entity (which is just a UUID), is a simple call to create-entity!. From there we can add components to the Entity, such as an instance of the Ball defrecord, by calling add-component! passing in the entity and the relevant instance. Since we are using the defrecord classes as our Component types, we can use those classes to retrieve Entities from Brute.

For example, to retrieve all Entities that have Rectangle Components attached to them, it is simply a matter of using get-all-entities-with-component:
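For instance, a rendering pass might look like this (render! is a hypothetical stand-in):

```clojure
(doseq [entity (get-all-entities-with-component Rectangle)]
  (let [rect (get-component entity Rectangle)]
    (render! rect)))
```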

From there, we can use get-component to return the actual Component instance, and any data it may hold, and can perform actions accordingly.

Systems become far simpler in Brute than they would be when building an Entity System architecture on top of an Object Oriented language.

A System in Brute is simply a function that takes a delta argument, for the number of milliseconds that have elapsed since the last processing of a game tick. This leaves the onus on the game author to structure Systems how they like around this core concept, while still giving a simple and clean entry point into getting this done.

Brute maintains a sequence of System functions in a registry, which is very simple to add to through the appropriately named add-system-fn! function.

Here is my System function for keeping score:
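A hypothetical reconstruction only; the helper functions here (outside-field?, add-point!, reset-ball!) are invented to show the shape:

```clojure
;; When the ball leaves the field, credit the other player's score
;; and reset the ball to the centre.
(defn score-system
  [delta]
  (doseq [entity (get-all-entities-with-component Ball)]
    (let [rect (get-component entity Rectangle)]
      (when (outside-field? rect)
        (add-point! (opposite-player rect))
        (reset-ball! entity)))))
```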

Here we add it to the registry:
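Assuming the scoring function above is named score-system, registration is a one-liner:

```clojure
(add-system-fn! score-system)
```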

Finally, all registered System functions are fired by using the function process-one-game-tick, which calls them in the order they were registered – and in theory, your game should run!

For more details on Brute, check out the full API documentation, as well as the Pong clone sample game that I wrote with the great play-clj framework (which sits on top of libGDX).

As always, feedback is appreciated.

Looking for a Remote Job With Go?

Pretty exciting stuff – I’m co-founder of a new venture that is looking to do some interesting things in the e-commerce space.

We are building with Go and Google App Engine, for a variety of good reasons. Mostly because of how fast Go is (wow is it fast), and how many nice things GAE gives us out of the box that we can leverage.

No in-depth details about the venture yet, but we are looking for like-minded developers who love e-commerce, Go, and working from home. If this sounds like you, please have a look at our Careers Page, and send us through an email.

We will also be in Sydney the week of the 23rd of March (and will be attending the Go Sydney Meetup Event on the 26th), so if you are in the area and would like to talk face to face about the position, drop us a line via the email provided on our careers page – we would love to hear from you.

Writing AngularJS with ClojureScript and Purnam

I’ve got a project that I’ve been using to learn Clojure. For the front end of this project, I wanted to use ClojureScript and AngularJS so I could share code between my server and client (and have another avenue for learning Clojure), and also because Angular is totally awesome. Hunting around for details on how to integrate AngularJS with ClojureScript, I did find a few articles, but eventually I came across the library Purnam, and I knew I had a winner. Purnam is a few things wrapped up into one:

  1. A very nice extra layer for JavaScript interoperability above and beyond what ClojureScript gives you out of the box
  2. Both a Jasmine and Midje style test framework, with integration into the great Karma test runner
  3. ClojureScript implementations of AngularJS directives, controllers, services, etcetera which greatly reduce the amount of boilerplate code you would otherwise need
  4. ClojureScript implementations that make testing of AngularJS directives, controllers, services, etcetera a breeze to set up under the above test framework.
  5. Fantastic documentation.

I could go on in detail on each of these topics, but it’s covered very well in the documentation. Instead I’ll expand on how easy it was to get up and running with a simple AngularJS module and controller configuration, and also testing it with Jasmine and Karma to give you a taste of what Purnam can do for you.

To create an AngularJS controller, we can define it like so (I’ve already defined my module “chaperone.app” in chaperone.ng.core):
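An approximate reconstruction; the macro namespace moved around between Purnam releases and the controller body is invented, but the def.controller and ! shapes are Purnam's:

```clojure
(ns chaperone.web.admin-user
  (:require [chaperone.ng.core])  ;; ensures the module is defined first
  (:use-macros [purnam.angular :only [def.controller]]))

;; Define AdminUserCtrl on the chaperone.app module.
(def.controller chaperone.app.AdminUserCtrl [$scope]
  (! $scope.init
     (fn []
       (! $scope.users (array)))))
```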

This shows off a few things:

  • def.controller : This is the Purnam macro for defining an AngularJS controller. Here I’ve created the controller AdminUserCtrl in the module chaperone.app.
  • You will notice the use of !. This is a Purnam construct that allows you to set a value on a JavaScript property, but lets you refer to that property with a dot syntax, as opposed to having to use something like (set! (.-init $scope) (fn [] ... )) or (aset $scope "init" (fn [] ...)), which you may not prefer. Purnam has quite a few constructs that allow you to use dot notation with JavaScript interoperability, above and beyond this one. I personally prefer the Purnam syntax, but you can always choose not to use it.

One thing I discovered very quickly: I needed to make sure I required chaperone.ng.core, which contains my Angular module definition, in the above controller’s namespace, even though it is not actually used in the code. This was so that the Angular module definition would show up before the controller definition in the final JavaScript output. Otherwise, Angular would throw an error because it could not find the module, as it had yet to be defined.

Purnam also makes it easy to run AngularJS unit tests. Here is a simple test I wrote to test a $scope value that should have been set after I ran the init function on my AdminUserCtrl controller:
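From memory it looked roughly like this; treat Purnam's test macro names here as approximate:

```clojure
;; Purnam wires up the module, instantiates the controller, and makes
;; $scope available to the assertions without any manual injection.
(describe.controller
 {:doc "AdminUserCtrl"
  :module chaperone.app
  :controller AdminUserCtrl}

 (it "sets up the user list once init has run"
   ($scope.init)
   (is-not $scope.users js/undefined)))
```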

As you can see, Purnam takes away a lot of the usual Jasmine + AngularJS boilerplate code, and you end up with a nice, clean way to write AngularJS tests. Purnam also implicitly injects into your tests its ability to interpret dot notation on JavaScript objects for you, which is handy if you want to use it.

Purnam also has capabilities to test out Services, Directives and other aspects of the AngularJS ecosystem as well, through its describe.ng macro, which gives you full control over which Angular elements are created through Angular’s dependency injection capabilities.

Finally, Purnam integrates with Karma, which lets you run your tests in almost any JavaScript environment, be it NodeJS or inside a Chrome web browser.

Configuring Karma is as simple as running the standard karma init command, which asks you a series of questions about what test framework you want (Jasmine) and what platform you want to test on (Chrome for me), and results in a karma.conf.js file in the root of your directory.

One thing I found quickly when setting up my karma.conf.js file was that it was very necessary to specify which files you want included, and in what order. For example:
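Something like this; the paths come from my project layout, so treat them as placeholders:

```javascript
// karma.conf.js — list the files explicitly, in load order: Angular
// first, then angular-mocks, then the compiled ClojureScript + tests.
files: [
    'resources/public/js/angular.min.js',
    'test/js/angular-mocks.js',
    'target/unit-test.js'
],
```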

In my initial setup I had simply used a glob of test/*.js, which caused me all sorts of grief, as angular-mocks.js needed to be loaded after angular.min.js (for example), but the resultant file list didn’t come out that way, and I got all sorts of very strange errors. Specifying exactly which files I needed, in the correct order, fixed all those issues for me.

I’m really enjoying working with the combination of ClojureScript and AngularJS, and Purnam gives me the glue to hook it all up together without getting in my way at all. Hopefully that gives you enough of a taste of Purnam to get your interest piqued, and you’ll take it for a spin as well!

Clojure EDN Walkthrough

When I set myself the task of learning how to use EDN, or Extensible Data Notation,  in Clojure, I couldn’t find a simple tutorial on how it worked, so I figured I would write one for anyone else having the same troubles I did. I was keen to learn EDN, as I am working on a Clojure based web application in my spare time, and wanted a serialisation format I could send data in that could be easily understood by both my Clojure and ClojureScript programs.

Disclaimer: This is my first blog post on Clojure, and I’m still learning the language. So if you see anything that can be improved / is incorrect, please let me know.

If you’ve never looked at EDN, it looks a lot like Clojure, which is not surprising, as it is actually a subset of Clojure notation.   For example, this is what a vector looks like in EDN:
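For instance, a vector of mixed values (the contents are just illustrative):

```clojure
[1 2 3.14 "fish" :chips {:name "Mark"}]
```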

Which should be fairly self-explanatory. If you want to know more about the EDN specification, have a read. You should be able to read through it in around ten minutes, as it’s nice and lightweight.

The first place I started with EDN was the clojure.edn namespace, which has a very short API documentation, and this was my first point of confusion. I could see a read and read-string function… but couldn’t see how I would actually write EDN? Coming from a background that was used to JSON, I expected there to be some sort of equivalent to-edn function lying around, which I could not seem to find. The conceptual connection I was missing was that since EDN is a subset of Clojure, and the Clojure Reader supports EDN, you only have to look at the Clojure IO functions to find pr and prn, whose job it is to take an object and, “By default, pr and prn print in a way that objects can be read by the reader.” Now, pr and prn output to the current output stream, but we can use either pr-str or prn-str to give us a string output, which is far easier to use for our examples. Let’s have a quick look at an example of that, with a normal Clojure Map:
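The map contents here are invented for the example:

```clojure
(require '[clojure.edn :as edn])

(def sample-map {:name "Mark" :level 42})

;; Out: prn-str gives us the map as an EDN string.
(def map-as-edn (prn-str sample-map))
;; => "{:name \"Mark\", :level 42}\n"

;; ...and back: read-string parses the EDN into an equivalent map.
(edn/read-string map-as-edn)
;; => {:name "Mark", :level 42}
```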

Obviously this could have been written more simply, but I wanted to break it down into individual chunks to make learning a little bit easier. As you can see, to convert our sample-map into EDN, all we had to do was call prn-str on it to return the EDN string. From there, to convert it back from EDN, it’s as simple as passing that string into the edn/read-string function, and we get back a new map with the same values as before.

So far, so good, but the next tricky bit (I found) comes when we want to actually extend EDN for our own usage. The immediate example that came to mind for me was for use with defrecords. Out of the box, prn-str will convert a defrecord into EDN without any external intervention. For example:
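In the edn-example.core namespace (the field values are just for fun):

```clojure
(defrecord Goat [stuff things])

(def goat (->Goat "I love Goats" "Goats are awesome"))

(prn-str goat)
;; => "#edn_example.core.Goat{:stuff \"I love Goats\", :things \"Goats are awesome\"}\n"
```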

The nice thing here is that prn-str provides us with an EDN tag for our defrecord out of the box: “#edn_example.core.Goat”. This lets the EDN reader know that this is not a standard Clojure type, and that it will need to be handled differently from normal. The EDN reader makes it very easy to tell it how to handle this new EDN tag:
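Like so, using the map->Goat constructor that defrecord generates for us:

```clojure
(edn/read-string
  {:readers {'edn_example.core.Goat map->Goat}}
  "#edn_example.core.Goat{:stuff \"I love Goats\", :things \"Goats are awesome\"}")
;; => a Goat record with :stuff and :things populated
```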

You can see that when we call edn/read-string, we pass through an option of :readers with a map of our custom EDN tag as the key ( 'edn_example.core.Goat ) to a function that returns what we finally want from our EDN deserialisation as the value ( map->Goat ). This is the magic glue that tells the EDN reader what to do with your custom EDN tag, and we can tell it to do whatever we want from that point forward. Since we have used a custom EDN tag, if we didn’t do this, the EDN reader would throw an exception saying “No reader function for tag edn_example.core.Goat” when we attempted to deserialise the EDN.

Therefore, as the EDN reader parses:

#edn_example.core.Goat{:stuff "I love Goats", :things "Goats are awesome"}

It first looks at #edn_example.core.Goat, and matches that tag to the map->Goat function. This function is then passed the deserialised value of {:stuff “I love Goats”, :things “Goats are awesome”}. map->Goat takes that map and converts it into our Goat defrecord, and presto, we are able to serialise and deserialise our new Goat defrecord.

This isn’t just limited to defrecords; it could be any custom EDN we want to use. For example, if we wanted to write our own crazy EDN string for our Goat defrecord, rather than use the default, we could flatten out the map into a sequence, like so:
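The tag edn-example/goat here is our own invention, which is the whole point:

```clojure
;; Flatten the record's entries into a single vector under a custom tag.
(defn goat->edn [goat]
  (str "#edn-example/goat " (pr-str (vec (mapcat identity goat)))))

(goat->edn (->Goat "I love Goats" "Goats are awesome"))
;; => "#edn-example/goat [:stuff \"I love Goats\" :things \"Goats are awesome\"]"
```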

We can then apply the same strategy as we did before to convert it back:
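That is, register a reader for our custom tag that rebuilds the Goat from the flattened vector:

```clojure
(edn/read-string
  {:readers {'edn-example/goat #(map->Goat (apply hash-map %))}}
  "#edn-example/goat [:stuff \"I love Goats\" :things \"Goats are awesome\"]")
;; => a Goat record, same as before
```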

Finally, there is the question of what you do when you don’t know all the EDN tags that you will be required to parse. Thankfully the EDN reader handles this elegantly as well. You are able to provide the reader with a :default option: a function that gets called if a reader can’t be found for the given EDN tag, and gets passed the tag and the deserialised value.

For example, here is a function that simply converts the incoming EDN into a map with :tag and :value entries:
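The tag in this example is deliberately one we have no reader for:

```clojure
(defn default-reader
  [tag value]
  {:tag tag :value value})

(edn/read-string {:default default-reader}
                 "#unknown.ns/goat {:stuff \"I love Goats\"}")
;; => {:tag unknown.ns/goat, :value {:stuff "I love Goats"}}
```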

That about wraps up EDN. Hopefully this makes things easier for people who want to start using EDN and encountered some of the stumbling blocks I did when trying to fit all the pieces together. EDN is a very nice data format for communication between Clojure and ClojureScript programs.

The full source code for this example can be found on GitHub as well, if you want to have a look. Simply clone it and execute lein run to watch it go.

Provision Your Local Machines

This is an old photo, but my home workspace looks somewhat akin to this:

Twitter Desks

As you can see, I have a lot of monitors… and three separate laptops, running Synergy to share my keyboard and mouse across all the machines.

Since I have three machines, migrating settings and software that I had set up on one machine to another was becoming more and more painful. To add insult to injury, Ubuntu releases a new version every six months, and when I upgrade I tend to wipe my entire partition and reinstall from scratch. That means reinstalling all my software, which isn’t much fun once, let alone three times over. For a while I kept settings files (tmux, zsh etc.) in a Dropbox folder, but that still required manual intervention when setting up a new machine, didn’t cover software installation, and simply doesn’t work for many pieces of software.

I was already experimenting with Ansible, and its simple, no-nonsense approach to machine provisioning seemed like a perfect fit for automating the installation and configuration of the software on my local machines. Since it’s SSH based, also allows local connections, and needs no server or client install, it’s exceedingly lightweight, which again was a perfect fit for my needs.

My final result is a combination of a couple of small bash scripts to bootstrap the process, combined with several Ansible Playbooks (which define what to actually install and configure) to do the actual provisioning. You can find it all up on GitHub.

Some interesting highlights include:

  • I have an install.sh to install the base dependencies to get going when I’ve got a bare machine.
  • up.sh is the script you run to actually provision a machine. It does a few things:
    • Updates my local forked Ansible Git repository (so I can hack on Ansible when I want)
    • Updates my local Git repository with all my Ansible Playbooks, so when the provision runs, it self-updates to the latest version of my configuration.
    • The actual Playbook to run comes from a .playbook file that is not stored in the Git repository. This means I can easily have playbooks specific to each machine. For example, the primary.yml playbook is for my main machine, but I have variations of secondary.yml for my other two (at time of writing, one secondary is running Ubuntu 13.10, while the other is running 13.04).
    • Ansible has a feature called Roles that gives you the ability to group together a set of configuration for reuse, and I’ve used several roles:
      • core – The configuration to be used across all machines
      • primary – The configuration to be used on my main development machine
      • secondary – The configuration to be used on my secondary (left and right) machines
      • ubuntu_x – This is so I can have slightly different configuration as necessary for different Ubuntu versions. For example, PPA software repositories change depending on which version of Ubuntu you are on.
      • xmonad – I wasn’t sure I wanted to use xmonad, so I did it as a separate role while I was trying it out. Maybe one day I’ll merge it into core, since I’m pretty hooked on it now.
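To make the flow concrete, here is a rough sketch of what an up.sh-style bootstrap might look like; the paths, variable names and the default playbook are illustrative assumptions, not the actual contents of the repository:

```shell
#!/usr/bin/env bash
# Illustrative sketch of an up.sh-style bootstrap; not the actual script.
set -euo pipefail

# Where the playbook repository lives (assumed path)
REPO="${PROVISION_REPO:-$HOME/provisioning}"

# Self-update the playbook repository before provisioning, if present,
# so each run provisions from the latest configuration
if [ -d "$REPO/.git" ]; then
  git -C "$REPO" pull --ff-only
fi

# Pick the machine-specific playbook from an untracked .playbook file,
# falling back to a default when the file is absent
PLAYBOOK="primary.yml"
if [ -f "$REPO/.playbook" ]; then
  PLAYBOOK=$(cat "$REPO/.playbook")
fi

# Run Ansible over a local connection: no server or agent required.
# (Echoed here for clarity; the real script would execute it.)
echo "ansible-playbook -i localhost, -c local $REPO/$PLAYBOOK"
```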

I’ve really enjoyed having my local machine provisioning automated, because:

  • I can take a completely clean machine, and have it up and running and ready for me to write code on in around an hour (depending on download speeds).
  • Wiping a partition and reinstalling Ubuntu is a clean and easy process now.
  • I can set up software and/or configuration tweaks on a secondary machine, and transport it across all my machines with ease.
  • If I want to test out software installation and configurations without committing to them, it’s only a git branch away.
  • It’s really easy to clear out cruft on any machine. I’ve no need for Thunderbird, or Empathy, and when I was running Unity, I could instantly remove all those annoying Shopping Lenses.
  • I never have to go back to old blog posts to remember how I installed / fixed / configured various things. It’s a write once, fixed forever type deal.

Strangely enough, these are many of the same benefits you get when you automate the provisioning of your servers, but applied locally. If you are new to machine provisioning, starting with your local machine is also a low-risk way to get up to speed, while giving you a lot of benefit.

Hope that has been enough to inspire you to automate more on your local machines!