Testing Go Http Handlers in Google App Engine with Mux and Higher Order Functions

For those who aren’t familiar with building applications with Go on Google App Engine, the core data structure you use whenever you need to make a request to any of the provided Google App Engine services is a Context.

Usually this is created by passing in the http.Request object that comes with serving an HTTP request. However, when you want to automate the testing of your HTTP request handlers, you usually end up doing something like the following:
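In sketch form (the /users route is purely illustrative):

func TestSomething(t *testing.T) {
    // build a request with the values we want to test
    r, _ := http.NewRequest("GET", "/users", nil)

    // attempt to create a Context from it...
    c := appengine.NewContext(r)
    _ = c
}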

You create your own Request with the values you want to test, attempt to create a Context from it…. and blammo, GAE panics because the Request wasn’t sent through the actual GAE development server.

There are a few ways to solve this problem, but this is the way I found worked best for my situation. I’m using Mux for my routing (which is a great library), which provides me with an http.Handler to serve all my requests through. My first (naive) solution was to use another great library, Context, which enables you to attach data to the running Request, which meant my handler function ended up looking something like this:
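A sketch of that handler shape – listUsers and the "context" key are only illustrative, and context here is the gorilla/context package:

func listUsers(w http.ResponseWriter, r *http.Request) {
    var c appengine.Context

    // was a test Context attached to this Request?
    if val, ok := context.GetOk(r, "context"); ok {
        c = val.(appengine.Context)
    } else {
        c = appengine.NewContext(r)
    }

    // ... use c to call the App Engine services ...
}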

In my tests I could create a test Context with aetest and attach it to the request for my handler function to find along the way.
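In sketch form, again using gorilla/context and an illustrative "context" key:

c, err := aetest.NewContext(nil)
if err != nil {
    t.Fatal(err)
}
defer c.Close()

r, _ := http.NewRequest("GET", "/users", nil)
context.Set(r, "context", c)
// drive the handler with an httptest.ResponseRecorder as usual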

This didn’t feel like a good solution, as it would mean that each of my HTTP handler functions was peppered with this boilerplate check to see whether there was a Context or not, which I wasn’t happy with. As I’ve looked at before, I could have wrapped Mux with a custom http.Handler, which would have worked, but given my recent proclivity for functional programming, I leaned more towards solving this problem by manipulating functions than by creating objects with state.

The first thing I did was define three types, to make writing out my higher order functions easier:
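They would have looked something like this sketch, reconstructed from the descriptions that follow:

type HandleFunc func(http.ResponseWriter, *http.Request)

type ContextHandlerFunc func(appengine.Context, http.ResponseWriter, *http.Request)

type ContextHandlerToHandlerHOF func(f ContextHandlerFunc) HandleFunc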

The first type, HandleFunc, is simply a convenience type for our usual http Handler function signature. The second type, ContextHandlerFunc, is a type that has the function signature of what I want my http Handlers to look like when I magically have an appengine.Context available. Finally, I have ContextHandlerToHandlerHOF, which gives me the function signature I will need to take in a ContextHandlerFunc and convert it into a HandleFunc, so that it can be used with regular HTTP routing APIs.

Therefore, for my application code, I have this function below, which takes in the ContextHandlerFunc, and returns a function that matches the HandleFunc signature, which, when invoked, will create a new appengine.Context and pass it through to my ContextHandlerFunc.
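In sketch form (the function name is my own; the original may differ):

func ContextHandlerToHttpHandler(f ContextHandlerFunc) HandleFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        c := appengine.NewContext(r)
        f(c, w, r)
    }
}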

I then have a second function called CreateHandler. Its job is to create the mux.Router. As an argument it takes a ContextHandlerToHandlerHOF, whose job is to make the conversion to the standard HandleFunc format. This means we can change how the appengine.Context gets created by passing in different ContextHandlerToHandlerHOF implementations. In this case, our init() function uses the one we defined above for our production code.
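Roughly like this sketch, where SaveUser stands in for any ContextHandlerFunc and the route is illustrative:

func CreateHandler(f ContextHandlerToHandlerHOF) http.Handler {
    r := mux.NewRouter()
    r.HandleFunc("/users", f(SaveUser)).Methods("POST")
    return r
}

func init() {
    http.Handle("/", CreateHandler(ContextHandlerToHttpHandler))
}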

This means that for my tests, I have to get a bit more creative, because I need access to my aetest.Context outside of when I create my handler, mainly because in my tests it’s very important to Close() it after you are done.

So below, you can see CreateContextHandlerToHttpHandler, which creates a ContextHandlerToHandlerHOF closed over the appengine.Context it is given; rather than creating a new Context like in production, it simply uses the one provided.
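A sketch of that test-side variant, and of how a test might use it:

func CreateContextHandlerToHttpHandler(c appengine.Context) ContextHandlerToHandlerHOF {
    return func(f ContextHandlerFunc) HandleFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            f(c, w, r)
        }
    }
}

func TestSaveUser(t *testing.T) {
    c, err := aetest.NewContext(nil)
    if err != nil {
        t.Fatal(err)
    }
    defer c.Close()

    handler := CreateHandler(CreateContextHandlerToHttpHandler(c))
    // drive handler with httptest.NewRecorder() and a hand-built request
    _ = handler
}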

Now I don’t get a panic from my local Google App Engine Development server when I run my tests, as I can easily switch out how the appengine.Context is created, depending on what environment the code is running in.

I’ve also found I’ve been able to extend this approach to use functional composition for my middleware layer as well (another post for another time). All in all, I’m very happy that Go has first class functions!

Brute Entity Component System Library 0.2.0 – The Sequel

This post could also be entitled How I Learned to Love Immutability, and You Won’t Believe What Happened Next!

A few weeks ago I released a library called Brute, which is an Entity Component System Library for Clojure.  This was the first Clojure library I have released, and I wanted it to be as easy to use as possible.  Therefore, falling back on my imperative roots, I decided to maintain the internal state of the library inside itself, so that it was nicely hidden from the outside world.

That should have been my first red flag. But I missed it.

The whole time I was writing the library, I kept having thoughts of “what happens if two threads hit this API at the same time?” and worrying about concurrency and synchronisation.

That should have been the second red flag. But I missed it too.

So I released the library, and all was well and good with the world, until I got this fantastic piece of feedback shortly thereafter. To quote the salient parts:

In reading this library, one thing stuck out to me like a sore thumb: every single facet of your CES is stored inside a set of globally shared atoms.

After a bit of back and forth, there was a resounding noise as the flat of my palm came crashing into the front of my face.

Two items on the list of core foundations of Clojure are:

  1. Immutability
  2. Pure Functions

Rather than adhere to them as much as was pragmatically possible, I flew in completely the other direction. Brute’s functions had side effects, changing the internal state that was stored in its namespace, rather than simply keeping my functions pure and passing around simple, immutable data collections.  This made it icky, very constrained in its applications, and also far harder to test. All very bad things.

So I’ve rewritten Brute to be pure, not to maintain state internally, and simply pass around an immutable data structure, and this has made it far, far better than the original version.

Looking at the API, it’s not a huge departure from the original, but from a functional programming perspective, it’s like night and day. Suddenly all my concerns about concurrency and data synchronisation with each function call are gone – which is one of the whole points of using Clojure in the first place.

To start with Brute, you now need to create its basic data structure for storing Entities and Components.  Since Brute no longer stores data internally, it is up to the application to store the data structure state, and also to choose when the appropriate time is to mutate that state. This makes things far simpler than the previous implementation.  It is expected that most of the time, the system data structure will be stored in a single atom and reset! on each game loop.

For example:
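A sketch, assuming brute.entity is required and aliased as e:

(require '[brute.entity :as e])

;; the immutable ES data structure, held in an atom by the application
(def system (atom (e/create-system)))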

From here, (almost) every Brute function takes the system as its first argument, and returns a new copy of the immutable data structure with the changes requested. For example, here is a function that creates a Ball:
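Something like this sketch – the component defrecords and their fields are simplified stand-ins, not the ones from the actual Pong example:

(defrecord Ball [])
(defrecord Rectangle [x y width height colour])
(defrecord Velocity [x y])

(defn create-ball
  "Returns a new system with a ball entity and its components added."
  [system]
  (let [ball (e/create-entity)]
    (-> system
        (e/add-entity ball)
        (e/add-component ball (->Ball))
        (e/add-component ball (->Rectangle 395 295 10 10 :white))
        (e/add-component ball (->Velocity 300 0)))))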

The differences here from before are:

  • create-entity now just returns a UUID. It doesn’t change any state like it did before.
  • You can see that the system is threaded through each call to add-entity and add-component. These each return a new copy of the immutable data structure, rather than changing encapsulated state.

This means that state does not change under your feet as you are developing (which it would have in the previous implementation). This makes your application a whole lot simpler to develop and manage.

There are also some extra benefits that come from rewriting the library this way:

  • How the entity data structure is persisted is up to you and the library you are using, which gives you complete control over when state mutation occurs – if it occurs at all. This makes concurrent processes much simpler to develop.
  • You get direct access to the ES data structure, in case you want to do something with it that isn’t exposed in the current API.
  • You can easily have multiple ES systems within a single game, e.g. for sub-games.
  • Saving a game becomes simple: Just serialise the ES data structure and store. Deserialise to load.
  • Basically all the good stuff having immutable data structures and pure functions should give you.

Hopefully this also helps shed some light on why immutability and purity of functions are deemed good things, as well as why Clojure is also such a great language to develop with.

For more details on Brute, check out the full API documentation, as well as the Pong clone sample game that I wrote with the great play-clj framework (which sits on top of libGDX).

Brute – Entity Component System Library for Clojure

Warning: If you are new to Entity Component Systems, I would highly recommend Adam Martin’s blog series on them; he goes into great detail about what problems they solve, and what is required to implement them.  I’m not going to discuss what Entity Component Systems are in this blog post, so you may want to read his series first.

Having some more fun with game development, I wanted to use an Entity Component System library for my next project. Since I’m quite enamoured with Clojure at the moment, I went looking to see what libraries currently existed to facilitate this.

I found simplecs, which is quite nice, but I wanted something that was far more lightweight, and used simple Clojure building blocks, such as defrecord and functions to build your Entities, Components and Systems.  To that end, I wrote a library called brute, which (I think) does exactly that.

I wrote a Pong Clone example application to test out Brute, and I feel that it worked out quite well for my objectives.  Below are a few highlights from developing my example application with Brute that should hopefully give you a decent overview of the library.

As we said before, we can use defrecords to specify the Components of the system. For example, the components for the Ball:
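In sketch form (the fields are simplified compared to the actual example code):

(defrecord Rectangle [x y width height colour])
(defrecord Ball [])
(defrecord Velocity [x y])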

We have a:

  • Rectangle, which defines the position of the Ball, the dimensions of the rectangle to be rendered on screen, and its colour.
  • A Ball component as a marker to delineate that an Entity is a Ball.
  • Velocity to determine what direction and speed the Ball is currently travelling.

As you can see, there is nothing special here; we have just used regular old defrecord. By default, Brute will use the Component instance’s class as the Component type, but this can be extended and/or modified (although we don’t do that here).

Therefore, to create the ball in the game, we have the following code:
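A sketch of that function, assuming the Brute functions are aliased as e and using the simplified defrecords above:

(defn create-ball! []
  (let [ball (e/create-entity!)]
    (e/add-component! ball (->Ball))
    (e/add-component! ball (->Rectangle 395 295 10 10 :white))
    ;; the real example picks a random direction for the speed of 300
    (e/add-component! ball (->Velocity 300 0))
    ball))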

This creates a Ball in the centre of the playing field, with a white rectangle ready for rendering, and a Velocity of 300 pointing in a random direction.

As you can see here, creating the entity (which is just a UUID), is a simple call to create-entity!. From there we can add components to the Entity, such as an instance of the Ball defrecord, by calling add-component! passing in the entity and the relevant instance. Since we are using the defrecord classes as our Component types, we can use those classes to retrieve Entities from Brute.

For example, to retrieve all Entities that have Rectangle Components attached to them, it is simply a matter of using get-all-entities-with-component

From there, we can use get-component to return the actual Component instance, and any data it may hold, and can perform actions accordingly.
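For example, a sketch of looping over every Entity that has a Rectangle:

(doseq [entity (e/get-all-entities-with-component Rectangle)]
  (let [rect (e/get-component entity Rectangle)]
    (println "render a" (:colour rect) "rectangle at" (:x rect) (:y rect))))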

Systems become far simpler in Brute than they would be when building an Entity System architecture on top of an Object Oriented language.

Systems in Brute are simply functions that take a delta argument: the number of milliseconds that have elapsed since the last game tick was processed. This leaves the onus on the game author to structure Systems how they like around this core concept, while still giving a simple and clean entry point into getting this done.

Brute maintains a sequence of System functions in a registry, which is very simple to add to through the appropriately named add-system-fn! function.

Here is my System function for keeping score:
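The real one is longer; a minimal sketch of its shape (score-point! is a hypothetical helper, and the edge check is simplified):

(defn score-system [delta]
  (doseq [ball (e/get-all-entities-with-component Ball)]
    (let [rect (e/get-component ball Rectangle)]
      (when (neg? (:x rect))
        (score-point! :right-player)))))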

Here we add it to the registry:
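Assuming add-system-fn! is referred into the current namespace:

(add-system-fn! score-system)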

Finally, all registered System functions are fired by calling the function process-one-game-tick, which invokes them in the order they were registered – and in theory, your game should run!
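For example, called from whatever drives your game loop:

(defn on-render [delta]
  (process-one-game-tick delta))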

For more details on Brute, check out the full API documentation, as well as the Pong clone sample game that I wrote with the great play-clj framework (which sits on top of libGDX).

As always, feedback is appreciated.

Looking for a Remote Job With Go?

Pretty exciting stuff – I’m co-founder of a new venture that is looking to do some interesting things in the e-commerce space.

We are building with Go and Google App Engine, for a variety of good reasons. Mostly because of how fast Go is (wow is it fast), and how many nice things GAE gives us out of the box that we can leverage.

No in depth details about the venture yet, but we are looking for like minded developers who love e-commerce, Go, and working from home. If this sounds like you, please have a look at our Careers Page, and send us through an email.

We will also be in Sydney the week of the 23rd of March (and will be attending the Go Sydney Meetup Event on the 26th), so if you are in the area and would like to talk face to face about the position, drop us a line via the email provided on our careers page, we would love to hear from you.

Writing AngularJS with ClojureScript and Purnam

I’ve got a project that I’ve been using to learn Clojure.  For the front end of this project, I wanted to use ClojureScript and AngularJS so I could share code between my server and client (and have another avenue for learning Clojure), and also because Angular is totally awesome. Hunting around for details on how to integrate AngularJS with ClojureScript, I did find a few articles, but eventually I came across the library Purnam, and I knew I had a winner. Purnam is a few things wrapped up into one:

  1. A very nice extra layer for JavaScript interoperability above and beyond what ClojureScript gives you out of the box
  2. Both a Jasmine and Midje style test framework, with integration into the great Karma test runner
  3. ClojureScript implementations of AngularJS directives, controllers, services, etcetera which greatly reduce the amount of boilerplate code you would otherwise need
  4. ClojureScript implementations that make testing of AngularJS directives, controllers, services, etcetera a breeze to set up under the above test framework.
  5. Fantastic documentation.

I could go on in detail on each of these topics, but it’s covered very well in the documentation. Instead I’ll expand on how easy it was to get up and running with a simple AngularJS module and controller configuration, and also testing it with Jasmine and Karma to give you a taste of what Purnam can do for you.

To create an AngularJS controller, we can define it like so (I’ve already defined my module “chaperone.app” in chaperone.ng.core):
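A sketch of what that looks like; the Purnam macro namespaces in the ns form are from memory and may differ between Purnam versions, and the init body is illustrative:

(ns chaperone.ng.user
  (:require [chaperone.ng.core]) ;; pulls in the chaperone.app module definition
  (:use-macros [purnam.js :only [!]]
               [purnam.angular :only [def.controller]]))

(def.controller chaperone.app.AdminUserCtrl [$scope]
  (! $scope.init
     (fn []
       (! $scope.users []))))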

This shows off a few things:

  • def.controller : This is the Purnam macro for defining an AngularJS controller. Here I’ve created the controller AdminUserCtrl in the module chaperone.app.
  • You will notice the use of !. This is a Purnam construct that allows you to set a value on a JavaScript property, and lets you refer to that property with a dot syntax, as opposed to having to use something like (set! (.-init $scope) (fn [] ... )) or (aset $scope "init" (fn [] ...)), which you may not prefer.  Purnam has quite a few constructs, above and beyond this one, that let you use dot notation for JavaScript interoperability.  I personally prefer the Purnam syntax, but you can always choose not to use it.

One thing I discovered very quickly: I needed to make sure I required chaperone.ng.core, which contains my Angular module definition, in the above controller’s namespace, even though it is not actually used in the code.  This was so that the Angular module definition would show up before the controller definition in the final JavaScript output. Otherwise, Angular would throw an error because it could not find the module, as it had yet to be defined.

Purnam also makes it easy to run AngularJS unit tests. Here is a simple test I wrote to test a $scope value that should have been set after I ran the init function on my AdminUserCtrl controller.

As you can see, Purnam takes away a lot of the usual Jasmine + AngularJS boilerplate code, and you end up with a nice, clean way to write AngularJS tests. Purnam also implicitly injects into your tests its ability to interpret dot notation on JavaScript objects, which is handy if you want to use it.

Purnam also has capabilities to test out Services, Directives and other aspects of the AngularJS ecosystem as well, through its describe.ng macro, which gives you full control over what Angular elements are created through Angular’s dependency injection capabilities.

Finally, Purnam integrates into Karma, which lets you run your tests in almost any JavaScript environment, be it NodeJS or inside a Chrome web browser.

Configuring Karma is as simple as running the standard karma init command, which asks you a series of questions about what test framework you want (Jasmine) and what platform you want to test on (Chrome for me), and results in a karma.conf.js file in the root of your directory.

One thing I found quickly when setting up my karma.conf.js file was that it was very necessary to specify which files you want included, and in what order. For example:
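The files section of my karma.conf.js ended up being explicit, along these lines (the paths are illustrative):

module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    browsers: ['Chrome'],
    files: [
      'test/angular.min.js',
      'test/angular-mocks.js', // must load after angular.min.js
      'test/unit-test.js'      // the compiled ClojureScript tests
    ]
  });
};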

In my initial setup I had simply used a glob of test/*.js, which caused me all sorts of grief as angular-mocks.js needed to be loaded after angular.min.js (for example), but the resultant file list didn’t come out that way, and I got all sorts of very strange errors. Specifying exactly which files I needed to be using and in the correct order fixed all those issues for me.

I’m really enjoying working with the combination of ClojureScript and AngularJS, and Purnam gives me the glue to hook it all up together without getting in my way at all. Hopefully that gives you enough of a taste of Purnam to get your interest piqued, and you’ll take it for a spin as well!

Clojure EDN Walkthrough

When I set myself the task of learning how to use EDN, or Extensible Data Notation,  in Clojure, I couldn’t find a simple tutorial on how it worked, so I figured I would write one for anyone else having the same troubles I did. I was keen to learn EDN, as I am working on a Clojure based web application in my spare time, and wanted a serialisation format I could send data in that could be easily understood by both my Clojure and ClojureScript programs.

Disclaimer: This is my first blog post on Clojure, and I’m still learning the language. So if you see anything that can be improved / is incorrect, please let me know.

If you’ve never looked at EDN, it looks a lot like Clojure, which is not surprising, as it is actually a subset of Clojure notation.   For example, this is what a vector looks like in EDN:
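Here is a simple one, holding a mix of values:

[1 2.5 "goats" :clojure]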

Which should be fairly self explanatory.  If you want to know more about the EDN specification, have a read. You should be able to read through in around ten minutes, as it’s nice and lightweight.

The first place I started with EDN was the clojure.edn namespace, which has a very short API, and this was my first point of confusion. I could see read and read-string functions… but couldn’t see how I would actually write EDN. Coming from a background of working with JSON, I expected there to be some sort of equivalent to-edn function lying around, which I could not seem to find. The conceptual connection I was missing was that, since EDN is a subset of Clojure and the Clojure Reader supports EDN, you only have to look at Clojure’s IO functions to find pr and prn, whose job it is to take an object and, “By default, pr and prn print in a way that objects can be read by the reader.”  Now, pr and prn output to the current output stream, but we can use pr-str or prn-str to give us a string instead, which is far easier to use for our examples.  Let’s have a quick look at an example of that, with a normal Clojure map:
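A sketch of that round trip, using the same Goat-themed values that appear later in the post:

(require '[clojure.edn :as edn])

(def sample-map {:stuff "I love Goats" :things "Goats are awesome"})

(def edn-string (prn-str sample-map))
;; => "{:stuff \"I love Goats\", :things \"Goats are awesome\"}\n"

(edn/read-string edn-string)
;; => {:stuff "I love Goats", :things "Goats are awesome"}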

Obviously this could have been written more simply, but I wanted to break it down into individual chunks to make learning a little bit easier. As you can see, to convert our sample-map into EDN, all we had to do was call prn-str on it to return the EDN string.  From there, to convert it back from EDN, it’s as simple as passing that string into the edn/read-string function, and we get back a new map with the same values as before.

So far, so good, but the next tricky bit (I found) comes when we want to actually extend EDN for our own usage. The immediate example that came to mind for me, was for use with defrecords. Out of the box, prn-str will convert a defrecord into EDN without any external intervention.  For example:
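A sketch, assuming we are in the edn-example.core namespace (which is where the edn_example.core prefix in the tag below comes from):

(defrecord Goat [stuff things])

(def sample-goat (->Goat "I love Goats" "Goats are awesome"))

(prn-str sample-goat)
;; => "#edn_example.core.Goat{:stuff \"I love Goats\", :things \"Goats are awesome\"}\n"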

The nice thing here is that prn-str provides us with an EDN tag for our defrecord out of the box: “#edn_example.core.Goat”. This lets the EDN reader know that this is not a standard Clojure type, and that it will need to be handled differently from normal. The EDN reader makes it very easy to tell it how to handle this new EDN tag:
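In sketch form:

(edn/read-string {:readers {'edn_example.core.Goat map->Goat}}
                 (prn-str sample-goat))
;; => #edn_example.core.Goat{:stuff "I love Goats", :things "Goats are awesome"}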

You can see that when we call edn/read-string we pass through a :readers option: a map with our custom EDN tag as the key ( 'edn_example.core.Goat ) and, as the value, a function that returns what we finally want from our EDN deserialisation ( map->Goat ).  This is the magic glue that tells the EDN reader what to do with your custom EDN tag, and we can tell it to do whatever we want from that point forward. Since we are using a custom EDN tag, if we didn’t do this the EDN reader would throw an exception saying “No reader function for tag edn_example.core.Goat” when we attempted to deserialise the EDN.

Therefore, as the EDN reader parses:

#edn_example.core.Goat{:stuff I love Goats, :things Goats are awesome}

It first looks at #edn_example.core.Goat, and matches that tag to the map->Goat function.  This function is then passed the deserialised value of {:stuff “I love Goats”, :things “Goats are awesome”}. map->Goat takes that map and converts it into our Goat defrecord, and presto, we are able to serialise and deserialise our new Goat defrecord.

This isn’t just limited to defrecords; it could be any custom EDN we want to use. For example, if we wanted to write our own crazy EDN string for our Goat defrecord, rather than use the default, we could flatten out the map into a sequence, like so:
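A sketch of what that might look like; the goat/flat tag is made up for this example:

(defn goat->flat-edn
  "Writes a Goat as a custom tagged flat sequence, rather than the default record notation."
  [goat]
  (str "#goat/flat " (prn-str (vec (apply concat goat)))))

(goat->flat-edn sample-goat)
;; => "#goat/flat [:stuff \"I love Goats\" :things \"Goats are awesome\"]\n"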

We can then apply the same strategy as we did before to convert it back:
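Again in sketch form:

(edn/read-string {:readers {'goat/flat (fn [[_ stuff _ things]] (->Goat stuff things))}}
                 (goat->flat-edn sample-goat))
;; => #edn_example.core.Goat{:stuff "I love Goats", :things "Goats are awesome"}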

Finally, there is the question of what you do when you don’t know all the EDN tags that you will be required to parse. Thankfully the EDN reader handles this elegantly as well. You are able to provide the reader with a :default option: a function that gets called if no reader function can be found for a given EDN tag, and which gets passed the tag and the deserialised value.

For example, here is a function that simply converts the incoming tagged EDN into a map with :tag and :value entries:
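A sketch:

(defn unknown-tag->map [tag value]
  {:tag tag :value value})

(edn/read-string {:default unknown-tag->map} (prn-str sample-goat))
;; => {:tag edn_example.core.Goat, :value {:stuff "I love Goats", :things "Goats are awesome"}}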

That about wraps up EDN. Hopefully that makes things easier for people who want to start using EDN, and who encountered some of the stumbling blocks I did when trying to fit all the pieces together. EDN is a very nice data format for communication between Clojure applications.

The full source code for this example can be found on Github as well, if you want to have a look. Simply clone it and execute lein run to watch it go.

Provision Your Local Machines

This is an old photo, but my home workspace looks somewhat akin to this:

Twitter Desks

As you can see, I have a lot of monitors… and three separate laptops, running Synergy to share my keyboard and mouse across all the machines.

Since I have the three machines, migrating settings and software that I had set up on one machine to another started becoming more and more painful.  To add insult to injury, Ubuntu releases a new version every six months, and when I upgrade I tend to wipe my entire partition and reinstall from scratch. Therefore I have to reinstall all my software, which isn’t that much fun once, let alone three times over.  For a while I was keeping settings files (tmux, zsh etc) in a Dropbox folder, but that still required manual intervention when setting up a new machine, didn’t cover software installation, and there were many pieces of software that approach doesn’t work for.

I was already experimenting with Ansible, and it seemed like its simple, no-nonsense approach to machine provisioning was going to be a perfect fit for automating the installation and configuration of the software on my local machines.  Since it’s SSH based, allows local connections, and has no need of any server or client install, it’s also exceedingly lightweight, which again was a perfect fit for my needs.

My final result is a combination of a couple of small bash scripts to bootstrap the process, plus several Ansible Playbooks (which define what to actually install and configure) to do the actual provisioning.  You can find it all up on GitHub.
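To give a feel for the shape of it, a playbook for one of these machines boils down to something like this sketch (the real ones live in the repository):

- hosts: localhost
  connection: local
  roles:
    - core
    - primary

up.sh then runs it against the local machine with something along the lines of ansible-playbook -i "localhost," -c local primary.yml.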

Some interesting highlights include:

  • I have an install.sh to install the base dependencies to get going when I’ve got a bare machine.
  • The up.sh is the script to run to actually provision a machine. This does a few things:
    • Updates my local forked Ansible Git repository (So I can hack on Ansible when I want)
    • Updates my local git repository with all my Ansible Playbooks, so when the provision runs, it self updates to the latest version of my configuration.
    • The actual Playbook to run comes from a .playbook file that is not stored in the git repository. This means I can easily have specific playbooks for each machine. For example, the primary.yml playbook is for my main machine, but I have a variation of the secondary.yml for my other two (at time of writing, one secondary is running Ubuntu 13.10, while the other is running 13.04)
    • Ansible has a feature called Roles that gives you the ability to group together a set of configuration for reuse, and I’ve used several roles:
      • core – The configuration to be used across all machines
      • primary – The configuration to be used on my main development machine
      • secondary – The configuration to be used on my secondary (left and right) machines
      • ubuntu_x – This is so that I can have slightly different configuration as necessary for different ubuntu versions. For example, PPA software repositories will change depending on what version of Ubuntu you are on.
      • xmonad – I wasn’t sure I wanted to use xmonad, so I did it as a separate role while I was trying it out. Maybe one day I’ll merge it into core, since I’m pretty hooked on it now.

I’ve really enjoyed having my local machine provisioning, because:

  • I can take a completely clean machine, and have it up and running and ready for me to write code on in around an hour (depending on download speeds).
  • Wiping a partition and reinstalling Ubuntu is a clean and easy process now.
  • I can set up software and/or configuration tweaks on a secondary machine, and transport it across all my machines with ease.
  • If I want to test out software installation and configurations without committing to them, it’s only a git branch away.
  • It’s really easy to clear out cruft on any machine. I’ve no need for Thunderbird, or Empathy, and when I was running Unity, I could instantly remove all those annoying Shopping Lenses.
  • I never have to go back to old blog posts to remember how I installed / fixed / configured various things. It’s a write once, fixed forever type deal.

Strangely enough, it’s many of the same benefits you get when you automate the provisioning of your servers, but applied locally. If you are new to machine provisioning, starting by provisioning your local machine is also a low risk way to get up to speed on provisioning, while giving you a lot of benefits.

Hope that has been enough to inspire you to automate more on your local machines!

 

New version of Compound Theory is now live!

I can’t believe the last time I did an update to this blog’s design was 2005, which was a long, long, long time ago.

This site has now been migrated over to WordPress and, thanks to the wonderful skills of my wife Amy, is sporting a much nicer look than I could ever have come up with myself.

I’m sure there will be some kinks to work out, and a few issues here and there, so if you find anything, please do drop me a note (you now find pretty much all my contact details and social doohickeys at the top of the site!)

I’m hoping this will lead to much more blogging as well, as I’ve been playing with some stuff I’m really enjoying – including ansible, clojure, graph databases, websockets and a touch of webrtc as well, which I’ve wanted to write about, but the horrible aesthetic of the previous incarnation of this blog seriously put me off.

Lots of new things on the horizon!!!

Multi-Monitor with One screen rotated with Nvidia Drivers – Ubuntu 12.10

I have had this monitor and computer sitting on my right for a while now, and as you can see, the screen is rotated so it's in portrait mode. Not only is it great for Tweetdeck, it's also awesome for reading various API docs while I'm working.

I'd tried a few times to set this up with the Nvidia driver, but could never get it to work, and would just give up and use the Nouveau driver, as it was very simple to set up the way I would like.

I decided to come back to this, did some reading around, and discovered the following solution (I've since lost the link to the Ubuntu forum post that pointed this out; if anyone finds it, please add it to the comments).

  1. Open Nvidia Settings and set up your monitors using Twinview as you would like them positioned, and hit apply.
  2. Open the Displays application (the one in the System Settings application), select the display you want to rotate, change its rotation in the rotate dropdown, and hit apply.
  3. Go back to Nvidia Settings and save your configuration to your xorg.conf

That should be about it! Now you have 1 screen rotated!

JRuby, LWJGL & OpenGL – Getting Started with Shaders

In Part 2 we drew a triangle using a Vertex Buffer and some basic shaders. While on the surface this can seem overly complicated, it actually becomes the basis of a powerful OpenGL architecture that enables you to leverage the GPU in a variety of very interesting ways without having to rely on the CPU.

We’ll be looking at the example show_triangle_vert_frag_offset.rb, which can be run with the command bin/triangle_vert_frag_offset.

This also comes from the “OpenGL’s Moving Triangle” section of the Learning Modern 3D Graphics Programming online book.

With this code, we are going to take our original triangle, and we will make it move around a bit, and also change colour at the same time.

The clever thing about this is that we won’t be changing the vertex data stored in the buffer, but will instead be manipulating it with Fragment and Vertex shaders. This almost feels like Uber-CSS over the top of HTML.

This is pretty powerful stuff, as we can let the GPU do a lot of the processing by using shader programs, and passing them attributes to control the overall effect that we want.

Let’s look at our vertex shader offset_vertex.glsl

#version 330

layout(location = 0) in vec4 position;
uniform vec2 offset;

void main()
{
    vec4 totalOffset = vec4(offset.x, offset.y, 0.0, 0.0);
    gl_Position = position + totalOffset;
}

You can see we now have a uniform vec2 offset;. This defines a value that is going to get passed in from outside the Shader. It has the keyword uniform because the value stays the same for every vertex within the same rendering frame.

The offset attribute will be expecting a vector with an x and y coordinate to be passed through (vec2) for the uniform value.

We convert the offset vec2 to a vec4, as you can’t add a vec2 to a vec4. Then we can add these two together (GLSL will do vector arithmetic for you out of the box) to get our final gl_Position vector.

This means we can change the position of our triangle with relative ease, just by changing the offset of each of the vertices.

Doing this from our JRuby code is quite straightforward. The first thing we need to do is find out the location of the offset uniform. This is done through:

@offset_location = GL20.gl_get_uniform_location(@program_id, "offset")

Now we have the capability to change this value as we need to. In our display function, we have:

x_offset, y_offset = compute_position_offsets(time)

compute_position_offsets calculates x and y offsets based on the current time. (Check out the code for more details, it’s just some fun trig). To set the uniform value, we then do:

GL20.gl_uniform2f(@offset_location, x_offset, y_offset)

That’s it. The shader then does the rest of the work. Our triangle will now go round and round in a circle.
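That circular motion comes from compute_position_offsets; as a rough sketch of the kind of trig involved (the real version lives in the example code, and the 5 second loop here is arbitrary):

# returns [x_offset, y_offset], tracing a circle once every 5 seconds
def compute_position_offsets(elapsed_time)
  loop_duration = 5.0
  scale = Math::PI * 2.0 / loop_duration
  time_through_loop = elapsed_time % loop_duration
  [Math.cos(time_through_loop * scale) * 0.5,
   Math.sin(time_through_loop * scale) * 0.5]
end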

We do similar things to change the colour of the triangle as it goes around in a circle. Here is our fragment shader:

#version 330
out vec4 outputColor;
uniform float fragLoopDuration;
uniform float time;

const vec4 firstColor = vec4(1.0f, 1.0f, 1.0f, 1.0f);
const vec4 secondColor = vec4(0.0f, 1.0f, 0.0f, 1.0f);

void main()
{
    float currTime = mod(time, fragLoopDuration);
    float currLerp = currTime / fragLoopDuration;
    outputColor = mix(firstColor, secondColor, currLerp);
}

You can see we can define constants with the const keyword. This sets up two colours to mix between, in this case, white and green.

We have two uniform attributes this time: fragLoopDuration, the loop duration of the colour change, and time, how many seconds have passed since the application began.

mix is a GLSL function that blends two colours together based on the third float that is passed through, and gives us the slow fade between the two as the time value changes.

Setting this up is almost exactly the same as before. We will set the fragLoopDuration in our init_program method, as it never changes across the execution of our code.

frag_loop_location = GL20.gl_get_uniform_location(@program_id, "fragLoopDuration")
@frag_loop = 50.0
GL20.gl_uniform1f(frag_loop_location, @frag_loop)

For the time uniform attribute, we have some code that tracks how much time passes between frames and we just add it all up, and then set the uniform attribute with gl_uniform like usual:

current_time = Sys.get_time
@elapsed_time += (current_time - @last_time)
@last_time = current_time
time = @elapsed_time / 1000.0

 

GL20.gl_uniform1f(@time_location, time)

That’s the basics of using shaders with vertex data. We now have a triangle that goes around in a circle!