Brute – Entity Component System Library for Clojure

Warning: If you are new to Entity Component Systems, I would highly recommend Adam Martin’s blog series on them; he goes into great detail about what problem they solve, and what is required to implement them.  I’m not going to discuss what Entity Component Systems are in this blog post, so you may want to read his series first.

Having some more fun with game development, I wanted to use an Entity Component System library for my next project. Since I’m quite enamoured with Clojure at the moment, I went looking to see what libraries currently existed to facilitate this.

I found simplecs, which is quite nice, but I wanted something that was far more lightweight, and used simple Clojure building blocks, such as defrecord and functions to build your Entities, Components and Systems.  To that end, I wrote a library called brute, which (I think) does exactly that.

I wrote a Pong Clone example application to test out Brute, and I feel that it worked out quite well for my objectives.  Below are a few highlights from developing my example application with Brute that should hopefully give you a decent overview of the library.

As we said before, we can use defrecords to specify the Components of the system. For example, the components for the Ball:
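The snippet itself is missing from this copy of the post, but a minimal sketch of those defrecords (the field names are my guesses) would look something like:

```clojure
;; Plain defrecords, nothing more – field names are illustrative.
(defrecord Rectangle [x y width height colour]) ;; position, dimensions and colour
(defrecord Ball [])                             ;; pure marker, holds no data
(defrecord Velocity [x y])                      ;; direction and speed as a 2D vector
```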

We have:

  • A Rectangle, which defines the position of the Ball, the dimensions of the rectangle to be rendered on screen, and its colour.
  • A Ball component, which acts as a marker to delineate that an Entity is a Ball.
  • A Velocity, to determine what direction and speed the Ball is currently travelling.

As you can see, there is nothing special; we have just used regular old defrecord. Brute, by default, will use the Component instance’s class as the Component type, but this can be extended and/or modified (although we don’t do that here).

Therefore, to create the ball in the game, we have the following code:
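The original snippet isn’t shown in this copy; a hedged sketch of it, using the brute calls discussed below (field-width, field-height and white are hypothetical names), might look like:

```clojure
;; Hedged sketch – creates the Ball entity in the centre of the field,
;; with a white rectangle and a velocity of 300 in a random direction.
(let [ball  (create-entity!)
      angle (rand (* 2 Math/PI))]
  (add-component! ball (->Ball))
  (add-component! ball (->Rectangle (/ field-width 2) (/ field-height 2)
                                    16 16 white))
  (add-component! ball (->Velocity (* 300 (Math/cos angle))
                                   (* 300 (Math/sin angle)))))
```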

This creates a Ball in the centre of the playing field, with a white rectangle ready for rendering, and a Velocity of 300 pointing in a random direction.

As you can see here, creating the entity (which is just a UUID) is a simple call to create-entity!. From there we can add Components to the Entity, such as an instance of the Ball defrecord, by calling add-component!, passing in the entity and the relevant instance. Since we are using the defrecord classes as our Component types, we can use those classes to retrieve Entities from Brute.

For example, to retrieve all Entities that have Rectangle Components attached to them, it is simply a matter of using get-all-entities-with-component.
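The call itself is missing from this copy; it is a one-liner:

```clojure
;; Returns the ids of every Entity that has a Rectangle Component attached.
(get-all-entities-with-component Rectangle)
```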

From there, we can use get-component to return the actual Component instance, and any data it may hold, and can perform actions accordingly.

Systems become far simpler in Brute than when building an Entity System architecture on top of an Object Oriented language.

Systems in Brute are simply functions that take a delta argument: the number of milliseconds that have elapsed since the last game tick was processed. This leaves it up to the game author to structure Systems however they like around this core concept, while still providing a simple and clean entry point for getting this done.

Brute maintains a sequence of System functions in a registry, which is very simple to add to through the appropriately named add-system-fn! function.

Here is my System function for keeping score:
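The function itself is missing from this copy of the post; here is a heavily hedged reconstruction of its shape (the component lookups are from the API described above, but the scoring logic is elided and the names are my guesses):

```clojure
(defn score-system
  "A sketch only: checks each Ball against the edges of the playing
   field and updates the score accordingly. `delta` is the number of
   milliseconds since the last game tick."
  [delta]
  (doseq [entity (get-all-entities-with-component Ball)]
    (let [rect (get-component entity Rectangle)]
      ;; if rect has crossed the left or right edge of the field,
      ;; increment the appropriate player's score here
      )))
```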

Here we add it to the registry:
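The snippet didn’t survive in this copy; registering a System function (score-system being a hypothetical name for the scoring function) is a one-liner:

```clojure
;; add-system-fn! appends the function to Brute's registry of Systems.
(add-system-fn! score-system)
```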

Finally, all registered System functions are fired by calling process-one-game-tick, which invokes them in the order they were registered – and in theory, your game should run!

For more details on Brute, check out the full API documentation, as well as the Pong clone sample game that I wrote with the great play-clj framework (which sits on top of libGDX).

As always, feedback is appreciated.

Looking for a Remote Job With Go?

Pretty exciting stuff – I’m co-founder of a new venture that is looking to do some interesting things in the e-commerce space.

We are building with Go and Google App Engine, for a variety of good reasons. Mostly because of how fast Go is (wow is it fast), and how many nice things GAE gives us out of the box that we can leverage.

No in-depth details about the venture yet, but we are looking for like-minded developers who love e-commerce, Go, and working from home. If this sounds like you, please have a look at our Careers Page, and send us through an email.

We will also be in Sydney the week of the 23rd of March (and will be attending the Go Sydney Meetup Event on the 26th), so if you are in the area and would like to talk face to face about the position, drop us a line via the email provided on our careers page, we would love to hear from you.

Writing AngularJS with ClojureScript and Purnam

I’ve got a project that I’ve been using to learn Clojure.  For the front end of this project, I wanted to use ClojureScript and AngularJS so I could share code between my server and client (and have another avenue for learning Clojure), and also because Angular is totally awesome. Hunting around for details on how to integrate AngularJS with ClojureScript, I did find a few articles, but eventually I came across the library Purnam, and I knew I had a winner. Purnam is a few things wrapped up into one:

  1. A very nice extra layer for JavaScript interoperability, above and beyond what ClojureScript gives you out of the box.
  2. Both a Jasmine and Midje style test framework, with integration into the great Karma test runner.
  3. ClojureScript implementations of AngularJS directives, controllers, services, etcetera, which greatly reduce the amount of boilerplate code you would otherwise need.
  4. ClojureScript implementations that make testing of AngularJS directives, controllers, services, etcetera a breeze to set up under the above test framework.
  5. Fantastic documentation.

I could go on in detail on each of these topics, but it’s covered very well in the documentation. Instead I’ll expand on how easy it was to get up and running with a simple AngularJS module and controller configuration, and also testing it with Jasmine and Karma to give you a taste of what Purnam can do for you.

To create an AngularJS controller, we can define it like so (I’ve already defined my module “chaperone.app” in chaperone.ng.core):
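The controller snippet didn’t survive in this copy of the post; based on the def.controller and ! constructs discussed just below, it would have looked roughly like this (the namespace and macro imports are my best guess and may not match your Purnam version):

```clojure
(ns chaperone.ng.admin-user
  (:require [chaperone.ng.core])  ;; the module definition must load first
  (:use-macros [purnam.angular :only [def.controller]]))

;; Defines the AdminUserCtrl controller on the chaperone.app module.
(def.controller chaperone.app.AdminUserCtrl [$scope]
  (! $scope.init
     (fn []
       ;; set up the controller's initial $scope state here
       (! $scope.users []))))
```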

This shows off a few things:

  • def.controller : This is the Purnam macro for defining an AngularJS controller. Here I’ve created the controller AdminUserCtrl in the module chaperone.app.
  • You will notice the use of !. This is a Purnam construct that allows you to set a value on a JavaScript property, and lets you refer to that property with dot syntax, as opposed to having to use something like (set! (.-init $scope) (fn [] ... )) or (aset $scope "init" (fn [] ...)), which you may not prefer.  Purnam has quite a few constructs that allow you to use dot notation with JavaScript interoperability, above and beyond this one.  I personally prefer the Purnam syntax, but you can always choose not to use it.

One thing I discovered very quickly: I needed to require chaperone.ng.core, which contains my Angular module definition, in the above controller’s namespace, even though it is not actually used in the code.  This is so that the Angular module definition shows up before the controller definition in the final JavaScript output. Otherwise, Angular would throw an error because it could not find the module, as it had yet to be defined.

Purnam also makes it easy to run AngularJS unit tests. Here is a simple test I wrote to test a $scope value that should have been set after I ran the init function on my AdminUserCtrl controller.

As you can see, Purnam takes away a lot of the usual Jasmine + AngularJS boilerplate code, and you end up with a nice, clean way to write AngularJS tests. Purnam also implicitly injects its ability to interpret dot notation on JavaScript objects into your tests, which is handy if you want to use it.

Purnam also has capabilities to test Services, Directives and other aspects of the AngularJS ecosystem, through its describe.ng macro, which gives you full control over what Angular elements are created through Angular’s dependency injection capabilities.

Finally, Purnam integrates into Karma, which lets you run your tests in almost any JavaScript environment, be it NodeJS or inside a Chrome web browser.

Configuring Karma is as simple as running the standard karma init command, which asks you a series of questions about what test framework you want (Jasmine) and what platform you want to test on (Chrome for me), and results in a karma.conf.js file in the root of your directory.

One thing I found quickly when setting up my karma.conf.js file was that it was very necessary to specify which files you want included, and in what order. For example:
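The config excerpt is missing from this copy; the relevant part of karma.conf.js would look something like this (the file paths are illustrative – it is the ordering that matters):

```javascript
// karma.conf.js (excerpt): Angular must load before angular-mocks,
// and both before the compiled ClojureScript output.
files: [
  'test/lib/angular.min.js',
  'test/lib/angular-mocks.js',
  'target/unit-test.js'
],
```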

In my initial setup I had simply used a glob of test/*.js, which caused me all sorts of grief as angular-mocks.js needed to be loaded after angular.min.js (for example), but the resultant file list didn’t come out that way, and I got all sorts of very strange errors. Specifying exactly which files I needed to be using and in the correct order fixed all those issues for me.

I’m really enjoying working with the combination of ClojureScript and AngularJS, and Purnam gives me the glue to hook it all up together without getting in my way at all. Hopefully that gives you enough of a taste of Purnam to get your interest piqued, and you’ll take it for a spin as well!

Clojure EDN Walkthrough

When I set myself the task of learning how to use EDN, or Extensible Data Notation,  in Clojure, I couldn’t find a simple tutorial on how it worked, so I figured I would write one for anyone else having the same troubles I did. I was keen to learn EDN, as I am working on a Clojure based web application in my spare time, and wanted a serialisation format I could send data in that could be easily understood by both my Clojure and ClojureScript programs.

Disclaimer: This is my first blog post on Clojure, and I’m still learning the language. So if you see anything that can be improved / is incorrect, please let me know.

If you’ve never looked at EDN, it looks a lot like Clojure, which is not surprising, as it is actually a subset of Clojure notation.   For example, this is what a vector looks like in EDN:
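The inline example is missing from this copy; an EDN vector looks exactly like a Clojure one:

```clojure
[1 2.5 "a string" :a-keyword {:even "maps"}]
```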

Which should be fairly self explanatory.  If you want to know more, the EDN specification is worth a read; you should be able to get through it in around ten minutes, as it’s nice and lightweight.

The first place I started with EDN was the clojure.edn namespace, which has a very short API – and this was my first point of confusion. I could see read and read-string functions… but couldn’t see how I would actually write EDN.  Coming from a background in JSON, I expected there to be some sort of equivalent Clojure to-edn function lying around, which I could not seem to find. The connection I was missing was that since EDN is a subset of Clojure, and the Clojure Reader supports EDN, you only have to look at Clojure’s IO functions to find pr and prn, whose job it is to take an object and, “By default, pr and prn print in a way that objects can be read by the reader.”  Now, pr and prn output to the current output stream; however, we can use either pr-str or prn-str to get a string back instead, which is far easier to use for our examples.  Let’s have a quick look at an example of that, with a normal Clojure Map:
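The sample code is missing from this copy of the post; a minimal round-trip (the map contents are mine) looks like this:

```clojure
(require '[clojure.edn :as edn])

(def sample-map {:name "Mark" :level 10})

;; prn-str serialises the map to an EDN string
(def edn-string (prn-str sample-map))

;; edn/read-string parses it back into an equivalent map
(= sample-map (edn/read-string edn-string))
;; => true
```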

Obviously this could have been written more simply, but I wanted to break it down into individual chunks to make learning a little bit easier. As you can see, to convert our sample-map into EDN, all we had to do was call prn-str on it to return the EDN string.  From there, to convert it back from EDN, it’s as simple as passing that string into the edn/read-string function, and we get back a new map with the same values as before.

So far, so good, but the next tricky bit (I found) comes when we want to actually extend EDN for our own usage. The immediate example that came to mind for me, was for use with defrecords. Out of the box, prn-str will convert a defrecord into EDN without any external intervention.  For example:
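The example is missing from this copy; assuming the record lives in the edn_example.core namespace, as in the post’s sample project, it would look like:

```clojure
(defrecord Goat [stuff things])

(prn-str (->Goat "I love Goats" "Goats are awesome"))
;; the tag is derived from the namespace the record is defined in:
;; => "#edn_example.core.Goat{:stuff \"I love Goats\", :things \"Goats are awesome\"}\n"
```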

The nice thing here is that prn-str provides us with an EDN tag for our defrecord out of the box: “#edn_example.core.Goat”. This lets the EDN reader know that this is not a standard Clojure type, and that it will need to be handled differently from normal. The EDN reader makes it very easy to tell it how to handle this new EDN tag:
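The snippet is missing from this copy; the :readers option maps the tag symbol to a constructor function (here map->Goat, the map constructor that defrecord generates for us), assuming the record is defined in edn_example.core:

```clojure
(require '[clojure.edn :as edn])

(defrecord Goat [stuff things])

(edn/read-string
  {:readers {'edn_example.core.Goat map->Goat}}
  (prn-str (->Goat "I love Goats" "Goats are awesome")))
;; => a Goat record equal to the one we serialised
```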

You can see that when we call edn/read-string we pass through an option of :readers with a map from our custom EDN tag as the key ( 'edn_example.core.Goat ) to a function that returns what we finally want from our EDN deserialisation as the value ( map->Goat ).  This is the magic glue that tells the EDN reader what to do with your custom EDN tag, and we can tell it to do whatever we want from that point forward. Since we are using a custom EDN tag, if we didn’t do this, the EDN reader would throw an exception saying “No reader function for tag edn_example.core.Goat” when we attempted to deserialise the EDN.

Therefore, as the EDN reader parses:

#edn_example.core.Goat{:stuff "I love Goats", :things "Goats are awesome"}

It first looks at #edn_example.core.Goat, and matches that tag to the map->Goat function.  This function is then passed the deserialised value of {:stuff "I love Goats", :things "Goats are awesome"}. map->Goat takes that map and converts it into our Goat defrecord, and presto – we are able to serialise and deserialise our new Goat defrecord.

This isn’t just limited to defrecords; it could be any custom EDN we want to use. For example, if we wanted to write our own crazy EDN string for our Goat defrecord, rather than use the default, we could flatten out the map into a sequence, like so:
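The snippet is missing from this copy; one hedged way to write such a custom serialiser (the #edn-example/goat tag name is illustrative) is:

```clojure
(defrecord Goat [stuff things])

(defn goat->edn
  "Serialise a Goat as a custom tag followed by a flat vector."
  [goat]
  (str "#edn-example/goat " (prn-str [(:stuff goat) (:things goat)])))

(goat->edn (->Goat "I love Goats" "Goats are awesome"))
;; => "#edn-example/goat [\"I love Goats\" \"Goats are awesome\"]\n"
```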

We can then apply the same strategy as we did before to convert it back:
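The snippet is missing from this copy; the reader function for a flattened tag like #edn-example/goat (an illustrative name) simply rebuilds the record from the vector:

```clojure
(require '[clojure.edn :as edn])

(defrecord Goat [stuff things])

;; The reader fn receives the deserialised vector and rebuilds the Goat.
(edn/read-string
  {:readers {'edn-example/goat (fn [[stuff things]] (->Goat stuff things))}}
  "#edn-example/goat [\"I love Goats\" \"Goats are awesome\"]")
```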

Finally, there is the question of what you do when you don’t know all the EDN tags that you will be required to parse. Thankfully the EDN reader handles this elegantly as well. You are able to provide the reader with a :default option: a function that gets called whenever no reader can be found for a given EDN tag, and which is passed the tag and the deserialised value.

For example, here is a function that simply converts the incoming EDN string into a map with :tag and :value values for the incoming EDN:
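The snippet is missing from this copy; such a fallback function and its use would look like this:

```clojure
(require '[clojure.edn :as edn])

(defn unknown-tag->map
  "Fallback for tags we have no reader for: keep the tag and value."
  [tag value]
  {:tag tag :value value})

(edn/read-string {:default unknown-tag->map} "#unknown/tag {:a 1}")
;; => {:tag unknown/tag, :value {:a 1}}
```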

That about wraps up EDN. Hopefully this makes things easier for people who want to start using EDN and hit some of the stumbling blocks I did when trying to fit all the pieces together. EDN is a very nice data format for communication between Clojure programs.

The full source code for this example can be found on Github as well, if you want to have a look. Simply clone it and execute lein run to watch it go.

Provision Your Local Machines

This is an old photo, but my home workspace looks somewhat akin to this:

Twitter Desks

As you can see, I have a lot of monitors… and three separate laptops, running Synergy to share my keyboard and mouse across all the machines.

Since I have three machines, migrating settings and software that I had set up on one machine to another started becoming more and more painful.  To add insult to injury, Ubuntu releases a new version every six months, and when I upgrade I tend to wipe my entire partition and reinstall from scratch. Therefore I have to reinstall all my software, which isn’t that much fun once, let alone three times over.  For a while I was keeping settings files (tmux, zsh etc) in a Dropbox folder, but that still required manual intervention when setting up a new machine, didn’t cover software installation, and there were many pieces of software that approach didn’t work for.

I was already experimenting with Ansible, and it seemed like its simple, no-nonsense approach to machine provisioning would be a perfect fit for automating the installation and configuration of the software on my local machines.  Since it’s SSH based, allows local connections, and has no need for any server or client install, it’s also exceedingly lightweight – again, a perfect fit for my needs.

My final result is a combination of a couple of small bash scripts to bootstrap the process, combined with several Ansible Playbooks (which define what to actually install and configure) to do the actual provisioning.  You can find it all up on GitHub.

Some interesting highlights include:

  • I have an install.sh to install the base dependencies to get going when I’ve got a bare machine.
  • The up.sh is the script to run to actually provision a machine. This does a few things:
    • Updates my local forked Ansible Git repository (So I can hack on Ansible when I want)
    • Updates my local git repository with all my Ansible Playbooks, so when the provision runs, it self updates to the latest version of my configuration.
    • The actual Playbook to run comes from a .playbook file that is not stored in the git repository. This means I can easily have playbooks specific to each machine. For example, the primary.yml playbook is for my main machine, but I have a variation of secondary.yml for my other two (at time of writing, one secondary is running Ubuntu 13.10, while the other is running 13.04)
    • Ansible has a feature called Roles that gives you the ability to group together a set of configuration for reuse, and I’ve used several roles:
      • core – The configuration to be used across all machines
      • primary – The configuration to be used on my main development machine
      • secondary – The configuration to be used on my secondary (left and right) machines
      • ubuntu_x – This is so that I can have slightly different configuration as necessary for different Ubuntu versions. For example, PPA software repositories will change depending on what version of Ubuntu you are on.
      • xmonad – I wasn’t sure I wanted to use xmonad, so I did it as a separate role while I was trying it out. Maybe one day I’ll merge it into core, since I’m pretty hooked on it now.

I’ve really enjoyed having my local machine provisioning automated, because:

  • I can take a completely clean machine, and have it up and running and ready for me to write code on in around an hour (depending on download speeds).
  • Wiping a partition and reinstalling Ubuntu is a clean and easy process now.
  • I can set up software and/or configuration tweaks on a secondary machine, and transport it across all my machines with ease.
  • If I want to test out software installation and configurations without committing to them, it’s only a git branch away.
  • It’s really easy to clear out cruft on any machine. I’ve no need for Thunderbird, or Empathy, and when I was running Unity, I could instantly remove all those annoying Shopping Lenses.
  • I never have to go back to old blog posts to remember how I installed / fixed / configured various things. It’s a write once, fixed forever type deal.

Strangely enough, it’s many of the same benefits you get when you automate the provisioning of your servers, but applied locally. If you are new to machine provisioning, starting by provisioning your local machine is also a low risk way to get up to speed on provisioning, while giving you a lot of benefits.

Hope that has been enough to inspire you to automate more on your local machines!


New version of Compound Theory is now live!

I can’t believe the last time I did an update to this blog’s design was 2005, which was a long, long, long time ago.

This site has now been migrated over to WordPress and, thanks to the wonderful skills of my wife Amy, is sporting a much nicer look than I could ever have come up with myself.

I’m sure there will be some kinks to work out, and a few issues here and there, so if you find anything, please do drop me a note (you can now find pretty much all my contact details and social doohickeys at the top of the site!)

I’m hoping this will lead to much more blogging as well, as I’ve been playing with some stuff I’m really enjoying – including Ansible, Clojure, graph databases, WebSockets and a touch of WebRTC – which I’ve wanted to write about, but the horrible aesthetic of the previous incarnation of this blog seriously put me off.

Lots of new things on the horizon!!!

Multi-Monitor with One screen rotated with Nvidia Drivers – Ubuntu 12.10

I've had this monitor and computer sitting on my right for a while now, and as you can see, the screen is rotated so it's in portrait mode. Not only is it great for Tweetdeck, it's also awesome for reading various API docs while I'm working.

I'd tried a few times to set this up with the Nvidia driver, but could never get it to work, and would just give up and use the Nouveau driver, as it was very simple to set up the way I would like.

I decided to come back at this, and did some reading around, and discovered the following solution (I've since lost the link on the Ubuntu forum that pointed this out, if anyone finds it, please add to the comments).

  1. Open Nvidia Settings and set up your monitors using Twinview as you would like them positioned, and hit apply.
  2. Open the Displays application (the one in the System Settings application), select the display you want to rotate, change its rotation in the rotate dropdown, and hit apply.
  3. Go back to Nvidia Settings and save your configuration to your xorg.conf

That should be about it! Now you have 1 screen rotated!

JRuby, LWJGL & OpenGL – Getting Started with Shaders

In Part 2 we drew a triangle using a Vertex Buffer, and some basic shaders. While on the surface this can seem overly complicated, it actually becomes the basis of a powerful OpenGL architecture that lets you leverage the GPU in a variety of very interesting ways without having to rely on the CPU.

We’ll be looking at the example show_triangle_vert_frag_offset.rb, which can be run with the command bin/triangle_vert_frag_offset.

This also comes from the “OpenGL’s Moving Triangle” section of the Learning Modern 3D Graphics Programming online book.

With this code, we are going to take our original triangle, and we will make it move around a bit, and also change colour at the same time.

The clever thing about this is, we won’t be changing the vertex data stored in the buffer, but will instead be manipulating it with Fragment and Vertex shaders. This almost feels like Uber-CSS over the top of HTML.

This is pretty powerful stuff, as we can let the GPU do a lot of the processing by using shader programs, and passing them attributes to control the overall effect that we want.

Let’s look at our vertex shader offset_vertex.glsl

#version 330

layout(location = 0) in vec4 position;
uniform vec2 offset;

void main()
{
    vec4 totalOffset = vec4(offset.x, offset.y, 0.0, 0.0);
    gl_Position = position + totalOffset;
}

You can see we now have a uniform vec2 offset;. This defines a value that is going to get passed in from outside the Shader. It has the keyword uniform because the value stays the same for every vertex within the same rendering frame.

The offset attribute will be expecting a vector with an x and y coordinate to be passed through (vec2) for the uniform value.

We convert the offset vec2 to a vec4, as you can’t add a vec2 to a vec4. Then we can add these two together (GLSL will do vector arithmetic for you out of the box) to get our final gl_Position vector.

This means we can change the position of our triangle with relative ease, just by changing the offset of each of the vertices.

To do this from our JRuby code is quite straightforward. The first thing we need to do is find out the location / position of the offset attribute. This is done through:

@offset_location = GL20.gl_get_uniform_location(@program_id, "offset")

Now we have the capability to change this value as we need to. In our display function, we have:

x_offset, y_offset = compute_position_offsets(time)

compute_position_offsets calculates x and y offsets based on the current time. (Check out the code for more details, it’s just some fun trig). To set the uniform value, we then do:

GL20.gl_uniform2f(@offset_location, x_offset, y_offset)

That’s it. The shader then does the rest of the work. Our triangle will now go round and round in a circle.

We do similar things to change the colour of the triangle as it goes around in a circle. Here is our fragment shader:

#version 330
out vec4 outputColor;
uniform float fragLoopDuration;
uniform float time;

const vec4 firstColor = vec4(1.0f, 1.0f, 1.0f, 1.0f);
const vec4 secondColor = vec4(0.0f, 1.0f, 0.0f, 1.0f);

void main()
{
    float currTime = mod(time, fragLoopDuration);
    float currLerp = currTime / fragLoopDuration;
    outputColor = mix(firstColor, secondColor, currLerp);
}

You can see we can define constants with the const keyword. This sets up two colours to mix between, in this case, white and green.

We have two uniform attributes this time: fragLoopDuration, the loop duration of the colour change, and time, how many seconds have passed since the application began.

mix is a GLSL function that blends two colours together based on the third float that is passed through, and gives us the slow fade between the two as the time value changes.

Setting this up is almost exactly the same as before. We will set the fragLoopDuration in our init_program method, as it never changes across the execution of our code.

frag_loop_location = GL20.gl_get_uniform_location(@program_id, "fragLoopDuration")
@frag_loop = 50.0
GL20.gl_uniform1f(frag_loop_location, @frag_loop)

For the time uniform attribute, we have some code that tracks how much time passes between frames and we just add it all up, and then set the uniform attribute with gl_uniform like usual:

current_time = Sys.get_time
@elapsed_time += (current_time - @last_time)
@last_time = current_time
time = @elapsed_time / 1000.0


GL20.gl_uniform1f(@time_location, time)

That’s the basics of using shaders with vertex data. We now have a triangle that goes around in a circle!

JRuby, LWJGL & OpenGL – Drawing a Triangle

In Part 1 we created a window, nothing too fancy. Now we get to actually display a triangle.

Just to follow along as well, I'm moving through the Learning Modern 3D Graphics Programming online book to learn OpenGL (again), so the OpenGL examples I will be displaying will be ports of the code that it provides. For the complete theory behind this code, I would suggest reading the linked section before going through the JRuby code. I expect my explanations to only be commentary on the information already provided in that series, plus discussion of some of the finer points of JRuby and Java library integration.

If you have any questions, please feel free to ask, but be aware, I'm very new to OpenGL, so writing this series is very much part of my learning experience. However, I will attempt to answer the best way I can. On the other hand, if you find anything wrong with what I've written, please point it out so it can be corrected.

This example is from Hello, Triangle!.

The full code can be seen on Github here.

To run this example use bin/triangle

Making Vertex Data Available to OpenGL

When I first did OpenGL back in University, we used the glBegin() and glEnd() paradigm. This was definitely far easier than the more modern APIs, as it was very clear and easy to draw a simple polygon on the screen (example). However, it did mean more computation occurred on the CPU and a larger use of system RAM than the newer APIs. The newer APIs, while (far?) more complicated, shift much of the work to the GPU and also provide a far more flexible implementation. I liken it to working with HTML and tables back in early HTML days. Sure it worked, but CSS and semantic markup give a clear separation and create far more flexible implementation options (at least in theory ;) ).

So we have some basic vertex information to display a right angle triangle:

@vertex_positions = [
    0.75, 0.75, 0.0, 1.0,
    0.75, -0.75, 0.0, 1.0,
    -0.75, -0.75, 0.0, 1.0,
]

Each line of this array defines the x, y and z coordinates of one vertex of our triangle. You will notice there is a fourth coordinate (1.0) on each line. This relates to clip space. For now we'll just say this means that the vertexes you see in the window have to have values between -1 and 1 on the x, y and z axes. Anything beyond that will render outside of the window.

As discussed previously, in old school OpenGL you would just loop through this list of vertexes and say "draw a triangle here"; however, this is no longer the case!

I feel like modern OpenGL is almost like a database – you put some data into it, and get back an id to reference the data you placed in it. Then you can work on that data stored on the GPU, through some other techniques (that we will look at in a minute), via that id. This seems to be a concept that is used across the board.

The code that inserts our vertex data into the GPU can be seen in the method init_vertex_buffers

@buffer_id = GL15.gl_gen_buffers
GL15.gl_bind_buffer(GL15::GL_ARRAY_BUFFER, @buffer_id)

So the first thing we do is generate an id for the vertex buffer, which is where we will store our vertex data (gl_gen_buffers). Then we tell OpenGL: hey, this is the buffer we want to work with for the moment, through gl_bind_buffer, passing in the specific @buffer_id we generated before. We also tell OpenGL that the buffer we are working with is a GL_ARRAY_BUFFER, so it knows what data to expect.

In case you aren't aware, JRuby will convert Java static constants to Ruby constants, so we can access these static fields very easily.

To pass the vertex data into the new vertex array buffer, LWJGL has us use its BufferUtils class to create a NIO buffer, and push the data into it, like so:

float_buffer = BufferUtils.create_float_buffer(@vertex_positions.size)
float_buffer.put(@vertex_positions.to_java(:float))
#MUST FLIP THE BUFFER! THIS PUTS IT BACK TO THE BEGINNING!
float_buffer.flip

A couple of interesting notes:

  1. You will notice the .to_java(:float). That is the JRuby code for converting a Ruby array to a Java array. Passing in :float tells it to make an array of primitive floats.
  2. The .flip at the end. This is very important (and took me a day to work out, as I'm not familiar with NIO buffers). Here is a great article that explains it in more detail, but essentially the buffer tracks where it is at, and flip sends it back to the beginning. Without this, no data goes to our Vertex Array, and nothing happens!

GL15.gl_buffer_data(GL15::GL_ARRAY_BUFFER, float_buffer, GL15::GL_STATIC_DRAW)

This then pushes our vertex data to the bound vertex buffer.

#cleanup
GL15.gl_bind_buffer(GL15::GL_ARRAY_BUFFER, 0)

After we are done, we tell OpenGL not to be bound to any array buffers (0 works like NULL in OpenGL land). This could be considered optional, but ensures that weird things don't occur.

Now all we have to do is actually write the code that uses this data to display the triangle!

Telling OpenGL how to Render the Vertexes

So now we have the vertex data stored on the GPU, we have to tell it how to render it, and to do that, we have to build a program out of a couple of different types of shaders. Think of Shaders kind of like the CSS of HTML: they work on the existing data in the GPU and tell it how to render (although that's a bit of an oversimplification).

First thing, we'll write a simple vertex shader in GLSL, the language for writing shaders. This specifies to the GPU where the vertexes actually are from the data you entered earlier. We'll just say that it's correct and basically pass it through.

#version 330
layout(location = 0) in vec4 position;
void main()
{
    gl_Position = position;
}

Then we write a fragment shader to tell it what colours to make things. We'll just make everything white for convenience.

#version 330
out vec4 outputColor;
void main()
{
    outputColor = vec4(1.0f, 1.0f, 1.0f, 1.0f);
}

To make life easier for myself, I wrote a quick little create_shader function that loads the shader source from a file and compiles it. I just want to make note of one line:

puts file_name, GL20.gl_get_shader_info_log(shader_id, 200)

Without this, you have no idea why your shader fails (if it does). Mine was failing, and I didn't realise it until I looked deeper. (Version 330 of GLSL wasn't supported on my Ultrabook running Linux with Mesa, so I had to switch to my main laptop with the Nvidia graphics card.)

Make sure to look at the output logs!

Creating the program that defines how our data is output on the screen is quite similar to what we did before. We generate an id and then link the shaders to the program like so:

vertex_shader = create_shader(GL20::GL_VERTEX_SHADER, 'basic_vertex.glsl')
frag_shader = create_shader(GL20::GL_FRAGMENT_SHADER, 'basic_fragment.glsl')

@program_id = GL20.gl_create_program

GL20.gl_attach_shader(@program_id, vertex_shader)
GL20.gl_attach_shader(@program_id, frag_shader)
GL20.gl_link_program(@program_id)
GL20.gl_validate_program(@program_id)

puts "Validate Program", GL20.gl_get_program_info_log(@program_id, 200)

A few things to note:

  1. create_shader returns the id of the shader. Much like a database, you reference everything you are creating using OpenGL by the generated ID.
  2. We attach each shader to the program using the ids we have for both of them.
  3. gl_link_program essentially takes the program we're creating and makes it executable on the GPU.
  4. We also get the program info log with gl_get_program_info_log much like we did with the shader. Make sure to check those logs!
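The id-based pattern in the notes above can be sketched in plain Ruby. GLObjects is purely illustrative, not an LWJGL class; it just mimics how every create call hands back an integer id, and all later calls refer to objects only by those ids, much like database keys.

```ruby
# Illustrative model of OpenGL's handle-based object API.
class GLObjects
  def initialize
    @next_id = 1
    @objects = {}
  end

  # mimics gl_create_shader / gl_create_program returning a fresh integer id
  def create(kind)
    id = @next_id
    @next_id += 1
    @objects[id] = { kind: kind, attached: [] }
    id
  end

  # mimics gl_attach_shader: both arguments are ids, not objects
  def attach(program_id, shader_id)
    @objects[program_id][:attached] << shader_id
  end

  def attached(program_id)
    @objects[program_id][:attached]
  end
end

gl = GLObjects.new
vertex_shader = gl.create(:vertex_shader)   # => 1
frag_shader   = gl.create(:fragment_shader) # => 2
program_id    = gl.create(:program)         # => 3
gl.attach(program_id, vertex_shader)
gl.attach(program_id, frag_shader)
gl.attached(program_id)                     # => [1, 2]
```

You never hold a shader or program object directly, only its id, which is why checking the info logs by id is the only window you get into what went wrong.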

Finally, we can get to our display loop, which you can see in the display method.

#set the colour to clear.

GL11.gl_clear_color(0.0, 0.0, 0.0, 0.0)

This sets the colour that the screen will be cleared to (black, in this case).

GL11.gl_clear(GL11::GL_COLOR_BUFFER_BIT)

This clears the screen, so we can redraw with impunity.

GL20.gl_use_program(@program_id)

Tell it to use our shaders

GL15.gl_bind_buffer(GL15::GL_ARRAY_BUFFER, @buffer_id)

This tells it to use our vertex buffer

GL20.gl_enable_vertex_attrib_array(0)

This enables vertex attribute index 0, so the values in the vertex buffer are fed to the layout(location = 0) input of our vertex shader.

GL20.gl_vertex_attrib_pointer(0, 4, GL11::GL_FLOAT, false, 0, 0)

This tells OpenGL how the data in the vertex buffer is structured. Here we are saying that we have a tightly packed array of floats, and every 4 elements make up one vertex: x, y, z and then the w clip-space coordinate, as we saw earlier. (The first 0 is the attribute index, and 4 is the number of components per vertex.)

GL11.gl_draw_arrays(GL11::GL_TRIANGLES, 0, 3)

DRAW THE TRIANGLE! 0 is the index of the first vertex you want to draw, and 3 is the number of vertices to process, in this case 3 to make a single triangle.
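The layout and vertex count that gl_vertex_attrib_pointer and gl_draw_arrays describe can be sketched in plain Ruby. The coordinate values here are illustrative, not necessarily the ones from the original source.

```ruby
# A flat array of floats, exactly as it sits in the vertex buffer.
vertex_positions = [
   0.75,  0.75, 0.0, 1.0,  # vertex 0: x, y, z, w
   0.75, -0.75, 0.0, 1.0,  # vertex 1
  -0.75, -0.75, 0.0, 1.0   # vertex 2
]

# The '4' argument to gl_vertex_attrib_pointer: components per vertex.
components_per_vertex = 4
vertices = vertex_positions.each_slice(components_per_vertex).to_a

vertices.length  # => 3, the count passed to gl_draw_arrays(GL_TRIANGLES, 0, 3)
```

Twelve floats grouped four at a time yields three vertices, which is why the draw call asks for exactly 3.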

GL20.gl_disable_vertex_attrib_array(0)
GL20.gl_use_program(0)

Finally we use the magic OpenGL 0 to clean things up again, and we are done!

We now have this amazing triangle.

Amazing Triangle!

Creating a window with LWJGL and JRuby

This is the most basic of steps before getting anything done with LWJGL and JRuby, so I figured I would document it to help people with the initial hurdles.

Before doing anything with LWJGL and JRuby, you need to set things up correctly so that LWJGL can find its native extensions. The easiest way I found to do this was the following (tested on Linux):

  1. Install JRuby with RVM
  2. Download LWJGL
  3. Put lwjgl.jar in ./lib/java
  4. Unzip the native extensions (flatten out the directory, ignore solaris) and put them into ./lib/java/native
  5. Create a .rvmrc in the root of the dir with the following code in it:

export JRUBY_OPTS="-J-Djava.library.path=lib/java/native"

This sets the JRUBY_OPTS environment variable, which tells JRuby to append these arguments to all JRuby operations. This obviously doesn't work for deployment, but during development it is very handy.
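The steps above can be sketched from the shell. The native library names mentioned in the comments are illustrative and vary by platform.

```shell
# Create the directory layout the steps describe
mkdir -p lib/java/native

# lwjgl.jar goes in lib/java, and the flattened natives in lib/java/native,
# e.g. lib/java/native/liblwjgl.so on Linux.

# .rvmrc in the project root points JRuby at the natives:
echo 'export JRUBY_OPTS="-J-Djava.library.path=lib/java/native"' > .rvmrc
```

With RVM in use, cd-ing into the project directory picks up the .rvmrc and exports JRUBY_OPTS automatically.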

Now to write some Ruby code to get a window up and running!

First we need to require java, and the lwjgl jar file:

require 'java'
require 'java/lwjgl.jar'

Drawing the window is now very straightforward:

java_import org.lwjgl.opengl.Display
java_import org.lwjgl.opengl.DisplayMode

#
# Just a basic display using lwjgl
class OpenGL::BasicDisplay

  def initialize
    Display.display_mode = DisplayMode.new(800, 600)
    Display.create

    until Display.is_close_requested
      Display.update
    end

    Display.destroy
  end

  def self.start
    OpenGL::BasicDisplay.new
  end
end

We can then write a little bin file to get this to run:

OpenGL::BasicDisplay.start

And there we go, we have a window!

Next, we’ll start to write some OpenGL!

The full source can be downloaded from GitHub