The Haskell gl package
a pragmatic primer
This post demonstrates how to get to a useful, working foundation for an OpenGL application using the recent gl
package with the minimum of fuss. By the end of the post, this is what we’ll have:
Which is to say:
- A window, created and managed by GLFW-b.
- A cube mesh, with positions, colours, normals, and uv co-ordinates.
- A single directional light, calculated using the fragment shader.
- A texture, alpha blended with the underlying colours.
- Some very simple animation (the cube spins).
When trying to get set up with OpenGL, I’ve found that while there are a lot of resources out there, I’ve often had to piece together various blog posts in order to get a working application that I can build off. Many of these blog posts also make use of immediate mode, which may be quick and easy to learn, but is quite outdated and ultimately sets you down the wrong path if you want to learn modern OpenGL programming. This post aims to give you a solid jumping-off point to start on the interesting stuff straight away.
As well as that, this post is an opportunity for me to try the gl package, introduced relatively recently by Edward Kmett and others. gl
attempts to be a low-level but complete set of bindings to the OpenGL API – as opposed to the rather more longstanding OpenGL package, which tries to be a bit more “Haskelly” but at the cost of certain missing parts of the OpenGL specification.
OpenGL is built on the OpenGLRaw package, which as the name implies is supposed to be a “raw” binding for OpenGL much as gl is. As I understand it, the problems with this package are as follows:
- It doesn’t work well as an “escape hatch” for the higher-level OpenGL package because many of the abstractions don’t translate between the two libraries.
- It is not as complete as gl in terms of the number of extensions it supports.
- Because it is part of the Haskell Platform, fixes to the above issues can take a year to make their way into the library.
For more information about the reasons behind the creation of the gl package, this video makes for interesting viewing.
To me, though, the greatest advantage of the gl package is that you can google it. Because it is machine-generated from the actual OpenGL API, all the symbol names match, and you use them in the same way as you would in C. The vast majority of OpenGL tutorials on the internet are written in C or C++, so having a common vocabulary with them is immensely useful.
Setting up the project
In case you want to follow along, here is the relevant part of my cabal file:
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
I am including absolute version numbers here so that you can see exactly what I was working with, but you could probably be a lot more lenient with your own projects.
This post is a literate Haskell file – lines preceded by >
are executable code, so you should be able to run and test the file directly.
Breakdown of tasks
This post is, by necessity, quite long. There is a lot that needs to be set up in order to get a spinning cube on the screen! This is basically how I’ve started every games/graphics project I’ve done in the last ten years, and every time I spend the majority of my time staring into the abyss of a bright pink window with nothing rendering in it, wondering which trivial step I’ve forgotten in my initial setup that is breaking everything. By collecting all the steps together in this one, massive blog post, I hope to save others (as well as my future self) from this pain.
To help navigate, here’s a breakdown of what we’re going to be doing:
- Set up language pragmas / import the required modules
- Set up some handy error handling utilities
- Create a window and associate it with our GL context
- Define the mesh for our cube
- Load resources (texture, shaders, and mesh)
- Initialise OpenGL
- Update state every frame to rotate the cube
- Actually render the scene
- Cleanup
Let’s get started!
Preliminaries
We start with language pragmas. We will overload both string and list syntax to provide us with convenient access to Haskell’s faster Text
and Vector
containers.
|
|
The gl package makes quite heavy use of pattern synonyms to reproduce GL’s native enums.
|
I’m also going to make use of Unicode symbols in this file.
|
|
|
Obviously we’ll begin by importing the Graphics.GL
namespace exposed by the gl
package. This package follows the GL C API convention of prefixing its function names with gl
, so I won’t bother with a qualified import; for all other modules I will either import them qualified or explicitly name the imported symbols, so that you can see where they’re coming from.
|
To make things easier, I’m going to make use of GLFW to deal with opening the window and getting keypresses. This will allow us to concentrate on the GL side of things.
|
Edward Kmett’s linear library is a nice, flexible library for vector and matrix maths, and the Storable
instances it supplies for everything make it a good fit for working with GL.
|
|
|
|
|
|
|
|
The following two functions also come in handy when working with linear. distribute gives you the transpose of a matrix, and (^.) gives you access to certain fields which are exposed as lenses.
|
|
|
|
I’m going to use the JuicyPixels library for loading texture data.
|
|
|
|
After this we import some standard libraries which we’ll be making use of later.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
We’ll be working with strings a little bit to load our shaders and send them into GL, so we’ll need Data.Text
and the Data.Text.Foreign
utilities for communicating with C. We’ll also include Data.Vector
while we’re at it.
|
|
|
|
|
|
|
|
Because gl
works at quite a low level, you have to do quite a lot of marshalling between Haskell and C. Haskell provides a number of convenient utilities for doing this within the Foreign
hierarchy. Data.Vector.Storable
also gives us a directly serializable form of Vector
.
|
|
|
|
|
|
|
|
|
|
|
Finally, some monad transformers which will ease some of the boilerplate.
|
|
|
|
|
|
|
|
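Since the imports above are spread across the whole section, here is a consolidated sketch of the pragmas and imports the rest of this post relies on. It is pieced together from the prose rather than copied from the original file, and some choices (such as taking (^.) from lens) are assumptions.

```haskell
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE OverloadedLists   #-}
{-# LANGUAGE PatternSynonyms   #-}
{-# LANGUAGE UnicodeSyntax     #-}

import Graphics.GL                          -- the gl package, imported unqualified

import qualified Graphics.UI.GLFW as GLFW   -- GLFW-b

import Linear                               -- V2, V3, M44, perspective, mkTransformation, ...
import Data.Distributive (distribute)       -- transpose of a matrix
import Control.Lens ((^.))                  -- lens-style field access (library choice assumed)

import Codec.Picture                        -- JuicyPixels

import Data.Bits ((.|.))
import Data.IORef
import Data.Maybe (fromMaybe)
import Control.Monad (unless, when)
import System.IO (BufferMode (LineBuffering), hSetBuffering, stdout)

import qualified Data.Text as T
import qualified Data.Text.IO as T
import qualified Data.Text.Foreign as T
import qualified Data.Vector as V
import qualified Data.Vector.Storable as SV

import Foreign.Ptr
import Foreign.Storable
import Foreign.C.String (withCString)
import Foreign.Marshal.Alloc (alloca, allocaBytes)
import Foreign.Marshal.Array (allocaArray, peekArray, withArray)
import Foreign.Marshal.Utils (with)

import Control.Monad.IO.Class (liftIO)
import Control.Monad.Trans.Maybe (MaybeT (..))
import Control.Monad.Trans.Cont (ContT (..))
```

The sketches in the rest of this post assume these names, in particular the T, V and SV qualifiers.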
Error-handling utilities
I don’t actually make use of these functions anywhere within this post, but you can bet I used them while I was writing it! Debugging graphical issues can be extremely frustrating as the GPU doesn’t have anything akin to a printf
, and GL itself is basically a gigantic state machine where subtle mistakes can lead to strange errors down the line. Sometimes it can be useful to take a scattergun approach and just sprinkle error-checking facilities throughout your code in the hope of getting a clue as to what might be the problem. These functions help you do that.
First, getErrors
collects all the errors GL is currently reporting. Since GL allows certain operations to be performed concurrently, it holds multiple error registers, and a single call to glGetError
just gives you the value from one of them. Here, we keep calling it until there are no errors left, at which point we return all the available errors as a list.
|
|
|
|
|
|
|
|
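As a concrete illustration, a getErrors along these lines could be written as follows (a sketch, not necessarily the original):

```haskell
getErrors :: IO [GLuint]
getErrors = do
  err <- glGetError
  if err == GL_NO_ERROR
    then return []                -- no more error registers to drain
    else (err :) <$> getErrors    -- keep collecting until GL runs dry
```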
The errors themselves are, like most things in GL, just a GLuint
that maps to some enumerated value. The documentation for glGetError
gives us a clue as to what values might be returned, so we can use that to convert the errors to a more useful String
value.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Finally, printErrors
is the function we’ll actually use. It uses the above two functions to collect the errors and output them. I found it useful just to crash straight away at this point, so I report the errors using error
. If you wanted to try and continue despite the errors you could use putStrLn
instead.
|
|
|
|
|
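Roughly, the pair of helpers described here might look like the following sketch (the exact message formatting is assumed):

```haskell
showError :: GLuint -> String
showError GL_INVALID_ENUM                  = "GL_INVALID_ENUM"
showError GL_INVALID_VALUE                 = "GL_INVALID_VALUE"
showError GL_INVALID_OPERATION             = "GL_INVALID_OPERATION"
showError GL_INVALID_FRAMEBUFFER_OPERATION = "GL_INVALID_FRAMEBUFFER_OPERATION"
showError GL_OUT_OF_MEMORY                 = "GL_OUT_OF_MEMORY"
showError e                                = "Unknown error " ++ show e

printErrors :: String -> IO ()
printErrors prefix = do
  errs <- map showError <$> getErrors
  unless (null errs) $ error (prefix ++ ": " ++ show errs)
```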
Note the prefix
parameter, which just lets you put in a little string describing where in the code the error occurred. Armed with this function, you can scatter error checks all over the place to help narrow down the cause of a problem to specific regions of code.
Setting up the window
The main
function of our application begins by setting up the window using GLFW and binding it to our current GL context. Once that’s done, it can hand off to our initialisation and main loop to do the bulk of the work.
Because I want to keep the distinction between GLFW and OpenGL quite strong, I’ve chosen not to mix them up in this post. This section, which deals with window setup and initialisation, uses GLFW exclusively and makes no direct GL calls at all. Once this section is over, we won’t touch GLFW again and it will be pure GL from then on.
|
|
|
Not strictly necessary, but I begin here by setting stdout
to use LineBuffering. This means any output will be flushed on every newline, which can be invaluable for debugging.
Next, we need to initialise GLFW.
|
|
|
|
If GLFW won’t initialise we might as well give up, otherwise we can continue on into our program.
We need to provide GLFW with some hints to tell it how to set up the window. These will vary depending on the architecture you want to support.
|
|
|
|
|
|
|
|
I’ve rather arbitrarily opted for OpenGL 3.2 here, which is not outrageously out-of-date but is still widely supported. More information about the available window hints can be found in the GLFW documentation.
We’re now ready to make the window. Again, it’s possible this may fail, so we’ll just drop out with an error if that happens.
|
|
|
|
|
|
|
|
|
OK, we have a window! First things first, let’s associate the current GL context with this window so that any GL calls we make from now on will apply to it.
|
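Creating the window and binding the context might look roughly like this (the window size and title are made up):

```haskell
maybeWindow <- GLFW.createWindow 1024 768 "Haskell GL" Nothing Nothing
case maybeWindow of
  Nothing  -> error "Failed to create window"
  Just win -> do
    GLFW.makeContextCurrent (Just win)
    -- everything below happens inside this branch, with win in scope
```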
The next step is to hook into GLFW’s callbacks. In reality, I don’t think the GLFW design of responding to callbacks fits the Haskell mindset very well as you necessarily have to have callbacks modify some sort of global state, but since we’re using GLFW we’re stuck with it. For a serious game project I would probably just do the window handling myself and take a different approach.
So, we start off by setting up handling of the “close” button. We create an IORef
to tell us whether the window has been closed, which we set to True
when the close button is pressed. That way we can check at any time during our game loop whether we need to shut down. We could also close the window on a keypress simply by setting the same IORef
value. It’s quick and dirty, but it works:
|
|
|
We’ll also want to hook into GLFW’s WindowSizeCallback
to avoid our image getting stretched when we resize the window. Again, we’ll make use of an IORef
to store the calculated projection matrix so that we can access it from the render loop. We’ll cover calculateProjectionMatrix
later; for now, just assume it’s a function which takes a tuple of
and returns the projection matrix we need for that aspect ratio.
|
|
|
We’ll look into the details of what resize
does later, but for now we just tell GLFW to call it when the window is resized. Since I don’t want to have any GLFW-specific code in the main portion of this demo, I drop the GLFW.Window
parameter using const
(I actually did the same for the WindowCloseCallback
above, too).
|
|
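Both callbacks can be wired up in a couple of lines. A sketch, assuming the resize and calculateProjectionMatrix functions defined later and an initial window size of 1024 by 768:

```haskell
closed <- newIORef False
GLFW.setWindowCloseCallback win (Just (const (writeIORef closed True)))

projectionMatrix <- newIORef (calculateProjectionMatrix (1024, 768))
GLFW.setWindowSizeCallback win (Just (const (resize projectionMatrix)))
```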
Let’s also make a quick helper function which swaps the draw buffers for the current window, so we don’t have to expose win
to the rest of the program. We also put in a call to GLFW.pollEvents
while we’re at it, so that window events and keypresses (if there were any) are handled properly.
|
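That helper can be as small as a single let binding, for example:

```haskell
let swapBuffers = GLFW.swapBuffers win >> GLFW.pollEvents
```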
That pretty much covers it for GLFW’s setup – we’re now ready to initialise and run our demo. We’ll have our main function just drop out when it’s done, so we can terminate once it’s complete.
|
|
One last thing before I leave GLFW aside entirely – I’ll want to be able to access the time delta within my main loop. GLFW provides us with a convenient way to query this in a platform-agnostic manner.
|
|
|
|
|
This very simple implementation obviously assumes we will only be querying the delta time once per frame.
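One way to do it (an assumed implementation: read the GLFW clock, then reset it so the next read gives a fresh delta):

```haskell
getDeltaTime :: IO GLfloat
getDeltaTime = do
  t <- maybe 0 realToFrac <$> GLFW.getTime  -- seconds since the clock was last reset
  GLFW.setTime 0
  return t
```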
We now have a window set up and all the platform-specific stuff we might want handled. There’s just one more thing we need to get out of the way before we can begin looking at the actual GL side of things and the gl
package in particular.
Constructing our cube mesh
We’re going to construct our mesh in code to avoid having to worry about model formats and so forth. This section has little to do with actual GL code, so if you’re keen to see the gl
library in action you can safely skip it.
A mesh can be thought of as simply a collection of vertex data, and a set of indices into that data. For this demo, the information we need about each vertex is:
- Its position relative to the model
- Its colour
- Its texture co-ordinates (called UV co-ordinates)
- Its normal vector
We can store these in a structure with a Vector
for each piece of data, along with an index Vector
.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
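Such a record might look like the following sketch (field names are assumed):

```haskell
data MeshSpec = MeshSpec
  { specPositions :: V.Vector (V3 GLfloat)
  , specColours   :: V.Vector (V3 GLfloat)
  , specNormals   :: V.Vector (V3 GLfloat)
  , specUVs       :: V.Vector (V2 GLfloat)
  , specIndices   :: V.Vector (GLuint, GLuint, GLuint)
  }
```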
We’re going to be defining a lot of these values all at once. Unfortunately, this starts to look pretty ugly in Haskell because negative numbers have to be wrapped in brackets, so that the vector \((0, -1, 0)\) is expressed V3 0 (-1) 0
. To try and ease the pain here, let’s define an alternate constructor for V3
values which takes a tuple instead of three parameters.
|
|
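For example (the name v3 is assumed):

```haskell
v3 :: (a, a, a) -> V3 a
v3 (x, y, z) = V3 x y z
```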
This allows us to define a function to generate a cuboid of any dimension. The function will take the dimensions of the cuboid and fill a MeshSpec
with the required data.
|
|
|
|
I named my input parameters l'
, h'
, and d'
because although I take the length, height, and depth as input, I generally want to use these values halved, so that I can treat them as an offset from the origin in the centre of the cuboid. These halved values, then, I give the more accessible names of l
, h
, and d
. Here’s how I use them to define a quad for each face of the cube:
|
|
|
|
|
|
|
Each line here is a single face: The right, top, front, left, bottom and back faces respectively. I’m going to colour them so that the \((r, g, b)\) values are mapped to the (normalised) \((x, y, z)\) values. So the left, bottom, back point \((-l, -h, -d)\) is black, the right, bottom, back point \((l, -h, -d)\) is red, the right, top, front point \((l, h, d)\) is white… and so forth. This can be done by saying that for a particular point \((x, y, z)\) its RGB value can be calculated thus:
\[ \left(\frac{x + l}{l'}, \frac{y + h}{h'}, \frac{z + d}{d'}\right) \]
This can be expressed quite succinctly in Haskell.
|
For the normals, we can simply take the normal vector for each axis, and the negations of those vectors. Since each face is composed of four vertices, and we want to share the same normal vector across the face, we replicate each normal four times – one for each vertex.
|
|
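Inside the cuboid-generating function, where l, h, d, the original l', h', d' and the positions vector are all in scope, the colours and normals could be produced roughly like this:

```haskell
colours = fmap (\(V3 x y z) -> V3 ((x + l) / l') ((y + h) / h') ((z + d) / d')) positions

normals = V.concatMap (V.replicate 4) $ V.fromList
  [ V3 1 0 0,    V3 0 1 0,    V3 0 0 1       -- right, top, front
  , V3 (-1) 0 0, V3 0 (-1) 0, V3 0 0 (-1) ]  -- left, bottom, back
```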
The texture co-ordinates for this shape are quite simple – they simply stretch from \((0, 0)\) in the bottom-left corner to \((1, 1)\) in the top-right. We want the same set of co-ordinates across each face, which again we can do using replicate
.
|
|
Finally we set up the indices for our shape. The quads we defined in positions
above follow a regular pattern: \((0, 1, 2, 3)\), \((4, 5, 6, 7)\)… essentially we just make a 4-tuple of incrementing numbers from an offset of \(faceIndex \times 4\).
|
|
|
|
|
OpenGL doesn’t work with quads, though, it uses triangles. The quads
function we just used takes the quads and splits them up into triangles.
|
|
|
|
|
|
|
|
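A quads function in that spirit just splits each four-tuple into two triangles, something like this (the winding order is assumed):

```haskell
quads :: V.Vector (GLuint, GLuint, GLuint, GLuint) -> V.Vector (GLuint, GLuint, GLuint)
quads = V.concatMap (\(a, b, c, d) -> V.fromList [(a, b, c), (c, d, a)])
```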
…and that gives us the MeshSpec
for our cube!
Considering each of these sets of vertex data separately is convenient when constructing the mesh, especially when you’re hard-coding like I did here. You’ll get better performance, though, if you combine them into a single, interleaved array (at least if you’re not deforming or otherwise modifying the mesh). This would just be a flat stream of GLfloat
s, like this:
|
The indices are also represented as a flat list, an unpacked version of the tuple representation we use in MeshSpec
above. The following type gives a representation closer to what we’d like to feed to GL.
|
|
|
|
|
|
Unpacking the indices to fit into the above structure is reasonably simple.
|
|
|
|
|
|
|
|
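As a sketch, the flat representation and the index unpacking could look like this (field and function names assumed):

```haskell
data MeshData = MeshData
  { vertexData :: V.Vector GLfloat
  , indexData  :: V.Vector GLuint
  }

unpackIndices :: V.Vector (GLuint, GLuint, GLuint) -> V.Vector GLuint
unpackIndices = V.concatMap (\(a, b, c) -> V.fromList [a, b, c])
```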
Interleaving the vertex data isn’t too much harder thanks to zipWith4
.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
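An interleave in that style might be sketched as follows; note that the combine helper fixes the position, colour, normal, uv order (3 + 3 + 3 + 2 = 11 floats per vertex):

```haskell
interleave :: V.Vector (V3 GLfloat) -> V.Vector (V3 GLfloat)
           -> V.Vector (V3 GLfloat) -> V.Vector (V2 GLfloat)
           -> V.Vector GLfloat
interleave positions colours normals uvs =
    V.concatMap id (V.zipWith4 combine positions colours normals uvs)
  where
    combine (V3 px py pz) (V3 cr cg cb) (V3 nx ny nz) (V2 u v) =
      V.fromList [px, py, pz, cr, cg, cb, nx, ny, nz, u, v]
```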
Be careful here, as by interleaving the vertex streams into a single array of GLfloats
you are, of course, leaving type safety behind you. When writing this post, I had a bug where my lighting looked all wrong. It turned out I had put the normals and the uv co-ordinates in the wrong order in my combine
function – a mistake the typechecker would have caught straight away, had the streams kept their own types!
Now we can use these functions to convert from a MeshSpec
to a MeshData
.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
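Using the sketched names from above, the conversion is then little more than plugging the pieces together:

```haskell
fromMeshSpec :: MeshSpec -> MeshData
fromMeshSpec spec = MeshData
  { vertexData = interleave (specPositions spec) (specColours spec)
                            (specNormals spec)   (specUVs spec)
  , indexData  = unpackIndices (specIndices spec)
  }
```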
This gets us from an easy-to-define “mesh specification” to the raw data that we’d like to give to GL. Here, we’ve defined the mesh in code, but you could just as well load the data from a file and read it into MeshData
directly if you wanted.
Resource loading
OK, our basic setup is complete; it’s time to get down and dirty with OpenGL! First we need to load and prepare our resources.
Our aim here is to get a textured, spinning cube on the screen using modern OpenGL. To that end, the very least we will need is the texture, a shader program to do the rendering, and of course the mesh itself. Let’s define some datatypes to store these in.
First, the mesh. We use a Vertex Array Object to do the rendering, which contains references to all the data making up the mesh as well as settings describing the layout of the data (which parts of the stream to bind to which attributes in the shader).
The data itself is stored in Vertex Buffer Objects, which I have named VBO
for the vertices and IBO
for the indices. We bind these into the Vertex Array Object, so we don’t actually need them separately for rendering, but we keep hold of them for cleanup later.
More information on Vertex Buffer Objects and Vertex Array Objects can be found on the OpenGL wiki.
|
|
|
|
|
|
|
|
|
|
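A Mesh record along those lines might be (field names assumed; keeping the index count around is a convenience for the later glDrawElements call, not something stated in the post):

```haskell
data Mesh = Mesh
  { meshVAO        :: GLuint
  , meshVBO        :: GLuint
  , meshIBO        :: GLuint
  , meshIndexCount :: GLsizei
  }
```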
For the shader, we want the ID of the shader program, and alongside that we’ll store the locations of all the constants and attributes. These will be different per shader, but since in this case we only have one shader we can just assume they’ll always be the same.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
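The shader record could be sketched as below. The exact set of attributes and uniforms depends on the shaders in the appendix, so treat every field name here as an assumption:

```haskell
data ShaderInfo = ShaderInfo
  { siProgram         :: GLuint
    -- attribute locations
  , siPositionAttr    :: GLuint
  , siColourAttr      :: GLuint
  , siNormalAttr      :: GLuint
  , siUVAttr          :: GLuint
    -- uniform locations
  , siPVMMatrix       :: GLint
  , siViewModelMatrix :: GLint
  , siNormalMatrix    :: GLint
  , siLightDirection  :: GLint
  , siDiffuseTexture  :: GLint
  }
```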
This represents all the data we need to send to our shader. Note that contained within this structure are not the actual values (the positions, colours, matrices, etc.), but the locations at which the values are stored in the shader program. We’ll use these locations to set the values when we draw.
Finally, there’s the texture, which is simple – just the ID GL uses to refer to the texture is enough.
|
Packaging these two structures together, we create a Resources
type representing all the resources this demo requires.
|
|
|
|
|
|
|
|
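For instance (field names assumed):

```haskell
data Resources = Resources
  { resMesh    :: Mesh
  , resShader  :: ShaderInfo
  , resTexture :: GLuint
  }
```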
The job of our initialise function, then, will be to load these resources and initialise them ready for use by GL. Loading and preparing resources for use by GL may fail at any point, so I’m going to wrap the entire process in the MaybeT
monad transformer so it drops out early on failure.
|
|
For a more complex application, EitherT
/ErrorT
might be a better choice so that we can report what failed.
Loading the texture
First let’s set up the texture. Here’s the texture we’re going to use; you can download it if you’re following along.
Loading the data is very simple. As it happens, I know that this image is an ImageRGBA8
, so that’s all I’m going to handle – in reality you may need to handle various pixel formats depending on your input data.
|
|
|
|
|
|
|
|
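With JuicyPixels this can be sketched as a small MaybeT action; only the ImageRGBA8 case is handled, exactly as described above (the function name is assumed):

```haskell
loadTextureImage :: FilePath -> MaybeT IO (Image PixelRGBA8)
loadTextureImage path = do
  result <- liftIO (readImage path)
  case result of
    Right (ImageRGBA8 img) -> return img
    _                      -> MaybeT (return Nothing)
```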
We now have the raw pixel buffer data for the texture. All that remains is to pass it to GL. First we generate the texture name which we’ll use to refer to it (although GL calls these “names”, it is just a GLuint
ID, really).
|
|
|
The idiom of allocating a temporary variable, passing it to GL to be filled, and then returning the filled value doesn’t feel very “Haskelly”, but it is exactly what GL expects. It means that when following along with a GL tutorial intended for C, you can pretty much switch the syntax and the examples will all work.
Writing out these three lines does get old pretty fast though, so let’s define a utility function which simplifies it. The following function assumes that the variable to be filled is the last one passed to the function.
|
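A helper in that spirit, plus its use for generating the texture name (the name withNewPtr is assumed):

```haskell
withNewPtr :: Storable b => (Ptr b -> IO a) -> IO b
withNewPtr f = alloca (\p -> f p >> peek p)

-- usage, inside a do block:  textureID <- withNewPtr (glGenTextures 1)
```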
Now we have the texture name we can use it to bind and set up the texture. We use unsafeWith
to get access to the raw pixel data from the Vector
.
|
|
|
|
|
|
|
|
|
|
|
|
|
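The binding and upload could look roughly like this, assuming img is the ImageRGBA8 image loaded above and textureID came from glGenTextures (the filter settings are assumptions):

```haskell
glBindTexture GL_TEXTURE_2D textureID
glTexParameteri GL_TEXTURE_2D GL_TEXTURE_MIN_FILTER (fromIntegral GL_LINEAR)
glTexParameteri GL_TEXTURE_2D GL_TEXTURE_MAG_FILTER (fromIntegral GL_LINEAR)
SV.unsafeWith (imageData img) $ \ptr ->
  glTexImage2D GL_TEXTURE_2D 0 (fromIntegral GL_RGBA)
               (fromIntegral (imageWidth img)) (fromIntegral (imageHeight img))
               0 GL_RGBA GL_UNSIGNED_BYTE (castPtr ptr)
```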
Why is unsafeWith
unsafe? Because it gives you a pointer to the underlying memory the Vector
is pointing to. This is potentially unsafe because the C function you pass it to could hold onto this pointer and modify it at any time, breaking referential transparency. Stored pointers like this are also not tracked by the garbage collector, so if you hold onto it and try to use it after the original Vector
has gone out of scope the garbage collector may already have cleaned it up.
In this case, we know that glTexImage2D
will upload the data to the GPU without modifying it, meaning that neither of these issues should concern us, so it is safe to use.
Loading the shaders
Next up are the shaders. The shader code itself is included at the end of this post; for now, let’s just assume the two shaders are saved as vertexShader.glsl
and fragmentShader.glsl
respectively.
Loading and compiling the two shaders is basically identical, so let’s create a utility function to help us.
|
|
|
|
|
|
|
First we request GL to create a shader object for us.
|
After that we load the shader file and bind its contents to our new shader object. glShaderSource
is looking for an array of C-style strings, or in other words a pointer to a pointer of GLchar
, expressed in C as const GLchar**
. This is where working at such a low level starts to get a bit fiddly in Haskell – you can certainly do it, but it’s not quite as succinct as it would be in C.
|
|
|
|
|
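Those nested callbacks could be written along these lines, assuming source :: T.Text holds the shader file’s contents and shaderID came from glCreateShader:

```haskell
T.withCStringLen source $ \(str, len) ->
  with str $ \strPtr ->
    with (fromIntegral len) $ \lenPtr ->
      glShaderSource shaderID 1 (castPtr strPtr) lenPtr
```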
Compare the four lines starting T.withCStringLen
with the C equivalent:
|
Admittedly this isn’t an entirely fair comparison – it assumes NULL
-terminated strings which we weren’t using in Haskell, and of course the file loading directly preceding it would have been more arduous in C. Still, the point stands that what is a simple operator in C (&
) requires a call to with
and a lambda function in Haskell.
Fortunately, Haskell offers a number of techniques for abstracting some of this awkwardness. One of these is the ContT
monad, which allows you to take a series of nested callback functions such as the one above and transform it into a monad, which can be expressed very readably using do
-notation.
Here’s how the above code looks using ContT
:
|
|
|
|
|
|
|
|
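Concretely, the ContT version of the same marshalling might read (again assuming source and shaderID):

```haskell
flip runContT return $ do
  (str, len) <- ContT (T.withCStringLen source)
  strPtr     <- ContT (with str)
  lenPtr     <- ContT (with (fromIntegral len))
  glShaderSource shaderID 1 (castPtr strPtr) lenPtr
```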
Given this, you could imagine that with a bit of effort and applicative notation, it might be possible to get something like,
|
which isn’t so far from the C after all!
Anyway, now that the shader’s loaded into memory, our utility function can compile it and check that compilation succeeded.
|
|
|
If compilation failed, we output the info log to tell us what happened. We have to do a little bit of marshalling between C and Haskell datatypes to access the log as a Text
object for printing.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Having done that, we can return the ID of our compiled shader object if compilation was successful, or Nothing
otherwise.
|
|
|
|
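Put together, the compile-and-check tail of a loadAndCompileShader-style function could be sketched like this; it assumes the withNewPtr helper from earlier and ends by returning an IO (Maybe GLuint):

```haskell
glCompileShader shaderID
compileStatus <- withNewPtr (glGetShaderiv shaderID GL_COMPILE_STATUS)
when (compileStatus == GL_FALSE) $ do
  logLength <- withNewPtr (glGetShaderiv shaderID GL_INFO_LOG_LENGTH)
  allocaBytes (fromIntegral logLength) $ \logPtr -> do
    glGetShaderInfoLog shaderID (fromIntegral logLength) nullPtr logPtr
    T.putStrLn =<< T.peekCStringLen (logPtr, fromIntegral logLength)
return (if compileStatus == GL_TRUE then Just shaderID else Nothing)
```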
Our helper function complete, let’s use it to load the vertex and fragment shaders. We wrap the calls to loadAndCompileShader
in MaybeT
so that this function will drop out automatically if either of them fails.
|
|
|
|
|
|
|
|
|
|
|
|
Now we need to generate our shader program and link the two shader objects into it.
|
|
|
|
|
|
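Creating and linking the program is a handful of calls, for example:

```haskell
programID <- glCreateProgram
glAttachShader programID vertexShaderID
glAttachShader programID fragmentShaderID
glLinkProgram programID
linkStatus <- withNewPtr (glGetProgramiv programID GL_LINK_STATUS)
```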
Here again we output the log and drop out of the initialisation with Nothing
if linkStatus
is GL_FALSE
.
|
|
|
|
|
|
|
|
|
|
|
Having linked the shader program, we can throw away the individual shader objects that went into it, which will make cleaning up later easier.
|
|
We now know that we have a valid, correctly linked program identified by programID
. We can query this for the locations of its attributes and constants which we’ll use to set their values later.
To ease the marshalling between Haskell and C I’m going to define a couple of helper functions here. The first, unsign
, takes the C idiom of returning negative numbers on failure and converts it into the Haskell Maybe
type.
|
|
|
|
|
|
|
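In other words, something like:

```haskell
unsign :: GLint -> Maybe GLuint
unsign x
  | x < 0     = Nothing
  | otherwise = Just (fromIntegral x)
```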
The second helper function deals with marshalling strings to C. I’m going to use ContT
to reduce the reliance on callback functions. The forString
function will take in a function expecting a program ID and a C string, along with a Text
object with the actual string we want to use. It will transform this into an action wrapped in ContT
and MaybeT
, representing the fact that it is run as part of a sequence of callbacks, any of which might fail, in which case they should all fail. Since we’re always going to be querying the same shader program we’ll just refer to it directly in forString
so that we don’t have to pass it in every time. Finally we use unsign
to return Nothing
on failure.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
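A forString with the type described below could be sketched as follows. It assumes programID is in scope (as mentioned, the same program is always queried) and uses withCString so that the name is NUL-terminated, which glGetAttribLocation and glGetUniformLocation require; the real definition may well differ:

```haskell
forString :: (GLuint -> Ptr GLchar -> IO GLint) -> T.Text -> MaybeT (ContT r IO) GLuint
forString f name = MaybeT . ContT $ \k ->
  withCString (T.unpack name) $ \cname -> do
    location <- f programID (castPtr cname)
    k (unsign location)
```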
Armed with forString
, what would have been a tedious process of marshalling C strings through a series of callbacks can be expressed quite idiomatically:
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
It is important that the names of the attributes and uniforms above match those that are actually used in the shaders, otherwise they won’t be found and we’ll drop out with Nothing
here.
Confession: I originally tried to define forString
with the type ContT r (MaybeT IO) GLuint
rather than the current MaybeT (ContT r IO) GLuint
, but I couldn’t figure it out. Doing this would mean we could avoid unwrapping the MaybeT
with runMaybeT
and then wrapping it up again with MaybeT
at the end, which would be a bit nicer. It does rather change the meaning of what’s being expressed though, and I think for that reason it might be impossible.
Loading the mesh
Finally, here’s the mesh. We’ll initialise a MeshSpec
describing a \(1\times1\times1\) cube and convert that to MeshData
using the functions described in the previous section. At that point we’ll have some raw data, such as might have been read in from a model file if we were drawing something more complicated than a cube.
|
We need to create two buffer objects: our VBO and our IBO. We create buffer objects using glGenBuffers
; this in turn will give us an ID for each buffer with which we can refer to it.
glGenBuffers
takes a length and an array, and fills that array with that many buffer names. We use the facilities in Foreign.Marshal.Array
to allocate the array and pull out the values at the end.
|
|
|
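For example:

```haskell
[vbo, ibo] <- allocaArray 2 $ \buffers -> do
  glGenBuffers 2 buffers
  peekArray 2 buffers
```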
We’ll start by setting up the vertex buffer. First we need to bind the buffer ID we just got to the GL_ARRAY_BUFFER
target so that GL knows what we intend to do with it. Then we fill it with data. Finally, we bind 0 to GL_ARRAY_BUFFER
to free it up.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
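As a sketch, with meshData :: MeshData from the previous section and vbo from glGenBuffers above:

```haskell
let vertices = SV.convert (vertexData meshData) :: SV.Vector GLfloat

glBindBuffer GL_ARRAY_BUFFER vbo
SV.unsafeWith vertices $ \ptr ->
  glBufferData GL_ARRAY_BUFFER
               (fromIntegral (SV.length vertices * sizeOf (undefined :: GLfloat)))
               (castPtr ptr)
               GL_STATIC_DRAW
glBindBuffer GL_ARRAY_BUFFER 0
```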
Again we use unsafeWith
to get the data into C. We have to convert the vector into a Storable vector using Data.Vector.Storable.convert
before we can do this.
Setting up the index buffer is similar, only this time the target is GL_ELEMENT_ARRAY_BUFFER
.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Now that our buffer objects are set up for vertices and indices, we can wrap them up together in a Vertex Array Object. This collects the data together with properties about how it should be used. First we generate and bind the vertex array object, much as we did the vertex buffer objects earlier.
|
|
|
|
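For example:

```haskell
vao <- withNewPtr (glGenVertexArrays 1)
glBindVertexArray vao
```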
Next we bind the buffer objects we made for the vertex and index data to this new vertex array object.
|
|
We need to enable all four of the attributes our shader uses.
|
|
|
|
And finally, we fill in the attributes, which tells GL the actual layout of the data within the buffer. When talking about the layout, we’re mainly talking about two things: The offset and the stride. The offset tells us how far into the array that chunk of data begins, while the stride tells us the difference from the start of one set of values to the start of the next. Since we have all our data in one interleaved array, the stride will be the same for each kind of data: 11 * sizeof(GLfloat)
.
|
|
|
|
|
|
|
|
|
Now we can set the values for each type using glVertexAttribPointer
.
|
|
|
|
|
|
|
|
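Using the sketched ShaderInfo record for the attribute locations (assuming a value shader :: ShaderInfo is in scope), the four calls might look like this; the offsets follow the position, colour, normal, uv order used when interleaving:

```haskell
let floatSize = sizeOf (undefined :: GLfloat)
    stride    = fromIntegral (11 * floatSize)
    offset n  = nullPtr `plusPtr` (n * floatSize)

glVertexAttribPointer (siPositionAttr shader) 3 GL_FLOAT GL_FALSE stride (offset 0)
glVertexAttribPointer (siColourAttr shader)   3 GL_FLOAT GL_FALSE stride (offset 3)
glVertexAttribPointer (siNormalAttr shader)   3 GL_FLOAT GL_FALSE stride (offset 6)
glVertexAttribPointer (siUVAttr shader)       2 GL_FLOAT GL_FALSE stride (offset 9)
```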
Our buffer objects are now set up and loaded onto the GPU ready to use. The last thing to do is to put them in the Mesh
structure ready to be added to our Resources
.
|
Now that we have everything we need, we call initGL
and then return the Resources
.
|
And we’re done! Our Resources
handle should now contain all the data we need, unless there was a problem, in which case we’ll fail gracefully.
Setting up GL
That call to initGL
at the end of initialise
allows us to give GL its basic settings.
|
|
|
|
|
|
|
|
|
|
|
|
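A plausible initGL; the specific settings are assumptions (depth testing plus alpha blending suits the alpha-blended texture used here), not necessarily the exact calls from the post:

```haskell
initGL :: IO ()
initGL = do
  glEnable GL_DEPTH_TEST
  glDepthFunc GL_LESS
  glEnable GL_BLEND
  glBlendFunc GL_SRC_ALPHA GL_ONE_MINUS_SRC_ALPHA
  glClearColor 0.1 0.1 0.1 1   -- arbitrary clear colour
```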
While we’re here, remember the resize
function we gave to GLFW at the start? Let’s get the definition of that out of the way.
|
|
resize
takes the IORef
we made to store the projection matrix, and the new width and height. It has two jobs: it needs to update the viewport, so that GL rendering can fill the window, and it needs to update the projection matrix, so that the aspect ratio doesn’t get ruined.
Setting the viewport is simple – just pass the origin (0, 0)
, and the full width and height of the window. Of course, if we only wanted to draw into a subset of the window, that’s what we’d pass instead. The projection matrix is calculated using the same function we used in main
: calculateProjectionMatrix
. It is then written to the IORef
so that we can access it from within our main loop.
|
|
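Putting those two jobs together, resize might be sketched as:

```haskell
resize :: IORef (M44 GLfloat) -> Int -> Int -> IO ()
resize projectionMatrix w h = do
  glViewport 0 0 (fromIntegral w) (fromIntegral h)
  writeIORef projectionMatrix (calculateProjectionMatrix (w, h))
```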
Here’s the calculateProjectionMatrix
function itself. We use the perspective
function from linear
to do the work for us. π/3
radians gives us a field of view of 60°.
|
|
|
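For example (the near and far planes of 0.1 and 100 are assumed values):

```haskell
calculateProjectionMatrix :: (Int, Int) -> M44 GLfloat
calculateProjectionMatrix (w, h) =
  perspective (pi / 3) (fromIntegral w / fromIntegral h) 0.1 100
```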
Handling state
This demo is very simple, so there isn’t much state to deal with. Nevertheless, the cube does spin, so we will need to keep track of its angle. As well as that, I’m going to include the camera position within the state structure even though it remains constant throughout the demo, as it’s convenient to hold the data together, and in real life you’re almost certainly going to be moving the camera at some point anyway.
Our state, then, can be represented as follows:
|
|
|
|
|
|
|
|
And the default state is simply:
|
|
|
|
|
|
|
|
|
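A sketch of the state record and its default (the field names and the camera position are assumptions):

```haskell
data DemoState = DemoState
  { cubeRotation   :: GLfloat      -- current angle in radians
  , cameraPosition :: V3 GLfloat
  }

defaultState :: DemoState
defaultState = DemoState
  { cubeRotation   = 0
  , cameraPosition = V3 0 1 (-4)
  }
```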
There are a number of ways to handle varying state in Haskell, and which to use is an interesting choice which can have wide-reaching implications for your application. For this demo, though, I’m keeping it simple, as I want to keep the focus on use of the gl
library. So we’ll just have a simple update function which takes the previous state and a time delta, and returns the new state.
|
|
|
|
|
|
|
|
|
|
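For example, rotating at an arbitrary one radian per second:

```haskell
update :: DemoState -> GLfloat -> DemoState
update s dt = s { cubeRotation = cubeRotation s + dt }
```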
The main loop
Our runDemo
function will comprise the main loop for this demo. It takes the two IORef
s we created at the start, a callback we can use to swap the framebuffers, and the Resources
we just loaded. If the resources failed to load it just drops out straight away.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Otherwise it runs loop
, which runs the frame unless the value pointed to by closed
is True
. When it is True
, loop
drops out and cleanup
gets run.
|
|
|
|
|
|
The frame itself consists of two phases: the update and the draw. We pass the delta time value to the update, but not the draw. Finally we call loop
, which again checks closed
and runs the next frame.
|
|
|
|
|
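A sketch of that structure, using the names introduced in the earlier sketches (the real function may differ in detail):

```haskell
runDemo :: IORef Bool -> IORef (M44 GLfloat) -> IO () -> Maybe Resources -> IO ()
runDemo _ _ _ Nothing = return ()
runDemo closed projectionMatrix swapBuffers (Just res) =
    go defaultState >> cleanup res
  where
    go s = do
      isClosed <- readIORef closed
      unless isClosed $ do
        projection <- readIORef projectionMatrix
        draw res projection s
        swapBuffers
        dt <- getDeltaTime
        go (update s dt)
```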
The order here might look a bit funny. The following is more usual:
- Get delta time
- Run update
- Draw scene
- Swap buffers
- Loop
However, when you look at the function above, you’ll realise they’re equivalent. The reason I’ve done it this way is to avoid introducing a variable s'
for the updated state, which is easy to make mistakes with (using s
instead of s'
), and makes the code just a little less clean.
Actually drawing things
At long last, we’re ready to implement the draw
function, which actually renders the graphics to the screen. This function is surprisingly simple. A lot of the work in graphics programming goes into efficiently moving data between the CPU and the GPU – that and the shader code, of course. The actual rendering part is doing little more than passing constants to the shader to work with.
|
|
Note that we are taking the projection matrix directly here, rather than the IORef
. We let the main loop deal with the fact that this might be modified in a callback – as far as the draw
function is concerned, this is the projection matrix it is dealing with and it will not change – not during this frame, at least.
We have the projection matrix, but there are a number of other matrices we’ll need to calculate. The view matrix offsets everything based on the position of the camera. The model matrix then applies the model transformations (in this case just rotation).
|
|
|
|
|
|
|
|
It is convenient to precalculate the products of these matrices.
|
|
|
|
|
|
Finally, the normal matrix is used to transform the normals into the same space as the rest of the geometry; it is derived from the inverse of the 3×3 part of the transformation, which keeps the normals perpendicular to their surfaces even under non-uniform scaling. Since not all matrices have a valid inverse, I’ve chosen to fall back on the identity matrix in case inv33
returns Nothing
.
|
|
|
|
|
|
|
|
|
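One way the matrix setup could look, assuming draw receives the Resources, the projection matrix and the current DemoState s; the choice of rotation axis and the exact normal-matrix construction are assumptions (the transpose can be applied here with distribute, or when the matrix is uploaded):

```haskell
let viewMatrix      = mkTransformation (axisAngle (V3 0 1 0) 0)
                                       (negate (cameraPosition s))
    modelMatrix     = mkTransformation (axisAngle (V3 0 1 0) (cubeRotation s))
                                       (V3 0 0 0)
    viewModelMatrix = viewMatrix !*! modelMatrix
    pvmMatrix       = projection !*! viewModelMatrix
    -- inv33 returned a Maybe in the linear version this post was written
    -- against; in newer releases it is total
    normalMatrix    = fromMaybe identity (inv33 (viewModelMatrix ^. _m33))
```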
Now that we have all the data we need, we can start sending it to GL. We start by clearing both the colour and the depth buffers.
|
|
|
|
|
|
Next, we bind the shader program, mesh, and texture ready for use. The texture is bound to texture unit 0, which will become important in a minute.
|
|
|
|
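Clearing and binding might look like this, with the sketched record accessors standing in for the real field names and res being the Resources passed to draw:

```haskell
glClear (GL_COLOR_BUFFER_BIT .|. GL_DEPTH_BUFFER_BIT)

glUseProgram (siProgram (resShader res))
glBindVertexArray (meshVAO (resMesh res))
glActiveTexture GL_TEXTURE0
glBindTexture GL_TEXTURE_2D (resTexture res)
```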
We pass in the required uniforms to the shader. First the matrices, where the Storable
instance for M44
helps us a lot.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
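For one of the matrices, the upload could be sketched like this. linear stores matrices as rows while GL expects columns, so either transpose with distribute before uploading (as here) or pass GL_TRUE as the transpose flag; which of the two the original code does is not shown:

```haskell
with (distribute pvmMatrix) $ \ptr ->
  glUniformMatrix4fv (siPVMMatrix (resShader res)) 1 GL_FALSE (castPtr ptr)
```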
And the light and texture data, which is simple. For the texture, the number passed is the index of the texture unit that texture is bound to; we specified GL_TEXTURE0
a minute ago so we put 0
here.
|
|
|
|
|
|
|
|
|
|
|
|
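For example (the light direction is an arbitrary made-up value):

```haskell
glUniform3f (siLightDirection (resShader res)) 0.577 (-0.577) (-0.577)
glUniform1i (siDiffuseTexture (resShader res)) 0   -- texture unit 0, matching GL_TEXTURE0
```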
Finally, we’re ready to draw!
|
|
|
|
|
|
|
|
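The draw call itself then boils down to a single line, for instance:

```haskell
glDrawElements GL_TRIANGLES (meshIndexCount (resMesh res)) GL_UNSIGNED_INT nullPtr
```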
If you’ve got this far you should have a spinning cube on the screen! Pat yourself on the back; you’re ready to go.
Resource cleanup
OK, we’ve had our fun; now we need to clean up after ourselves. Actually, since we’re on our way out of the application we don’t strictly need to, as the OS will no doubt take care of it for us, but I’m going to anyway for the sake of completeness if nothing else.
|
|
|
|
|
|
|
|
|
|
|
|
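Using the sketched record fields, a cleanup function might read:

```haskell
cleanup :: Resources -> IO ()
cleanup res = do
  glDeleteProgram (siProgram (resShader res))
  with (resTexture res) (glDeleteTextures 1)
  with (meshVAO (resMesh res)) (glDeleteVertexArrays 1)
  withArray [meshVBO (resMesh res), meshIBO (resMesh res)] (glDeleteBuffers 2)
```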
The gist of this is pretty simple: for every glCreate*
or glGen*
function there is a glDelete*
equivalent which we have to call.
Final thoughts
Whew, well, that was a pretty long post! I hope that it will come in handy for anyone who, like me, wants to fiddle about with OpenGL in Haskell but doesn’t want to spend hours getting the basic pipeline up and running. Obviously you will want to build your own abstractions on top of this and presumably draw something more interesting than a rubbish cube. But at least with this as a starting point you’ll be able to build it up from a program that works.
If you liked this post, please drop me a tweet @danielpwright! If it’s popular, I might explore some other libraries in a similar way. Similarly, if you found anything lacking, please let me know.
As I mentioned, this was also my first time using the gl
library. Having played with it a bit now, I must say that I like it, despite the annoyance of having to marshal data into C manually. This process is quite easy to abstract into something easier to use, and if it’s me doing the abstraction I can be sure it will be well-suited to my application.
Apart from that, coming from a traditional games background (my day job is as a console games programmer in C++), we tend to be quite obsessive over what our memory is doing. Even having garbage collection feels a bit… free and easy, let alone all the other high-level constructs Haskell offers! Knowing that I’m dealing with raw GL bindings and being able to see exactly how data is marshalled between Haskell and C gives me a reassuring sense that, at least as far as my graphics pipeline is concerned, I am in control of my data. There’s nothing worse than getting a little way into something and then realising that something you hadn’t anticipated about the abstraction you’re working with prevents you from doing the thing you want to do.
There is probably still a place for a package like OpenGL. The level of abstraction there feels much more natural for a Haskell library. But for my part, I’d rather set the level of abstraction myself, to best match the needs of the project I’m working on, so I will be using the gl package for any graphics projects I do from now on.
Appendix: shader code
Here is the code for the two shaders I use in this demo. They are cobbled together from a variety of tutorials on the internet, and aren’t really very useful for any sort of production use, given that they only allow for a single directional light, they assume coloured vertices and alpha-blended textures, and so on. The goal here wasn’t really to explore interesting shader code or graphics techniques, but rather to give an absolute baseline working environment in GL.
So, I assume that once you have this up and running one of the first things you’ll want to do is throw away these shaders and replace them with something more useful, possibly by following one of the many tutorials on the internet for working with OpenGL in C/C++, since the code samples translate quite naturally when using the gl
package.
I include these two shaders, therefore, without comment.
vertexShader.glsl
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
fragmentShader.glsl
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|