Getting Started with OpenGL ES 2.0 On The iPhone 3GS
Apple doesn’t even provide a barebones sample or Xcode template to get you started with OpenGL ES 2.0. If you want to take advantage of the new graphics capabilities, apparently it’s up to you to figure out how to use them. Don’t be fooled into thinking that OpenGL ES 2.0 is a minor upgrade over OpenGL ES 1.1 with just a couple of new functions. It’s a whole new beast! The fixed-function pipeline is gone, and you need to write shaders and be familiar with the basics of computer graphics before you can even get a triangle on screen.
Given the total lack of documentation, I set out to create the most barebones app using OpenGL ES 2.0 on the iPhone: something that could be used as a starting point by other developers in their own apps. I debated whether to create an app that displayed a spinning teapot or some other simple mesh, but I didn’t want to get lost in the details of loading a model or doing different transforms. So in the end, I decided to simply update the OpenGL ES 1.1 app that comes as part of the Xcode template. The completed code is available for download here.
Yes, I know, not terribly exciting. It’s just a spinning quad! But it will be enough to cover the basics of initializing OpenGL, creating shaders, hooking them up to the program, and using them. At the end, we’ll even throw in a little twist: something that you could never do with OpenGL ES 1.1. Ready?
Initializing
OpenGL initialization is almost exactly the same as with OpenGL ES 1.1. The only difference is that you need to tell it the new API version for ES 2.0:
context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
The rest of the details (EAGLView, creating back buffers, and so on) are all the same as before, so I’m not going to cover them here (but I’ll cover those things in detail in my upcoming Two-Day OpenGL Programming Class).
Keep in mind that when you initialize OpenGL ES 2.0, you won’t be able to call functions that are specific to OpenGL ES 1.1. If you try to use them, you’ll get a crash at runtime because they haven’t been set up properly. That means that if you want to take advantage of the 3GS graphics capabilities but also want to run on older models, you need to check at runtime which kind of device you’re on, enable either OpenGL ES 1.1 or 2.0, and maintain two very different code paths.
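For reference, here’s a minimal sketch of that runtime check. initWithAPI: returns nil when the requested API isn’t available, so we can try 2.0 first and fall back to 1.1:

// Try ES 2.0 first; initWithAPI: returns nil on devices that don't support it.
EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
BOOL useES2 = (context != nil);
if (!useES2)
    context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
[EAGLContext setCurrentContext:context];
// Later, branch on useES2 to choose the 1.1 or 2.0 rendering code path.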
Writing Shaders
OpenGL ES 1.1 uses the fixed-function pipeline to render polygons. With OpenGL ES 2.0, in order to render anything on screen, we need to write shaders. Shaders are little programs that run on the dedicated graphics hardware and transform the input data (vertices and states) into an image on screen. They are written in the OpenGL Shading Language (or GLSL for short), which will be very familiar to those of us used to C. You should learn the details of the language to take full advantage of its capabilities, but here is a short overview of the major concepts and how they relate to each other.
There are two types of shaders: vertex shaders, which are executed at each vertex, and fragment shaders, which are executed for every pixel (well, technically, they’re executed at every fragment, which might not correspond to a pixel if you use antialiasing, for example). For now, we can safely think of a fragment shader as executing at each rendered pixel.
A vertex shader computes the position for a vertex in clip space. It can optionally compute other values so they can be used in a fragment shader.
There are two main types of inputs for a vertex shader: uniform and attribute inputs.
- Uniform inputs are values that are set once from the main program and applied to all the vertices processed by the vertex shader during a draw call. For example, the world view transform is a uniform input.
- Attribute inputs can vary for each vertex in the same draw call. For example, position, normal, or color information are attribute inputs.
Additionally, a vertex shader has two kinds of outputs:
- An implicit position output in the variable gl_Position. That’s where the vertex should be in clip space (it later gets transformed into viewport space).
- Varying outputs. Variables defined with the varying qualifier are interpolated between vertices, and the resulting values are passed as inputs to the fragment shader.
The vertex shader for the sample program simply transforms the vertex position from model space into clip space and passes the vertex color to be interpolated for the fragment shader:
uniform mat4 u_mvpMatrix;
attribute vec4 a_position;
attribute vec4 a_color;

varying vec4 v_color;

void main()
{
    gl_Position = u_mvpMatrix * a_position;
    v_color = a_color;
}
Fragment shaders compute the color of a fragment (pixel). Their inputs are the varying variables generated by the vertex shader, along with built-in values such as gl_FragCoord, the fragment’s position in window coordinates. The color computation can be as simple as writing a constant into gl_FragColor, or it can be looking up a texel in a texture based on UV coordinates, or it can be a complex operation taking the lighting environment into account.
Our sample fragment shader couldn’t be any easier. It takes the color handed down from the vertex shader and applies it to that fragment.
varying vec4 v_color;

void main()
{
    gl_FragColor = v_color;
}
Maybe all of this sounds too vague and free-form, but that’s the beauty of shaders: There’s nothing pre-set, and how things are rendered is completely up to you (and the limitations of the hardware).
Compiling Shaders
We have some shaders ready to go. How do we run them? We need to go through a few steps, the first of which is to compile and link them.
At runtime, we’ll need to load up the text of the source code for each vertex and fragment shader and compile it.
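The loading itself is plain file I/O and not OpenGL-specific. A minimal sketch that reads the source from a file in the app bundle (the file name here is just an example):

// Illustrative only: read the shader source text from the app bundle.
NSString *path = [[NSBundle mainBundle] pathForResource:@"Shader" ofType:@"vsh"];
NSString *text = [NSString stringWithContentsOfFile:path
                                           encoding:NSUTF8StringEncoding
                                              error:NULL];
const GLchar *source = (const GLchar *)[text UTF8String];

With the source text in hand, compiling takes just a few OpenGL calls: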
const unsigned int shader = glCreateShader(type);
glShaderSource(shader, 1, (const GLchar**)&source, NULL);
glCompileShader(shader);
Once the shaders are compiled, we need to create a shader program, attach the shaders, and link them together:
m_shaderProgram = glCreateProgram();
glAttachShader(m_shaderProgram, vertexShader);
glAttachShader(m_shaderProgram, fragmentShader);
glLinkProgram(m_shaderProgram);
The linking step is what hooks up the outputs of the vertex shader with the expected inputs of the fragment shader.
You can detect whether there were any errors during compilation or linking and display a message explaining the cause of the error:
char errorMsg[2048];
int success;
glGetShaderiv(shader, GL_COMPILE_STATUS, &success);
if (success == 0)
    glGetShaderInfoLog(shader, sizeof(errorMsg), NULL, errorMsg);
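Linking can fail too (for example, if the shaders disagree on their varying declarations), and the check is analogous; here’s a quick sketch using the program object instead of the shader object:

char errorMsg[2048];
int success;
glGetProgramiv(m_shaderProgram, GL_LINK_STATUS, &success);
if (success == 0)
    glGetProgramInfoLog(m_shaderProgram, sizeof(errorMsg), NULL, errorMsg);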
If the idea of compiling and linking programs at runtime bothers you, you’re not alone. Ideally, this would be a step done offline, just like compiling the source code of your main program in Objective-C. Unfortunately, Apple is trying to keep things open and allow the format to change in the future, so they’re forcing us to compile and link on the fly. This is not just annoying; it’s potentially quite slow once you start accumulating several shaders. Who wants to wait a few more seconds while their app starts? For now, we just need to put up with this annoyance as part of the price to pay to use shaders on the iPhone.
Hooking Things Up
We’re almost ready to start using our shaders, but before we do that, we need to find out how to set the correct inputs. The vertex shader expects an MVP (model-view-projection) matrix to be set, as well as a stream of vertex data with positions and colors.
We do this by querying the shader program by name for the parameters we need. Each query returns a handle that we keep around so we can use it to set the values right before rendering the model:
m_a_positionHandle = glGetAttribLocation(m_shaderProgram, "a_position");
m_a_colorHandle = glGetAttribLocation(m_shaderProgram, "a_color");
m_u_mvpHandle = glGetUniformLocation(m_shaderProgram, "u_mvpMatrix");
Using The Shaders
Finally we’re ready to render some polygons with our shaders. All we have to do is enable the shader program…
glUseProgram(m_shaderProgram);
…and set the correct input values using the handles to the input parameters we queried earlier:
glVertexAttribPointer(m_a_positionHandle, 2, GL_FLOAT, GL_FALSE, 0, squareVertices);
glEnableVertexAttribArray(m_a_positionHandle);
glVertexAttribPointer(m_a_colorHandle, 4, GL_FLOAT, GL_FALSE, 0, squareColors);
glEnableVertexAttribArray(m_a_colorHandle);
glUniformMatrix4fv(m_u_mvpHandle, 1, GL_FALSE, (GLfloat*)&mvp.m[0]);
And now, we just call any of the render functions we’re familiar with from OpenGL ES 1.1 (glDrawArrays or glDrawElements):
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
If all went well, you should see your model rendered correctly on screen. In this case it’s just a quad, but you can use this as a starting point to render your own models with your own transforms, shaders, and effects.
A Little Something Extra
I admit that getting all the way here just to end up with the same spinning quad we had in the OpenGL ES 1.1 template is quite underwhelming. Sure, it’s a necessary stepping stone to get started, but still. So, to show off how easy it is to create different effects with GLSL, here’s a modified fragment shader that spices things up a bit:
float odd = floor(mod(gl_FragCoord.y, 2.0));
gl_FragColor = vec4(v_color.x, v_color.y, v_color.z, odd);
This version of the fragment shader checks whether the pixel is on an even or odd line and renders even lines as fully transparent, giving the quad an interlaced look in screen space. This is an example of an effect that would be quite difficult to achieve with OpenGL ES 1.1, but it’s just a couple of simple lines in 2.0.
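One caveat: for that alpha value to actually make the even lines see-through, blending has to be enabled somewhere in your setup. If it isn’t already, something like this before drawing does the trick:

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);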
Armed with the sample code and an idea of how shaders work, you should be able to start creating your own shaders and come up with some interesting and unique visuals in your games.
The completed code is available for download here.
Nice article. When I eventually can afford the upgrade to a 3GS I look forward to testing it out!
One question: I’d be interested to know how you plan to handle support for older hardware. Do you just ignore it? How practical is it to create a rendering pipeline able to handle both OpenGL ES 2.0 and 1.1? And what impact would this have on your art asset pipeline?
George, you hit the main problem with doing OpenGL ES 2.0 on the iPhone. Both the graphics rendering pipeline and the asset pipeline itself would have to be almost completely different. Since there’s a huge 3G install base, unless you’re doing tech demos, you can’t really ignore OpenGL ES 1.1.
So in my mind you have three options (in order of complexity):
– Stick with the least common denominator (ES 1.1)
– Develop for 1.1 but write a 2.0 pipeline that mimics 1.1 and throw a bit of eye candy at the end (a few bloom or blur shaders)
– Develop two different pipelines and take full advantage of 2.0.
The effort involved in the last option is almost double, so not many people are going to be doing that yet. Or maybe that’s one way that larger teams with more resources can make their games stand out above everybody else’s.
Great article Noel!
Following on George’s question… are there any gotchas related to doing #2 of your options. I was considering this path for my next project (mimic 1.1 in 2.0 and add a layer of sugar). Is it actually possible to support both in a single app build?
Hi Patrick, Yes, you can definitely support both in the same build very easily. You just need to detect if 2.0 is supported and choose the right code path.
Most of the work will come from creating a set of shaders that implement whatever parts of the fixed-function pipeline you’re using in your 1.1 version. It also means you need to create an abstraction layer in your renderer so you’re not making OpenGL calls directly. It’s very similar to the situation we had in PC games for many years, supporting both OpenGL and DirectX renderers.
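Just to make the idea concrete, here’s a hypothetical sketch of what such an abstraction could look like (all names are made up, and a real interface would cover much more):

// The game code calls through this table; each backend fills it in
// with either ES 1.1 fixed-function calls or ES 2.0 shader calls.
typedef struct
{
    void (*drawMesh)(const void *mesh, const float mvp[16]);
} Renderer;

static void DrawMeshES1(const void *mesh, const float mvp[16]) { /* glLoadMatrixf + fixed-function draw */ }
static void DrawMeshES2(const void *mesh, const float mvp[16]) { /* glUseProgram + glUniformMatrix4fv + draw */ }

static const Renderer kES1Renderer = { DrawMeshES1 };
static const Renderer kES2Renderer = { DrawMeshES2 };

// Chosen once at startup based on the runtime check from the article.
const Renderer *SelectRenderer(int useES2)
{
    return useES2 ? &kES2Renderer : &kES1Renderer;
}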
Great post, Noel!
Looking forward to testing it next week at work.
I’m working on 3D pie charts right now and have done them with OpenGL ES 1.1.
The target device for the application is the iPhone 3GS, so I can use OpenGL ES 2.0.
Actually, I didn’t think the difference was that significant…
By the way, is there a way to implement anti-aliasing with OpenGL ES 2.0? The pies don’t look good right now; you can see all the pixels…
I have searched the internet for a solution for nearly a day and didn’t find one…
There was one forum where I read that it’s possible to draw the screen two times bigger than the view and then resize it down, but I didn’t figure out how to implement it.
—
Michael
Thanks, Michael. One way to do antialiasing is through the glGenRenderbuffers functions. You need to create a large render target (maybe twice the original size), draw your pie charts there, and then draw it at the original size with some nice filtering to reduce aliasing artifacts. You can even do that with OpenGL ES 1.1. What 2.0 allows you to do is to create a “better” filter when you’re rendering the pie chart back down to the original size.
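For the ES 2.0 case, here’s a rough sketch of that render-to-a-larger-target idea (sizes and variable names are illustrative, and error checking is omitted):

// Create a texture twice the view size and attach it to an offscreen framebuffer.
GLuint fbo, colorTex;
GLsizei bigWidth = viewWidth * 2, bigHeight = viewHeight * 2;

glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bigWidth, bigHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

// Draw the pie chart here at the larger size...
glViewport(0, 0, bigWidth, bigHeight);

// ...then re-bind the view's framebuffer and draw a fullscreen quad
// sampling colorTex; the GL_LINEAR filtering does the smoothing.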
Another way would be to create a shader that detects the edges of your pie charts and is smart enough to add some alpha depending on how much of that pixel is actually covered by the pie (that would be 2.0 only, obviously).
Hi,
I’m actually trying to port your tutorial code to C++ for another OpenGL ES 2 device (I know your code is Objective-C, but the GL functions are the same), but I’ve run into a bit of trouble. In the glUniformMatrix4fv function under the “Using the Shaders” header in your post, where does the mvp variable come from, and what is its value supposed to be?
I figured it out, never mind.