
Lighting Up

Drawing is deception.

M. C. Escher

The foundations of real-time graphics are rarely based on principles from physics and optics. In a way, the lighting equations we’ll cover in this section are cheap hacks, simple models based on rather shallow empirical observations. We’ll be demonstrating three different lighting models: ambient lighting (subtle, monotone light), diffuse lighting (the dull matte component of reflection), and specular lighting (the shiny spot on a fresh red apple). Figure 4-8 shows how these three lighting models can be combined to produce a high-quality image.


Figure 4-8. Ambient + diffuse + specular = final

Of course, in the real world, there are no such things as “diffuse photons” and “specular photons.” Don’t be disheartened by this pack of lies! Computer graphics is always just a great big hack at some level, and knowing this will make you stronger. Even the fact that colors are ultimately represented by a red-green-blue triplet has more to do with human perception than with optics. The reason we use RGB? It happens to match the three types of color-sensing cells in the human retina! A good graphics programmer can think like a politician and use lies to his advantage.

Ho-Hum Ambiance

Realistic ambient lighting, with the soft, muted shadows that it conjures up, can be very complex to render (you can see an example of ambient occlusion in Baked Lighting), but ambient lighting in the context of OpenGL usually refers to something far more trivial: a solid, uniform color. Calling this “lighting” is questionable since its intensity is not impacted by the position of the light source or the orientation of the surface, but it is often combined with the other lighting models to produce a brighter surface.

Matte Paint with Diffuse Lighting

The most common form of real-time lighting is diffuse lighting, which varies its brightness according to the angle between the surface and the light source. Also known as Lambertian reflection, this form of lighting is predominant because it’s simple to compute, and it adequately conveys depth to the human eye. Figure 4-9 shows how diffuse lighting works. In the diagram, L is the unit length vector pointing to the light source, and N is the surface normal, which is a unit-length vector that’s perpendicular to the surface. We’ll learn how to compute N later in the chapter.


Figure 4-9. Diffuse lighting

The diffuse factor (known as df in Figure 4-9) lies between 0 and 1 and gets multiplied with the light intensity and material color to produce the final diffuse color, as shown in Equation 4-1.

Equation 4-1. Diffuse color

diffuse color = df × light color × material color

df is computed by taking the dot product of the surface normal with the light direction vector and then clamping the result to a non-negative number, as shown in Equation 4-2.

Equation 4-2. Diffuse coefficient

df = max(0, N · L)

The dot product is another operation that you might need a refresher on. When applied to two unit-length vectors (which is what we’re doing for diffuse lighting), you can think of the dot product as a way of measuring the angle between the vectors. If the two vectors are perpendicular to each other, their dot product is zero; if they point away from each other, their dot product is negative. Specifically, the dot product of two unit vectors is the cosine of the angle between them. To see how to compute the dot product, here’s a snippet from our C++ vector library (see the appendix for a complete listing):

template <typename T>
struct Vector3 {
    // ...
    T Dot(const Vector3& v) const
    {
        return x * v.x + y * v.y + z * v.z;
    }
    // ...
    T x, y, z;
};


Don’t confuse the dot product with the cross product! For one thing, cross products produce vectors, while dot products produce scalars.
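
For reference, the cross product looks roughly like this in the same style; this is a standalone sketch rather than the book's exact listing (see the appendix for that), and the three-argument constructor is included only to keep the snippet self-contained:

template <typename T>
struct Vector3 {
    Vector3() {}
    Vector3(T x, T y, T z) : x(x), y(y), z(z) {}
    // Cross product: returns a vector perpendicular to both operands,
    // unlike Dot, which collapses two vectors into a single scalar.
    Vector3 Cross(const Vector3& v) const
    {
        return Vector3(y * v.z - z * v.y,
                       z * v.x - x * v.z,
                       x * v.y - y * v.x);
    }
    T x, y, z;
};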

With OpenGL ES 1.1, the math required for diffuse lighting is done for you behind the scenes; with 2.0, you have to do the math yourself in a shader. You’ll learn both methods later in the chapter.

The L vector in Equation 4-2 can be computed like this:

L = normalize(light position − vertex position)

In practice, you can often pretend that the light is so far away that all vertices are at the origin. The previous equation then simplifies to the following:

L = normalize(light position)

When you apply this optimization, you’re said to be using an infinite light source. Taking each vertex position into account is slower but more accurate; this is a positional light source.
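
To make the distinction concrete, here is a small standalone sketch (not part of the book's rendering engine) that computes df both ways. It assumes the vector library provides operator-, Dot, and a Normalized method that returns a unit-length copy:

// Hypothetical helper for illustration: computes df = max(0, N · L).
float DiffuseFactor(const vec3& N, const vec3& vertexPosition,
                    const vec3& lightPosition, bool infiniteLight)
{
    // Positional light: L depends on where the vertex is.
    // Infinite light: pretend every vertex sits at the origin.
    vec3 L = infiniteLight ? lightPosition.Normalized()
                           : (lightPosition - vertexPosition).Normalized();
    float df = N.Dot(L);
    return df > 0 ? df : 0; // clamp to a non-negative number
}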

Give It a Shine with Specular

I guess you could say the Overlook Hotel here has somethin’ almost like “shining.”

Mr. Hallorann to Danny, The Shining

Diffuse lighting is not affected by the position of the camera; the diffuse brightness of a fixed point stays the same, no matter which direction you observe it from. This is in contrast to specular lighting, which moves the area of brightness according to your eye position, as shown in Figure 4-10. Specular lighting mimics the shiny highlight seen on polished surfaces. Hold a shiny apple in front of you, and shift your head to the left and right; you’ll see that the apple’s shiny spot moves with you. Specular is more costly to compute than diffuse because it uses exponentiation to compute falloff. You choose the exponent according to how you want the material to look; the higher the exponent, the shinier the model.


Figure 4-10. Specular lighting

The H vector in Figure 4-10 is called the half-angle because it divides the angle between the light and the camera in half. Much like diffuse lighting, the goal is to compute an intensity coefficient (in this case, sf) between 0 and 1. Equation 4-3 shows how to compute sf.

Equation 4-3. Specular lighting

H = normalize(L + E)
sf = max(0, N · H)^shininess

In practice, you can often pretend that the viewer is infinitely far from the vertex, in which case the E vector is substituted with (0, 0, 1). This technique is called infinite viewer. When the true per-vertex E vector is used instead, this is called local viewer.
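
Here is the same idea for the specular factor, again as an illustrative standalone sketch rather than anything OpenGL runs for you (ES 1.1 evaluates Equation 4-3 internally; in ES 2.0 you'd write it in a shader). The Normalized, Dot, and operator+ calls are assumed to exist in the vector library:

#include <cmath>

// Hypothetical helper for illustration: computes sf from Equation 4-3
// using the infinite viewer approximation, where E = (0, 0, 1).
float SpecularFactor(const vec3& N, const vec3& L, float shininess)
{
    vec3 E(0, 0, 1);                  // infinite viewer
    vec3 H = (L + E).Normalized();    // half-angle vector
    float sf = N.Dot(H);
    if (sf < 0) sf = 0;               // clamp to a non-negative number
    return std::pow(sf, shininess);   // higher exponent = tighter highlight
}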

Adding Light to ModelViewer

We’ll first add lighting to the OpenGL ES 1.1 backend since it’s much less involved than the 2.0 variant. Example 4-10 shows the new Initialize method (unchanged portions are replaced with ellipses for brevity).

Example 4-10. ES1::RenderingEngine::Initialize

void RenderingEngine::Initialize(const vector<ISurface*>& surfaces)
{
    vector<ISurface*>::const_iterator surface;
    for (surface = surfaces.begin(); surface != surfaces.end(); ++surface) {

        // Create the VBO for the vertices.
        vector<float> vertices;
        (*surface)->GenerateVertices(vertices, VertexFlagsNormals); // (1)
        GLuint vertexBuffer;
        glGenBuffers(1, &vertexBuffer);
        glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
        glBufferData(GL_ARRAY_BUFFER,
                     vertices.size() * sizeof(vertices[0]),
                     &vertices[0],
                     GL_STATIC_DRAW);

        // Create a new VBO for the indices if needed.
        ...
    }

    // Set up various GL state.
    glEnableClientState(GL_VERTEX_ARRAY); // (2)
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnable(GL_LIGHTING); // (3)
    glEnable(GL_LIGHT0);

    // Set up the material properties. (4)
    vec4 specular(0.5f, 0.5f, 0.5f, 1);
    glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, specular.Pointer());
    glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, 50.0f);

    m_translation = mat4::Translate(0, 0, -7);
}

1. Tell the ParametricSurface object that we need normals by passing in the new VertexFlagsNormals flag.

2. Enable two vertex attributes: one for position, the other for the surface normal.

3. Enable lighting, and turn on the first light source (known as GL_LIGHT0). The iPhone supports up to eight light sources, but we’re using only one.

4. The default specular color is black, so here we set it to gray and set the specular exponent to 50. We’ll set diffuse later.

Example 4-10 uses some new OpenGL functions: glMaterialf and glMaterialfv. These are useful only when lighting is turned on, and they are unique to ES 1.1—with 2.0 you’d use glVertexAttrib instead. The declarations for these functions are the following:

void glMaterialf(GLenum face, GLenum pname, GLfloat param);
void glMaterialfv(GLenum face, GLenum pname, const GLfloat *params);

The face parameter is a bit of a carryover from desktop OpenGL, which allows the back and front sides of a surface to have different material properties. For OpenGL ES, this parameter must always be set to GL_FRONT_AND_BACK.

The pname parameter can be one of the following:

GL_SHININESS

This specifies the specular exponent as a float between 0 and 128. This is the only parameter that you set with glMaterialf; all other parameters require glMaterialfv because they have four floats each.

GL_AMBIENT

This specifies the ambient color of the surface and requires four floats (red, green, blue, alpha). The alpha value is ignored, but I always set it to one just to be safe.

GL_SPECULAR

This specifies the specular color of the surface and also requires four floats, although alpha is ignored.

GL_EMISSION

This specifies the emission color of the surface. We haven’t covered emission because it’s so rarely used. It’s similar to ambient except that it’s unaffected by light sources. This can be useful for debugging; if you want to verify that a surface of interest is visible, set its emission color to white. Like ambient and specular, it requires four floats, and alpha is ignored.

GL_DIFFUSE

This specifies the diffuse color of the surface and requires four floats. The final alpha value of the pixel originates from the diffuse color.

GL_AMBIENT_AND_DIFFUSE

Using only one function call, this allows you to specify the same color for both ambient and diffuse.

When lighting is enabled, the final color of the surface is determined at run time, so OpenGL ignores the color attribute that you set with glColor4f or GL_COLOR_ARRAY (see Table 2-2). Since you’d specify the color attribute only when lighting is turned off, it’s often referred to as nonlit color.


As an alternative to calling glMaterialfv, you can embed diffuse and ambient colors into the vertex buffer itself, through a mechanism called color material. When enabled, this redirects the nonlit color attribute into the GL_AMBIENT and GL_DIFFUSE material parameters. You can enable it by calling glEnable(GL_COLOR_MATERIAL).
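
A minimal sketch of how that might look, assuming the rest of the ES 1.1 setup from Example 4-10 is already in place (the color value here is arbitrary):

// With color material enabled, the nonlit color is redirected into the
// GL_AMBIENT and GL_DIFFUSE material parameters, so no glMaterialfv
// call is needed for those colors.
glEnable(GL_COLOR_MATERIAL);
glColor4f(1, 0, 0, 1); // arbitrary red; feeds both ambient and diffuse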

Next we’ll flesh out the Render method so that it uses normals, as shown in Example 4-11. Note that we moved up the call to glMatrixMode; this is explained further in the callouts that follow the listing.

Example 4-11. ES1::RenderingEngine::Render

void RenderingEngine::Render(const vector<Visual>& visuals) const
{
    glClearColor(0.5f, 0.5f, 0.5f, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    vector<Visual>::const_iterator visual = visuals.begin();
    for (int visualIndex = 0; 
         visual != visuals.end(); 
         ++visual, ++visualIndex) {

        // Set the viewport transform.
        ivec2 size = visual->ViewportSize;
        ivec2 lowerLeft = visual->LowerLeft;
        glViewport(lowerLeft.x, lowerLeft.y, size.x, size.y);

        // Set the light position. (1)
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        vec4 lightPosition(0.25, 0.25, 1, 0);
        glLightfv(GL_LIGHT0, GL_POSITION, lightPosition.Pointer());

        // Set the model-view transform.
        mat4 rotation = visual->Orientation.ToMatrix();
        mat4 modelview = rotation * m_translation;
        glLoadMatrixf(modelview.Pointer());

        // Set the projection transform.
        float h = 4.0f * size.y / size.x;
        mat4 projection = mat4::Frustum(-2, 2, -h / 2, h / 2, 5, 10);
        glMatrixMode(GL_PROJECTION);
        glLoadMatrixf(projection.Pointer());

        // Set the diffuse color. (2)
        vec3 color = visual->Color * 0.75f;
        vec4 diffuse(color.x, color.y, color.z, 1);
        glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, diffuse.Pointer());

        // Draw the surface.
        int stride = 2 * sizeof(vec3);
        const Drawable& drawable = m_drawables[visualIndex];
        glBindBuffer(GL_ARRAY_BUFFER, drawable.VertexBuffer);
        glVertexPointer(3, GL_FLOAT, stride, 0);
        const GLvoid* normalOffset = (const GLvoid*) sizeof(vec3); // (3)
        glNormalPointer(GL_FLOAT, stride, normalOffset);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, drawable.IndexBuffer);
        glDrawElements(GL_TRIANGLES, drawable.IndexCount, GL_UNSIGNED_SHORT, 0);
    }
}

1. Set the position of GL_LIGHT0. Be careful about when you make this call, because OpenGL applies the current model-view matrix to the position, much like it does to vertices. Since we don’t want the light to rotate along with the model, here we’ve reset the model-view before setting the light position.

2. The nonlit version of the app used glColor4f in the section that started with the comment // Set the color. We’re replacing that with glMaterialfv. More on this later.

3. Point OpenGL to the right place in the VBO for obtaining normals. Position comes first, and it’s a vec3, so the normal offset is sizeof(vec3).

That’s it! Figure 4-11 depicts the app now that lighting has been added. Since we haven’t implemented the ES 2.0 renderer yet, you’ll need to enable the ForceES1 constant at the top of GLView.mm.


Figure 4-11. ModelViewer with lighting

Using Light Properties

Example 4-11 introduced a new OpenGL function for modifying light parameters, glLightfv:

void glLightfv(GLenum light, GLenum pname, const GLfloat *params);

The light parameter identifies the light source. Although we’re using only one light source in ModelViewer, up to eight are allowed (GL_LIGHT0 through GL_LIGHT7).

The pname argument specifies the light property to modify. OpenGL ES 1.1 supports 10 light properties, all set through glLightf or glLightfv (a brief usage sketch follows the list):

GL_AMBIENT, GL_DIFFUSE, GL_SPECULAR

As you’d expect, each of these takes four floats to specify a color. Note that light colors alone do not determine the hue of the surface; they get multiplied with the surface colors specified by glMaterialfv.

GL_POSITION

The position of the light is specified with four floats. If you don’t set the light’s position, it defaults to (0, 0, 1, 0). The W component should be 0 or 1, where 0 indicates an infinitely distant light. Such light sources are just as bright as normal light sources, but their “rays” are parallel. This is computationally cheaper because OpenGL does not bother recomputing the L vector (see Figure 4-9) at every vertex.

GL_SPOT_DIRECTION, GL_SPOT_EXPONENT, GL_SPOT_CUTOFF

You can restrict a light’s area of influence to a cone using these parameters. Don’t set these parameters if you don’t need them; doing so can degrade performance. I won’t go into detail about spotlights since they are somewhat esoteric to ES 1.1, and you can easily write a shader in ES 2.0 to achieve a similar effect. Consult an OpenGL reference to see how to use spotlights (see the section Further Reading).

GL_CONSTANT_ATTENUATION, GL_LINEAR_ATTENUATION, GL_QUADRATIC_ATTENUATION

These parameters allow you to dim the light intensity according to its distance from the object. Much like spotlights, attenuation is surely covered in your favorite OpenGL reference book. Again, be aware that setting these parameters could impact your frame rate.
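
The following sketch shows the general pattern for setting these properties; the values are illustrative only (ModelViewer itself sets just GL_POSITION):

// Illustrative values, not from ModelViewer.
vec4 lightColor(1, 1, 1, 1);
vec4 lightPosition(0.25, 0.25, 1, 1); // W = 1 makes this a positional light
glLightfv(GL_LIGHT0, GL_DIFFUSE, lightColor.Pointer());
glLightfv(GL_LIGHT0, GL_SPECULAR, lightColor.Pointer());
glLightfv(GL_LIGHT0, GL_POSITION, lightPosition.Pointer());

// Optional (and potentially costly) extras: spotlight cone and attenuation.
glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 45.0f);
glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION, 0.25f);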

You may have noticed that the inside of the cone appears especially dark. This is because the normal vector is facing away from the light source. On third-generation iPhones and iPod touches, you can enable a feature called two-sided lighting, which inverts the normals on back-facing triangles, allowing them to be lit. It’s enabled like this:

glLightModelf(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);
Use this function with caution, because it is not supported on older iPhones. One way to avoid two-sided lighting is to redraw the geometry at a slight offset using flipped normals. This effectively makes your one-sided surface into a two-sided surface. For example, in the case of our cone shape, we could draw another equally sized cone that’s just barely “inside” the original cone.


Just like every other lighting function, glLightModelf doesn’t exist under ES 2.0. With ES 2.0, you can achieve two-sided lighting by using a special shader variable called gl_FrontFacing.
