
Surface Normals

Before we can enable lighting, there’s yet another prerequisite we need to get out of the way. To perform the math for lighting, OpenGL must be provided with a surface normal at every vertex. A surface normal (often called simply a normal) is a vector perpendicular to the surface; it effectively defines the orientation of a small piece of the surface.

Feeding OpenGL with Normals

You might recall that normals are one of the predefined vertex attributes in OpenGL ES 1.1. They can be enabled like this:

// OpenGL ES 1.1
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, stride, offset);
glEnable(GL_NORMALIZE);

// OpenGL ES 2.0
glEnableVertexAttribArray(myNormalSlot);
glVertexAttribPointer(myNormalSlot, 3, GL_FLOAT, normalize, stride, offset);

I snuck in something new in the previous snippet: the GL_NORMALIZE state in ES 1.1 and the normalize argument in ES 2.0. Both are used to control whether OpenGL processes your normal vectors to make them unit length. If you already know that your normals are unit length, do not turn this feature on; it incurs a performance hit.


Don’t confuse normalize, which refers to making any vector into a unit vector, with normal vector, which refers to any vector that is perpendicular to a surface. It is not redundant to say “normalized normal.”

Even though OpenGL ES 1.1 can perform much of the lighting math on your behalf, it does not compute surface normals for you. At first this may seem rather ungracious on OpenGL’s part, but as you’ll see later, stipulating the normals yourself gives you the power to render interesting effects. While the mathematical notion of a normal is well-defined, the OpenGL notion of a normal is simply another input with discretionary values, much like color and position. Mathematicians live in an ideal world of smooth surfaces, but graphics programmers live in a world of triangles. If you were to make the normals in every triangle point in the exact direction that the triangle is facing, your model would look faceted and artificial; every triangle would have a uniform color. By supplying normals yourself, you can make your model seem smooth, faceted, or even bumpy, as we’ll see later.

The Math Behind Normals

We scoff at mathematicians for living in an artificially ideal world, but we can’t dismiss the math behind normals; we need it to come up with sensible values in the first place. Central to the mathematical notion of a normal is the concept of a tangent plane, depicted in Figure 4-5.

The diagram in Figure 4-5 is, in itself, perhaps the best definition of the tangent plane that I can give you without going into calculus. It’s the plane that “just touches” your surface at a given point P. Think like a mathematician: a plane is minimally defined by three points. So, imagine three points at random positions on your surface, and then create a plane that contains them all. Slowly move the three points toward each other; just before the three points converge, the plane they define is the tangent plane.

The tangent plane can also be defined with tangent and binormal vectors (u and v in Figure 4-5), which are easiest to define within the context of a parametric surface. Each of these corresponds to a dimension of the domain; we’ll make use of this when we add normals to our ParametricSurface class.

Finding two vectors in the tangent plane is usually fairly easy. For example, you can take any two sides of a triangle; the two vectors need not be at right angles to each other. Simply take their cross product and unitize the result. For parametric surfaces, the procedure can be summarized with the following pseudocode:

p = Evaluate(s, t)
u = Evaluate(s + ds, t) - p
v = Evaluate(s, t + dt) - p
N = Normalize(u × v)

Figure 4-5. Normal vector with tangent plane

Don’t be frightened by the cross product; I’ll give you a brief refresher. The cross product always generates a vector perpendicular to its two input vectors. You can visualize the cross product of A with B using your right hand. Point your index finger in the direction of A, and then point your middle finger toward B; your thumb now points in the direction of A×B (pronounced “A cross B,” not “A times B”). See Figure 4-6.


Figure 4-6. Righthand rule

Here’s the relevant snippet from our C++ library (see the appendix for a full listing):

template <typename T>
struct Vector3 {
    // ...
    Vector3 Cross(const Vector3& v) const
    {
        return Vector3(y * v.z - z * v.y,
                       z * v.x - x * v.z,
                       x * v.y - y * v.x);
    }
    // ...
    T x, y, z;
};
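Putting Cross to work, here’s how the triangle case described earlier might look in code. FacetNormal is a hypothetical helper, not part of the book’s library; it assumes the vec3 typedef and the Normalized method from the appendix:

// Compute a facet normal from the vertices a, b, and c of a triangle.
// The two edges need not be at right angles to each other; their cross
// product is perpendicular to both regardless.
vec3 FacetNormal(const vec3& a, const vec3& b, const vec3& c)
{
    vec3 u = b - a;
    vec3 v = c - a;
    return u.Cross(v).Normalized();
}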

Normal Transforms Aren’t Normal

Let’s not lose sight of why we’re generating normals in the first place: they’re required for the lighting algorithms that we cover later in this chapter. Recall from Chapter 2 that vertex position can live in different spaces: object space, world space, and so on. Normal vectors can live in these different spaces too; it turns out that lighting in the vertex shader is often performed in eye space. (There are certain conditions in which it can be done in object space, but that’s a discussion for another day.)

So, we need to transform our normals to eye space. Since vertex positions get transformed by the model-view matrix to bring them into eye space, it follows that normal vectors get transformed the same way, right? Wrong! Actually, wrong sometimes. This is one of the trickier concepts in graphics to understand, so bear with me.

Look at the heart shape in Figure 4-7, and consider the surface normal at a point in the upper-left quadrant (depicted with an arrow). The figure on the far left is the original shape, and the middle figure shows what happens after we translate, rotate, and uniformly shrink the heart. The transformation for the normal vector is almost the same as the model’s transformation; the only difference is that it’s a vector and therefore doesn’t require translation. Removing translation from a 4×4 transformation matrix is easy. Simply extract the upper-left 3×3 matrix, and you’re done.
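In code, the extraction is just a copy of nine elements. Here’s a sketch using hypothetical row-major Matrix3 and Matrix4 structs; the C++ library in the appendix provides its own equivalents:

struct Matrix3 { float m[3][3]; };
struct Matrix4 { float m[4][4]; };

// Strip the translation by copying the upper-left 3x3 of a 4x4 matrix.
Matrix3 UpperLeft3x3(const Matrix4& mv)
{
    Matrix3 result;
    for (int row = 0; row < 3; ++row)
        for (int col = 0; col < 3; ++col)
            result.m[row][col] = mv.m[row][col];
    return result;
}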


Figure 4-7. Normal transforms

Now take a look at the figure on the far right, which shows what happens when stretching the model along only its x-axis. In this case, if we were to apply the upper 3×3 of the model-view matrix to the normal vector, we’d get an incorrect result; the normal would no longer be perpendicular to the surface. This shows that simply extracting the upper-left 3×3 matrix from the model-view matrix doesn’t always suffice. I won’t bore you with the math, but it can be shown that the correct transform for normal vectors is actually the inverse-transpose of the model-view matrix, which is the result of two operations: first an inverse, then a transpose.

The inverse matrix of M is denoted M⁻¹; it’s the matrix that results in the identity matrix when multiplied with the original matrix. Inverse matrices are somewhat nontrivial to compute, so again I’ll refrain from boring you with the math. The transpose matrix, on the other hand, is easy to derive; simply swap the rows and columns of the matrix such that M[i][j] becomes M[j][i].

Transposes are denoted Mᵀ, so the proper transform for normal vectors looks like this:

N′ = (M⁻¹)ᵀ N
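For example, suppose the model-view matrix scales only the x-axis by 2, so that M is a diagonal matrix with entries (2, 1, 1). A 45° slope has tangent (1, -1, 0) and normal (1, 1, 0). The transformed tangent is (2, -1, 0). Transforming the normal by M itself yields (2, 1, 0), which is no longer perpendicular to the tangent; the dot product is 3. The inverse-transpose, a diagonal matrix with entries (1/2, 1, 1), maps the normal to (1/2, 1, 0) instead, and its dot product with (2, -1, 0) is zero, just as it should be.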

Don’t forget the middle shape in Figure 4-7; it shows that, at least in some cases, the upper-left 3×3 of the original model-view matrix can be used to transform the normal vector. In those cases, the matrix happens to be equal to its own inverse-transpose; such matrices are called orthogonal, and rigid body transformations like rotation always produce them. A uniform scale isn’t strictly orthogonal, but it changes only the length of the normal, not its direction, so renormalizing the result hides the difference.

Why did I bore you with all this mumbo jumbo about inverses and normal transforms? Two reasons. First, in ES 1.1, keeping nonuniform scale out of your matrix helps performance because OpenGL can avoid computing the inverse-transpose of the model-view. Second, for ES 2.0, you need to understand nitty-gritty details like this anyway to write sensible lighting shaders!
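To make the ES 2.0 point concrete, here’s a sketch of computing the normal matrix on the CPU and handing it to a shader. It assumes a mat4 type with ToMat3, Inverse, Transpose, and Pointer methods, in the spirit of the C++ library in the appendix; the uniform name NormalMatrix is just a placeholder:

void UploadNormalMatrix(GLuint program, const mat4& modelview)
{
    // The inverse-transpose handles nonuniform scale; for a purely
    // orthogonal model-view, modelview.ToMat3() alone would suffice.
    mat3 normalMatrix = modelview.ToMat3().Inverse().Transpose();
    GLint slot = glGetUniformLocation(program, "NormalMatrix");
    glUniformMatrix3fv(slot, 1, GL_FALSE, normalMatrix.Pointer());
}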

Generating Normals from Parametric Surfaces

Enough academic babble; let’s get back to coding. Since our goal here is to add lighting to ModelViewer, we need to implement the generation of normal vectors. Let’s tweak ISurface in Interfaces.hpp by adding a flags parameter to GenerateVertices, as shown in Example 4-7. New or modified lines are shown in bold.

Example 4-7. Modifying ISurface with support for normals

enum VertexFlags {
    VertexFlagsNormals = 1 << 0,
    VertexFlagsTexCoords = 1 << 1,
};

struct ISurface {
    virtual int GetVertexCount() const = 0;
    virtual int GetLineIndexCount() const = 0;
    virtual int GetTriangleIndexCount() const = 0;
    virtual void GenerateVertices(vector<float>& vertices,
                                  unsigned char flags = 0) const = 0;
    virtual void GenerateLineIndices(vector<unsigned short>& indices) const = 0;
    virtual void 
      GenerateTriangleIndices(vector<unsigned short>& indices) const = 0;
    virtual ~ISurface() {}
};

The argument we added to GenerateVertices could have been a boolean instead of a bit mask, but we’ll eventually want to feed additional vertex attributes to OpenGL, such as texture coordinates. For now, just ignore the VertexFlagsTexCoords flag; it’ll come in handy in the next chapter.
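Callers combine the flags with a bitwise OR. For example, to request positions, normals, and texture coordinates all at once (surface here is a hypothetical ISurface pointer):

vector<float> vertices;
surface->GenerateVertices(vertices, VertexFlagsNormals | VertexFlagsTexCoords);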

Next we need to open ParametricSurface.hpp and make the complementary change to the class declaration of ParametricSurface, as shown in Example 4-8. We’ll also add a new protected method called InvertNormal, which derived classes can optionally override.

Example 4-8. ParametricSurface class declaration

class ParametricSurface : public ISurface {
public:
    int GetVertexCount() const;
    int GetLineIndexCount() const;
    int GetTriangleIndexCount() const;
    void GenerateVertices(vector<float>& vertices, unsigned char flags) const;
    void GenerateLineIndices(vector<unsigned short>& indices) const;
    void GenerateTriangleIndices(vector<unsigned short>& indices) const;
protected:
    void SetInterval(const ParametricInterval& interval);
    virtual vec3 Evaluate(const vec2& domain) const = 0;
    virtual bool InvertNormal(const vec2& domain) const { return false; }
private:
    vec2 ComputeDomain(float i, float j) const;
    vec2 m_upperBound;
    ivec2 m_slices;
    ivec2 m_divisions;
};
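As a reminder of how the pure virtual Evaluate method is meant to be used, here’s an illustrative subclass. It’s only a sketch of a standard spherical parameterization; the book’s actual Sphere class from the previous chapter differs in its details:

class Sphere : public ParametricSurface {
public:
    Sphere(float radius) : m_radius(radius) {}
protected:
    vec3 Evaluate(const vec2& domain) const
    {
        // Map the 2D domain (u, v) onto the surface of a sphere.
        float u = domain.x, v = domain.y;
        float x = m_radius * sin(u) * cos(v);
        float y = m_radius * cos(u);
        float z = m_radius * -sin(u) * sin(v);
        return vec3(x, y, z);
    }
private:
    float m_radius;
};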

Next let’s open ParametricSurface.cpp and replace the implementation of GenerateVertices, as shown in Example 4-9.

Example 4-9. Adding normals to ParametricSurface::GenerateVertices

void ParametricSurface::GenerateVertices(vector<float>& vertices,
                                         unsigned char flags) const
{
    int floatsPerVertex = 3;
    if (flags & VertexFlagsNormals)
        floatsPerVertex += 3;

    vertices.resize(GetVertexCount() * floatsPerVertex);
    float* attribute = (float*) &vertices[0];

    for (int j = 0; j < m_divisions.y; j++) {
        for (int i = 0; i < m_divisions.x; i++) {

            // Compute Position (1)
            vec2 domain = ComputeDomain(i, j);
            vec3 range = Evaluate(domain);
            attribute = range.Write(attribute); // (2)

            // Compute Normal
            if (flags & VertexFlagsNormals) {
                float s = i, t = j;

                // Nudge the point if the normal is indeterminate. (3)
                if (i == 0) s += 0.01f;
                if (i == m_divisions.x - 1) s -= 0.01f;
                if (j == 0) t += 0.01f;
                if (j == m_divisions.y - 1) t -= 0.01f;

                // Compute the tangents and their cross product. (4)
                vec3 p = Evaluate(ComputeDomain(s, t));
                vec3 u = Evaluate(ComputeDomain(s + 0.01f, t)) - p;
                vec3 v = Evaluate(ComputeDomain(s, t + 0.01f)) - p;
                vec3 normal = u.Cross(v).Normalized();
                if (InvertNormal(domain)) // (5)
                    normal = -normal;
                attribute = normal.Write(attribute); // (6)
            }
        }
    }
}

1. Compute the position of the vertex by calling Evaluate, which has a unique implementation for each subclass.

2. Copy the vec3 position into the flat floating-point buffer. The Write method returns an updated pointer.

3. Surfaces might be nonsmooth in some places where the normal is impossible to determine (for example, at the apex of the cone). So, we have a bit of a hack here, which is to nudge the point of interest in the problem areas.

4. As covered in The Math Behind Normals, compute the two tangent vectors, and take their cross product.

5. Subclasses are allowed to invert the normal if they want. (If the normal points away from the light source, then it’s considered to be the back of the surface and therefore looks dark.) The only shape that overrides this method is the Klein bottle; see the sketch after this list.

6. Copy the normal vector into the data buffer using its Write method.
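For the curious, such an override might look like the following sketch. The domain test is purely illustrative; the actual Klein bottle class uses its own condition:

class KleinBottle : public ParametricSurface {
    // ...
protected:
    bool InvertNormal(const vec2& domain) const
    {
        // Flip normals over part of the domain where the surface
        // folds back through itself (illustrative test only).
        return domain.y > 3.14159f;
    }
};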

This completes the changes to ParametricSurface. You should be able to build ModelViewer at this point, but it will look the same since we have yet to put the normal vectors to good use. That comes next.
