
iPhone 3D Programming by Philip Rideout


Chapter 4. Adding Depth and Realism

Lumos!

Harry Potter and the Chamber of Secrets, J.K. Rowling

When my wife and I go out to see a film packed with special effects, I always insist on sitting through the entire end credits, much to her annoyance. It never ceases to amaze me how many artists work together to produce a Hollywood blockbuster. I’m often impressed with the number of artists whose full-time job concerns lighting. In Pixar’s Up, at least five people have the title “lighting technical director,” four people have the title “key lighting artist,” and another four people have the honor of “master lighting artist.”

Lighting is obviously a key aspect of achieving realism in computer graphics, and that’s much of what this chapter is all about. We’ll refurbish the wireframe viewer sample to use lighting and triangles, rechristening it ModelViewer. We’ll also throw some light on the subject of shaders, which we’ve been glossing over until now (in ES 2.0, shaders are critical to lighting). Finally, we’ll further enhance the viewer app by giving it the ability to load model files so that we’re not stuck with parametric surfaces forever. Mathematical shapes are great for geeking out, but they’re pretty lame for impressing your 10-year-old!

Examining the Depth Buffer

Before diving into lighting, let’s take a closer look at depth buffers, since we’ll need to add one to wireframe viewer. You might recall the funky framebuffer object (FBO) setup code in the HelloCone sample presented in Example 2-7, repeated here in Example 4-1.

Example 4-1. Depth buffer setup

// Create the depth buffer.
glGenRenderbuffersOES(1, &m_depthRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_depthRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES,
                         GL_DEPTH_COMPONENT16_OES,
                         width,
                         height);
    
// Create the framebuffer object; attach the depth and color buffers.
glGenFramebuffersOES(1, &m_framebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, m_framebuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES,
                             GL_COLOR_ATTACHMENT0_OES,
                             GL_RENDERBUFFER_OES,
                             m_colorRenderbuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES,
                             GL_DEPTH_ATTACHMENT_OES,
                             GL_RENDERBUFFER_OES,
                             m_depthRenderbuffer);
    
// Bind the color buffer for rendering.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_colorRenderbuffer);

glViewport(0, 0, width, height);
glEnable(GL_DEPTH_TEST);

...

1. The glGenRenderbuffersOES call creates a handle to the renderbuffer object that stores depth.

2. The glBindRenderbufferOES call binds the newly created handle, making it affected by subsequent renderbuffer commands.

3. The glRenderbufferStorageOES call allocates storage for the depth buffer using 16-bit precision.

4. The second glFramebufferRenderbufferOES call attaches the depth buffer to the framebuffer object.

5. The glEnable call turns on depth testing—we’ll explain this shortly.
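Framebuffer objects are part of the core API in ES 2.0, so the same setup can be written without the OES suffixes. As a sketch (assuming the same member names as Example 4-1), the ES 2.0 version looks like this:

```cpp
// ES 2.0 variant of the depth buffer setup: same sequence of calls,
// but using the core (non-OES) entry points and enum names.
glGenRenderbuffers(1, &m_depthRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, m_depthRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);

glGenFramebuffers(1, &m_framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, m_framebuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, m_colorRenderbuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, m_depthRenderbuffer);

glBindRenderbuffer(GL_RENDERBUFFER, m_colorRenderbuffer);
glViewport(0, 0, width, height);
glEnable(GL_DEPTH_TEST);
```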

Why does HelloCone need a depth buffer when wireframe viewer does not? When the scene is composed of nothing but monochrome lines, we don’t care about the visibility problem; this means we don’t care which lines are obscured by other lines. HelloCone uses triangles rather than lines, so the visibility problem needs to be addressed. OpenGL uses the depth buffer to handle this problem efficiently.

Figure 4-1 depicts ModelViewer’s depth buffer in grayscale: white pixels are far away, black pixels are nearby. Even though users can’t see the depth buffer, OpenGL needs it for its rendering algorithm. If it didn’t have a depth buffer, you’d be forced to carefully order your draw calls from farthest to nearest. (Incidentally, such an ordering is called the painter’s algorithm, and there are special cases where you’ll need to use it anyway, as you’ll see in Blending Caveats.)

Depth buffer in ModelViewer

Figure 4-1. Depth buffer in ModelViewer

OpenGL uses a technique called depth testing to solve the visibility problem. Suppose you were to render a red triangle directly in front of the camera and then draw a green triangle directly behind the red triangle. Even though the green triangle is drawn last, you’d want the red triangle to be visible; the green triangle is said to be occluded. Here’s how it works: every rasterized pixel not only has its RGB values written to the color buffer but also has its Z value written to the depth buffer. OpenGL “rejects” occluded pixels by checking whether their Z value is greater than the Z value that’s already in the depth buffer. In pseudocode, the algorithm looks like this:

void WritePixel(x, y, z, color)
{
    if (DepthTestDisabled || z < DepthBuffer[x, y]) {
        DepthBuffer[x, y] = z;
        ColorBuffer[x, y] = color;
    }
}

Beware the Scourge of Depth Artifacts

Something to watch out for with depth buffers is Z-fighting, which is a visual artifact that occurs when overlapping triangles have depths that are too close to each other (see Figure 4-2).

Z-fighting in the Möbius strip

Figure 4-2. Z-fighting in the Möbius strip

Recall that the projection matrix defines a viewing frustum bounded by six planes (Setting the Projection Transform). The two planes that are perpendicular to the viewing direction are called the near plane and far plane. In ES 1.1, these planes are arguments to the glOrthof or glFrustumf functions; in ES 2.0, they’re passed to a custom function like the mat4::Frustum method in the C++ vector library from the appendix.

It turns out that if the near plane is too close to the camera or if the far plane is too distant, this can cause precision issues that result in Z-fighting. However, this is only one possible cause of Z-fighting; there are many more. Take a look at the following list of suggestions if you ever see artifacts like the ones in Figure 4-2.

Push out your near plane.

For perspective projections, having the near plane close to zero can be detrimental to precision.

Pull in your far plane.

Similarly, the far plane should be pulled in as far as possible without clipping away portions of your scene.

Scale your scene smaller.

Try to avoid defining an astronomical-scale scene with huge extents.

Increase the bit width of your depth buffer.

All iPhones and iPod touches (at the time of this writing) support 16-bit and 24-bit depth formats. The bit width is determined by the argument you pass to glRenderbufferStorageOES when allocating the depth buffer.

Are you accidentally rendering coplanar triangles?

The fault might not lie with OpenGL but with your application code. Perhaps your generated vertices are lying on the same Z plane because of a rounding error.

Do you really need depth testing in the first place?

In some cases you can simply disable depth testing. For example, you don’t need it if you’re rendering a 2D heads-up display. Disabling the depth test can also boost performance.
