
Anti-Aliasing Tricks with Offscreen FBOs

The iPhone’s first-class support for framebuffer objects is perhaps its greatest enabler of unique effects. In every sample presented so far in this book, we’ve been using a single FBO, namely, the FBO that represents the visible Core Graphics layer. It’s important to realize that FBOs can also be created as offscreen surfaces, meaning they don’t show up on the screen unless bound to a texture. In fact, on most platforms, FBOs are always offscreen. The iPhone is rather unique in that the visible layer is itself treated as an FBO (albeit a special one).

Binding offscreen FBOs to textures enables a whole slew of interesting effects, including page-curling animations, light blooming, and more. We’ll cover some of these techniques later in this book, but recall that one of the topics of this chapter is anti-aliasing. Several sneaky tricks with FBOs can be used to achieve full-scene anti-aliasing, even though the iPhone does not directly support anti-aliasing! We’ll cover two of these techniques in the following subsections.

Note

One technique not discussed here is performing a postprocess on the final image to soften it. While this is not true anti-aliasing, it may produce good results in some cases. It’s similar to the bloom effect covered in Chapter 8.

A Super Simple Sample App for Supersampling

The easiest and crudest way to achieve full-scene anti-aliasing on the iPhone is to leverage bilinear texture filtering. Simply render to an offscreen FBO that has twice the dimensions of the screen, and then bind it to a texture and scale it down, as shown in Figure 6-4. This technique is known as supersampling.

Figure 6-4. Supersampling

To demonstrate how to achieve this effect, we’ll walk through the process of extending the stencil sample to use supersampling. As an added bonus, we’ll throw in an Apple-esque flipping animation, as shown in Figure 6-5. Since we’re creating a secondary FBO anyway, flipping effects like this come virtually for free.

Figure 6-5. Flipping transition with FBO

Example 6-8 shows the RenderingEngine class declaration and related type definitions. Class members that carry over from previous samples are replaced with an ellipsis for brevity.

Example 6-8. RenderingEngine declaration for the anti-aliasing sample

struct Framebuffers {      // 1
    GLuint Small;
    GLuint Big;
};

struct Renderbuffers {     // 2
    GLuint SmallColor;
    GLuint BigColor;
    GLuint BigDepth;
    GLuint BigStencil;
};

struct Textures {
    GLuint Marble;
    GLuint RhinoBackground;
    GLuint TigerBackground;
    GLuint OffscreenSurface;                               // 3
};

class RenderingEngine : public IRenderingEngine {
public:
    RenderingEngine(IResourceManager* resourceManager);
    void Initialize();
    void Render(float objectTheta, float fboTheta) const;  // 4
private:
    ivec2 GetFboSize() const;                               // 5
    Textures m_textures;
    Renderbuffers m_renderbuffers;
    Framebuffers m_framebuffers;
    // ...
};
  1. The “small” FBO is attached to the visible EAGL layer (320×480). The “big” FBO is the 640×960 surface that contains the 3D scene.

  2. The small FBO does not need depth or stencil attachments because the only thing it contains is a full-screen quad; the big FBO is where most of the 3D rendering takes place, so it needs depth and stencil.

  3. The 3D scene requires a marble texture for the podium and one background for each side of the animation (Figure 6-5). The fourth texture object, OffscreenSurface, is attached to the big FBO.

  4. The application layer passes in objectTheta to control the rotation of the podium and passes in fboTheta to control the flipping transitions.

  5. GetFboSize is a new private method for conveniently determining the size of the currently bound FBO. This method helps avoid the temptation to hardcode some magic numbers or to duplicate state that OpenGL already maintains.

First let’s take a look at the GetFboSize implementation (Example 6-9), which returns a width-height pair for the size. The return type is an instance of ivec2, one of the types defined in the C++ vector library in the appendix.

Example 6-9. GetFboSize() implementation

ivec2 RenderingEngine::GetFboSize() const
{
    ivec2 size;
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES,
                                    GL_RENDERBUFFER_WIDTH_OES, &size.x);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES,
                                    GL_RENDERBUFFER_HEIGHT_OES, &size.y);
    return size;
}

Next let’s deal with the creation of the two FBOs. Recall the steps for creating the on-screen FBO used in almost every sample so far:

  1. In the RenderingEngine constructor, generate an identifier for the color renderbuffer, and then bind it to the pipeline (see the sketch after this list).

  2. In the GLView class (Objective-C), allocate storage for the color renderbuffer like so:

    [m_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:eaglLayer]
  3. In the RenderingEngine::Initialize method, create a framebuffer object, and attach the color renderbuffer to it.

  4. If desired, create and allocate renderbuffers for depth and stencil, and then attach them to the FBO.
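
For reference, step 1 boils down to something like the following sketch. The rest of the constructor is elided, and the member names are the ones from Example 6-8; treat this as an outline rather than the verbatim sample source.

// Abbreviated constructor sketch; everything except the renderbuffer
// calls is elided.
RenderingEngine::RenderingEngine(IResourceManager* resourceManager)
{
    // ... store the resource manager, etc. ...

    // Step 1: generate and bind the on-screen color renderbuffer so that
    // the GLView class (step 2) can allocate its storage from the EAGL layer.
    glGenRenderbuffersOES(1, &m_renderbuffers.SmallColor);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_renderbuffers.SmallColor);
}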

For the supersampling sample that we’re writing, we still need to perform the first three steps in the previous sequence, but then we follow it with the creation of the offscreen FBO. Unlike the on-screen FBO, its color buffer is allocated in much the same manner as depth and stencil:

glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGBA8_OES, width, height);

See Example 6-10 for the Initialize method used in the supersampling sample.

Example 6-10. Initialize() for supersampling

void RenderingEngine::Initialize()
{
    // Create the on-screen FBO.
    
    glGenFramebuffersOES(1, &m_framebuffers.Small);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, m_framebuffers.Small);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, 
                                 GL_COLOR_ATTACHMENT0_OES,
                                 GL_RENDERBUFFER_OES, 
                                 m_renderbuffers.SmallColor);
    
    // Create the double-size off-screen FBO.
    
    ivec2 size = GetFboSize() * 2;

    glGenRenderbuffersOES(1, &m_renderbuffers.BigColor);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_renderbuffers.BigColor);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGBA8_OES,
                             size.x, size.y);

    glGenRenderbuffersOES(1, &m_renderbuffers.BigDepth);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_renderbuffers.BigDepth);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT24_OES,
                             size.x, size.y);

    glGenRenderbuffersOES(1, &m_renderbuffers.BigStencil);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_renderbuffers.BigStencil);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_STENCIL_INDEX8_OES,
                             size.x, size.y);

    glGenFramebuffersOES(1, &m_framebuffers.Big);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, m_framebuffers.Big);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, 
                                 GL_COLOR_ATTACHMENT0_OES,
                                 GL_RENDERBUFFER_OES, 
                                 m_renderbuffers.BigColor);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, 
                                 GL_DEPTH_ATTACHMENT_OES,
                                 GL_RENDERBUFFER_OES, 
                                 m_renderbuffers.BigDepth);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, 
                                 GL_STENCIL_ATTACHMENT_OES,
                                 GL_RENDERBUFFER_OES,
                                  m_renderbuffers.BigStencil);

    // Create a texture object and associate it with the big FBO.
    
    glGenTextures(1, &m_textures.OffscreenSurface);
    glBindTexture(GL_TEXTURE_2D, m_textures.OffscreenSurface);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, size.x, size.y, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                              GL_TEXTURE_2D, m_textures.OffscreenSurface, 0);

    // Check FBO status.
    
    GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
    if (status != GL_FRAMEBUFFER_COMPLETE_OES) {
        cout << "Incomplete FBO" << endl;
        exit(1);
    }
    
    // Load textures, create VBOs, set up various GL state.
    ...
}

You may have noticed two new FBO-related function calls in Example 6-10: glFramebufferTexture2DOES and glCheckFramebufferStatusOES. The formal function declarations look like this:

void glFramebufferTexture2DOES(GLenum target, 
                               GLenum attachment, GLenum textarget,
                               GLuint texture, GLint level);

GLenum glCheckFramebufferStatusOES(GLenum target);

(As usual, the OES suffix can be removed for ES 2.0.)

The glFramebufferTexture2DOES function allows you to cast a color buffer into a texture object. FBO texture objects get set up just like any other texture object: they have an identifier created with glGenTextures, they have filter and wrap modes, and they have a format that should match the format of the FBO. The main difference with FBO textures is the fact that null gets passed to the last argument of glTexImage2D, since there’s no image data to upload.

Note that the texture in Example 6-10 has non-power-of-two dimensions, so it specifies clamp-to-edge wrapping to accommodate third-generation devices. For older iPhones, the sample won’t work; you’d have to change it to POT dimensions. Refer to Dealing with Size Constraints for hints on how to do this. Keep in mind that the values passed to glViewport need not match the size of the renderbuffer; this comes in handy when rendering to an NPOT subregion of a POT texture.
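
As a rough illustration of that last point (the sizes below are made up for an older device and are not part of the downloadable sample): allocate a power-of-two surface, render into just the region you need, and sample only that region when texturing the quad.

// Hypothetical POT workaround: allocate a 1024x1024 surface for the big
// FBO, but render only into a 640x960 subregion of it.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_renderbuffers.BigColor);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGBA8_OES, 1024, 1024);
// ... allocate the depth, stencil, and texture attachments at 1024x1024 too ...

glBindFramebufferOES(GL_FRAMEBUFFER_OES, m_framebuffers.Big);
glViewport(0, 0, 640, 960);   // draw into the NPOT subregion only

// When applying the texture to the full-screen quad, shrink the texture
// coordinates so that only the rendered subregion gets sampled:
//     s in [0, 640/1024], t in [0, 960/1024]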

The other new function, glCheckFramebufferStatusOES, is a useful sanity check to make sure that an FBO has been set up properly. It’s easy to bungle the creation of FBOs if the sizes of the attachments don’t match up or if their formats are incompatible with each other. glCheckFramebufferStatusOES returns one of the following values, which are fairly self-explanatory:

  • GL_FRAMEBUFFER_COMPLETE

  • GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT

  • GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT

  • GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS

  • GL_FRAMEBUFFER_INCOMPLETE_FORMATS

  • GL_FRAMEBUFFER_UNSUPPORTED
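
Under ES 1.1 these come back with the OES suffix. If you want something more descriptive than the bare "Incomplete FBO" message in Example 6-10, a small helper along these lines (my own convenience function, not part of the book's sample code) can translate the status into a readable string:

// Hypothetical debugging helper; the OES-suffixed constants are the
// ES 1.1 spellings of the values listed above.
void CheckFboStatus()
{
    GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
    if (status == GL_FRAMEBUFFER_COMPLETE_OES)
        return;

    switch (status) {
        case GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT_OES:
            cout << "FBO: incomplete attachment" << endl; break;
        case GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT_OES:
            cout << "FBO: missing attachment" << endl; break;
        case GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS_OES:
            cout << "FBO: attachment dimensions do not match" << endl; break;
        case GL_FRAMEBUFFER_INCOMPLETE_FORMATS_OES:
            cout << "FBO: incompatible attachment formats" << endl; break;
        case GL_FRAMEBUFFER_UNSUPPORTED_OES:
            cout << "FBO: unsupported configuration" << endl; break;
        default:
            cout << "FBO: unknown error" << endl; break;
    }
    exit(1);
}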

Next let’s take a look at the render method of the supersampling sample. Recall from the class declaration that the application layer passes in objectTheta to control the rotation of the podium and passes in fboTheta to control the flipping transitions. So, the first thing the Render method does is look at fboTheta to determine which background image should be displayed and which shape should be shown on the podium. See Example 6-11.

Example 6-11. Render() for supersampling

void RenderingEngine::Render(float objectTheta, float fboTheta) const
{
    Drawable drawable;
    GLuint background;
    vec3 color;

    // Look at fboTheta to determine which "side" should be rendered:
    //   1) Orange Trefoil knot against a Tiger background
    //   2) Green Klein bottle against a Rhino background

    if (fboTheta > 270 || fboTheta < 90) {
        background = m_textures.TigerBackground;
        drawable = m_drawables.Knot;
        color = vec3(1, 0.5, 0.1);
    } else {
        background = m_textures.RhinoBackground;
        drawable = m_drawables.Bottle;
        color = vec3(0.5, 0.75, 0.1);
    }

    // Bind the double-size FBO.
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, m_framebuffers.Big);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_renderbuffers.BigColor);
    ivec2 bigSize = GetFboSize();
    glViewport(0, 0, bigSize.x, bigSize.y);

    // Draw the 3D scene - download the example to see this code.
    ...

    // Render the background.
    glColor4f(0.7, 0.7, 0.7, 1);
    glBindTexture(GL_TEXTURE_2D, background);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustumf(-0.5, 0.5, -0.5, 0.5, NearPlane, FarPlane);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0, 0, -NearPlane * 2);
    RenderDrawable(m_drawables.Quad);
    glColor4f(1, 1, 1, 1);
    glDisable(GL_BLEND);

    // Switch to the on-screen render target.
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, m_framebuffers.Small);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_renderbuffers.SmallColor);
    ivec2 smallSize = GetFboSize();
    glViewport(0, 0, smallSize.x, smallSize.y);

    // Clear the color buffer only if necessary.
    if ((int) fboTheta % 180 != 0) {
        glClearColor(0, 0, 0, 1);
        glClear(GL_COLOR_BUFFER_BIT);
    }

    // Render the offscreen surface by applying it to a quad.
    glDisable(GL_DEPTH_TEST);
    glRotatef(fboTheta, 0, 1, 0);
    glBindTexture(GL_TEXTURE_2D, m_textures.OffscreenSurface);
    RenderDrawable(m_drawables.Quad);
    glDisable(GL_TEXTURE_2D);
}

Most of Example 6-11 is fairly straightforward. One piece that may have caught your eye is the small optimization made right before blitting the offscreen FBO to the screen:

// Clear the color buffer only if necessary.
if ((int) fboTheta % 180 != 0) {
    glClearColor(0, 0, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT);
}

This is a sneaky little trick. Since the quad is the exact same size as the screen, there’s no need to clear the color buffer; unnecessarily issuing a glClear can hurt performance. However, if a flipping animation is currently underway, the color buffer needs to be cleared to prevent artifacts from appearing in the background; flip back to Figure 6-5 and observe the black areas. If fboTheta is a multiple of 180, then the quad completely fills the screen, so there’s no need to issue a clear.

Figure 6-6. Left: normal rendering; right: 2× supersampling

That’s it for the supersampling sample. The quality of the anti-aliasing is actually not that great; you can still see some “stair-stepping” along the bottom outline of the shape in Figure 6-6. You might think that creating an even bigger offscreen buffer, say quadruple-size, would provide higher-quality results. Unfortunately, using a quadruple-size buffer would require two passes; directly applying a 1280×1920 texture to a 320×480 quad isn’t sufficient because GL_LINEAR filtering only samples from a 2×2 neighborhood of pixels. To achieve the desired result, you’d actually need three FBOs as follows:

  • 1280×1920 offscreen FBO for the 3D scene

  • 640×960 offscreen FBO that contains a quad with the 1280×1920 texture applied to it

  • 320×480 on-screen FBO that contains a quad with the 640×960 texture applied to it

Not only is this laborious, but it’s a memory hog. Older iPhones don’t even support textures this large! It turns out there’s another anti-aliasing strategy called jittering, and it can produce high-quality results without the memory overhead of supersampling.

Jittering

Jittering is somewhat more complex to implement than supersampling, but it’s not rocket science. The idea is to rerender the scene multiple times at slightly different viewpoints, merging the results along the way. You need only two FBOs for this method: the on-screen FBO that accumulates the color and the offscreen FBO that the 3D scene is rendered to. You can create as many jittered samples as you’d like, and you still need only two FBOs. Of course, the more jittered samples you create, the longer it takes to create the final rendering. Example 6-12 shows the pseudocode for the jittering algorithm.

Example 6-12. Jitter pseudocode

BindFbo(OnscreenBuffer)
glClear(GL_COLOR_BUFFER_BIT)

for (int sample = 0; sample < SampleCount; sample++) {
   BindFbo(OffscreenBuffer)

   vec2 offset = JitterTable[sample]

   SetFrustum(LeftPlane + offset.x, RightPlane + offset.x,
              TopPlane + offset.y, BottomPlane + offset.y,
              NearPlane, FarPlane)

   Render3DScene()

   f = 1.0 / SampleCount
   glColor4f(f, f, f, 1)
   glEnable(GL_BLEND)
   glBlendFunc(GL_ONE, GL_ONE)

   BindFbo(OnscreenBuffer)
   BindTexture(OffscreenBuffer)
   RenderFullscreenQuad()
} 

The key part of Example 6-12 is the blending configuration. By using plain old additive blending (a blend function of GL_ONE, GL_ONE) and dimming the color according to the number of samples, you’re effectively accumulating an average color.
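
Spelled out: with N samples, each pass is dimmed to 1/N of its full color before being added, so the on-screen buffer ends up holding

    (1/N)·C1 + (1/N)·C2 + … + (1/N)·CN = (C1 + C2 + … + CN) / N

which is exactly the average of the N jittered renderings.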

An unfortunate side effect of jittering is reduced color precision; this can cause banding artifacts, as shown in Figure 6-7. On some platforms the banding effect can be neutralized with a high-precision color buffer, but that’s not supported on the iPhone. In practice, I find that creating too many samples is detrimental to performance anyway, so the banding effect isn’t usually much of a concern.

Figure 6-7. 2×, 4×, 8×, 16×, and 32× jittering

Determining the jitter offsets (JitterTable in Example 6-12) is a bit of a black art. Totally random values don’t work well since they don’t guarantee uniform spacing between samples. Interestingly, dividing up each pixel into an equally spaced uniform grid does not work well either! Example 6-13 shows some commonly used jitter offsets.

Example 6-13. Popular jitter offsets

const vec2 JitterOffsets2[2] =
{
    vec2(0.25f, 0.75f), vec2(0.75f, 0.25f),
};

const vec2 JitterOffsets4[4] =
{
    vec2(0.375f, 0.25f), vec2(0.125f, 0.75f),
    vec2(0.875f, 0.25f), vec2(0.625f, 0.75f),
};

const vec2 JitterOffsets8[8] =
{
    vec2(0.5625f, 0.4375f), vec2(0.0625f, 0.9375f),
    vec2(0.3125f, 0.6875f), vec2(0.6875f, 0.8125f),
    
    vec2(0.8125f, 0.1875f), vec2(0.9375f, 0.5625f),
    vec2(0.4375f, 0.0625f), vec2(0.1875f, 0.3125f),
};

const vec2 JitterOffsets16[16] =
{
    vec2(0.375f, 0.4375f), vec2(0.625f, 0.0625f),
    vec2(0.875f, 0.1875f), vec2(0.125f, 0.0625f),
    
    vec2(0.375f, 0.6875f), vec2(0.875f, 0.4375f),
    vec2(0.625f, 0.5625f), vec2(0.375f, 0.9375f),
    
    vec2(0.625f, 0.3125f), vec2(0.125f, 0.5625f),
    vec2(0.125f, 0.8125f), vec2(0.375f, 0.1875f),
    
    vec2(0.875f, 0.9375f), vec2(0.875f, 0.6875f),
    vec2(0.125f, 0.3125f), vec2(0.625f, 0.8125f),
};

Let’s walk through the process of creating a simple app with jittering. Much like we did with the supersample example, we’ll include a fun transition animation. (You can download the full project from the book’s website at http://oreilly.com/catalog/9780596804831.) This time we’ll use the jitter offsets to create a defocusing effect, as shown in Figure 6-8.

Figure 6-8. Defocus transition with jitter

To start things off, let’s take a look at the RenderingEngine class declaration and related types. It’s not unlike the class we used for supersampling; the main differences are the labels we give to the FBOs. Accumulated denotes the on-screen buffer, and Scene denotes the offscreen buffer. See Example 6-14.

Example 6-14. RenderingEngine declaration for the jittering sample

struct Framebuffers {
    GLuint Accumulated;
    GLuint Scene;
};

struct Renderbuffers {
    GLuint AccumulatedColor;
    GLuint SceneColor;
    GLuint SceneDepth;
    GLuint SceneStencil;
};

struct Textures {
    GLuint Marble;
    GLuint RhinoBackground;
    GLuint TigerBackground;
    GLuint OffscreenSurface;
};

    
class RenderingEngine : public IRenderingEngine {
public:
    RenderingEngine(IResourceManager* resourceManager);
    void Initialize();
    void Render(float objectTheta, float fboTheta) const;
private:
    void RenderPass(float objectTheta, float fboTheta, vec2 offset) const;
    Textures m_textures;
    Renderbuffers m_renderbuffers;
    Framebuffers m_framebuffers;
    // ...
};

Example 6-14 also adds a new private method called RenderPass; the implementation is shown in Example 6-15. Note that we’re keeping the fboTheta argument that we used in the supersample example, but now we’re using it to compute a scale factor for the jitter offset rather than a y-axis rotation. If fboTheta is 0 or 180, then the jitter offset is left unscaled, so the scene is in focus.

Example 6-15. RenderPass method for jittering

void RenderingEngine::RenderPass(float objectTheta, float fboTheta, vec2 offset) const
{
    // Tweak the jitter offset for the defocus effect:
    
    offset -= vec2(0.5, 0.5);
    offset *= 1 + 100 * sin(fboTheta * Pi / 180);

    // Set up the frustum planes:

    const float AspectRatio = (float) m_viewport.y / m_viewport.x;
    const float NearPlane = 5;
    const float FarPlane = 50;
    const float LeftPlane = -1;
    const float RightPlane = 1;
    const float TopPlane = -AspectRatio;
    const float BottomPlane = AspectRatio;

    // Transform the jitter offset from window space to eye space:
    
    offset.x *= (RightPlane - LeftPlane) / m_viewport.x;
    offset.y *= (BottomPlane - TopPlane) / m_viewport.y;
    
    // Compute the jittered projection matrix:

    mat4 projection = mat4::Frustum(LeftPlane + offset.x, 
                                    RightPlane + offset.x, 
                                    TopPlane + offset.y, 
                                    BottomPlane + offset.y,
                                    NearPlane, FarPlane);
    
    // Render the 3D scene - download the example to see this code.
    ...
}

Example 6-16 shows the implementation of the main Render method; note the call to RenderPass inside the sample loop.

Example 6-16. Render method for jittering

void RenderingEngine::Render(float objectTheta, float fboTheta) const
{

    // This is where you put the jitter offset declarations 
    // from Example 6-13.
    
    const int JitterCount = 8;
    const vec2* JitterOffsets = JitterOffsets8;
    
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, m_framebuffers.Accumulated);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES,
                          m_renderbuffers.AccumulatedColor);

    glClearColor(0, 0, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT);
    
    for (int i = 0; i < JitterCount; i++) {
        
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, m_framebuffers.Scene);
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, 
                              m_renderbuffers.SceneColor);

        RenderPass(objectTheta,
                   fboTheta, JitterOffsets[i]);
        
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        
        const float NearPlane = 5, FarPlane = 50;
        glFrustumf(-0.5, 0.5, -0.5, 0.5, NearPlane, FarPlane);
        
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glTranslatef(0, 0, -NearPlane * 2);
        
        float f = 1.0f / JitterCount;
        f *= (1 + abs(sin(fboTheta * Pi / 180)));
        glColor4f(f, f, f, 1);

        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE); 
        glBindFramebufferOES(GL_FRAMEBUFFER_OES,
                             m_framebuffers.Accumulated);
        glBindRenderbufferOES(GL_RENDERBUFFER_OES,
                              m_renderbuffers.AccumulatedColor);
        glDisable(GL_DEPTH_TEST);
        glBindTexture(GL_TEXTURE_2D, m_textures.OffscreenSurface);
        RenderDrawable(m_drawables.Quad);
        glDisable(GL_TEXTURE_2D);
        glDisable(GL_BLEND);
    }
}

Example 6-16 might give you a sense of déjà vu; it’s basically an implementation of the pseudocode algorithm that we already presented in Example 6-12. One deviation is how we compute the dimming effect:

float f = 1.0f / JitterCount;
f *= (1 + abs(sin(fboTheta * Pi / 180)));
glColor4f(f, f, f, 1);

The second line in the previous snippet is there only for the special transition effect. In addition to defocusing the scene, it’s also brightened to simulate pupil dilation. If fboTheta is 0 or 180, then f is left unscaled, so the scene has its normal brightness.

Other FBO Effects

An interesting variation on jittering is depth of field, which blurs out the near and distant portions of the scene. To pull this off, compute the viewing frustum such that a given slice (parallel to the viewing plane) stays the same with each jitter pass; this is the focus plane.
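
To make that concrete, here is a minimal sketch modeled on the RenderPass method from Example 6-15; RenderDofPass, eyeOffset, and FocusDistance are names and values of my own invention, not part of the downloadable samples. Each pass nudges the eye sideways by eyeOffset and counter-shifts the frustum window by NearPlane / FocusDistance times that offset, so geometry on the focus plane lands on the same pixels every pass while everything nearer or farther drifts and therefore blurs.

// Hypothetical depth-of-field variant of RenderPass.  eyeOffset is a small
// per-pass shift of the eye position (in eye-space units); FocusDistance
// picks the slice that stays sharp.
void RenderingEngine::RenderDofPass(float objectTheta, vec2 eyeOffset) const
{
    const float NearPlane = 5;
    const float FarPlane = 50;
    const float LeftPlane = -1;
    const float RightPlane = 1;
    const float AspectRatio = (float) m_viewport.y / m_viewport.x;
    const float TopPlane = -AspectRatio;
    const float BottomPlane = AspectRatio;
    const float FocusDistance = 10;

    // Counter-shift the frustum window so that geometry at FocusDistance
    // projects to the same pixels in every pass.
    vec2 shift(-eyeOffset.x * NearPlane / FocusDistance,
               -eyeOffset.y * NearPlane / FocusDistance);

    mat4 projection = mat4::Frustum(LeftPlane + shift.x,
                                    RightPlane + shift.x,
                                    TopPlane + shift.y,
                                    BottomPlane + shift.y,
                                    NearPlane, FarPlane);

    // When building the modelview for the scene (elided here, as in
    // Example 6-15), also translate by (-eyeOffset.x, -eyeOffset.y, 0)
    // so the viewpoint actually moves between passes.

    // Render the 3D scene with this projection ...
}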

Yet another effect is motion blur, which simulates the ghosting effect seen on displays with low response times. With each pass, make incremental adjustments to your animation, and gradually fade in the alpha value using glColor.
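
One way to read that recipe, again as a rough sketch rather than the book’s code (PassCount, DeltaTheta, and the alpha ramp are placeholders of mine; this would live inside a Render-style method like Example 6-16’s, receiving objectTheta): render the sub-frames from oldest to newest and blend each one over the accumulated image with a gradually increasing alpha, so recent positions dominate and older ones linger as faint ghosts.

// Hypothetical motion-blur variant of the loop in Example 6-16.
const int PassCount = 8;
const float DeltaTheta = 2;   // how far the podium turns between sub-frames

glBindFramebufferOES(GL_FRAMEBUFFER_OES, m_framebuffers.Accumulated);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_renderbuffers.AccumulatedColor);
glClearColor(0, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT);

for (int i = 0; i < PassCount; i++) {

    // Render the scene at a slightly earlier point in the animation.
    // Passing 0 for fboTheta and 0.5 for the jitter offset leaves the
    // frustum of Example 6-15 un-jittered.
    float theta = objectTheta - (PassCount - 1 - i) * DeltaTheta;
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, m_framebuffers.Scene);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_renderbuffers.SceneColor);
    RenderPass(theta, 0, vec2(0.5f, 0.5f));

    // ... set up the quad's matrices and disable depth testing,
    //     as in Example 6-16 ...

    // Gradually fade in the alpha; the ramp never quite reaches 1, so
    // earlier sub-frames still show through faintly.
    float alpha = (i + 1) / (float) (PassCount + 1);
    glColor4f(1, 1, 1, alpha);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glBindFramebufferOES(GL_FRAMEBUFFER_OES, m_framebuffers.Accumulated);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_renderbuffers.AccumulatedColor);
    glBindTexture(GL_TEXTURE_2D, m_textures.OffscreenSurface);
    RenderDrawable(m_drawables.Quad);
    glDisable(GL_BLEND);
}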
